Introduction
Duration: April 2018 - June 2018 (8 weeks)
Instructor: Michael Smith
Team members: Cecelia Zhao, Sarah Chu
My role: Contributed to design research, ideation, storyboard, and video editing. Primarily responsible for the task analysis and user flow chart. In charge of video filming. 
Tools: InDesign, Photoshop, Illustrator, After Effects, Adobe Premiere Pro
Context
Through observation, we found that finding a suitable parking space in a busy urban area, especially in Seattle, is often a frustrating and tedious task.
Design Challenge:
How might we design a system that helps drivers find suitable parking in busy urban areas efficiently?
Our Solution
Parko is an augmented reality (AR) in-car voice-controlled assistant that helps drivers find parking spaces efficiently. 
Parko helps drivers choose a suitable parking lot by providing information such as price, capacity, and distance to the destination. It can also navigate users to the selected lot and indicate available parking spaces with AR visual aids projected through the windshield onto the environment.
(The project assumes that parking lot information and real-time parking availability data are accessible. Concerns regarding data sources are addressed in the What's Next section.)
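For illustration only, the sketch below shows one way the lot-selection logic described above could work, assuming the parking lot and real-time availability data mentioned in the note exist. The ParkingLot fields, weights, and recommend function are hypothetical and were not part of the project deliverables.

```python
from dataclasses import dataclass


@dataclass
class ParkingLot:
    name: str
    price_per_hour: float  # USD; from (assumed) parking lot information feed
    open_spots: int        # from (assumed) real-time availability feed
    walk_minutes: float    # walking time from the lot to the destination


def score(lot: ParkingLot, max_price: float, max_walk: float) -> float:
    """Lower is better; lots that violate the driver's limits are excluded."""
    if lot.open_spots == 0 or lot.price_per_hour > max_price or lot.walk_minutes > max_walk:
        return float("inf")
    # Simple weighted sum; in Parko the weights would come from the
    # driver's preference settings rather than being hard-coded.
    return 0.6 * (lot.price_per_hour / max_price) + 0.4 * (lot.walk_minutes / max_walk)


def recommend(lots: list[ParkingLot], max_price: float, max_walk: float) -> list[ParkingLot]:
    """Return acceptable lots, best first, for the assistant to read out."""
    ranked = sorted(lots, key=lambda l: score(l, max_price, max_walk))
    return [l for l in ranked if score(l, max_price, max_walk) != float("inf")]


if __name__ == "__main__":
    lots = [
        ParkingLot("5th Ave Garage", price_per_hour=6.0, open_spots=12, walk_minutes=4),
        ParkingLot("Pike St Lot", price_per_hour=4.5, open_spots=0, walk_minutes=2),
        ParkingLot("Union Garage", price_per_hour=3.0, open_spots=30, walk_minutes=9),
    ]
    for lot in recommend(lots, max_price=8.0, max_walk=10.0):
        print(lot.name)
```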
Research
Quick Insight: 
Through research, we learned that parking can be broken down into three key steps: driving around to find an empty spot, evaluating the spot, and parking the car. 
The first two steps are very time-consuming and can even lead to a vicious cycle: after finding an empty spot, a driver still might not be able to park due to constraints such as price, time limits, or distance to the destination, and then has to start the search all over again.
Therefore, we narrowed down our problem area and created a point of view statement:
Drivers in busy urban areas need assistance finding suitable parking spots efficiently because they lack access to key decision-making information in advance.
Observation & Photo Analysis: 
Task Analysis: 
Ideation
With our point of view statement in mind, we brainstormed and organized 30+ ideas. We then evaluated these ideas based on three criteria: viability, desirability, and feasibility.

Brainstormed 30+ ideas 

Talked through each idea and categorized similar ones

Organized ideas

Functional Storyboard
After narrowing down to one concept, we fleshed out the details by using methods such as role playing and sketching. 
We identified two main user personas and created our functional storyboard based on them:
The planner: users who prefer to plan where to park before they set off for a destination.
The reactor: users who prefer to figure out where to park once they get close to the destination.
Flowchart
Situational Storyboard
Final Interface Example
Reflection and What's Next
Reflection 1: 
In the future, we should address the data sources, including parking lot information (prices, time limits, and locations) as well as real-time availability of spots within each lot. We should also think about who would be best positioned to collect this data and who the stakeholders would be.
Action: 
Consider turning the project into a practical product with different phases:
Phase 1: Revise the product so it only requires static parking lot information and leaves out real-time data. Consider integrating it into an existing product such as Google Maps.
Phase 2: Explore possibilities for collecting real-time data and identify the appropriate stakeholders.
Reflection 2: 
We did well in balancing AI decision-making with user decision-making. However, we should also consider that presenting too much information to users might compromise driving safety.
Action: 
Re-examine all related design decisions (user personas, preference settings, information displays). Flesh out detailed AR interfaces that present only the necessary information visually.
Reflection 3:
How might we cue users to provide input in the correct format in a voice-controlled conversational UI?
Action: 
Conduct more research, and explore better ways to connect the voice-control system with the visual display.
