Orcam

Improving Usability for the Orcam MyEye2.
Designing for the Visually Impaired.

NDA

The Orcam MyEye (the Device) is an assistive tech wearable aimed at helping blind people regain their independence.

To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study. All information in this case study is my own and does not necessarily reflect the views of Orcam.

My Role

I was brought on as a UX consultant with three main goals:

1. Identify pain points through user interviews, user-story mapping, and a heuristic evaluation of the Device.

2. Brainstorm improvements, build prototypes, and test them.

3. Teach the product team how to use a human-centered approach.

The Company

Orcam is a unicorn company whose primary focus has been developing the complex technology behind their product. They are now at a stage where they want to make the interface more intuitive: many Devices are returned annually, which directly impacts their bottom line.

User Research

At the outset of the project, we didn’t have a clear mission or specific goal. I spent my first day calling a meeting with the customer support team, setting up several user interviews, and learning how to use the Device. My goals were to understand the challenges blind users faced and the workarounds they employed.

Getting to Know You

In order to start understanding the perspective of blind users, I became an active member of several Facebook groups geared towards vision impairment. I also subscribed to The Blind Life, a YouTube channel dedicated to helping people understand what it means to be visually impaired.

Additionally, I held user interviews with 8 participants, including a moderated user test so I could watch them as they used the Device.

Another method I employed was bodystorming. This included walking around with a blindfold, learning braille, and turning off the screens on my phone and laptop while navigating them using only accessibility tools.

Because of the various types of visual impairment, many users found the instruction “look at the object in front of you” very difficult (many only have peripheral vision). Having gotten to know their communities, we tried the phrase “hold the object in front of you and point your nose at it.” This small tweak worked wonders and increased success rates considerably.

Takeaways

After deliberating with the team we chose 4 of the most urgent problems to serve as our North Star. In this case study, I’ll be focusing on the first three points.

1. Guiding the user: the Device did not give actionable advice when the user slipped up or forgot how to complete an action.

2. Aligning the features to user goals.

3. Improving the voice interface.

4. Onboarding: with a large number of new users returning the Device, and several power users reporting that they had almost returned it, improving the Device’s onboarding was crucial.

To keep the focus on the first three, I have left the onboarding redesign for another case study.

#1 Guiding the User

Because the Device’s feedback is auditory, we had to balance keeping its messages to a minimum (crucial for not slowing down power users) against giving inexperienced users more guidance on how to accomplish their tasks.

After brainstorming possible solutions, I came up with the idea of “Situational Escalating Assistance.”

For example, let’s say the user wants to teach the Device to recognise a specific coffee mug. The current instructions are simple: point at the object three times and then give it a name.

Sounds simple, but this feature is finicky. You have to hold the object a certain distance away from your face. “Point three times” really means you point your finger at the object, leave it there until you hear a beep, remove your finger, wait for the sound of a camera shutter, and repeat. It’s very easy if you know what you’re doing, but even the PMs on our team had trouble getting the hang of it.

After consulting with some of the software engineers on the best way to operate this feature, I wrote out an “Escalating Assistance” script, which I tested with several users.

This new flow, which gave the user more information as they made more attempts, had a large impact on user comfort. Users who had previously set this feature aside suddenly felt comfortable using it.

Situational Escalating Assistance
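
For readers who want to see the mechanics, here is a minimal sketch of how escalating assistance might pick a prompt, assuming nothing more than a counter of failed attempts. The prompt wording and thresholds are hypothetical illustrations, not the script that actually shipped on the Device.

```python
# A minimal sketch of selecting escalating assistance by failed-attempt count.
# The prompt wording and thresholds are hypothetical illustrations only.

ESCALATING_PROMPTS = [
    # First attempt: terse, so power users aren't slowed down.
    "Point at the object.",
    # Second attempt: add the detail people most often miss.
    "Hold the object about an arm's length from your face and point at it.",
    # Third attempt onwards: spell out the full interaction.
    "Point your finger at the object and hold it still until you hear a beep. "
    "Then remove your finger and wait for the camera shutter sound. "
    "Repeat this until you have taken three pictures.",
]


def assistance_prompt(failed_attempts: int) -> str:
    """Return guidance whose level of detail grows with each failed attempt."""
    index = min(failed_attempts, len(ESCALATING_PROMPTS) - 1)
    return ESCALATING_PROMPTS[index]


# Example: the third consecutive failure gets the fully spelled-out walkthrough.
print(assistance_prompt(2))
```

Power users only ever hear the terse first prompt; the longer walkthroughs appear only after repeated failures.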

#2 Solving for User Goals

Throughout my continuing interviews with users, I kept hearing things which raised red flags for me. For example, several power users told me variations of the following statement.

“When I’m looking at food in the supermarket, the device works great! I only need to ask someone for help, ask them where on the box the ingredient list is, and then I can point at it and the Device will read the ingredients to me!”

The team had been working under the mistaken impression that the Device’s features were succeeding when, in reality, a feature could work well technically and still not answer the user’s goal.

This problem of thinking in terms of feature success rates, rather than solving user needs, showed up mainly in two features.

Barcode Reader

The barcode reading feature was meant to help people shop more independently.

I looked at the flow for a sighted person, in order to better understand the user goals.

Let’s say you want to buy a box of cereal.
1. You look at the box and read the name of the cereal (Honey Bunches of Oats).
2. You look at the flavor (maple walnut or chocolate blueberry?).
3. Maybe you look at the size of the box or the ingredient list.

This flow is really easy. I know which information I want and can find it while scanning the shelf.

Now how might this flow work if I’m visually impaired and using a barcode reader?
1. Take a box off the shelf, turn it around in your hands until the Device finds the barcode and scans it.

Already, this is not a convenient flow: the user has to take down every box and turn it until they find the right side, then put the box back, pick up another box, and repeat. But this is only the start of why the barcode feature is problematic.

Due to the nature of barcodes, the user probably won’t get the information they need for two reasons.

1. Barcodes only contain a small amount of information. For one brand it might tell you the name of the product, flavor, size of container, etc., while for another it might only tell you the name (“You’re holding a 32-ounce box of maple walnut Honey Bunches of Oats” vs. “You’re holding Honey Bunches of Oats”).
2. Barcodes are kept in databases. For the Device to recognise a barcode, the company must add it to the database, which means that if we haven’t uploaded a specific barcode, the Device will only read back the barcode number.

Having written out this flow, I was able to change the conversation from “How do we improve our barcode reader?” to “How can we help someone find the right product in a supermarket?” Or more generally, “How can we help (user) achieve (goal)?” At the request of the lead PM, we adjusted our goal from shopping for a product to helping users read a menu.

To answer this new question, I suggested a redesign for the smart-reading feature. 

Smart Reading

Try out the smart-reading prototype.

User: (Double taps to get the Device to listen) “Smart-reading.”

Device: “What are you looking at?” (If they don’t answer, the Device can prompt, “What are you trying to read? For example, is it a menu, a bank statement, a newspaper, or a food product?”) (This step can usually be skipped, as the Device can often recognise what it’s looking at.)

User: “A menu.” (If it’s the first time someone is using this feature then the Device can say, “Would you like me to teach you how to use the menu reader?”).

Device: “I see the menu. Would you like me to read you the food categories?”

User: “Yes.” (If the user says no then the Device will say something like “Tell me what you’d like to know”)

Device: “I see 5 categories. They are salads, main dishes, appetizers, hot beverages, and wine list. Which category do you want to hear more about?”

User: “Salads.”

Device: “There are 3 salad options: quinoa salad, egg salad, and fruit salad. Would you like to hear more about any of the salads?”

User: “The quinoa salad.” (If the user just says “yes,” the Device will prompt, “Which one?”)

Device: “The quinoa salad contains (list of ingredients) and costs (price). Would you like me to remember the quinoa salad, or would you like me to tell you about something else?”

User: “Remember the quinoa salad.” (Keyword: “remember.”) (If they say “Tell me about something else,” the Device goes back to “Would you like me to read you the food categories?”)

Device: “Remembering the quinoa salad. When the waiter comes over, just say ‘review order’ and I’ll read back everything you asked me to remember. Would you like to hear about anything else on the menu?”

Etc.

* If they don’t answer, then the Device can say, “Would you like to hear more about the options?” or “Would you like me to read everything?” The third waiting message would be “Remember, some menus have multiple pages or information on the back, if you’d like to look at a different part of the menu just go to that page and say ‘new page.'”

This approach relies on the fact that when you phrase a question as “Would you like X, Y, or Z?” people tend to answer with one of the options you just gave rather than their own synonyms.

This helped us answer both “How might we help our users shop independently?” and “How might we help our users order from a menu independently?” It also brought up the idea of using this conversational prompting for the general voice commands.

The smart-reader was originally geared towards people reading newspapers, and it worked great for newspapers. But because of the naming structure of the vocal commands, it didn’t really work for a task like finding useful info on a cereal box or reading a menu.

To make this powerful feature work for more use cases, I recommended making the smart-reading feature situational. Reading a paper? Here’s one set of commands. Looking at a menu? Here’s another! I then took it a step further by asking, “What if we could guide the user to avoid choice paralysis?” The team liked the idea, so I wrote a flow for a scenario where someone needs to read a menu. I found that there were enough similarities in how people viewed menus that we could guide most users with a “wizard.”

I found this by looking at how 9 sighted people approached ordering off a menu. It tended to follow a similar pattern:

(User quote) “When I’m looking at a menu in a cafe, I don’t want to know the full menu right away.”

– Browse the categories.
– Decide where you want to look and browse the dish names.
– Look into a specific dish to see what’s in it.
– Decide, and remember where the dish was on the page so you can quickly reference it to the waiter.

We can tell from this pattern that skimming is key.

My next step was to write out an interactive wizard, which would mimic this pattern.
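
To show how that wizard might hang together, here is a minimal sketch of the skim-first menu flow as a small prompt-driven loop. The menu data, the phrasing, and the print/input stand-ins for the Device’s text-to-speech and speech recognition are all hypothetical; the actual prototype was a written voice script tested with users.

```python
# A minimal sketch of the menu-reading wizard as a small prompt-driven loop.
# The menu data, the phrasing, and the print/input stand-ins for the Device's
# text-to-speech and speech recognition are hypothetical illustrations.

MENU = {
    "salads": {
        "quinoa salad": {"ingredients": "quinoa, tomato, and cucumber", "price": "9 dollars"},
        "egg salad": {"ingredients": "egg, mayonnaise, and chives", "price": "7 dollars"},
        "fruit salad": {"ingredients": "seasonal fruit", "price": "6 dollars"},
    },
    "main dishes": {},
    "appetizers": {},
}


def say(text: str) -> None:
    print("Device:", text)  # stand-in for text-to-speech


def listen() -> str:
    return input("User: ").strip().lower()  # stand-in for speech recognition


def menu_wizard() -> list[str]:
    """Skim-first flow: categories -> dish names -> one dish's details -> remember."""
    remembered = []
    while True:
        say(f"I see {len(MENU)} categories. They are {', '.join(MENU)}. "
            "Which category do you want to hear more about?")
        dishes = MENU.get(listen(), {})
        if not dishes:
            say("I didn't catch that category.")
            continue
        say(f"There are {len(dishes)} options: {', '.join(dishes)}. "
            "Would you like to hear more about any of them?")
        dish = listen()
        if dish in dishes:
            info = dishes[dish]
            say(f"The {dish} contains {info['ingredients']} and costs {info['price']}. "
                "Would you like me to remember it, or hear about something else?")
            if "remember" in listen():  # keyword spotting, as in the script above
                remembered.append(dish)
                say(f"Remembering the {dish}. When the waiter comes over, just say "
                    "'review order' and I'll read back everything you asked me to remember.")
        say("Would you like to hear about anything else on the menu?")
        if listen() != "yes":
            return remembered


if __name__ == "__main__":
    order = menu_wizard()
    say("Your order so far: " + (", ".join(order) if order else "nothing yet"))
```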

#3 Conversational Voice Commands

Users have a hard time with voice commands. Since each command has to be phrased a certain way, they can be difficult to remember.

Could we use the same prompting technique from the upgraded smart-reading for other voice interactions, like raising the volume on the Device?

From the user’s perspective this would be ideal. Currently, to raise the volume, it’s not clear whether I have to say “raise volume” or “increase volume.” Maybe I have to say “volume up,” and what happens if I just say “volume”? Several of the users I interviewed had even printed out a list of vocal commands, which they read with a pocket magnifier, because they couldn’t remember them all.

In the current interface, saying “volume” simply raises the volume, but what if you want to lower it? (The same problem exists for several commands, including setting the reading speed and, as mentioned above, the smart-reading feature.)

To solve this, I suggested making the whole voice interface conversational. That way the user doesn’t need to remember the specific phrasing of a vocal command. They can just say the word “volume” and the Device will respond with “Would you like to raise or lower the volume?” This prompts the user towards saying a specific word (in this case, either “raise” or “lower”).

This method intuitively teaches users the vocal commands without slowing down power users, who can skip the prompt entirely by saying “raise volume” or “lower volume” directly.
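
As a sketch of that fallback, here is one way the disambiguation step could work, assuming a hypothetical command vocabulary; the command names and follow-up prompts are illustrations, not the Device’s actual command set.

```python
# A minimal sketch of conversational disambiguation for voice commands.
# The command vocabulary and follow-up prompts are hypothetical illustrations,
# not the Device's actual command set.

FULL_COMMANDS = {"raise volume", "lower volume", "faster reading speed", "slower reading speed"}

FOLLOW_UPS = {
    "volume": ("Would you like to raise or lower the volume?", {"raise", "lower"}),
    "reading speed": ("Would you like the reading speed faster or slower?", {"faster", "slower"}),
}


def handle_utterance(utterance: str, listen) -> str:
    """Resolve an utterance into a full command, asking a clarifying question if needed."""
    utterance = utterance.strip().lower()
    if utterance in FULL_COMMANDS:
        return utterance  # power users skip the prompt entirely
    if utterance in FOLLOW_UPS:
        prompt, options = FOLLOW_UPS[utterance]
        answer = listen(prompt).strip().lower()  # speak the prompt, wait for the reply
        if answer in options:
            return f"{answer} {utterance}"
    return "unrecognised"


def demo_listen(prompt: str) -> str:
    print("Device:", prompt)  # the Device speaks the clarifying question
    return "raise"            # pretend the user answered "raise"


print(handle_utterance("volume", demo_listen))  # -> "raise volume"
```

Saying the full command resolves immediately, so the clarifying question only appears when it is needed.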

Summary

Throughout my time consulting for Orcam, I also worked on improving the onboarding flow, helping the team connect to their users, creating solutions for the reading flow (including fixes for accidental stop commands and cut-off text), and showing the different departments the value of a user-centered process.

Enjoyed this case study? Shoot me an email and let’s start collaborating!

Toby@trachtmandesigns.com

Built by Toby Trachtman. Illustrations by Blush and Flaticon
