Until recently, all that product managers and UX designers had to worry about were the visual and graphical elements of design and how best to arrange them to give customers a seamless experience. Now another medium has been added to the UX of a product, and it is changing the game. That medium is audio.
Voice navigation makes it possible to communicate with a system or device using only voice input. Through voice navigation, users can interact with a product like never before. And although this is a totally new realm for most designers and product managers, it presents a challenge that is full of possibility.
Voice navigation has been around since the 1980s, but, perhaps because it was clunky and difficult to use, it never really caught on until recently. It started becoming more popular with the introduction of Siri on the iPhone in 2011, but it was Amazon’s Alexa in 2014 that really brought it to the forefront. Today, voice navigation is everywhere. From Google Assistant to Microsoft’s Cortana, voice navigation is built into our devices and presents a great opportunity for designers.
The challenge comes from the fact that users interact with voice navigation very differently than they do with the more visual elements of design. The experience of using voice navigation should, in most users’ minds, be comparable to having a real conversation with another person – in fact, this is the most human way of communicating currently available on our devices. This explains the massive surge in popularity of the medium.
Clearly, voice navigation has some advantages:
But, voice navigation comes with its limitations as well:
Hence, right now voice navigation is ideally used in conjunction with the more traditional visual UX of a product. Depending on context, users can choose which medium – text or voice – they would prefer to use with the product.
A designer will mainly work on mapping the flow of the conversation between the user and the machine. They will build up as many scenarios as possible to map out and design the conversation, thinking about how to get users from A to B as quickly as possible. They then write and document every possible answer the system could give the user at each step of the interaction; a minimal sketch of such a flow appears below.
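As a rough illustration of what that documentation can look like (the DialogNode structure, intent names, and prompts here are hypothetical, not taken from any particular platform), a conversation flow can be captured as a simple map from user intents to the answers the system is allowed to give:

```typescript
// Hypothetical, minimal way to document a voice conversation flow.
// Each node records what the user might say and every answer the
// system may give back, including a fallback.

interface DialogNode {
  intent: string;            // what the user is trying to do
  samplePhrases: string[];   // example utterances that trigger it
  responses: string[];       // every answer the system may give
  fallback: string;          // reply when the request can't be met
  next?: string[];           // intents the conversation can move to
}

const orderFlow: DialogNode[] = [
  {
    intent: "start_order",
    samplePhrases: ["I'd like to order a coffee", "get me a latte"],
    responses: ["Sure, what size would you like?"],
    fallback: "Sorry, I can only take coffee orders right now.",
    next: ["choose_size"],
  },
  {
    intent: "choose_size",
    samplePhrases: ["large", "make it a small one"],
    responses: ["Got it, a {size} coffee. Anything else?"],
    fallback: "I didn't catch the size. Small, medium, or large?",
    next: ["confirm_order"],
  },
];

// Writing the flow down this way forces the designer to spell out
// every possible system answer before anything is built.
console.log(orderFlow.map((node) => node.intent).join(" -> "));
```

Documenting the flow as data rather than loose notes also makes it easier to spot dead ends where the system has no sensible answer.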
Most users who interact with voice navigation tend to overestimate or underestimate what it can do. The capability of the system is not immediately apparent to the user, so it is important that the experience be designed in a way that makes its capabilities, and its limitations, clear. If users are not familiar with the limits of the system and ask for more than the voice navigation can deliver, they will end up having a bad experience.
Users expect to be able to talk to the voice navigation like they would to another person, and it is up to UX designers to provide a seamless, easy experience where it feels like having a conversation with the machine. This means incorporating natural speech patterns, including colloquialisms, into the conversation, as in the sketch below.
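One simple technique for this (sketched here with made-up phrasings rather than any product’s actual copy) is to keep a small pool of natural-sounding response variants and rotate between them so the assistant does not repeat the same stiff sentence every time:

```typescript
// Hypothetical sketch: rotating between a few colloquial phrasings
// so the assistant sounds conversational rather than robotic.

const confirmations: string[] = [
  "Sure thing, that's done.",
  "All set!",
  "Done. Anything else I can help with?",
];

// Pick a random variant on each turn.
function pickConfirmation(): string {
  const index = Math.floor(Math.random() * confirmations.length);
  return confirmations[index];
}

console.log(pickConfirmation());
```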
It is also important to keep the information relayed by the voice navigation system to a minimum: since users cannot see the information, they have to recall what they hear. Unlike in visual systems, they cannot scroll up to go through the instructions or information again.
If the device provides visual feedback, users can be sure that it heard what they were saying. For example, Alexa glows blue when you say her name to indicate that she is listening. These visual cues make conversation between human and machine easier.
It is also important for designers and product managers to conduct UX analytics once voice navigation is in users’ hands. After a voice command is rolled out, you can track how it is being used with a built-in analytics tool or a third-party service, as in the sketch that follows. Some of the key metrics to keep an eye out for are:
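As a hedged sketch of what such tracking might look like (the trackVoiceEvent helper and its fields are invented for illustration and are not tied to any specific analytics SDK), each voice interaction can be logged as a structured event that later feeds those metrics:

```typescript
// Hypothetical sketch: recording each voice interaction as a structured
// event so that usage, success, and fallback rates can be measured later.

interface VoiceEvent {
  intent: string;        // what the user asked for
  recognized: boolean;   // did the system understand the request?
  usedFallback: boolean; // did we have to apologise or re-prompt?
  durationMs: number;    // how long the exchange took
  timestamp: string;
}

// Stand-in for a real analytics client; in practice this would forward
// the event to whichever built-in or third-party tool the team uses.
function trackVoiceEvent(event: VoiceEvent): void {
  console.log("analytics event", JSON.stringify(event));
}

trackVoiceEvent({
  intent: "start_order",
  recognized: true,
  usedFallback: false,
  durationMs: 4200,
  timestamp: new Date().toISOString(),
});
```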
Privacy and Security – The fact that voice-based assistants are always waiting for cues, listening to the sounds of their environment, is a big concern for users. The worry that their privacy is being violated is valid. Designers should try to build trust and allay these fears through secure design.
Since the main medium of voice technology is language, it is crucial for any voice-based AI to be fluent in both understanding and speaking it. Support for other languages and distinct accents is still a work in progress.
In a combination of graphics and voice, the two support each other and together create a pleasing experience for users. When designing a product, designers should start thinking about where voice features are most useful. These are often related to information-heavy data input, and typically arise when the user is on the go or their hands are otherwise occupied. Designers should stop thinking only in terms of buttons and text and consider other ways to accomplish a task. User journeys should include situations where the user can’t or doesn’t want to type. By adding a new tool – voice – to the design toolbox, designers can build even better user experiences.