“Alexa, show me the future!” — Voice Summit 2019

Posted

August 28, 2019

Authors
Filip Guzy
Voice Summit, the biggest voice-related event in the world, started on July 21 at the New Jersey Institute of Technology in Newark. The conference brought together specialists from different countries. In more than 100 sessions, people from companies such as Amazon, Microsoft, and Samsung shared their ideas about the current state and the future of voice technologies.

Most of the presented topics related to Conversational AI systems. Amazon defines them as “computers that people can interact with simply by having a conversation, our most natural form of interaction.” Have you ever thought about using your computer without any manual interaction? What does it mean for user experience? What new opportunities does it create? This article describes emerging trends in the field of Voice User Interfaces based on the topics presented at the conference.

Context and natural conversation

Conversational AI is a hot topic in the automotive industry. Mercedes employees shared their thoughts about progress on the MBUX (Mercedes-Benz User Experience) interface. They have extended the functionality of the Mercedes voice assistant by improving its context understanding. Previously, to search for restaurants in the navigation, the driver had to say explicitly: “Mercedes, show me the nearest restaurants.” Now it is enough to say: “Mercedes, I’m hungry,” and the car quickly deduces the meaning of the sentence.
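The core idea can be illustrated with a minimal sketch: both the explicit command and the indirect phrase resolve to the same navigation intent. All names and patterns below are illustrative assumptions, not MBUX internals.

```python
# Hypothetical intent resolution: an explicit command and an indirect phrase
# both map to the same "restaurant_search" intent. Patterns are illustrative.
from typing import Optional

INTENT_PATTERNS = {
    "restaurant_search": [
        "show me the nearest restaurants",
        "i'm hungry",
        "i am hungry",
    ],
}

def resolve_intent(utterance: str) -> Optional[str]:
    """Return the intent whose patterns appear in the normalized utterance."""
    normalized = utterance.lower().strip().rstrip(".!?")
    for intent, patterns in INTENT_PATTERNS.items():
        if any(p in normalized for p in patterns):
            return intent
    return None
```

A production system would of course use a trained natural-language-understanding model rather than substring matching, but the mapping from many surface phrasings to one intent is the same.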

Offline support

Developers have also moved some core functionality to an offline mode to make it independent of an internet connection. This is a vast improvement for drivers, who will be able to use the assistant even on trips without any network coverage. Amazon and Google are implementing a similar approach in their automotive assistants, which suggests that in the near future offline assistants may ship in most new cars. Progress in this area should also increase the capabilities of our mobile assistants.

Emotions and personality

If you have ever used any of the most popular voice assistants, you may have noticed that their voices lack human-like emotion. Amazon engineers have responded to user demand and proposed solutions that will make Alexa sound more emotional. Feelings are fundamental to communication: they allow us to express ourselves and build stronger relationships with our interlocutors. These improvements should make people treat voice assistants as more human.

The influence of voice-related technologies is also visible on websites. Microsoft presented AI solutions for creating smart chatbots that simulate the behavior of well-known people. There is a single requirement: gather information describing the selected person's personality, way of speaking, and other details, and upload it to the system. The system then outputs a ready-to-use chatbot.

Work automation

Software developers working in Scrum must deal with daily standups and report their work status to coworkers. All meetings take time and distract people from their work, and every one requires some preparation, which for daily standups usually means going through multiple tools such as GitHub, Jira, and Trello. A company named Convessa has created Mastermind, a voice assistant designed especially for software developers that automates many of these chores. For example, developers can ask it to gather information from multiple tools, prepare a status message, and post it on a Slack channel. No more classic daily standups! Mastermind can also send emails, call coworkers, search for locations on maps, and do many other things. Its commands can also be invoked through other voice assistants, such as Alexa, Siri, or Google Assistant.
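The aggregation step behind such an assistant can be sketched in a few lines: collect status items from several tools and merge them into one standup message. This is a generic sketch, not Mastermind's actual implementation; the `fetch_*` functions are hypothetical stubs standing in for real GitHub and Jira API calls, and the hard-coded items are sample data.

```python
# Generic standup-automation flow: gather per-tool status items, then
# format them as a single message ready to post to a chat channel.
# fetch_github_activity / fetch_jira_updates are hypothetical stubs.

def fetch_github_activity(user: str) -> list:
    # In a real system this would call the GitHub API for the user's activity.
    return ["Merged PR #42: fix login flow"]

def fetch_jira_updates(user: str) -> list:
    # In a real system this would query Jira for recently updated issues.
    return ["VOICE-17 moved to In Review"]

def build_standup_message(user: str) -> str:
    """Merge status items from all tools into one bulleted standup message."""
    items = fetch_github_activity(user) + fetch_jira_updates(user)
    lines = [f"Daily status for {user}:"] + [f"- {item}" for item in items]
    return "\n".join(lines)
```

The resulting text could then be delivered to a Slack channel, for example via an incoming webhook, closing the loop without a meeting.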

Adobe, the biggest company focused on multimedia and creativity software, is also investing heavily in integrating voice user interfaces and AI into its products. Solutions like these can greatly increase productivity on repetitive daily tasks and create space for experimentation and prototyping without manual work.

Gaming experience

Apart from practical use cases, we can also expect a big move in the entertainment area. Some companies presented their progress in developing games based on voice technologies. Soon there will be no need to steer game characters manually: a gamer's voice is going to play the primary role in the gameplay.

Summary

Voice Summit 2019 has shown that the voice industry is growing continuously and expanding into different areas of life. The near future is going to be voice-driven, and we should be excited about it!

About the authors

Filip Guzy

Software Engineer, Siili
