When voice assistants become co-drivers

August 20, 2019 | Holger G. Weiss

Voice assistants are ubiquitous these days. Tell them what you want and (if they understand you) your music will play, your shopping list will be updated and your dog food will be ordered. The 2019 Consumer Electronics Show in Las Vegas was dominated by Alexa and Google Assistant, which proudly decorated the city with enormous posters and marketing campaigns. Then there’s Siri, Bixby, Cortana, and so on. We are the generation learning how to use our voices to tell machines what we want and what we need – and the machines are learning how best to respond.

But how will all these assistants differentiate themselves from one another? The answer lies in their skill sets and capabilities. The vast majority of use cases in our daily lives will be covered by one or two assistants: your home will be either Alexa- or Google-connected. These are the platforms that will grant you access to a seemingly infinite skill set, and their services will evolve over the coming years to the point where you will be able to do almost anything – with your voice alone.

However, Alexa, Google Assistant and their ilk are so-called horizontal platforms, built to serve as many use cases in as many languages as possible. Some very important and unique use cases will remain beyond them, and the car is perhaps the biggest challenge of all. Opening a sunroof or finding a destination should be no different from playing music at home. The challenge is the driver. Fully autonomous driving remains 15 to 20 years away in terms of widespread adoption. Drivers need to focus on the road, and while in certain situations they might be able to take their hands off the wheel, we’re not at the point where they can go to sleep or watch a movie – yet.

This in-car environment demands special skills of an assistant. Much like a rally co-driver, the system not only needs to understand what the driver wants; it also has to ‘read their mind’ and predict situations coming up on the road.

This is exactly what we are working on here at German Autolabs. One component of our technology tries to predict a driver’s personal cognitive status. We call this SAFE: the Situational Awareness Factor.

Supported by the German Ministry of Transportation and its mFund initiative, the SAFE algorithm combines historical traffic data and real-time information such as weather and current traffic conditions with a risk factor calculated for each individual street segment. The result is a SAFE score that reflects the cognitive stress level a driver will be subjected to in any given situation. This technology helps keep drivers and passengers safe by reducing interactions with the assistant at critical moments, prioritizing certain information and optimizing the contextual load on the driver.
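To make the idea concrete, here is a minimal sketch of how a score like this could be assembled and used to gate assistant interactions. Everything below is an illustrative assumption: the input names, weights and thresholds are invented for this example, and the actual SAFE algorithm is not described in detail here.

    from dataclasses import dataclass

    # Hypothetical inputs: these fields and weights are illustrative only.
    # The real system blends historical data, live conditions and a
    # per-segment risk factor; the exact formula is an assumption.

    @dataclass
    class SegmentContext:
        base_risk: float         # precomputed risk factor for this street segment, 0..1
        traffic_density: float   # real-time traffic level, 0..1
        weather_severity: float  # real-time weather severity, 0..1

    def safe_score(ctx: SegmentContext,
                   w_traffic: float = 0.3,
                   w_weather: float = 0.2) -> float:
        """Combine static segment risk with live conditions into a
        0..1 cognitive-stress estimate. Weights are guesses."""
        score = (ctx.base_risk
                 + w_traffic * ctx.traffic_density
                 + w_weather * ctx.weather_severity)
        return min(score, 1.0)

    def should_deliver(message_priority: int, score: float) -> bool:
        """Gate assistant interactions: only high-priority messages
        get through when estimated driver stress is high."""
        if score > 0.8:
            return message_priority >= 2   # e.g. navigation-critical only
        if score > 0.5:
            return message_priority >= 1   # defer chit-chat, allow alerts
        return True                        # low stress: deliver everything

    # Example: a risky intersection, heavy traffic, bad weather.
    ctx = SegmentContext(base_risk=0.6, traffic_density=0.9, weather_severity=0.7)
    score = safe_score(ctx)  # ~1.0, so non-critical prompts are suppressed
    print(score, should_deliver(0, score), should_deliver(2, score))

In this sketch, a low-priority notification (a new podcast episode, say) would simply be held back until the score drops, while a navigation-critical instruction is always delivered.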

Take a look at the video where my co-founder Patrick, our Head of Product, demonstrates this new piece of German Autolabs technology.

SAFE Technical Demonstration
