Machine Learning: Language is the key to success

March 9, 2017
|
Holger G. Weiss

67 years ago, the computer scientist Alan Turing developed a test to determine whether a machine could communicate and behave as if it were a human being. During the 1950s this was the stuff of sci-fi comics and was only ever theorized in written form. Today, however, computers are more than capable of mimicking humans, even on a linguistic level. But how can we human beings benefit from this?

Voice-controlled machines have been around for quite a long time. Even before Amazon's Alexa and Apple's Siri, there were numerous applications that let users instruct a computer by voice. Examples from the automotive industry are particularly well known. Nowadays, mid-range and luxury vehicles, as well as some entry-level models, have voice recognition interfaces. These intelligent listeners are intended above all to help with navigation systems.

Nice, but dumb

There was just one problem: it didn't work. While a nice technical novelty, many users gave up on voice recognition after only a few attempts. Difficult names or imprecise pronunciation often led to frustration, effectively ruining the whole experience. At least the children in the back seat had fun listening to the dozens of failed attempts by their geeky father, exasperated that the technology he had paid extra for simply didn't do the job. More effective advances came later: with its arrival in 2011, Siri made almost natural spoken communication available to the mass market by recognizing whole sentences.

Robot talks

This evolution rests on two technical developments of recent years. First, speech recognition technology has made enormous leaps in the last five years. After taking almost 30 years to reach a recognition rate of 75 percent, the average is now around 90 percent, which is almost human level. One could say: machines understand language as well as humans do. However, it's not that simple.

Between understanding and comprehension

Language consists of two components: simple understanding (listening) as well as comprehension of the content and meaning of what is said. This is where the second technical development comes in, in the discipline of "Natural Language Processing" (NLP, not to be confused with Neuro-Linguistic Programming). There has been a great deal of progress in this area as well. Since these advances depend on large amounts of data, the two developments reinforce each other: the better a system understands, the more people use it and the more data is generated. This in turn improves the systems.

Talk to me!

The increasing, obvious success of voice systems is leading to a growing acceptance of the voice as a control interface. Future generations, for example, will no longer be irritated by a person talking to their computer on a busy subway train. Until then, however, progress is slow. People already talk to their computer or phone hands-free while driving. But these language systems still fail in exactly those important situations, which is dangerous above all for the people inside and outside the car. So far, language assistants have not been built specifically for traffic safety.

Help – we need somebody

During his lifetime, Alan Turing dreamed of intelligent machines that could do everything a human could. But it wasn't until 2014 that a piece of software passed the Turing test for the first time. If Turing were still alive, he would surely have pushed for a language-based test to measure artificial intelligence. Unfortunately he is not, and unfortunately we are still some steps away from it. But we have to speed up: we will need far better systems in our cars very soon. Voice assistants must learn to understand us and talk to us without mistakes and completely hands-free, allowing us, for example, to communicate safely without taking our hands off the steering wheel. Because it makes a safety-relevant difference whether one is talking with a "co-driver" or reading messages while driving. The need is there, and it needs to be met as soon as possible.

Holger G. Weiss
CEO & Founder at German Autolabs
https://twitter.com/travlr
https://www.linkedin.com/in/holgerweiss

We are building the first digital co-driver to create a safer and smarter mobility experience for everyone.
