We have done a lot of work recently on porting the IV system to run on small embedded devices, specifically those that will be used in self-driving cars.

I have a particular issue with the way the industry has been approaching the implementation of self-driving capabilities, especially the reliance on the “cloud”.

A lot of the core driving competencies of the self-driving vehicle will be built in: they have to be, as you can’t send data to a central server and get an answer back quickly enough to avoid a crash. But we have seen that, at the moment, a lot of the “non-core” elements, from mapping to the HID (Human Interface Device), are being tossed over the fence to cloud providers.

[Image: http://www.freeimages.com/photo/useless-old-car-1541336]

Given my interest in “on-premise” voice recognition, an obvious bugbear for me is doing the Natural Language Understanding (NLU) piece in the cloud, i.e. where you talk to the car in a natural way and it reacts accordingly. Some elements, such as booking a restaurant, have to be done with some form of connectivity (known as “fulfillment”), but many can be done in the car and, I would argue, have to be.
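To make the split concrete, here is a minimal sketch of what I mean. The intent names, phrases and functions are all made up for illustration; a real system would use a proper on-device NLU model rather than keyword matching. The point is simply that parsing what the driver said happens in the car, and only the handful of tasks that genuinely need external services ever touch the network.

```python
# Hypothetical sketch: keep intent parsing on-device, and only reach out to
# the network for "fulfillment" steps that truly need it.

LOCAL_INTENTS = {
    "go home": "NAVIGATE_HOME",
    "pick up mum": "NAVIGATE_CONTACT",
    "turn up the heating": "CLIMATE_UP",
}

CLOUD_INTENTS = {
    "book a restaurant": "BOOK_RESTAURANT",  # needs external services to fulfill
}

def parse_utterance(text: str) -> str:
    """Very rough keyword matcher standing in for a real on-device NLU model."""
    text = text.lower().strip()
    for phrase, intent in {**LOCAL_INTENTS, **CLOUD_INTENTS}.items():
        if phrase in text:
            return intent
    return "UNKNOWN"

def handle(text: str, online: bool) -> str:
    intent = parse_utterance(text)          # always done in the car
    if intent in CLOUD_INTENTS.values():
        if not online:
            return "Sorry, I need a connection to do that."
        return f"Sending {intent} to a fulfillment service..."
    return f"Handling {intent} locally."

print(handle("Go home", online=False))      # works with no data link at all
print(handle("Book a restaurant", online=False))
```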

The computational power needed to build a spoken dialogue system that can react to many situations and many languages is pretty big: gigabytes of memory may be needed just to allow a wide vocabulary to be used by a driver in even one language. Add to that the need for the system to converse, and that is a lot of horsepower. So surely it makes sense to use what the cloud is good at, and provide vast computational power to throw at difficult problems, all for the cost of a data link. Why have a supercomputer in every car when you can just have one at the end of a pipe?

The answer becomes simple for anyone who has spent time away from a conurbation, or who has gone on a long-distance road trip. Quite often there is no data link at all, or if there is, it is so puny that it can only carry SMS or a trickle of data.

I think that most people who are designing and investing in these systems are commuting up and down routes well served by 3G and 4G data: Route 101 in California probably carries more VCs in a day than most other roads in the world carry in a lifetime.

I can see hybrid models being adopted, where there is a fallback from the cloud to a pretty intelligent system in the car when the main data links are slow or unavailable, perhaps even doing basic pre-processing in the car (turn the audio into text, then send it by SMS to a server for “meaning” parsing and fulfillment).
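A rough sketch of that hybrid fallback is below. The endpoint, timeout and parsing functions are all hypothetical: the idea is just to try the cloud with a tight deadline and hand over to the on-board recogniser whenever the link is slow or missing.

```python
# Hypothetical hybrid approach: cloud NLU first, on-board fallback second.

import json
import urllib.request
from urllib.error import URLError

CLOUD_NLU_URL = "https://example.com/nlu"   # made-up cloud endpoint
CLOUD_TIMEOUT_S = 1.5                       # any slower and the car takes over

def cloud_parse(text: str) -> dict:
    """Send the transcript to a (hypothetical) cloud NLU service."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(CLOUD_NLU_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=CLOUD_TIMEOUT_S) as resp:
        return json.loads(resp.read())

def local_parse(text: str) -> dict:
    """On-board fallback: cruder, but it works in a tunnel or the middle of nowhere."""
    intent = "NAVIGATE_HOME" if "home" in text.lower() else "UNKNOWN"
    return {"intent": intent, "source": "in-car"}

def understand(text: str) -> dict:
    try:
        return cloud_parse(text)
    except (URLError, TimeoutError, OSError):
        return local_parse(text)

print(understand("take me home"))
```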

But for me, if you want an autonomous vehicle, it needs to be actually autonomous, not autonomous except when you can’t connect to the internet: all of the “smarts” need to be in the car. Yes, you do need more processing, more memory and more storage in the car, but by making the car less reliant on data links, you also make it less vulnerable to hacking, and much less vulnerable to internet outages.

A few weeks ago, the entire East Coast of the US was affected by a massive Distributed Denial of Service (DDoS) attack, which meant that many popular websites could not be accessed. Imagine how you would feel if you were at work one night, hopped into your car, and it told you that it couldn’t get you home because the Internet was down. And because there is no manual control, you’re stuck in the car park, getting cold and not a little upset with your less-than-autonomous car.

And in the Internet of Things (IoT) future that will have self-reliant, meshed devices at its core, the autonomous car will be the star: and the most popular target for hackers. What better way to bring a country to a standstill than by having half the cars on the road accelerate while the other half hit the brakes?

My vision is that one day pretty soon, you will be able to talk to your car, not just to tell it basic things (“Go Home”, “Pick up Mum”), but to actually have a conversation, the logical extension of the famous Turing Test, where a machine is indistinguishable from a human. On long journeys, a companion who gets to know you helps take away the boredom, particularly if you no longer have to drive. This type of interactivity will become more and more prevalent as the chatbot moves from being a toy to a real-world tool: as the elderly care crisis gets worse, home-care “robots” will be a reality, and these need as human an interface as possible to make them acceptable to the frail and home-bound.

I know that the race at the moment is on getting a car that can drive itself. But in our haste, let’s not forget what it is we are trying to achieve: “autonomous” means a lot more than just “self-driving”.


Also see my recent article on self-driving cars in the Huffington Post here.


 
