Practice makes perfect: Driverless cars will learn from their mistakes

Autonomous cars are further away than we thought. Despite developments from Audi, Volvo, BMW, Mercedes and several other manufacturers, we’re still a long way from connected cities of fully driverless cars. What’s holding the connected, autonomous car back, however, isn’t technology or know-how – it’s infrastructure.

The ultimate goal of autonomous cars is a world in which there are no drivers, and no traffic lights or signals to control the traffic. Cars will communicate with each other to anticipate movements; with traffic fully digitised, it will operate like a slick, well-oiled machine. But that future relies on a ridiculous amount of infrastructure that we simply don’t have.


And even if a connected infrastructure were to work, what are the chances of it being safe and reliable? Some, including Nvidia’s director of automotive Danny Shapiro, are sceptical: “The connectivity is something I’m not sure I’d like to rely on for any kind of collision-avoidance system,” he warned. “I just think about how often my cellphone drops calls when I’m driving – you can’t really rely on an outside connection to be able to react in a fraction of a second.”

What’s the next *realistic* step for autonomous cars?

Several cars currently on the market benefit from semi-autonomous technology, and the first fully driverless cars could use similar systems as a basis for more advanced technology. Cars such as the Volvo XC90 use a range of cameras and sensors for advanced driver warning systems and semi-autonomous parking.

As we experienced in our recent test, the XC90 uses a network of sensors to find and measure suitable parking spaces, and then uses close-range radar and autonomous steering to manoeuvre the car into place. All the driver has to do is change gear and make observations.
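To make that concrete, here’s a minimal Python sketch of the kind of check a self-parking system has to make before it offers to park: is the gap the sensors just measured actually long enough for the car plus some manoeuvring room? The car length, margin and function names are illustrative assumptions, not Volvo’s actual implementation.

```python
# Hypothetical sketch of the "is this gap big enough?" check a semi-autonomous
# parking system makes. Numbers and names are illustrative, not Volvo's code.

CAR_LENGTH_M = 4.95          # approximate XC90 length
MANOEUVRE_MARGIN_M = 0.8     # assumed extra room needed to swing into the space

def gap_is_parkable(gap_length_m: float) -> bool:
    """Return True if a measured kerbside gap fits the car plus a margin."""
    return gap_length_m >= CAR_LENGTH_M + MANOEUVRE_MARGIN_M

# Ultrasonic/radar readings as the car drives past a row of parked cars:
measured_gaps_m = [4.2, 6.1, 5.4]
for gap in measured_gaps_m:
    verdict = "offer to park" if gap_is_parkable(gap) else "keep looking"
    print(f"{gap} m gap -> {verdict}")
```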

https://youtube.com/watch?v=GIa1mWr1kNs

“We start to put a lot of cameras and sensors around the car,” explained Shapiro. “We have front-facing, surround vision cameras, we have rear-view mirror cameras and back-up cameras, and they’re basically generating a massive amount of data – but what we’re doing now is working with automakers to interpret what those cameras are seeing.”

Deep learning and artificial intelligence

Rather than simply displaying vast amounts of information to a driver, the first autonomous cars will be able to decode these images and understand their environment using AI. The name for this new process? Deep learning.

Used in everything from Siri to Google Now and Microsoft’s Cortana, deep learning – a form of machine learning – involves training a computer to think like a human.


When coupled with a range of sophisticated car sensors, this AI could theoretically drive a car on its own – and that’s exactly what Nvidia is working towards.

Teaching software to think like humans

Nvidia’s Drive computer uses two Tegra X1 chips, 12 cameras and a range of lidars, radars and laser scanners to interpret what’s happening around the car.

“A picture is just a bunch of pixels and, when you look at a picture, your brain recognises what it’s of, but we’ve got to teach a computer how to recognise what’s in a picture,” said Shapiro. “Each pixel has a colour value, and what this system does first of all is break down the images and look for edges.”

Nvidia’s deep neural network uses different algorithms to search for edges, and then combines them to form elements. “We look for edges because that’s how your brain assembles images.” Essentially, they delineate one object from the next – something our brain does without us noticing. After that, Nvidia’s system is able to form elements, and assembles them to create and recognise an entire object.
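As a rough illustration of that first stage, the sketch below runs a classic hand-written edge filter (a Sobel operator) over a toy image in Python. In reality, Nvidia’s deep neural network learns its own edge-like filters from data rather than using a fixed one; this only shows how pixel colour values become an edge map.

```python
# Minimal edge-detection sketch: a Sobel filter applied by hand in NumPy.
# This is a stand-in for the learned filters in a deep neural network.
import numpy as np

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Return an edge-strength map for a 2D greyscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = gray.shape
    edges = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = gray[y:y + 3, x:x + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            edges[y, x] = np.hypot(gx, gy)  # gradient magnitude = edge strength
    return edges

# Toy "image": a dark region next to a bright one produces a strong vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
print(sobel_edges(img).round(1))
```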

Humans grasp the concept of objects through experience. For example, even when you see a particular chair for the first time, you’ll still identify it as a chair because it has all the attributes you associate with chairs.


For machines, this leap of understanding isn’t as simple. Deep learning systems must be fed with thousands of images and videos, and taught over time what each object is. “We load that model into the car so, as the car drives around, we build a map of everything that’s happening around it,” revealed Shapiro.
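A heavily simplified sketch of what that training step looks like in code is below, using PyTorch with random placeholder data standing in for the thousands of labelled images. The tiny network, the four classes and the file name are all illustrative assumptions; the point is that the output of training is a model file, and something like that file is what gets loaded into the car.

```python
# Toy training loop: labelled images in, a classification model out.
# Placeholder data and architecture -- not Nvidia's production pipeline.
import torch
import torch.nn as nn

# Stand-in dataset: 1,000 "images" of 3x32x32 pixels, each with one of 4 labels
# (e.g. 0 = person, 1 = car, 2 = traffic light, 3 = cyclist).
images = torch.rand(1000, 3, 32, 32)
labels = torch.randint(0, 4, (1000,))

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 4),       # four object classes
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):               # real training runs far longer on far more data
    for start in range(0, len(images), 64):
        batch, target = images[start:start + 64], labels[start:start + 64]
        optimiser.zero_grad()
        loss = loss_fn(model(batch), target)
        loss.backward()
        optimiser.step()

# The trained model is what would be "loaded into the car".
torch.save(model.state_dict(), "object_classifier.pt")
```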

“We can classify a person, another car, and a traffic light. Then we have an application that takes on that information and decides whether the car is accelerating, braking, turning left or turning right.” In this way, the car actually thinks for itself, and isn’t controlled by outside infrastructure.
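The sketch below shows, in purely illustrative Python, what that separation looks like: one layer produces classified objects, and a separate application layer turns them into a driving decision. The labels, distances and rules are invented placeholders, not Nvidia’s logic.

```python
# Illustrative only: classified objects go in, a high-level driving action comes out.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "person", "car", "traffic_light_red"
    distance_m: float   # estimated distance ahead of the car

def decide(detections: List[Detection], current_speed_kmh: float) -> str:
    """Pick a high-level action from the classified objects in front of the car."""
    for d in detections:
        if d.label == "person" and d.distance_m < 30:
            return "brake"
        if d.label == "traffic_light_red" and d.distance_m < 80:
            return "brake"
        if d.label == "car" and d.distance_m < 15:
            return "brake"
    return "accelerate" if current_speed_kmh < 50 else "hold speed"

print(decide([Detection("car", 12.0), Detection("traffic_light_red", 60.0)], 40.0))  # -> "brake"
```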

Interestingly, Shapiro said that, like a human, these types of self-driving cars undergo a constant learning process. If the car sees things it hasn’t encountered before, they’re recorded and then incorporated into the next deep learning session. “Once we collect all this different information, we create a deep neural network model and can then update the car over the air, so it will basically have additional vocabulary.”
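Here’s a toy sketch of that feedback loop: the car logs what its current model can’t recognise, those sightings feed the next training session, and the result comes back over the air as extra “vocabulary”. Every name and detail below is a stand-in to show the flow, not Nvidia’s pipeline.

```python
# Toy model of the fleet learning loop: log the unfamiliar, retrain, push an update.
known_vocabulary = {"person", "car", "traffic_light"}

def drive_and_log(observations):
    """Separate what the current model recognises from what it doesn't."""
    recognised = [o for o in observations if o in known_vocabulary]
    unfamiliar = [o for o in observations if o not in known_vocabulary]
    return recognised, unfamiliar

def next_training_session(unfamiliar):
    """Stand-in for labelling the new data and retraining the model."""
    return known_vocabulary | set(unfamiliar)

def over_the_air_update(new_vocabulary):
    global known_vocabulary
    known_vocabulary = new_vocabulary          # in reality: push new model weights to the fleet

seen_today = ["car", "horse_and_cart", "person", "e_scooter"]
_, new_stuff = drive_and_log(seen_today)
over_the_air_update(next_training_session(new_stuff))
print(known_vocabulary)   # now includes "horse_and_cart" and "e_scooter"
```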

“This ability to process is going to enable automakers to make the cars smarter and handle more and more driving autonomously over time.”

Teaching computers to drive

The next step? To use this intelligence not only to understand situations, but to react to them as a human would. Soon, deep learning won’t just be about recognising objects; it’ll be about recognising objects and attaching their predicted behaviour to them. “The next generation of driver assistance is going to help the cars brake, accelerate and steer, and, to be able to do this, we’ll need to understand everything going on around the car and the notion of free space. Where is there an object and where can we actually drive?”
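The “free space” idea can be sketched very simply: divide the road ahead into a grid, mark the cells occupied by detected objects, and treat everything else as potentially drivable. The grid size and object positions below are invented for illustration.

```python
# Minimal free-space / occupancy-grid sketch with made-up detections.
import numpy as np

GRID_ROWS, GRID_COLS = 10, 5          # 10 cells ahead, 5 cells across the road
occupancy = np.zeros((GRID_ROWS, GRID_COLS), dtype=bool)

# Detected objects as (row, col) cells: a car two cells ahead in the middle,
# a cyclist further on near the right-hand edge.
for row, col in [(2, 2), (4, 4)]:
    occupancy[row, col] = True

free_space = ~occupancy               # everything not occupied is potentially drivable
print("Drivable cells per row:", free_space.sum(axis=1))
```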

After recognition, Nvidia’s next step is to predict the behaviour of objects on the road – something human drivers do automatically. When encountering a cyclist, a human driver knows it will behave differently to a normal vehicle. While the first step is recognising the bicycle, the second layer of behaviour prediction is the next problem for AI-enabled cars.
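In code, that second layer might look something like the sketch below: recognition supplies a label, and a lookup attaches the behaviour the car should expect from it. The classes, numbers and margins are invented placeholders rather than a real prediction model.

```python
# Illustrative behaviour-prediction lookup: label in, expectations out.
EXPECTED_BEHAVIOUR = {
    "car":     {"max_speed_kmh": 130, "may_swerve": False, "keep_clear_m": 2.0},
    "cyclist": {"max_speed_kmh": 30,  "may_swerve": True,  "keep_clear_m": 1.5},
    "person":  {"max_speed_kmh": 10,  "may_swerve": True,  "keep_clear_m": 2.5},
}

def predict_behaviour(label: str) -> dict:
    """Recognition gives us a label; prediction tells us what to expect from it."""
    default = {"max_speed_kmh": 50, "may_swerve": True, "keep_clear_m": 3.0}
    return EXPECTED_BEHAVIOUR.get(label, default)

print(predict_behaviour("cyclist"))   # wider margin, expect sudden changes of line
```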

A car with a mind of its own

However, despite their different approaches, deep learning and connected cars aren’t directly opposed to each other. Instead, it’s likely that one will come well before the other. Deep learning takes data from existing sensors and feeds it through next-generation processing but, significantly, it can work with our existing infrastructure.

While connected cars represent the most elegant, idealistic interpretation of driverless tech, they’re decades away. “What we’d do is we would get rid of all the cars today, and everyone would get a brand new car that communicates with others and the intersection,” said Shapiro. “We wouldn’t even need traffic lights, we could just have a traffic free-for-all. None of the cars would collide because they’d all be managed, but it’s going to be quite some time before we have real vehicle-to-vehicle communication that’s robust enough to help.”

What does all this mean?

It’s clear that connected cars are where we’ll end up, but they’ll require a massive undertaking in infrastructure and investment to work. Until then, AI represents our best chance at a driverless city. By using existing sensor technology and harnessing the vast power of deep learning, artificial drivers will be able to share the roads with humans – and it’s going to happen sooner rather than later.

For more about the state of autonomous cars read: How far away are we *really* from autonomous cars?
