Musk condemns “crazy” Tesla Autopilot videos

In the first video, Tesla’s semi-autonomous Autopilot is being used on a narrow, winding road rather than a motorway. Alarm bells should already be ringing.

At first, the Model S’s Autopilot appears to function normally, but it then suddenly disengages, handing control back to the driver. Instead of taking the wheel, they continue to film, and the Tesla veers into the path of an oncoming car. As you’d expect, the video ends abruptly after that.

In another video, the driver leaves the Autopilot mode on after leaving the motorway. The result? Not great.

Tesla Autopilot: What is it?

Launched earlier in the month to Tesla owners in North America, Autopilot allows the Model S to autosteer, change lanes on motorways, adjust speed depending on the traffic and even parallel park semi-autonomously. The software was initially trialled in August by a group of 600-700 beta testers, and uses a radar system and 12 sensors to survey the car’s surroundings.

Despite the lengthy trial period, not all of the Autopilot’s mistakes have been the result of user error. A quick look at the Tesla Motors Club forums reveals several instances of the Autopilot going wrong, even when used in the conditions for which it was designed.

Even before launch, Elon Musk advised caution over the new Autopilot features. Rather than relying solely on the software, the Tesla boss said the Autopilot mode was designed to improve confidence and enjoyment of motorway driving. “We’re being especially cautious at this early stage, so we’re advising drivers to keep their hands on the wheel just in case,” he added.

So pretty much the opposite of stuff like this, then:

Humans are the weakest link

When Google’s car had its first injury-causing accident, it wasn’t because of a software glitch or a sensor failure – it was caused by a human not paying attention. In the same way, the majority of “Autopilot malfunctions” are being caused by drivers failing to understand the capabilities of the Autopilot system.

But what about the genuine issues with Tesla’s autonomous software? Simply put, they’re a necessary and expected part of the development of autonomous technology. One day, cars will be wirelessly connected to each other, and their movement will be mapped and regulated by a central, cloud-based or ad hoc system. In the meantime, current autonomous systems use a combination of sensors and software to intelligently react to the road – just like a human would.

Tesla’s Autopilot uses deep learning to achieve its artificial intelligence, and this learning process is constant. With each mile covered, Tesla’s system will improve – and, in a few years’ time, it could be as good as any human driver.

“Each driver is a trainer in how the Autopilot should work,” Musk told VentureBeat. “There are some things that the vehicle may not be completely prepared for, like always recognising stop signs and traffic lights, but future versions will address these issues.” On the launch of the Autopilot features, Musk added: “Long term it will be way better than a person. It never gets tired, it’s never had anything to drink, it’s never arguing with someone in the car. It’s not distracted.”

Until then? It’s probably best to read – and follow – the instructions first.

To find out how far away we are from a self-driving future, read: How far away are we really from autonomous cars?
