Back in June, Apple’s Tim Cook revealed that the iPhone maker would be turning its attention to building autonomous systems for self-driving cars. He called this “the mother of all AI projects,” made it clear that Apple wouldn’t be building its own car, then fell silent. Nothing has been publicly heard about the company’s efforts in this area since.

Now, research from Apple’s machine learning wing has given the first indication of how the firm could be tackling AI for autonomous cars. Submitted to arXiv.org, an online repository for research papers, the work describes how a 3D object-detection system could make on-board sensors more useful for autonomous navigation, and could also improve “housekeeping robots, and augmented/virtual reality”.
Apple’s system is dubbed VoxelNet. It might sound like the hottest nightclub in Berlin, but it’s based around squeezing more accuracy from LIDAR sensors – the eyes in the majority of self-driving cars.
LIDARs work by firing rapid laser pulses at their surroundings, then measuring how long it takes for the light to bounce back. This allows them to gauge distance and build a 3D map of the scene around them. One of the main drawbacks of this method is that the maps are often patchy, because objects in the foreground block anything behind them. If you look at a LIDAR map, you’ll see this as long, shadow-like gaps in the scene.
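To put a number on the time-of-flight idea, here is a minimal Python sketch (not from Apple’s paper; the function name and the example timing are invented purely for illustration):

```python
# Minimal time-of-flight illustration: distance is half the round-trip
# time of the laser pulse multiplied by the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance (in metres) to the surface that reflected the pulse."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A pulse that comes back after 200 nanoseconds hit something roughly 30 m away.
print(f"{lidar_distance(200e-9):.1f} m")  # -> 30.0 m
```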
Engineers get around this by setting up systems capable of dividing LIDAR data into voxels (3D pixels) and then identifying what’s in them, such as other cars and pedestrians. Apple’s researchers Yin Zhou and Oncel Tuzel essentially propose a single neural network that’s able to do all of this without the need for “manual feature engineering”. They describe VoxelNet as “a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network.”
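The voxelization step itself is straightforward to sketch. The snippet below is a rough, simplified illustration in Python (it is not Apple’s pipeline; the 0.2 m grid size and the voxelize helper are invented for clarity). It groups a raw point cloud into occupied voxels, the kind of gridded input a network like VoxelNet would then learn features from end to end.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.2) -> dict:
    """Group a LIDAR point cloud into voxels ("3D pixels").

    points: (N, 3) array of x, y, z coordinates in metres.
    Returns a dict mapping integer voxel indices (i, j, k) to the
    list of points that fall inside that voxel.
    """
    voxels: dict = {}
    indices = np.floor(points / voxel_size).astype(int)
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return voxels

# Toy point cloud: the first two points land in the same 0.2 m voxel.
cloud = np.array([
    [1.03, 2.10, 0.15],
    [1.07, 2.12, 0.18],
    [5.40, 0.90, 1.30],
])
grid = voxelize(cloud)
print(len(grid))  # -> 2 occupied voxels
```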
Apple isn’t the first company to try to overcome LIDAR’s limitations, but the results sound encouraging. The system was only tested on computer simulations, but the researchers conclude that “VoxelNet outperforms state-of-the-art LiDAR based 3D detection methods by a large margin”.
The paper is also significant in that it’s a relatively open move for a company that is famously secretive about its AI research. The research does not necessarily indicate that Apple is working on this technology for a particular product – and arXiv.org is a means of getting ideas to a wider community, not a peer-reviewed journal – but it certainly hints at an avenue of thought the company could be pursuing. In July, Apple launched a blog about its engineers’ efforts in machine learning. There are posts there about Siri and face detection, although autonomous cars are notably absent.
Image: A slide from the paper by Yin Zhou and Oncel Tuzel