DeepMind is teaching its AI to navigate cities using Street View images
Google is no stranger to teaching AI to learn like a human would. We’ve seen its AI team tap into methods of childhood learning to help AI understand its environment, even helping it learn how to create or discover how to walk. Now, though, Google’s research teams are working on helping an AI remember the layout of a city the same way that humans learn to navigate their way through a complex environment.
Currently, machines navigate through a city by utilising maps, tracing a route through human-mapped roadways while staying on course via GPS. Humans and animals, however, find their way around a city via landmarks, remembering what's around certain corners or in certain fixed locations and building up a mental map of an area. It's why you can still find your way around your hometown without having to continually stare at Google Maps. Even if almost all the road layouts were changed, you'd still know how to make it to your old friend's house or the newsagents you bought those countless football stickers from.
Teaching an AI to remember the layout of a city in a similar way to a human is no simple task, but it's fallen to Google's DeepMind team to do just that. Stopping short of rolling stacks of servers down some streets, DeepMind used images from Google Street View to help its AI learn the streets of New York City, Paris and London. By walking these virtual streets, the AI could build up an intimate knowledge of a section of each city.
According to one of the researchers, Piotr Mirowski, navigation really boils down to two questions: "Where are you? And how do you get where you want to go?" Speaking to Inverse, Mirowski states that this line of thought is true for "a child walking in a neighbourhood without a smartphone, a bird learning to fly back to its nest, or a robot."

In a paper published on arXiv, the repository hosted by the Cornell University Library, the DeepMind team explain how they made an agent navigate its way through a city without a map or GPS data. The neural networks of the test AI are, essentially, like clueless tourists visiting a city for the very first time – they have no prior knowledge to lean on. By being fed image data, the neural network can then build up a picture of the city and its layout, helping the AI navigate its way to set points.
“We train the neural network to navigate through Central Park, the West Village, Midtown and Harlem,” explained Mirowski. “It’s able to memorise a map of the environment without ever seeing a map of the environment. It does this by exploring an area at random in the beginning, but then it receives a reward after getting to a destination. It’s establishing a connection with that [reward] signal and its perception.”
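The loop Mirowski describes – explore at random at first, receive a reward only on reaching a destination, and link that reward signal back to what the agent perceived along the way – is the core of reinforcement learning. As a rough illustration only (DeepMind's system is a deep network trained on Street View imagery, not this), here is a minimal sketch of that idea using tabular Q-learning on a toy grid "city"; all names, parameters and the grid setup are our own assumptions:

```python
import random

# east, west, south, north moves on the grid
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action, size):
    """Move within the grid, clamping at the edges."""
    return (min(max(state[0] + action[0], 0), size - 1),
            min(max(state[1] + action[1], 0), size - 1))

def train_navigator(size=4, goal=(3, 3), episodes=3000,
                    alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Learn to reach the goal; the only reward arrives at the destination."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated future reward

    for _ in range(episodes):
        state = (0, 0)  # every episode starts from the same corner
        for _ in range(100):  # cap episode length
            # Epsilon-greedy: mostly exploit what has been learned,
            # sometimes explore at random (the initial random wandering)
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt = step(state, action, size)
            reward = 1.0 if nxt == goal else 0.0  # reward only at the goal
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            # Q-learning update: connect the reward signal back to the
            # states and actions (the agent's "perception") that led to it
            q[(state, action)] = q.get((state, action), 0.0) + alpha * (
                reward + gamma * best_next - q.get((state, action), 0.0))
            state = nxt
            if state == goal:
                break
    return q

def greedy_path(q, size=4, goal=(3, 3), max_steps=50):
    """After training, follow the learned values from the start corner."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        if state == goal:
            break
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        state = step(state, action, size)
        path.append(state)
    return path
```

After enough episodes, following the highest-valued action from each square traces a route to the goal without the agent ever having seen a map – the "map" exists only implicitly in the learned values, which is the analogy to memorising an environment without ever seeing a map of it.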
Needless to say, this human-like interpretation of navigation is seriously impressive. However, we're still a little way from seeing it rolled out into real-world use, as there are still plenty of kinks to be worked out. For instance, the system needs to be retrained every time it's dropped into a new city, showing that it's still not capable of learning a new city on the fly – making it rather tricky for, say, an autonomous car to navigate an unfamiliar city.
However, once the DeepMind team are able to carry over the AI's navigational skills from one city and have them inform how it understands a new, unfamiliar city, we could see a whole new wave of intelligent autonomous vehicles.