Google DeepMind just learned to read the London Underground map through memory and basic reasoning
Learning to read the London Underground map is a rite of passage for any new Londoner, but DeepMind – Google’s deep learning AI with an interest in healthcare and board games – has joined the ranks of those who know the fastest way to get from Acton Town to Wapping.
That may not sound hugely impressive, but the manner in which it has learned to read the Tube map is very interesting for the future of artificial intelligence, as it used basic reasoning and memory to conquer the commute. In other words, it was more human than your average Tube map app.
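For contrast with how the network does it, route-finding on a transport map is a classic graph-search problem that a conventional program solves with breadth-first search. The sketch below uses a made-up, heavily simplified fragment of the Tube graph (the station links are illustrative, not real Tube topology); the point of the DeepMind result is that its network learned this kind of behaviour from examples rather than running a hand-coded search like this one:

```python
from collections import deque

# Hypothetical, simplified station links -- not the real Tube topology,
# and real journey planning also weighs line changes and travel time.
TUBE = {
    "Acton Town": ["Hammersmith"],
    "Hammersmith": ["Acton Town", "Westminster"],
    "Westminster": ["Hammersmith", "Whitechapel"],
    "Whitechapel": ["Westminster", "Wapping"],
    "Wapping": ["Whitechapel"],
}

def shortest_route(start, goal):
    """Breadth-first search: returns one fewest-stops route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TUBE[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route("Acton Town", "Wapping"))
```

The hand-coded version needs the graph spelled out in advance; the interesting part of DeepMind's system is that it builds and queries its own representation of the network in memory.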
“I think this can be described as rational reasoning,” Herbert Jaeger, a computer scientist from Jacobs University Bremen, told The Guardian. “They [the tasks] involve planning and structuring information into chunks and re-combining them.” Combining deep learning with an external memory means that DeepMind could take what it learned from the London Underground, and apply it to navigating other similar transport networks around the world.
This is different from things that have gone before. As Alex Graves, a research scientist at DeepMind, told Wired: “You can’t give normal neural networks a piece of information and let them keep it indefinitely in their internal state – at some point it will be overwritten and they will essentially forget it.” This neural network, however, could keep the memory forever.
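To make the contrast concrete, here is a deliberately crude caricature of an external memory: a matrix whose rows are written explicitly and read back softly by similarity to a query key. The real differentiable neural computer makes reading and writing fully differentiable and learns its addressing end to end, so this class and its parameters are purely illustrative:

```python
import numpy as np

# Caricature of DNC-style external memory. Unlike an RNN's hidden
# state, a written row stays put until its slot is explicitly reused.
class ExternalMemory:
    def __init__(self, slots=8, width=4):
        self.M = np.zeros((slots, width))
        self.next_free = 0

    def write(self, vector):
        self.M[self.next_free] = vector          # store in the next slot
        self.next_free = (self.next_free + 1) % len(self.M)

    def read(self, key):
        weights = np.exp(self.M @ key)           # dot-product similarity...
        weights /= weights.sum()                 # ...softmaxed into a soft mix
        return weights @ self.M

mem = ExternalMemory()
fact = np.array([1.0, 0.0, 0.0, 0.0])
mem.write(fact)
mem.write(np.array([0.0, 1.0, 0.0, 0.0]))        # later, unrelated writes...
mem.write(np.array([0.0, 0.0, 1.0, 0.0]))
recalled = mem.read(fact)                        # ...leave the fact intact
```

The stored row is not degraded by subsequent, unrelated writes, which is the property Graves is pointing at: the memory sits outside the network's internal state instead of being overwritten by it.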
The same strategy was used on two other tasks – both of which, again, seem trivial to humans. DeepMind was given simple extracts of stories, such as “John is in the playground. John picked up the football.” From there the AI would be asked where the football was, and it provided the correct answer to this kind of puzzle 96% of the time. Graves concedes that while these puzzles “look so trivial to a human that they don’t seem like questions at all,” it’s the methodology that is interesting. Traditional computers, he says, “do really badly at this.”
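A hand-written program can only answer such story questions by tracking state explicitly, sentence by sentence. The toy tracker below handles just the two sentence patterns quoted above (the parsing is a stand-in, not how DeepMind's network works – the network learns the mapping from example stories instead):

```python
# Toy state tracker for the bAbI-style story quoted in the article.
# Only two sentence shapes are handled; anything else would need
# another hand-written rule, which is exactly the brittleness the
# learned approach avoids.
def where_is(story, thing):
    location = {}   # person -> place
    holder = {}     # object -> person carrying it
    for sentence in story:
        words = sentence.rstrip(".").split()
        if words[1] == "is":                  # "John is in the playground."
            location[words[0]] = words[-1]
        elif words[1] == "picked":            # "John picked up the football."
            holder[words[-1]] = words[0]
    person = holder.get(thing)
    return location.get(person, "unknown")

story = ["John is in the playground.", "John picked up the football."]
print(where_is(story, "football"))  # -> playground
```

The question never mentions the playground, so answering it requires composing two facts – who holds the ball, and where that person is – which is the chunk-and-recombine behaviour Jaeger describes.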
In another puzzle, explained in the video above, DeepMind was able to establish familial relationships by reading a family tree.
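The family-tree task is again a matter of composing stored relations: a grandparent link never appears as a fact, only as two chained parent links. A minimal hand-coded counterpart (the names are made up; the network infers such links from raw facts rather than from a written rule) looks like this:

```python
# Hypothetical parent facts as (parent, child) pairs.
PARENT = {("Mary", "Tom"), ("Tom", "Alice")}

def grandparents(facts):
    """Compose the parent relation with itself to infer grandparents."""
    return {(gp, gc)
            for gp, p in facts
            for p2, gc in facts
            if p == p2}

print(grandparents(PARENT))  # -> {('Mary', 'Alice')}
```

Each inferred pair exists nowhere in the input; it has to be derived by joining two stored facts, which is the kind of structured lookup the external memory makes possible.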
“Taken together, our results demonstrate that [differentiable neural computers] have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory,” the authors concluded in their paper. “Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data.”