Google DeepMind can backstab with the best of them
One potentially huge advantage that hard-headed, logical AI has over our squishy, fallible human methods of getting things done is that emotions don’t get in the way. AI has no nose to cut off in order to spite its face, which means in theory things should be much more efficient.
But what if there were two AIs competing for the common good? Would they work together to achieve their shared aim, or try to outshine each other in a bid for glory? That’s what Google’s latest DeepMind experiments sought to find out, and intriguingly they left the AI looking far more human than you might have expected.
Google pitted two DeepMind AI agents – red and blue – against each other in a couple of computer games to see how they would deal with each other. In the first, they had to collect apples that appeared onscreen, but with the added twist that each AI had a laser that could temporarily disable its opposite number if it chose to use it.
The result from thousands of run-throughs? Surprisingly human. When digital apples were in abundance, the AIs would generally co-operate peacefully, collecting the apples as they went along. As soon as apples became scarce, however, out came the lasers. Sound familiar?
Oh, and larger, more intelligent neural networks tended to shoot their opponent no matter how many apples there were around. Read into that what you will, although Google doesn’t necessarily think this means selfish is smart. It might just be that because shooting requires more “skill”, the dumber AI didn’t want to be distracted from the task of apple hunting unless absolutely necessary.
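To make the mechanics concrete, here’s a minimal toy sketch of the apple game as described above – two agents, collectable apples that regrow over time, and a zap that benches the opponent for a few turns. This is an illustrative mock-up, not DeepMind’s actual Gathering environment; the class name, parameters and respawn/time-out values are all invented for the example.

```python
import random

class GatheringToy:
    """Toy sketch of the game described in the article (NOT DeepMind's
    real Gathering environment): two agents collect apples, and each
    has a 'zap' action that sidelines its opponent for a few steps."""

    def __init__(self, respawn_prob=0.1, zap_timeout=3, n_apples=10):
        self.respawn_prob = respawn_prob    # chance an apple regrows each step (assumed value)
        self.zap_timeout = zap_timeout      # steps a zapped agent sits out (assumed value)
        self.apples = n_apples
        self.scores = {"red": 0, "blue": 0}
        self.frozen = {"red": 0, "blue": 0}  # remaining time-out per agent

    def step(self, actions):
        """actions: dict mapping 'red'/'blue' to 'collect' or 'zap'."""
        for agent, action in actions.items():
            if self.frozen[agent] > 0:       # a tagged agent skips its turn
                self.frozen[agent] -= 1
                continue
            if action == "zap":
                other = "blue" if agent == "red" else "red"
                self.frozen[other] = self.zap_timeout
            elif action == "collect" and self.apples > 0:
                self.apples -= 1
                self.scores[agent] += 1
        # collected apples occasionally regrow, so scarcity varies over time
        if random.random() < self.respawn_prob:
            self.apples += 1
        return self.apples, dict(self.scores)
```

With apples plentiful, both agents score faster by collecting; once `self.apples` runs low, a zap denies the opponent several turns of collecting – which is the scarcity trade-off the DeepMind agents learned about.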
The second game was one that favoured co-operation: “Wolfpack”. In this game, the two AIs were charged with catching a tricksy blue dot around a map. Anyone who has been blue-dot hunting themselves will know that it’s far easier to put one in your hunting-lodge trophy cabinet if you co-operate to corner it. If you chase the blue dot on your own, things get far more “Benny Hill”.
In this instance, smaller, “dumber” networks would often go solo, but the larger smarter networks quickly established that working together achieved better results. Though of course, with only one blue dot per game, resource scarcity is not at play here.
So artificial intelligence can be co-operative or competitive depending on the context. How disappointingly human. You can read more about the experiment on the DeepMind blog, and in the research paper.