Google DeepMind’s AI is becoming adept at team eSports
Google DeepMind has proved pretty effective at mastering games over the years. It is officially the finest Go player in the world after beating the reigning human world champion, and has since turned its figurative hand to StarCraft II, while also proving adept at predicting when patients are going to die. It truly is a jack of all trades.

The problem is that DeepMind is very much a lone wolf. It works alone and doesn't necessarily know how to play nicely with humans, or indeed with other AIs. You might not consider the best remedy for this to be arming DeepMind, but the good news is that the weapons and battlefield are purely virtual: the AI has learned to play Quake III's Capture the Flag multiplayer mode and, predictably, has got to the point where it can teach humans a thing or two.
For those unfamiliar with Capture the Flag, like 'Pin the Tail on the Donkey', the name leaves little to the imagination. Two armed teams each have a flag, and each must carry the opponents' flag back to its own base to score a point. If the flag carrier is shot en route, the flag is returned to its base.
You're probably up to speed now. Hand you a gamepad and you'd make a fair fist of it, but it's more of a struggle for an artificial intelligence, which has to learn not only the rules and tactics but also the basics of the game: moving around, changing weapons, shooting, and even recognising what an enemy looks like. It learns all this on the spot by experimenting, which means sitting through several hundred thousand nil-nil draws before anyone figures out how to score a point. That's not an exaggeration: it took DeepMind nearly half a million five-minute matches on randomly generated maps to get up to speed.
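To get a feel for why so many scoreless matches are needed, here is a deliberately toy sketch of trial-and-error learning with a sparse reward. This is not DeepMind's actual system (which trains deep networks from raw pixels); it's a minimal epsilon-greedy agent, with made-up action names, that must stumble onto the one hypothetical action that ever scores before it can start preferring it.

```python
import random

# Toy illustration only: the agent starts with no idea which primitive
# action is worth anything. Reward is sparse, so early play is mostly
# fruitless, mirroring the long run of nil-nil draws described above.
ACTIONS = ["move", "shoot", "switch_weapon", "capture"]

def play_episode(action_values, rng, epsilon=0.2):
    """Pick an action epsilon-greedily, observe reward, update its value."""
    if rng.random() < epsilon:
        action = rng.choice(ACTIONS)                   # explore at random
    else:
        action = max(ACTIONS, key=action_values.get)   # exploit best so far
    reward = 1.0 if action == "capture" else 0.0       # sparse reward signal
    # Nudge the action's estimated value toward the observed reward.
    action_values[action] += 0.1 * (reward - action_values[action])
    return action, reward

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        play_episode(values, rng)
    return values

values = train()
print(max(values, key=values.get))  # "capture" wins out after enough trials
```

The point of the sketch is the shape of the problem, not the scale: with only four actions the agent finds the scoring one within a couple of thousand tries, whereas learning movement, aiming, and teamwork from pixels takes the hundreds of thousands of matches the article describes.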
Once there, though, it proved hugely formidable. DeepMind's AI agents picked up the basics but also learned human-like strategies, such as guarding their own flag, camping at the opponent's base, and teaming up to outnumber any enemies that crossed their paths.
These tactics worked. The researchers held a mini-tournament in the fairly flimsy name of work, mixing 40 human players with DeepMind's agents in purely human teams, AI-only teams, and blends of the two. The AI-only teams came out on top with a 74% win probability; average human players managed 43%, and even strong human players only 52%.
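A note on what a "win probability" means here: figures like these are typically derived from Elo-style ratings (an assumption on my part; the article doesn't say how they were computed). Under the standard Elo model, a rating gap maps to an expected win rate like so:

```python
def elo_win_probability(rating_a, rating_b):
    """Expected score of player A against player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Evenly matched players split the games.
print(elo_win_probability(1000, 1000))  # 0.5

# A gap of roughly 180 Elo points corresponds to about a 74% win probability,
# illustrating the sort of margin the bot-only teams held.
print(round(elo_win_probability(1180, 1000), 2))
```

The ratings themselves (1000, 1180) are illustrative, not the study's numbers; only the formula is standard.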
In other words, not only can DeepMind’s AI be better than humans, it also knows how to cooperate with others. Oh.
However, there is one important caveat: that 74% win probability is for 2v2 matches. When teams of four were introduced, DeepMind's win probability dropped to 65%. That's still better than the humans managed, but it does suggest that some of the lessons don't scale up to larger teams.
Of course, literally beating humanity at its own game is a side effect rather than the intention of the research. Teaching AI to cooperate is a noble goal, and if there's one thing it can learn from our history as a species, it's that we're definitely more than the sum of our parts.