DeepMind has been training its AI to play StarCraft II – and now anyone can do the same
Not content with taking on Atari’s finest, and trouncing the world champion of the complex Chinese game of Go, Google’s DeepMind has been setting its AI on StarCraft II.
The AI company has worked with StarCraft’s creator Blizzard to design an API that will let researchers and developers train their AI programs in the game’s complex environment.
In particular, this platform will help them test their AI using so-called “reinforcement learning” techniques, which typically involve making the AI attempt a task repeatedly until it makes a mistake. The AI then learns from that mistake and tries again.
Eventually, the AI carries out enough moves for it to learn the most effective way to complete the task, which in this case will be completing the game, or at least mini-games within the game.
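To make the trial-and-error idea concrete, here is a minimal sketch of one classic reinforcement learning method, tabular Q-learning, on a hypothetical toy task (a five-cell track where the agent must learn to walk right to a goal). This is an illustration only, vastly simpler than anything used for StarCraft II; all names and numbers here are invented for the example.

```python
import random

random.seed(0)

N_STATES = 5          # cells 0..4; reaching cell 4 earns a reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's current estimate of future reward for each
# (state, action) pair, starting from complete ignorance
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    # Explore occasionally; otherwise exploit what has been learnt so far
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):          # repeat the task many times
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Learn from the outcome of this move, good or bad
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After enough repetitions, the learnt policy steps right in every cell
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The same loop of act, observe, and update underlies the StarCraft II work, just with a neural network in place of the table and a far richer environment.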
“DeepMind’s scientific mission is to push the boundaries of AI by developing systems that can learn to solve complex problems,” explained DeepMind’s Oriol Vinyals, Stephen Gaffney and Timo Ewalds in a blog post. “To do this, we design agents and test their ability in a wide range of environments from the purpose-built DeepMind Lab to established games, such as Atari and Go.”
The post continued that testing “agents” in games not specifically designed for AI research, and particularly games which humans excel at, is crucial to improving AI performance.
Blizzard added: “We recognise the efforts made by researchers over the years to advance AI using the original StarCraft. With the StarCraft II API, we’re providing powerful tools for researchers, gamers, and hobbyists to utilize the game as a platform to further advance the state of AI research. This API also exposes a sandbox for the community to experiment with, using both learning based AI and scripted AI to build new tools that can benefit the StarCraft II and AI communities.”
The release is being referred to as SC2LE and it includes a set of tools DeepMind says it hopes will accelerate AI research. These include:
A machine learning API developed by Blizzard which includes the release of tools for Linux for the first time.
A dataset of anonymised game replays, which will grow from 65,000 to more than half a million in the coming weeks.
An open-source version of DeepMind’s toolset that lets researchers use Blizzard’s feature-layer API with their own AI.
A series of simple mini-games that will let researchers test their AI on specific tasks.
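DeepMind's open-source toolset was released as the PySC2 Python package. As a rough sketch of how a researcher might get started (the package name and flags below follow the toolset's documentation at release, but versions may differ, and StarCraft II itself plus the mini-game map packs must already be installed):

```shell
# Install DeepMind's PySC2 toolset
pip install pysc2

# Run the built-in random agent on one of the mini-games
python -m pysc2.bin.agent --map CollectMineralShards
```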
The research has also been published in a joint paper that gives more detail about the StarCraft II environment and reveals how the AI fared in initial tests, during which it played mini-games and was supervised as it learnt from replays.
The partnership was originally unveiled in November. At the time, DeepMind said: “An agent that can play StarCraft will need to demonstrate effective use of memory, an ability to plan over a long time, and the capacity to adapt plans based on new information.” This is a significant challenge for AI because computers have historically struggled to keep up with the same number of “actions per minute” as humans do when playing such games.
DeepMind calls these lessons “curriculum” scenarios, each presenting increasingly complex tasks to “allow researchers of any level to get an agent up and running, and benchmark different algorithms and advances.”