DeepMind's AI achieves grandmaster level in StarCraft II
DeepMind (a Google subsidiary) has designed an AI system called AlphaStar that now outranks the vast majority of active StarCraft II players, demonstrating a far more robust and repeatable ability to strategize on the fly than earlier systems. This was quite a feat: StarCraft II is highly complex, with roughly 10 to the power of 26 possible choices for every move. It is also a game of imperfect information, and there are no definitive strategies for winning.
The achievement marks a new level of machine intelligence. AlphaStar used reinforcement learning, in which an algorithm learns through trial and error, to master the game with all of its races. The AI reached a rank above 99.8% of the active players in the official online league. The DeepMind team modified a commonly used technique known as self-play, in which a reinforcement-learning algorithm plays against itself to learn faster. DeepMind famously used this technique to train AlphaGo Zero, the program that taught itself, without any human input, to beat the best players at the ancient game of Go.
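To make the idea of self-play concrete, here is a minimal, hypothetical sketch in Python: a toy agent learns rock-paper-scissors by repeatedly playing against a frozen snapshot of its own earlier policy. It illustrates only the basic self-play loop, not DeepMind's modified version or anything about AlphaStar's actual architecture; all names and parameters below are invented for illustration.

import random

# Toy self-play loop: the learner plays against a frozen copy of itself,
# and the copy is periodically refreshed to the learner's latest policy.
# Everything here (SelfPlayAgent, learning rate, payoffs) is hypothetical.

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

class SelfPlayAgent:
    def __init__(self):
        # Start from a uniform preference over actions.
        self.prefs = {a: 1.0 for a in ACTIONS}

    def policy(self):
        total = sum(self.prefs.values())
        return {a: p / total for a, p in self.prefs.items()}

    def act(self):
        pol = self.policy()
        return random.choices(ACTIONS, weights=[pol[a] for a in ACTIONS])[0]

    def update(self, action, reward, lr=0.1):
        # Nudge the chosen action's preference up or down by the reward.
        self.prefs[action] = max(1e-3, self.prefs[action] + lr * reward)

def reward(mine, theirs):
    if mine == theirs:
        return 0.0
    return 1.0 if BEATS[mine] == theirs else -1.0

def self_play(episodes=20000):
    agent = SelfPlayAgent()
    opponent = SelfPlayAgent()  # frozen snapshot acting as the opponent
    for step in range(episodes):
        a, b = agent.act(), opponent.act()
        agent.update(a, reward(a, b))
        if step % 1000 == 0:
            # Refresh the opponent so the agent keeps facing its latest self.
            opponent.prefs = dict(agent.prefs)
    return agent.policy()

if __name__ == "__main__":
    print(self_play())

The same loop structure scales up in real systems: replace the preference table with a neural network and the toy game with the full environment, and the opponent snapshots become a pool of past versions to train against.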
The results were published in Nature on Oct 30, 2019: https://www.nature.com/articles/s41586-019-1724-z