MOUNTAIN VIEW, Calif., Jan. 28 (UPI) -- Google[1]-owned company DeepMind has developed a computer program that has mastered the ancient Chinese game of Go.
The program, known as AlphaGo[2], defeated European Go champion Fan Hui, a feat some believed artificial intelligence could not achieve due to the intuitive nature of the 2,500-year-old game.
"I've always thought it would be a great challenge for computers to be able play such an aesthetic game, an intuitive game like Go, a much greater challenge than it was to play chess." DeepMind CEO Demis Hassabis said.
Go involves players alternately placing black and white game pieces known as "stones" on a 19x19 grid, with the goal of surrounding the opponent's stones while keeping one's own from being surrounded.
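For readers curious how that capture rule works in practice, here is a minimal illustrative sketch in Python (not AlphaGo's actual code, and the names are hypothetical): it represents the 19x19 board as a grid and checks whether a connected group of stones still touches an empty point, since a group with no such "liberties" is surrounded and removed.

```python
# Illustrative sketch of a Go position: a 19x19 grid where a connected group
# of stones is captured once it has no adjacent empty points ("liberties").
SIZE = 19
EMPTY, BLACK, WHITE = ".", "B", "W"

def neighbors(r, c):
    """Yield the orthogonally adjacent points that lie on the board."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield nr, nc

def group_has_liberty(board, r, c):
    """Return True if the connected group containing (r, c) touches any empty
    point; a group with no liberties is surrounded and would be captured."""
    color = board[r][c]
    seen, stack = {(r, c)}, [(r, c)]
    while stack:
        cr, cc = stack.pop()
        for nr, nc in neighbors(cr, cc):
            if board[nr][nc] == EMPTY:
                return True
            if board[nr][nc] == color and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

# Example: a lone white stone in the corner has only two adjacent points;
# once both hold black stones, the white group has no liberties left.
board = [[EMPTY] * SIZE for _ in range(SIZE)]
board[0][0] = WHITE
board[0][1] = BLACK
board[1][0] = BLACK
print(group_has_liberty(board, 0, 0))  # False -> the white stone is captured
```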
Other AI researchers believe that AlphaGo's victory over Fan Hui represents a significant advance in AI technology.
"On the technical side, this work is a monumental contribution to AI," OpenAI director Ilya Sutskever told MIT Technology Review[3]. "The same technique can be used to achieve extremely high performance on many other games as well."
While AlphaGo's next step is to test its abilities against other Go champions worldwide, Hassabis believes the technology behind the program can be useful far beyond the world of games.
"Ultimately we want to apply these techniques to important real-world problems," he said. "Because the methods we used were general purpose, our hope is that one day they could be extended to help address some of society's most pressing problems, from medical diagnostics to climate modeling."
References
- [1] Google (www.upi.com)
- [2] known as AlphaGo (www.youtube.com)
- [3] told MIT Technology Review (www.technologyreview.com)