Artificial intelligence took a historic step forward last week when a Google team announced that it had taught a machine to master the ancient Chinese game Go, a feat researchers have chased for decades.
While computers learned to outclass humans at checkers and chess in the 1990s, Go — a 2,500-year-old game — was still vexing computer scientists.
Because the game offers players a nearly infinite number of moves — and is difficult to score in the middle of a match — it has proved to be the most difficult of classic games to teach computers to play.
But that all changed last week as Google's researchers brought a fresh approach and wealth of computing power to findings published in the scientific journal Nature.
"It's a real milestone and surprise for me how quickly things have happened," said Martin Müller, a professor at the University of Alberta and longtime researcher of Go. A decade ago, his work helped computers draw closer to the caliber of human players — work that Google's team later built on. The company's researchers "have these new ideas, and they showed they're very effective."
The Google team hopes that in the long term, the technology behind the breakthrough can be applied to society's most challenging problems, including making medical diagnoses and modeling climates.
Such efforts are years away, the researchers acknowledge. In the near term, they're looking to integrate the work into smartphone assistants — think of Apple's Siri or Google's voice assistant.
In Go, players take turns placing black and white stones on a grid, competing to claim open territory and surround their opponent's pieces.