By Shen Gao
Staff Writer
Google’s artificial intelligence (AI) researchers appear to have cracked one of the field’s biggest challenges: a computer program has beaten a professional player at the ancient game of Go.
The game of Go has long been a major goal of AI research because it is one of the hardest games for computers to play well: the number of possible moves available to a player is astronomically large.
Traditionally, computers beat humans at board games through what is called “brute-force” computing: calculating, as fast as possible, the consequences of every possible move. For Go, that strategy is all but useless.
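To see why brute force breaks down, consider the size of the search tree. The branching factors below are rough, commonly cited averages (about 35 legal moves per turn in chess versus roughly 250 in Go), used here purely for illustration:

```python
# Rough illustration of why brute-force search collapses for Go.
# Branching factors are approximate, commonly cited averages,
# not exact values.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_to_depth(branching: int, depth: int) -> int:
    """Number of leaf positions a brute-force search must examine
    to look `depth` moves ahead."""
    return branching ** depth

# Looking just 4 moves ahead:
print(f"chess, 4 moves ahead: {positions_to_depth(CHESS_BRANCHING, 4):,}")
print(f"go,    4 moves ahead: {positions_to_depth(GO_BRANCHING, 4):,}")
```

Even at this shallow depth, Go’s tree is thousands of times larger than chess’s, and the gap widens exponentially with every additional move of lookahead.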
Instead, to beat a human at a game like Go, a computer has to think and work more like a human does. Ironic, right?
Google’s AI research lab DeepMind has developed a program whose decisions are driven by two neural networks. One, the “value network,” evaluates the computer’s position on the board. The other, the “policy network,” chooses where to move.
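The division of labor between the two networks can be sketched roughly in code. Everything below is a simplification for illustration: `policy_network` and `value_network` are hypothetical stand-ins (DeepMind’s real system uses deep neural networks combined with tree search), and the board is reduced to a handful of candidate moves:

```python
# Illustrative sketch only: toy stand-in functions, not DeepMind's
# actual system. The point is the division of labor: the policy
# network narrows the choices, the value network judges them.

def policy_network(board):
    """Hypothetical policy network: proposes a few promising moves
    with probabilities, instead of trying every possible move."""
    return {"D4": 0.5, "Q16": 0.3, "K10": 0.2}

def value_network(board, move):
    """Hypothetical value network: estimates how strong the position
    is after playing `move` (higher is better)."""
    scores = {"D4": 0.62, "Q16": 0.71, "K10": 0.55}
    return scores[move]

def choose_move(board):
    """Let the policy network propose candidates, then let the
    value network evaluate each proposal and pick the best."""
    candidates = policy_network(board)
    return max(candidates, key=lambda move: value_network(board, move))

print(choose_move(board=None))  # picks the highest-valued candidate
```

The key design idea is that neither network searches the whole tree: the policy network prunes the astronomical number of options down to a few, and the value network judges positions without playing them out to the end.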
In October 2015, DeepMind’s program AlphaGo defeated the reigning European Go champion, Fan Hui, winning five straight games.
DeepMind’s CEO, Demis Hassabis, noted that one advantage computers have over humans at Go is that a program like AlphaGo can “play through millions of games every single day,” while a human, given the game’s complexity, can play only so many in a lifetime.
Earlier this month, AlphaGo defeated another human opponent, South Korean professional Lee Sedol. Of the five games played, AlphaGo won all but the fourth. The victory carried a $1 million prize, which Google DeepMind announced it would donate to charity.