Google achieves AI ‘breakthrough’ by beating Go champion
A Google artificial intelligence program has beaten the European champion of the board game Go.
The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.
The tech company’s DeepMind division said its software had beaten its human rival five games to nil.
One independent expert called it a breakthrough for AI with potentially far-reaching consequences.
The achievement was announced to coincide with the publication of a paper, in the scientific journal Nature, detailing the techniques used.
Earlier on Wednesday, Facebook’s chief executive had said its own AI project had been “getting close” to beating humans at Go.
But the research he referred to indicated its software was ranked only as an “advanced amateur” and not a “professional level” player.
What is Go?
Go is thought to date back to ancient China, several thousand years ago.
Using black and white stones on a grid, players gain the upper hand by surrounding their opponent’s pieces with their own.
The rules are simpler than those of chess, but a player typically has a choice of about 200 moves, compared with about 20 in chess.
There are more possible positions in Go than atoms in the universe, according to DeepMind’s team.
It can be very difficult to determine who is winning, and many of the top human players rely on instinct.
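DeepMind’s “more positions than atoms” claim can be sanity-checked with a short back-of-envelope calculation (an illustrative sketch, not from the Nature paper): each of the 19×19 = 361 board points is black, white or empty, so 3^361 is a simple upper bound on board configurations.

```python
import math

# Each of the 19x19 = 361 points is black, white or empty, giving
# 3**361 arrangements. Most are not legal positions, but even the
# legal count (roughly 2 x 10**170) dwarfs the commonly estimated
# 10**80 atoms in the observable universe.
configurations = 3 ** 361
atoms = 10 ** 80

print(configurations > atoms)              # True
print(round(math.log10(configurations)))   # 172, i.e. about 10^172
```

Even this crude upper bound is more than ninety orders of magnitude beyond the atom count, which is why brute-force search of the kind used against chess is hopeless for Go.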
DeepMind’s chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans.
“It starts off by looking at professional games,” he said.
“It learns what patterns generally occur – what sort are good and what sort are bad. If you like, that’s the part of the program that learns the intuitive part of Go.
“It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.
“The final step is known as the Monte Carlo Tree Search, which is really the planning stage.
“Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans.”
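The planning stage Mr Hassabis describes rests on Monte Carlo simulation: estimating how good a move is by playing many randomised games to the end. The sketch below shows the simplest form of that idea, a “flat” Monte Carlo search on a toy take-away game. It is an illustration of the rollout principle only, not DeepMind’s algorithm, which combines a full tree search with deep neural networks.

```python
import random

# Toy take-away game: players alternately remove 1-3 stones from a
# pile; whoever takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def random_rollout(stones, our_turn):
    """Finish the game with uniformly random moves; return True if we win."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return our_turn        # whoever just moved took the last stone
        our_turn = not our_turn
    return False

def monte_carlo_move(stones, rollouts=500):
    """Pick the move whose estimated win rate over random rollouts is highest."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:
            rate = 1.0             # taking the last stone wins outright
        else:
            wins = sum(random_rollout(remaining, our_turn=False)
                       for _ in range(rollouts))
            rate = wins / rollouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

random.seed(1)
print(monte_carlo_move(3))   # prints 3: taking all 3 stones wins immediately
```

AlphaGo’s innovation was to guide this kind of simulation with the neural networks trained in the first two stages, so the search explores promising moves rather than purely random ones.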
Tested against rival Go-playing AIs, Google’s system won 499 out of 500 matches.
And last October, DeepMind invited Fan Hui, Europe’s top player, to its London office for a series of games, each of which the AI won.
“Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years,” Mr Hassabis said.
“The reason it was quicker than people expected was the pace of the innovation going on with the underlying algorithms and also how much more potential you can get by combining different algorithms together.”
Prof Zoubin Ghahramani, of the University of Cambridge, said: “This is certainly a major breakthrough for AI, with wider implications.
“The technical idea that underlies it is the idea of reinforcement learning – getting computers to learn to improve their behaviour to achieve goals.
“That could be used for decision-making problems – to help doctors make treatment plans, for example, in businesses or anywhere where you’d like to have computers assist humans in decision making.
“It doesn’t mean that Google is ahead of all other companies in AI – there are many artificial intelligences.
“But in terms of devoting resources to Go, Google has clearly done more.
“Facebook has achieved some pretty spectacular results in other areas of artificial intelligence, but I think Google has beaten them to this particularly important challenge.”
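The reinforcement learning idea Prof Ghahramani describes, a program improving its own behaviour by trial and error toward a goal, can be shown in its simplest tabular form. The sketch below is a deliberately tiny Q-learning example on a five-state corridor; AlphaGo itself learns with deep neural networks and self-play, not a lookup table.

```python
import random

# Tabular Q-learning on a 5-state corridor: start at state 0, earn a
# reward of 1 for reaching state 4. The agent learns, purely from
# experience, that stepping right is the way to achieve its goal.
ACTIONS = (1, -1)                    # step right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

random.seed(0)
for _ in range(500):                 # training episodes
    s = 0
    while s != 4:
        # epsilon-greedy: usually exploit current estimates, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), 4)
        reward = 1.0 if s_next == 4 else 0.0
        future = 0.0 if s_next == 4 else max(q[(s_next, act)] for act in ACTIONS)
        # nudge the estimate toward reward + discounted future value
        q[(s, a)] += ALPHA * (reward + GAMMA * future - q[(s, a)])
        s = s_next

# The learned greedy policy: which way to step from each non-terminal state
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)}
print(policy)
```

After training, the greedy policy steps right from every state, exactly the “learn to improve behaviour to achieve goals” loop Prof Ghahramani describes, scaled down to a problem small enough to read at a glance.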
DeepMind now intends to pit AlphaGo against Lee Sedol – the world’s top Go player – in Seoul in March.
In addition, it continues to develop AI systems that can play computer games without any help, following last year’s success at getting its bots to teach themselves how to play several dozen classics.
“For us, Go is the pinnacle of board game challenges,” said Mr Hassabis.
“Now, we are moving towards 3D games or simulations that are much more like the real world rather than the Atari games we tackled last year.”