Google Makes Giant Leap Towards AI

Google’s latest Artificial Intelligence (AI) system ‘AlphaGo’ has won all five games of its first match against a professional human Go player.


For decades, developers have been pitting humans against machines.

To drive interest, the battlefield of choice has typically been the board game: backgammon, noughts and crosses, checkers, and chess have all seen major man-versus-machine tussles.

The prize was an algorithm so sophisticated that it could outmanoeuvre the best a human champion had to offer.

When, in 1997, IBM’s Deep Blue beat the world’s top-ranked chess player Garry Kasparov, there remained only the ‘Holy Grail’ of board games for computer researchers to conquer… the ancient game of Go.

This week Google announced that its AI system ‘AlphaGo’ has brought home that triumph too, a decade earlier than experts expected it to happen.

Last October AlphaGo, the progeny of Google’s DeepMind ‘Apollo Program’ for AI, played professional Go player and European champion Fan Hui in a five-game match and won every game.

Among researchers, the excitement stems from the fact that Go, which originated around 2,500 years ago, is far more complicated than chess.

Played with black and white stones on a 19×19 board, Go offers over 250 possible moves per turn, whereas chess offers only around 35. The game is just as strategic but rewards intuition over brute logic, which should make it harder for computers to learn.

Demis Hassabis, DeepMind’s founder, told the media that Go has more possible board configurations than there are atoms in the universe.
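A rough back-of-the-envelope calculation shows the scale. The sketch below uses the branching factors quoted above and compares game-tree size, a measure related to (though not the same as) the board-configuration count Hassabis cites; the typical game lengths of roughly 80 plies for chess and 150 moves for Go are commonly cited averages and are assumptions here, not figures from the article.

```python
import math

# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# Branching factors (~35 for chess, ~250 for Go) come from the article;
# the typical game lengths (~80 plies for chess, ~150 moves for Go) are
# commonly cited averages and are assumptions here.
chess_tree_exp = 80 * math.log10(35)    # exponent of 10, roughly 123
go_tree_exp = 150 * math.log10(250)     # exponent of 10, roughly 360
atoms_exp = 80                          # ~10^80 atoms in the observable universe

print(f"chess game tree: ~10^{chess_tree_exp:.0f} positions")
print(f"Go game tree:    ~10^{go_tree_exp:.0f} positions")
print(f"Go exceeds the atom count by a factor of ~10^{go_tree_exp - atoms_exp:.0f}")
```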

“This complexity is what makes Go hard for computers to play,” Google wrote in a blog post, “and therefore an irresistible challenge to AI researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.”

Traditional game-playing AI relies on exhaustive search: the computer is ‘taught’ the rules, looks ahead through the possible moves and counter-moves, and picks the choice with the best worst-case outcome.
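The article does not name the algorithm, but this ‘best worst-case’ search is classically called minimax. Here is a minimal, self-contained sketch on noughts and crosses, a game small enough to search completely; it is an illustration of the traditional approach, not code from Deep Blue or AlphaGo.

```python
# Minimax on noughts and crosses: score every reachable position and
# always play the move with the best guaranteed (worst-case) outcome.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position from X's point of view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        # X maximises the score, O minimises it: each side assumes the
        # opponent will answer with its own best reply.
        if best is None or (player == "X" and score > best[0]) \
                        or (player == "O" and score < best[0]):
            best = (score, m)
    return best

score, move = minimax("." * 9, "X")
print(f"best opening move for X: square {move}, guaranteed outcome: {score}")
```

Noughts and crosses has only a few hundred thousand positions, so this brute force finishes in seconds; the point of the branching-factor arithmetic above is that the same approach is hopeless for Go.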

This strategy, says Google, would not work for Go. Instead, the DeepMind researchers turned to a ‘deep learning’ technique. AlphaGo first ‘learned’ the game from 30 million moves played by expert human players. The researchers then set the system against itself, generating new games and sharpening its play until it was competitive with a human opponent. In effect, it taught itself to win by producing its own training data and beating itself.
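To make the self-improvement idea concrete, here is a toy, self-contained sketch of the self-play phase on noughts and crosses, using a simple table of move preferences in place of a neural network. AlphaGo’s real pipeline uses deep networks and tree search, so this illustrates the principle the article describes rather than the actual method.

```python
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# Tabular stand-in for AlphaGo's policy network: a preference score
# for every (position, move) pair the system has encountered.
prefs = defaultdict(float)

def pick_move(board, explore=0.1):
    moves = legal_moves(board)
    if random.random() < explore:           # occasional random exploration
        return random.choice(moves)
    return max(moves, key=lambda m: prefs[(board, m)])

def self_play_game():
    board, player, history = "." * 9, "X", []
    while legal_moves(board) and not winner(board):
        m = pick_move(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    return winner(board), history

# The self-play loop the article describes: the system plays itself,
# then reinforces the moves made by the winning side and penalises
# the moves made by the losing side. Draws teach nothing here.
for _ in range(20000):
    result, history = self_play_game()
    if result is None:
        continue
    for board, move, player in history:
        prefs[(board, move)] += 1.0 if player == result else -1.0

print("learned opening move:", pick_move("." * 9, explore=0.0))
```

Every game it plays becomes fresh training data, which is the sense in which the system “produces its own data and beats itself.”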

Experts in the field see AlphaGo as a profound advance for AI. Its techniques could be applied to science, robotics, FinTech, even combat: anything that can be framed as a ‘game’.

“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems,” Google writes.

Google suggests the technology could be put to use for climate modelling and complex disease analysis.

Still, pundits have qualified the success of Google’s program. AlphaGo’s intelligence, which after all operates within the programmatic, adversarial confines of a game, should not be confused with actual human thought.

Right now AlphaGo’s designers are readying the machine for its next bout: a five-game match with Go’s world champion, Lee Sedol, who has held the title for ten years.