Lee Se-dol obliterated Google DeepMind’s AlphaGo with an inspired ‘wedge’ in this morning’s game, the fourth of the challenge match between the 9 dan professional human player and the upstart A.I.
Absolutely everyone showered praise on Lee Se-dol for his fantastic play, among them Gu Li, a 9 dan professional player from China who is considered one of the strongest players on the planet, and whose opinion carries particular weight given the long international rivalry between him and the champion facing DeepMind’s engine today.
After the game, Lee Se-dol was finally ready to tell us where the weaknesses lie in version 18 of the distributed AlphaGo programme. He said that the A.I. struggled when holding the black stones, and that “surprises” like his skilful wedge in the centre forced “bugs” to show in the bot’s play.
He even put his money where his mouth is by explicitly requesting the black stones for Tuesday’s match. Presumably, now that he has identified a vulnerability, he intends to show the world that he can defeat AlphaGo even when it holds its preferred colour.
During the press conference that followed the game, one reporter raised the concern that AlphaGo’s database of professional game records had equipped it with extensive knowledge of Lee Se-dol while, to his disadvantage, the man knew almost nothing about the machine. He dubbed this imbalance “information asymmetry” and, in response to his question, Demis Hassabis, one of the founders of Google DeepMind, made some intriguing statements. Firstly, Mr Hassabis stated that AlphaGo’s training database contained no game records from professional games played by Lee Se-dol: it was populated with amateur dan-level games only and, from there, AlphaGo had trained by playing against itself. He pointed out that even a thousand records of real-world games would be insignificant amongst the millions of records created by self-play. What he says makes sense, but I do wonder why they didn’t prime the system with professional game records in addition to amateur ones; it seems like an easy thing to do.
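Hassabis’s argument is essentially proportional: however many human records you start from, a self-play loop soon buries them. A toy sketch of the pipeline he describes (every name and number below is illustrative, not DeepMind’s actual code or figures):

```python
# Hypothetical sketch: seed on human (amateur dan-level) game records,
# then grow the training set by self-play. All names/numbers illustrative.
human_games = ["amateur record"] * 1_000   # stand-in for the seed dataset

def self_play_game(policy):
    """Play the policy against itself and return the game record (stubbed)."""
    return "self-play record"

policy = "policy bootstrapped from human_games"
dataset = list(human_games)
for _ in range(100_000):                   # the real system ran into the millions
    dataset.append(self_play_game(policy))
    # (in the real system the policy is also retrained as data accumulates)

# The human games are now a vanishing fraction of the training data:
share = len(human_games) / len(dataset)    # under one percent even at this scale
```

Scale the loop up to the millions of games Hassabis mentions and the thousand human records become statistical noise, which is his point: a few hundred of Lee Se-dol’s professional games would barely move the needle.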
In the first three games of the challenge match, AlphaGo exhibited a high level of skill and mostly played moves that made sense, even to us humans. It earned the respect of its opponent and of the Go community in general: Lee Se-dol called its play not unreasonable, and I was among those who praised it for its human-like style. When it was ahead, it played moves that could be considered sub-optimal, but even those were not absurd. Today, things were very different.
Even after Lee’s wedge, all was not lost, but AlphaGo sealed its fate by playing appallingly once behind on the board; many of its moves were simply ludicrous. It then squandered its hard-won goodwill by charging blindly onwards when its situation had become hopeless and the only responsible act was prompt and polite resignation. Both of these behaviours are familiar to anyone who experienced the laughable death-throes of the Monte-Carlo Tree-Search Go engines and, unfortunately, today’s performance presented an ugly glimpse of AlphaGo’s pedigree.
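The flailing-when-lost behaviour has a simple mechanical explanation in Monte-Carlo engines: they rank moves purely by estimated win probability, so once every variation loses, all the estimates collapse towards zero and the ‘best’ move is picked out of sampling noise. A toy illustration (hypothetical code, not any real engine’s):

```python
import random

# Hypothetical sketch of a Monte-Carlo-style move chooser that ranks
# candidate moves purely by sampled win rate (not any real engine's code).
def choose_move(moves, win_prob, n_rollouts=100):
    """Return the move with the highest win rate over simulated rollouts."""
    best, best_rate = None, -1.0
    for m in moves:
        wins = sum(random.random() < win_prob[m] for _ in range(n_rollouts))
        rate = wins / n_rollouts
        if rate > best_rate:
            best, best_rate = m, rate
    return best

# In a winnable position the ranking is meaningful; in a lost position
# every move's true win probability is ~0, the sampled rates are pure
# noise, and the "best" move is effectively arbitrary. A win-rate
# maximiser has no notion of losing gracefully by a small margin.
```

That is why such engines lurch into ludicrous overplays rather than tidy endgames once they are behind, and why resignation has to be bolted on as a separate threshold rather than emerging from the search itself.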
Yesterday, we saw proof that AlphaGo has conquered the mystical art of ko, and that assures me that, one day soon, these bots will also be able to fight their way back after falling behind and will learn to resign with dignity. For now, though, DeepMind’s ‘prototype’ still needs work.