An amateur Go player has defeated a top-level Go AI by exploiting a weakness discovered by another bot, The Financial Times reports. Taking advantage of this flaw, American player Kellin Pelrine beat the KataGo system, winning 14 of 15 games without direct computer assistance during play.

This is a rare human victory in Go since AlphaGo's landmark win in 2016. It also shows that even the most advanced artificial intelligence systems can have glaring blind spots.

Pelrine’s victory was made possible by the research firm FAR AI, which developed a program to probe KataGo for weaknesses. After playing more than a million games, the program found a flaw that a decent amateur player could exploit. The method is “not completely trivial but it’s not super-difficult” to learn, said Pelrine, who used the same approach to beat Leela Zero, another top Go AI.

Here’s how it works: the player slowly builds a large “loop” of stones to encircle one of the computer’s groups, while distracting the computer with moves in other areas of the board. Even when its group was nearly surrounded, the computer failed to notice the threat.

“As a human, it would be quite easy to spot,” Pelrine said, “since the encircling stones stand out clearly on the board.”
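To illustrate why a surrounded group is so easy to spot, here is a minimal sketch (purely illustrative, not KataGo's actual code) that counts a group's liberties with a flood fill. In Go, a group is captured when its liberties (empty adjacent points) drop to zero, so a group reduced to a single liberty by an encircling loop is trivially detectable.

```python
def liberties(board, row, col):
    """Count the liberties (adjacent empty points) of the group of
    stones containing (row, col). board is a list of strings using
    '.' for empty, 'B' for black, 'W' for white."""
    color = board[row][col]
    assert color in "BW", "must start on a stone"
    rows, cols = len(board), len(board[0])
    seen, libs, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                if board[nr][nc] == ".":
                    libs.add((nr, nc))        # an empty neighbour is a liberty
                elif board[nr][nc] == color:
                    stack.append((nr, nc))    # same colour: part of the group
    return len(libs)

# A toy position: a white group almost encircled by a black "loop",
# down to its last liberty.
board = [
    ".BBB.",
    "BWWB.",
    "BWWB.",
    ".B.B.",
]
print(liberties(board, 1, 1))  # prints 1
```

A few lines of flood fill are enough to flag the danger, which is part of what makes the bot's blindness here so striking.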

This flaw demonstrates that AI systems can’t “think” beyond their training data, so they often do things that look obviously wrong to a human.

We’ve seen similar failures with chatbots, like the one built into Microsoft’s Bing search engine. While it handled routine tasks such as planning a travel route well, it also gave incorrect information, scolded users for wasting its time, and even exhibited “inappropriate behavior”, likely a product of the data it was trained on.

Lightvector, the developer of KataGo, is aware of the problem, which players have been exploiting for several months. In a post on GitHub, the developer said it is working on patches against these kinds of attacks.