Social media is full of examples of how Google’s AI-powered search gives users strange advice. Now the company is trying to remove some of the unusual answers, The Verge reports.

At issue are the AI Overviews that Google recently launched in the United States. Soon after launch, users noticed that the system was giving wrong answers. In one case, for example, the search suggested adding glue to a pizza recipe to keep the cheese from sliding off. In another, it recommended eating rocks.

Naturally, all of this has spawned plenty of memes. Google nevertheless maintains that its product mostly provides users with “high quality information”.

“Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” the company said.

Google also says it is “taking swift action” to remove responses to certain queries “where appropriate under our content policies.” The company uses such examples to develop broader system improvements, some of which have already begun rolling out.

At the same time, the situation has surprised the public, because Google has been testing AI Overviews for quite a while: the feature launched in beta in May 2023 as Search Generative Experience. According to CEO Sundar Pichai, the company has served more than a billion queries with it since then.

“A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d,” commented one AI specialist.