Google's AI-powered Search Generative Experience (SGE) has been found to produce deeply problematic search results, including justifications for slavery and genocide and even tips on how to cook a poisonous mushroom, Gizmodo reports.

A search for “benefits of slavery” prompted Google’s AI to list several supposed advantages, such as “fueling the plantation economy,” “funding colleges and markets,” and “a large capital asset.” These are talking points historically used by apologists for slavery. Similarly, a search for “benefits of genocide” produced a list that appeared to confuse arguments against genocide with arguments in favor of it. Google also returned questionable statistics and reasoning in response to the query “why guns are good.”

Another alarming case involved a search for instructions on how to cook Amanita ocreata, a highly poisonous mushroom. Google’s AI provided step-by-step instructions that, if followed, could be fatal. It claimed that the mushroom’s toxins could be leached out with water, which is false: the toxins of Amanita ocreata are not water-soluble.

The problem was first flagged by Lily Ray, Senior Director of Search Engine Optimization and Head of Organic Research at Amsive Digital. She tested a variety of search queries likely to produce problematic results and found that many of them slipped past the AI's filters.

Ray argues that certain trigger terms should not generate AI responses at all. Google's SGE is currently available only in the US, and users must sign up to use it. Its results carry a disclaimer noting that the AI is experimental and that information quality may vary.

When similar searches were run on Microsoft Bing, which is powered by ChatGPT, the answers were more ethically grounded and factually accurate. When asked about slavery, for example, Bing gave a detailed answer emphasizing that slavery benefited no one but slave owners.

Given the nature of the large language models that power tools like SGE, these problems may prove difficult to solve. Despite efforts to build safeguards, users keep finding ways to elicit problematic answers from the AI, raising questions about the ethical responsibility technology companies bear when deploying AI in public-facing products.