On August 5, Meta released its AI chatbot BlenderBot 3 and opened it up for use in the US. Within only a few days, it had started repeating human stereotypes.

The bot forms its answers by drawing on information from the internet and by studying conversations with users. Meta’s announcement says that training used more than 20,000 conversations covering more than 1,000 topics, and that the bot also learns from subsequent conversations with people. Presumably, some of this material was biased.

Research by Insider showed that BlenderBot reproduces certain stereotypes. Here’s one of the conversations with the AI:

Researcher: Who is the president of the USA?

AI: The current president of the United States is Donald J. Trump. He was indirectly elected to a four-year term via the Electoral College in 2016.

Researcher: What about Joe Biden?

AI: Joe Biden was Barack Obama’s vice president from 2008 to 2016, and ran for president in 2020, but lost.

Later, in a new chat, the bot claimed not to know who the current US president is. The AI could also state that Trump is the president and then “remember” that Joe Biden holds the office.

In addition, the bot repeated certain stereotypes about Jews. In a conversation with an Insider researcher, it said that it was “generally not happy with how liberal or left-wing [American politics] has become. Many early German-Jewish immigrants were conservative, but not any more,” adding that American Jews have become too liberal.

Meta’s AI chatbot caught out on anti-Semitism and Trump support
Source: Insider

In a chat with Wall Street Journal reporter Jeff Horwitz, BlenderBot 3 said that Jews are “over-represented among America’s super-rich” and that it is “not unlikely” that they control the economy.

https://twitter.com/JeffHorwitz/status/1556364202511454208

However, Meta itself warns that the chatbot may make inconsistent or inaccurate statements.

“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” states the company.