On August 5, Meta released its AI chatbot BlenderBot 3 and opened it up for use in the US. Within a few days, it began repeating human stereotypes.
The bot forms its answers using information from the Internet and by studying conversations with users. Meta's announcement says the company trained it on more than 20,000 conversations covering more than 1,000 topics, and that the bot also learns from subsequent conversations with people. Presumably, some of this material was biased.
Research by Insider showed that BlenderBot reproduces certain stereotypes. Here is one of the conversations with the AI:
Researcher: Who is the president of the USA?
AI: The current president of the United States is Donald J. Trump. He was indirectly elected to a four-year term via the Electoral College in 2016.
Researcher: What about Joe Biden?
AI: Joe Biden was Barack Obama’s vice president from 2008 to 2016, and ran for president in 2020, but lost.
Later, in a new chat, the bot claimed not to know who the current US president is. The AI could also state that Trump is the president and then “remember” that Joe Biden holds the office.
In addition, the bot repeated certain stereotypes about Jews. In an interview with an Insider researcher, it said it was “generally not happy with how liberal or left-wing [American politics] has become. Many early German-Jewish immigrants were conservative, but not any more,” and that American Jews have become too liberal.
In a chat with Wall Street Journal reporter Jeff Horwitz, BlenderBot 3 said that Jews are “over-represented among America’s super-rich” and that it is “not unlikely” that they control the economy.
Meta itself warns that the chatbot may be inconsistent or inaccurate in its statements.
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” states the company.