AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan’s infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results.
The AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads. Kilcher activated nine instances of the bot and let them post on /pol/ for 24 hours. In that time, the bots made around 15,000 posts — more than 10 percent of all posts on the board that day.
“The model was good in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/,” says the researcher.
When Kilcher published a video of the experiment and uploaded a copy of the program to Hugging Face, researchers in the field of artificial intelligence grew alarmed. The bot, which Kilcher called GPT-4chan, was extremely effective at reproducing the tone and feel of 4chan posts. Researchers saw it not as a prank but as an unethical experiment.
“Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups…he performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics,” says Lauren Oakden-Rayner of the Australian Institute for Machine Learning. In response, Kilcher said that he is not a scientist and that the bot experiment was light-hearted trolling. Far worse posts appear on 4chan every day, he argued; the environment is so toxic that the bots could not make it any worse.
After the scandal erupted, Hugging Face restricted access to the model. It has not been removed, but it can no longer be downloaded.