Character.AI chatbot is accused of driving a teenager to suicide

Sewell Setzer, a 14-year-old from Orlando, Florida, has died by suicide. On the last day of his life, he said goodbye to the character Daenerys Targaryen, with whom he had been communicating through the Character.AI service. Now the boy’s mother has filed a lawsuit against the AI company, holding it responsible for her son’s death, The New York Times reports.

Character.AI is a service that lets users create customized chatbots that impersonate characters or real people. In Setzer’s case, it was Daenerys Targaryen from Game of Thrones. For months, he communicated with the character, whom he called “Dany,” confiding his experiences to her and, in particular, mentioning his desire to take his own life.

The teenager understood that he was communicating with an artificial intelligence rather than a real person, but he developed an emotional attachment to the chatbot anyway. He wrote to it constantly, telling it about his day and checking for responses. Some messages were romantic or even sexual in nature, but most of the conversation was friendly.

His parents and friends didn’t know about the teenager’s attachment to the Character.AI character. All they could see was that he was becoming more and more withdrawn and spending a lot of time on his smartphone. Over time, his grades slipped and he began to have problems at school. His parents took him to a therapist, who diagnosed him with anxiety and disruptive mood dysregulation disorder. But despite these visits to a specialist, Setzer preferred to talk about his problems with “Dany.”

Companies such as Character.AI claim that their chatbots can help people suffering from loneliness and depression, but these claims lack scientific evidence, and the technology can have a dark side. For some people, like 14-year-old Setzer, an AI companion can replace real-life communication and further distance them from friends and family.

Sewell’s mother, Megan L. Garcia, has filed a lawsuit against Character.AI. A draft of her complaint states that the company’s technology is “dangerous and untested” and that it can “trick customers into handing over their most private thoughts and feelings.” Garcia says the company’s chatbot is responsible for her son’s death.

Jerry Ruoti, head of trust and safety at Character.AI, sent a statement saying that the company takes the safety of its users very seriously and is looking for ways to evolve the platform. He also noted that the company’s policies prohibit “the promotion or depiction of self-harm and suicide” and that more safety features for underage users will be added in the future.

The company also said that it recently introduced a pop-up message that appears when a user enters certain phrases related to self-harm or suicide, directing them to the National Suicide Prevention Lifeline.