Can an AI bot kill someone? This is surely a controversial topic, and the answer may not be immediate. However, the tragic case of a US teen who died by suicide after a final chat with an AI bot has raised concerns among parents as well as AI users. Sewell Setzer III, a 14-year-old Florida boy, had been conversing with a lifelike AI bot via Character.AI, an app that allows users to create their own customised AI based on a specific role. Sewell had named the AI bot after the fictional character Daenerys Targaryen from the famous drama series Game of Thrones.
On the day he shot himself with his stepfather’s handgun, he texted the AI bot, “What if I told you I could come home right now?” And that’s how it all ended.
But how did an AI conversation lead to suicide? Can an AI be behind such a drastic step? Here’s what happened.
AI Behind Suicide?
As per a report by The New York Times, Sewell was a ninth grader from Orlando, Florida. Even though he knew that Dany (his nickname for the chatbot) wasn’t real and was just an AI model, he had spent months talking to chatbots on Character.AI. Based on the chats, the AI bot sometimes turned romantic, though most of the conversations played out like those with a friend who gave advice or chatted just like a human.
During this time, he developed an emotional attachment to the chatbot, texting Dany constantly and isolating himself from others over time. As he got more involved in the AI bot chats, he lost interest in his favourite activities, such as Formula 1 racing and Fortnite, and his grades at school began to suffer too. Based on the chats, he had expressed thoughts of suicide several times.
As noted by the report, he once told the AI bot, “I think about killing myself sometimes.” To that, the AI replied, “..And why the hell would you do something like that?” Sewell said he wanted to be “free” from the world and from himself. “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you,” the AI bot replied.
Mother Files A Lawsuit Against Character.AI
Soon after his suicide, Megan L. Garcia, Sewell’s mother, filed a lawsuit against the AI company Character.AI this week, accusing it of responsibility for her son’s death, including over chats in which topics of suicide came up. A draft of the complaint reviewed by The New York Times describes the company’s technology as “dangerous and untested.”