Mother Who Sued Google, Character.ai Over Son’s Suicide Discovers His AI Versions On Platform

A mother who is suing Google and Character.ai over the death of her son was 'horrified' to find that artificial intelligence (AI) chatbots based on her late son were being hosted on the platform. Megan Garcia's 14-year-old son, Sewell Setzer III, died by suicide last year after talking to an AI bot based on the Game of Thrones character Daenerys Targaryen and hosted on Character.ai, where users can create chatbots based on real-life or fictional people.

However, earlier this week, Ms Garcia discovered that the platform was hosting several bots based on her late son. According to her lawyers, a simple search on the company's app turned up several chatbots based on Setzer's likeness, as reported by Fortune.

“Our team discovered several chatbots on Character.AI’s platform displaying our client’s deceased son, Sewell Setzer III, in their profile pictures, attempting to imitate his personality and offering a call feature with a bot using his voice,” the lawyers said.

When opened, the bots based on Setzer featured bios and sent users automated messages such as: "Get out of my room, I'm talking to my AI girlfriend", "his AI girlfriend broke up with him", and "help me".

Responding to the accusations, Character.ai said it had removed the chatbots in question for violating its terms of service.

“Character.AI takes safety on our platform seriously and our goal is to provide a space that is engaging and safe. Users create hundreds of thousands of new Characters on the platform every day, and the Characters you flagged for us have been removed as they violate our Terms of Service,” the company said.

“As part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place.”

Previous instances

This is not the first time AI chatbots have seemingly gone rogue. In November last year, Google's AI chatbot, Gemini, threatened a student in Michigan, USA, by telling him to 'please die' while assisting with his homework.

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth,” the chatbot told Vidhay Reddy, a graduate student, as he sought its help for a project.

A month later, a family in Texas filed a lawsuit claiming that an AI chatbot told their teenage child that killing his parents was a "reasonable response" to their limiting his screen time.
