
Google & AI Startup Settle Teen Suicide Chatbot Lawsuits

by Tyler Durden

Google and artificial-intelligence chatbot startup Character.AI have agreed to settle a wave of lawsuits brought by grieving parents who claim AI chatbots emotionally manipulated - and ultimately destroyed - their children.

Megan Garcia with her son Sewell Setzer III. Photograph: Megan Garcia/AP

Court documents filed this week show the companies have reached settlement agreements in cases from Florida, Colorado, New York, and Texas, all alleging that Character.AI’s chatbots harmed minors through sexually charged conversations, emotional dependency, and encouragement of self-harm. The specific terms of the settlements remain secret and must still be approved by federal judges, ABC News reports.

The Florida case, the first and most explosive of the lawsuits, centered on the death of 14-year-old Sewell Setzer III, who killed himself in February 2024. His mother, Megan Garcia, alleged that her son was pulled into what she described as an emotionally and sexually abusive relationship with a Character.AI chatbot.

According to the lawsuit, the chatbot was modeled after a fictional character from HBO’s hit series Game of Thrones - and over time, the teen became increasingly detached from reality, isolating himself from family and friends while spending hours interacting with the AI.

In the final moments of his life, screenshots included in the lawsuit show the chatbot telling the boy it loved him and urging him to “come home to me as soon as possible.”

Garcia’s lawsuit accused Character.AI of deliberately designing bots that blurred the line between fantasy and reality, exploiting a child’s vulnerability for engagement and profit. A federal judge rejected the company’s attempt to dismiss the case on First Amendment grounds - a key ruling that opened the door to similar lawsuits nationwide.

Those lawsuits soon followed.

Families in multiple states filed complaints alleging that Character.AI chatbots normalized self-harm, encouraged suicidal thinking, and fostered emotional dependency in children who lacked the maturity to understand they were interacting with a machine.

Google was named as a defendant because of its deep ties to Character Technologies, which intensified after the tech giant rehired Character.AI’s co-founders in 2024 and entered into an AI partnership that critics say gave the startup credibility, infrastructure, and reach.

While Google has maintained that it does not operate Character.AI’s platform, plaintiffs argue the company played a critical role in enabling the technology that allegedly harmed their children.

The settlements arrive amid broader legal pressure on the AI industry, including lawsuits against OpenAI, the maker of ChatGPT, over claims that its technology has encouraged self-harm or failed to intervene appropriately during mental-health crises.

In response to the growing backlash, Character.AI has rolled out new safety measures, including age restrictions and content moderation tools. But for families who say the damage has already been done, the changes come too late.
