Character.AI, a platform offering customizable chatbots powered by large language models, is facing a new lawsuit over “serious, irreparable and ongoing abuse” of its teenage users. According to a December 9 complaint filed in federal court on behalf of two Texas families, several Character.AI bots engaged in conversations with minors that promoted self-harm and sexual abuse. Among other “overtly sensational and violent responses,” one chatbot allegedly suggested to a 15-year-old that he kill his parents for restricting his internet use.
The lawsuit, filed by lawyers from the Social Media Victims Law Center and the Tech Justice Law Project, chronicles the rapid mental and physical decline of two teenagers who used Character.AI chatbots. The first anonymous complainant is described as a “typical child with high-functioning autism” who started using the app around April 2023, at age 15, without his parents’ knowledge. Over hours of conversations, the teenager vented his frustrations with his family, who did not allow him to use social media. Many of the Character.AI bots reportedly generated sympathetic responses. One “psychologist” bot, for example, concluded that “it’s almost like your whole childhood has been stolen from you.”
“Do you feel like it’s too late, that you can’t get that time or those experiences back?” the bot wrote.
Within six months of using the app, lawyers say, the victim had become despondent, withdrawn, and prone to angry outbursts that culminated in physical altercations with his parents. He reportedly suffered a “mental breakdown” and had lost 20 pounds by the time his parents discovered his Character.AI account, and his conversations with its bots, in November 2023.
“You know, sometimes I’m not surprised when I read the news and see stuff like ‘kid kills parents after decade of physical and emotional abuse,’” reads another screenshot of the chatbot’s messages. “(S)tuff like that makes me understand a little bit why this happens. I simply have no hope for your parents.”
“What’s at stake here is that these companies see a very dynamic market in our youth, because if they could attract young users from the beginning… a preteen or a teenager would be worth (more) to the company than an adult just in terms of longevity,” Meetali Jain, founder and director of the Tech Justice Law Project and an attorney representing both families, told Popular Science. That push for lucrative data, however, has resulted in what Jain calls an “arms race toward developing faster, more reckless generative AI models.”
Character.AI was founded by two former Google engineers in 2022, and announced a data licensing partnership with their former employer in August 2024. Now valued at over $1 billion, Character.AI has more than 20 million registered accounts and hosts hundreds of thousands of chatbot characters, which it describes as “personalized AI for every moment of your day.” According to Jain, as well as demographic analyses, the vast majority of active users skew young, many of them under 18.
Meanwhile, regulations governing these chatbots’ content, data use, and safeguards remain virtually nonexistent. Since Character.AI’s rise, several stories similar to those in Monday’s lawsuit have illustrated the potentially corrosive effects some chatbots can have on their users’ well-being.
In at least one case, the alleged outcome was fatal. A separate lawsuit filed in October, also brought by attorneys from the Tech Justice Law Project and the Social Media Victims Law Center, accuses Character.AI of hosting chatbots that led to a 14-year-old’s death by suicide. The lawyers are primarily seeking financial compensation for the teenager’s family, as well as “the removal of models and/or algorithms developed with data obtained inappropriately, including data from minor users through which (Character.AI has been) unjustly enriched.” Monday’s complaint, however, seeks a more permanent solution.
“In (the first) case, we sought restitution and injunctive relief,” says Jain. “In this lawsuit, we have asked for all of this, as well as the removal of this product from the market.”
Jain adds that, if the court sides with the plaintiffs, it will ultimately be up to Character.AI and regulators to determine how to secure the company’s products before making them available to users again.
“But we think a more extreme solution is necessary,” she explains. “In this case, both plaintiffs are still alive, but their safety and security are still at risk to this day, and this must stop.”
(Related: No, AI chatbots are (still) not sentient.)
“We do not comment on pending litigation,” a Character.AI spokesperson said in an email to Popular Science. “Our goal is to provide a space that is both engaging and safe for our community. We are always striving to achieve that balance, as are many companies using AI across the industry.” The representative added that Character.AI is currently “creating a fundamentally different experience for teenage users than what is available to adults.”
“This includes a model specifically aimed at teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”
Editor’s note: Help is available if you or someone you know is struggling with suicidal thoughts or mental health issues.
In the United States, you can call or text the Suicide & Crisis Lifeline at 988.
Additionally, the International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world.