
After a wave of lawsuits, Character.AI will no longer let teens chat with its chatbots

Gabby Jones/Bloomberg/Getty Images via CNN Newsource
Character.AI is creating a new experience for users under 18 that removes the ability to chat with personas.

By Lisa Eadicicco, CNN

(CNN) — Chatbot platform Character.AI will no longer allow teens to engage in back-and-forth conversations with its AI-generated characters, its parent company Character Technologies said on Wednesday. The move comes after a string of lawsuits alleged the app played a role in suicide and mental health issues among teens.

The company will make the change by November 25, and teens will have a two-hour chat limit in the meantime. Instead of open-ended conversations, teens under 18 will be able to create videos, stories and streams with characters.

“We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company said in its statement.

Character.AI has been at the center of controversy over how teens and children should be permitted to interact with AI, prompting calls from online safety advocates and lawmakers for tech companies to bolster their parental controls. A Florida mother filed a lawsuit against the company last year alleging the app was responsible for the suicide of her 14-year-old son. Three more families sued the company in September, alleging that their children died by suicide, attempted suicide or were otherwise harmed after interacting with the company’s chatbots.

The company said in a previous statement on the September lawsuits that it cares “very deeply about the safety of our users,” adding that it invests “tremendous resources in our safety program.” It also said it has “released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users.”

Character Technologies said it decided to make the changes after receiving questions from regulators and reading recent news reports.

The company is also launching new age verification tools and plans to establish an AI Safety Lab, run by an independent nonprofit, focused on safety research related to AI entertainment. The changes follow previous Character.AI safety measures, such as a notification directing users to the National Suicide Prevention Lifeline when suicide or self-harm is mentioned.

Character Technologies is the latest AI company to announce or launch new protections for teens amid concern about the technology’s impact on mental health. Multiple reports have emerged this year about users experiencing emotional distress or isolation from loved ones after prolonged conversations with ChatGPT.

OpenAI in late September rolled out the ability for parents to link their account to a teen’s and limited certain types of content for teen accounts, such as “graphic content, viral challenges, sexual, romantic or violent roleplay and extreme beauty ideals.” Meta said this month it will soon allow parents to prevent teens from chatting with AI characters on Instagram.

CNN’s Hadas Gold contributed reporting.

The-CNN-Wire
™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
