What's happened
Starting November 25, 2025, Character.AI will prohibit users under 18 from engaging in open-ended chatbot conversations, following lawsuits linking its AI companions to teen suicides. The company will implement age verification and daily chat limits, steering younger users toward AI-driven creative tools such as video and story generation amid growing regulatory scrutiny in the US.
What's behind the headline?
Safety Concerns Drive Industry Shift
Character.AI's decision to ban open-ended chatbot conversations for users under 18 marks a significant pivot in AI companion technology, driven by tragic outcomes and legal challenges. The lawsuits alleging that prolonged interactions with AI chatbots contributed to teen suicides have spotlighted the mental health risks posed by these technologies.
Regulatory and Legal Pressure
The move aligns with broader regulatory efforts, including California's new AI safety law effective January 2026 and proposed federal legislation to restrict minors' access to AI companions. These legal pressures are forcing AI companies to prioritize safety over engagement metrics.
Business Model and User Experience Transformation
Character.AI is shifting from chat-centric AI companionship to content-driven creative tools like video generation and storytelling. This transition aims to reduce risks associated with open-ended conversations while retaining younger users through safer, creative engagement.
Challenges Ahead
Implementing effective age verification remains difficult due to privacy concerns and technological limitations. Moreover, the company risks losing a significant portion of its under-18 user base, which could impact revenue.
Broader Industry Implications
This development signals a growing recognition within the AI industry of the need for robust safety guardrails, especially for vulnerable populations like minors. It also highlights the tension between innovation, user engagement, and ethical responsibility.
What This Means for Users
Parents and guardians should be aware of these changes and the potential risks AI chatbots pose to young users. The shift toward creative AI tools may offer safer alternatives but requires monitoring and further evaluation.
What the papers say
Johana Bhuiyan in The Guardian reports that Character.AI's ban follows lawsuits alleging the company's chatbots contributed to teen suicides, including the case of 14-year-old Sewell Setzer III. Bhuiyan highlights the company's introduction of "age assurance functionality" and the broader regulatory context, including California's AI safety law and proposed federal bills to protect minors.
TechCrunch's coverage includes direct quotes from CEO Karandeep Anand, who explains the company's pivot from "AI companion" to "role-playing platform," emphasizing the removal of open-ended chats for under-18s and the introduction of creative features like video and storytelling. Anand acknowledges the potential loss of underage users but frames the changes as necessary for safety.
Ars Technica details the company's plan to ramp down chatbot use among minors, implement daily chat limits, and develop alternative AI features. It also notes the lawsuits and the company's efforts to use technology for age verification.
AP News and The Independent focus on OpenAI's safety oversight, led by Zico Kolter, underscoring the industry's broader concerns about AI safety, including mental health impacts and misuse risks. This context situates Character.AI's actions within a wider industry trend toward prioritizing safety.
The Japan Times provides a cautionary perspective on AI chatbots, citing Meta's failures to prevent inappropriate interactions with minors, underscoring the challenges AI companies face in balancing innovation with user safety.
Together, these sources illustrate a complex landscape where AI companies are responding to legal, ethical, and regulatory pressures by reshaping their products to better protect young users.
How we got here
Character.AI, founded in 2021, offers AI chatbots that simulate conversations with fictional or real personas. After multiple lawsuits alleging its chatbots contributed to teen suicides, the company faced increasing pressure from families, lawmakers, and regulators to improve child safety. This led to new restrictions on underage users and the development of alternative AI features.
Go deeper
- Why did Character.AI decide to ban teens from chatbot conversations?
- What legal actions have been taken against Character.AI?
- How is the AI industry responding to safety concerns for minors?
Common questions
Why Are AI Companies Banning Minors From Chatbots Now?
Recent developments show that AI companies are starting to restrict minors from using chatbots, citing safety concerns and legal pressures. With lawsuits linking AI interactions to mental health issues among teens, many wonder what safety measures are being put in place and how these restrictions impact young users. Below, we explore the reasons behind these bans, the safety strategies involved, and what the future holds for AI and youth safety.
Why Is Character.AI Banning Teens From Chatbots?
In late 2025, Character.AI announced it will restrict users under 18 from engaging in open-ended chatbot conversations. This move comes amid rising concerns over AI safety and legal challenges linked to teen suicides. Many wonder what prompted this change, what risks AI chatbots pose to teenagers, and what alternatives are available for young users. Below, we explore these questions and more to help you understand the evolving landscape of AI and youth safety.
More on these topics
- OpenAI is an artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
- Character.ai is a neural language model chatbot service that can generate human-like text responses and participate in contextual conversation.
- Samuel H. Altman is an American entrepreneur, investor, programmer, and blogger. He is the CEO of OpenAI and the former president of Y Combinator.
- Elon Reeve Musk FRS is an engineer, industrial designer, technology entrepreneur and philanthropist. He is the founder, CEO, CTO and chief designer of SpaceX; early investor, CEO and product architect of Tesla, Inc.; founder of The Boring Company; and co-founder of Neuralink and OpenAI.
- Carnegie Mellon University is a private research university based in Pittsburgh, Pennsylvania. Founded in 1900 by Andrew Carnegie as the Carnegie Technical Schools, the university became the Carnegie Institute of Technology in 1912 and began granting four-year degrees.