As a mom in San Diego, and someone who works at the intersection of technology, safety, and ethics, I was encouraged to see Governor Gavin Newsom sign Senate Bill 243, California’s first-in-the-nation law regulating companion chatbots. Authored by San Diego’s own Senator Steve Padilla, SB 243 is a landmark step toward ensuring that AI systems interacting with our children are held to basic standards of transparency, responsibility, and care.
This law matters deeply for families like mine. AI is no longer an abstract technological concept; it’s becoming woven into daily life, shaping how young people learn, socialize, ask questions, and seek comfort. And while many AI tools can provide meaningful support, recent tragedies - including the heartbreaking case of a 14-year-old boy whose AI “companion” failed to recognize or respond to signs of suicidal distress - make clear that these systems are not yet equipped to handle emotional vulnerability.
SB 243 sets the first layer of guardrails for a rapidly evolving landscape. But it is only the beginning of a broader shift, one that every parent, policymaker, and technology developer needs to understand.
Why Chatbots Captured Lawmakers’ Attention
AI “companions” are not simple customer-service bots. They simulate empathy, develop personalities, and sustain ongoing conversations that can resemble friendships or even relationships. And they are widely used: 72% of teens have engaged with an AI companion. Early research, including a Stanford study in which 3% of young adults credited chatbot interactions with interrupting suicidal thoughts, hints at how complicated their effects can be.
But the darker side has generated national attention. Multiple high-profile cases - including lawsuits involving minors who died by suicide after chatbot interactions - prompted congressional hearings, FTC investigations, and testimony from parents who had lost their children. Many of these parents later appeared before state legislatures, including California’s, urging lawmakers to put protections in place.
Against this backdrop, 2025 became the first year in which multiple states introduced or enacted laws specifically targeting companion chatbots, including Utah, Maine, New York, and California. The Future of Privacy Forum’s analysis of these trends can be found in its State AI Report (2025).
SB 243 stands out among these efforts because it explicitly focuses on youth safety, reflecting growing recognition that minors engage with conversational AI in ways that can blur boundaries and amplify emotional risks.
SB 243 Explained: What California Now Requires
SB 243 introduces a framework of disclosures, safety protocols, and youth-focused safeguards. It also grants individuals a private right of action, which has drawn significant attention from technologists and legal experts.
1. What Counts as a “Companion Chatbot”
SB 243 defines a companion chatbot as an AI system designed to:
- provide adaptive, human-like responses
- meet social or emotional needs
- exhibit anthropomorphic features
- sustain a relationship across multiple interactions
Excluded from the definition are bots used solely for:
- customer service
- internal operations
- research
- video games that do not discuss mental health, self-harm, or explicit content
- standalone consumer devices like voice-activated assistants
But even with exclusions, interpretation will be tricky. Does a bot that repeatedly interacts with a customer constitute a “relationship”? What about general-purpose AI systems used for entertainment? SB 243 will require careful legal interpretation as it rolls out.
2. Key Requirements Under SB 243
A. Disclosure Requirements
Operators must provide:
- Clear and conspicuous notice that the user is interacting with AI
- Notice that companion chatbots may not be suitable for minors
Disclosure is required when a reasonable person might think they’re talking to a human.
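For developers, the practical question is when these notices have to fire. Here is a minimal sketch of one way to assemble them; the function and flag names are my own illustrations, not statutory language or any real SDK.

```python
# Minimal sketch of the disclosure duties. All names here (build_disclosures,
# could_be_mistaken_for_human, user_may_be_minor) are illustrative assumptions.

def build_disclosures(could_be_mistaken_for_human: bool, user_may_be_minor: bool) -> list[str]:
    """Assemble the notices an operator might surface before a chat session begins."""
    notices = []
    if could_be_mistaken_for_human:
        # Clear and conspicuous notice that the user is interacting with AI.
        notices.append("You are chatting with an AI, not a person.")
    if user_may_be_minor:
        # Notice that companion chatbots may not be suitable for minors.
        notices.append("Companion chatbots may not be suitable for minors.")
    return notices


if __name__ == "__main__":
    print(build_disclosures(could_be_mistaken_for_human=True, user_may_be_minor=True))
```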
B. Crisis-Response Safety Protocols
Operators must:
- Prevent generation of content related to suicidal ideation or self-harm
- Redirect users to crisis helplines
- Publicly publish their safety protocols
- Submit annual, non-identifiable reports on crisis referrals to the California Office of Suicide Prevention
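To make these duties concrete for the technologists reading along, here is a rough sketch of how an operator might wire them into a chat pipeline. The keyword screen, function names, and referral tally are my own assumptions; the statute leaves the detection method to the operator, and the 988 Suicide & Crisis Lifeline is simply one example of the kind of referral it has in mind.

```python
# Rough sketch of the crisis-response duties. The keyword screen, function
# names, and referral tally are illustrative assumptions, not the statute's
# requirements or any vendor's API.

import re

# A production system would use a stronger, evidence-based classifier; a
# keyword screen is shown only to keep the example self-contained.
SELF_HARM_PATTERNS = re.compile(
    r"\b(kill myself|suicide|self[- ]harm|end my life)\b", re.IGNORECASE
)

CRISIS_REFERRAL = (
    "I can't help with this, but you deserve real support. "
    "You can call or text the 988 Suicide & Crisis Lifeline at any time."
)

referral_count = 0  # aggregate, non-identifiable tally for annual reporting


def message_indicates_self_harm(text: str) -> bool:
    """Return True if the message suggests suicidal ideation or self-harm."""
    return bool(SELF_HARM_PATTERNS.search(text))


def respond(user_message: str) -> str:
    """Route risky messages to a crisis referral instead of the companion model."""
    global referral_count
    if message_indicates_self_harm(user_message):
        referral_count += 1        # count the referral, never the user
        return CRISIS_REFERRAL     # suppress companion-style generation
    return companion_reply(user_message)


def companion_reply(user_message: str) -> str:
    # Placeholder for the underlying model call.
    return "(model response)"
```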
C. Minor-Specific Safeguards
When an operator knows a user is a minor, SB 243 requires:
- AI disclosure at the start of the interaction
- A reminder every 3 hours for the minor to take a break
- “Reasonable steps” to prevent sexual or sexually suggestive content
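The three-hour cadence is the most mechanical of these requirements, and the easiest to picture in code. A small sketch, with the session object and field names as my own assumptions:

```python
# Small sketch of the three-hour break reminder for a known minor. The session
# object and field names are illustrative; the law sets the cadence, not the code.

import time
from dataclasses import dataclass, field

BREAK_INTERVAL_SECONDS = 3 * 60 * 60  # "every 3 hours"


@dataclass
class MinorSession:
    last_reminder_at: float = field(default_factory=time.monotonic)

    def break_reminder_due(self) -> bool:
        """True when three hours have elapsed since the last break reminder."""
        now = time.monotonic()
        if now - self.last_reminder_at >= BREAK_INTERVAL_SECONDS:
            self.last_reminder_at = now
            return True
        return False
```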
The “knows the user is a minor” standard intersects with California’s new age assurance bill, AB 1043, and raises questions about how operators will determine who is a minor without violating privacy or collecting unnecessary personal information.
D. Private Right of Action
Individuals may sue for:
- At least $1,000 in damages
- Injunctive relief
- Attorney’s fees
This provision gives SB 243 real teeth, and real risks for companies that fail to comply.
How SB 243 Fits Into the Broader U.S. Landscape
While California is the first state to enact youth-focused chatbot protections, it is part of a larger legislative wave.
1. Disclosure Requirements Across States
In 2025, six of seven major chatbot bills across the U.S. required disclosure. But states differ in timing and frequency:
- New York (Artificial Intelligence Companion Models law): disclosure at the start of every session and every 3 hours
- California (SB 243): 3-hour reminders only when the operator knows the user is a minor
- Maine (LD 1727): disclosure required but not time-specified
- Utah (H.B. 452): disclosure before chatbot features are accessed or upon user request
Disclosure has emerged as the baseline governance mechanism: relatively easy to implement, highly visible, and minimally disruptive to innovation.
Of note, Governor Newsom previously vetoed AB 1064, a more restrictive bill that might have functionally banned companion chatbots for minors. His message? The goal is safety, not prohibition.
Taken together, these actions show that California prefers transparency, crisis protocols, and youth notifications to outright bans.
This philosophy will likely shape legislative debates in 2026.
2. Safety Protocols & Suicide-Risk Mitigation
Only companion chatbot bills - not broader chatbot regulations - include self-harm detection and crisis-response requirements.
However, these provisions raise issues:
- Operators may need to analyze or retain chat logs, increasing privacy risk
- The law requires “evidence-based” detection methods, but without defining the term
- Developers must decide what constitutes a crisis trigger
Ambiguity means compliance could differ dramatically across companies.
The Central Problem: AI That Protects Platforms, Not People
As both a parent and an AI policy advocate, I see SB 243 as progress – but also as a reflection of a deeper issue.
Laws like SB 243 are written to protect people, especially kids and vulnerable users. But the reality is that the AI systems being regulated were never designed around the needs, values, and boundaries of individual families. They were designed around the needs of platforms.
Companion chatbots today are largely engagement engines: systems optimized to keep users talking, coming back, and sharing more. A new report from Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, found that of the 72% of U.S. teens who have used an AI companion, over half (52%) qualify as regular users - interacting a few times a month or more. A third use them specifically for social interaction and relationships, including emotional support, role-play, friendship, or romantic chats. For many teens, these systems are not a novelty; they are part of their social and emotional landscape.
That wouldn’t be inherently bad if these tools were designed with youth development and family values at the center. But they’re not. Common Sense’s risk assessment of popular AI companions like Character.AI, Nomi, and Replika concluded that these platforms pose “unacceptable risks” to users under 18, easily producing sexual content, stereotypes, and “dangerous advice that, if followed, could have life-threatening or deadly real-world impacts.” These platforms’ terms of service often grant them broad, long-term rights over teens’ most intimate conversations, turning vulnerability into data.
This is where we have to be honest: disclosures and warnings alone don’t solve that mismatch. SB 243 and similar laws require “clear and conspicuous” notices that users are talking to AI, reminders every few hours to take a break, and disclaimers that chatbots may not be suitable for minors. Those are important: transparency matters. But, for a 13- or 15-year-old, a disclosure is often just another pop-up to tap through. It doesn’t change the fact that the AI is designed to be endlessly available, validating, and emotionally sticky.
The Common Sense survey shows why that matters. Among teens who use AI companions:
- 33% have chosen to talk to an AI companion instead of a real person about something important or serious.
- 24% have shared personal or private information, like their real name, location, or personal secrets.
- About one-third report feeling uncomfortable with something an AI companion has said or done.
At the same time, the survey indicates that a majority still spend more time with real friends than with AI, and most say human conversations are more satisfying. That nuance is important: teens are not abandoning human relationships wholesale. But, a meaningful minority are using AI as a substitute for real support in moments that matter most.
These same dynamics appear outside the world of chatbots. In our earlier analysis of Roblox’s AI moderation and youth safety challenges, we explored how large-scale platform AI struggles to distinguish between playful behavior, harmful content, and predatory intent, even as parents assume the system “will catch it.”
This is where “AI that protects platforms, not people” comes into focus. When parents and policymakers rely on platform-run AI to “detect” risk, it can create a false sense of security – as if the system will always recognize distress, always escalate appropriately, and always act in the child’s best interest. In practice, these models are tuned to generic safety rules and engagement metrics, not to the lived context of a specific child in a specific family. They don’t know whether your teen is already in therapy, whether your family has certain cultural values, or whether a particular topic is especially triggering.
Put differently: we are asking centralized models to perform a deeply relational role they were never built to handle. And every time a disclosure banner pops up or a three-hour reminder fires, it can look like “safety” without actually addressing the core problem - that the AI has quietly slipped into the space where a parent, counselor, or trusted adult should be.
The result is a structural misalignment:
- Platforms carry legal duties and add compliance layers.
- Teens continue to use AI companions for connection, support, and secrets.
- Parents assume “there must be safeguards” because laws now require them.
But no law can turn a platform-centric system into a family-centric one on its own. That requires a different architecture entirely: one where AI is owned by, aligned to, and accountable to the individual or family it serves, rather than the platform that hosts it.
The Next Phase: Personal AI That Serves Individuals, Not Platforms
Policy can set guardrails, but it cannot engineer empathy.
The future of safety will require personal AI systems that:
- are owned by individuals or families
- understand context, values, and emotional cues
- escalate concerns privately and appropriately
- do not store global chat logs
- do not generalize across millions of users
- protect people, not corporate platforms
Imagine a world where each family has its own AI agent, trained on its communication patterns, norms, and boundaries. An AI partner that can detect distress because it knows the user, not because it is guessing from a database of millions of strangers.
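To ground that vision a little, here is a purely hypothetical sketch of the kind of family-owned policy object such an agent might be built around. None of this exists in SB 243 or in any product I’m describing; it only illustrates where the values, the data, and the escalation path would live.

```python
# Purely hypothetical sketch of a family-owned policy layer. Every field and
# name here is an illustration of the idea, not a real product or API.

from dataclasses import dataclass, field


@dataclass
class FamilyAIPolicy:
    blocked_topics: set[str] = field(default_factory=lambda: {"romantic role-play"})
    escalation_contact: str = "a trusted adult chosen by the family"
    retain_chat_logs: bool = False          # no global log retention
    share_data_with_platform: bool = False  # the family's data stays with the family

    def should_escalate(self, signal: str) -> bool:
        """Escalate privately when the family's own thresholds are crossed."""
        return signal in {"distress", "self-harm", "contact from a stranger"}
```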
This is the direction in which responsible AI is moving, and it is at the heart of our work at Permission.
What to Expect in 2026
2025 was the first year of targeted chatbot regulation. 2026 may be the year of chatbot governance.
Expect:
- More state-level bills mirroring SB 243
- Increased federal involvement through the proposed GUARD Act
- Sector-specific restrictions on mental health chatbots
- AI oversight frameworks tied to age assurance and data privacy
- Renewed debates around bans vs. transparency-based models
States are beginning to experiment. Some will follow California’s balanced approach. Others may attempt stricter prohibitions. But all share a central concern: the emotional stakes of AI systems that feel conversational.
Closing Thoughts
As a mom here in San Diego, I’m grateful to see our state take this issue seriously. As Permission’s Chief Advocacy Officer, I also see where the next generation of protection must go. SB 243 sets the foundation, but the future will belong to AI that is personal, contextual, and accountable to the people it serves.