California's Companion Chatbot Law Enters Force as Platforms Scramble for Compliance
California's first-in-the-nation regulation of AI companion chatbots has been in effect for over two months, forcing NSFW AI platforms to fundamentally reshape their user interactions or face legal consequences. Senate Bill 243, which took effect January 1, 2026, requires operators of "companion chatbots"—AI systems providing adaptive, human-like social interactions—to implement mandatory disclosures, mental health safety protocols, and strict content restrictions for minor users.
The law represents a significant regulatory milestone for the rapidly growing AI girlfriend and AI companion market, which industry analysts project will reach $7 billion globally by 2033. Platforms like Candy AI, Girlfriend GPT, and AIGirlfriends.ai now navigate an increasingly fragmented compliance landscape as more states target AI companions.
Key Requirements Under SB 243
Under the new regulations, AI companion chatbot operators must notify users clearly, at the start of every interaction and at least every three hours during ongoing conversations, that they are speaking with an artificially generated entity rather than a human. For users identified as minors, platforms must add frequent break reminders and explicitly disclose that companion chatbots may not be suitable for some young users.
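To illustrate the cadence described above, the sketch below tracks when a per-session AI-disclosure notice is due: once at the start of the interaction and again whenever three hours have elapsed. The class name, interval constant, and injectable clock are hypothetical implementation choices, not anything prescribed by the statute.

```python
import time

# Illustrative cadence: disclose at session start and at least every
# three hours thereafter. All names here are hypothetical.
DISCLOSURE_INTERVAL_SECONDS = 3 * 60 * 60

class DisclosureScheduler:
    """Tracks when an AI-disclosure notice is due for one chat session."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_disclosed = None  # None means no notice shown yet

    def disclosure_due(self) -> bool:
        # Always due at the very start of the interaction.
        if self._last_disclosed is None:
            return True
        # Due again once the interval has fully elapsed.
        return self._clock() - self._last_disclosed >= DISCLOSURE_INTERVAL_SECONDS

    def mark_disclosed(self) -> None:
        # Record the moment the notice was actually shown.
        self._last_disclosed = self._clock()
```

Injecting the clock keeps the logic testable without waiting three real hours; a production system would also have to persist the timestamp across reconnects so a user could not reset the timer by reopening the session.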
The legislation also mandates that operators implement protocols to prevent chatbots from producing content related to suicidal ideation, self-harm, or suicide, and to refer at-risk users to crisis hotlines and crisis text lines. These protocols must be published on operator websites. Additionally, platforms must take reasonable steps to prevent their chatbots from providing unpredictable rewards or otherwise encouraging increased engagement through addictive design patterns.
For minor users specifically, operators must institute reasonable measures to prevent chatbots from producing sexually explicit visual material or directly encouraging sexually explicit conduct. The law includes a private right of action, allowing affected users, including families, to sue noncompliant operators.
Platforms Adapt or Risk Enforcement
The compliance deadline has created significant operational challenges for NSFW AI companion platforms. Many established platforms have historically relied on persistent engagement mechanics and minimal disclosures about the artificial nature of interactions. Now, companies must retrofit their systems with mandatory break prompts, disclosure banners, and enhanced content filtering—particularly for users who may be minors.
The law excludes chatbots used solely for customer service, business operations, video game interactions, and stand-alone consumer electronic devices, narrowing its scope to relationship-oriented AI companions. Annual reporting to California's Office of Suicide Prevention begins July 1, 2027, and platforms must submit to regular third-party audits.
Broader Regulatory Landscape
California's law joins New York's AI Companion Models statute, which took effect in November 2025 and requires similar disclosures to prevent users from being misled into believing they're interacting with humans. Texas has also moved to restrict AI-generated CSAM and nonconsensual deepfake sexual imagery.
Additional proposed legislation in Colorado, Nebraska, and New York aims to further restrict features that could create emotional dependency in minors or enable explicit content. The California LEAD for Kids Act, currently in progress, would specifically ban companion chatbots from enabling "sexual relationships" with minors.
For NSFW AI platform operators, the message is clear: the era of unregulated AI companions is ending. Compliance is no longer optional, and platforms that fail to adapt risk both legal liability and exclusion from the nation's largest state market.