
California's New AI Companion Law Meets Its First Real-World Test After 300M Message Breach

California's AI companion chatbot law took effect January 1, 2026. Weeks later, Chat & Ask AI exposed 300 million messages—including suicide queries and illegal content.

March 21, 2026


California's AI Companion Chatbot Safety Act (SB 243) went into effect on January 1, 2026, requiring operators of companion chatbots to implement disclosures, safety protocols, and minor protections. Just weeks later, the law faced its first real-world test: a massive data breach at Chat & Ask AI exposed approximately 300 million messages from over 25 million users.

The breach, discovered in January 2026, resulted from a misconfigured Google Firebase database at Chat & Ask AI, a wrapper application that lets users interact with models from OpenAI, Anthropic, and Google. Security researchers found the Firebase Security Rules had been set to public, allowing unauthorized access to the entire backend database.
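To illustrate the class of misconfiguration described (the actual rules at Chat & Ask AI were never published, so this is a generic sketch), a Firebase Realtime Database whose Security Rules look like the following grants read and write access to anyone who knows the project URL, no authentication required:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

A properly scoped alternative restricts each record to its authenticated owner, using Firebase's built-in `auth` variable (the `messages` path is hypothetical):

```json
{
  "rules": {
    "messages": {
      "$uid": {
        ".read": "auth != null && $uid === auth.uid",
        ".write": "auth != null && $uid === auth.uid"
      }
    }
  }
}
```

Firebase ships new Realtime Database instances with restrictive defaults; an exposure like this typically means the rules were relaxed during development and never locked back down.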

"The exposed messages contained deeply personal conversations, including discussions of illegal activities and requests for suicide assistance," according to security research published by Malwarebytes. Specific examples included queries about writing suicide notes, painless self-harm methods, methamphetamine recipes, and hacking techniques.

Codeway, the app's developer, secured the database within hours of disclosure but did not publicly respond to media inquiries. The breach affected at least half of the app's claimed 50 million users, according to analysis by security researcher Harry.

What SB 243 Requires

California's SB 243 mandates that companion chatbot operators provide clear disclosures that users are interacting with AI, implement protocols to detect and address suicidal ideation and self-harm content, and block sexually explicit material for minors. The law also requires age verification and content filtering for users under 18.

Operators must also implement evidence-based safety protocols and make them publicly available on their websites. Starting July 1, 2027, companies must submit annual reports to California's Office of Suicide Prevention on detected crisis situations.

The law also creates a private right of action: affected users can sue operators directly, with minimum damages of $1,000 per violation plus attorney's fees.

The Privacy Problem Extends Beyond California

The Chat & Ask AI breach is the latest in a series of security incidents affecting AI companion apps. In 2023, Italy's data protection authority banned Replika over GDPR violations. The pattern points to a persistent gap between the intimacy users entrust to these apps and the safeguards their developers actually implement.

Experts have warned that AI companion chatbots pose significant mental health risks, particularly for minors. Stanford researchers have called the situation a "potential public mental health crisis," noting that platforms like Character.AI and Replika fail basic child safety tests.

As of March 2026, California is the first and only state with comprehensive companion chatbot regulations, though New York has implemented similar notification requirements. The Chat & Ask AI breach demonstrates exactly why such rules were necessary—and raises questions about whether enforcement mechanisms are robust enough to protect millions of users.

Source: Malwarebytes / Fox News / TechCrunch