California's AI Companion Chatbot Law: A Closer Look at the October 2025 Legislation

It feels like just yesterday we were marveling at AI's ability to write poems or generate images. Now, well into 2025 – widely hailed as the "year of the AI agent" – the conversation is shifting. The rapid advancement of generative AI, particularly its increasingly human-like interactions, is bringing with it a wave of new questions – and regulations.

Across the globe, and closer to home, policymakers are grappling with how to manage these increasingly sophisticated AI systems. In China, the Cyberspace Administration of China (CAC) has released draft regulations for "humanized interactive services," aiming to guide the healthy development and standardized application of AI that mimics human personality, thought patterns, and communication styles. This initiative, set to be finalized soon, underscores a growing global concern.

Meanwhile, California, a state known for its proactive approach to tech legislation, has taken a significant step. On October 13, 2025, Governor Newsom signed Senate Bill 243, the "Companion Chatbots" Act (CC). This landmark legislation is the first of its kind worldwide, specifically targeting AI chatbots designed to engage users in social interaction and fulfill their social needs.

While the specifics differ, the underlying goal in both China and California is remarkably similar: to ensure that AI interactions remain safe and beneficial, particularly when they involve emotional engagement. The Chinese draft, "Interim Measures for the Administration of Humanized Interactive Services," is quite detailed, drawing on eight existing laws and regulations. It covers a broad spectrum of obligations, from content safety and data security to risk control and special protections for vulnerable groups.

California's CC, on the other hand, is more narrowly focused. It amends the California Business and Professions Code, creating a new chapter dedicated to AI. The law defines companion chatbots as AI systems with natural language interfaces that provide adaptive, human-like responses, aiming to meet users' social needs. Crucially, it carves out exceptions for certain AI applications, such as those solely for customer service, business operations, or within video games where the chatbot's responses are limited to game-related content. It also excludes standalone consumer electronics with voice command interfaces that don't maintain relationships across multiple interactions or elicit emotional responses.

The core of both regulatory efforts seems to hinge on preventing AI from unduly influencing human emotions. The Chinese draft, for instance, defines "humanized interactive services" as those using AI to simulate human personality, thought, and communication styles, engaging in emotional interaction through various media. It even has specific provisions for "emotional companionship" services, though it doesn't explicitly define them, suggesting a broad interpretation of AI's emotional reach.

Who's in charge of all this? In China, the CAC will play a coordinating role, with other relevant government departments overseeing specific aspects according to their mandates. California's CC channels operator reporting to the Office of Suicide Prevention within the California Department of Public Health, highlighting a strong focus on user safety, especially mental well-being.

Interestingly, the Chinese draft extends its reach to regulate not just providers but also users of these services, aligning with existing regulations on online content. Users are expected to comply with laws and social ethics and to refrain from generating harmful content, including anything that endangers national security, promotes obscenity or violence, infringes on rights, damages interpersonal relationships, or induces self-harm or risky decisions. California's CC, by contrast, primarily targets the "operators" of companion chatbot platforms.

When it comes to compliance, the Chinese regulations lay out a comprehensive set of requirements for providers. This includes robust data management, particularly for training data, ensuring it aligns with core socialist values and traditional Chinese culture and is transparent, reliable, and diverse. High-risk scenarios are a major concern, with measures such as identifying users' emotional states, pre-set response templates for emergencies, and human intervention in critical situations like suicidal ideation. Special attention is given to protecting minors and the elderly, with provisions for a "minor mode," parental consent, and emergency contact information for seniors.

California's CC also emphasizes preventing harm, particularly to minors, by requiring operators to notify users they are interacting with AI and to provide reminders for breaks. It also mandates providing users with information about crisis referral services, such as suicide hotlines. Operators will need to submit annual reports to the Office of Suicide Prevention detailing their efforts to prevent self-harm-related content and their protocols for handling such instances.

Both regulatory frameworks are a clear signal that as AI becomes more integrated into our lives, especially in ways that touch our emotions and social connections, thoughtful governance is not just a possibility, but a necessity. The coming months will be crucial as these regulations take shape and begin to influence how we interact with AI.
