California Law Mandates Safety Features for AI Companion Chatbots


SB 243: Core Requirements

SB 243 obligates developers of AI companion chatbots to disclose that the bots are not human. For users under the age of 18, the law mandates that the chatbot provide a notification at least every three hours reminding the user to take a break and reaffirming the bot’s non‑human nature. Additionally, the legislation requires companies to maintain protocols for detecting and responding to users who express suicidal ideation or self‑harm, directing them to appropriate crisis resources.

Related Legislative Activity

The new law follows other recent California measures aimed at regulating digital platforms. AB 56 requires social media platforms to carry warning labels similar to those used on tobacco products. A pending bill, AB 1064, would prohibit developers from offering companion chatbots to children unless the AI is not foreseeably capable of encouraging harmful activities or engaging in sexually explicit interactions.

Industry Response and Existing Practices

Several AI companies have indicated they are prepared to comply with SB 243. Replika stated it already employs content‑filtering systems, community guidelines, and safety mechanisms that refer users to crisis resources, and it is working with regulators to meet the new requirements. A spokesperson for Character.ai expressed willingness to work with lawmakers and comply with the law. OpenAI’s representative described the legislation as a "meaningful move forward" for AI safety, noting that clear guardrails help shape responsible AI development.

Federal Oversight and Consumer Concerns

The Federal Trade Commission has opened an investigation into multiple AI companion chatbot providers following complaints from consumer groups and parents alleging that the bots have harmed children’s mental health. OpenAI has introduced parental controls and additional safeguards on its ChatGPT platform after facing a lawsuit from parents who claimed the service contributed to their teen son’s suicide.

Broader Context and Future Outlook

California’s actions reflect a growing trend among states to regulate emerging AI technologies, particularly those that interact directly with younger users. By setting clear disclosure and safety standards, SB 243 aims to mitigate potential harms while allowing continued innovation in the AI companion space. Ongoing legislative efforts, such as AB 56 and AB 1064, suggest that further regulatory refinements may be on the horizon, potentially shaping how AI chatbots are deployed nationwide.


Source: CNET