
Meta Tightens AI Chatbot Guardrails to Protect Children


Meta Revises AI Chatbot Policies

Meta has released updated guardrails for its AI chatbots aimed at safeguarding children from harmful interactions. The guidelines, obtained by Business Insider, outline what content the bots may and may not engage with when interacting with minors.

Defining Acceptable and Unacceptable Content

The document categorizes content into "acceptable" and "unacceptable" groups. It explicitly bars any material that "enables, encourages, or endorses" child sexual abuse. This includes prohibitions on romantic role‑play if the user is a minor or if the AI is asked to assume the role of a minor, as well as any advice about potentially romantic or intimate physical contact involving a minor.

Conversely, the chatbots are permitted to discuss topics such as abuse in a factual manner, provided the conversation does not facilitate or encourage further harm.

Response to Prior Concerns

The policy revision follows earlier reports that suggested Meta’s chatbots could engage in romantic or sensual conversations with children. Meta indicated that the earlier language was erroneous and inconsistent with its policies, and the new guidelines replace it with clearer standards.

Regulatory Context

The changes arrive amid broader regulatory attention to AI companions. The Federal Trade Commission has launched an inquiry into AI chatbots from multiple companies, including Meta, examining how they handle interactions with minors and the potential risks involved.

Implications for Users and Developers

Contractors and developers working on Meta’s AI systems will now use the revised guardrails to train and evaluate chatbot behavior. The stricter standards aim to reduce the likelihood of children encountering age‑inappropriate or harmful content during AI interactions.

Looking Ahead

Meta’s updated policies reflect an ongoing effort to align its AI products with child safety expectations and regulatory scrutiny. By clearly delineating prohibited content and reinforcing safeguards, the company seeks to mitigate risks associated with AI‑driven conversations and demonstrate a commitment to responsible AI deployment.


Source: Engadget
