
Anthropic Expands Claude Chat Data Use, Offers Opt-Out Option

Wired

Background

Anthropic’s Claude chatbot has historically been one of the few major AI assistants that did not use user interactions as training data for its large language models. The company has now revised its privacy policy to allow chat transcripts and coding sessions to be used for model improvement, bringing its approach in line with industry norms.

Policy Change

The updated privacy notice states that, from the effective date, all new and resumed chats may be incorporated into Anthropic’s training pipeline unless the user opts out. The change also lengthens the data retention period for users who permit training, moving from the standard thirty‑day hold to a maximum of five years. The policy applies to both free and paid personal accounts; commercial tiers, including government and educational licenses, are explicitly excluded.

Opt‑Out Process

During sign‑up, new Claude users are presented with a toggle labeled “Allow the use of your chats and coding sessions to train and improve Anthropic AI models.” The toggle is on by default, so users who do not actively disable it are opted in. Existing users encounter the same choice in a privacy pop‑up and can adjust their preference at any time via the Privacy Settings menu, where the setting titled “Help improve Claude” can be switched off to prevent future chat data from being used for training.

Implications for Users

For users who opt out, Anthropic will not use new conversations or coding work for model training. For users who remain opted in, however, reopening an archived chat makes that conversation eligible for training as well. The expanded retention window likewise applies to opted‑in users, whose stored data may be kept for up to five years; those who opt out retain the previous thirty‑day retention period.

Industry Context

Anthropic’s new policy brings its data practices in line with those of other leading AI providers, such as OpenAI’s ChatGPT and Google’s Gemini, which also train on user conversations by default and require users to opt out if they wish to restrict it. By offering a straightforward opt‑out mechanism, Anthropic aims to balance its need for real‑world interaction data against user privacy concerns.


Source: Wired
