Anthropic Expands Claude Data Use, Offers Opt-Out for Users

Wired

Policy Change and Rationale

Anthropic is preparing to incorporate user conversations with its Claude chatbot, as well as coding tasks performed within the tool, into the training data for future large language models. The company explained that such models require extensive datasets, and that real‑world interactions provide valuable insight into which responses users find most useful and accurate. This marks a departure from Anthropic’s previous stance, under which user chats were not automatically used for model training.

Implementation Timeline

The updated privacy policy is set to take effect on October 8. The change was originally scheduled for September 28 but was postponed to give users additional time to review the new terms. Gabby Curtis, a spokesperson for Anthropic, indicated the delay was intended to ensure a smooth technical transition.

Opt‑Out Mechanism

New Claude users will encounter a decision prompt during sign‑up, while existing users may see a pop‑up outlining the changes. The default setting, labeled “Help improve Claude,” is switched on, meaning users are opted in unless they actively turn it off. To opt out, users should open the Privacy Settings and set the toggle to off. For users who do not opt out, the policy applies to all new chats and to any older conversations they reopen; archived threads that are never reactivated are not swept into training retroactively.

Data Retention Extension

Alongside the training data change, Anthropic is extending its data retention period. Previously, most user data was retained for 30 days; under the new policy, data from users who allow model training will be stored for up to five years, while users who opt out keep the existing 30‑day retention period.

Scope of Affected Users

The policy covers both free and paid consumer users of Claude. Commercial users, including those licensed through government or educational plans, are exempt; their conversations will not be used for model training. Because Claude is popular as a coding assistant, coding projects submitted through the platform will also be included in the training dataset for users who have not opted out.

Industry Context

Before this update, Claude was one of the few major chatbots that did not automatically use user conversations for training. In contrast, OpenAI’s ChatGPT and Google’s Gemini default to allowing model training on personal accounts unless users choose to opt out. The shift brings Anthropic into line with prevailing industry practice on using conversation data for AI model improvement.

What Users Can Do

Users who wish to keep their Claude interactions private should locate the “Help improve Claude” toggle in the Privacy Settings and switch it off. Those interested in broader privacy considerations can consult guides that outline opt‑out procedures for various AI services.

Source: Wired
