Anthropic adds identity verification to Claude, sparking user backlash
Anthropic announced it is extending identity verification to Claude users in a limited rollout aimed at curbing abuse. When the system detects activity that may violate its usage policy, users will see a prompt asking for a valid government‑issued photo ID and a selfie taken with a phone or computer camera. The facial image is matched against the ID before access to the requested capability is restored.
The verification process is outsourced to Persona, a provider that also offers age‑verification services for OpenAI and Roblox. Persona’s backers include Founders Fund, co‑founded by Peter Thiel, whose other venture Palantir supplies surveillance technology to federal agencies such as the FBI, CIA and Immigration and Customs Enforcement.
Anthropic frames the measure as a safeguard against “potentially fraudulent or abusive behavior” that breaches its policy. In its statement, the company emphasized that all data transmitted through Persona is encrypted in transit and at rest, that Persona is contractually limited in how it can use the images, and that Anthropic will not store the IDs or use them to train its models.
Reaction from the Claude community has been largely negative. Long‑time paying subscribers, who already have credit‑card details on file, question why a government‑issued ID is now required on top of existing payment information. Critics also point to Persona’s investor ties to surveillance‑linked ventures, fearing the move expands data collection well beyond what the service previously required.
Privacy advocates warn that facial‑recognition technology can be repurposed, pointing to Palantir’s contracts with U.S. agencies as a cautionary example. Some users have threatened to abandon Claude in favor of competing chatbots that do not demand biometric checks.
Anthropic has not disclosed the specific use cases that trigger the verification prompt. A spokesperson told Engadget that the requirement applies only to a “small number of cases where we see activity that indicates potentially fraudulent or abusive behavior.”
The rollout begins this week and will be phased in across accounts. Users who encounter the prompt can expect a short verification flow before regaining access to the requested feature.
This development adds to a broader debate over data‑collection practices in the AI industry. OpenAI, for example, recently required phone verification for ChatGPT Plus users, but Anthropic’s biometric step is more invasive, prompting fresh scrutiny of how generative AI services balance security with privacy.
Industry analysts suggest the move could set a precedent for stricter identity checks across AI platforms, especially as regulators examine misuse of large language models. While biometric verification may deter malicious actors, it also risks excluding legitimate users who lack convenient access to government‑issued IDs.
Anthropic’s assurances about encryption and limited data use aim to quell concerns, yet the lack of detailed retention policies leaves many questions unanswered. The episode may shape upcoming policy discussions at the Federal Trade Commission, which is beginning to assess AI firms’ data‑handling practices.