Trump Orders Federal Halt to Anthropic’s Claude AI Over Surveillance Concerns
Background
Anthropic’s Claude AI model has become the most widely used artificial‑intelligence system within the U.S. military, deployed in classified Pentagon projects and other federal applications. The company was founded with an explicit focus on AI safety and has embedded contractual safeguards that bar the use of Claude for mass domestic surveillance of Americans or for fully autonomous weapons systems operating without human oversight.
The Dispute
President Donald Trump used his Truth Social platform to order an immediate halt to federal use of Claude, calling Anthropic a “radical left, woke company,” and announced a six‑month phase‑out for agencies such as the Department of Defense. Earlier in the week, Defense Secretary Pete Hegseth had told Anthropic CEO Dario Amodei that he would invoke rarely used powers to force the company to let the Pentagon use Claude for any lawful purpose, or else label Anthropic a supply‑chain risk. Hegseth gave Anthropic a Friday deadline to comply.
Amodei responded that Anthropic “cannot in good conscience accede” to the Pentagon’s request to remove the contractual provisions that prohibit use of Claude in autonomous weapons or domestic surveillance. He warned that existing laws have not kept pace with AI’s ability to aggregate scattered, innocuous data into comprehensive personal profiles at massive scale, raising significant privacy concerns.
Broader Context
Legal experts noted that contract language around “lawful purposes” is often ambiguous, and Anthropic’s stance reflects a broader industry reluctance to enable mass surveillance or lethal autonomous weapons. Employees at rival firms such as OpenAI and Google have circulated petitions urging their companies to stand with Anthropic on these red‑line issues. OpenAI’s CEO Sam Altman reportedly affirmed similar guardrails, emphasizing technical safeguards like cloud‑only deployment.
The clash underscores a growing mismatch between rapid AI adoption in government and military settings and the slower development of regulatory oversight. Critics argue that the dispute could set precedents for how tech companies negotiate with government agencies when ethical boundaries are at stake.