
Anthropic Resumes Negotiations with U.S. Defense Department Over AI Contract

Background of the Dispute

Anthropic originally signed a multi‑year contract with the Defense Department in 2025, valued at $200 million. During subsequent negotiations, the Pentagon sought to include language that would allow Anthropic’s AI models to be used for analyzing bulk‑acquired data. Anthropic’s leadership argued that such a clause could enable mass surveillance and insisted that the phrase be removed.

Escalation and Government Response

When Anthropic refused to amend the contract, the Defense Department threatened to cancel the existing agreement and label the company a "supply chain risk," a designation typically reserved for foreign entities. The threat prompted a presidential order directing all government agencies to stop using Anthropic’s technology.

Current Negotiations

According to reports, Anthropic CEO Dario Amodei has resumed discussions with Under Secretary of Defense for Research and Engineering Emil Michael in an attempt to resolve the dispute over the contract’s language. The department has reportedly offered to delete the contentious phrase about "analysis of bulk acquired data," which Anthropic identified as the single line at the heart of its objection.

Implications for Government Use

The contract includes a six‑month phase‑out period that would allow the government to continue using Anthropic’s AI tools for certain operations, such as staging an air strike, even if the agreement were terminated. The ongoing talks aim to prevent a full termination of the partnership and to avoid the supply‑chain risk designation.

Industry Context

The dispute has highlighted differences in how AI companies approach government contracts, especially regarding surveillance and ethical use. Competing firms have taken varied stances, with some emphasizing explicit prohibitions on mass surveillance in their agreements.


Source: Engadget