Anthropic CEO Dario Amodei Returns to Pentagon Negotiations to Preserve Defense Deal

Background

Anthropic, the startup behind the Claude family of artificial‑intelligence models, has been a key supplier of AI technology to the U.S. military. Its systems have been granted security clearance for handling classified information and have reportedly been used in operational contexts. However, the relationship soured when the Department of Defense sought unrestricted access to the company’s technology.

Standoff Over AI Access

The Pentagon’s insistence on “any lawful use” of Claude conflicted with Anthropic’s policy constraints. The company has drawn two firm red lines: it will not permit mass surveillance of American citizens, and it refuses to support lethal autonomous weapons that could operate without human control. The Department of Defense’s demand for carte blanche access and the inclusion of a clause about “analysis of bulk acquired data” in the contract were seen by Anthropic as crossing those red lines.

After weeks of public feuding, negotiations fell apart on a Friday, and Secretary of Defense Pete Hegseth threatened to label Anthropic a supply‑chain risk—a designation typically reserved for firms with foreign ties that pose national‑security concerns. Such a label could force other contractors to drop Anthropic’s technology from their own defense projects.

New Negotiations

According to the Financial Times, Amodei is now in talks with Under Secretary of Defense for Research and Engineering Emil Michael about a revised contract that would allow continued military use of Claude while preserving the company’s red lines. Michael previously criticized Amodei on social media, calling him a “liar” and accusing him of endangering national safety. Despite the harsh rhetoric, the latest round of talks suggests the department may be willing to remove the contentious phrase regarding bulk data analysis.

Amodei circulated an internal memo to Anthropic staff, describing the Pentagon’s earlier position as “straight‑up lies” and contrasting Anthropic’s stance with OpenAI’s approach, which he labeled “safety theater.” He also noted that Anthropic’s relationship with the federal government had deteriorated because the company had not made political donations to the current administration.

Potential Implications

If a new agreement is reached, it could preserve Anthropic’s foothold in the defense sector and keep Claude available for classified missions. Failure to secure a deal could result in the supply‑chain‑risk label taking effect, prompting other tech firms to distance themselves from Anthropic to protect their own defense contracts. The outcome may also influence how other AI companies negotiate access and usage terms with government agencies, especially as rivals like OpenAI and xAI have reportedly agreed to more permissive terms.

The dispute highlights the growing tension between national‑security objectives and corporate policies on ethical AI use. It underscores the challenge of balancing military needs for advanced technology with safeguards against misuse, a debate that is likely to shape future AI procurement and regulation in the United States.

Source: The Verge