Trump Moves to Ban Anthropic from the US Government

Escalating Tensions Between the Pentagon and Anthropic

The Department of Defense and the artificial‑intelligence firm Anthropic have entered a public dispute, with both sides trading barbs on social media. The disagreement reached a new level after Defense Secretary Pete Hegseth met with Anthropic’s chief executive, Dario Amodei. Following the meeting, Hegseth set a firm deadline for Anthropic to amend the terms of its contract so that the company would allow “all lawful use” of its AI models.

According to a source familiar with the meeting, Hegseth expressed appreciation for Anthropic's technology during the discussion and signaled the Pentagon's desire to maintain the partnership. The source said the department wants to keep working with Anthropic but did not disclose further specifics of the conversation.

Expert Perspective on the Underlying Issues

Military-technology analyst Michael Horowitz characterized the dispute as a clash of attitudes rather than a substantive policy conflict. "This is such an unnecessary dispute in my opinion," he said, explaining that the disagreement centers on theoretical use cases that are not currently on the table. Horowitz added that Anthropic has, to date, supported every use of its technology the Department of Defense has actually proposed.

He further observed that both parties appear to agree on the current limitations of the technology, noting that the Pentagon and Anthropic “agree at present about the use cases where the technology is not ready for prime time.”

Anthropic’s Foundational Commitment to Safety

Anthropic was established with a core focus on building artificial intelligence responsibly and safely. The company’s leadership has publicly discussed the potential dangers of powerful AI, particularly the prospect of fully autonomous weapons. While acknowledging that such weapons could have legitimate defensive applications, the company’s CEO warned that they also constitute “a dangerous weapon to wield.”

This stance reflects Anthropic’s broader philosophy that AI development should prioritize safety, even as the technology becomes increasingly integrated into national‑security contexts.

Implications for Government AI Partnerships

The ongoing dispute underscores the challenges of aligning corporate AI safety priorities with the strategic objectives of government agencies. As the Pentagon seeks broader access to advanced models, companies like Anthropic must balance contractual obligations, ethical considerations, and public scrutiny. The outcome of this disagreement could shape future collaborations between defense institutions and AI firms, influencing how “lawful use” clauses are negotiated and implemented.

Source: Ars Technica
