Anthropic’s Standoff with the Pentagon Over AI Use Policy

Background

Anthropic, known for its Claude AI model, signed a $200 million contract with the Department of Defense last year. The agreement embeds the company’s "acceptable use policy," which bars the use of its technology for autonomous kinetic operations and mass domestic surveillance.

Pentagon’s Position

The Pentagon, led by Undersecretary of Defense for Research and Engineering Emil Michael, is pushing for an "any lawful use" clause that would give the military carte blanche to employ Anthropic’s services in any capacity, including mass surveillance and lethal autonomous weapons.

Anthropic’s Red Lines

Anthropic has made clear it will not comply with requests that conflict with its policy. The company cites existing DoD directives that require human judgment in the use of force and prohibit intelligence collection on U.S. persons without specific legal authority.

Potential Consequences

If the Pentagon classifies Anthropic as a supply‑chain risk, the $200 million contract could be terminated and defense contractors that rely on Claude may be forced to remove the technology from their systems. This would have a ripple effect across the defense industry, which currently uses Anthropic’s model for classified work.

Industry Reaction

Other AI firms such as OpenAI, xAI, and Google have already renegotiated their Pentagon contracts to align with the "any lawful use" language. Anthropic's stance, by contrast, has drawn both criticism and support from tech workers and policy experts, underscoring the broader debate over responsible AI deployment in national security contexts.

Source: The Verge