Pentagon Designates Anthropic as Supply‑Chain Risk
Pentagon Takes Unprecedented Action Against Domestic AI Firm
The U.S. Department of War announced a supply‑chain risk designation for Anthropic, the San Francisco AI company behind the Claude models. It is the first time the designation has been applied to an American company; the status was previously reserved for foreign adversaries such as China’s Huawei. Under 10 USC 3252, the designation obliges defense vendors and contractors to certify that they do not employ Anthropic’s Claude models in any work for the Pentagon.
Anthropic entered negotiations with the Department of War seeking written assurances that its technology would not be used for “mass domestic surveillance of Americans” or “fully autonomous weapons” lacking human involvement in targeting decisions. The company maintained that these limits were reasonable for any AI deployment, not just its own. The Pentagon countered that existing law already prohibits mass surveillance and that internal Department of Defense policy restricts fully autonomous weapons, arguing that contractual limits were unnecessary.
When talks ended on March 5, 2026, the Department of War formally informed Anthropic that the company and its products were deemed a supply‑chain risk, effective immediately. Anthropic’s CEO, Dario Amodei, stated the company would challenge the designation in court, asserting that the action is “not legally sound.” He also noted that the designation cannot affect Anthropic’s commercial customers or other government agencies, only its use in Department of War contracts.
Contradictions and Ongoing Use
Despite the designation, reports indicated that Claude remained in active use by the military in Iran through Palantir’s Maven Smart System, which integrates Claude for data management in operational contexts. President Donald Trump directed federal agencies to cease all use of Anthropic’s technology, though it was unclear how the order would affect third‑party deployments such as Palantir’s.
Industry Reaction and Competing Deals
The tech industry’s response was split. Hundreds of employees at Google and OpenAI signed an open letter urging support for Anthropic, while Elon Musk sided with the Trump administration, claiming Anthropic “hates Western Civilization.” OpenAI, by contrast, announced a separate deal with the Department of War that allows the military to use OpenAI models for “all lawful purposes,” language some OpenAI employees described as deliberately ambiguous.
Dean Ball, a former Trump White House AI adviser, criticized the supply‑chain risk designation as a “death rattle” of American strategic coherence, arguing that treating a domestic firm worse than a foreign adversary reflects “thuggish tribalism.”
Future Outlook
Anthropic has indicated a willingness to de‑escalate and seek an agreement that works for both parties, and Bloomberg reported that talks had quietly reopened even as the formal designation was announced. The pending legal challenge will test the government’s interpretation of the supply‑chain risk provision against Anthropic’s reading that the designation reaches only Department of War contracts.
The designation sets a precedent, marking the first time an American AI company built on principles of responsible AI has been classified alongside a foreign adversary by the very government it sought to serve.