Judge Calls Pentagon’s Move to Label Anthropic a Supply‑Chain Risk ‘Attempt to Cripple’ Company
Background of the Dispute
Anthropic, the creator of the Claude artificial‑intelligence system, has taken legal action against the U.S. Department of Defense after the Pentagon designated the company a supply‑chain risk. The label was applied after Anthropic pushed for restrictions on how its AI tools could be employed by the military. The company argues that the designation is retaliation for its public scrutiny of a contract dispute, potentially violating First Amendment protections.
Judicial Scrutiny
During a court hearing, District Judge Rita Lin expressed concern that the Pentagon’s action resembled an attempt to cripple Anthropic. She noted that the supply‑chain‑risk authority is typically reserved for foreign adversaries, terrorists and other hostile actors, and questioned whether the designation was appropriately tailored to genuine national‑security concerns. Judge Lin indicated that she would issue a temporary order pausing the designation only if she found Anthropic likely to succeed on the merits of its case.
Government Position
The Department of Defense, referring to itself as the Department of War, defended its decision by asserting that Anthropic’s AI tools could not be relied upon during critical moments. A Trump‑administration attorney, Eric Hamilton, argued that the department had followed proper procedures and that the security assessment should not be second‑guessed. The Pentagon also announced plans to replace Anthropic’s technology with alternatives from Google, OpenAI and xAI, and claimed to have safeguards to prevent any tampering during the transition.
Contractor Restrictions and Legal Authority
Defense Secretary Pete Hegseth posted a statement indicating that any contractor, supplier or partner doing business with the U.S. military was barred from commercial activity with Anthropic. However, during the hearing Hamilton acknowledged that Hegseth lacks legal authority to impose such a blanket ban on contractors for work unrelated to the Department of Defense. When asked why Hegseth made the statement, Hamilton said he did not know.
Implications for AI in the Military
The case has sparked a broader public conversation about the role of artificial intelligence in armed forces and the degree of deference Silicon Valley firms should give to government determinations about technology deployment. Critics argue that the Pentagon’s approach may set a precedent for punitive measures against companies that raise concerns about military applications of AI.
Next Steps
Judge Lin is expected to rule on the temporary order in the coming days. A related appeal is also pending before a federal appeals court in Washington, D.C., with a decision anticipated soon. The outcome will shape both Anthropic’s relationship with the government and the broader landscape of AI procurement for national‑security purposes.