Appeals Court Keeps Anthropic Supply‑Chain Risk Label in Place

The U.S. Court of Appeals for the District of Columbia Circuit issued a stay on Wednesday that preserves the Pentagon’s supply‑chain risk designation on Anthropic, the company behind the Claude AI system. The appellate panel, in a 2‑1 decision, said removing the label would jeopardize military operations during an ongoing conflict, even though the company may face financial harm.

Anthropic’s legal fight stems from two separate statutes the Department of Defense used to bar the firm from supplying AI tools to the armed forces. A San Francisco federal judge last month found the DoD acted in bad faith, citing the company’s pushback against restrictive usage policies and its public criticism of those limits. That judge ordered the risk label removed, prompting the Trump administration to restore access to Claude across the Pentagon and other federal agencies.

In Washington, the appellate court focused on a different statutory provision but reached the opposite conclusion. The judges emphasized the unique pressures of wartime procurement, noting that denying a stay “would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.” The panel acknowledged Anthropic’s potential loss of revenue but deferred to the Department of Defense’s judgment on national‑security matters.

Acting Attorney General Todd Blanche praised the decision on X, calling it “a resounding victory for military readiness.” He reiterated that the commander‑in‑chief and the Department of War (as the Pentagon calls itself under the current administration) must retain full access to AI models integrated into sensitive systems.

Anthropic’s spokesperson, Danielle Cohen, expressed gratitude that the D.C. court recognized the urgency of the issue and reaffirmed the company’s confidence that the courts will eventually deem the designations unlawful. The firm argues that the label has cost it contracts and that its Claude model lacks the precision required for fully autonomous weapon systems, a stance that has drawn criticism from the Pentagon.

Legal experts say the case tests the breadth of executive power over private tech firms, especially as the Pentagon accelerates AI deployment in its conflict with Iran. Some scholars warn that the DoD’s actions could stifle open debate among AI researchers about model performance and safety.

Both lawsuits are expected to continue for months. The D.C. Circuit will hear oral arguments on May 19, while the San Francisco case proceeds on its own timeline. Details about how the Department of Defense has used Claude, or the extent of its transition to alternatives from Google DeepMind, OpenAI, or other vendors, remain scarce.

As the legal battles unfold, Anthropic faces uncertainty about its role in federal AI initiatives. The outcome may shape how future supply‑chain risk designations are applied to domestic tech companies, a question that looms large for the industry and for national‑security policymakers alike.

Source: Wired AI
