Anthropic Sues U.S. Government Over Supply Chain Risk Designation

Background

Anthropic, a leading artificial‑intelligence developer, received a letter from the Department of Defense confirming that the agency had labeled the company a supply‑chain risk. This designation would place Anthropic on a national‑security blocklist, effectively barring the firm from many federal contracts.

Weeks of back‑and‑forth between Anthropic and the Defense Department preceded the lawsuit. In late February, Defense Secretary Pete Hegseth and senior officials pressured Anthropic to remove safeguards that prevent the company's models from being used for mass surveillance or the development of autonomous weapons. CEO Dario Amodei made clear the company would not consent to such uses.

Legal Action

When Anthropic refused to alter its safeguards, the Pentagon threatened to add the firm to the supply‑chain risk list and to cancel a $200 million contract. The company responded by filing a lawsuit seeking judicial review of the designation. The complaint alleges that the government’s action is unlawful, violates Anthropic’s free‑speech and due‑process rights, and lacks any authorizing federal statute.

In a statement to the press, Anthropic said: “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” The lawsuit characterizes the government’s conduct as an “unprecedented and unlawful … campaign of retaliation.”

Company Position

Anthropic emphasized that pursuing legal review does not change its “longstanding commitment to harnessing AI to protect our national security,” but is a necessary step to protect its business, customers, and partners. The company also noted that it had agreed to “collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.”

Industry Reaction

OpenAI entered the picture by securing a separate agreement with the Defense Department. OpenAI CEO Sam Altman highlighted the company’s safety principles, including prohibitions on domestic mass surveillance and requirements for human responsibility over the use of force, including autonomous weapon systems. The contract explicitly states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Following OpenAI’s deal, the company’s head of robotics hardware resigned, and employee Caitlin Kalinowski posted on X that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

Implications

The lawsuit underscores a growing clash between the federal government’s security objectives and AI developers’ ethical safeguards. It also highlights the legal uncertainties surrounding the government’s authority to impose supply‑chain risk designations on technology firms.

Source: Engadget
