The Verge — The U.S. Department of Defense has officially designated Anthropic, the creator of the Claude AI model, a supply-chain risk after negotiations over the company's use restrictions collapsed. The designation bars defense contractors from using Claude in any government work and threatens cancellation of contracts for firms that do business with Anthropic commercially. Anthropic's CEO called the department's action legally unsound and said the company will contest it in court. The dispute centers on Anthropic's refusal to allow the Pentagon to use Claude for autonomous lethal weapons without human oversight or for mass surveillance, raising questions about private control of government-grade AI.