OpenAI Secures Pentagon Contract While Anthropic Rejects Terms
OpenAI's Pentagon Agreement
OpenAI’s chief executive announced that the company had signed a contract with the Department of Defense. The company emphasized that its two core safety principles—prohibitions on domestic mass surveillance and the requirement for human responsibility in the use of force—are reflected in the agreement. According to OpenAI, the contract ties any use of its models to existing U.S. law, including the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333 and relevant Department of Defense directives.
The company also said it would deploy technical safeguards such as classifiers that can monitor model behavior and that some employees would receive security clearances to oversee the systems.
Critics Question the Safeguards
Industry observers and former OpenAI staff argue that the agreement’s reliance on “any lawful use” effectively leaves the Pentagon free to employ the technology for any activity the government deems legal. They note that U.S. intelligence agencies have historically interpreted legal authorities to permit extensive data collection, including bulk domestic surveillance. Critics say the language about “unconstrained,” “generalized” or “open‑ended” use is vague and may leave the military broad latitude in how it deploys the technology.
Experts also question the effectiveness of the technical safeguards. Classifiers, they explain, cannot verify whether a human reviewed a decision before a lethal strike or whether a query is part of a mass‑surveillance program. Because the contract allows the government to define what is legal, the safeguards could be overridden if a legal interpretation changes.
Anthropic's Stance and Fallout
Anthropic, a rival AI firm, declined to sign a similar contract, insisting on terms that would specifically prohibit mass surveillance and unsupervised lethal autonomous weapons. After negotiations collapsed, the Pentagon classified Anthropic as a supply‑chain risk, a designation usually reserved for foreign companies with cybersecurity concerns. Anthropic announced plans to challenge the classification in court.
The disagreement sparked public support for Anthropic within the tech community, with notable figures and users praising the company’s decision to stand by its red lines.
Implications for AI and Defense
The contrasting approaches of OpenAI and Anthropic illustrate a broader debate over how AI companies should engage with military customers. While OpenAI argues that adhering to current laws provides sufficient protection, critics warn that legal frameworks can shift and may not adequately safeguard civil liberties or prevent autonomous weapon use without human oversight.
The situation underscores the importance of clear contractual language, robust technical safeguards, and ongoing public scrutiny as artificial intelligence becomes increasingly integrated into national security operations.