OpenAI’s Pentagon Deal Raises Concerns Over Military Use and Domestic Surveillance
Background
AI firm Anthropic was labeled a supply‑chain risk by Defense Secretary Pete Hegseth and subsequently lost a $200 million Pentagon contract after refusing to allow its models to be used for autonomous weapons systems or mass domestic surveillance. That dispute set the stage for OpenAI’s latest engagement with the U.S. military.
OpenAI’s Pentagon Contract
OpenAI has signed a new agreement with the Department of Defense that, according to internal sources, contains language that could permit its artificial‑intelligence models to be used for domestic surveillance and other contentious purposes. In 2023, OpenAI’s contract terms barred military use of its models, but employees have disclosed that the Pentagon nonetheless accessed OpenAI technology through a Microsoft Azure arrangement that was not subject to the same restrictions.
In 2024, OpenAI removed its blanket ban on military applications of its models and later entered a contract with defense contractor Anduril to deploy its models for national‑security missions. OpenAI CEO Sam Altman has publicly voiced support for Anthropic’s refusal to let its AI be used for harmful purposes, yet the new agreement appears to leave similar avenues open.
Regulatory Gaps and Privacy Risks
Regulation has not kept pace with rapid AI advancement, leaving room for government agencies to purchase personal data from data brokers and use AI to assemble detailed profiles of citizens. Critics argue that the contract’s wording fails to address novel ways AI could enable surveillance that is technically legal, raising concerns about the opacity of military AI use and its impact on civilian privacy.
Expert Reactions
OpenAI researcher Noam Brown noted that the original contract language left “legitimate questions unanswered” about how AI might be used for surveillance, and that the updated language attempts to address those concerns. Sarah Shoker, former head of OpenAI’s geopolitics team, warned that everyday people and civilians in conflict zones stand to lose the most, because technical design choices and policy opacity make the effects of military AI hard to understand.
Overall, the deal places OpenAI under scrutiny similar to that faced by Anthropic, highlighting the tension between national‑security objectives and the need for robust safeguards against misuse of artificial‑intelligence technologies.