White House blocks Anthropic's plan to widen Mythos AI access, citing security and compute limits
The White House has formally rejected Anthropic’s request to broaden the rollout of Mythos, the company’s advanced cybersecurity AI, to an additional 70 organizations. Officials cited two primary worries: the risk that the model could be misused for malicious cyberattacks and Anthropic’s limited computing capacity, which could strain the government’s own use of the system.
Mythos, unveiled in early April under Anthropic’s Project Glasswing, can autonomously locate and exploit vulnerabilities across a wide range of critical software. Anthropic has kept the model under tight control, allowing a select group of roughly 50 organizations to test it on their own networks. The firm’s expansion plan would more than double that cohort to about 120 entities.
According to a senior administration source, the White House’s objection rests on security and operational grounds. Officials fear that a larger user base could increase the chance of the model falling into the hands of actors intent on weaponizing its capabilities. In parallel, they argue Anthropic does not possess sufficient compute resources to serve an expanded set of users without degrading performance for existing government customers, notably the National Security Agency, which already uses Mythos.
Unauthorized access sparks alarm
Compounding the White House’s concerns, a small group of unauthorized users reportedly gained access to Mythos via a private online forum the same day Anthropic announced its limited‑release plan. Details of the breach remain unclear, but the incident highlighted the difficulty of containing a model designed to operate autonomously in hostile environments. The episode has amplified government anxiety about any further expansion of the user base.
Mythos’s capabilities are well documented. In testing, the model autonomously discovered thousands of zero‑day vulnerabilities across major operating systems and browsers, succeeded on 73% of expert‑level capture‑the‑flag tasks, and became the first AI to complete a 32‑step simulated corporate network attack from start to finish. Those results underscore why the U.S. government views the model as both a strategic asset and a potential security threat.
The dispute unfolds amid a broader policy tug‑of‑war. Earlier this year, the Pentagon labeled Anthropic a national‑security supply‑chain risk after negotiations stalled over whether the military could deploy Anthropic’s Claude model for autonomous weapons and mass surveillance—uses the company’s CEO, Dario Amodei, has publicly ruled out. Simultaneously, the White House is drafting an executive action that would let agencies bypass the Pentagon’s designation and integrate Anthropic models, including Mythos, into federal operations.
White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent met with Amodei in recent weeks, describing the discussion as productive. The administration says it is trying to balance innovation with security while working with the private sector, but the contrasting tracks—blocking Mythos’s commercial expansion while courting the company for government use—remain unresolved.
The outcome of these negotiations will shape Anthropic’s rollout strategy and set a precedent for how the United States regulates AI systems capable of offensive cyber operations. For now, the company’s expansion plans are on hold, and the debate over AI governance in the national‑security realm continues to intensify.