OpenAI to Roll Out GPT-5.5-Cyber to Select Cybersecurity Teams

OpenAI is set to introduce a new, purpose‑built artificial‑intelligence model called GPT-5.5-Cyber, but the company will not make it available to the general public. Instead, CEO Sam Altman announced on X that the model will be rolled out "in the next few days" to a narrowly defined cohort of trusted cybersecurity professionals, whom the firm describes as "cyber defenders." The limited launch is meant to give institutions a chance to bolster their digital defenses while the company, together with the broader AI ecosystem and government partners, works out a framework for trusted access.

Details about GPT-5.5-Cyber's architecture, capabilities, or pricing remain scarce. The model's name suggests it builds on the recently released GPT-5.5, which OpenAI billed as its "smartest and most intuitive to use" model yet. Beyond the label, the company has not disclosed whether the new version adds specialized threat‑detection tools, real‑time analysis features, or other security‑focused enhancements.

OpenAI's decision reflects a growing industry trend: firms are increasingly shielding their most powerful models from open release, citing the risk of malicious exploitation. Earlier this year, OpenAI introduced GPT-Rosalind, a life-science-oriented model designed to accelerate drug discovery and biological research. Like GPT-Rosalind, GPT-5.5-Cyber is being positioned as a high-impact tool whose misuse could have serious consequences.

Anthropic, a rival AI lab, recently attempted a similar approach with its Claude Mythos model, a cybersecurity‑focused system. The rollout, however, attracted criticism after a series of security lapses that exposed the model to unintended users. The White House, according to a report in The Wall Street Journal, pushed back against expanding Mythos's access, warning that broader distribution could both heighten cyber‑risk and strain the government's ability to leverage the technology effectively.

OpenAI appears to be learning from that episode. By limiting GPT-5.5-Cyber to a pre-selected group, the company hopes to maintain tighter control over who can query the model and how its outputs are used. Altman emphasized collaboration with the entire ecosystem, suggesting that industry partners, academic researchers, and federal agencies will all have a role in shaping the model's deployment policies.

The exact criteria for "trusted" access have not been disclosed. In previous "trusted access" programs, OpenAI vetted both individual professionals and institutions, often requiring background checks, security clearances, or adherence to strict usage guidelines. It is likely that a similar vetting process will govern GPT-5.5-Cyber's initial user base.

While the announcement offers little concrete insight into the model's technical prowess, the move signals OpenAI's confidence that AI can meaningfully augment cyber‑defense operations. Companies and government entities that face increasingly sophisticated attacks may soon have a tool that can parse massive threat logs, generate remediation recommendations, or simulate attack scenarios at scale.

Critics, however, caution that even restricted AI tools can be reverse‑engineered or leaked, potentially giving adversaries a powerful new weapon. The balance between empowering defenders and preventing weaponization remains a delicate one, and OpenAI's rollout will likely be scrutinized closely by both security experts and policymakers.

As the rollout proceeds, industry observers will watch for signs of how OpenAI manages access, monitors usage, and addresses any inadvertent disclosures. The outcome could set a benchmark for how AI firms handle the distribution of high‑risk models in the future.

Source: The Verge
