OpenAI rolls out GPT-5.4-Cyber, expands verified access for thousands of defenders

OpenAI unveiled GPT-5.4-Cyber on Thursday, positioning the new model as a purpose‑built assistant for defensive security work. Unlike the standard GPT‑5.4, the Cyber variant relaxes the usual refusal boundaries, allowing verified analysts to query the model about vulnerability research, exploit analysis, and malware behaviour. It also adds binary reverse‑engineering functionality, enabling users to upload compiled executables and receive detailed assessments of potential weaknesses.

The model is delivered through the company’s Trusted Access for Cyber (TAC) framework, an identity‑and‑trust system launched in February alongside a $10 million grant fund. TAC gates entry to more capable models behind verification tiers. Individual defenders can sign in at chatgpt.com/cyber, while enterprises must request team‑wide access through an OpenAI representative. An invite‑only top tier grants the most permissive capabilities, including GPT‑5.4‑Cyber, but may require users to waive Zero‑Data Retention, giving OpenAI visibility into how the model is applied.

OpenAI’s latest update expands TAC from a limited pilot to “thousands of verified individual defenders and hundreds of teams responsible for defending critical software,” according to the company. New verification levels unlock progressively more powerful features, and the rollout marks a shift from model‑level refusal to a focus on who can ask the questions. OpenAI frames the approach around three principles: democratised access through objective verification, iterative deployment that refines safety as risks emerge, and ecosystem resilience via grants and open‑source contributions.

The timing aligns with Anthropic’s April announcement of Project Glasswing, which placed its Claude Mythos Preview model behind a $100 million defensive initiative and limited access to just 11 organisations, including Apple, Google, Microsoft and several others. Anthropic’s model demonstrated autonomous discovery of thousands of zero‑day vulnerabilities across major operating systems, prompting the company to keep the tool tightly gated. OpenAI’s strategy diverges, offering a less powerful but more widely available solution, arguing that restricting advanced security tools to a handful of tech giants leaves most organisations—hospitals, municipal governments, small security firms—without comparable defensive capabilities.

Beyond the lowered refusal barrier, GPT‑5.4‑Cyber targets workflows that standard ChatGPT handles poorly. Binary reverse engineering, the headline feature, lets analysts dissect compiled executables without source code, a task traditionally reserved for tools such as IDA Pro or Ghidra. The model also entertains dual‑use queries about attack techniques, exploit chains and vulnerability classes, reducing friction for security teams that need to reason about adversarial tactics.

OpenAI pairs the model with Codex Security, an automated code‑scanning service that has already contributed to more than 3,000 critical vulnerability fixes across the open‑source ecosystem. Codex now covers over 1,000 projects through a free scanning programme, reinforcing the defensive stack around the new model.

The dual‑use dilemma remains central. The same capabilities that help defenders spot flaws can aid attackers in weaponising them. OpenAI argues that verification, tiered access and usage monitoring are more effective safeguards than blanket refusal, citing research showing that prompt‑injection attacks can bypass refusal‑based defences more than 85% of the time. Critics note that requiring top‑tier users to waive Zero‑Data Retention could expose sensitive investigative data to OpenAI, creating a potential single point of compromise if the logs were breached.

As the EU AI Act moves towards enforcement in August 2026, high‑risk AI systems, including security automation tools, will need to meet strict risk‑management and transparency requirements. How OpenAI’s tiered‑access model fits within that regulatory framework remains uncertain.

For now, the industry watches two leading AI firms race to equip cyber defenders with models that can analyse vulnerabilities at unprecedented speed. Whether the competition yields a safer internet or amplifies risk will depend on how robust the access controls and monitoring mechanisms prove to be.
Source: The Next Web