Anthropic Limits Access to Claude Mythos, Its New Cybersecurity AI Model

Anthropic rolled out Claude Mythos Preview, its newest AI model designed for cybersecurity, to a select group of customers on Tuesday. The list includes industry giants Amazon, Apple and Microsoft, as well as security‑focused firms Broadcom, Cisco and CrowdStrike. Anthropic said the model will remain available only to organizations that meet strict vetting criteria and that discussions are under way with the U.S. government about potential deployments.

The announcement comes on the heels of two high‑profile data incidents at the San Francisco‑based startup. Last month, a publicly accessible data cache exposed internal documents describing Mythos and other projects. A week later, internal source code for Anthropic’s personal‑assistant model Claude Code was posted online. The company attributed both leaks to human error, raising concerns about its data‑handling practices.

Despite the setbacks, Anthropic said Mythos has already been in use with partners for several weeks. The model is billed as a "general purpose" AI with broader capabilities than previous offerings, but it is the first whose release Anthropic has restricted because of the technology’s dual‑use nature. "We believe technologies like this are powerful enough to do a lot of really beneficial good but also potentially bad if they land in the wrong hands," said Dianne Na Penn, head of product management, research at Anthropic.

According to the company, Mythos can identify cybersecurity vulnerabilities at a scale that exceeds what human analysts can manage. It can parse massive codebases, flag weaknesses and suggest remediation steps faster than traditional tools. That same analytical power, however, could let the model discover novel ways to exploit the vulnerabilities it finds, a risk Anthropic is unwilling to take with a broad public release.

Anthropic’s cautious approach reflects a growing industry debate over the responsible deployment of powerful AI systems. By restricting access to a curated set of customers, the company hopes to give early adopters a "head start" in securing their environments while limiting the chance that malicious actors obtain the same capabilities.

Customers receiving Mythos will reportedly gain the ability to detect code flaws and potential attack vectors at a scale previously unattainable. The company believes this could reshape cybersecurity practices across sectors, shifting the balance toward proactive defense rather than reactive patching.

While the limited rollout aims to mitigate misuse, Anthropic has not disclosed a timeline for a broader release. The firm emphasized that the model will remain unavailable to the general market until it can ensure robust safeguards are in place.

Industry observers note that the move underscores the tension between innovation and security in AI development. As more firms explore AI‑driven security solutions, the pressure to balance rapid advancement with ethical considerations will likely intensify.

Source: Ars Technica