What’s new on Article Factory and the latest in the generative AI world

Discord Users Crack Anthropic’s Restricted Mythos AI Model

Wired AI
A group of Discord community members accessed Anthropic’s tightly guarded Mythos Preview AI model by exploiting a breach at AI‑training startup Mercur and leveraging permissions left over from a contracting role. The group used the model only to create simple websites, avoiding detection, but their actions expose gaps in Anthropic’s access controls and raise concerns about the security of advanced AI tools. Read more →

Anthropic’s Mythos Preview Bypassed CISA, Raising Cybersecurity Concerns

The Verge
Anthropic’s new AI‑driven security tool, Mythos Preview, is being tested by several U.S. federal agencies, but the Cybersecurity and Infrastructure Security Agency (CISA) reportedly lacks access. While the Commerce Department and the National Security Agency are evaluating the model, CISA’s exclusion comes amid broader budget cuts and staffing limits imposed by the Trump administration, prompting questions about the nation’s readiness to defend critical infrastructure. Read more →

Unauthorized Group Gains Access to Anthropic’s Mythos Cybersecurity Tool, Report Says

TechCrunch
Members of a private online forum have reportedly breached Anthropic’s newly unveiled cybersecurity AI, Mythos, according to Bloomberg. The group, linked to a Discord channel that hunts unreleased AI models, accessed the tool through a third‑party contractor that works with Anthropic. Anthropic confirmed it is investigating the incident but said there is no evidence yet that the breach affected its own systems. Mythos, rolled out to a handful of vendors such as Apple under the Project Glasswing initiative, was designed to strengthen enterprise security; the breach raises concerns that the tool could be repurposed by malicious actors. Read more →

OpenAI CEO Sam Altman slams Anthropic's Mythos as fear‑based marketing

TechCrunch
OpenAI chief Sam Altman accused rival Anthropic of using fear‑mongering to promote its new cybersecurity model, Mythos, during a recent podcast appearance. Altman suggested the rhetoric was designed to keep advanced AI tools in the hands of a select few, echoing broader industry debates about hype, safety and market positioning. Read more →

Anthropic's Mythos AI Model Raises Alarm Over Surge in AI-Driven Hacking

Ars Technica
Anthropic's new Mythos AI model has sparked concern among security experts after data from CrowdStrike showed AI‑enabled cyberattacks jumping 89 percent in 2025. The model's ability to automate vulnerability hunting could overwhelm defenders, with internal warnings that companies may discover more flaws than they can patch. Recent incidents, including a Chinese‑linked AI espionage campaign that used Anthropic's Claude Code to breach dozens of high‑profile targets, underscore the growing threat. Analysts argue that granting AI agents unrestricted access to data, the internet, and external communication creates a “lethal trifecta” for hackers. Read more →

NSA Deploys Anthropic’s Mythos AI Model Amid Ongoing Government Dispute

Engadget
The National Security Agency has begun using Anthropic’s new Mythos Preview, a general‑purpose language model touted for its strength in computer‑security tasks. Sources familiar with the rollout say the NSA is one of roughly 40 agencies granted access and that usage is expanding within the department. The move comes despite a months‑long feud between the AI firm and the Pentagon, a February order from former President Trump to halt government use of Anthropic services, and ongoing lawsuits over the company’s designation as a supply‑chain risk. Read more →

AI 'doom influencers' amplify warnings as advanced models face limited rollout

Digital Trends
A growing cohort of AI researchers, tech leaders and content creators, dubbed “doom influencers,” is pushing warnings about the risks of increasingly powerful artificial intelligence. Their messages, ranging from job displacement to existential threats, are gaining traction as companies such as Anthropic hold back their most advanced models; Anthropic’s Mythos, for instance, is limited to a handful of vetted partners. Governments in the UK, Canada and India are also taking note, sparking a broader debate over how to balance rapid AI progress with safety and regulation. Read more →