What's new on Article Factory and the latest in the generative AI world

Anthropic Announces Claude’s New Computer-Use Capabilities with Built‑In Safeguards

Anthropic Announces Claude’s New Computer-Use Capabilities with Built‑In Safeguards Ars Technica
Anthropic introduced a computer‑use feature for its Claude AI model, allowing the system to interact directly with a user's desktop. The company emphasized a set of safeguards designed to block risky actions such as moving money, modifying files, or accessing sensitive data, though it warned that these protections are not absolute. Users are advised to start with trusted applications and avoid handling sensitive information during the preview phase. Anthropic’s rollout follows similar moves by Perplexity, Manus, and Nvidia, and comes after the viral spread of OpenClaw, which prompted OpenAI to hire its creator to advance personal agents. Read more →

Anthropic Expands Claude with Autonomous Computer Control in Code and Cowork

Anthropic Expands Claude with Autonomous Computer Control in Code and Cowork The Verge
Anthropic has introduced a new research preview that lets Claude’s Code and Cowork agents control a Mac computer on behalf of users. The feature lets the AI open files, browse the web, run development tools and interact with apps without any setup, and it is available to Claude Pro and Max subscribers. Users must run the Claude desktop app on a supported Mac and pair it with the mobile app. The system asks for explicit permission before taking actions and can fall back to direct control of the mouse, keyboard and display when integrations are unavailable. Read more →

Anthropic Launches Claude Cowork: An AI Assistant for PC Tasks

Anthropic Launches Claude Cowork: An AI Assistant for PC Tasks Digital Trends
Anthropic has introduced Claude Cowork, a research‑preview AI assistant for Claude Pro and Max subscribers that can perform computer tasks on macOS and Windows without complex setup. The tool can open files, browse the web, interact with apps, and run developer utilities, using built‑in connectors for services like Gmail, Google Drive, and Slack when available, and otherwise controlling the mouse and keyboard. It always requests permission before accessing new apps or files and can be stopped at any time. Additional features include Claude Dispatch for mobile command input, Claude Channels for event integration, and scheduled task automation. Read more →

Anthropic Introduces Claude Computer-Control Feature for Pro and Max Subscribers

Anthropic Introduces Claude Computer-Control Feature for Pro and Max Subscribers CNET
Anthropic announced that its Claude AI can now control a macOS computer, allowing it to perform tasks such as opening files, scrolling, clicking, and using apps like Google Calendar or Slack. The capability is limited to Claude Pro and Claude Max subscribers, requires permission before each action, and includes safety safeguards to block prompt injections and other vulnerabilities. Users are advised not to use the feature with apps that handle sensitive data. The new function works with Anthropic's Dispatch service, enabling task delegation from a phone and supporting morning briefings or test runs. Read more →

Anthropic Expands Claude Code and Claude Cowork with Computer Interaction Capabilities

Anthropic Expands Claude Code and Claude Cowork with Computer Interaction Capabilities Engadget
Anthropic announced that its Claude Code and Claude Cowork tools are being updated to operate directly on a user's computer. The new functionality lets the AI open files, browse the web, and run development tools. When activated, Claude first looks for connectors to services like Google Workspace or Slack, but can still perform tasks without a connector. The system asks for permission before taking actions, and Anthropic advises against using it for sensitive data. The feature launches as a research preview for Claude Pro and Claude Max subscribers on macOS and integrates with the Dispatch messaging platform. Read more →

Senator Elizabeth Warren Calls Pentagon’s Ban on Anthropic ‘Retaliation’

Senator Elizabeth Warren Calls Pentagon’s Ban on Anthropic ‘Retaliation’ TechCrunch
U.S. Senator Elizabeth Warren called the Department of Defense’s decision to designate AI lab Anthropic a supply‑chain risk “retaliation.” Warren argued the move punishes Anthropic for refusing to let its technology be used for mass surveillance or fully autonomous weapons without human oversight. The dispute has drawn support from several tech firms and legal groups, and Anthropic is suing the DoD over alleged First Amendment violations while a judge considers a preliminary injunction. Read more →

Inside Amazon’s Austin Chip Lab: The Trainium Story and Its Impact on AI Partnerships

Inside Amazon’s Austin Chip Lab: The Trainium Story and Its Impact on AI Partnerships TechCrunch
Amazon invited a journalist on a private tour of its Austin chip lab, showcasing the development of the Trainium AI processor family. Lab leaders Kristopher King and Mark Carroll explained how Trainium, originally built for training, now powers inference for services like Bedrock and supports major partners such as Anthropic, OpenAI, and Apple. The lab’s work includes custom servers, liquid‑cooled chips, and a mesh network that reduces latency. Engineers described the intense silicon bring‑up process, welding stations, and a private testing data center. CEO Andy Jassy highlighted Trainium as a multibillion‑dollar business driving AWS’s AI strategy. Read more →

Anthropic Denies Claims It Could Disrupt Military AI Systems

Anthropic Denies Claims It Could Disrupt Military AI Systems Wired AI
The U.S. Department of Defense has expressed concern that Anthropic’s AI model, Claude, could be manipulated to interfere with military operations. Anthropic responded by stating it has no ability to shut down, alter, or otherwise control the model once deployed by the government. The company highlighted that it lacks any back‑door or remote kill switch and cannot access user prompts or data. In parallel, Anthropic has filed lawsuits challenging a supply‑chain risk designation that limits the Pentagon’s use of its software. The dispute underscores tension between national‑security priorities and emerging AI technologies. Read more →

OpenAI Pursues Desktop “Superapp” Combining ChatGPT, Codex and Atlas

OpenAI Pursues Desktop “Superapp” Combining ChatGPT, Codex and Atlas CNET
OpenAI is developing a desktop application that unifies its three flagship AI tools—ChatGPT, the coding platform Codex, and the AI‑first browser Atlas—into a single “superapp.” The move, reported by The Wall Street Journal, aims to simplify the user experience and allow the company to focus on its core offerings. Executives, including Fidji Simo, say the consolidation will reduce distractions and enhance personalization, as the integrated AI can learn from users across chat, coding and browsing tasks. The strategy also positions OpenAI against rivals such as Anthropic. Read more →

OpenAI Acquires Astral to Bolster Codex with Open‑Source Python Tools

OpenAI Acquires Astral to Bolster Codex with Open‑Source Python Tools Ars Technica
OpenAI announced an agreement to acquire Astral, the creator of popular open‑source Python development tools such as uv, Ruff, and ty. The acquisition will integrate Astral’s projects into OpenAI’s Codex team, allowing AI agents to work more directly with tools developers already use. OpenAI pledged continued support for the open‑source community while enhancing Codex’s capabilities. The move intensifies competition with Anthropic’s Claude Code, which recently added the JavaScript runtime Bun. Earlier this month, OpenAI also secured Promptfoo, an open‑source security tool for large language models. Read more →

Pentagon Declares Anthropic an Unacceptable Security Risk

Pentagon Declares Anthropic an Unacceptable Security Risk Engadget
The Department of Defense has argued that allowing Anthropic continued access to its warfighting infrastructure would introduce an unacceptable risk to supply chains and national security. In a court filing responding to Anthropic's lawsuit over a supply‑chain risk designation, the Pentagon cited concerns that the company could disable or alter its AI models during operations if corporate “red lines” were crossed. The filing notes that Defense Secretary Pete Hegseth included a provision in AI contracts permitting use for any lawful purpose, which Anthropic refused, prompting the department to label the partnership unsafe. Read more →

Pentagon Plans to Train AI Models on Classified Military Data

Pentagon Plans to Train AI Models on Classified Military Data Engadget
The Department of Defense is reportedly preparing to have artificial‑intelligence companies train versions of their models on classified information for exclusive military use. The initiative would take place in a secure data center authorized for classified projects, with the Pentagon retaining ownership of all training data. Companies such as OpenAI and xAI are expected to participate, while Anthropic may be excluded due to its policy restrictions. Experts warn that training on sensitive data could expose classified material to personnel lacking proper clearance, raising security concerns about broader model deployment within the defense establishment. Read more →

Justice Department Declares Anthropic Unreliable for Military AI Use

Justice Department Declares Anthropic Unreliable for Military AI Use Wired AI
The U.S. Justice Department defended a Pentagon decision to label AI developer Anthropic as a supply‑chain risk, arguing the company cannot be trusted with warfighting systems. Anthropic sued, claiming the label violates its rights and threatens its business, but the government maintained the action was lawful and necessary for national security. The dispute centers on whether Anthropic's Claude models should be allowed to support defense operations, with the Department of Defense seeking alternative AI providers while the lawsuit proceeds in federal court. Read more →

OpenAI Unveils Faster GPT-5.4 Mini and Nano Models for Coding Tasks

OpenAI Unveils Faster GPT-5.4 Mini and Nano Models for Coding Tasks CNET
OpenAI has launched GPT-5.4 mini and nano, the smallest and quickest variants of its GPT-5.4 family. Designed as workhorse models for coding and data‑processing tasks, the mini model is reported to be more than twice as fast as its predecessor on coding, reasoning, and tool‑use benchmarks, while still approaching the performance of the full GPT-5.4. The nano model targets even lighter workloads such as classification and data extraction. Both models are available through OpenAI’s API, with the mini model also integrated into Codex and the ChatGPT "Thinking" feature, positioning OpenAI against rivals like Anthropic’s Claude Code. Read more →

Pentagon Pursues New AI Models as Anthropic Contract Falls Apart

Pentagon Pursues New AI Models as Anthropic Contract Falls Apart TechCrunch
After a contentious split, the Pentagon is developing its own large‑language‑model tools to replace Anthropic's AI. The Department of Defense announced engineering work on multiple LLMs for government‑owned environments and expects operational use soon. Anthropic's $200 million contract collapsed over disputes about unrestricted access, mass‑surveillance prohibitions, and autonomous weapon use. While OpenAI and Elon Musk’s xAI have secured separate agreements with the Pentagon, Defense Secretary Pete Hegseth labeled Anthropic a supply‑chain risk, a restriction that Anthropic is now challenging in court. Read more →