
Anthropic Introduces Safer Auto Mode for Claude Code

Overview of Claude Code Auto Mode

Anthropic announced a new auto mode for its Claude Code product; the feature lets the AI make permission-level decisions on behalf of developers. The addition targets a middle ground between constant manual oversight and unrestricted autonomy, which can lead to undesirable outcomes such as accidental file deletion, unintended data sharing, or execution of malicious code.

How Auto Mode Enhances Safety

Auto mode is designed to intercept risky actions before they are executed. When Claude Code attempts an operation that could pose a threat, the feature flags and blocks the action, then either gives the model a chance to try an alternative approach or asks the user to intervene. This safety layer aims to give developers a more secure environment while preserving the convenience of AI-driven assistance.

Current Availability and Planned Expansion

At launch, auto mode is offered as a research preview limited to users on Anthropic's Team plan. Anthropic has indicated that access will broaden to Enterprise customers and API users in the coming days, allowing a wider audience to test the feature.

Experimental Nature and Recommended Use

Anthropic cautions that auto mode remains experimental and does not eliminate risk entirely. The company advises developers to run Claude Code in isolated environments to limit potential impact. By acknowledging these limitations, Anthropic encourages responsible experimentation while it continues to develop safer AI assistance tools.


Source: The Verge
