Anthropic Unveils ‘Dreaming’ Feature for AI Agents, Sparks Debate Over Anthropomorphic Naming

Anthropic introduced a feature called “dreaming” for its AI agents during a developer conference in San Francisco on May 6, 2026. The company described the addition as a way for agents to sort through transcripts of recent tasks, identify recurring patterns, and use those insights to improve future performance. In its blog post, Anthropic said memory lets each agent capture what it learns while working, while dreaming refines that memory between sessions, pulling shared learnings across agents and keeping the knowledge base up to date.
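
Conceptually, the workflow Anthropic describes resembles a consolidation pass run between sessions: notes gathered by individual agents during their tasks are scanned, and only recurring observations are promoted into a shared knowledge base. The short Python sketch below is purely illustrative; the names used (AgentMemory, consolidate) are hypothetical and do not reflect Anthropic's actual API or implementation.

```python
# Illustrative sketch only; all names are hypothetical, not Anthropic's API.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Notes an agent captures while working through its tasks."""
    observations: list[str] = field(default_factory=list)

def consolidate(memories: list[AgentMemory], min_count: int = 2) -> dict[str, int]:
    """A 'dreaming'-style pass between sessions: count observations across
    agents and keep only those that recur, forming a shared knowledge base."""
    counts = Counter(obs for mem in memories for obs in mem.observations)
    return {pattern: n for pattern, n in counts.items() if n >= min_count}

# Two agents that independently noticed the same website quirk.
agent_a = AgentMemory(["login form requires 2FA", "invoice PDFs are paginated"])
agent_b = AgentMemory(["login form requires 2FA"])
print(consolidate([agent_a, agent_b]))  # {'login form requires 2FA': 2}
```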

Developers who build multi‑step workflows—such as navigating several websites or parsing multiple documents—will now have a tool that automatically looks for efficiencies in the agents’ activity logs. The goal, according to Anthropic, is to create self‑improving agents that can adapt without manual re‑training, a step forward for its recently launched AI agent infrastructure.

The naming of the feature, however, has ignited a broader conversation about how AI companies brand their technologies. Anthropic is not the first to borrow terminology from human cognition. OpenAI unveiled a “reasoning” model in 2024 that emphasized a longer “thinking” period before responding, while many startups market their bots as having “memories” of user preferences. Critics argue that such language encourages users to attribute human‑like qualities to software that operates on statistical patterns.

Anthropic’s internal documents reinforce the human‑centric framing. The company’s constitution references the Claude model in terms like “virtue” and “wisdom,” and a resident philosopher is tasked with interpreting the bot’s “values.” Proponents claim that using familiar concepts helps developers and end‑users understand system behavior, but scholars warn that anthropomorphism can distort moral judgments about AI, including assessments of responsibility and trust.

A recent paper in the journal AI & Ethics highlighted the risk that anthropomorphic language leads people to overestimate what machines can achieve, potentially eroding critical scrutiny. The authors note that describing AI functions with human analogues may inflate expectations and obscure the technical limits of the underlying models.

Anthropic’s announcement comes at a time when the AI industry is rapidly expanding its portfolio of agentic tools. The “dreaming” feature represents a technical advance, yet the surrounding debate underscores an ongoing tension between marketing appeal and accurate representation. As companies continue to embed human‑like descriptors in product names, observers will likely monitor how such choices shape public understanding and regulatory conversations around artificial intelligence.

Source: Wired AI
