Anthropic Suspends OpenClaw Creator’s Claude Access, Restores Account Hours Later
Peter Steinberger, the developer behind the popular OpenClaw framework, posted a screenshot on X early Friday showing a message from Anthropic that his Claude account had been suspended for “suspicious” activity. The image, accompanied by a terse comment – “Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models” – quickly spread across the platform.
Within hours, Steinberger announced that his access had been restored. The reversal came after the post attracted hundreds of replies, including one from an Anthropic engineer who insisted the company never bans users for using OpenClaw and offered assistance. Whether the engineer’s outreach prompted the reinstatement remains unclear; Anthropic has not commented publicly.
The suspension did not occur in a vacuum. Last week Anthropic announced that its standard Claude subscriptions would no longer cover “third‑party harnesses,” explicitly naming OpenClaw. Subscribers must now pay for any OpenClaw usage through Claude’s API on a consumption basis. Anthropic framed the change as a response to the “usage patterns” of claws, which can demand more compute than typical prompts because they run continuous reasoning loops, retry tasks automatically, and integrate with numerous external tools.
Steinberger, who also works for OpenAI, expressed skepticism. He posted that the timing felt intentional, noting that Anthropic had recently added features to its closed‑source Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign tasks, just before the pricing shift. “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source,” he wrote.
The controversy revived old tensions between open‑source AI tooling and proprietary platforms. OpenClaw users, who rely on the framework to run agents across multiple model providers, now face an extra cost layer when they choose Claude. Steinberger explained that he continues to test Claude because it remains a popular choice for OpenClaw users, even though his day job at OpenAI focuses on product strategy for the rival’s models.
Online reactions were mixed. Some commenters accused Steinberger of “choosing the wrong side” after moving to OpenAI, while others defended his right to critique Anthropic’s pricing. One user wrote, “You had the choice, but you went to the wrong one,” to which Steinberger replied, “One welcomed me, one sent legal threats.” The exchange highlighted the personal stakes for developers navigating corporate affiliations in the fast‑moving AI landscape.
Anthropic’s decision to charge a “claw tax,” as some observers dubbed it, reflects a broader industry trend: providers are tightening control over how their models are accessed and monetized. By separating subscription revenue from high‑intensity workloads, companies aim to protect infrastructure costs and incentivize developers to stay within proprietary ecosystems.
For now, OpenClaw remains functional with Claude, but developers must monitor usage costs closely. Steinberger said he is “working on” compliance with the new pricing model, hinting at possible adjustments to the framework to mitigate expenses. The incident underscores the fragile balance between open‑source innovation and the commercial interests of AI model owners.
Anthropic has not issued a formal statement regarding the suspension or its pricing policy. Steinberger declined further comment. The episode adds another chapter to the ongoing debate over how AI platforms will manage third‑party integrations while preserving the openness that many developers champion.