AI Pulse

Anthropic banned a dev for using Claude the way it just told everyone to

Last week, Anthropic changed its subscription terms: third-party tools like OpenClaw no longer count as covered usage. Every call to Claude now has to go through Anthropic’s official API, billed per token.

Peter Steinberger, who built OpenClaw, switched his traffic right away. Within hours, his account was suspended for “suspicious activity.” The flagged requests were fully compliant—and paid for. Anthropic’s system mistook legitimate use for abuse.

Steinberger works at OpenAI, Anthropic’s rival. He had previously called out Anthropic for copying OpenClaw-style features like remote agent control, then cutting off third-party access shortly after launching them. Even so, he tests OpenClaw with Claude because more people run Claude through OpenClaw than use ChatGPT directly.

OpenClaw is a widely used open-source connector. It lets developers plug Claude, Llama, or other models into their workflows without rewriting code. Anthropic’s move follows a familiar script: when open tools catch on, infrastructure providers often respond with pricing walls and access cuts instead of embracing integration.

The company seems intent on locking AI agents inside its own product, Cowork—a walled garden where everything runs under Anthropic’s roof. OpenClaw doesn’t break Claude. It just skips Cowork.

Developers who followed the new rules are now getting suspended without warning—even while paying per token. Anthropic temporarily banned Steinberger from using Claude and hasn’t said what triggered the alert, whether there’s an appeal path, or if others have been hit too.

When compliance isn’t enough, the real gate isn’t technical—it’s arbitrary.

📎 Read the original · TechCrunch