Anthropic Bans OpenClaw: Prompt Caching Costs Explode
Anthropic is ending Claude subscription access for third-party tools like OpenClaw because they break prompt caching, driving compute costs 10-25x higher than the official apps incur.
Prompt Caching Enables Subsidies, But Third-Party Tools Break It
Anthropic's $200/month Claude subscriptions deliver the equivalent of $2,000-$5,000 in API compute, a 10-25x subsidy that only pencils out because the official Claude Code app is built around prompt caching. Cached tokens skip recomputing attention over the shared prefix, slashing the cost of repeated prompts in long sessions. Third-party harnesses like OpenClaw bypass this optimization: they issue largely uncached requests, consuming far more compute per subscription dollar. Boris Cherny, creator of Claude Code, confirmed the usage-pattern mismatch and submitted GitHub PRs to improve OpenClaw's caching, some of which have already been merged. Anthropic is now prioritizing capacity for official workloads, refunding affected users with equivalent API credits, and explicitly enforcing its February policy from Dec 12 PT. OpenClaw users can keep working by pointing the tool at an API key directly, but at full, unsubsidized pricing.
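The caching mechanics above hinge on the client marking a large, stable prefix as cacheable so the server can reuse it across turns. A minimal sketch of what that looks like in an Anthropic Messages API payload, using the documented `cache_control` field (the model id and context strings here are illustrative assumptions, and no network call is made):

```python
def build_cached_request(system_prompt: str, codebase_context: str,
                         user_msg: str) -> dict:
    """Build a Messages API payload whose large, stable prefix is
    marked cacheable so repeated calls in a session hit the cache."""
    return {
        "model": "claude-sonnet-4-20250514",  # illustrative model id
        "max_tokens": 1024,
        "system": [
            {"type": "text", "text": system_prompt},
            {
                "type": "text",
                "text": codebase_context,
                # cache_control marks the end of the cacheable prefix;
                # everything up to and including this block is cached.
                "cache_control": {"type": "ephemeral"},
            },
        ],
        # Only the per-turn user message changes between requests.
        "messages": [{"role": "user", "content": user_msg}],
    }

payload = build_cached_request(
    "You are a coding assistant.",
    "<tens of thousands of tokens of repo context>",
    "Refactor the parser module.",
)
```

A harness that rebuilds or reorders this prefix on every turn, as OpenClaw reportedly does, never gets a cache hit and pays full input price for the entire context each time.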
Fix Quota Burn with Model Switches and Session Caps
Users report exhausting Claude Pro limits in as little as 70 minutes, driven by larger 1M-token contexts and the removal of an earlier 2x capacity boost. Anthropic denies overcharging, attributes the burn to prompt cache misses, and recommends several mitigations. Start sessions with Sonnet rather than Opus (4:6 ratio), since Opus burns tokens roughly twice as fast early on, and starting cheap preserves the cache. Reduce the effort level or disable extended thinking mid-session. Cap contexts at 200k tokens despite the 1M ceiling: pricing stays flat, but larger windows trigger cache misses. Avoid resuming sessions idle for more than an hour; start fresh instead. These tweaks align usage with the workloads Anthropic optimized for, stretching quotas without any infrastructure changes. Anthropic subsidizes less than OpenAI or Google, making it the priciest of the frontier labs, and it collects session data for model training as the true "cost" of the subsidy.
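Why cache misses dominate quota burn is easy to see with back-of-envelope arithmetic. A sketch under illustrative pricing assumptions (roughly Anthropic's published Opus rates: $15 per million input tokens, cache writes at 1.25x, cache reads at 0.1x), comparing a session that resends the same 200k-token context over 20 turns with and without caching:

```python
# Illustrative prices, $ per million input tokens (assumed, not quoted
# from the article): base input rate, cache-write and cache-read rates.
BASE_INPUT = 15.00
CACHE_WRITE = BASE_INPUT * 1.25   # 18.75: first turn writes the cache
CACHE_READ = BASE_INPUT * 0.10    #  1.50: later turns read it

def session_input_cost(context_tokens: int, turns: int, cached: bool) -> float:
    """Input-side cost of a session that resends the same context each turn."""
    mtok = context_tokens / 1_000_000
    if cached:
        # Turn 1 pays the cache-write premium; turns 2..n pay cache-read.
        return mtok * CACHE_WRITE + mtok * CACHE_READ * (turns - 1)
    # Every turn pays the full input price for the whole context.
    return mtok * BASE_INPUT * turns

ctx, turns = 200_000, 20
hit = session_input_cost(ctx, turns, cached=True)    # 9.45
miss = session_input_cost(ctx, turns, cached=False)  # 60.00
ratio = miss / hit                                   # ~6.3x
```

The gap widens with longer sessions and bigger contexts, which is why a cache-hostile harness can plausibly cost 10-25x more than the official client.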
Free Lunch Ends: Demand Outpaces Subsidized Supply
The industry pattern: subsidies for dev tools like Claude Code, Cursor, and Google AI Pro are shifting to tiered access (Google Pro, for example, limits premium models to taste-tests and defaults to Flash). OpenAI resets limits reactively and bans fraud, burning cash fastest but retaining goodwill. Anthropic and Google explicitly block OpenClaw-style abuse to preserve capacity amid surging demand. Expect price hikes and shrinking token allowances as efficient models plus scale become the deciding factors. Opus leads for now, but rivals loom, as does a potential Anthropic Code Desktop. For serious work, pay API rates; the subsidies never promised third-party support.