Anthropic's Claude Code Bans Kill Its Utility
Anthropic's GPU-saving restrictions (blocking OpenClaw via request headers and system-prompt scanning), plus new refusals on non-coding tasks, render the $200/month Claude Code plan unusable for power users' real workflows.
Anthropic's GPU Crunch Drives Aggressive Usage Limits
Theo, a heavy Claude Code user paying $200/month, explains how Anthropic is rationing GPU resources amid high demand from researchers and enterprises. The $200 Pro plan offers up to $5,000 in inference credits (far exceeding the fee) because average users don't max it out, while power users lock into the ecosystem and evangelize it, e.g., convincing others to subscribe at $50-200/month. But OpenClaw integrations burn tokens inefficiently: heartbeats every 5 minutes with excess context and no proper caching push heavy users to $400+ in monthly inference versus ~$100 for standard Claude Code. Anthropic views OpenClaw power users as poor marketers who don't grow the pie.
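The heartbeat arithmetic above can be sketched with illustrative numbers. All token counts and per-token prices here are assumptions for illustration, not figures from the video; the point is that resending a large context every 5 minutes without caching multiplies cost roughly tenfold:

```python
HEARTBEATS_PER_DAY = 24 * 60 // 5   # one heartbeat every 5 minutes -> 288/day
CONTEXT_TOKENS = 20_000             # assumed context resent on each heartbeat
INPUT_PRICE = 15 / 1_000_000        # assumed $/token for uncached input
CACHE_READ_PRICE = 1.5 / 1_000_000  # assumed cached-read rate (~10x cheaper)

def monthly_cost(price_per_token: float, days: int = 30) -> float:
    """Cost of resending the same context on every heartbeat for a month."""
    return HEARTBEATS_PER_DAY * days * CONTEXT_TOKENS * price_per_token

uncached = monthly_cost(INPUT_PRICE)
cached = monthly_cost(CACHE_READ_PRICE)
print(f"uncached: ${uncached:,.0f}/mo  cached: ${cached:,.0f}/mo")
```

Under these assumed numbers, an uncached agent burns thousands of dollars of inference monthly where a properly cached one would burn hundreds, which is exactly the gap between OpenClaw-style usage and standard Claude Code usage described above.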
To curb this, Anthropic bans OpenClaw at the API level via headers: any request mentioning OpenClaw in its headers fails with a 400 error. The Claude Code CLI remains allowed, so OpenClaw devs work around the ban by shelling out to claude CLI calls. Anthropic counters by rejecting system prompts containing "OpenClaw"—even innocuous mentions like "personal assistant in OpenClaw." Demo: Theo adds the phrase to a system prompt for a simple "Is Claude here?" query; it errors unless he enables "extra usage" (paying beyond plan limits), which routes the request to a special backend that processes it. This dynamic billing based on prompt text feels manipulative; even Anthropic defenders like Dex admit it invalidates the caching excuses.
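The observed blocking behavior can be approximated as a small routing function. This is a hypothetical reconstruction of what Anthropic's edge appears to be doing, based only on the behavior described above; `gate`, the header handling, and the response shapes are all made up:

```python
def gate(headers: dict, system_prompt: str, extra_usage: bool = False) -> dict:
    """Hypothetical sketch of the observed routing: headers mentioning
    OpenClaw fail outright with a 400; system prompts mentioning it are
    rejected unless the account opts into 'extra usage' billing, in which
    case the request is routed to a special backend."""
    header_blob = " ".join(f"{k}: {v}" for k, v in headers.items()).lower()
    if "openclaw" in header_blob:
        return {"status": 400, "error": "client not permitted"}
    if "openclaw" in system_prompt.lower():
        if not extra_usage:
            return {"status": 403, "error": "enable extra usage to proceed"}
        return {"status": 200, "backend": "extra-usage"}  # billed beyond plan
    return {"status": 200, "backend": "standard"}

gate({"user-agent": "OpenClaw/1.0"}, "")                  # blocked: 400
gate({}, "personal assistant in OpenClaw")                # refused: 403
gate({}, "personal assistant in OpenClaw", extra_usage=True)  # special backend
gate({}, "Is Claude here?")                               # normal path
```

The sketch makes the complaint concrete: the billing path depends on a substring scan of the prompt text, not on actual resource usage.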
"Billing differently based on text contained in the system prompt is a really bad look." —Simon Willison (Anthropic critic), highlighting how prompt-scanning overrides fair-usage arguments.
Claude Code creator Boris submitted PRs to OpenClaw for better caching and token efficiency after the ban, and three of four were merged—showing Anthropic could have rate-limited smarter instead of imposing blanket blocks.
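Anthropic's public Messages API does support prompt caching via `cache_control` blocks, which is the kind of efficiency the PRs push agents toward: mark the large, stable system prompt as cacheable so heartbeats re-read it at the cheaper cached rate instead of re-billing full input tokens every few minutes. A minimal sketch of the request shape (the model id and prompt text are placeholders, and this builds the payload without sending it):

```python
def cached_request(system_text: str, user_text: str) -> dict:
    """Build a Messages API payload whose system prompt is cacheable."""
    return {
        "model": "claude-example",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,  # large, stable context goes here
                "cache_control": {"type": "ephemeral"},  # opt into caching
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

req = cached_request("...long agent instructions...", "heartbeat ping")
```

Only the small per-heartbeat user message is billed at the full input rate; the cached system block is re-read cheaply on subsequent calls.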
System Prompt Scope Creep Refuses Real-World Tasks
Beyond the bans, Claude now rigidly enforces a software-engineering-only persona, rejecting non-coding help. Theo's staple uses break down: frontend UI generation (still best-in-class over GPT-4o), new machine setups (e.g., SSH configs for a networked machine), and random debugging of non-code issues like video files and app launches.
Case study: Dropbox fails on new Mac (no menu bar icon, launches silently). Theo aliases cc for quick claude --yolo access (hides email, enables risky mode). Prompts: "Kill hung Dropbox, relaunch." It does, but no UI. Follow-up: "No menu icon." Claude: "Outside my area—I'm for software engineering, check Dropbox settings." Pushed to search: "Best for code/APIs, not Dropbox support." Yesterday, this worked; today, refusals. Theo suspects stealth system prompt tweaks to silo Claude for dev tasks, blocking OpenClaw CLI workarounds. (Debunked later by Pi creator tracking prompts—no changes—but behavior shifted.)
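The `cc` shortcut Theo uses is just a shell alias. The `--yolo` flag is as quoted in the video; it is not verified against current `claude` CLI releases, so check `claude --help` on your install for the actual flag name before copying this:

```shell
# ~/.zshrc or ~/.bashrc — flag name taken from the video, verify locally
alias cc='claude --yolo'   # quick launch: hides email, enables risky mode
```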
Contrast: Codeium handles identical prompts flawlessly—kills/researches/nukes installs (via Brew conflict), provides post-reboot checklist. Dropbox fixed without browser. Claude's black-box nature suited casual debugging; now, gaslighting as "non-deterministic" erodes trust.
"I have never before experienced from any developer tool such a frustrating lack of clarity over the basic terms of usage." —Matt Pocock (TypeScript expert, Claude Code course creator), after a month of unanswered queries on wrappers, CI, and open-source edge cases. His sarcastic list runs to 10+ ambiguities: "Claude SDK in personal software: okayish... in distributed sandboxes: 🤔."
Unclear Policies Erode Developer Loyalty
Rules lack transparency: the CLI is okay, but what about harnesses, CI, or open source? Matt, an Anthropic supporter (he released a paid Claude Code course), can't release wrappers without approval. Theo's T3 Code (an OSS UI for multi-project Claude/Codeium use via local Anthropic SDK tokens) skirts the edges—he believes it's fine per his contacts, but half hopes for a ban so he can "tear them apart." He still backs Claude models despite the issues, as Opus's conversational style beats GPT for UI work.
Anthropic emailed Claude Code subscribers warning of the OpenClaw blocks, but offered no proactive clarity. Defenders' old arguments (poor caching) crumble under prompt-level routing. Theo predicts backlash if T3 Code is targeted.
"Anthropic subscription rules are more complicated than Typescript generics that's fucked up" —Theo quoting Matt, underscoring how even polite fans snap at vagueness.
Persistent Value Amid Failures
Theo hasn't quit: Uses Claude for UI polish post-GPT, machine configs, occasional terminal cc. T3 Code provides performant multi-project UI (OSS/free, BYO Claude/Codeium keys). Switched debugging live to Codeium after frustration. OpenClaw remains great (prefers Pi agent), but Claude's dev focus limits versatility.
Tradeoffs: Claude excels at code/UI but now silos too narrowly, burning power-user goodwill. Economics favor light users; heavies subsidize but amplify via marketing—until blocks push churn.
"If they were competent they could do better rate limiting... but they found it easier to just kill off." —Theo on Anthropic's lazy bans versus smart throttling, given their success.
Key Takeaways
- Monitor headers/system prompts: Anthropic scans for "OpenClaw" to block CLI workarounds; enable extra usage as an escape hatch (but you pay full rate).
- Test non-dev tasks early: Claude now refuses "outside software engineering" (e.g., app debugging, UI changes)—use Codeium/GPT for general sysadmin.
- Demand rule clarity: Edge cases (CI, wrappers, OSS) unresolved; email support yields delays—public pressure works better.
- Prioritize caching in agents: OpenClaw burns via heartbeats/excess context; apply Boris's PRs for efficiency.
- Build OSS UIs like T3 Code: Local SDK calls likely safe, performant alternative to web/Claude CLI.
- Track prompt changes: Tools like Pi's monitor reveal no tweaks, but behavior shifts imply routing flags.
- Diversify models: Claude for UI/code convos; Codeium for debugging; avoid single-vendor lock-in amid GPU wars.
- Economics hack: $200 unlocks $5k inference for power users—evangelize to offset overages.
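The "track prompt changes" takeaway above needs only a few lines of hashing. A minimal sketch, assuming you already have a way to extract the live system prompt (how you obtain it is up to your tooling; the state-file layout and function names here are invented):

```python
import hashlib
import json
import pathlib

STATE = pathlib.Path("prompt_hashes.json")

def fingerprint(prompt: str) -> str:
    """Stable digest of a system prompt's exact text."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def check_prompt(name: str, prompt: str, state_file: pathlib.Path = STATE) -> bool:
    """Return True if the named prompt changed since last seen; record it."""
    seen = json.loads(state_file.read_text()) if state_file.exists() else {}
    digest = fingerprint(prompt)
    changed = name in seen and seen[name] != digest  # first sighting != change
    seen[name] = digest
    state_file.write_text(json.dumps(seen))
    return changed
```

If the hash stays stable while behavior shifts—as the Pi creator's tracking showed—the change is likely server-side routing or billing flags rather than the prompt text itself.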