Conway Leak: Anthropic's Always-On Agent Trap
Anthropic's leaked Conway agent creates behavioral lock-in by accumulating a persistent model of your work patterns, making a switch costlier than any data migration. It caps a 90-day platform strategy that mirrors Microsoft's path to enterprise dominance.
Conway Builds Persistent Behavioral Models
Conway runs as a standalone sidebar agent environment, separate from Claude chat, with search, chat, and system sections. The system area includes an extensions directory for custom tools, interface panels, and info handlers (packaged as CNW.zip); connectors to services like Claude and Chrome; and automatic triggers that wake the agent when public web pings signal an event. After six months of observation, it drafts email and Slack responses based on learned patterns (flagging VP emails, pulling design docs for replies, prepping board meetings with dashboard data), all without user input. The value is speed and iteration, even at roughly one error in three: it prioritizes surfacing signal in workplace noise over perfection.
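The trigger mechanism above can be sketched as a small event router: an inbound web ping wakes the agent, which dispatches the event to a registered handler. Every name here (EventRouter, on_event, the payload fields) is an illustrative assumption, not the leaked format.

```python
import json

class EventRouter:
    """Hypothetical Conway-style trigger dispatch: pings in, actions out."""

    def __init__(self):
        self.handlers = {}

    def on_event(self, event_type):
        # Decorator that registers a handler for one event type.
        def register(fn):
            self.handlers[event_type] = fn
            return fn
        return register

    def dispatch(self, raw_ping):
        # A public web ping arrives as a JSON payload with a "type" field.
        event = json.loads(raw_ping)
        handler = self.handlers.get(event["type"])
        if handler is None:
            return None  # unknown events are dropped, not errored
        return handler(event)

router = EventRouter()

@router.on_event("email.received")
def draft_reply(event):
    # A real agent would consult its behavioral model here; this stub
    # just flags VP mail, per the observed-pattern example in the leak.
    priority = "flag" if event.get("sender_role") == "vp" else "queue"
    return {"action": priority, "sender": event.get("from", "")}

result = router.dispatch(json.dumps(
    {"type": "email.received", "from": "vp@example.com", "sender_role": "vp"}
))
print(result)  # {'action': 'flag', 'sender': 'vp@example.com'}
```

The design point is the inversion: the user never initiates; registered handlers fire on external events, which is what "always-on" means in practice.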
This gap between demos (flawless) and reality (needs babysitting) favors fast, proactive agents that compound knowledge over time. Conway becomes an 'Active Directory' for AI: it knows your organization deeply.
90-Day Platform Strategy Locks Multiple Surfaces
Anthropic executed across five surfaces in 90 days: Claude Code (the dev tool); Code Channels (Discord/Telegram notifications, neutralizing OpenClaw); Claude Co-Work (for the 95% of enterprise users who aren't engineers, now outpacing Code adoption); Claude Marketplace (procuring partner apps like GitLab, Harvey, and Snowflake against spend commitments, with no commission); and the $100M Claude Partner Network (Accenture training 30,000 professionals, anchored by Deloitte, Cognizant, and Infosys). Enforcement blocks third-party tools from subscription plans, pushing them to pay-per-use rates 10-50x higher; it rolled out after the OpenClaw ban.
This mirrors Microsoft's 15-year arc (DOS to Active Directory and Exchange), speedrun in 15 months: model provider → dev tool → enterprise platform → agent OS. Conway caps the sequence and makes the stack sticky.
Proprietary Extensions Undermine Open MCP
MCP (Anthropic's open standard for AI-data connectors, adopted by OpenAI, Google, and the Linux Foundation) forms the base, but Conway layers proprietary CNW.zip extensions on top: non-portable, Conway-only tools with built-in app store discovery. Developers face a choice: portable MCP tools with no distribution channel, or Conway extensions with instant store access to millions of subscribers.
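To make the portability gap concrete, here is a sketch of what CNW.zip packaging might look like. The leak names the CNW.zip format but not its contents, so the manifest fields, file layout, and runtime contract below are all assumptions for illustration.

```python
import io
import json
import zipfile

# Hypothetical CNW.zip extension bundle: a manifest plus an entry script.
# None of these field names are confirmed by the leak.
manifest = {
    "name": "board-prep",
    "kind": "tool",                   # leak mentions tools, panels, info handlers
    "entry": "main.py",
    "store": {"discoverable": True},  # the built-in app store discovery hook
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as cnw:
    cnw.writestr("manifest.json", json.dumps(manifest))
    cnw.writestr("main.py", "def run(ctx):\n    return ctx['dashboard']\n")

# Unlike an MCP server, nothing here is portable: the manifest schema,
# the discovery flag, and the run(ctx) contract exist only inside Conway.
with zipfile.ZipFile(buf) as cnw:
    print(cnw.namelist())  # ['manifest.json', 'main.py']
```

The zip itself is trivial to build; the lock-in lives in the proprietary schema and store, exactly the Play Services pattern the next section describes.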
The pattern echoes Android (open kernel, proprietary Play Services) and the iPhone (web apps vs. the App Store): the app stores won, pulling each ecosystem proprietary. The OpenClaw playbook repeats here: copy the product (Claude Code, Channels), subsidize first-party, penalize third-party, ship a proprietary format. After Peter Steinberger joined OpenAI (Feb 14), enforcement accelerated.
Behavioral Lock-In Defies Data Portability
Traditional lock-in (files, CRM records, communications) can be migrated in months for $10k+ via exports and consultants. Conway locks in how you work: response patterns (the email you answer in five minutes versus the one you ignore for three days), rescheduling habits, the nuances of handling each VP. That behavioral context is built from data plus compute plus six months of inference, and none of it is exportable. There is no CSV or API for the 'model of you'; switching resets you to a 'brilliant stranger'.
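If the behavioral export the next paragraph calls for existed, it would only need to be serializable state a rival agent could re-import. The schema label and every field name below are invented to illustrate that point; no such format ships today, which is the lock-in.

```python
import json

# Hypothetical behavioral-context export, not a real Conway or MCP format.
behavioral_export = {
    "schema": "behavioral-context/v0",  # assumed community-versioned standard
    "response_patterns": [
        {"contact_role": "vp", "median_reply": "5m"},          # answered fast
        {"contact_role": "newsletter", "median_reply": "3d+"}, # usually ignored
    ],
    "habits": [{"kind": "reschedule", "trigger": "friday_pm"}],
}

# Portability is just a clean serialization round-trip plus a published schema.
blob = json.dumps(behavioral_export)
assert json.loads(blob) == behavioral_export  # round-trips with no loss
print(json.loads(blob)["schema"])
```

The technical bar is low; the missing piece is a vendor incentive to expose the endpoint, which is why the text argues for standards set before launch.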
Laws cover data portability, not intelligence portability. One fix: community standards and policies for behavioral export set before launch (e.g., a 'skill' that surfaces your model back to you). Competition in 2026 shifts to persistence: who owns the always-on memory that wakes on events and acts autonomously. Enterprises choosing now face an irreversible trade: proprietary convenience (Conway et al.) versus open layers (e.g., an Open Brain MCP server). Convenience will likely win for professionals and consumers (Claude plans are gaining), but portability and privacy may sway enterprises. Promotions, teams, and businesses now hinge on betting early on one contender.