AI Turns Competitive Edge into Average Baseline
AI delivers productivity gains today (2-3x output) but erodes differentiation as everyone adopts the same models and automations, converging on efficient commodity work unless companies go AI-native.
AI's Short-Term Boost Masks Long-Term Convergence
Marco van Hurne, an AI-first company builder, argues AI currently amplifies output—his own rose 2-3x in research and communication—via faster responses, personalization, and process automation, promising revenue lifts and efficiency gains into 2026. This mirrors past waves: servers in the 1960s-70s, client-server computing, internet-driven operations. But unlike those, AI's "age of intelligence" enables reasoning, research, planning, and decision-making, automating messy workflows with agents and robotics.
The pivot: uneven adoption creates today's edge. Early movers like JPMorgan, Walmart, and Delta run production AI automation; others toy with pilots. Yet by 2030, per his thought experiment, back offices vanish into agent-orchestrated, human-supervised flows. Everyone accesses identical frontier models from the same handful of vendors, yielding uniform research speed, planning, and "strategic frameworks." Creativity converges too: generative models excel at variations but falter on ruptures like relativity or the iPhone, pulling outputs toward the dense center of their training data.
"AI is going to make a lot of companies less competitive," van Hurne states upfront, as baseline intelligence commoditizes, leaving speed, cost, and branding—"a race to become a commodity with a logo."
AI-First Bolt-Ons vs. AI-Native Redesign
Van Hurne contrasts adoption modes. AI-first shoves tools into human workflows for quick gains but breeds chaos. AI-native rebuilds operating models around intelligence as infrastructure: decompose work into machine tasks, human oversight, feedback loops, with governance, resilience, audit logging baked in—like core IT.
"AI native does not mean 'we bought Copilot licenses', but it means that the operating model is designed around intelligence as infrastructure," he clarifies. Tradeoff: redesign demands upfront effort but sustains edge; bolt-ons deliver fast but collapse under scale.
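The operating-model idea above can be made concrete. A minimal sketch, under the assumption of a generic task pipeline (the names `AuditLog`, `run_task`, and the step callbacks are illustrative, not from van Hurne's text): work is decomposed into a machine step, a human oversight gate, and a feedback loop, with audit logging baked in like core IT.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Append-only log so every machine output and human decision is traceable."""
    entries: list = field(default_factory=list)

    def record(self, event, detail):
        self.entries.append({"ts": time.time(), "event": event, "detail": detail})

def run_task(machine_step, review_step, feedback_step, payload, log):
    """One unit of work: machine does, human oversees, corrections feed back."""
    draft = machine_step(payload)                      # machine task
    log.record("machine_output", draft)
    approved, notes = review_step(draft)               # human oversight gate
    log.record("human_review", {"approved": approved, "notes": notes})
    if not approved:
        feedback_step(notes)                           # feedback loop into the model/prompt
        log.record("feedback", notes)
    return draft if approved else None
```

The point of the sketch is the shape, not the stubs: governance and auditability are structural, not bolted on after the fact.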
To avoid averaging, reject first-plausible outputs. Van Hurne references his "Wanderer’s algorithm" paper, modeling creativity via neurodivergence/ADHD math to escape probability mass toward true breakthroughs—unlike models defaulting to safe averages.
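The underlying mechanic can be illustrated with a toy sampler. This is only a sketch of the principle (flattening a probability distribution so low-probability candidates survive), not van Hurne's actual Wanderer's algorithm; the function name and parameters are hypothetical.

```python
import random

def wander_sample(options, probs, temperature=2.0, rng=random.Random(0)):
    """Sample away from the probability mass instead of taking the mode.

    Raising each probability to 1/temperature with temperature > 1 flattens
    the distribution, giving rare "rupture" candidates a real chance rather
    than always converging on the dense center of the training data.
    """
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return rng.choices(options, weights=[w / total for w in weights], k=1)[0]
```

With `temperature=2.0`, an option carrying 10% of the mass roughly doubles its relative weight; greedy selection (the "first plausible output") would never pick it at all.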
Hybrid Stack Automates Processes in Layers
2026 brings breakthroughs fusing agentic AI (planning and acting), classic RPA (screen/bot automation), and browser-based RPA (UI navigation, like ChatGPT's Atlas). Processes decompose into five layers, illustrated via regulated employee onboarding (target: day-one productivity with compliance).
- L1 Value Stream: Outcome definition (e.g., access by 09:00 day one). AI-copilot drafts checklists; humans accountable.
- L2 Process Chains: Rules/exceptions (e.g., contractors get time-boxed access, high-risk triggers checks). AI translates policy to decision tables, tests edges.
- L3 Subprocesses: Orchestration (HR record → IAM → mailbox → laptop ship → trainings). Agentic AI coordinates APIs/tools, monitors SLAs; RPA for legacy.
- L4 Task Execution: App-level actions (ServiceNow tickets, Azure AD users, MDM devices). RPA/browser-RPA handles provisioning across silos.
- L5 Interface: Pixel-clicks (e.g., IAM portal paths). Browser-RPA thrives on brittle UIs, adapts to drift.
Hybrid wins: copilots L1/L2, agents L2/L3, browser-RPA L4/L5, classic RPA legacy. Result: back-office as vending machine (invoice in, processed out). Pitfall: UI redesigns break it; gains demand guardrails.
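The five layers above can be sketched as a skeleton, assuming the onboarding example from the article. Every `l*_` function is a hypothetical stub standing in for a copilot draft, an agent call, or an RPA bot; none of these names come from a real system.

```python
def l1_define_outcome():
    # L1 Value Stream: outcome definition, copilot-drafted, human-accountable
    return {"goal": "access by 09:00 day one", "compliance": True}

def l2_apply_rules(employee):
    # L2 Process Chains: policy as a decision table, e.g. contractors
    # get time-boxed access
    if employee["type"] == "contractor":
        employee["access_expiry_days"] = 90
    return employee

def l3_orchestrate(employee):
    # L3 Subprocesses: agentic layer coordinates tools/APIs and monitors SLAs
    steps = [l4_create_ticket, l4_provision_identity, l4_ship_laptop]
    return [step(employee) for step in steps]

# L4 Task Execution: app-level actions, handled by RPA/browser-RPA in practice
def l4_create_ticket(e):      return f"ticket for {e['name']}"
def l4_provision_identity(e): return f"identity for {e['name']}"
def l4_ship_laptop(e):        return f"laptop for {e['name']}"

# L5 Interface (pixel-level browser-RPA) would live inside the l4_* stubs,
# clicking through portals that expose no API.
```

Running `l3_orchestrate(l2_apply_rules({"name": "Ada", "type": "contractor"}))` walks the whole stack: the "vending machine" effect is that the caller only ever sees the L1 outcome.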
Van Hurne notes, "The browser is quickly becoming a universal adapter for systems nobody wants to integrate properly."
Interface Traps Accelerate Averaging
Chat interfaces—the "chat coffin"—worsen convergence, rewarding quick accepts of plausible outputs under pressure, shaping behavior toward average. With system access (CRM, workflows), work funnels into approving/nudging. Antidote: Generative UI for spatial/structured tasks over linear chats.
"If you accept the first good looking output, you drift toward the average with frightening efficiency," he warns, tying to plateau where "Good Enough" reigns.
Key Takeaways
- Go AI-native: Redesign operating models around intelligence infrastructure, not bolt-ons, to avoid chaos and sustain differentiation.
- Layer automations strategically: Use copilots for L1/L2 thinking, agents for L3 orchestration, RPA/browser-RPA for L4/L5 execution.
- Hybrid stacks rule 2026: Fuse agentic AI, classic RPA, browser-based RPA for back-office collapse (e.g., onboarding vending machines).
- Combat convergence: Reject model averages; pursue "ruptures" via techniques like Wanderer’s algorithm for breakthrough creativity.
- Ditch chat coffins: Shift to Generative UI to escape linear traps enabling lazy accepts.
- Act now on uneven adoption: Productionize before plateau hits, as shared brains commoditize reasoning/planning.
- Measure beyond 1950s metrics: Track full workflow collapse, not just task speed.
- Prepare for post-human back office: Humans oversee agents, but roles evolve to outcomes over clicks.