Unbundling Management: AI Automates Routing, Humans Own Sensemaking & Accountability
Management breaks into three bundles: routing (where AI excels), sensemaking (humans extracting signal from noise), and accountability (human ownership). Experiments at Kimi, Block, and Meta show flat structures speed execution but strain without all three, causing drift and burnout.
Management's Three Core Bundles and AI's Uneven Impact
Managers perform three bundled functions rooted in centuries of organizational history: information routing, sensemaking, and accountability/feedback. Routing, aggregating updates upward and cascading directives downward, is the most automatable and has historically consumed most manager time (from Roman legions to the railroads). AI excels here via synthesis and distribution: agents scanning 3,000 pieces of user feedback, interpreting multilingual sentiment, monitoring competitors, and generating roughly 70% of code in hours.
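To make the routing claim concrete, here is a minimal sketch of that aggregate-and-digest loop in Python. The `summarize` function is a hypothetical stand-in for whatever LLM API a team actually uses; nothing here comes from Kimi's stack.

```python
from collections import defaultdict

def summarize(texts: list[str]) -> str:
    """Placeholder for an LLM synthesis call (hypothetical; swap in a real API)."""
    return f"Synthesized digest of {len(texts)} items."

def route_feedback(feedback: list[dict]) -> dict[str, str]:
    """Group raw feedback by team and produce one digest per team,
    replacing the manager's aggregate-and-cascade loop."""
    by_team: dict[str, list[str]] = defaultdict(list)
    for item in feedback:
        by_team[item["team"]].append(item["text"])
    return {team: summarize(texts) for team, texts in by_team.items()}

digests = route_feedback([
    {"team": "mobile", "text": "App crashes on upload"},
    {"team": "mobile", "text": "Upload button unresponsive"},
    {"team": "billing", "text": "Invoice totals are off"},
])
for team, digest in digests.items():
    print(team, "->", digest)
```

The point of the sketch is the shape, not the model: once updates flow through a pipeline like this, the pure routing portion of a manager's day is the part that disappears first.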
Sensemaking filters noise into signal in both directions: distilling team realities for leadership (probing delay patterns beyond surface facts) and buffering corporate noise for teams. It resists AI because it depends on domain-specific experience and human-to-human depth. "The problem is not a shortage of information. The problem is a shortage of signal." Nate B. Jones emphasizes that new managers often lead with good news and must be trained to deliver honest signal, starting with surfacing bad news fast. Even at 10x AI improvement, sensemaking stays human-partnered, specializing toward human-agent alignment and strategic pivots.
Accountability/feedback enforces ownership and well-timed coaching, and is irreplaceable for long-term attachment (e.g., a PM owning a goal for 2+ years). AI assists with data synthesis but cannot simulate felt liability or nuanced mentorship. Jones notes, "Accountability is a very human thing... the manager is liable for how the team is performing." At 10x scale, AI partners on feedback routing, but the core remains human.
Tradeoffs: Compressing layers automates routing for speed but erases load-bearing sensemaking and accountability, producing the "slightly wrong" feeling that follows layoffs. Nearly half of US companies cut managers last year, chasing "flatter, leaner, faster" on AI hype without decomposing the role first.
Real-World Experiments: Speed Gains vs. Human Costs
Kimi (Moonshot AI, makers of Kimi K2): $16B valuation, 300 employees (average age under 30), zero hierarchy, titles, OKRs, or KPIs. AI agents route information (e.g., a PM's morning workflow: feedback → requirements → 70% of the code). Five cofounders handle sensemaking for roughly 50 direct reports each via constant direct communication. Accountability runs on self-reflection and an intense culture (employees cry in meetings over shortfalls), with hiring screens for self-directing "general purpose tool users."
Results: Extraordinary speed (launches compressed from days to hours). Costs: cognitive strain on the founders, mid/senior exits (3+ hires from big tech; one left the industry entirely), and a "weightlessness" causing anxiety, isolation, and drift. A former employee: "Some mornings you walk in and you just don't know what you should do. No one tells you whether you're doing well." Jones predicts competitive pressure will force an accountability layer as headcount passes 300.
Block (Jack Dorsey's DRI model): Directly Responsible Individuals own outcomes without middle layers, compressing management. (Details truncated, but positioned as distinct from Kimi's flatness and Meta's cuts.)
Meta (Zuckerberg's compression): mass manager layoffs to flatten the org, betting AI fills the routing gaps. Risks losing sensemaking buffers in heavily matrixed orgs.
All hit walls: Kimi drifts without accountability; Block's and Meta's compression overloads ICs because nothing was unbundled first. "If things have felt slightly wrong at work since then, you're not alone. You're not imagining it. And the company did remove something loadbearing."
Future-Proof Playbook: Decompose Before Cutting
Don't compress; decompose. Automate routing with AI/agents first (reduces meetings, enables agent-led flows). Retain humans for sensemaking (train for signal prioritization, pair with agents) and for accountability (cascade ownership, use AI to assist feedback). As agents proliferate, sensemaking specializes toward human-agent alignment and strategy.
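As a thought experiment, the decompose-first step could look like a crude audit that tags each recurring manager task with its bundle before anyone is cut. The keyword heuristic below is an illustrative assumption, not a method from the talk; a real audit would interview managers rather than grep calendars.

```python
from enum import Enum

class Bundle(Enum):
    ROUTING = "automate with AI agents"
    SENSEMAKING = "keep human, pair with agents"
    ACCOUNTABILITY = "keep human, AI-assist the feedback loop"

# Hypothetical keyword hints; tune against your own calendar/task data.
ROUTING_HINTS = {"status", "update", "report", "forward", "schedule"}
SENSEMAKING_HINTS = {"why", "pattern", "prioritize", "context", "escalate"}

def classify_task(description: str) -> Bundle:
    """Tag a recurring manager task with the bundle it belongs to."""
    words = set(description.lower().split())
    if words & ROUTING_HINTS:
        return Bundle.ROUTING
    if words & SENSEMAKING_HINTS:
        return Bundle.SENSEMAKING
    return Bundle.ACCOUNTABILITY  # default: ownership and feedback stay human

for task in ("Forward weekly status report",
             "Probe why the launch slipped",
             "Deliver quarterly performance feedback"):
    bundle = classify_task(task)
    print(f"{task}: {bundle.name} -> {bundle.value}")
```

Only the tasks that land in ROUTING are candidates for automation; the other two buckets mark the headcount that is load-bearing.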
Even agent-led firms may falter without trust-building humans; the market will test commoditized vs. high-trust categories by 2026. Jones: "Ultimately, I think that we should expect all three management functions to be handled in companies of the future. I think you need information routing. I think you need accountability. I think you need the ability to sensemake. And I think if you compromise on any of those three, you see culture strain."
For leaders: Audit the bundles before layoffs. Screen for self-starters only if betting on an AI scale-up. Build durable teams by unbundling, not slashing.
Key Takeaways
- Decompose management into routing (AI-automate), sensemaking (human-filter noise), accountability (human-enforce ownership) before flattening.
- Use AI agents for routing: Scan feedback, generate code/docs—cuts days to hours, as at Kimi.
- Train managers for sensemaking: Prioritize bad news fast; probe patterns beyond facts.
- Retain accountability to avoid drift; self-reflection works short-term but scales poorly past ~50 reports per leader and ~300 people.
- Watch the experiments: Kimi's speed and its casualties show what a flat, AI-first bet looks like; expect accountability layers to be added under pressure.
- Partner AI with humans: 10x intelligence assists, doesn't replace felt liability or deep context.
- Audit post-layoff: If work feels "wrong," restore missing bundles to rebuild signal and trust.
- For ICs/managers: Own signal delivery; expect evolution to human-agent sensemaking roles.
Notable Quotes:
- "The problem is not a shortage of information. The problem is a shortage of signal." (Jones on sensemaking—reveals why AI context layers fall short without human filtering.)
- "Some mornings you walk in and you just don't know what you should do. No one tells you whether you're doing well." (Kimi ex-employee—highlights accountability void's daily toll.)
- "If things have felt slightly wrong at work since then, you're not alone. You're not imagining it. And the company did remove something loadbearing." (Jones intro—validates post-layoff unease as structural loss.)
- "Accountability is a very human thing... the manager is liable for how the team is performing." (Jones on feedback—why AI can't simulate long-term ownership yet.)
- "Ultimately... if you compromise on any of those three, you see culture strain." (Jones verdict—core insight: All bundles needed for scale.)