Skip Agent Teams: Build Skills and Queue Tasks Instead

Paperclip's CEO-led agent hierarchy mimics human companies but is slow and overkill; author's workflow shifted to specialized agent skills, browser/computer access, and simple task queuing for reliable automation.

Orchestration Problems: Sub-Agents Beat Teams for Most Work

AI agent orchestration fails when agents must collaborate like office coworkers: communication and handoffs create latency and complexity. Claude Code's sub-agents solve this better by assigning narrow scopes to independent sessions that report back to a main agent, avoiding inter-agent chatter. Agent teams, pushed by tools like CrewAI (technical, a LangChain competitor for production pipelines), Vibe Kanban (Kanban-board Claude sessions), Mission Control (team access to agents), and Gasedown (unsupervised infinite runs), mimic human teams but burn tokens and drift off-task without oversight. Sub-agents are now 'older tech,' yet they handle 80% of orchestration needs without the overhead.
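The sub-agent pattern described above corresponds to Claude Code's agent definition files: a markdown file with YAML frontmatter that scopes an agent to a narrow job, its own context, and a limited toolset. A minimal sketch (the reviewer role and its instructions are illustrative, not from the source; check Claude Code's sub-agent docs for the exact supported fields):

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs and style issues. Use after code changes.
tools: Read, Grep, Glob
---

You are a code reviewer. Examine only the files you are given,
report findings as a short list, and do not edit anything.
Return a brief summary to the main agent when done.
```

The point of the pattern: the main agent delegates a bounded task, the sub-agent works in an isolated session, and only the summary flows back, so there is no inter-agent chatter to coordinate.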

Trade-off: teams enable complex outputs like software or reports via handoffs, but for solopreneurs or consulting work, independent agents scale faster. Author warns against hype: true breakthroughs like Claude or ChatGPT came from experiments, not dogma.

Paperclip's Hierarchy: Innovative UX but Slow Execution

Paperclip spins up local instances (npx paperclip onboard) connected via Tailscale for multi-computer access, creating 'zero-human' companies with a CEO agent that decomposes tasks into hires for specialized agents (e.g., engineers, PMMs). Key features include:

  • Adapter for any provider (Claude Code, Open Cloud gateways via invite codes).
  • Pre-made companies/skills from repos (e.g., agency teams).
  • Project boards like Linear/Jira: issues, routines (recurring workflows), cost tracking, org charts.
  • Live stdout monitoring, agent status.

Pros: clear separation of concerns (projects/issues per agent), reviewable dashboards, routines for repetition. Cons: it mimics human orgs unnecessarily; unlike humans, a single AI agent can pack multiple skills, so rigid one-role-per-agent assignments waste time. Setup is overcomplicated (e.g., full hiring plans) and execution is slow due to inference delays (use Opus fast mode or accept background processing). Author demoed a Syntax GTM company for AI PMM services; it scaffolded excessively before delivering.

Evolving Workflow: Skills + Simple Queuing Wins

Author tested sequentially: Nano Clone (minimal, no dashboard) → Paperclip (task explosion) → fb.dev (no AI assignment). Now prioritizes:

  1. Build domain skills (e.g., HubSpot admin manifest compiling knowledge).
  2. Grant access: Browser (still.dev, Anchor), computer control (new Computer Use API).
  3. Queue tasks via CLI/GUI to skilled agents in Claude sessions—one-shot outputs.
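The queuing step above can be sketched as a minimal FIFO task runner: each task names a skilled agent and a one-shot prompt, and the runner hands tasks to independent sessions in order. This is a sketch of the idea, not the author's planned tool; `run_agent` is a hypothetical placeholder for whatever actually launches a session (e.g., a headless Claude Code run):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Task:
    skill: str    # which skilled agent should handle this
    prompt: str   # one-shot instruction for the session

def run_agent(task: Task) -> str:
    # Placeholder: in practice this would launch a one-shot agent
    # session (e.g., a headless CLI run) and capture its output.
    return f"[{task.skill}] done: {task.prompt}"

def drain(queue: deque) -> list:
    """Process tasks in FIFO order, one independent session each."""
    results = []
    while queue:
        results.append(run_agent(queue.popleft()))
    return results

queue = deque([
    Task("hubspot-admin", "audit lifecycle stages"),
    Task("pmm", "draft launch one-pager"),
])
print(drain(queue))
```

Because each task is a self-contained one-shot, sessions never need to talk to each other; the queue itself is the only coordination, which is the "automation depth per agent, not breadth of fake teams" idea in miniature.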

Outcome: automates consulting (Syntax GTM's Consume offering) without orchestration overhead. Custom tools beat general platforms; author plans a simple task runner. Lesson: focus on automation depth per agent, not breadth of fake teams; great UX here still awaits an 'Open Claude moment' from an indie builder like Pieter Levels.

Summarized by x-ai/grok-4.1-fast via openrouter

8155 input / 1740 output tokens in 12368ms

© 2026 Edge