Symphony: Agents Autonomously Manage Tasks from Linear

OpenAI's Symphony spec lets Codex agents pull open tickets from Linear, work independently to completion, and file their own issues, boosting merged PRs sixfold in three weeks by removing human micromanagement from the loop.

Eliminate Human Bottlenecks by Letting Agents Pull Work

OpenAI identified human attention as the key limiter in scaling AI agents: developers could juggle only 3-5 parallel Codex sessions before context switching killed productivity. Symphony flips the workflow so that, instead of being micromanaged, agents monitor task trackers like Linear, claim unblocked "Todo" tickets, advance them through the "In Progress," "Review," and "Merging" states, and restart if they crash. This turns Linear into a state machine in which agents handle routine tasks in parallel, including multi-repo PRs, research, and analysis work that involves no code. Product managers submit feature requests directly and receive review packages with video walkthroughs, without ever checking out a repo.
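The ticket lifecycle above can be sketched as a small state machine. This is a minimal Python illustration using an in-memory dict as a stand-in for a Linear board; the real workflow talks to Linear's API, and the ticket IDs and helper names here are hypothetical:

```python
# Ticket states Symphony drives agents through, as described in the article.
STATES = ["Todo", "In Progress", "Review", "Merging", "Done"]

def next_state(state):
    """Advance a ticket one step through the pipeline, stopping at Done."""
    i = STATES.index(state)
    return STATES[min(i + 1, len(STATES) - 1)]

def run_agent(board):
    """Claim the first Todo ticket and drive it to Done.

    Because progress is recorded on the board at every step, a crashed
    agent can restart and resume from the ticket's last recorded state.
    """
    for ticket_id, state in board.items():
        if state != "Todo":
            continue
        board[ticket_id] = "In Progress"  # claim the ticket
        while board[ticket_id] != "Done":
            board[ticket_id] = next_state(board[ticket_id])
        return ticket_id
    return None

board = {"ENG-101": "Todo", "ENG-102": "Review"}
run_agent(board)  # ENG-101 ends up "Done"; ENG-102 is untouched
```

Keeping all state on the board, rather than inside the agent process, is what makes crash-and-restart cheap: any fresh agent can pick up where the last one left off.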

Agents also spot ancillary issues, such as performance bugs or needed refactors, and file new tickets autonomously, enabling opportunistic improvements without derailing the main task. Tickets respect blocking dependencies; for example, a React upgrade waits until a Vite migration is done. To leverage LLM reasoning, assign high-level goals rather than rigid steps: provide tools and context and let the models "cook," as OpenAI's team advises. This approach adapts as improving models tackle larger problems than the initial templates anticipated.
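The dependency gating can be expressed as a simple claimability check. This is a hypothetical sketch (field names like `blocked_by` and the ticket IDs are assumptions, not Linear's actual schema), mirroring the React-waits-for-Vite example:

```python
def is_claimable(ticket, board):
    """A ticket is claimable only when it is Todo and every blocker is Done."""
    return ticket["state"] == "Todo" and all(
        board[dep]["state"] == "Done" for dep in ticket["blocked_by"]
    )

board = {
    "VITE-1":  {"state": "In Progress", "blocked_by": []},
    "REACT-2": {"state": "Todo",        "blocked_by": ["VITE-1"]},
}

is_claimable(board["REACT-2"], board)  # False: blocked until VITE-1 is Done
board["VITE-1"]["state"] = "Done"
is_claimable(board["REACT-2"], board)  # True: blocker cleared, agents may claim
```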

Simple Spec-Driven Implementation Scales Across Languages

At its core, Symphony is a Markdown SPEC.md defining the problem and solution, plus a WORKFLOW.md outlining steps such as accepting tickets, checking out repos, attaching PRs and videos, and updating status. Agents implement the workflow themselves; no complex monitoring system is needed. OpenAI's Elixir reference implementation handles concurrency well, and Codex generated one-shot ports to TypeScript, Go, Rust, Java, and Python for validation.
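A hypothetical WORKFLOW.md might look like the sketch below, built only from the steps the article names; the actual spec's wording and structure are not shown here:

```markdown
# WORKFLOW.md (illustrative sketch)

1. Poll the tracker for unblocked "Todo" tickets and claim one.
2. Move the ticket to "In Progress" and check out the relevant repo(s).
3. Implement the change, open a PR, and record a video walkthrough.
4. Attach the PR and video to the ticket and move it to "Review".
5. After approval, move the ticket to "Merging".
6. File new tickets for any ancillary issues discovered along the way.
```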

Editing WORKFLOW.md propagates process changes instantly. Symphony ships as an open-source reference, not a maintained product, so teams can fork it easily, e.g., adapting it for Anthropic's Claude Code with GitHub Issues as the tracker. It pairs with OpenAI's ChatGPT workspace agents for team automation that runs offline via Slack.

Measurable Gains with Clear Task Boundaries

Internal OpenAI teams saw merged pull requests increase sixfold in the first three weeks, and Linear reported spikes in new workspaces after the release. Use Symphony for routine, well-defined work, freeing humans for the ambiguous problems that require judgment and are better handled in interactive sessions. Avoid overapplying it: Symphony is a workload absorber, not a universal replacement.

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge