Agent Brain Trust: Dialectic Prompts as Reusable Expert Panels

Evolve one-off dialectic prompts into modular 'brain trusts': standing casts of real experts placed in plausible settings, governed by an enforced protocol with bounded guest drafting. These rooms run structured debates that expose trade-offs and prevent skipped steps and invented authority.

Cast Real Experts in Plausible Settings to Anchor Authentic Debate

Use named real figures with known stances, such as Byrd, Alvaro, and Sussman for software systems, in concrete scenarios like a Strange Loop hallway, rather than generic personas or bullet-point system prompts. This licenses the model to stay in those experts' registers, avoiding both generic advice and fan fiction. Outliers, like Escher in software or Lanier in org design, push boundaries and ensure diverse priors. Tension arises from good-faith clashes, not forced roles. The outcome: responses that sound like the experts and challenge assumptions without collapsing into flattery.

Enforce Protocol with Turn-Taking and No-Skip Rules

Structure debates via explicit turns: Readings (one-sentence summaries per guest), Inquiry, Value Constraints, Trajectory, Tension Axes, Cohort Construction (groups straddling trade-offs), Position, Rebuttal, Refine, and Synthesis. Mandatory pre-debate steps draft an Expert Witness and a Designated Challenger from a bounded roster of roughly 80 persona cards via an MCP taxonomy, preventing improvised fakes. Cohorts can import domain-specific guests (e.g., the writing room drafts an agent-systems expert). The Chair proposes a dig depth and a success shape for user confirmation. Synthesis names the viewpoints that were sacrificed and why, e.g., 'vague consensus traded for inspected trade-offs.' The trade-off: a rigid protocol is easier to loosen than to add later, and it blocks polite models from skipping contestable steps such as domain checks or open disagreement.
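The no-skip rule above can be sketched as a small sequencer, shown here as a hypothetical TypeScript illustration (the phase names come from the article; the class and its API are assumptions, not the system's real implementation). Each phase must complete in order, and an out-of-order phase fails loudly instead of silently advancing:

```typescript
// Phase names as listed in the protocol; "as const" makes them a literal tuple.
const PHASES = [
  "Readings", "Inquiry", "Value Constraints", "Trajectory",
  "Tension Axes", "Cohort Construction", "Position",
  "Rebuttal", "Refine", "Synthesis",
] as const;

type Phase = (typeof PHASES)[number];

// Hypothetical sketch: a sequencer that enforces the no-skip rule by
// throwing on any attempt to complete a phase out of order.
class DebateProtocol {
  private next = 0; // index of the phase that must run next

  complete(phase: Phase): void {
    const expected = PHASES[this.next];
    if (phase !== expected) {
      throw new Error(
        `Protocol violation: expected "${expected}", got "${phase}"`,
      );
    }
    this.next += 1;
  }

  get done(): boolean {
    return this.next === PHASES.length;
  }
}
```

The point of the throw, rather than a warning, is the same as the article's: a polite model (or user) cannot quietly jump from Readings to Position without the contestable middle steps.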

Modular System Delivers 10 Domain-Specific Trusts

A monorepo architecture separates content (YAML skills, shared protocol fragments, personas, a topic-to-expert taxonomy) from builds that generate Cursor/Claude plugins, an MCP server, and per-skill zips. Install via npm scripts or releases; rooms attach organically to natural-language descriptions (e.g., 'real-time whiteboard CRDTs vs OT' triggers bt-software-systems-workshop) or via slash command. Two profiles: eight technical workshops (architecture, patterns, org design, UX, etc.) converge on decisions, while two editorial rooms (technical writing, visual communication) sharpen drafts without overriding intent. A utility skill, expert-opinion, offers quick single-voice takes. Bounded retrieval ensures 'no invented authority'; human checkpoints (confirming grounding, etc.) keep the user in control. Adding a room takes one YAML stanza that inherits the shared protocol. Tests verify that drafting pulls real experts, not fiction.
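The 'no invented authority' guarantee reduces to a simple mechanical property: guest drafting is a lookup into the bounded roster, never free-text generation. A minimal TypeScript sketch, assuming a hypothetical card shape and lookup function (the two roster entries reuse names from the article; the data structure and API are illustrative):

```typescript
// Hypothetical persona card: a real person plus the topic tags the
// taxonomy files would map to them.
interface PersonaCard {
  name: string;
  topics: string[];
}

// Tiny stand-in for the ~80-card roster described in the article.
const ROSTER: PersonaCard[] = [
  { name: "Lilian Weng", topics: ["agents", "llm-rigor"] },
  { name: "Marty Cagan", topics: ["product", "org-design"] },
];

// Drafting can only return a card that already exists in the roster;
// an uncovered topic is an error, not an invitation to improvise a fake.
function draftWitness(topic: string): PersonaCard {
  const match = ROSTER.find((card) => card.topics.includes(topic));
  if (!match) {
    throw new Error(`No persona card covers "${topic}"; refusing to invent one`);
  }
  return match;
}
```

Because the only path to a witness goes through the roster, a test suite can assert the property directly: every drafted expert is a real entry, and uncovered topics fail rather than fabricate.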

Real Usage Exposes Failure Modes and Sharpens Outputs

In a technical-writing editorial pass on this article's draft, the room drafted Lilian Weng (agent rigor) and Ethan Mollick (adoption accountability) as witnesses. Readings flagged repetition and asserted-versus-demonstrated claims. The contract set 'explanatory editing first, compression second.' Cohorts split on mechanism versus stakes, drafting Denny Zhou and Marty Cagan. Weng's clarifications: separate prompt rhetoric from orchestration and bounded resources; frame the roster as an auditability constraint; specify the failures prevented (skipped steps, fake experts). Synthesis: 'Better review surface, not guaranteed correctness.' The result: an earlier system-transition statement, failure-prevention language, and compressed sections, preserving the author's voice while trading vague advocacy for precise distinctions. Messier problems amplify the value; standard chats skip this friction and hide premature consensus.

Summarized by x-ai/grok-4.1-fast via openrouter


© 2026 Edge