Three Types of Instruction Rot That Limit AI
Advanced LLMs improve every few months, and old, highly detailed prompts become counterproductive: they act as handcuffs, constraining output quality past the point of diminishing returns. Three types of rot are worth watching for. Stale instructions fail when your process changes; for example, you move pricing from the end to the middle of client messages in January but forget to update the prompt, so every draft needs manual edits. Contradictory rules create chaos: demanding "be concise" and "be thorough," or "use only this document" but "add helpful context," leaves the model to pick one at random, yielding inconsistent results. Redundant instructions constrain newer models unnecessarily; specifying "warm and professional tone" plus "don't be robotic, casual, or use slang" adds nothing, since stating the tone is enough for state-of-the-art models. Removing this bloat frees context space for the core task and usually sustains or improves quality.
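Contradictions like the "concise vs. thorough" pair above can be caught mechanically. A minimal sketch, assuming a hand-maintained table of phrase pairs that commonly conflict (the CONFLICTS table and the sample rules are hypothetical, not from any library):

```python
# Flag rule pairs that commonly contradict each other.
# CONFLICTS is a hypothetical, hand-maintained table of phrase pairs.
CONFLICTS = [
    ("be concise", "be thorough"),
    ("use only this document", "add helpful context"),
]

def find_contradictions(rules):
    """Return the (phrase_a, phrase_b) pairs where both phrases appear somewhere in the rules."""
    lowered = [r.lower() for r in rules]
    hits = []
    for a, b in CONFLICTS:
        if any(a in r for r in lowered) and any(b in r for r in lowered):
            hits.append((a, b))
    return hits

rules = [
    "Be concise.",
    "Be thorough and cover every edge case.",
]
print(find_contradictions(rules))  # [('be concise', 'be thorough')]
```

A phrase table like this only catches the contradictions you already know about; the manual read-through in the detox process below is what surfaces new ones.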
Quarterly Detox: Trim Prompts in 30 Minutes
For high-leverage tasks, run this process monthly or quarterly on your key system prompts (e.g., Claude Projects, GPT custom instructions). Step 1: Pick 2-3 critical use cases. Step 2: Read the instructions manually for rot: staleness from process shifts, contradictions like concise vs. thorough, and redundancies left over from before a model upgrade. Step 3: Feed the prompt to the AI itself for review: "Review these instructions for staleness, contradictions, redundancies. Suggest improvements while preserving intent." Paste the cleaned version below the original, then test the new prompt on your task. Step 4 (high-stakes prompts only): Run a line-by-line deletion test. Remove a suspected rule and run the task; if output worsens, restore the rule; if it is the same or better, delete the rule for good. Clients typically delete 30-50% of their rules, and quality often improves because the model is no longer over-restrained. The reclaimed space goes to actual thinking about your goal.
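Step 4 is an ablation loop, and it can be sketched in a few lines. In this sketch, run_task() and score() are hypothetical stand-ins for calling your model and judging its output (by eye or with an LLM grader); neither is a real API:

```python
# Sketch of the Step 4 deletion test: drop one rule at a time,
# keep the deletion only if quality holds. run_task() and score()
# are hypothetical callables you supply.
def deletion_test(rules, run_task, score):
    """Return the subset of rules that survive one-at-a-time deletion."""
    baseline = score(run_task(rules))
    kept = list(rules)
    for rule in list(rules):
        trial = [r for r in kept if r != rule]
        # Same or better without the rule: the rule was dead weight.
        if score(run_task(trial)) >= baseline:
            kept = trial
    return kept
```

One pass like this tests rules independently, which matches the manual process; interacting rules (two rules that only matter together) would need pairwise deletions.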
Progressive Disclosure and Rule-Adding Guardrails
Bonus for advanced setups: use progressive disclosure so the model sees only the information it needs, which avoids constant bloat. In browser projects (Claude, GPT, Gemini), reference knowledge files conditionally (e.g., "For follow-up emails, check email-templates.md in the knowledge base"). In desktop agents (Claude Code, Co-worker, Codex), use subfolders with their own instructions.md (e.g., "For client emails, review the emails/ folder"). Bundle skills with titles and descriptions so the AI checks them only when relevant (e.g., "For emails, call the email-writing skill"). As an ongoing guardrail, ask two questions before adding any rule: 1) Did the AI actually err, or is the rule precautionary? If there was no error, skip it. 2) Can you edit an existing rule instead? Add a new rule only when the AI actually erred and no existing rule covers it. This keeps prompts lean as models evolve every 3-6 months.
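The conditional-reference pattern amounts to a manifest that maps task types to instruction files loaded only on demand. A minimal sketch, assuming hypothetical task names and file paths (email-templates.md and emails/instructions.md are illustrative, not real files):

```python
# Sketch of progressive disclosure: task-specific instructions are
# appended to the base prompt only when the task calls for them.
# Task names and file paths are hypothetical.
from pathlib import Path

MANIFEST = {
    "follow_up_email": "knowledge/email-templates.md",
    "client_email": "emails/instructions.md",
}

def build_context(task_type, base_prompt):
    """Return the base prompt, plus task-specific instructions if they exist on disk."""
    parts = [base_prompt]
    path = MANIFEST.get(task_type)
    if path and Path(path).exists():
        parts.append(Path(path).read_text())
    return "\n\n".join(parts)
```

Tasks outside the manifest, or manifest entries whose files are missing, fall back to the lean base prompt, so the context only grows when a matching file is actually present.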