Organize Persistent Context for Model Navigation
Store all code in ~/src and knowledge work in ~/vault (split into projects/, notes/, kb/) so models can retrieve prior artifacts (code, docs, analysis) via grep or glob patterns. For organizational knowledge in Slack, Drive, or Mail, connect Model Context Protocol (MCP) servers in tools like Claude Code. Maintain a per-project INDEX.md with annotated URLs, owners, and summaries (what's inside and when to read it) so models don't waste tokens scanning irrelevant links.
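As a sketch, the layout and retrieval pattern above might look like this (rooted in /tmp for illustration rather than the real home directory; the INDEX.md entries are invented examples):

```shell
# Sketch of the directory layout; in practice these live under ~/src and ~/vault.
ROOT=/tmp/ctx-demo
mkdir -p "$ROOT/src" "$ROOT/vault/projects" "$ROOT/vault/notes" "$ROOT/vault/kb"

# A per-project INDEX.md with annotated entries: owner, what's inside, when to read it.
cat > "$ROOT/vault/projects/INDEX.md" <<'EOF'
# Index
- design-doc.md (owner: alice): architecture decisions; read before touching the API
- eval-results/ (owner: bob): weekly eval runs; read when comparing model versions
EOF

# Models can then retrieve prior artifacts with plain grep/glob:
grep -ril "eval" "$ROOT/vault"
```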
Onboard every session like a new hire using per-project CLAUDE.md files, which include glossaries for acronyms/code names/teammates, suggested reading order (e.g., skim INDEX.md, then TODOS.md), and domain specifics. Split memory into ~/vault for facts/project state and ~/.claude for preferences/workflows (with its own CLAUDE.md, skills/, guides/). This setup compounds: finished artifacts become context for future sessions.
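A minimal per-project CLAUDE.md along these lines might look as follows (the project name, section names, and contents are illustrative, not a required schema; written to /tmp here rather than a real project root):

```shell
# Write an illustrative per-project CLAUDE.md for onboarding each session.
mkdir -p /tmp/ctx-demo/projects/acme
cat > /tmp/ctx-demo/projects/acme/CLAUDE.md <<'EOF'
# Project: acme

## Glossary
- "RFC-7": internal code name for the billing rewrite (not an IETF RFC)
- @jsmith: teammate who owns the deploy pipeline

## Suggested reading order
1. Skim INDEX.md for the project map
2. Read TODOS.md for current state

## Domain notes
- Invoices are immutable once issued; corrections go through credit notes
EOF
```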
Encode Taste and Workflows as Hierarchical Config
Define behavioral contracts in ~/.claude/CLAUDE.md, loaded at every session start, specifying directness ("push back when you disagree"), error handling ("investigate root cause before retrying"), diff scoping, and teaching style (e.g., 💡 1-2 sentence explanations for new terms). Scope configs hierarchically: global preferences in ~/.claude/CLAUDE.md, repo conventions (linting, naming) at repo root, project details in subdirs—Claude Code walks the tree to load them dynamically.
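The three scopes described above might be laid out like this (all paths are scratch stand-ins for illustration, and the rule contents are invented examples):

```shell
# Hierarchical config scopes: global -> repo root -> subdirectory.
# Claude Code walks the tree from the working directory and loads each level.
mkdir -p /tmp/ctx-home/.claude /tmp/ctx-repo/services/billing

# Global preferences (taste, error handling, teaching style).
printf '%s\n' '- Push back when you disagree' \
  '- Investigate root cause before retrying' > /tmp/ctx-home/.claude/CLAUDE.md

# Repo-wide conventions (linting, naming).
printf '%s\n' '- Run ruff before committing' \
  '- Modules are snake_case' > /tmp/ctx-repo/CLAUDE.md

# Project-specific details in a subdirectory.
printf '%s\n' '- Billing service: money is integer cents, never floats' \
  > /tmp/ctx-repo/services/billing/CLAUDE.md
```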
For long CLAUDE.md files, lazy-load by listing available guides (e.g., ~/.claude/guides/writing.md for docs, evals.md for reports) rather than @import-ing them, which avoids context bloat. Convert recurring weekly tasks into skills: Markdown files with triggers and procedures, such as a /polish skill that checks diffs, runs evals and metrics, inspects browser renders, or executes code. Build a skill by doing the task once interactively, asking the model to codify the procedure, and correcting it in-session so the transcript captures before/after pairs; then merge that feedback. Refine skills via transcripts rather than direct edits to avoid overfitting.
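A /polish skill along these lines could be a small Markdown file with a trigger and procedure; the directory layout and frontmatter fields below are an assumption about the skill format, not a guaranteed schema:

```shell
# Sketch of a /polish skill: a Markdown file with a trigger and a procedure.
mkdir -p /tmp/ctx-home/.claude/skills/polish
cat > /tmp/ctx-home/.claude/skills/polish/SKILL.md <<'EOF'
---
name: polish
description: Final pass before handing work back (diffs, evals, renders)
---
1. Review the diff for scope creep and stray debug code.
2. Run the project's eval suite and compare metrics to the last run.
3. If the change touches UI, inspect the rendered page in the browser.
4. Execute any example code in changed docs to confirm it still runs.
EOF
```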
Use simple mode (CLAUDE_CODE_SIMPLE=1) for brainstorming to skip agentic overhead while still loading CLAUDE.md.
Verify Early, Delegate Big, and Scale Parallel
Catch errors at write time with low-cost hooks such as ruff format and ruff check --fix on edited files, before reaching pricier tests, evals, and LLM reviews. Enable model self-verification: run evals and optimize metrics; inspect browser outputs via Claude in Chrome (e.g., check tooltips and labels); read errors from Docker builds or code runs and iterate. For long tasks, run pair-programming sessions in tmux panes: a primary dev session plus a secondary reviewer that checks the spec against transcripts for execution drift (tactical errors) or direction drift (strategic misinterpretation).
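The cheap write-time layer could be as small as the script below, which formats and autofixes an edited Python file; how it gets wired into Claude Code's hook configuration is omitted here, and the script path is a placeholder:

```shell
# Cheap write-time verification: format and autofix an edited Python file.
# Intended to be invoked as an after-edit hook with the file path as $1.
cat > /tmp/ctx-lint-hook.sh <<'EOF'
#!/bin/sh
case "$1" in
  *.py)
    command -v ruff >/dev/null || exit 0  # skip silently if ruff is absent
    ruff format "$1"
    ruff check --fix "$1"
    ;;
esac
EOF
chmod +x /tmp/ctx-lint-hook.sh
```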
Delegate bigger chunks by specifying intent, constraints, and metrics upfront (e.g., "build containers per eval suite, run n times for CIs, generate verified report, Slack results"). Run 3-6 parallel sessions using git worktrees to avoid conflicts; observe via tmux titles (⏳/🟢 emojis, haiku labels), stop-hook sounds (e.g., afplay Glass.aiff), Claude status lines, and /remote-control for quick unblocks.
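Spinning up parallel sessions without file conflicts might start like this (repo path and branch names are illustrative):

```shell
# One checkout per parallel session via git worktrees, so edits never collide.
git init -q /tmp/ctx-wt/main
git -C /tmp/ctx-wt/main -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Each session gets its own working tree on its own branch.
git -C /tmp/ctx-wt/main worktree add ../task-auth -b task-auth
git -C /tmp/ctx-wt/main worktree add ../task-evals -b task-evals
git -C /tmp/ctx-wt/main worktree list
```

Each tmux pane then runs its Claude session inside one of the task directories.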
Close Loops by Mining Transcripts and Refactoring
Work in shared repos, docs, and channels so context persists org-wide; the test: could a new teammate replicate last week's work? Automate updates via CLAUDE.md instructions to post task summaries and PR links to worklogs. Analyze transcripts (e.g., ~2,500 user turns revealed frequent "can you also…" and "still wrong" follow-ups) to spot steps the model should take unprompted, then update CLAUDE.md, skills, and verification accordingly. Refactor periodically: consolidate overlapping rules so each rule lives in one place, prune stray settings.json entries, and check for conflicts; only truly critical instructions should repeat in the main CLAUDE.md.
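Transcript mining can start as a plain grep for correction phrases; the fabricated sample below stands in for real session logs, whose location and format vary by tool:

```shell
# Count correction phrases in a transcript to surface steps the model
# should have taken unprompted. Sample data stands in for real logs.
cat > /tmp/ctx-transcript.txt <<'EOF'
user: build the report
user: can you also add confidence intervals
user: still wrong, the axis labels are swapped
user: can you also post the link in the worklog
EOF

# A recurring follow-up suggests a missing default step in CLAUDE.md or a skill.
grep -c "can you also" /tmp/ctx-transcript.txt
```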