Context Repetition Tax Degrades Agent Performance—Solve with 10 Modular Markdown Files

Enterprise AI lags because data isn't structured for agent consumption, and personal context faces the same issue: you repeatedly re-explain your role, projects, and preferences across tools like Claude or ChatGPT. This 'context repetition tax' wastes time, omits details, and degrades output quality. Leading orgs give AI native access to context; 'copilot-dropping' laggards don't.

Counter it with a Personal Context Portfolio: a living, portable 'operating manual' composed of 10 Markdown files (a universal, AI-readable format). Design principles: Markdown-first for interchangeability; modular so agents can access files selectively (e.g., grab only the projects file); living, so agents maintain it; portable across LLMs.

Files cover:

  • identity.md: Name, role, org, one-paragraph summary (priority file).
  • roles-and-responsibilities.md: Day-to-day realities, decisions, outputs, weekly rhythm.
  • current-projects.md: Active streams with status, priority, collaborators, goals, KPIs, 'done' criteria (changes weekly).
  • team-and-relationships.md: Key people, roles, interaction needs (powers meeting prep).
  • tools-and-systems.md: Your stack, configs, integrations (aligns agent actions).
  • communication-style.md: Tone, formatting prefs, dislikes (e.g., avoid fluff; makes outputs feel like yours).
  • goals-and-priorities.md: Optimization horizons (week-to-career) for decision weighting.
  • preferences-and-constraints.md: Always/never rules (e.g., no specific tools, dietary limits).
  • domain-knowledge.md: Expertise, terminology (e.g., biotech phase 2 trials; expandable).
  • decision-log.md: Past decisions + reasoning (underrated for new choices).

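To make the format concrete, here is what identity.md might look like. The name, role, and org below are invented placeholders, not from the source:

```markdown
# Identity

- Name: Jordan Rivera                  <!-- placeholder -->
- Role: Head of Product                <!-- placeholder -->
- Org: Acme Bio                        <!-- placeholder -->

## Summary

One paragraph an agent can quote verbatim: who you are, what you own,
and the context a new collaborator would need on day one.
```

Keeping this file short matters: it is the priority file, so every agent session should be able to ingest it in full.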
This is roughly a 10x improvement over a zero-context baseline, and it escapes memory-based vendor lock-in (e.g., Claude's simplistic export prompt).

AI Interviews Populate and Evolve the Portfolio Effortlessly

Don't hand-write the files: use AI as the interviewer inside a Claude or ChatGPT project, looping Interview → Draft → React → Revise. A single project shares the process context across all 10 files.
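A starter prompt for the interviewer project might look like the sketch below. The wording is illustrative, not taken from the repo's protocols:

```
You are my interviewer. We are building a Personal Context Portfolio
of 10 Markdown files (identity.md, current-projects.md, and so on).
Loop: ask me one question at a time → draft the relevant file →
show it to me → revise based on my reactions. Start with identity.md.
```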

Resources:

  • GitHub repo (play.brief.ai): Templates per file with interview protocols + output structures; overall setup protocol; synthetic examples (entrepreneur, executive, knowledge worker); 'wiring' folder for Claude/MCP/API.
  • Free app (play.brief.ai): Opus-powered perpetual interview adds to all relevant files simultaneously (e.g., one answer updates identity, projects, domain knowledge). Download anytime; private.

Maintain as living: Agents update on project shifts; expand files over time.

Deploy as MCP Server for Remote Agent Access and Troubleshooting

For high portability, host the portfolio as a Model Context Protocol (MCP) server: a small service that responds to agent requests by listing and delivering resources (your files).

Use an AI tutor (Claude/ChatGPT) to walk through it step by step:

  1. Decide local/remote, read-only/read-write.
  2. Local: copy the files and run the server code (Node.js); troubleshoot with the AI (e.g., a port 3000 conflict → switch ports; file-naming mismatches; always copy-paste full code blocks).
  3. Remote: push to a GitHub repo → deploy on Railway (minimal changes; faster than the local route).

~10-15 minutes total, mostly pasting screenshots for debugging (the AI troubleshoots without judgment). Result: an agent asks 'What do you know about my identity?' and pulls the file. Plain GitHub hosting also works for simple access.
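To make the list/deliver pattern concrete, here is a minimal in-memory sketch of the two operations an MCP-style resource server performs. This is a simplified illustration, not the real MCP SDK or wire protocol, and the file contents are placeholders:

```javascript
// Simplified sketch of MCP-style resource serving (not the actual SDK).
// The portfolio maps file names to Markdown contents (placeholders here).
const portfolio = new Map([
  ["identity.md", "# Identity\nName, role, org, one-paragraph summary."],
  ["current-projects.md", "# Current Projects\nActive streams with status."],
]);

// Analogous to MCP's "list resources": enumerate what agents can request.
function listResources() {
  return [...portfolio.keys()].map((name) => ({
    uri: `context://${name}`, // illustrative URI scheme
    name,
  }));
}

// Analogous to MCP's "read resource": deliver one file by URI.
function readResource(uri) {
  const name = uri.replace("context://", "");
  if (!portfolio.has(name)) throw new Error(`Unknown resource: ${uri}`);
  return { uri, text: portfolio.get(name) };
}
```

An agent first calls the list operation to discover the portfolio, then reads only the files relevant to the task at hand, which is exactly the selective-access property the modular design is for.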

Value: a do-once setup that frees agents from repetition, plus a low-stakes project for learning MCP.