Read-Only AI Analyzes Cognitive Exhaust Fumes
Query personal data sources (email, journal, tasks, CRM, browser, notes) with read-only AI to detect cross-source patterns such as intention-action gaps and attention drift: an approach both safer and more insightful than write-enabled agents.
Cognitive Exhaust Fumes Unlock Cross-Source Insights
Cognitive exhaust fumes are the digital byproducts of your thinking: emails, journal entries, tasks, CRM contacts, browser sessions, and notes. Individually, none of these reveals much; analyzed together across six read-only sources, they expose intention-action gaps (planned tasks ignored in browsing), attention drift (browsing that contradicts journal priorities), and relationship blind spots (unread emails from key contacts). This cross-source synthesis, powered by LLMs such as Anthropic's Claude, yields insights no single tool detects: weekly reflections that highlight commitments, tensions, and omissions, or suggestions to discuss recent readings with network matches based on article topics, CRM profiles, and email history.
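As a minimal sketch of what cross-source pattern detection could look like (all data structures and thresholds here are hypothetical, not the kit's actual schema), one can intersect planned tasks with browsing history to surface intention-action gaps and attention drift:

```python
from collections import Counter

def intention_action_gaps(planned_tasks, browser_topics, min_drift=3):
    """Flag planned topics that never appear in browsing, plus
    browsing topics that dominate despite not being planned."""
    planned = {t.lower() for t in planned_tasks}
    browsed = Counter(t.lower() for t in browser_topics)
    ignored = sorted(planned - set(browsed))           # intended but never touched
    drift = [t for t, n in browsed.most_common()
             if t not in planned and n >= min_drift]   # heavy attention off-plan
    return ignored, drift

# Hypothetical exhaust pulled from a task manager and browser history
tasks = ["tax filing", "grant draft", "gym"]
tabs = ["twitter", "twitter", "twitter", "grant draft", "news", "twitter"]
ignored, drift = intention_action_gaps(tasks, tabs)
# ignored == ["gym", "tax filing"]; drift == ["twitter"]
```

Either list on its own is unremarkable; the gap only becomes visible when the two sources are joined, which is the point of the cross-source approach.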
To implement it, start from the GitHub template (https://github.com/shippy/personal-intelligence-kit): Python scripts ingest each source into structured outputs via API calls, synthesis happens in a workspace, and results export to Obsidian, Notion, or plain text files. For example, a weekly GTD-style reflection script pulls the data, prompts for a structured summary (themes, conflicts, notable moments, reflection questions), and generates a Markdown report reviewable in Cursor. The run takes minutes but delivers brutal honesty about your thinking patterns, not just productivity metrics.
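The shape of such a reflection script can be sketched as follows. This is not the kit's actual code: the field names are invented, and the LLM call is stood in for by a stubbed response, since the real script would send the assembled prompt to Claude and parse its answer.

```python
import datetime as dt
from textwrap import dedent

SECTIONS = ["Themes", "Conflicts", "Notable moments", "Reflection questions"]

def build_prompt(emails, journal, tasks):
    """Assemble the structured-summary prompt a reflection script
    would send to an LLM (hypothetical field names)."""
    return dedent(f"""\
        Summarize the past week under these headings: {", ".join(SECTIONS)}.
        Emails: {emails}
        Journal: {journal}
        Tasks: {tasks}""")

def render_report(summaries, week=None):
    """Render per-section summaries as a Markdown report reviewable
    in Cursor or exportable to Obsidian/Notion."""
    week = week or dt.date.today().isoformat()
    lines = [f"# Weekly reflection ({week})"]
    for section in SECTIONS:
        lines += [f"## {section}", summaries.get(section, "_nothing surfaced_"), ""]
    return "\n".join(lines)

prompt = build_prompt(emails=["Re: grant deadline"],
                      journal=["Felt scattered all week"],
                      tasks=["Draft budget"])
# Stubbed LLM response standing in for the actual Claude call:
report = render_report({"Themes": "Grant writing dominated."}, week="2024-W22")
```

Separating prompt assembly from report rendering keeps the human-readable Markdown artifact independent of whichever model produced the summaries.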
A cross-source query demo combines browser tabs (via Weaviate SQLite), Clay CRM searches (for AI, European tech, and education interests), and email to recommend unread contacts for each article, even spotting article authors already in your network. It runs in plain language via Claude skills and consumes many tokens, but produces suggestions no isolated tool (email client, task manager, browser) can provide.
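The matching logic behind that demo might look like the sketch below; the record shapes, names, and fields are invented for illustration, and the real demo delegates this joining to Claude skills rather than hand-written code.

```python
def suggest_contacts(article, crm_profiles, unread_senders):
    """For one article, return network contacts whose CRM interests
    overlap its topics, flagging those with unread email and any
    contact who authored the article (all inputs hypothetical)."""
    topics = {t.lower() for t in article["topics"]}
    suggestions = []
    for person in crm_profiles:
        interests = {i.lower() for i in person["interests"]}
        is_author = person["name"] == article.get("author")
        if topics & interests or is_author:
            suggestions.append({
                "name": person["name"],
                "shared": sorted(topics & interests),
                "unread_email": person["email"] in unread_senders,
                "is_author": is_author,
            })
    return suggestions

article = {"topics": ["AI", "education"], "author": "Jane Doe"}
crm = [
    {"name": "Jane Doe", "email": "jane@example.com", "interests": ["European tech"]},
    {"name": "Ed Lee", "email": "ed@example.com", "interests": ["education"]},
]
result = suggest_contacts(article, crm, unread_senders={"ed@example.com"})
```

Note the author check: it fires even with no interest overlap, which is how the system can spot that someone you read is already in your network.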
Read-Only Constraint Beats Agents on Safety and Purity
Write-enabled agents carry unbounded downside risk (a bad email can nuke a relationship), while read-only errors cost nothing: you simply ignore bad analysis. This asymmetry suits high-stakes personal data (career, reputation). Read-only access also prevents data contamination: AI writes would pollute the exhaust with hybrid human-AI patterns, obscuring the signal of your pure cognition. Human-mediated feedback loops preserve agency: you read the reflections and act yourself, rather than approving AI-drafted responses.
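The read-only constraint can be enforced mechanically rather than by policy. A minimal sketch using Python's standard library: open each local SQLite source (here, a throwaway file standing in for browser history; the path and schema are hypothetical) with `mode=ro`, so even a misbehaving analysis step cannot mutate the exhaust:

```python
import os
import sqlite3
import tempfile

def open_readonly(db_path):
    """Open a SQLite source in read-only mode; any write attempt
    raises sqlite3.OperationalError instead of mutating the data."""
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

# Throwaway database standing in for a real browser-history file
path = os.path.join(tempfile.mkdtemp(), "history.db")
with sqlite3.connect(path) as db:
    db.execute("CREATE TABLE visits (url TEXT)")
    db.execute("INSERT INTO visits VALUES ('https://example.com')")

ro = open_readonly(path)
rows = ro.execute("SELECT url FROM visits").fetchall()  # reads succeed
blocked = False
try:
    ro.execute("DELETE FROM visits")                    # writes are rejected
except sqlite3.OperationalError:
    blocked = True
```

Enforcing the constraint at the connection level means the purity guarantee holds even if a prompt injection or a buggy script tries to write.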
Per interaction, observers outperform agents: an agent saves seconds (say, a weather check), while an observer can reveal weeks of project avoidance. They are distinct categories, not a stepping stone from one to the other; a mirror isn't a broken butler. An open-ended Claude session in read-only mode also pales against purpose-built observers: custom observers deliver denser value with lower exfiltration and cognitive-pollution risks.
Security Risks Demand Examined Trade-offs
Cross-source power creates mosaic-effect vulnerabilities: combining fragments paints a complete personal picture, making the system a high-value hacking target. Simon Willison's lethal trifecta also persists: private data, untrusted LLM content, and external API/shell access still combine into real risk even without write access. And the data sent to Anthropic over open networks exceeds what is minimally necessary. The system isn't fireproof, but a deliberate risk assessment (versus unexamined agent defaults) justifies using it. The key lesson: your digital exhaust is your most underused dataset; reflect on it, read-only, to improve.