LLM Wikis: Shared Graphs Outperform RAG for AI-Human Knowledge
Build a knowledge graph in Obsidian as an LLM Wiki: a persistent, AI-maintained wiki of interlinked markdown files that all AI tools share, and that scales better than RAG for complex, relational queries across 3+ years of notes.
Knowledge Graphs Scale Personal Insights via Nodes, Edges, Triples
Knowledge graphs model thinking with three elements: nodes (concepts such as ideas, people, events), edges (relationships such as "causes," "depends on," "references"), and triples (subject-relationship-object atoms). The structure compounds as you add notes: linking terms with [[double brackets]] in Obsidian builds the graph automatically in real time. Start with a note on "favorite inventions"; link "flywheel" to the book "The One Thing," and the graph visualizes the connection without manual diagramming. Over 3 years and thousands of notes, the graph surfaces connections between seemingly unrelated concepts, prevents duplicated ideas (e.g., resurfacing a 2-year-old note before you rewrite it), and mirrors the relational structure of your own thinking. Google's Knowledge Graph powers its sidebar panels (e.g., the Toronto Reference Library panel shows architect, reviews, and address as nodes); Wikipedia's full graph (1.1% of it visualized in Obsidian) shows the scale of hyper-connection. Books are proto-graphs: authors map concepts before writing. The result: invest time linking notes once; earn compound returns via emergent connections, turning note-taking into a "map of your brain."
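The link-to-graph step above can be sketched in a few lines. This is a minimal illustration, not Obsidian's actual implementation; the sample notes, the `WIKILINK` regex, and the `build_graph` function are assumptions for the sketch:

```python
import re
from collections import defaultdict

# Matches [[target]] and [[target|alias]] Obsidian-style wikilinks.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def build_graph(notes: dict[str, str]) -> dict[str, set[str]]:
    """Map each note title to the set of note titles it links to.

    Each (source, "links to", target) pair is one triple:
    subject-relationship-object.
    """
    graph: defaultdict[str, set[str]] = defaultdict(set)
    for title, body in notes.items():
        for target in WIKILINK.findall(body):
            graph[title].add(target.strip())
    return dict(graph)

notes = {
    "favorite inventions": "The [[flywheel]] idea from [[The One Thing]].",
    "flywheel": "Momentum compounds over time; see [[The One Thing]].",
}
graph = build_graph(notes)
```

Pointing the same function at a folder of `.md` files would yield the full vault graph: every edge is discovered from the text itself, with no manual diagramming.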
RAG Fails Complex Queries; Graph RAG Navigates Relations
Standard RAG embeds documents as vectors and retrieves similar chunks. That is efficient for simple "what is X?" queries over a single document, but token-inefficient and blind to inter-document relations on complex data. Graph RAG instead traverses edges (e.g., which ideas depend on which, which chapters link together) like a "reference librarian," outperforming plain retrieval on large datasets by following paths rather than pulling back thousands of chunks. The evidence predates Karpathy's coinage: years of graph-retrieval research, plus scaling in practice (e.g., the author's 3-year Obsidian vault). For high-volume, relational information across sources, graphs cut costs and boost accuracy; the AI stays bounded to your curated knowledge rather than hallucinating freely.
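The path-following retrieval described above can be sketched as a bounded breadth-first walk over the note graph. This is an assumption-laden toy, not a real Graph RAG pipeline (which would also rank, filter, and summarize what it collects); the graph shape and `max_hops` parameter are illustrative:

```python
from collections import deque

def graph_retrieve(graph: dict[str, set[str]], seed: str, max_hops: int = 2) -> set[str]:
    """Collect every note reachable within max_hops links of the seed.

    Traversal follows explicit edges, so retrieval cost scales with the
    local neighborhood of the query, not with the size of the vault.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # hop budget exhausted on this path
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

# A tiny chain: A -> B -> C -> D. Two hops from A reaches C but not D.
chain = {"A": {"B"}, "B": {"C"}, "C": {"D"}}
context = graph_retrieve(chain, "A", max_hops=2)
```

Only the retrieved neighborhood is handed to the model as context, which is where the token savings over retrieve-everything-similar come from.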
LLM Wikis Create Agentic Shared Brains Across Tools
An LLM Wiki (per Karpathy) is a persistent markdown wiki that AI agents build and maintain between raw sources and queries. Process: (1) Clip raw sources (e.g., with Obsidian Web Clipper). (2) An agent extracts entities, updates interlinked pages, revises summaries, and flags contradictions. (3) Periodic maintenance checks for orphan pages and outdated information. Knowledge stays compiled and current instead of being rederived per query. Keep the human vault (your own thinking) separate from the agentic vault (AI-fed): firewall the origins while sharing the structure. Every tool benefits (bypassing silos and rate limits): unified context scales agentic AI and future-proofs your knowledge against tool churn. Demo potential: connect multiple agents; the author offers setup tutorials. The outcome is augmented PKM where humans derive insights and AI executes relationally, the closest thing yet to a "true second brain."
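The orphan check in step (3) is simple to express over a link graph. A minimal sketch, assuming the graph maps note titles to the titles they link to; `maintenance_report` and the sample note names are hypothetical, and a real maintenance agent would also check page freshness and contradictions:

```python
def maintenance_report(graph: dict[str, set[str]], all_notes: set[str]) -> tuple[set[str], set[str]]:
    """Flag two kinds of problems for the maintenance pass.

    orphans:  existing notes that nothing links to (disconnected knowledge)
    dangling: link targets with no page yet (pages the agent should create)
    """
    inbound: set[str] = set()
    for targets in graph.values():
        inbound |= targets
    orphans = all_notes - inbound
    dangling = inbound - all_notes
    return orphans, dangling

graph = {"flywheel": {"The One Thing"}, "inbox": {"flywheel", "ghost note"}}
all_notes = {"flywheel", "The One Thing", "inbox"}
orphans, dangling = maintenance_report(graph, all_notes)
```

Run periodically, the report gives the agent a concrete work queue: link or archive each orphan, and draft a stub page for each dangling target.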