Wiki vs Database: Compile-Time vs Query-Time AI Memory
Karpathy's personal wiki compiles knowledge upfront for evolving synthesis; OpenBrain stores structured data for precise on-demand queries. Each excels differently—combine them to avoid single-system pitfalls.
Why Current AI Tools Waste Compute on Rederiving Knowledge
AI apps like ChatGPT, NotebookLM, and Claude force LLMs to rediscover insights from fragmented documents and chats every query. For a question spanning five docs and six chats, the model hunts, reads, connects, and synthesizes—then discards it all. Repeat tomorrow: full recompute. No persistent synthesis means no cross-references, no flagged contradictions, no evolution tracking. Karpathy built his wiki to fix this: AI reads new sources, extracts key insights, and updates organized notes with links and evolutions. "The knowledge is compiled once and then kept current. It's not rederived on every query," Karpathy notes. This shifts AI from ephemeral researcher to persistent note-keeper, using folders of Markdown files in Obsidian for browsing graphs and links.
His setup: Raw sources stay untouched; AI (as "programmer") writes/rewrites wiki pages. Add a Monday paper? AI integrates it with prior threads. Friday query? Pull pre-synthesized wiki, not raw pile. 41k bookmarks signal hunger for this "builds on learnings" paradigm. But risks emerge: AI's editorial choices frame connections, drop nuances, or smooth contradictions—clean wiki hides gaps like a dashboard masks spreadsheet details. Most users skip raw sources, trusting AI summaries (80-90% accurate?), baking errors into "truth."
Compile-Time Synthesis (Karpathy's Wiki): Strengths in Evolving Narratives
Wiki is "write-time" (ingest-time) thinking: a new source triggers AI to extract, summarize, link, flag contradictions, and update topic pages. Post-ingest: cheap retrieval, zero recompute. Ideal for research marathons (10 papers over weeks). By paper 5, the wiki holds a synthesis of the first 4; paper 10 yields a navigable artifact of how your understanding evolved. Wins for health tracking, self-improvement, and competitive analysis, where connections > isolated facts. Like NotebookLM on steroids, but persistent.
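A minimal sketch of this ingest-time loop, with a stub `synthesize` standing in for the LLM "writer/editor" pass (all names are invented for illustration, not from the source):

```python
from pathlib import Path

def synthesize(page_text: str, new_source: str) -> str:
    """Stand-in for the LLM writer/editor step: merge a new source into
    an existing wiki page. A real system would prompt a model to extract,
    link, and flag contradictions; here we append so the sketch runs."""
    note = f"\n- {new_source}"
    if new_source.lower() in page_text.lower():
        note += "  <!-- possible duplicate/contradiction: flag for review -->"
    return page_text + note

def ingest(wiki_dir: Path, topic: str, source: str) -> None:
    """Compile-time: the heavy work happens once, when the source arrives."""
    page = wiki_dir / f"{topic}.md"
    text = page.read_text() if page.exists() else f"# {topic}\n"
    page.write_text(synthesize(text, source))

def query(wiki_dir: Path, topic: str) -> str:
    """Query-time is just a file read: zero recompute."""
    return (wiki_dir / f"{topic}.md").read_text()
```

The point of the sketch is the asymmetry: `ingest` does all the thinking, `query` is a cheap read of the pre-compiled page.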
AI role: Writer/editor. Heavy upfront (one ingest may touch a dozen pages), cheap queries. Assumes a single agent; multi-agent writes collide. The instructions file is high-leverage: it dictates synthesis fidelity, but lazy users underinvest in it, yielding suboptimal wikis. Quote from speaker: "Most AI knowledge tools spend compute and tokens to rederive, whereas his wiki compiles." For teams, it risks smoothing tensions: e.g., eng's 12-week timeline vs sales' 8-week promise becomes an averaged 10, losing the misalignment signal.
Query-Time Precision (OpenBrain): Strengths in Structured Operations
OpenBrain is query-time: Ingest faithfully—tag, categorize, store in tables. No upfront synthesis. Query hits: AI searches, reads relevant entries fresh, synthesizes precisely. Like organized filing cabinet + brilliant librarian pinpointing needs. Adding info: Lazy/cheap (one row). Queries: Simple fast, complex token-heavy but detailed.
Excels at database ops: "Every Q1 meeting note on pricing," "Recent competitor updates comparison," "Action items assigned to me last 2 weeks." Filters, sorts, multi-source across hundreds of entries. Multi-agent friendly: multiple agents can read/write the database safely. Preserves provenance: trace any claim to its sources and timestamps. Trust runs deeper: "this is raw facts plus fresh synthesis," not the AI's solo framing. AI role: Reader/analyst. Quote: "Every knowledge system with an AI at its core has to answer one question: when does the AI do the hard thinking? Is it when information comes in, or is it when you ask about that information? You've got to pick. That's the fork; everything else follows from that."
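The query-time side can be sketched with SQLite: ingest is one cheap tagged INSERT with provenance columns, and a query like "every Q1 meeting note on pricing" becomes a structured filter over raw rows. Schema and names are assumptions for illustration:

```python
import sqlite3

def open_store() -> sqlite3.Connection:
    """Query-time store: ingest is one cheap INSERT, no upfront synthesis."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE notes (
        id      INTEGER PRIMARY KEY,
        source  TEXT,   -- provenance: where the claim came from
        tag     TEXT,   -- cheap categorization at ingest
        created TEXT,   -- ISO timestamp for time-bounded queries
        body    TEXT)""")
    return db

def add_note(db, source, tag, created, body):
    """Lazy ingest: one row, faithfully stored, untouched."""
    db.execute("INSERT INTO notes (source, tag, created, body) VALUES (?,?,?,?)",
               (source, tag, created, body))

def pricing_notes_in_q1(db):
    """'Every Q1 meeting note on pricing' as a structured filter; a real
    system would run fresh LLM synthesis over these rows at query time."""
    return db.execute(
        "SELECT source, body FROM notes "
        "WHERE tag = 'pricing' AND created BETWEEN '2025-01-01' AND '2025-03-31' "
        "ORDER BY created").fetchall()
```

Because rows are never rewritten, provenance survives: each result carries its `source`, so a claim can always be traced back.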
For teams drowning in AI outputs (meeting summaries, strategies, Slack), prevents "write once, read never" noise. Flags contradictions explicitly vs wiki's potential smoothing.
Tradeoffs: No Universal Winner, But Clear Fork in the Road
Wiki (study guide tutor): Preps perfectly for exams, but no raw precision/filtering. Can't handle structured pulls or multi-agent scale. OpenBrain (filing cabinet librarian): Precise, traceable, agent-scalable, but recomputes synthesis (token burn on repeats).
Whose understanding? Wiki trusts the AI's capture for sharing; database demands provenance. Speaker's bias: lazy ingest drew him to OpenBrain, but he admits the wiki's research edge. Teams: storage shapes decisions, a compounding asset vs a noise pile. Quote: "Karpathy's wiki is like a study guide that a really good tutor writes for you... OpenBrain is like a perfectly organized filing cabinet with a brilliant librarian standing next to that filing cabinet."
Scale issues: Wiki is single-agent with heavy ingest; OpenBrain is multi-agent with heavy queries. Both target the personal/team context layer, 2026's big bet.
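The fork's token economics can be made concrete with back-of-envelope arithmetic (all numbers are illustrative assumptions, not figures from the source):

```python
def compile_time_cost(ingests, queries, synth_tokens, read_tokens):
    """Wiki: pay synthesis once per ingest; each query just reads pages."""
    return ingests * synth_tokens + queries * read_tokens

def query_time_cost(ingests, queries, store_tokens, synth_tokens):
    """Database: ingest is cheap tagging; every query pays fresh synthesis."""
    return ingests * store_tokens + queries * synth_tokens

# Illustrative: 10 papers, synthesis ~20k tokens, page read ~2k, tagging ~500.
wiki = compile_time_cost(10, 50, 20_000, 2_000)  # 300_000 tokens
db   = query_time_cost(10, 50, 500, 20_000)      # 1_005_000 tokens
```

Under these toy numbers the wiki wins once queries repeat, while the database is far cheaper at low query counts, which matches the "token burn on repeats" tradeoff above.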
Hybrid Path: Best of Both via OpenBrain Plugin
The speaker ships an OpenBrain plugin merging wiki synthesis with structured data. Compile narratives where needed, query raw precision anytime. Equips users to pick per need, avoiding "only store" token waste or "only wiki" imprecision. Quote: "I put a plugin into OpenBrain that will help you have the best of both worlds. So you can have the wiki approach Karpathy takes with the structured data that OpenBrain brings."
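The source doesn't detail the plugin's internals, but a plausible shape is: every ingest both stores the raw row (query-time precision, provenance intact) and refreshes a compiled topic page (compile-time cheap retrieval). A conceptual sketch, with the synthesis pass stubbed:

```python
class HybridStore:
    """Assumed hybrid design, not the actual plugin: raw rows for
    database-style filtering plus a compiled page per topic."""

    def __init__(self):
        self.rows = []       # structured, filterable, multi-agent safe
        self.compiled = {}   # topic -> synthesized narrative page

    def ingest(self, topic, source, body):
        self.rows.append({"topic": topic, "source": source, "body": body})
        # Stand-in for the LLM synthesis pass that keeps the page current.
        page = self.compiled.get(topic, f"# {topic}")
        self.compiled[topic] = page + f"\n- {body} [{source}]"

    def narrative(self, topic):
        """Wiki-style: pre-synthesized, zero recompute."""
        return self.compiled[topic]

    def filter(self, topic=None, source=None):
        """Database-style: precise, traceable pulls over raw rows."""
        return [r for r in self.rows
                if (topic is None or r["topic"] == topic)
                and (source is None or r["source"] == source)]
```

The design choice: narratives answer "what do we know about X?" cheaply, while `filter` keeps structured pulls and provenance available, so neither paradigm's weakness is baked in.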
Key Takeaways
- Decide ingest vs query thinking: Compile upfront for cheap synthesis (wiki); query fresh for precision (database).
- Wiki shines in research evolution (10+ papers, connections); preserve raw sources to audit AI edits.
- Database wins structured queries (filters, multi-agent); ideal for ops, teams flagging contradictions.
- Craft wiki instructions meticulously—it's your synthesis blueprint.
- For teams, prioritize provenance to trust shared knowledge.
- Avoid single paradigm: Token waste from pure storage, detail loss from pure synthesis.
- Test hybrids: OpenBrain plugin blends both.
- Track evolutions manually if needed—AI can't fully capture human nuance.
- In 2026, context layer decisions compound: Build asset, not noise.