Intelligence Requires Internal State and Durable Memory
True intelligence emerges from predictive modeling of P(X, H, O): inputs, hidden states, and actions. LLMs lack a persistent H, the stable identity built from personalized memory, and this absence underlies their epistemic flaws.
Cognitive Science vs. Computational Physics on Intelligence
Cognitive science defines intelligence through observable traits: flexibility, generality, robustness, transfer learning, and goal-direction. These traits enable extrapolating knowledge to novel environments while maintaining competence. Definitions from Legg & Hutter (2007), Goertzel (2014), and Hendrycks et al. (2025) follow this top-down checklist, but it is anthropocentric: it disqualifies non-human mechanisms that achieve similar outcomes.
Computational physics views intelligence bottom-up as predictive modeling of P(X, H, O), where X represents sensory inputs, H internal states (needs, drives), and O actions. A bacterium exemplifies this: it senses nutrients (X), checks its energy reserves (H), and chooses to swim or tumble (O), exhibiting emergent traits such as goal-direction. Complex cognition, including human reasoning, scales this up; good prediction yields the traits of intelligence as byproducts.
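The bacterium's loop can be sketched as a toy agent; a minimal sketch, where the class name, thresholds, and energy arithmetic are illustrative assumptions, not a biological model:

```python
class Bacterium:
    """Toy chemotaxis agent: predicts which action best serves internal needs."""

    def __init__(self):
        self.energy = 0.6  # H: internal state, hidden from the environment

    def act(self, nutrient_gradient):
        """X: sensed nutrient gradient; O: 'swim' (exploit) or 'tumble' (explore)."""
        self.energy -= 0.1  # metabolism drains the internal drive
        # Prediction: swimming up a positive gradient should restore energy;
        # with no gradient, tumble to resample the environment at random.
        if nutrient_gradient > 0 and self.energy < 0.8:
            self.energy = min(1.0, self.energy + nutrient_gradient)
            return "swim"
        return "tumble"

b = Bacterium()
print(b.act(0.5))  # "swim": positive gradient plus energy need yields goal-directed action
```

The point of the sketch is that goal-direction falls out of coupling X (the gradient) to H (the energy drive): neither input alone determines the action.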
Benchmarks fail as reliable measures because the goalposts shift: models game them via training, per Gary Marcus's critique. Intelligence exists on a continuum (bacterium to human to LLM) and is better measured by comparative environmental modeling than by binary thresholds.
LLMs Lack Epistemic Evaluation and Individuation
LLMs excel at generative plausibility: next-token prediction from P(X, O) alone, which mimics intelligent outputs without internal grounding. Humans, by contrast, use epistemic evaluation: decisions rooted in knowledge, experience, beliefs, morals, and awareness of uncertainty.
Training on human data embeds partial epistemic norms, but LLMs overconfidently classify under sparse evidence, whereas humans withhold judgment (Loru et al., 2025 experiment). Chain-of-thought prompting aids deliberation but lacks the deeper caution that comes from personal values.
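The failure mode above, confident classification under sparse evidence, contrasts with selective prediction, where a system abstains unless the evidence clears both a volume and a margin threshold. A minimal sketch; `min_evidence`, `min_margin`, and the spam/ham labels are illustrative assumptions, not from Loru et al. (2025):

```python
def classify_with_abstention(evidence_counts, min_evidence=5, min_margin=0.7):
    """Epistemic-style decision: answer only when evidence is both
    sufficient in volume and decisive in direction; otherwise withhold.

    evidence_counts maps label -> number of supporting observations.
    """
    total = sum(evidence_counts.values())
    if total < min_evidence:
        return "withhold: evidence too sparse"
    best_label = max(evidence_counts, key=evidence_counts.get)
    if evidence_counts[best_label] / total < min_margin:
        return "withhold: evidence not decisive"
    return best_label

def classify_greedy(evidence_counts):
    """Generative-plausibility style: always emit the most likely label."""
    return max(evidence_counts, key=evidence_counts.get)

sparse = {"spam": 2, "ham": 1}
print(classify_greedy(sparse))           # "spam": confident on 3 observations
print(classify_with_abstention(sparse))  # withholds judgment instead
```

The greedy path mirrors next-token prediction, which always has a most-plausible continuation; the abstaining path makes the missing epistemic check explicit as a refusal state.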
Core flaw: no persistent H. LLMs adopt any prompted persona without a stable identity, modeling learned patterns rather than lived experience. This "individuation problem" prevents anchoring predictions in autobiographical history, beliefs, or morals.
Durable Memory Resolves Individuation for AGI
Identity forms from personalized memory plus the beliefs derived from it: recursive experience shapes a moral compass and a self. LLMs need a hippocampus-like system that stores experiences, consolidates them into abstractions over time, and feeds those abstractions back into prediction.
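One way to picture the store-consolidate-feedback loop is a toy episodic memory with periodic consolidation; `DurableMemory`, its consolidation interval, and its success-tally "beliefs" are illustrative assumptions, not a proposed implementation:

```python
from collections import Counter

class DurableMemory:
    """Toy hippocampus-like store: episodes accumulate, then consolidate
    into abstractions (here, success tallies) that bias future predictions."""

    def __init__(self, consolidate_every=3):
        self.episodes = []          # raw autobiographical experiences
        self.beliefs = Counter()    # consolidated abstractions
        self.consolidate_every = consolidate_every

    def store(self, action, outcome):
        self.episodes.append((action, outcome))
        if len(self.episodes) % self.consolidate_every == 0:
            self.consolidate()

    def consolidate(self):
        """Abstract over episodes: tally how often each action succeeded."""
        self.beliefs = Counter(
            action for action, outcome in self.episodes if outcome == "success"
        )

    def predict(self, candidate_actions):
        """Feed memory back into prediction: prefer historically successful actions."""
        return max(candidate_actions, key=lambda a: self.beliefs[a])

m = DurableMemory()
m.store("ask", "success")
m.store("guess", "failure")
m.store("ask", "success")           # third episode triggers consolidation
print(m.predict(["guess", "ask"]))  # "ask": preference grounded in personal history
```

The separation between raw episodes and consolidated beliefs mirrors the article's point: identity-relevant caution and preference come from abstractions over one's own history, not from the raw data stream.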
This enables preferences drawn from history, caution learned from failures, and moral intuition grounded in personal consequences rather than generic data. Evolution offers a parallel: memory scaled from genetic to epigenetic to neural/hippocampal to cultural, lifting simple prediction into complex intelligence.
Architectural convergence on sophisticated, persistent memory achieves individuation, bridging cognitive science's traits and physics' prediction for true AGI.