Augustine's Diagnosis: Systems Fail from Misdirected Loves
Human systems collapse not from poor execution but from misordered desires, what Augustine calls the problem of ordo amoris, the ordering of loves. In the Confessions, he shows that desire shapes what we build: societies reflect their collective loves. The City of Man prioritizes self-love and the libido dominandi, the drive to dominate; it chases unstable goods such as security and status, and is therefore inherently unstable. The City of God orients itself toward divine order through rightly ordered love. In On Christian Doctrine, Augustine distinguishes uti (using a thing as a means) from frui (enjoying it as an end); AI errs by blurring the two, treating instrumental utility as if it were moral authority. Even read secularly, the diagnosis mirrors bounded rationality: no system can justify its own value hierarchy from within, whether the limit comes from cognitive constraints or from original sin.
This creates a structural gap: optimization presupposes a prior definition of the good, and every such definition is partial and contested. Builders ignore this at their peril. AI does not access fundamental truth; it only scales the values encoded into it.
AI's False Promise: Efficiency Masks Distortion
AI intensifies the problem by optimizing flawed inputs at scale. Hiring algorithms narrow 'qualified' to keyword matches; recommendation systems redefine relevance as engagement; risk models formalize historical biases. As MIT Technology Review's coverage has shown, AI does not eliminate bias; it embeds bias beneath a veneer of objectivity. Generative tools and AGI pursuits assume that more intelligence will resolve value disputes. It will not; it only amplifies the priors it inherits. Treating engagement as the 'good' maximizes attention; optimizing for efficiency sacrifices depth. Outputs harden into conclusions and narrow human perception: the tool frames reality rather than opening a window onto it.
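The keyword-narrowing problem can be made concrete. Below is a minimal, hypothetical sketch (the keyword list and names are illustrative assumptions, not any real hiring system) showing how a screen quietly redefines 'qualified' as 'contains these exact tokens':

```python
# Hypothetical sketch: a keyword screen quietly redefines "qualified".
# The keyword set is an encoded value judgment, not a neutral fact.

REQUIRED_KEYWORDS = {"python", "kubernetes", "agile"}

def screen_resume(resume_text: str) -> bool:
    """Pass only resumes containing every required keyword verbatim."""
    words = set(resume_text.lower().split())
    return REQUIRED_KEYWORDS.issubset(words)

# A strong candidate who describes the same experience in different
# vocabulary is filtered out before any human sees the resume:
resume = "Led container orchestration migrations; deep Python expertise; agile teams"
print(screen_resume(resume))  # False: "kubernetes" never appears verbatim
```

The bias is not in any single line; it is in the unexamined decision that vocabulary is a proxy for competence.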
Metrics make innovation feel precise, but unexamined norms persist beneath them. Consistent results signal stability, not legitimacy: a well-oiled machine of the City of Man.
Remedies for Builders: Judgment Over Automation
Restore deliberation: treat AI outputs as inputs to human reasoning, never as final verdicts. Structure organizations so that judgment remains authoritative; for example, require manual review of algorithmic hiring scores.
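One way to keep judgment authoritative is structural: scores order a review queue but cannot decide anything. A minimal sketch, with all names and the schema assumed for illustration:

```python
# Sketch of a human-in-the-loop pattern: model scores are advisory
# signals that order a queue; only a reviewer may set a decision.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    model_score: float          # advisory signal from the hiring model
    decision: str = "pending"   # only a human reviewer changes this

def route_for_review(candidates):
    """Every candidate reaches a reviewer, highest-scored first.
    Nothing is auto-rejected: the score orders the queue, it does not decide."""
    return sorted(candidates, key=lambda c: c.model_score, reverse=True)

def reviewer_decide(candidate: Candidate, verdict: str, rationale: str) -> Candidate:
    """A human records the verdict together with a reasoned rationale."""
    candidate.decision = f"{verdict}: {rationale}"
    return candidate

queue = route_for_review([Candidate("A", 0.91), Candidate("B", 0.42)])
# Even the low-scored candidate is in the queue, not silently dropped:
print(queue[-1].name)  # B
```

The design choice is that rejection requires a human act with a recorded rationale; the model cannot produce a terminal state on its own.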
Expose values: surface embedded priorities as political choices open to contestation, as algorithmic accountability research recommends. Name a model's assumptions explicitly (e.g., what 'qualified' means) so they remain open to revision.
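Naming assumptions can be as simple as writing the value judgments into a versioned, human-readable document that the scoring code must read from, rather than burying them in feature weights. A hedged sketch; the schema and field names are illustrative assumptions:

```python
# Hypothetical sketch: embedded priorities recorded as an explicit,
# contestable document. Changing what "qualified" means requires
# editing this structure, which can be versioned and reviewed.

QUALIFIED_DEFINITION = {
    "version": 3,
    "assumption": "'qualified' means relevant experience, not credential pedigree",
    "signals": {
        "years_relevant_experience": {"weight": 0.6, "rationale": "core proxy for capability"},
        "degree_prestige": {"weight": 0.0, "rationale": "contested; zeroed pending review"},
    },
    "open_objections": ["experience weighting may disadvantage career-changers"],
}

def score(candidate_signals: dict) -> float:
    """Weighted sum over only the signals the definition names."""
    return sum(
        spec["weight"] * candidate_signals.get(name, 0.0)
        for name, spec in QUALIFIED_DEFINITION["signals"].items()
    )

print(score({"years_relevant_experience": 5, "degree_prestige": 10}))  # 3.0
```

Nothing about the dictionary is technically sophisticated; its point is institutional. A weight of zero with a recorded rationale is a political choice made visible, exactly the kind of surfacing the accountability literature asks for.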
Cultivate institutional humility: audit whether outputs align with the right goals, not merely the stated ones. Efficiency does not validate ends. The result: AI aids moral seriousness without substituting for it, guarding systems against disordered orientations.
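An audit of this kind can start very small: compare outcome rates across groups and ask whether any gap is explicable by the system's stated goal. A sketch over hypothetical decision records (the record format is an assumption for illustration):

```python
# Sketch of an outcome audit: does the system's actual behavior match
# its stated goal, or does it produce gaps the goal cannot explain?

def pass_rate(records, group):
    """Fraction of records in `group` that passed the screen."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["passed"] for r in subset) / len(subset)

records = [
    {"group": "X", "passed": True}, {"group": "X", "passed": True},
    {"group": "Y", "passed": False}, {"group": "Y", "passed": True},
]
gap = pass_rate(records, "X") - pass_rate(records, "Y")
print(round(gap, 2))  # 0.5: a gap the stated goal cannot explain warrants scrutiny
```

The audit does not decide whether the gap is unjust; it forces the question into human deliberation, which is the point.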