AI Lacks Laziness: Prioritize Abstractions, TDD, and Doubt
A human programmer's laziness builds crisp abstractions that simplify code; AI bloats it. Apply TDD to agent prompts (instructions first, then verification) and teach AI doubt to avoid overconfident errors.
Laziness Drives Essential Abstractions, Which AI Ignores
Larry Wall's three programmer virtues (laziness, impatience, and hubris) put laziness at the heart of abstraction. Bryan Cantrill explains that laziness forces simplicity: "make the system as simple as possible (but no simpler)" under time constraints, yielding powerful models that reduce code while deepening domain understanding. AI lacks this pressure; LLMs generate endless code cheaply, producing a bloated "layercake of garbage" that flatters line-count vanity while increasing cognitive load and future maintenance cost. Example: modifying a music playlist generator, where an initial overcomplication was dropped via YAGNI (You Ain't Gonna Need It), shrinking the script to roughly 24 lines. An LLM might speed the initial output but embed bloat, inviting shrugged LGTM approvals and downstream issues. Counter the brogrammer boast of 37k lines/day; the best engineering stems from human time limits enforcing crispness.
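As a hedged illustration of the YAGNI endpoint (the function name, track schema, and seeding are invented for this sketch, not taken from the original playlist generator), a crisp version can stay this small:

```python
import random

def make_playlist(tracks, genre, length, seed=None):
    # Filter by genre, shuffle reproducibly, truncate: no plugin
    # system, no config object, nothing speculative.
    rng = random.Random(seed)
    matches = [t for t in tracks if t["genre"] == genre]
    rng.shuffle(matches)
    return matches[:length]

tracks = [
    {"title": "So What", "genre": "jazz"},
    {"title": "Paranoid", "genre": "rock"},
    {"title": "Naima", "genre": "jazz"},
]
playlist = make_playlist(tracks, "jazz", 2, seed=1)
```

The point is what is absent: every feature an LLM would eagerly scaffold (genre registries, strategy classes, caching layers) is deferred until someone actually needs it.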
TDD Sequence for Reliable AI Agent Outputs
Apply Test-Driven Development to agent prompting: specify the desired behavior before the implementation. Jessica Kerr's example ensures documentation updates accompany code changes. Break it into two steps: (1) instructions in AGENTS.md telling the agent to scan and update docs; (2) a reviewer agent verifying PRs for missed updates. Write the instructions first, as the "test" that defines desired behavior, then add the verification step. This mirrors classic TDD: specify the outcome before the implementation, catching gaps early and building incrementally.
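The reviewer-agent step can be sketched as a simple check; the file patterns and function name here are assumptions for illustration, not Kerr's actual setup:

```python
def docs_updated(changed_files):
    # The "test" a reviewer agent applies to a PR: if source code
    # changed, some documentation file must have changed too.
    code_changed = any(f.endswith(".py") for f in changed_files)
    docs_changed = any(f.endswith(".md") or f.startswith("docs/")
                       for f in changed_files)
    return docs_changed or not code_changed

assert docs_updated(["src/api.py", "docs/api.md"])  # code plus docs: passes
assert not docs_updated(["src/api.py"])             # code without docs: flagged
assert docs_updated(["README.md"])                  # docs-only change: passes
```

Writing this check first pins down what "the agent keeps docs in sync" means before any prompt wording is tuned.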
Design AI Restraint via Doubt for High-Stakes Decisions
AI's decisiveness (probabilistically resolving every ambiguity) fails in open systems with asymmetric risks, where the right move may be to defer or do nothing. Mark Little cites the Dark Star scene in which the crew uses philosophy to make a sentient bomb doubt its detonation order ("no proof the data is correct"), expanding its consciousness beyond raw sensory impulses: a metaphor for AI hallucinations born of overconfidence. Solution: engineer doubt explicitly, valuing human-like uncertainty in decisions with profound consequences. Restraint becomes a core capability for autonomous, safe AI that does not need constant oversight.
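One way to make deferral explicit is an expected-cost gate; the function name and cost figures below are invented for illustration, a minimal sketch rather than a proposed safety mechanism:

```python
def act_or_defer(p_correct, cost_if_wrong, cost_of_deferral):
    # Doubt, made mechanical: act only when the expected cost of being
    # wrong is lower than the cost of pausing to ask a human.
    expected_cost_of_acting = (1 - p_correct) * cost_if_wrong
    return "act" if expected_cost_of_acting < cost_of_deferral else "defer"

# Low stakes: a 1% error chance on a cheap mistake is worth taking.
print(act_or_defer(0.99, 1_000, 50))      # act
# High stakes: the same model confidence is no longer enough.
print(act_or_defer(0.99, 1_000_000, 50))  # defer
```

The asymmetry does the work: identical confidence yields different decisions once the downside grows, which is exactly the restraint the Dark Star bomb lacked.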