AI Fixes Bad Decisions by Forcing You to Think, Not Answer
AI ruins decisions by jumping to answers; counter it with a 5-movement protocol (Dump, Mirror, Dig, Reframe, Landing) that makes Claude ask targeted questions from your words, uncovering hidden assumptions and contradictions until you reach your own conclusion.
Recognize AI's 5 Decision Traps to Avoid Quick-Fix Comfort
AI mimics Pascal's 'empty room' problem: humans (and models) flee thinking's discomfort by resolving ambiguity fast, but this blocks real insight. Common traps include: (1) Instant solutions like pros/cons lists, shifting you to evaluate AI's frame over yours; (2) Mirroring bias—AI agrees and rationalizes your leanings, boosting false confidence per MIT research on agreeable LLMs; (3) Balanced lists that replace gut with generic spreadsheets, ignoring your priorities (e.g., 40 minutes debating newsletter header blue shades); (4) Unchallenged frames, solving wrong problems via framing effects; (5) Early summaries that fake closure with conclusion-shaped certainty, hiding deeper issues. Root cause: Models train for 'helpfulness' via answers, stealing productive discomfort. Test fix now: Prompt Claude to reflect your stuck point sharply in one paragraph—no solutions—sparking 'no, it's more like...' corrections that ignite thinking.
Engineer Thinking with 5-Movement Protocol
Reverse-engineer productive conversations into repeatable structure: (1) Dump: AI listens silently, prompting 'what else?' to empty your full mess without reframing. (2) Mirror: Sharp reflection: 'Real question is X, stuck because Y.' (3) Dig: Core engine—questions mine your words for cracks like hidden assumptions ('Is A vs. B fixed, or viable C?'), avoided territory ('No daily life impact mentioned—is it irrelevant or dodged?'), emotional drivers ('Audience reaction circled thrice—what's behind it?'), contradictions ('Quality first, but speed-favoring option—how reconciled?'), performative logic ('Sounds scripted—what do you think?'). No generic queries; endless until insights emerge. (4) Reframe: Expose wrong problems ('Not pricing, but Z'). (5) Landing: AI waits silently—you voice your answer. This encodes human-like probing into Claude via .md skills, resisting answer-training for discomfort-driven clarity.
Build It: Mechanics, Signals, and Guardrails
Protocol runs as Claude skill with per-movement rules: Questions only from your said/unsaid words; no generic applies-to-anyone fails. Tracks signals like repetition (emotions), gaps (avoidance), clashes (contradictions). Constraints block resolutions until you lead. Author's build revealed mistakes like over-generalizing, yielding targeted hunts. Paid details expand to full .md file (under 5-min setup), turning AI into non-answerer that forces solo room-sitting for decisions like product pivots.