Ditch Step-by-Step Paths for Clear Destinations
New models like GPT-5.5 often know better routes to a result than your step-by-step instructions do, which makes mega-prompts counterproductive: they bottleneck the model's intelligence by dictating the path. Instead, state the end goal precisely and let the model determine how to get there. For example, replace 'summarize this meeting transcript' with 'turn this transcript into a follow-up email I can send to a client'; the second version reveals intent rather than just an output format. Similarly, swap 'make a table from this spreadsheet' for 'find the three problems in this spreadsheet that would change my decision for X criteria,' which focuses the model on decision impact. Stating the destination unlocks faster, more relevant output because the model handles the 'how' better than a rigid path, and as models advance, fewer use cases need explicit steps at all.
Define Success with Binary Criteria
After setting the destination, specify what good looks like using verifiable, binary criteria: yes/no checks the model can self-audit against before producing output. Examples include 'on-brand for the company,' 'under 200 words,' or 'put the ask in the first three sentences.' Binary beats a spectrum: 'clear' is vague, but 'under 200 words' is checkable, which speeds convergence on quality. In a rewrite prompt: 'Make it clear, calm, and direct. Keep the same facts. Keep it under 200 words. Put the ask in the first three sentences.' The last two criteria can be validated instantly, cutting iteration cycles.
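The point of binary criteria is that they are machine-checkable. A minimal sketch below shows the last two checks from the rewrite prompt as plain functions; the function names and the sentence-splitting heuristic are illustrative, not part of any framework.

```python
def under_word_limit(text, limit=200):
    """Binary check: word count strictly under the limit."""
    return len(text.split()) < limit

def ask_in_first_sentences(text, ask_keywords, n=3):
    """Binary check: does an ask keyword appear in the first n sentences?

    Uses a crude split on '.' and '?' as a sentence boundary heuristic.
    """
    sentences = [s.strip() for s in text.replace("?", ".").split(".") if s.strip()]
    head = " ".join(sentences[:n]).lower()
    return any(kw.lower() in head for kw in ask_keywords)

draft = ("Could you confirm the budget by Friday? We reviewed the Q3 plan. "
         "The team agreed on the new timeline.")
checks = {
    "under 200 words": under_word_limit(draft, 200),
    "ask in first 3 sentences": ask_in_first_sentences(draft, ["confirm"]),
}
print(checks)  # both True for this draft
```

Because each check returns a plain yes/no, you can run them on every model output and reject drafts mechanically, which is exactly what a spectrum criterion like 'clear' does not allow.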
Address Doubt and Set a Finish Line
Smarter models hallucinate more convincingly: they guess with confidence rather than admit uncertainty. Counter this with proof requirements: ask for inline citations such as 'Source: Report X, page Y' for each claim, or instruct 'when unsure, write "unverified" or leave it blank; I'd rather have gaps than guesses.' This shifts the model's incentive from fabricating to honesty and grounds it in the data you provide (e.g., 'use only decisions directly supported by the transcript; put unclear items under open questions'). For heavy reasoning modes ('extra high' in o1, 'heavy' in ChatGPT), prevent endless thinking that wastes time and tokens by setting a finish line: 'Stop once you can answer the main question with enough evidence,' or 'When the output meets the checklist, give the final version.'
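The citation-or-'unverified' rule is also auditable after the fact. A small sketch, assuming claims arrive one per line and citations follow a '(Source: ...)' pattern (both assumptions, not a standard format): any line with neither a citation nor an 'unverified' marker is a bare guess that slipped through the prompt.

```python
import re

# Assumed citation pattern; adjust to whatever format your prompt requires.
CITATION = re.compile(r"\(Source: [^)]+\)")

def audit_claims(lines):
    """Partition claim lines into cited, admitted-unverified, and bare guesses."""
    cited, unverified, guesses = [], [], []
    for line in lines:
        if CITATION.search(line):
            cited.append(line)
        elif "unverified" in line.lower():
            unverified.append(line)
        else:
            guesses.append(line)  # should be empty if the doubt rule held
    return cited, unverified, guesses

output = [
    "Q3 revenue rose 12% (Source: Report X, page 4).",
    "Root cause of churn: unverified.",
    "Headcount will double next year.",
]
cited, unverified, guesses = audit_claims(output)
print(guesses)  # the third line is an ungrounded claim
```

A non-empty `guesses` list tells you the doubt instruction needs strengthening before you trust the output in a liability-sensitive setting.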
Full 4 D's Prompt Transforms Outputs
Combine the four parts into one concise prompt: Destination ('Turn this transcript into a client-ready follow-up email'), Definition ('It clearly states what we decided, what's still open, and the next action for each person'), Doubt ('Use only decisions directly supported by the transcript; put unclear items under open questions'), and Done ('When the checklist is met, give the final email'). The old mega-prompt listed steps instead: 'act as a strategist, read the transcripts, identify themes, extract action items, write the email.' That style is now obsolete. The 4 D's structure yields precise, grounded, efficient results, especially in liability-sensitive domains such as finance, legal, and reputation.
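The assembly is mechanical enough to template. A minimal sketch, assuming you drive the model programmatically; the helper name and section labels are hypothetical, not an official 4 D's format.

```python
def four_ds_prompt(destination, definition, doubt, done):
    """Assemble a 4 D's prompt from its four parts (illustrative helper)."""
    return "\n".join([
        f"Destination: {destination}",
        f"Definition of good: {definition}",
        f"Doubt policy: {doubt}",
        f"Done when: {done}",
    ])

prompt = four_ds_prompt(
    destination="Turn this transcript into a client-ready follow-up email.",
    definition=("It clearly states what we decided, what's still open, "
                "and the next action for each person."),
    doubt=("Use only decisions directly supported by the transcript; "
           "put unclear items under open questions."),
    done="When the checklist is met, give the final email.",
)
print(prompt)
```

Keeping the four parts as separate arguments makes it easy to swap the Destination per task while reusing the same Doubt and Done policies across a whole class of liability-sensitive prompts.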