GPT-5.5 Dominates Agentic Tasks with Token Efficiency

GPT-5.5 scores 84.9% on GDPval (spanning 44 professions) and 78.7% on OSWorld (above the 72.4% human baseline), and handles computer control, coding, and spreadsheets using fewer tokens than GPT-5.4, but API pricing doubles to $5/$30 per million input/output tokens.

Agentic Capabilities Redefine Knowledge Work

GPT-5.5 powers agents for complex real-world tasks such as tool use, work verification, and end-to-end task completion, marking a shift from coding-only assistants to full computer control. It navigates browsers, clicks, types, and handles inputs like a human tester, enabling QA workflows and app interactions. Key strengths include writing and debugging code, online research, data analysis, document creation, and spreadsheet editing; it navigates massive sheets by generating efficient code rather than risking breakage through direct edits. It beats the human baseline on OSWorld (78.7% vs. 72.4%), raising the question of what knowledge work remains AI-proof. On GDPval's 44 professions it scores 84.9%, demonstrating end-to-end task handling without a speed penalty: per-token latency matches GPT-5.4.

Demos showcase this: an interactive Artemis 2 mission-simulator web app; a simple dungeon game with swinging weapons, health bars, and enemy movement; a 3D tank game; and financial modeling in a complex Excel workbook through reasoning and iterative updates. Browser and computer use extends to testing apps or full desktop automation.

Token Efficiency Outweighs Benchmark Leads

While it tops agentic coding benchmarks such as Terminal-Bench using fewer tokens (a key factor in production), it trails Opus on SWE-Bench Pro. Context efficiency means the same tasks consume fewer tokens, rounds, and revisions than on GPT-5.4, which helps justify the higher cost. The Artificial Analysis Intelligence Index charts performance across reasoning-effort levels, useful for tuning both API and consumer deployments.
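The cost argument above can be sanity-checked with simple break-even arithmetic: a model charging twice as much per token is still cheaper per task if it finishes in fewer than half the tokens. A minimal sketch, where all token counts are hypothetical illustrations (not published figures) and only the $5/$30 rates come from the article:

```python
def task_cost(input_tokens: int, output_tokens: int,
              in_price_per_m: float, out_price_per_m: float) -> float:
    """Total API cost in dollars for one task at per-million-token rates."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1e6

# GPT-5.4-style rates (half of GPT-5.5's $5/$30, per the article's pricing note)
old = task_cost(200_000, 40_000, in_price_per_m=2.5, out_price_per_m=15.0)

# GPT-5.5 at $5/$30, assuming (hypothetically) it needs ~40% of the tokens
new = task_cost(80_000, 16_000, in_price_per_m=5.0, out_price_per_m=30.0)

print(f"old: ${old:.2f}, new: ${new:.2f}")  # old: $1.10, new: $0.88
```

Under these assumed token counts, the doubled per-token rate is more than offset by the smaller token footprint, which is the production argument the article makes.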

Pricing and Access Trade-offs

GPT-5.5 rolls out in ChatGPT to Plus, Pro, Business, and Enterprise (as the thinking variant), with GPT-5.5 Pro available across paid tiers for harder tasks. Consumer side: a 400k-token context window, with a fast mode at extra cost, as with GPT-5.4. API: $5/M input and $30/M output (double GPT-5.4's rates); Pro: $30/M input and $180/M output, likewise double the prior pricing. Tuning Codex to use fewer tokens can offset the expense, delivering better results more cheaply over the long run.
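To make the tier gap concrete, a small calculator over the article's price table; the model-name strings and the 50k/5k request sizes are hypothetical illustrations:

```python
# Price table from the article (USD per million tokens); model IDs are assumed.
PRICES = {
    "gpt-5.5":     {"input": 5.0,  "output": 30.0},
    "gpt-5.5-pro": {"input": 30.0, "output": 180.0},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1e6

# Illustrative request: 50k-token prompt, 5k-token response
std = request_cost("gpt-5.5", 50_000, 5_000)      # $0.25 + $0.15 = $0.40
pro = request_cost("gpt-5.5-pro", 50_000, 5_000)  # $1.50 + $0.90 = $2.40
```

At these rates the Pro tier costs 6x the standard tier for an identical request, so routing only genuinely hard tasks to Pro is the natural cost-control lever.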

Summarized by x-ai/grok-4.1-fast via openrouter

© 2026 Edge