Claude Opus 4.1 Reaches 74.5% on SWE-bench for Superior Coding

Claude Opus 4.1 improves agentic tasks, coding, and reasoning, reaching 74.5% on SWE-bench Verified, with gains in multi-file refactoring and precise debugging; it is available now at the same pricing as Opus 4.

Coding Gains Target Production Workflows

Claude Opus 4.1 achieves 74.5% on SWE-bench Verified across all 500 problems using only bash and file-editing tools, with no planning tool, outperforming prior models on multi-file refactoring. This setup mirrors real codebase work: the model edits files through targeted string replacements, making precise fixes without introducing unnecessary changes or bugs. Rakuten Group uses it for everyday debugging in large codebases and favors its pinpoint accuracy. On Windsurf's junior-developer benchmark, it shows roughly a one-standard-deviation improvement over Opus 4, a jump comparable to the leap from Sonnet 3.7 to Sonnet 4, indicating reliable handling of junior-level coding work.
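To make the string-replacement workflow concrete, here is a minimal Python sketch of how such a file-editing step could apply a fix. The apply_str_replace helper, its one-match rule, and the example file path are illustrative assumptions, not Anthropic's actual tooling.

```python
from pathlib import Path


def apply_str_replace(path: str, old: str, new: str) -> None:
    """Hypothetical file-editing tool: replace exactly one occurrence of `old` with `new`.

    Requiring a unique match keeps the edit precise and leaves the rest of the
    file untouched; this mirrors the string-replacement style of editing
    described above, not Anthropic's actual implementation.
    """
    text = Path(path).read_text()
    matches = text.count(old)
    if matches != 1:
        raise ValueError(f"expected exactly one match in {path}, found {matches}")
    Path(path).write_text(text.replace(old, new, 1))


# Example (hypothetical file and bug): fix an off-by-one slice without
# rewriting the surrounding function.
apply_str_replace(
    "src/pagination.py",
    old="return items[:limit + 1]",
    new="return items[:limit]",
)
```

Requiring exactly one match is one way to keep edits surgical: if the snippet is ambiguous or missing, the tool fails loudly instead of changing the wrong code.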

Agentic Tasks and Research Boosted by Extended Thinking

Opus 4.1 improves detail tracking, in-depth research, and data analysis through agentic search. On TAU-bench (Airline and Retail agents), scores improve with a prompt addendum that encourages explicit reasoning, using extended thinking with a budget of up to 64K tokens and a cap of 100 steps (most trajectories finish in under 30). This leverages hybrid reasoning across multi-turn trajectories, keeping the model's thinking separate from its tool-calling actions. GPQA Diamond, MMMLU, MMMU, and AIME were evaluated with extended thinking; SWE-bench and Terminal-Bench were not. GitHub reports broad gains over Opus 4, especially in multi-file code refactoring.
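As a rough illustration of extended thinking through the API, the sketch below uses the Anthropic Python SDK's thinking parameter. The 8K budget, the prompt, and the printing logic are illustrative assumptions rather than the benchmark configuration (the TAU-bench runs cited above used budgets of up to 64K tokens).

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Enable extended thinking by reserving a thinking budget inside max_tokens.
response = client.messages.create(
    model="claude-opus-4-1-20250805",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[
        {"role": "user", "content": "Plan a multi-file refactor of the auth module, step by step."}
    ],
)

# Responses interleave thinking blocks (the model's reasoning) with text blocks
# (the final answer), keeping thoughts separate from the visible output.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)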

Immediate Upgrade Path Delivers Value

Switch to Opus 4.1 for all tasks via the API (model id claude-opus-4-1-20250805), Claude Code, Amazon Bedrock, or Google Cloud Vertex AI, at the same pricing as Opus 4. Larger upgrades are planned to follow soon. User feedback drives the next iterations; see the system card, model page, pricing page, and documentation for details. Hybrid reasoning helps maximize scores by balancing direct tool use with chain-of-thought on complex problems.
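In practice the upgrade can be a one-line change of the model id. The sketch below assumes the Anthropic Python SDK; the commented-out Opus 4 id and the prompt are shown only for illustration.

```python
import anthropic

client = anthropic.Anthropic()

# Upgrading is a one-line change of the model id; pricing matches Opus 4.
# The commented-out previous-generation id is illustrative and may differ.
# MODEL = "claude-opus-4-20250514"
MODEL = "claude-opus-4-1-20250805"

reply = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the failing test in tests/test_cache.py."}],
)
print(reply.content[0].text)
```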
