Claude Sonnet 4 vs GPT-4.1

Performance benchmarks + pricing comparison — updated April 2026

Claude Sonnet 4

Anthropic

Anthropic's balanced model for coding and general tasks. Best price-performance ratio in the Claude family.

Input: $3.00/M
Output: $15.00/M
Context: 200K tokens
Best For: Day-to-day coding, code review, documentation
Benchmark: 78/100

GPT-4.1

OpenAI

Updated GPT-4 generation with improved instruction following and reduced hallucination. Better coding accuracy than GPT-4o.

Input: $2.00/M
Output: $8.00/M
Context: 128K tokens
Best For: Production coding, API development, complex instructions
Benchmark: 80/100

Benchmark Performance Comparison

Third-party benchmark scores — higher is better. Data sourced from SWE-bench, LiveCodeBench, HumanEval, and BigCodeBench.

Benchmark          | Claude Sonnet 4 | GPT-4.1 | Leader
Overall Score      | 78              | 80      | GPT-4.1 by 2 pts
SWE-bench Verified | 74              | 76      | GPT-4.1 by 2 pts
LiveCodeBench      | 82              | 82      | Tied
HumanEval          | 92              | 94      | GPT-4.1 by 2 pts
BigCodeBench       | 64              | 68      | GPT-4.1 by 4 pts

Cost Comparison by Scenario

Estimated cost per project with 30% cache hit rate. Actual costs may vary based on usage patterns.

Scenario                  | Claude Sonnet 4 | GPT-4.1 | Savings
Small Script (1K lines)   | $0.62           | $0.31   | GPT-4.1 saves $0.31 (50%)
Medium Feature (10K lines)| $4.66           | $2.30   | GPT-4.1 saves $2.36 (51%)
Large Project (50K lines) | $23.29          | $11.50  | GPT-4.1 saves $11.79 (51%)
Code Review (5K lines)    | $1.20           | $0.55   | GPT-4.1 saves $0.65 (54%)
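The article does not publish the token counts behind each scenario, but the general shape of the estimate can be sketched. This is a minimal illustration, not the article's exact model: it assumes cached input tokens bill at 10% of the normal input price (actual cache discounts vary by provider), and the token counts in the usage example are hypothetical.

```python
def project_cost(input_tokens, output_tokens, input_price, output_price,
                 cache_hit_rate=0.30, cached_price=None):
    """Estimate USD cost for one project.

    Prices are per million tokens. Cached input tokens are assumed to
    bill at 10% of the normal input price unless cached_price is given.
    """
    if cached_price is None:
        cached_price = input_price * 0.10
    fresh_in = input_tokens * (1 - cache_hit_rate) * input_price
    cached_in = input_tokens * cache_hit_rate * cached_price
    out = output_tokens * output_price
    return (fresh_in + cached_in + out) / 1_000_000

# Hypothetical workload: 1M input tokens, 100K output tokens
print(round(project_cost(1_000_000, 100_000, 3.00, 15.00), 2))  # Claude Sonnet 4 -> 3.69
print(round(project_cost(1_000_000, 100_000, 2.00, 8.00), 2))   # GPT-4.1 -> 2.26
```

Because output tokens cost roughly twice as much per million on Claude Sonnet 4, the output term dominates the gap for generation-heavy workloads.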

Value Analysis (Price per Benchmark Score Point)

Lower is better — how much you pay for each point of benchmark performance.

Model           | Overall Score | Price per Score Point | Verdict
Claude Sonnet 4 | 78            | $0.038/pt             | Higher cost per point
GPT-4.1         | 80            | $0.025/pt             | Better value

GPT-4.1 delivers the better value at $0.025 of input price per score point, versus $0.038 for Claude Sonnet 4.
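The metric here appears to be input price (per million tokens) divided by overall benchmark score, which is what reproduces the $0.038 figure for Claude Sonnet 4; under that assumption it is a one-line calculation:

```python
def price_per_point(input_price_per_m, overall_score):
    # USD of input price (per million tokens) per benchmark point
    return input_price_per_m / overall_score

print(round(price_per_point(3.00, 78), 3))  # Claude Sonnet 4 -> 0.038
print(round(price_per_point(2.00, 80), 3))  # GPT-4.1 -> 0.025
```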

Strengths & Weaknesses

Claude Sonnet 4

  • + Larger 200K context window
  • + Strong at web development
  • + Excellent code review
  • - Struggles with complex algorithms
  • - Less consistent on system design

GPT-4.1

  • + Strong across all benchmarks
  • + Lower per-token pricing
  • - Smaller 128K context window

Verdict

GPT-4.1 wins on both price and performance — $2.00/M input with a benchmark score of 80/100.

For most developers, this is the clear choice between these two models.
