Claude 3.5 Sonnet vs GPT-4o

Performance benchmarks + pricing comparison — updated April 2026

Claude 3.5 Sonnet

Anthropic

Anthropic's previous-generation Sonnet. Still excellent for coding tasks, at the same price point as the newer Sonnet 4.

Input: $3.00/M tokens
Output: $15.00/M tokens
Context: 200K tokens
Best for: Coding assistants, web development, data analysis
Benchmark: 72/100

GPT-4o

OpenAI

OpenAI's flagship multimodal model. Strong coding and reasoning at competitive pricing.

Input: $2.50/M tokens
Output: $10.00/M tokens
Context: 128K tokens
Best for: General coding, multimodal tasks, chatbots
Benchmark: 75/100
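
To see what these list prices mean per request, the sketch below multiplies token counts by the per-million rates from the cards above. The 2,000-token prompt and 500-token completion are illustrative assumptions, not figures from this comparison.

```python
# List prices from the cards above, in USD per 1M tokens.
PRICES = {
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at list prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion (assumed sizes).
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f}")
# claude-3.5-sonnet: $0.0135
# gpt-4o: $0.0100
```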

Benchmark Performance Comparison

Third-party benchmark scores — higher is better. Data sourced from SWE-bench, LiveCodeBench, HumanEval, and BigCodeBench.

| Benchmark | Claude 3.5 Sonnet | GPT-4o | Leader |
| --- | --- | --- | --- |
| Overall Score | 72 | 75 | GPT-4o by 3 pts |
| SWE-bench Verified | 68 | 70 | GPT-4o by 2 pts |
| LiveCodeBench | 75 | 78 | GPT-4o by 3 pts |
| HumanEval | 90 | 90 | Tie |
| BigCodeBench | 58 | 62 | GPT-4o by 4 pts |

Cost Comparison by Scenario

Estimated cost per project, assuming a 30% prompt-cache hit rate. Actual costs vary with usage patterns; a sketch of the underlying calculation follows the table.

| Scenario | Claude 3.5 Sonnet | GPT-4o | Savings |
| --- | --- | --- | --- |
| Small Script (1K lines) | $0.62 | $0.41 | GPT-4o saves $0.21 (34%) |
| Medium Feature (10K lines) | $4.66 | $3.06 | GPT-4o saves $1.60 (34%) |
| Large Project (50K lines) | $23.29 | $15.31 | GPT-4o saves $7.98 (34%) |
| Code Review (5K lines) | $1.20 | $0.78 | GPT-4o saves $0.42 (35%) |
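
A cache-aware estimate blends the input price by the hit rate, then adds output cost. The sketch below shows the shape of that calculation; the cache-read rates ($0.30/M for Anthropic, $1.25/M for OpenAI), the 10-tokens-per-line heuristic, and the output ratio are all assumptions rather than this table's published methodology, so it will not reproduce the exact figures above.

```python
CACHE_HIT_RATE = 0.30   # the table's stated assumption
TOKENS_PER_LINE = 10    # rough heuristic, assumed

MODELS = {
    # (base input $/M, cache-read $/M, output $/M); cache-read rates assumed
    "claude-3.5-sonnet": (3.00, 0.30, 15.00),
    "gpt-4o": (2.50, 1.25, 10.00),
}

def project_cost(model: str, lines: int, output_ratio: float = 0.25) -> float:
    """Estimate project cost: cache-blended input price plus output tokens,
    where output volume is assumed to be a fraction of input volume."""
    base_in, cached_in, out = MODELS[model]
    input_tokens = lines * TOKENS_PER_LINE
    # Cached tokens bill at the cheaper read rate; the rest at the base rate.
    blended_in = CACHE_HIT_RATE * cached_in + (1 - CACHE_HIT_RATE) * base_in
    output_tokens = input_tokens * output_ratio
    return (input_tokens * blended_in + output_tokens * out) / 1_000_000

print(f"10K-line feature, Claude: ${project_cost('claude-3.5-sonnet', 10_000):.2f}")
print(f"10K-line feature, GPT-4o: ${project_cost('gpt-4o', 10_000):.2f}")
```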

Value Analysis (Price per Benchmark Score Point)

Lower is better: input price per million tokens divided by the overall benchmark score, i.e. how much you pay for each point of benchmark performance.

| Model | Overall Score | Price per Score Point | Verdict |
| --- | --- | --- | --- |
| Claude 3.5 Sonnet | 72 | $0.042/pt | Higher cost per point |
| GPT-4o | 75 | $0.033/pt | Better value |

GPT-4o delivers the best value at $0.033 per score point.
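
The figures above are consistent with input price per million tokens divided by overall score; a minimal check:

```python
# Value metric: input $/M tokens divided by overall benchmark score.
models = {
    "Claude 3.5 Sonnet": {"input_price": 3.00, "score": 72},
    "GPT-4o": {"input_price": 2.50, "score": 75},
}

for name, m in models.items():
    print(f"{name}: ${m['input_price'] / m['score']:.3f}/pt")
# Claude 3.5 Sonnet: $0.042/pt
# GPT-4o: $0.033/pt
```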

Strengths & Weaknesses

Claude 3.5 Sonnet

  • + Balanced performance
  • + Computer use capability
  • + Artifact generation
  • - Older architecture
  • - Trails the newer Sonnet 4

GPT-4o

  • + Strong general-purpose
  • + Good multimodal
  • - Less consistent on coding than Claude

Verdict

GPT-4o wins on both price and performance: $2.50/M input tokens against Claude's $3.00/M, with a benchmark score of 75/100 against 72/100.

For most developers, GPT-4o is the clear choice between these two models.
