Claude 3.5 Sonnet vs GPT-4 Turbo

Performance benchmarks + pricing comparison — updated April 2026

Claude 3.5 Sonnet

Anthropic

Previous generation Sonnet. Still excellent for coding tasks at the same price point.

Input: $3.00/M
Output: $15.00/M
Context: 200K tokens
Best for: Coding assistants, web development, data analysis
Benchmark: 72/100

GPT-4 Turbo

OpenAI

Previous generation high-performance model. Good for complex reasoning tasks.

Input: $10.00/M
Output: $30.00/M
Context: 128K tokens
Best for: Complex reasoning, data extraction, analysis
Benchmark: 70/100
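The per-token prices above translate directly into per-request costs. A minimal sketch; the 10K-input / 2K-output request size is an illustrative assumption, not a figure from this comparison:

```python
# Per-million-token prices (USD) from the pricing cards above.
PRICES = {
    "Claude 3.5 Sonnet": {"input": 3.00, "output": 15.00},
    "GPT-4 Turbo": {"input": 10.00, "output": 30.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical coding request: 10K input tokens, 2K output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.2f}")
```

At this request size, the 3.3x input-price gap narrows to roughly 2.7x overall, because output tokens (where the gap is 2x) dominate less of the bill than the headline input prices suggest.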

Benchmark Performance Comparison

Third-party benchmark scores — higher is better. Data sourced from SWE-bench, LiveCodeBench, HumanEval, and BigCodeBench.

Benchmark          | Claude 3.5 Sonnet | GPT-4 Turbo | Leader
Overall Score      | 72                | 70          | Claude 3.5 Sonnet leads by 2 pts
SWE-bench Verified | 68                | 64          | Claude 3.5 Sonnet leads by 4 pts
LiveCodeBench      | 75                | 72          | Claude 3.5 Sonnet leads by 3 pts
HumanEval          | 90                | 88          | Claude 3.5 Sonnet leads by 2 pts
BigCodeBench       | 58                | 56          | Claude 3.5 Sonnet leads by 2 pts

Cost Comparison by Scenario

Estimated cost per project with 30% cache hit rate. Actual costs may vary based on usage patterns.

Scenario                   | Claude 3.5 Sonnet | GPT-4 Turbo | Savings
Small Script (1K lines)    | $0.62             | $1.25       | Claude 3.5 Sonnet saves $0.63 (50%)
Medium Feature (10K lines) | $4.66             | $9.50       | Claude 3.5 Sonnet saves $4.84 (51%)
Large Project (50K lines)  | $23.29            | $47.50      | Claude 3.5 Sonnet saves $24.21 (51%)
Code Review (5K lines)     | $1.20             | $2.50       | Claude 3.5 Sonnet saves $1.30 (52%)
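A cache-aware cost estimate like the ones above can be sketched as follows. The token counts are hypothetical, and the 90% cache-read discount mirrors Anthropic's published cache pricing; other providers discount cached input differently (or not at all), so treat the discount as a parameter, not a constant:

```python
def project_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float,
                 cache_hit_rate: float = 0.30,
                 cache_read_discount: float = 0.90) -> float:
    """Estimate project cost (USD) when a fraction of input tokens is
    served from a prompt cache at a discounted rate.

    Prices are USD per million tokens. cache_hit_rate matches the 30%
    assumption in the table above; cache_read_discount is the fraction
    knocked off the input price for cached reads (an assumption here).
    """
    cached = input_tokens * cache_hit_rate
    fresh = input_tokens - cached
    input_cost = (fresh * in_price
                  + cached * in_price * (1 - cache_read_discount)) / 1e6
    output_cost = output_tokens * out_price / 1e6
    return input_cost + output_cost

# Hypothetical medium feature: ~1.2M input / 70K output tokens (illustrative).
print(f"${project_cost(1_200_000, 70_000, 3.00, 15.00):.2f}")
```

Because cached reads discount only the input side, the savings ratio between two models stays close to their raw price ratio regardless of the hit rate, which is why every scenario row lands near the same ~51%.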

Value Analysis (Price per Benchmark Score Point)

Lower is better — how much you pay for each point of benchmark performance.

Model             | Overall Score | Price per Score Point | Verdict
Claude 3.5 Sonnet | 72            | $0.042/pt             | Better value
GPT-4 Turbo       | 70            | $0.143/pt             | Higher cost per point

Claude 3.5 Sonnet delivers the best value at $0.042 per score point.
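The ratio is straightforward to reproduce, assuming it is input price divided by overall benchmark score (which matches Claude's $0.042/pt figure):

```python
# Input price (USD per million tokens) and overall benchmark score,
# both taken from the tables above.
models = {
    "Claude 3.5 Sonnet": (3.00, 72),
    "GPT-4 Turbo": (10.00, 70),
}

for name, (input_price, score) in models.items():
    print(f"{name}: ${input_price / score:.3f}/pt")
```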

Strengths & Weaknesses

Claude 3.5 Sonnet

  • + Balanced performance
  • + Computer use capability
  • + Artifact generation
  • - Older architecture
  • - Outperformed by the newer Sonnet 4

GPT-4 Turbo

  • + Proven model
  • + Large context
  • - Superseded by GPT-4o

Verdict

Claude 3.5 Sonnet wins on both price and performance — $3.00/M input with a benchmark score of 72/100.

For most developers, this is the clear choice between these two models.
