Details | |
---|---|
Developer | Anthropic |
License | N/A (private model) |
Model parameters | N/A (private model) |
Supported context length | 200k |
Price for prompt tokens | $15/million tokens |
Price for response tokens | $75/million tokens |
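Given the per-token rates above, a quick back-of-the-envelope cost estimate can be sketched in Python. Only the $15/million and $75/million rates come from this page; the token counts in the example are hypothetical.

```python
# Sketch: estimating a single request's cost for claude-3-opus-20240229
# from the published rates ($15/M prompt tokens, $75/M response tokens).
PROMPT_RATE = 15 / 1_000_000    # USD per prompt token
RESPONSE_RATE = 75 / 1_000_000  # USD per response token

def request_cost(prompt_tokens: int, response_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return prompt_tokens * PROMPT_RATE + response_tokens * RESPONSE_RATE

# Hypothetical example: a 10,000-token RAG prompt with a 483-token response
# (483 is the average response length reported in the table below).
print(f"${request_cost(10_000, 483):.4f}")  # → $0.1862
```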
ChainPoll Score | |
---|---|
Short context | 0.97 |
Medium context | 1.00 |
Long context | 1.00 |
Digging deeper, here’s a look at how claude-3-opus-20240229 performed across specific datasets.
This heatmap indicates the model's success in recalling information at different locations in the context. Green signifies success, while red indicates failure.
Tasks | Task insight | Cost insight | Dataset | Context adherence | Avg response length |
---|---|---|---|---|---|
Short context RAG | The model demonstrates exceptional reasoning and comprehension skills, excelling at short context RAG. It outperforms other models in mathematical proficiency, as evidenced by its strong performance on the DROP and ConvFinQA benchmarks. | A great model, but nearly 5x and 3x costlier than Claude 3.5 Sonnet and GPT-4o respectively, making it the costliest top-tier closed-source choice for RAG. | DROP | 0.96 | 483 |
| | | HotpotQA | 0.96 | 483 |
| | | MS Marco | 0.94 | 483 |
| | | ConvFinQA | 1.00 | 483 |
Medium context RAG | Flawless performance, making it suitable for any context length up to 25,000 tokens. | Great performance, but we recommend the roughly 200x cheaper Gemini Flash. | Medium context RAG | 1.00 | 483 |
Long context RAG | Flawless performance, making it suitable for any context length up to 100,000 tokens. | Great performance, but we recommend the 5x cheaper Claude 3.5 Sonnet for best performance, or the 40x cheaper Gemini Flash for cost-effective performance. | Long context RAG | 1.00 | 483 |