Details
| Detail | Value |
|---|---|
| Developer | Google |
| License | NA (private model) |
| Model parameters | NA (private model) |
| Supported context length | 1M tokens |
| Price for prompt token | $3.5/Million tokens |
| Price for response token | $10.5/Million tokens |
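To put the pricing into perspective, here is a quick back-of-the-envelope cost calculation; the prompt size is a hypothetical example, and the 309-token response length is the average reported in the task table below.

```python
# Prices from the details table above, converted to $ per token.
PROMPT_PRICE = 3.5 / 1_000_000     # $3.5 per million prompt tokens
RESPONSE_PRICE = 10.5 / 1_000_000  # $10.5 per million response tokens

prompt_tokens = 25_000    # hypothetical medium-context RAG request
response_tokens = 309     # average response length reported below

cost = prompt_tokens * PROMPT_PRICE + response_tokens * RESPONSE_PRICE
print(f"${cost:.4f} per request")  # ~= $0.0907
```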
Chainpoll Score
| Context length | Score |
|---|---|
| Short Context | 0.95 |
| Medium Context | 1.00 |
| Long Context | 1.00 |
Digging deeper, here’s a look at how gemini-1.5-pro-001 performed across specific datasets.
This heatmap indicates the model's success in recalling information at different locations in the context. Green signifies success, while red indicates failure.
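As an illustration of the kind of recall probe such a heatmap summarizes, here is a minimal sketch that inserts a fact at different depths of a long context and checks whether the model repeats it back. It assumes the google-generativeai Python SDK; the fact, question, and filler text are hypothetical, and this is not the actual evaluation code.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro-001")

FACT = "The access code for the vault is 7431."                  # hypothetical "needle"
QUESTION = "What is the access code for the vault?"
FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long "haystack"

def recalled(depth: float) -> bool:
    """Insert FACT at a relative depth (0.0 = start, 1.0 = end) and check recall."""
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + FACT + " " + FILLER[cut:]
    prompt = f"{context}\n\nUsing only the text above, answer: {QUESTION}"
    answer = model.generate_content(prompt).text
    return "7431" in answer

# Each (depth, result) pair corresponds to one cell of the heatmap.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(depth, recalled(depth))
```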
| Tasks | Task insight | Cost insight | Dataset | Context adherence | Avg response length |
|---|---|---|---|---|---|
| Short context RAG | The model demonstrates exceptional reasoning and comprehension skills, excelling at short-context RAG. It shows good mathematical proficiency, as evidenced by its performance on the DROP and ConvFinQA benchmarks. | It is a great model, only slightly behind Sonnet 3.5, with nearly similar pricing. If cost is your concern, it's better to try out Gemini-1.5-Pro or Llama-3-70b. | DROP | 0.93 | 309 |
| | | | HotpotQA | 0.95 | 309 |
| | | | MS MARCO | 0.93 | 309 |
| | | | ConvFinQA | 0.98 | 309 |
| Medium context RAG | Flawless performance, making it suitable for any context length up to 25,000 tokens. | Great performance, but we recommend using the 30x cheaper Gemini Flash. | Medium context RAG | 1.00 | 309 |
| Long context RAG | Flawless performance, making it suitable for any context length up to 100,000 tokens. | Great performance, and you can use it. Alternatively, you can try Claude 3.5 Sonnet, which is in a similar price range, for more complicated tasks. | Long context RAG | 1.00 | 309 |