Details

Developer: Google
License: Gemma
Model parameters: 7B
Supported context length: 8K
Price for prompt token: $0.20 / million tokens
Price for response token: $0.20 / million tokens
ChainPoll Score (Short Context): 0.65
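To put the listed pricing in concrete terms, here is a minimal cost sketch. It assumes only the rates above ($0.20 per million prompt tokens and $0.20 per million response tokens); the token counts in the example are illustrative placeholders, except for the 174-token average response length reported in the table below.

```python
# Cost sketch for gemma-7b-it at the listed rates.
# Assumption: both prompt and response tokens are billed at $0.20 per million.
PROMPT_PRICE_PER_MTOK = 0.20
RESPONSE_PRICE_PER_MTOK = 0.20

def request_cost(prompt_tokens: int, response_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_MTOK
            + response_tokens * RESPONSE_PRICE_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 174-token response
# (174 is the average response length observed in the benchmarks below).
print(f"${request_cost(2_000, 174):.6f}")  # ≈ $0.000435
```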
Digging deeper, here’s a look at how gemma-7b-it performed across specific datasets:
| Tasks | Task insight | Cost insight | Dataset | Context adherence | Avg response length |
|---|---|---|---|---|---|
| Short context RAG | The model struggles with reasoning and comprehension in short-context RAG, and shows poor mathematical proficiency, as evidenced by its performance on the DROP and ConvFinQA benchmarks. | We recommend using Llama-3-8b instead of this model at the same price. | DROP | 0.60 | 174 |
| | | | Hotpot | 0.68 | 174 |
| | | | MS Marco | 0.84 | 174 |
| | | | ConvFinQA | 0.49 | 174 |