Details

- Developer: Meta
- License: Llama 3.1
- Model parameters: 8B
- Supported context length: 128k
- Prompt token price: $0.18/million tokens
- Response token price: $0.18/million tokens
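At the listed rates, the cost of a call scales linearly with token counts. The snippet below is a minimal sketch of that arithmetic; the helper name and the example token counts are illustrative, not part of this listing.

```python
# Estimate the cost of a single request at the listed rates:
# $0.18 per million prompt tokens, $0.18 per million response tokens.
PROMPT_PRICE_PER_M = 0.18
RESPONSE_PRICE_PER_M = 0.18

def estimate_cost(prompt_tokens: int, response_tokens: int) -> float:
    """Hypothetical helper: cost in USD for one request."""
    return (prompt_tokens / 1_000_000) * PROMPT_PRICE_PER_M + (
        response_tokens / 1_000_000
    ) * RESPONSE_PRICE_PER_M

# Example: a 2,000-token prompt with a 400-token response.
print(f"${estimate_cost(2_000, 400):.6f}")  # -> $0.000432
```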
Chainpoll Score

- Short Context: 0.89
- Medium Context: 1.00
Digging deeper, here’s a look at how meta-llama-3.1-8b-instruct performed across specific datasets.
This heatmap indicates the model's success in recalling information at different locations in the context. Green signifies success, while red indicates failure.
| Tasks | Dataset | Context adherence | Avg response length |
|---|---|---|---|
| Short context RAG | Drop | 0.87 | 292.4 |
| | Hotpot | 0.88 | 188.7 |
| | MS Marco | 0.93 | 408.6 |
| | ConvFinQA | 0.88 | 435.3 |
| Medium context RAG | Medium context RAG | 1.00 | NA |
| Long context RAG | Long context RAG | NaN | NA |
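For what it's worth, the short-context Chainpoll score of 0.89 reported above is consistent with a simple unweighted mean of the four per-dataset context-adherence values. The aggregation method isn't stated on this page, so the sketch below treats the unweighted mean as an assumption.

```python
# Per-dataset context-adherence scores from the table above.
short_context_scores = {
    "Drop": 0.87,
    "Hotpot": 0.88,
    "MS Marco": 0.93,
    "ConvFinQA": 0.88,
}

# Unweighted mean (an assumption; the actual aggregation is not documented here).
mean_score = sum(short_context_scores.values()) / len(short_context_scores)
print(round(mean_score, 2))  # -> 0.89, matching the reported short-context score
```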