The largest model in the Llama 2 family of LLMs, developed and publicly released by Meta. This model was pretrained on 2 trillion tokens of data from publicly available sources and fine-tuned on over one million human annotations.
Supported context length
Price for prompt tokens*
Price for response tokens*
*Note: Data based on 11/14/2023
Here's how Llama-2-70b-chat performed across all three task types:
Digging deeper, here's a look at how Llama-2-70b-chat performed across specific datasets:
|Tasks|Insights|Dataset Name|Dataset Performance|
|---|---|---|---|
|QA without RAG|The model performs worse than the GPT variants but shows the best performance among open-source models. It shows signs of bias and errors in its factual knowledge, though its large parameter count helps it retain facts.|Truthful QA| |
|QA with RAG|The model exhibits satisfactory performance, demonstrating good reasoning and comprehension skills. It struggles with mathematical skills, scoring relatively low on DROP compared to the other datasets. It performs almost as well as the 13b variant and GPT-3.5-turbo-instruct.|MS Marco| |
|Long form text generation|The model excels at this task, demonstrating a strong ability to generate long text without factual errors.|Open Assistant| |
💰 Cost insights
The model offers a decent balance of cost and performance: it is 2x cheaper than GPT-3.5 and 4x costlier than the Llama 2 13b variant.
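To make the per-token pricing above concrete, here is a minimal sketch of how per-request cost is typically computed from separate prompt and response token rates. The prices used are placeholders, not the actual rates from the table above:

```python
# Hypothetical per-1K-token prices in USD -- placeholders only,
# NOT the real Llama-2-70b-chat rates from the pricing table above.
PROMPT_PRICE_PER_1K = 0.001
RESPONSE_PRICE_PER_1K = 0.001

def request_cost(prompt_tokens: int, response_tokens: int) -> float:
    """Dollar cost of one chat completion, billed separately
    for prompt (input) and response (output) tokens."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (response_tokens / 1000) * RESPONSE_PRICE_PER_1K

# Example: a request with 1,500 prompt tokens and 500 response tokens.
print(f"${request_cost(1500, 500):.4f}")
```

The same formula explains the relative-cost claims: a model whose per-token prices are half those of another will cost half as much for the same token counts.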