The new kid from Mistral is an instruct fine-tuned version of Mistral-7B-v0.1, trained on a variety of publicly available conversation datasets.
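Since this is the instruct variant, prompts should be wrapped in the `[INST]` tags the fine-tune expects. Here's a minimal sketch of that formatting, assuming the `[INST]`/`[/INST]` convention from Mistral's model card; the `build_prompt` helper is illustrative, and your serving stack (e.g. a chat template) may handle this for you:

```python
# Minimal sketch of the Mistral-7B-Instruct prompt format.
# build_prompt is a hypothetical helper, not part of any library.
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the [INST] tags the instruct fine-tune expects."""
    return f"<s>[INST] {user_message} [/INST]"

print(build_prompt("Summarize retrieval-augmented generation in one sentence."))
```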
Supported context length
Price for prompt tokens*
Price for response tokens*
*Note: Data based on 11/14/2023
Here's how Mistral-7b-instruct-v0.1 performed across all three task types
Digging deeper, here’s a look at how Mistral-7b-instruct-v0.1 performed across specific datasets
|Tasks|Insights|Dataset Name|Dataset Performance|
|---|---|---|---|
|QA without RAG|The model does not perform well, which shows bias and errors in factual knowledge.|Truthful QA| |
|QA with RAG|The model performs decently well, which demonstrates good reasoning and comprehension skills. It struggles with mathematical skills, scoring relatively low on DROP compared to the other datasets. Its performance is as good as the 70b variant and GPT-3.5-turbo-instruct.|MS Marco| |
|Long-form text generation|The model performs poorly, which shows weakness in generating long text without factual errors.|Open Assistant| |
💰 Cost insights
The model is relatively cheap to run but does not perform well. It is 13x cheaper than GPT-3.5 and 6x cheaper than the Llama 70b variant. We suggest using Zephyr-7b-beta instead.
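Since prompt and response tokens are priced separately (per the table above), a per-request cost estimate is a simple weighted sum. A minimal sketch, where the two rates are hypothetical placeholders, not this model's actual prices; substitute your provider's published rates:

```python
# Hypothetical per-token rates in $/token -- placeholders only,
# NOT the actual prices for Mistral-7b-instruct-v0.1.
PRICE_PROMPT = 0.0001 / 1000
PRICE_RESPONSE = 0.0001 / 1000

def request_cost(prompt_tokens: int, response_tokens: int) -> float:
    """Estimate one request's cost: prompt and response tokens priced separately."""
    return prompt_tokens * PRICE_PROMPT + response_tokens * PRICE_RESPONSE
```

Multiplying the estimate out over your expected monthly token volume is how the "13x cheaper than GPT-3.5" kind of comparison is made: compute each model's cost for the same workload and take the ratio.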