Model Insights

MPT-7b-instruct

One of the first instruction-tuned decoder models in the industry, released by MosaicML (now part of Databricks). It was built by finetuning MPT-7B on open-source datasets.
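As a minimal sketch of how to try the model, assuming the publicly available `mosaicml/mpt-7b-instruct` checkpoint on Hugging Face and a recent `transformers` release (MPT ships custom modeling code, so `trust_remote_code=True` is required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hugging Face checkpoint for the instruction-tuned MPT-7B variant
model_id = "mosaicml/mpt-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# MPT uses custom modeling code, so remote code must be trusted
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain retrieval-augmented generation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```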

| Model | MPT-7b-instruct |
| --- | --- |
| Developer | Databricks |
| License | cc-by-sa-3.0 |
| Model parameters | 7B |
| Pretraining tokens | 1T |
| Release date | May 2023 |
| Supported context length | 2K |
| Price for prompt tokens* | $0.15/Million tokens |
| Price for response tokens* | $0.15/Million tokens |

*Note: Pricing data as of 11/14/2023

Model Performance Across Task-Types

Here's how MPT-7b-instruct performed across all three task types:

| Task Type | ChainPoll Score |
| --- | --- |
| QA without RAG | 0.40 |
| QA with RAG | 0.58 |
| Long form text generation | 0.53 |
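ChainPoll, the metric behind these scores, gauges hallucination by polling an LLM judge multiple times with chain-of-thought prompting and aggregating the votes. The sketch below is a minimal illustration of that idea, not Galileo's implementation; the `ask_judge` callable, the prompt wording, and the naive yes/no parsing are all hypothetical placeholders.

```python
def chainpoll_score(question: str, answer: str, ask_judge, n_polls: int = 5) -> float:
    """Poll an LLM judge n_polls times; return the fraction of 'faithful' votes.

    `ask_judge` is a hypothetical callable that sends a prompt to a judge
    LLM and returns its text response.
    """
    judge_prompt = (
        "Think step by step, then answer yes or no: does the following "
        f"answer contain hallucinations?\nQuestion: {question}\nAnswer: {answer}"
    )
    votes = 0
    for _ in range(n_polls):
        response = ask_judge(judge_prompt)
        # Naive parsing: count a vote for the answer if the judge says "no"
        # (i.e., sees no hallucination)
        if "no" in response.lower().split():
            votes += 1
    return votes / n_polls
```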

Model Info Across Task-Types

Digging deeper, here's a look at how MPT-7b-instruct performed on specific datasets:

| Task | Insights | Dataset Name | Dataset Performance |
| --- | --- | --- | --- |
| QA without RAG | The model performs poorly, showing high bias and errors in factual knowledge. | Truthful QA | 0.34 |
| | | Trivia QA | 0.47 |
| QA with RAG | The model performs poorly here, demonstrating weak reasoning and comprehension skills. It scores much lower on DROP than on the other datasets, a sign of weak mathematical skills. | MS Marco | 0.79 |
| | | Hotpot QA | 0.41 |
| | | Drop | 0.42 |
| | | Narrative QA | 0.69 |
| Long form text generation | The model is very error-prone when generating long text. | Open Assistant | 0.53 |

💰 Cost insights

The model scores low across all the tasks. It is roughly 13x cheaper than GPT-3.5 and 6x cheaper than the Llama 70B variant. We suggest using Zephyr-7b-beta instead.
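As a quick sanity check on those multipliers, here is the arithmetic. The MPT price comes from the table above; the GPT-3.5 and Llama 70B response-token prices are assumed ballpark figures for illustration only, not values from the index:

```python
# Per-million-token response prices in USD; MPT price is from the table above,
# the other two are assumed ballpark figures for illustration only.
mpt_price = 0.15
gpt35_price = 2.00     # assumed GPT-3.5 response-token price
llama70b_price = 0.90  # assumed Llama 70B response-token price

print(f"GPT-3.5 / MPT:   {gpt35_price / mpt_price:.1f}x")    # ~13.3x
print(f"Llama 70B / MPT: {llama70b_price / mpt_price:.1f}x")  # ~6.0x
```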
