With the plethora of large language models (LLMs) and chatbots being released week upon week, often with grandiose claims of their performance, it can be hard to filter out the genuine progress that is being made by the open-source community and which model is the current state of the art.
We wrote a release blog here to explain why we introduced this leaderboard!
We evaluate models on 6 key benchmarks using the Eleuther AI Language Model Evaluation Harness, a unified framework to test generative language models on a large number of different evaluation tasks.
For all these evaluations, a higher score is a better score. We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings.
You can find:
- the `results` Hugging Face dataset;
- the `details` of each model, which you can access by clicking the 📄 emoji after the model name;
- the `requests` Hugging Face dataset.

If a model's name contains "Flagged", this indicates it has been flagged by the community and should probably be ignored! Clicking the link will redirect you to the discussion about the model.
To reproduce our results, you can use lm_eval by installing it from main and then checking out the chat_template_fix PR (to be merged soon):
```shell
git clone git@github.com:EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git remote add hf https://github.com/huggingface/lm-evaluation-harness
git fetch hf
git checkout chat_template_fix
git merge main
```

Then run the evaluation:

```shell
lm-eval --model_args="pretrained=<your_model>,revision=<your_model_revision>,dtype=<model_dtype>" --tasks=leaderboard --batch_size=auto --output_path=<output_path>
```
Note: You can expect results to vary slightly for different batch sizes because of padding.
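To illustrate why padding causes this variation: sequences in a batch are padded to the length of the longest sequence in that batch, so changing the batch size changes which sequences share a batch and therefore how much padding each one gets. A minimal sketch in plain Python (the `pad_batch` helper is hypothetical, not part of lm-eval):

```python
def pad_batch(token_ids, pad_id=0):
    """Left-pad every sequence in the batch to the length of the longest one."""
    longest = max(len(seq) for seq in token_ids)
    return [[pad_id] * (longest - len(seq)) + seq for seq in token_ids]

# The same sequences, grouped into different batches, receive different padding:
seqs = [[5, 6], [1, 2, 3, 4], [7]]
print(pad_batch(seqs))        # all three together: every sequence padded to length 4
print(pad_batch([seqs[2]]))   # alone in its batch: no padding at all
```

With real models, those extra pad positions interact with numerical precision in batched attention, which is enough to nudge scores slightly.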
The metrics reported for each benchmark are:

- IFEval: `inst_level_strict_acc,none` and `prompt_level_strict_acc,none`
- Big Bench Hard (BBH): `acc_norm,none`. Each BBH subtask has the following number of answer choices (`num_choices`):
| BBH Task | num_choices |
|-----------------------------------------------|-------------|
| BBH Sports Understanding | 2 |
| BBH Tracking Shuffled Objects (Three Objects) | 3 |
| BBH Navigate | 2 |
| BBH Snarks | 2 |
| BBH Date Understanding | 6 |
| BBH Reasoning about Colored Objects | 18 |
| BBH Object Counting | 19 |
| BBH Logical Deduction (Seven Objects) | 7 |
| BBH Geometric Shapes | 11 |
| BBH Web of Lies | 2 |
| BBH Movie Recommendation | 6 |
| BBH Logical Deduction (Five Objects) | 5 |
| BBH Salient Translation Error Detection | 6 |
| BBH Disambiguation QA | 3 |
| BBH Temporal Sequences | 4 |
| BBH Hyperbaton | 2 |
| BBH Logical Deduction (Three Objects) | 3 |
| BBH Causal Judgement | 2 |
| BBH Formal Fallacies | 2 |
| BBH Tracking Shuffled Objects (Seven Objects) | 7 |
| BBH Ruin Names | 6 |
| BBH Penguins in a Table | 5 |
| BBH Boolean Expressions | 2 |
| BBH Tracking Shuffled Objects (Five Objects) | 5 |

- Math Challenges: `exact_match,none`
- Google-Proof Q&A (GPQA): `acc_norm,none`
- MuSR: `acc_norm,none`
- MMLU-PRO: `acc,none`
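One natural use of the `num_choices` column above is to rescale raw accuracy against the random-guessing baseline (1/num_choices), so that random guessing maps to 0 and a perfect score to 100. The following is a sketch of that normalization, not necessarily the leaderboard's exact implementation:

```python
def normalize_score(raw_acc, num_choices):
    """Rescale accuracy so the random baseline (1/num_choices) maps to 0
    and a perfect score maps to 100; scores below the baseline clip to 0."""
    baseline = 1.0 / num_choices
    return max(0.0, (raw_acc - baseline) / (1.0 - baseline)) * 100.0

normalize_score(0.5, 2)    # random guessing on a binary subtask -> 0.0
normalize_score(0.75, 2)   # halfway between baseline and perfect -> 50.0
normalize_score(1.0, 18)   # perfect score -> 100.0
```

Rescaling per subtask matters because a raw 50% accuracy means very different things on a 2-choice task (chance level) and an 18-choice task (far above chance).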