ABOUT

With new large language models (LLMs) and chatbots released week after week, often with grandiose performance claims, it can be hard to sift out the genuine progress being made by the open-source community and to identify the current state of the art. 🤗 Submit a model for automated evaluation on the 🤗 GPU cluster on the “Submit” page! The leaderboard’s backend runs the great Eleuther AI Language Model Evaluation Harness - read more details below!

Tasks

📈 We evaluate models on 6 key benchmarks using the Eleuther AI Language Model Evaluation Harness, a unified framework to test generative language models on a large number of different evaluation tasks.
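As a minimal sketch of how the harness is used (not the leaderboard's exact pipeline), a single benchmark can be run from Python; the model name gpt2, the task arc_challenge and the 25-shot setting here are placeholder choices, and the import path may differ between harness releases:

# Sketch: running one harness task from Python.
# Assumes a harness version exposing the hf-causal-experimental backend, as in the
# reproducibility command below; "gpt2" and "arc_challenge" are placeholder choices.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=gpt2",   # any Hugging Face causal LM checkpoint
    tasks=["arc_challenge"],        # one of the leaderboard benchmarks
    num_fewshot=25,                 # few-shot count for this task
    batch_size=1,
)

# results["results"] maps each task name to its metrics (e.g. acc, acc_norm)
print(results["results"])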

Results

You can find:


REPRODUCIBILITY

To reproduce our results, here are the commands you can run, using this version of the Eleuther AI Harness:

python main.py --model=hf-causal-experimental \
    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" \
    --tasks=<task_list> \
    --num_fewshot=<n_few_shot> \
    --batch_size=1 \
    --output_path=<output_path>
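The harness writes its scores to the file given as <output_path>. As a rough sketch of reading that file back (the exact JSON layout can vary between harness versions, and results.json is a placeholder path):

# Sketch: reading the metrics file written to <output_path>.
# "results.json" is a placeholder; the JSON layout may differ between harness versions.
import json

with open("results.json") as f:
    output = json.load(f)

for task, metrics in output["results"].items():
    print(task, metrics)   # e.g. {"acc": ..., "acc_norm": ...}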

Note: We evaluate all models on a single node of 8 H100s, so the global batch size is 8 for each evaluation. If you don’t use parallelism, adapt your batch size to fit. You can expect results to vary slightly for different batch sizes because of padding. The tasks and few-shot parameters are:


RESOURCES

Quantization

To get more information about quantization, see:
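As a quick, hedged illustration of what running a quantized model locally can look like (this is not the leaderboard's submission code), a checkpoint can be loaded in 4-bit precision with transformers and bitsandbytes; meta-llama/Llama-2-7b-hf is a placeholder model id:

# Sketch: loading a model in 4-bit precision with bitsandbytes for local testing.
# "meta-llama/Llama-2-7b-hf" is a placeholder model id, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")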

Useful links

Other cool leaderboards:
