We recommend using the --help flag to get more information about the
available options for each command.
lighteval --help
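The same flag works for each subcommand; for example:

lighteval accelerate --help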
Lighteval can be used with a few different commands.
lighteval accelerate: evaluate models on CPU or one or more GPUs using 🤗 Accelerate
lighteval nanotron: evaluate models in distributed settings using ⚡️ Nanotron
lighteval vllm: evaluate models on one or more GPUs using 🚀 VLLM
lighteval endpoint inference-endpoint: evaluate models on one or more GPUs using 🔗 Inference Endpoints
lighteval endpoint tgi: evaluate models on one or more GPUs using 🔗 Text Generation Inference
lighteval endpoint openai: evaluate models via the 🔗 OpenAI API

To evaluate GPT-2 on the Truthful QA benchmark, run:
lighteval accelerate \
"pretrained=gpt2" \
"leaderboard|truthfulqa:mc|0|0"Here, --tasks refers to either a comma-separated list of supported tasks from
the tasks_list in the format:
{suite}|{task}|{num_few_shot}|{0 or 1 to automatically reduce `num_few_shot` if prompt is too long}

or a file path like examples/tasks/recommended_set.txt, which specifies multiple task configurations.
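For instance, a sketch of a single run covering two tasks separated by a comma (the second task specification is only an illustration; check the task list for the exact names and few-shot settings you want):

# Illustrative multi-task run; the second task spec is an example only.
lighteval accelerate \
"pretrained=gpt2" \
"leaderboard|truthfulqa:mc|0|0,leaderboard|gsm8k|5|0"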
Task details can be found in the file implementing them.
To evaluate a model on one or more GPUs, first create a multi-GPU config by running:
accelerate config
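If you prefer to skip the interactive prompt, Accelerate can also write a default single-machine configuration for you (see the Accelerate documentation for the available options):

# Non-interactive alternative: writes a default config file.
accelerate config default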
You can then evaluate a model using data parallelism on 8 GPUs as follows:
accelerate launch --multi_gpu --num_processes=8 -m \
lighteval accelerate \
"pretrained=gpt2" \
"leaderboard|truthfulqa:mc|0|0"Here, --override_batch_size defines the batch size per device, so the effective
batch size will be override_batch_size * num_gpus.
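For instance, a per-device batch size of 4 on the 8-GPU launch above gives an effective batch size of 8 * 4 = 32 (the value 4 is arbitrary, and the flag spelling follows the text above; confirm it with lighteval accelerate --help for your version):

# Illustrative: 8 processes * per-device batch size 4 => effective batch size 32.
accelerate launch --multi_gpu --num_processes=8 -m \
lighteval accelerate \
"pretrained=gpt2" \
"leaderboard|truthfulqa:mc|0|0" \
--override_batch_size 4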
To evaluate a model using pipeline parallelism on 2 or more GPUs, run:
lighteval accelerate \
"pretrained=gpt2,model_parallel=True" \
"leaderboard|truthfulqa:mc|0|0"This will automatically use accelerate to distribute the model across the GPUs.
Both data and pipeline parallelism can be combined by setting
model_parallel=True and using accelerate to distribute the data across the
GPUs.
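For example, a sketch that combines the two, launching 2 data-parallel processes while enabling model parallelism within each (the process count is illustrative):

# Illustrative: 2 data-parallel processes, each loading the model with
# model_parallel=True so accelerate can shard its weights across devices.
accelerate launch --multi_gpu --num_processes=2 -m \
lighteval accelerate \
"pretrained=gpt2,model_parallel=True" \
"leaderboard|truthfulqa:mc|0|0"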
The model-args argument takes a string representing a list of model
arguments. The arguments allowed vary depending on the backend you use (vllm or
accelerate).
For the accelerate backend, the most commonly used arguments include:

pretrained (str): the model ID on the Hugging Face Hub or the path to a pretrained model to load; this is effectively the pretrained_model_name_or_path
argument of from_pretrained in the HuggingFace transformers API.

add_special_tokens (bool): whether to add special tokens to the input sequences. If None, the default value will be set to True for seq2seq models (e.g. T5) and
False for causal models.

model_parallel (bool): whether to force use of the accelerate library to load a large
model across multiple devices.
Default: None, which corresponds to comparing the number of processes with
the number of GPUs; if it's smaller => model-parallelism, else not.

dtype (str or torch.dtype): converts the model weights to dtype, if specified. Strings get
converted to torch.dtype objects (e.g. float16 -> torch.float16).
Use dtype="auto" to derive the type from the model's weights.
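Several of these can be combined in a single model-args string; for example (values are illustrative):

# Illustrative model-args string combining several of the arguments above.
lighteval accelerate \
"pretrained=gpt2,dtype=float16,model_parallel=True" \
"leaderboard|truthfulqa:mc|0|0"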
Use dtype="auto" to derive the type from the model’s weights.To evaluate a model trained with nanotron on a single gpu.
Nanotron models cannot be evaluated without torchrun.
torchrun --standalone --nnodes=1 --nproc-per-node=1 \
src/lighteval/__main__.py nanotron \
--checkpoint-config-path ../nanotron/checkpoints/10/config.yaml \
--lighteval-config-path examples/nanotron/lighteval_config_override_template.yaml
The nproc-per-node argument should match the data, tensor and pipeline
parallelism configured in the lighteval_config_template.yaml file.
That is: nproc-per-node = data_parallelism * tensor_parallelism * pipeline_parallelism.
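For instance, with data_parallelism=2, tensor_parallelism=2 and pipeline_parallelism=2 configured in that file (illustrative values), nproc-per-node must be 2 * 2 * 2 = 8:

# Illustrative values: dp=2, tp=2, pp=2 => 8 processes in total.
torchrun --standalone --nnodes=1 --nproc-per-node=8 \
src/lighteval/__main__.py nanotron \
--checkpoint-config-path ../nanotron/checkpoints/10/config.yaml \
--lighteval-config-path examples/nanotron/lighteval_config_override_template.yaml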