LiteLLM as backend

Lighteval allows you to use LiteLLM, a backend that lets you call all LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.).

Documentation for available APIs and compatible endpoints can be found in the LiteLLM providers documentation (https://docs.litellm.ai/docs/providers).

Quick use

lighteval endpoint litellm \
    "provider=openai,model_name=gpt-3.5-turbo" \
    "lighteval|gsm8k|0|0" \
    --use-chat-template

The --use-chat-template flag is required for LiteLLM to work properly.
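LiteLLM reads provider credentials from environment variables. For the openai provider used above, set OPENAI_API_KEY before launching the run (the key value below is a placeholder):

export OPENAI_API_KEY="sk-your-key"

Other providers use their own variables (for example ANTHROPIC_API_KEY); see the LiteLLM documentation for the exact name.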

Using a config file

LiteLLM can target any OpenAI-compatible endpoint, so you can, for example, evaluate a model running on a local vLLM server.
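As a minimal sketch, you could serve a model locally with vLLM's OpenAI-compatible server (the model name and port here are examples):

vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --port 8000

The server then exposes an OpenAI-compatible API at http://localhost:8000/v1, which is the value to use as base_url below.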

To point lighteval at that endpoint, use a config file like the following:

model_parameters:
    # The "openai/" prefix tells LiteLLM to treat the endpoint as
    # OpenAI-compatible; the rest is the model name served at base_url.
    model_name: "openai/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
    base_url: "URL OF THE ENDPOINT YOU WANT TO USE"
    api_key: "" # remove or keep empty if the endpoint needs no key
    generation_parameters:
      temperature: 0.5
      max_new_tokens: 256
      stop_tokens: [""]
      top_p: 0.9
      seed: 0
      repetition_penalty: 1.0
      frequency_penalty: 0.0
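
Assuming the config above is saved as litellm_model.yaml, you can then pass the file path in place of the model argument string (a sketch; adjust the path and task to your setup):

lighteval endpoint litellm \
    litellm_model.yaml \
    "lighteval|gsm8k|0|0" \
    --use-chat-template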