Performs clean-up operations if needed, such as closing an endpoint.
( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]
Parameters
Returns
list[GenerativeResponse]
list of generated responses.
Generates responses using a greedy decoding strategy until certain ending conditions are met.
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
This function is used to compute the log likelihood of the context for perplexity metrics.
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
( context continuation pairwise: bool = False ) → Tuple[TokenSequence, TokenSequence]
Parameters
Returns
Tuple[TokenSequence, TokenSequence]
A tuple containing the encoded context and continuation.
Encodes a context/continuation pair, taking care of the spaces in between.
The advantages of pairwise tokenization are: 1) it better aligns with how an LLM predicts tokens, and 2) it works when len(tok(context + continuation)) != len(tok(context)) + len(tok(continuation)), which can happen, for example, in Chinese when no space is used between the context and the continuation.
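A minimal sketch of the pairwise idea, assuming a Hugging Face tokenizer; the helper name and the shared-prefix split below are illustrative, not the library's exact implementation:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tok_encode_pair_sketch(context: str, continuation: str):
    """Illustrative pairwise encoding: tokenize the joined string once,
    then split it where it diverges from the context-only encoding."""
    whole = tokenizer(context + continuation, add_special_tokens=False)["input_ids"]
    context_only = tokenizer(context, add_special_tokens=False)["input_ids"]
    # Length of the longest shared token prefix between the two encodings.
    n = 0
    while n < min(len(whole), len(context_only)) and whole[n] == context_only[n]:
        n += 1
    # Everything after the shared prefix is treated as the continuation, so the
    # two halves always sum back to the jointly tokenized sequence.
    return whole[:n], whole[n:]
```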
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str tokenizer: str | None = None subfolder: str | None = None revision: str = 'main' batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None generation_size: typing.Annotated[int, Gt(gt=0)] = 256 max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None add_special_tokens: bool = True model_parallel: bool | None = None dtype: str | None = None device: typing.Union[int, str] = 'cuda' trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False multichoice_continuations_start_space: bool | None = None pairwise_tokenization: bool = False )
Parameters
model_name (str): The HuggingFace Hub model ID or the path to a pretrained model to load. This is the pretrained_model_name_or_path argument of from_pretrained in the HuggingFace transformers API.
add_special_tokens (bool, optional, defaults to True): Whether to add special tokens to the input sequences. If None, the default value will be set to True for seq2seq models (e.g. T5) and False for causal models.
model_parallel (bool, optional, defaults to None): Whether to use the accelerate library to load a large model across multiple devices. Default: None, which corresponds to comparing the number of processes with the number of GPUs; if the number of processes is smaller, model parallelism is used, otherwise it is not.
dtype (str, optional, defaults to None): Converts the model weights to dtype, if specified. Strings get converted to torch.dtype objects (e.g. float16 -> torch.float16). Use dtype="auto" to derive the type from the model's weights.
Base configuration class for models.
Methods:
post_init(): Performs post-initialization checks on the configuration.
_init_configs(model_name, env_config): Initializes the model configuration.
init_configs(env_config): Initializes the model configuration using the environment configuration.
get_model_sha(): Retrieves the SHA of the model.
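A hedged example of building this config; the import path, model name, and parameter values below are illustrative assumptions, not prescriptions:

```python
# Illustrative only: the exact import path may differ between lighteval versions.
from lighteval.models.transformers.transformers_model import TransformersModelConfig

config = TransformersModelConfig(
    model_name="HuggingFaceH4/zephyr-7b-beta",  # hypothetical model choice
    dtype="float16",
    batch_size=8,
    use_chat_template=True,
    pairwise_tokenization=True,
)
```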
( config: TransformersModelConfig )
( requests: list ) → list[GenerativeResponse]
Generates responses using a greedy decoding strategy until certain ending conditions are met.
Computes all the parameters related to model_parallel.
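A minimal sketch of the default decision described above (use model parallelism only when there are fewer processes than GPUs); the helper name and arguments are illustrative assumptions:

```python
import torch

def resolve_model_parallel_sketch(model_parallel: bool | None, num_processes: int) -> bool:
    # Hypothetical helper mirroring the documented default behaviour.
    if model_parallel is None:
        # Fewer processes than local GPUs => spread the model across devices.
        model_parallel = num_processes < torch.cuda.device_count()
    return model_parallel
```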
( requests: list ) → list[Tuple[float, bool]]
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
( requests: list ) → list[Tuple[float, bool]]
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
( output_tensor: Tensor drop_last_samples: bool = True num_samples: int = None ) → torch.Tensor
Parameters
Returns
torch.Tensor
The padded output tensor and the gathered length tensor.
Pads the output_tensor to the maximum length and gathers the lengths across processes.
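A rough sketch of the pad-and-gather pattern, using accelerate for the cross-process gather; the function name and pad value are assumptions, not the library's exact code:

```python
import torch
import torch.nn.functional as F
from accelerate import Accelerator

accelerator = Accelerator()

def pad_and_gather_sketch(output_tensor: torch.Tensor, pad_value: int = 0) -> torch.Tensor:
    # Pad the last dimension to the maximum length across processes, then gather.
    length = torch.tensor([output_tensor.shape[-1]], device=output_tensor.device)
    max_length = accelerator.gather(length).max().item()
    padded = F.pad(output_tensor, (0, max_length - output_tensor.shape[-1]), value=pad_value)
    return accelerator.gather(padded)
```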
( batch: list padding_length: int max_context: typing.Optional[int] = None single_token: bool = False )
Tokenizes a batch of inputs and also returns the lengths, truncations, and padding. This step is done manually since we tokenize log-probability inputs together with their continuation, to manage possible extra spaces added at the start by tokenizers; see tok_encode_pair.
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str tokenizer: str | None = None subfolder: str | None = None revision: str = 'main' batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None generation_size: typing.Annotated[int, Gt(gt=0)] = 256 max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None add_special_tokens: bool = True model_parallel: bool | None = None dtype: str | None = None device: typing.Union[int, str] = 'cuda' trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False multichoice_continuations_start_space: bool | None = None pairwise_tokenization: bool = False base_model: str adapter_weights: bool )
( config: TransformersModelConfig )
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str tokenizer: str | None = None subfolder: str | None = None revision: str = 'main' batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None generation_size: typing.Annotated[int, Gt(gt=0)] = 256 max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None add_special_tokens: bool = True model_parallel: bool | None = None dtype: str | None = None device: typing.Union[int, str] = 'cuda' trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False multichoice_continuations_start_space: bool | None = None pairwise_tokenization: bool = False base_model: str delta_weights: bool )
( config: TransformersModelConfig )
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) endpoint_name: str | None = None model_name: str | None = None reuse_existing: bool = False accelerator: str = 'gpu' dtype: str | None = None vendor: str = 'aws' region: str = 'us-east-1' instance_size: str | None = None instance_type: str | None = None framework: str = 'pytorch' endpoint_type: str = 'protected' add_special_tokens: bool = True revision: str = 'main' namespace: str | None = None image_url: str | None = None env_vars: dict | None = None )
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str add_special_tokens: bool = True )
( config: typing.Union[lighteval.models.endpoints.endpoint_model.InferenceEndpointModelConfig, lighteval.models.endpoints.endpoint_model.ServerlessEndpointModelConfig] )
InferenceEndpointModels can be used either with the free inference client or with inference endpoints, which will use text-generation-inference to deploy your model for the duration of the evaluation.
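A hedged configuration example based on the fields in the signature above; the model name, instance type, and instance size are placeholder assumptions:

```python
# Illustrative values only; pick an instance that fits your model, and import
# InferenceEndpointModelConfig from lighteval's endpoint model module.
config = InferenceEndpointModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_type="nvidia-a10g",
    instance_size="x1",
)
```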
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) inference_server_address: str | None inference_server_auth: str | None model_name: str | None )
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str model_definition_file_path: str )
Parameters
model_name (str): An identifier for the model, used to track which model was evaluated in results and logs.
model_definition_file_path (str): Path to a Python file containing the custom model implementation.
Configuration class for loading custom model implementations in Lighteval.
This config allows users to define and load their own model implementations by specifying a Python file containing a custom model class that inherits from LightevalModel.
The custom model file should contain exactly one class that inherits from LightevalModel. This class will be automatically detected and instantiated when loading the model.
Example usage:

# Define the config
config = CustomModelConfig(
    model_name="my-custom-model",
    model_definition_file_path="path/to/my_model.py",
)

Example custom model file (my_model.py):

from lighteval.models.abstract_model import LightevalModel

class MyCustomModel(LightevalModel):
    def __init__(self, config, env_config):
        super().__init__(config, env_config)
        # Custom initialization...

    def greedy_until(self, *args, **kwargs):
        # Custom generation logic...
        pass

An example of a custom model can be found in examples/custom_models/google_translate_model.py.
Notes: The custom model class must implement all abstract methods of LightevalModel (e.g. greedy_until), and the model definition file is imported and executed when the model is loaded, so only use files from trusted sources.
( config: OpenAIModelConfig env_config )
( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]
Generates responses using a greedy decoding strategy until certain ending conditions are met.
( generation_parameters: GenerationParameters = GenerationParameters(early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=None, top_k=None, min_p=None, top_p=None, truncate_prompt=None, response_format=None) model_name: str revision: str = 'main' dtype: str = 'bfloat16' tensor_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1 data_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1 pipeline_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1 gpu_memory_utilization: typing.Annotated[float, Ge(ge=0)] = 0.9 max_model_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None swap_space: typing.Annotated[int, Gt(gt=0)] = 4 seed: typing.Annotated[int, Ge(ge=0)] = 1234 trust_remote_code: bool = False use_chat_template: bool = False add_special_tokens: bool = True multichoice_continuations_start_space: bool = True pairwise_tokenization: bool = False max_num_seqs: typing.Annotated[int, Gt(gt=0)] = 128 max_num_batched_tokens: typing.Annotated[int, Gt(gt=0)] = 2048 subfolder: str | None = None )
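A hedged example config; the model name, parallelism, and memory settings below are assumptions chosen for illustration:

```python
# Illustrative only; adjust parallelism and memory settings to your hardware.
config = VLLMModelConfig(
    model_name="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical model
    dtype="bfloat16",
    tensor_parallel_size=2,
    gpu_memory_utilization=0.85,
    max_model_length=4096,
    use_chat_template=True,
)
```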
( requests: list override_bs: typing.Optional[int] = None ) → list[GenerateReturn]
Generates responses using a greedy decoding strategy until certain ending conditions are met.