ModelConfig

class lighteval.models.model_config.BaseModelConfig


( pretrained: str accelerator: Accelerator = None tokenizer: typing.Optional[str] = None multichoice_continuations_start_space: typing.Optional[bool] = None pairwise_tokenization: bool = False subfolder: typing.Optional[str] = None revision: str = 'main' batch_size: int = -1 max_gen_toks: typing.Optional[int] = 256 max_length: typing.Optional[int] = None add_special_tokens: bool = True model_parallel: typing.Optional[bool] = None dtype: typing.Union[str, torch.dtype, NoneType] = None device: typing.Union[int, str] = 'cuda' quantization_config: typing.Optional[transformers.utils.quantization_config.BitsAndBytesConfig] = None trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False )

Parameters

  • pretrained (str) — HuggingFace Hub model ID or the path to a pre-trained model to load. This is effectively the pretrained_model_name_or_path argument of from_pretrained in the HuggingFace transformers API.
  • accelerator (Accelerator) — accelerator to use for loading and running the model.
  • tokenizer (Optional[str]) — HuggingFace Hub tokenizer ID that will be used for tokenization.
  • multichoice_continuations_start_space (Optional[bool]) — Whether to add a space at the start of each continuation in multichoice generation. For example, with context “What is the capital of France?” and choices “Paris” and “London”, the inputs are tokenized as “What is the capital of France? Paris” and “What is the capital of France? London”. True adds a space, False strips a leading space, and None does nothing.
  • pairwise_tokenization (bool) — Whether to tokenize the context and continuation separately or together.
  • subfolder (Optional[str]) — The subfolder within the model repository.
  • revision (str) — The revision of the model.
  • batch_size (int) — The batch size to use when running the model.
  • max_gen_toks (Optional[int]) — The maximum number of tokens to generate.
  • max_length (Optional[int]) — The maximum length of the generated output.
  • add_special_tokens (bool, optional, defaults to True) — Whether to add special tokens to the input sequences. If None, the default value will be set to True for seq2seq models (e.g. T5) and False for causal models.
  • model_parallel (bool, optional, defaults to None) — Whether to force use of the accelerate library to load a large model across multiple devices. If None, the number of processes is compared with the number of GPUs: if it is smaller, model parallelism is used; otherwise it is not.
  • dtype (Union[str, torch.dtype], optional, defaults to None) — Converts the model weights to dtype, if specified. Strings get converted to torch.dtype objects (e.g. float16 -> torch.float16). Use dtype="auto" to derive the type from the model’s weights.
  • device (Union[int, str]) — device on which to run the model.
  • quantization_config (Optional[BitsAndBytesConfig]) — quantization configuration for the model, manually provided to load a normally floating point model at a quantized precision. Needed for 4-bit and 8-bit precision.
  • trust_remote_code (bool) — Whether to trust remote code during model loading.

Base configuration class for models.

Methods

  • post_init() — Performs post-initialization checks on the configuration.
  • _init_configs(model_name, env_config) — Initializes the model configuration.
  • init_configs(env_config) — Initializes the model configuration using the environment configuration.
  • get_model_sha() — Retrieves the SHA of the model.
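
For example, a configuration for a causal Hub model could look as follows (a minimal sketch; "gpt2" stands in for any Hub model ID or local path):

    from lighteval.models.model_config import BaseModelConfig

    config = BaseModelConfig(
        pretrained="gpt2",        # Hub model ID or local path
        revision="main",
        dtype="float16",          # string dtypes are converted to torch.dtype
        device="cuda",
        batch_size=8,
        use_chat_template=False,
    )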

class lighteval.models.model_config.AdapterModelConfig


( pretrained: str accelerator: Accelerator = None tokenizer: typing.Optional[str] = None multichoice_continuations_start_space: typing.Optional[bool] = None pairwise_tokenization: bool = False subfolder: typing.Optional[str] = None revision: str = 'main' batch_size: int = -1 max_gen_toks: typing.Optional[int] = 256 max_length: typing.Optional[int] = None add_special_tokens: bool = True model_parallel: typing.Optional[bool] = None dtype: typing.Union[str, torch.dtype, NoneType] = None device: typing.Union[int, str] = 'cuda' quantization_config: typing.Optional[transformers.utils.quantization_config.BitsAndBytesConfig] = None trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False base_model: str = None )

class lighteval.models.model_config.DeltaModelConfig


( pretrained: str accelerator: Accelerator = None tokenizer: typing.Optional[str] = None multichoice_continuations_start_space: typing.Optional[bool] = None pairwise_tokenization: bool = False subfolder: typing.Optional[str] = None revision: str = 'main' batch_size: int = -1 max_gen_toks: typing.Optional[int] = 256 max_length: typing.Optional[int] = None add_special_tokens: bool = True model_parallel: typing.Optional[bool] = None dtype: typing.Union[str, torch.dtype, NoneType] = None device: typing.Union[int, str] = 'cuda' quantization_config: typing.Optional[transformers.utils.quantization_config.BitsAndBytesConfig] = None trust_remote_code: bool = False use_chat_template: bool = False compile: bool = False base_model: str = None )
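
Both AdapterModelConfig and DeltaModelConfig extend BaseModelConfig with a base_model argument: the weights in pretrained (adapter weights, e.g. PEFT, or delta weights) are applied on top of base_model. A minimal sketch, with hypothetical repository names:

    from lighteval.models.model_config import AdapterModelConfig, DeltaModelConfig

    adapter_config = AdapterModelConfig(
        pretrained="my-org/my-adapter",        # hypothetical repo holding the adapter weights
        base_model="my-org/my-base-model",     # model the adapter was trained on
    )

    delta_config = DeltaModelConfig(
        pretrained="my-org/my-delta-weights",  # hypothetical repo holding the delta weights
        base_model="my-org/my-base-model",     # model the deltas are added to
    )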

class lighteval.models.model_config.InferenceEndpointModelConfig


( model_or_endpoint_name: str should_reuse_existing: bool = False accelerator: str = 'gpu' model_dtype: str = None vendor: str = 'aws' region: str = 'us-east-1' instance_size: str = None instance_type: str = None framework: str = 'pytorch' endpoint_type: str = 'protected' add_special_tokens: bool = True revision: str = 'main' namespace: str = None image_url: str = None env_vars: dict = None )
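
This configuration describes a model to run on Hugging Face Inference Endpoints. A minimal sketch with placeholder values; available instance sizes and types depend on the chosen vendor:

    from lighteval.models.model_config import InferenceEndpointModelConfig

    endpoint_config = InferenceEndpointModelConfig(
        model_or_endpoint_name="my-org/my-model",  # hypothetical model or endpoint name
        accelerator="gpu",
        vendor="aws",
        region="us-east-1",
        instance_size="medium",      # illustrative only
        instance_type="g5.2xlarge",  # illustrative only
    )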

class lighteval.models.model_config.InferenceModelConfig


( model: str add_special_tokens: bool = True )

class lighteval.models.model_config.TGIModelConfig


( inference_server_address: str inference_server_auth: str model_id: str )
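
This configuration points lighteval at a running Text Generation Inference (TGI) server. A minimal sketch with placeholder values:

    from lighteval.models.model_config import TGIModelConfig

    tgi_config = TGIModelConfig(
        inference_server_address="http://localhost:8080",  # where the TGI server listens
        inference_server_auth=None,                        # or a token if the server requires auth
        model_id="my-org/my-model",                        # hypothetical model ID served by TGI
    )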

class lighteval.models.model_config.VLLMModelConfig


( pretrained: str gpu_memory_utilisation: float = 0.9 revision: str = 'main' dtype: str | None = None tensor_parallel_size: int = 1 pipeline_parallel_size: int = 1 data_parallel_size: int = 1 max_model_length: int | None = None swap_space: int = 4 seed: int = 1234 trust_remote_code: bool = False use_chat_template: bool = False add_special_tokens: bool = True multichoice_continuations_start_space: bool = True pairwise_tokenization: bool = False subfolder: typing.Optional[str] = None temperature: float = 0.6 )
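
A minimal sketch of a vLLM-backed configuration, with placeholder values:

    from lighteval.models.model_config import VLLMModelConfig

    vllm_config = VLLMModelConfig(
        pretrained="my-org/my-model",  # hypothetical Hub model ID
        gpu_memory_utilisation=0.9,    # fraction of GPU memory vLLM may reserve
        tensor_parallel_size=2,        # shard the model across two GPUs
        dtype="bfloat16",
        max_model_length=4096,
    )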

lighteval.models.model_config.create_model_config


( use_chat_template: bool override_batch_size: int accelerator: typing.Optional[ForwardRef('Accelerator')] model_args: typing.Union[str, dict] = None model_config_path: str = None ) Union[BaseModelConfig, AdapterModelConfig, DeltaModelConfig, TGIModelConfig, InferenceEndpointModelConfig, DummyModelConfig]

Parameters

  • accelerator (Union[Accelerator, None]) — accelerator to use when loading and running the model.
  • use_chat_template (bool) — Whether to use the chat template. Set to True for chat or instruction-fine-tuned (IFT) models.
  • override_batch_size (int) — Fixed batch size to use, overriding any value derived from the configuration.
  • model_args (Optional[Union[str, dict]]) — Parameters used to create the model, passed either as a string of comma-separated key=value pairs (like the CLI kwargs) or as a dict. This option can only create a dummy model (using dummy) or a base model (with or without accelerate); in those cases, the full set of available arguments is that of BaseModelConfig. The minimal configuration is pretrained=<name_of_the_model_on_the_hub>.
  • model_config_path (Optional[str]) — Path to a config file with the parameters used to create the model. This allows creating any of the possible model configurations (base, adapter, peft, inference endpoints, tgi, etc.).

Returns

Union[BaseModelConfig, AdapterModelConfig, DeltaModelConfig, TGIModelConfig, InferenceEndpointModelConfig, DummyModelConfig]

model configuration.

Raises

ValueError

  • ValueError — If both an inference server address and model arguments are provided.
  • ValueError — If multichoice continuations are configured both to start and not to start with a space.
  • ValueError — If a base model is not specified when using delta weights or adapter weights.
  • ValueError — If a base model is specified when not using delta weights or adapter weights.

Create a model configuration based on the provided arguments.

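For example, the following builds a plain BaseModelConfig from CLI-style model arguments (a minimal sketch; the model ID is a placeholder):

    from lighteval.models.model_config import create_model_config

    config = create_model_config(
        use_chat_template=False,
        override_batch_size=1,
        accelerator=None,
        model_args="pretrained=gpt2,dtype=float16",  # CLI-style key=value pairs
    )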
