Models

Model

LightevalModel

class lighteval.models.abstract_model.LightevalModel

( )

cleanup

( )

Clean up operations if needed, such as closing an endpoint.

greedy_until

( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]

Parameters

  • requests (list[Request]) — list of requests containing the context and ending conditions.
  • override_bs (int, optional) — Override the batch size for generation. Defaults to None.

Returns

list[GenerativeResponse]

list of generated responses.

Generates responses using a greedy decoding strategy until certain ending conditions are met.
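
To make the behavior concrete, here is a minimal sketch of greedy decoding until a stop condition using a plain transformers model; the gpt2 checkpoint and the manual stop-sequence handling are illustrative only, not how lighteval implements this internally.

```python
# Illustrative sketch of "greedy decoding until a stop condition";
# the checkpoint is only an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Question: What is 2 + 2?\nAnswer:"
inputs = tokenizer(context, return_tensors="pt")

# do_sample=False gives greedy decoding; generation stops at max_new_tokens,
# and the stop sequence ("\n") is applied to the decoded text afterwards.
output_ids = model.generate(
    **inputs,
    do_sample=False,
    max_new_tokens=16,
    pad_token_id=tokenizer.eos_token_id,
)
generated = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:])
generated = generated.split("\n")[0]
print(generated)
```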

greedy_until_multi_turn

( requests: list override_bs: typing.Optional[int] = None )

Generates responses using a greedy decoding strategy until certain ending conditions are met.

loglikelihood

( requests: list override_bs: typing.Optional[int] = None )

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

loglikelihood_rolling

( requests: list override_bs: typing.Optional[int] = None )

This function is used to compute the log likelihood of the context for perplexity metrics.
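
As a reminder of how these rolling log likelihoods feed into perplexity, here is a small sketch of the standard formula; the numbers are made up for illustration.

```python
import math

# Turn summed token log-probabilities (as returned by a rolling
# log-likelihood pass over a document) into perplexity.
def perplexity(summed_logprob: float, num_tokens: int) -> float:
    # perplexity = exp(-(1/N) * sum(log p(token_i | tokens_<i)))
    return math.exp(-summed_logprob / num_tokens)

# Example: a 50-token document whose token log-probabilities sum to -120.0.
print(perplexity(-120.0, 50))  # ~11.02
```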

loglikelihood_single_token

( requests: list override_bs: typing.Optional[int] = None )

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

tok_encode_pair

( context continuation pairwise: bool = False ) → Tuple[TokenSequence, TokenSequence]

Parameters

  • context (str) — The context string to be encoded.
  • continuation (str) — The continuation string to be encoded.
  • pairwise (bool) — If True, encode context and continuation separately. If False, encode them together and then split.

Returns

Tuple[TokenSequence, TokenSequence]

A tuple containing the encoded context and continuation.

Encodes a context, continuation pair by taking care of the spaces in between.

The advantages of pairwise encoding are: 1) it better aligns with how the LLM predicts tokens; 2) it works when len(tok(context + continuation)) != len(tok(context)) + len(tok(continuation)), which can happen, for example, in Chinese when no space separates the context and continuation.
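
A quick way to see the issue is to compare joint and separate tokenization with an off-the-shelf tokenizer; the gpt2 checkpoint below is only an example.

```python
# Sketch of the joint-vs-pairwise difference tok_encode_pair has to handle.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

context = "The capital of France is"
continuation = " Paris"

joint = tokenizer.encode(context + continuation)
separate = tokenizer.encode(context) + tokenizer.encode(continuation)

# For many inputs the two agree, but tokenizers may merge characters across
# the context/continuation boundary (e.g. in Chinese, or around whitespace),
# in which case len(joint) != len(separate) and pairwise encoding is needed.
print(len(joint), len(separate))
```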

Accelerate and Transformers Models

BaseModel

class lighteval.models.base_model.BaseModel

( env_config: EnvConfig config: BaseModelConfig )

greedy_until

( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]

Parameters

  • requests (list[Request]) — list of requests containing the context and ending conditions.
  • override_bs (int, optional) — Override the batch size for generation. Defaults to None.

Returns

list[GenerativeResponse]

list of generated responses.

Generates responses using a greedy decoding strategy until certain ending conditions are met.

init_model_parallel

( model_parallel: bool | None = None )

Compute all the parameters related to model_parallel

loglikelihood

( requests: list override_bs: typing.Optional[int] = None ) → list[Tuple[float, bool]]

Parameters

  • requests (list[Tuple[str, dict]]) — list of requests containing the context and continuation to score.

Returns

list[Tuple[float, bool]]

list of (log likelihood, is greedy) pairs, one per request, where the boolean indicates whether the continuation is the greedy completion of the context.

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

loglikelihood_single_token

( requests: list override_bs: typing.Optional[int] = None ) → list[Tuple[float, bool]]

Parameters

  • requests (list[Tuple[str, dict]]) — list of requests containing the context and continuation to score.

Returns

list[Tuple[float, bool]]

list of (log likelihood, is greedy) pairs, one per request.

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

pad_and_gather

( output_tensor: Tensor drop_last_samples: bool = True num_samples: int = None ) → torch.Tensor

Parameters

  • output_tensor (torch.Tensor) — The output tensor to be padded.
  • drop_last_samples (bool, optional) — Whether to drop the last samples during gathering. Last samples are dropped when the number of samples is not divisible by the number of processes. Defaults to True.

Returns

torch.Tensor

The padded output tensor and the gathered length tensor.

Pads the output_tensor to the maximum length and gathers the lengths across processes.
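
Conceptually, the padding half of this operation looks like the sketch below; the helper and the hard-coded maximum length are illustrative, and the actual cross-process gather (done with accelerate in lighteval) is only indicated in a comment.

```python
import torch
import torch.nn.functional as F

# Conceptual sketch of the padding step behind pad_and_gather: before tensors
# of unequal length can be gathered across processes, every rank pads its
# local tensor to the maximum length seen on any rank.
def pad_to_length(output_tensor: torch.Tensor, max_length: int, pad_value: int = 0) -> torch.Tensor:
    pad_amount = max_length - output_tensor.shape[-1]
    # Right-pad the last dimension so all ranks end up with the same shape.
    return F.pad(output_tensor, (0, pad_amount), value=pad_value)

local = torch.tensor([[1, 2, 3]])        # this rank produced 3 tokens
max_len_across_ranks = 5                 # in lighteval this comes from gathering the lengths first
padded = pad_to_length(local, max_len_across_ranks)
print(padded)                            # tensor([[1, 2, 3, 0, 0]])
# gathered = accelerator.gather(padded)  # actual cross-process gather (not run here)
```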

prepare_batch_logprob

( batch: list padding_length: int max_context: typing.Optional[int] = None single_token: bool = False )

Tokenizes a batch of inputs and also returns the lengths, truncations and padding. This step is done manually since we tokenize log-probability inputs together with their continuation, to manage possible extra spaces added at the start by tokenizers; see tok_encode_pair.

AdapterModel

class lighteval.models.adapter_model.AdapterModel

( env_config: EnvConfig config: BaseModelConfig )

DeltaModel

class lighteval.models.delta_model.DeltaModel

( env_config: EnvConfig config: BaseModelConfig )

Inference Endpoints and TGI Models

InferenceEndpointModel

class lighteval.models.endpoint_model.InferenceEndpointModel

( config: typing.Union[lighteval.models.model_config.InferenceEndpointModelConfig, lighteval.models.model_config.InferenceModelConfig] env_config: EnvConfig )

InferenceEndpointModels can be used either with the free inference client or with inference endpoints, which use text-generation-inference to deploy your model for the duration of the evaluation.
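
As a rough sketch of the first usage mode, a config for the free inference client might be built as below; the field names are assumptions and should be checked against the installed lighteval version.

```python
# Hedged sketch only: field names below are assumptions about
# lighteval.models.model_config and may differ between versions.
from lighteval.models.model_config import InferenceModelConfig

# Free inference client mode: evaluate a model that is already hosted.
config = InferenceModelConfig(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed field name

# For a dedicated endpoint, an InferenceEndpointModelConfig would be built
# instead (instance type, size, region, ...), and lighteval deploys the model
# with text-generation-inference for the duration of the evaluation.
# model = InferenceEndpointModel(config=config, env_config=env_config)
```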

ModelClient

class lighteval.models.tgi_model.ModelClient

( address auth_token = None model_id = None )

Nanotron Model

NanotronLightevalModel

class lighteval.models.nanotron_model.NanotronLightevalModel

( checkpoint_path: str nanotron_config: FullNanotronConfig parallel_context: ParallelContext max_gen_toks: typing.Optional[int] = 256 max_length: typing.Optional[int] = None add_special_tokens: typing.Optional[bool] = True dtype: typing.Union[str, torch.dtype, NoneType] = None trust_remote_code: bool = False debug_one_layer_model: bool = False model_class: typing.Optional[typing.Type] = None env_config: EnvConfig = None )

gather

( output_tensor: Tensor process_group: dist.ProcessGroup = None )

Gathers together tensors of (possibly) various sizes spread across separate GPUs (first exchanges the lengths, then pads and gathers).

greedy_until

( requests: typing.List[lighteval.tasks.requests.GreedyUntilRequest] disable_tqdm: bool = False override_bs: int = -1 num_dataset_splits: int = 1 )

Greedy generation until a stop token is generated.

homogeneize_ending_conditions

( ending_condition: tuple | dict | list | str )

Ending conditions are submitted in several possible formats. By default, lighteval passes them as tuples (stop sequence, max number of items). In the harness they are sometimes passed as dicts {"until": ..., "max_length": ...} or as bare ending conditions, either lists or strings. Here we convert all these formats to a tuple containing a list of ending conditions and a float for the maximum length allowed.
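
The normalization can be pictured with a hypothetical stand-in such as the one below (this is not the actual implementation):

```python
# Hypothetical stand-in illustrating the normalization described above; the real
# implementation is NanotronLightevalModel.homogeneize_ending_conditions.
def normalize_ending_conditions(ending_condition) -> tuple[list[str], float]:
    if isinstance(ending_condition, tuple):        # (stop sequence, max number of items)
        stop, max_len = ending_condition
        return ([stop] if isinstance(stop, str) else list(stop), float(max_len))
    if isinstance(ending_condition, dict):         # {"until": ..., "max_length": ...}
        stop = ending_condition.get("until", [])
        return ([stop] if isinstance(stop, str) else list(stop),
                float(ending_condition.get("max_length", float("inf"))))
    if isinstance(ending_condition, str):          # bare stop string
        return ([ending_condition], float("inf"))
    return (list(ending_condition), float("inf"))  # bare list of stop strings

print(normalize_ending_conditions(("\n", 100)))                            # (['\n'], 100.0)
print(normalize_ending_conditions({"until": ["###"], "max_length": 256}))  # (['###'], 256.0)
```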

loglikelihood_single_token

( requests: typing.List[typing.Tuple[str, dict]] override_bs = 0 ) → List[Tuple[float, bool]]

Parameters

  • requests (List[Tuple[str, dict]]) — list of requests containing the context and continuation to score.

Returns

List[Tuple[float, bool]]

list of (log likelihood, is greedy) pairs, one per request.

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

pad_and_gather

( output_tensor: Tensor )

Gathers together tensors of (possibly) various sizes spread across separate GPUs (first exchanges the lengths, then pads and gathers).

prepare_batch

( batch: typing.List[str] padding_length: int max_context: typing.Optional[int] = None full_attention_masks: bool = False pad_on_left: bool = False )

Tokenizes a batch of inputs and also returns the lengths, truncations and padding.

We truncate to keep at most max_context tokens, and we pad to padding_length tokens.
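
The truncate-then-pad behavior for a single sequence can be sketched as follows; the tokenizer checkpoint and helper function are illustrative only.

```python
# Illustrative sketch of the truncate-to-max_context / pad-to-padding_length
# behavior described above; not the actual prepare_batch implementation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # example checkpoint only

def truncate_and_pad(text: str, max_context: int, padding_length: int,
                     pad_token_id: int = 0, pad_on_left: bool = False) -> list[int]:
    token_ids = tokenizer.encode(text)
    token_ids = token_ids[-max_context:]  # keep at most max_context tokens (keep the end of the prompt)
    padding = [pad_token_id] * (padding_length - len(token_ids))
    return padding + token_ids if pad_on_left else token_ids + padding

ids = truncate_and_pad("A fairly long prompt used for evaluation.", max_context=6, padding_length=8)
print(len(ids))  # 8
```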

VLLM Model

VLLMModel

class lighteval.models.vllm_model.VLLMModel

( config: VLLMModelConfig env_config: EnvConfig )

greedy_until

( requests: list override_bs: typing.Optional[int] = None ) → list[GenerateReturn]

Parameters

  • requests (list[Request]) — list of requests containing the context and ending conditions.
  • override_bs (int, optional) — Override the batch size for generation. Defaults to None.

Returns

list[GenerateReturn]

list of generated responses.

Generates responses using a greedy decoding strategy until certain ending conditions are met.
