Models

Model

LightevalModel

class lighteval.models.abstract_model.LightevalModel

( )

cleanup

( )

Clean up operations if needed, such as closing an endpoint.

greedy_until

( requests: list override_bs: typing.Optional[int] = None ) list[GenerativeResponse]

Parameters

  • requests (list[Request]) — list of requests containing the context and ending conditions.
  • override_bs (int, optional) — Override the batch size for generation. Defaults to None.

Returns

list[GenerativeResponse]

list of generated responses.

Generates responses using a greedy decoding strategy until certain ending conditions are met.
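
A minimal usage sketch, assuming model is a concrete LightevalModel subclass and requests were already built by the lighteval pipeline; the .result attribute on GenerativeResponse is an assumption made for illustration.

```python
# Hedged sketch: `model` and `requests` are assumed to come from the lighteval pipeline.
responses = model.greedy_until(requests)                 # default batch size
responses = model.greedy_until(requests, override_bs=8)  # force a batch size of 8

for response in responses:
    # `.result` is assumed to hold the generated text for the matching request
    print(response.result)
```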

greedy_until_multi_turn

( requests: list override_bs: typing.Optional[int] = None )

Generates responses using a greedy decoding strategy until certain ending conditions are met.

loglikelihood

( requests: list override_bs: typing.Optional[int] = None )

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

loglikelihood_rolling

( requests: list override_bs: typing.Optional[int] = None )

This function is used to compute the log likelihood of the context for perplexity metrics.

loglikelihood_single_token

( requests: list override_bs: typing.Optional[int] = None )

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

tok_encode_pair

( context continuation pairwise: bool = False ) Tuple[TokenSequence, TokenSequence]

Parameters

  • context (str) — The context string to be encoded.
  • continuation (str) — The continuation string to be encoded.
  • pairwise (bool) — If True, encode context and continuation separately. If False, encode them together and then split.

Returns

Tuple[TokenSequence, TokenSequence]

A tuple containing the encoded context and continuation.

Encodes a context, continuation pair by taking care of the spaces in between.

The advantages of pairwise encoding are: 1) it better aligns with how the LLM predicts tokens, and 2) it works when len(tok(context + continuation)) != len(tok(context)) + len(tok(continuation)), e.g. in Chinese, where no space separates the context and the continuation.
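
To make the difference concrete, here is a small sketch of the two encoding strategies using a plain transformers tokenizer; it only illustrates the idea and is not the lighteval implementation.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

context, continuation = "The capital of France is", " Paris"

# pairwise=False style: encode the concatenation, then split at the context length
joint = tok(context + continuation, add_special_tokens=False)["input_ids"]
ctx_len = len(tok(context, add_special_tokens=False)["input_ids"])
ctx_ids, cont_ids = joint[:ctx_len], joint[ctx_len:]

# pairwise=True style: encode each part on its own
ctx_ids_pw = tok(context, add_special_tokens=False)["input_ids"]
cont_ids_pw = tok(continuation, add_special_tokens=False)["input_ids"]

# The two splits can disagree when tokens merge across the boundary, e.g. in
# languages written without spaces, which is why the pairwise option exists.
```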

Accelerate and Transformers Models

BaseModel

class lighteval.models.base_model.BaseModel

( env_config: EnvConfig config: BaseModelConfig )
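
A hedged instantiation sketch: the import path for the config classes and the field names pretrained and cache_dir are assumptions based on the signatures shown on this page, not a verified recipe for a specific release.

```python
from lighteval.models.base_model import BaseModel
from lighteval.models.model_config import BaseModelConfig, EnvConfig  # assumed import path

env_config = EnvConfig(cache_dir="~/.cache/huggingface")  # assumed field name
config = BaseModelConfig(pretrained="gpt2")               # assumed field name

model = BaseModel(env_config=env_config, config=config)
try:
    responses = model.greedy_until(requests)  # `requests` are built by the lighteval pipeline
finally:
    model.cleanup()
```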

greedy_until

( requests: list override_bs: typing.Optional[int] = None ) list[GenerativeResponse]

Parameters

  • requests (list[Request]) — list of requests containing the context and ending conditions.
  • override_bs (int, optional) — Override the batch size for generation. Defaults to None.

Returns

list[GenerativeResponse]

list of generated responses.

Generates responses using a greedy decoding strategy until certain ending conditions are met.

init_model_parallel

( model_parallel: bool | None = None )

Computes all the parameters related to model_parallel.

loglikelihood

( requests: list override_bs: typing.Optional[int] = None ) list[Tuple[float, bool]]

Parameters

  • requests (list[Tuple[str, dict]]) — list of requests containing the context and continuation to score.

Returns

list[Tuple[float, bool]]

list of tuples containing the log likelihood of each continuation and whether it is an exact match of the greedy generation.

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

loglikelihood_single_token

( requests: list override_bs: typing.Optional[int] = None ) list[Tuple[float, bool]]

Parameters

  • requests (list[Tuple[str, dict]]) — list of requests containing the context and single-token continuation to score.

Returns

list[Tuple[float, bool]]

list of tuples containing the log likelihood of each single-token continuation and whether it matches the greedy prediction.

Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.

pad_and_gather

( output_tensor: Tensor drop_last_samples: bool = True num_samples: int = None ) torch.Tensor

Parameters

  • output_tensor (torch.Tensor) — The output tensor to be padded.
  • drop_last_samples (bool, optional) — Whether to drop the last samples during gathering. Last samples are dropped when the number of samples is not divisible by the number of processes. Defaults to True.

Returns

torch.Tensor

The padded output tensor and the gathered length tensor.

Pads the output_tensor to the maximum length and gathers the lengths across processes.
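
As a rough illustration of what this method does, here is a standalone sketch of the pad-then-gather pattern built on accelerate.utils.gather; the helper name and padding value are assumptions, not the lighteval source.

```python
import torch
from accelerate.utils import gather  # gathers tensors across processes


def pad_and_gather_sketch(output_tensor: torch.Tensor, pad_value: int = 0) -> torch.Tensor:
    # Share each process's sequence length so every rank can pad to the global maximum.
    length = torch.tensor([output_tensor.shape[-1]], device=output_tensor.device)
    max_length = int(gather(length).max().item())

    # Right-pad the last dimension up to the global maximum length.
    padded = torch.nn.functional.pad(
        output_tensor, (0, max_length - output_tensor.shape[-1]), value=pad_value
    )

    # All ranks now share the same shape, so the cross-process gather is well defined.
    return gather(padded)
```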

prepare_batch_logprob

( batch: list padding_length: int max_context: typing.Optional[int] = None single_token: bool = False )

Tokenize a batch of inputs and also return the lengths, truncations, and padding. This step is done manually since we tokenize log-probability inputs together with their continuation, to manage possible extra spaces added at the start by tokenizers (see tok_encode_pair).

DeltaModel

class lighteval.models.delta_model.DeltaModel

( env_config: EnvConfig config: BaseModelConfig )

Inference Endpoints and TGI Models

InferenceEndpointModel

class lighteval.models.endpoint_model.InferenceEndpointModel

( config: typing.Union[lighteval.models.model_config.InferenceEndpointModelConfig, lighteval.models.model_config.InferenceModelConfig] env_config: EnvConfig )

InferenceEndpointModel can be used either with the free inference client or with inference endpoints, which use text-generation-inference to deploy your model for the duration of the evaluation.
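
A hedged sketch of the serverless path follows; InferenceModelConfig(model=...) and the EnvConfig field are assumptions made for illustration, and the endpoint-backed path would instead take an InferenceEndpointModelConfig describing the deployment.

```python
from lighteval.models.endpoint_model import InferenceEndpointModel
from lighteval.models.model_config import InferenceModelConfig, EnvConfig  # assumed import path

config = InferenceModelConfig(model="meta-llama/Llama-2-7b-hf")  # assumed field name
env_config = EnvConfig(token="hf_xxx")                           # assumed field name

model = InferenceEndpointModel(config=config, env_config=env_config)
responses = model.greedy_until(requests)  # same LightevalModel interface as local models
model.cleanup()  # shuts the endpoint down if one was created
```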

ModelClient

class lighteval.models.tgi_model.ModelClient

( address auth_token = None model_id = None )
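
A hedged usage sketch: it assumes a text-generation-inference server is already running at the given address, which is illustrative only.

```python
from lighteval.models.tgi_model import ModelClient

model = ModelClient(address="http://localhost:8080", auth_token=None, model_id=None)
responses = model.greedy_until(requests)  # same LightevalModel interface as above
```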
