Performs clean-up operations if needed, such as closing an endpoint.
( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]
Parameters
Returns
list[GenerativeResponse]
list of generated responses.
Generates responses using a greedy decoding strategy until certain ending conditions are met.
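As a rough illustration of what "greedy decoding until ending conditions" means, the sketch below implements the general technique with a plain transformers causal LM: pick the argmax token at each step and stop on EOS or a stop sequence. The model name and stop strings are only examples; this is not lighteval's internal implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def greedy_until(prompt: str, stop_sequences: list[str], max_new_tokens: int = 64) -> str:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    new_tokens = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids).logits[:, -1, :]
            next_id = logits.argmax(dim=-1, keepdim=True)  # greedy choice
            input_ids = torch.cat([input_ids, next_id], dim=-1)
            new_tokens.append(next_id.item())
            text = tokenizer.decode(new_tokens, skip_special_tokens=True)
            # Stop as soon as any ending condition is met.
            if next_id.item() == tokenizer.eos_token_id or any(s in text for s in stop_sequences):
                break
    return text

print(greedy_until("Question: 2 + 2 = ?\nAnswer:", stop_sequences=["\n"]))
```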
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
This function is used to compute the log likelihood of the context for perplexity metrics.
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
( context continuation pairwise: bool = False ) → Tuple[TokenSequence, TokenSequence]
Parameters
Returns
Tuple[TokenSequence, TokenSequence]
A tuple containing the encoded context and continuation.
Encodes a context, continuation pair by taking care of the spaces in between.
The advantages of pairwise encoding are: 1) it better aligns with how the LLM predicts tokens, and 2) it works when len(tok(context + continuation)) != len(tok(context)) + len(tok(continuation)). For example, this can happen for Chinese if no space is used between the context and the continuation.
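The simplified sketch below shows why tokenizing the pair jointly can matter: encoding context and continuation separately may not concatenate to the same token ids as encoding the whole string. The tokenizer choice and the naive split point are illustrative assumptions, not the library's exact logic.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

context, continuation = "The capital of France is", " Paris"

# Separate encoding: tokens at the boundary may merge differently.
ids_context = tokenizer(context, add_special_tokens=False).input_ids
ids_continuation = tokenizer(continuation, add_special_tokens=False).input_ids

# Pairwise encoding: tokenize the whole string, then split it back into two parts.
ids_whole = tokenizer(context + continuation, add_special_tokens=False).input_ids
ids_context_pairwise = ids_whole[: len(ids_context)]
ids_continuation_pairwise = ids_whole[len(ids_context):]

# For languages without boundary spaces (e.g. Chinese), len(ids_whole) can differ
# from len(ids_context) + len(ids_continuation); the pairwise split avoids that mismatch.
print(len(ids_whole), len(ids_context) + len(ids_continuation))
```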
( env_config: EnvConfig config: BaseModelConfig )
( requests: list override_bs: typing.Optional[int] = None ) → list[GenerativeResponse]
Generates responses using a greedy decoding strategy until certain ending conditions are met.
Computes all the parameters related to model_parallel.
( requests: list override_bs: typing.Optional[int] = None ) → list[Tuple[float, bool]]
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
( requests: list override_bs: typing.Optional[int] = None ) → list[Tuple[float, bool]]
Tokenize the context and continuation and compute the log likelihood of those tokenized sequences.
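To make the log-likelihood computation concrete, here is a minimal sketch of the underlying technique: tokenize context and continuation, run a causal LM once, and read off the log probabilities of the continuation tokens together with whether they match the greedy decode. It illustrates the idea only and is not lighteval's exact implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def loglikelihood(context: str, continuation: str) -> tuple[float, bool]:
    ctx_ids = tokenizer(context, add_special_tokens=False).input_ids
    cont_ids = tokenizer(continuation, add_special_tokens=False).input_ids
    input_ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        logits = model(input_ids).logits  # [1, seq_len, vocab]
    # Logits at position i predict token i + 1, so slice the positions covering the continuation.
    log_probs = F.log_softmax(logits[0, len(ctx_ids) - 1 : -1], dim=-1)
    cont_tensor = torch.tensor(cont_ids)
    token_log_probs = log_probs[torch.arange(len(cont_ids)), cont_tensor]
    is_greedy = bool((log_probs.argmax(dim=-1) == cont_tensor).all())
    return token_log_probs.sum().item(), is_greedy

print(loglikelihood("The capital of France is", " Paris"))
```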
( output_tensor: Tensor drop_last_samples: bool = True num_samples: int = None ) → torch.Tensor
Parameters
Returns
torch.Tensor
The padded output tensor and the gathered length tensor.
Pads the output_tensor to the maximum length and gathers the lengths across processes.
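The pad-then-gather pattern is needed because tensors of different lengths cannot be gathered directly across distributed processes. The sketch below shows the general idea with accelerate's padding and gather helpers; shapes and names are assumptions for illustration, not lighteval's exact code.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

def pad_and_gather(output_tensor: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # output_tensor: [batch, seq_len]; seq_len may differ from process to process.
    lengths = torch.tensor(
        [output_tensor.shape[1]] * output_tensor.shape[0], device=output_tensor.device
    )
    gathered_lengths = accelerator.gather(lengths)
    # Pad the sequence dimension to the maximum length across processes, then gather.
    padded = accelerator.pad_across_processes(output_tensor, dim=1, pad_index=0)
    gathered = accelerator.gather(padded)
    return gathered, gathered_lengths
```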
( batch: list padding_length: int max_context: typing.Optional[int] = None single_token: bool = False )
Tokenizes a batch of inputs and also returns the lengths, truncations, and padding. This step is done manually since we tokenize log-probability inputs together with their continuation, to manage possible extra spaces added at the start by tokenizers; see tok_encode_pair.
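As a rough picture of this manual batching step, the sketch below left-truncates each tokenized input to a maximum context length, left-pads it to a shared padding length, and records the true lengths and truncation counts. The function and variable names are illustrative, not the library's actual signatures.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def prepare_batch(texts: list[str], padding_length: int, max_context: int):
    input_ids, lengths, truncated = [], [], []
    pad_id = tokenizer.pad_token_id or tokenizer.eos_token_id
    for text in texts:
        ids = tokenizer(text, add_special_tokens=False).input_ids
        truncated.append(max(len(ids) - max_context, 0))
        ids = ids[-max_context:]                              # keep the rightmost tokens
        lengths.append(len(ids))
        ids = [pad_id] * (padding_length - len(ids)) + ids    # left padding
        input_ids.append(ids)
    return input_ids, lengths, truncated
```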
( env_config: EnvConfig config: BaseModelConfig )
( config: typing.Union[lighteval.models.model_config.InferenceEndpointModelConfig, lighteval.models.model_config.InferenceModelConfig] env_config: EnvConfig )
InferenceEndpointModels can be used either with the free inference client or with inference endpoints, which will use text-generation-inference to deploy your model for the duration of the evaluation.
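To illustrate the two backends this paragraph mentions, the sketch below uses huggingface_hub's InferenceClient, which can talk either to the free serverless Inference API (given a Hub model id) or to a dedicated text-generation-inference endpoint (given its URL). The model id and URL are placeholders, and this is not lighteval's internal client code.

```python
from huggingface_hub import InferenceClient

# Free inference client: resolve a hosted model by its Hub id (placeholder id).
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")

# Dedicated inference endpoint: point the client at the TGI endpoint URL instead.
# client = InferenceClient(model="https://my-endpoint.endpoints.huggingface.cloud")

output = client.text_generation("The capital of France is", max_new_tokens=5)
print(output)
```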
( address auth_token = None model_id = None )