( name: str prompt_function: typing.Callable[[dict, str], lighteval.tasks.requests.Doc | None] hf_repo: str hf_subset: str metric: list[lighteval.metrics.utils.metric_utils.Metric | lighteval.metrics.metrics.Metrics] | tuple[lighteval.metrics.utils.metric_utils.Metric | lighteval.metrics.metrics.Metrics, ...] hf_revision: typing.Optional[str] = None hf_filter: typing.Optional[typing.Callable[[dict], bool]] = None hf_avail_splits: typing.Union[list[str], tuple[str, ...], NoneType] = <factory> trust_dataset: bool = False evaluation_splits: list[str] | tuple[str, ...] = <factory> few_shots_split: typing.Optional[str] = None few_shots_select: typing.Optional[str] = None generation_size: typing.Optional[int] = None generation_grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None stop_sequence: typing.Union[list[str], tuple[str, ...], NoneType] = None num_samples: typing.Optional[list[int]] = None suite: list[str] | tuple[str, ...] = <factory> original_num_docs: int = -1 effective_num_docs: int = -1 must_remove_duplicate_docs: bool = False version: int = 0 )
Parameters
prompt_function: Function that creates Doc samples from each line of the evaluation dataset.
Stored configuration of a given LightevalTask.
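A minimal sketch of defining such a configuration for a custom task; the dataset repo, subset, prompt function, and metric choice below are placeholders, not a real benchmark:

```python
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc
from lighteval.metrics.metrics import Metrics

# Hypothetical prompt function: builds one Doc per dataset row.
def prompt_fn(line: dict, task_name: str) -> Doc:
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=[" Yes", " No"],
        gold_index=0 if line["answer"] else 1,
    )

# Placeholder repo/subset names; metric choice is illustrative.
task = LightevalTaskConfig(
    name="my_custom_task",
    prompt_function=prompt_fn,
    hf_repo="my-org/my-dataset",
    hf_subset="default",
    metric=[Metrics.loglikelihood_acc],
    evaluation_splits=["test"],
    suite=["custom"],
)
```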
( name: str cfg: LightevalTaskConfig cache_dir: typing.Optional[str] = None )
Returns a dict mapping each metric name to its aggregation function, for all metrics of the task.
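Illustrative shape of the returned mapping (the metric names below are placeholders):

```python
import numpy as np

aggregation = {
    "acc": np.mean,  # per-sample accuracies averaged over the corpus
    "f1": np.mean,
}
```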
( formatted_doc: Doc context: str document_id_seed: str current_task_name: str ) → dict[RequestType, List[Request]]
Parameters
Returns
dict[RequestType, List[Request]]
List of requests.
Constructs a list of requests from the task based on the given parameters.
Returns the evaluation documents.
( ) → list[Doc]
Returns
list[Doc]
Documents that will be used as few-shot examples. One document = one few-shot example.
Returns the few-shot documents. If the few-shot documents are not available, it gets them from the few-shot split or from the evaluation split.
( available_splits: list[str] | tuple[str, ...] number_of_splits: int = 1 ) → list[str]
Parses the candidate few-shot split keys in order (train, then validation), matches them against the available splits, and returns the first available one.
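A small sketch of that selection order, with a hypothetical helper name:

```python
def pick_fewshot_split(available_splits: list[str]) -> str | None:
    # Try "train"-like keys first, then "validation"-like keys.
    for candidate in ("train", "validation"):
        matches = [s for s in available_splits if candidate in s]
        if matches:
            return matches[0]  # first available match wins
    return None

print(pick_fewshot_split(["validation", "test"]))  # "validation"
```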
( tasks: list dataset_loading_processes: int = 1 )
Loads the datasets from the Hugging Face Hub for the given tasks.
( task: LightevalTask lm: LightevalModel )
( formatted_doc: Doc ) → str
In some cases, when selecting few-shot samples, we want to use specific document classes that must be specified separately from the target. For example, for a document whose gold is a JSON object, we might want to use only one of its keys to define the sorting classes of the few-shot samples. Otherwise, the gold itself is used.
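A hypothetical illustration of the JSON case described above:

```python
# The gold is a JSON object, but only its "label" key should define the
# sorting class used to pick few-shot samples.
gold = {"label": "entailment", "rationale": "premise implies hypothesis"}
sorting_class = gold["label"]  # group few-shot samples by label only
```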
( formatted_doc: Doc ) → str
Returns the target of the given document.
( doc: Doc return_instructions: bool = False ) → str
Returns the query of the document without the instructions. If the document has instructions, they are removed from the query.
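A sketch of the stripping behavior, assuming the instruction is a prefix of the query:

```python
instruction = "Answer with Yes or No.\n"
query = instruction + "Is the sun hot?"
stripped = query[len(instruction):]  # "Is the sun hot?"
# With return_instructions=True, both pieces would be returned.
```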
( cache_dir: typing.Optional[str] = None custom_tasks: typing.Union[str, pathlib.Path, module, NoneType] = None )
The Registry class is used to manage the task registry and get task classes.
( task_definition: str ) → list[str]
( task_names: list ) → Dict[str, LightevalTask]
Get a dictionary of tasks based on the task name list (suite|task).
( task_name: str ) → LightevalTask
Get the task class based on the task name (suite|task).
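A minimal usage sketch; the task name below is a placeholder in the suite|task form:

```python
from lighteval.tasks.registry import Registry

registry = Registry(cache_dir=None)
tasks = registry.get_task_dict(["lighteval|boolq"])  # {task_name: LightevalTask}
task_class = registry.get_task_class("lighteval|boolq")
```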
Prints all the tasks in the task registry.
( task_name: str sample_index: int request_index: int context: str metric_categories: list )
Represents a request for a specific task, example, and request index within that example in the evaluation process. For example, in the task “boolq”, the example “Is the sun hot?” yields the two requests “Is the sun hot? Yes” and “Is the sun hot? No”.
( task_name: str sample_index: int request_index: int context: str metric_categories: list choice: str tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for log-likelihood evaluation.
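A sketch of how the “boolq” example above expands into one log-likelihood request per choice (metric_categories is left empty for brevity; in practice the task fills it):

```python
from lighteval.tasks.requests import LoglikelihoodRequest

requests = [
    LoglikelihoodRequest(
        task_name="boolq",
        sample_index=0,
        request_index=i,
        context="Is the sun hot?",
        metric_categories=[],  # normally the task's metric categories
        choice=choice,
    )
    for i, choice in enumerate([" Yes", " No"])
]
```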
( task_name: str sample_index: int request_index: int context: str metric_categories: list choices: list tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for calculating the log-likelihood of a single token. This is faster than the general case because all the log-likelihoods can be obtained in a single pass.
( task_name: str sample_index: int request_index: int context: str metric_categories: list tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for log-likelihood rolling evaluation.
Inherits from the base Request class.
( task_name: str sample_index: int request_index: int context: str metric_categories: list stop_sequence: typing.Union[str, tuple[str], list[str]] generation_size: typing.Optional[int] generation_grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None tokenized_context: list = None num_samples: int = None do_sample: bool = False use_logits: bool = False )
Parameters
Represents a request for generating text using the Greedy-Until algorithm.
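A hedged construction sketch; the task name and prompt are placeholders:

```python
from lighteval.tasks.requests import GreedyUntilRequest

request = GreedyUntilRequest(
    task_name="summarization",
    sample_index=0,
    request_index=0,
    context="Summarize the following article: ...",
    metric_categories=[],
    stop_sequence=["\n"],
    generation_size=256,
)
```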
( task_name: str sample_index: int request_index: int context: str metric_categories: list stop_sequence: str generation_size: int use_logits: bool = False )
Represents a multi-turn request for generating text using the Greedy-Until algorithm.
( new_arr: list ) → list
Get the original order of the data.
( split_id: int ) → tuple
Get the start and end indices of a dataset split.
Iterator that yields the start and end indices of each dataset split. It also updates the starting batch size for each split (trying to double the batch size every time we move to a new split).
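A standalone sketch of that doubling pattern (the names here are illustrative, not the class's actual attributes):

```python
def splits_iterator(num_splits: int, split_size: int, starting_batch_size: int):
    batch_size = starting_batch_size
    for split_id in range(num_splits):
        start, end = split_id * split_size, (split_id + 1) * split_size
        yield start, end, batch_size
        batch_size *= 2  # later splits hold shorter samples, so try bigger batches

for limits in splits_iterator(3, 100, 8):
    print(limits)  # (0, 100, 8), then (100, 200, 16), then (200, 300, 32)
```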
( requests: list num_dataset_splits: int )
( num_dataset_splits ) → type
Initialises the split limits based on generation parameters. The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.
For generative tasks, self._sorting_criteria outputs the generation parameters (whether the task uses logits, and its stop sequences) along with the item length, the actual size sorting factor. In this function, we create evaluation groups keyed by those generation parameters (logits and eos), so that samples with similar properties get batched together afterwards. The samples are then further organised by length within each split.
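A hedged sketch of the grouping idea, using the request fields shown earlier (use_logits and stop_sequence):

```python
from collections import defaultdict

def group_by_generation_params(requests):
    groups = defaultdict(list)
    for req in requests:
        # Requests sharing (logits flag, stop sequences) land in the same group.
        key = (req.use_logits, tuple(req.stop_sequence or ()))
        groups[key].append(req)
    for reqs in groups.values():
        # Within a group, sort by context length, longest first.
        reqs.sort(key=lambda r: len(r.context), reverse=True)
    return groups
```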
( requests: list num_dataset_splits: int )
( dataset: Dataset num_replicas: typing.Optional[int] = None rank: typing.Optional[int] = None shuffle: bool = True seed: int = 0 drop_last: bool = False )
A distributed sampler that copies the last element only when drop_last is False, so that, since our samples are sorted by length, we keep only a small amount of padding in the batches.
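A sketch of that padding rule; the helper function is hypothetical:

```python
def pad_indices(indices: list[int], total_size: int, drop_last: bool) -> list[int]:
    if drop_last:
        return indices[:total_size]
    # Repeat the last (length-sorted) index so per-batch padding stays small.
    return indices + [indices[-1]] * (total_size - len(indices))

print(pad_indices([0, 1, 2, 3], 6, drop_last=False))  # [0, 1, 2, 3, 3, 3]
```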