( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: callable )
( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: callable )
Metric computed over the whole corpus, with computations happening at the aggregation phase.
( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: callable )
Metric computed per sample, then aggregated over the corpus
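As a quick illustration, a custom sample-level metric pairs a per-sample scoring function with a corpus-level aggregation. This is a minimal sketch only: the import path and the enum members used below are assumptions about recent lighteval releases, and the scoring function is hypothetical.

```python
import numpy as np

# Assumed import path; it may differ between lighteval versions.
from lighteval.metrics.utils.metric_utils import (
    MetricCategory,
    MetricUseCase,
    SampleLevelMetric,
)


def prediction_is_nonempty(golds: list[str], predictions: list[str], **kwargs) -> float:
    # Hypothetical per-sample scorer: 1.0 if the first prediction is non-empty.
    return 1.0 if predictions and predictions[0].strip() else 0.0


my_metric = SampleLevelMetric(
    metric_name="nonempty_rate",          # hypothetical metric name
    higher_is_better=True,
    category=MetricCategory.GENERATIVE,   # assumed enum member
    use_case=MetricUseCase.ACCURACY,      # assumed enum member
    sample_level_fn=prediction_is_nonempty,
    corpus_level_fn=np.mean,              # aggregation over the per-sample scores
)
```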
( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: dict )
Some metrics are more efficient to compute together. For example, if a costly preprocessing step is shared by several metrics, it makes more sense to run it once for all of them.
( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: dict )
MetricGrouping computed over the whole corpus, with computations happening at the aggregation phase.
( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: callable corpus_level_fn: dict )
MetricGrouping computed per sample, then aggregated over the corpus.
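Following the same pattern, a sample-level grouping returns several sub-metric values from one sample-level function, with one aggregation per sub-metric. A hedged sketch, with the same caveats as above about the assumed import path and enum members:

```python
import numpy as np

# Assumed import path; adjust to your installed lighteval version.
from lighteval.metrics.utils.metric_utils import (
    MetricCategory,
    MetricUseCase,
    SampleLevelMetricGrouping,
)


def length_stats(golds: list[str], predictions: list[str], **kwargs) -> dict[str, float]:
    # Hypothetical shared preprocessing: inspect the prediction once, derive two scores.
    pred = predictions[0] if predictions else ""
    return {
        "pred_chars": float(len(pred)),
        "pred_words": float(len(pred.split())),
    }


length_metrics = SampleLevelMetricGrouping(
    metric_name=["pred_chars", "pred_words"],
    higher_is_better={"pred_chars": True, "pred_words": True},
    category=MetricCategory.GENERATIVE,    # assumed enum member
    use_case=MetricUseCase.SUMMARIZATION,  # assumed enum member
    sample_level_fn=length_stats,
    corpus_level_fn={"pred_chars": np.mean, "pred_words": np.mean},
)
```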
( average: str num_classes: int = 2 )
Computes the metric score over all the generated items of the corpus, using the scikit-learn implementation.
Computes the metric score over all the generated items of the corpus.
( metric_type: str lang: typing.Literal['zh', 'ja', 'ko', ''] = '' )
Computes the metric score over all the generated items of the corpus, using the sacrebleu implementation.
( items: list ) → float
Computes the Matthews Correlation Coefficient, using scikit-learn (doc).
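The computation itself reduces to a single scikit-learn call over the collected gold/prediction label pairs, roughly as in this standalone sketch:

```python
from sklearn.metrics import matthews_corrcoef

# Each item pairs a gold label with a predicted label (binary classification here).
golds = [1, 0, 1, 1, 0]
preds = [1, 0, 0, 1, 0]

score = matthews_corrcoef(golds, preds)  # float in [-1, 1]
print(score)
```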
( aggregation_function: typing.Callable[[list[float]], float] = max normalize_gold: typing.Optional[typing.Callable[[str], str]] = None normalize_pred: typing.Optional[typing.Callable[[str], str]] = None strip_strings: bool = False type_exact_match: str = 'full' )
( golds: list predictions: list **kwargs ) → float
Computes the metric over a list of golds and predictions for one single sample.
( gold: str pred: str ) → float
Compares two strings only.
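In essence, the per-sample pipeline is: optionally normalize and strip both strings, then compare them fully or only on a prefix/suffix. A standalone sketch of the comparison step (illustrative, not the library code):

```python
def exact_match(gold: str, pred: str, strip_strings: bool = True,
                type_exact_match: str = "full") -> float:
    """Toy re-implementation of the string comparison, for illustration only."""
    if strip_strings:
        gold, pred = gold.strip(), pred.strip()
    if type_exact_match == "prefix":
        return float(pred.startswith(gold))
    if type_exact_match == "suffix":
        return float(pred.endswith(gold))
    return float(gold == pred)  # "full": exact string equality


assert exact_match("Paris", "Paris") == 1.0
assert exact_match("Paris", "Paris is the capital", type_exact_match="prefix") == 1.0
```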
( aggregation_function: typing.Callable[[list[float]], float] = max normalize_gold: typing.Optional[typing.Callable[[str], str]] = None normalize_pred: typing.Optional[typing.Callable[[str], str]] = None strip_strings: bool = False )
( golds: list predictions: list **kwargs ) → float
Computes the metric over a list of golds and predictions for one single sample.
( gold: str pred: str ) → float
Compares two strings only.
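Conceptually, the per-sample F1 treats the gold and the prediction as bags of tokens and scores their overlap. A minimal sketch of that idea:

```python
from collections import Counter


def token_f1(gold: str, pred: str) -> float:
    """Bag-of-words F1 between one gold and one prediction (illustrative only)."""
    gold_tokens, pred_tokens = gold.split(), pred.split()
    common = Counter(gold_tokens) & Counter(pred_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


print(token_f1("the cat sat", "the cat sat down"))  # ~0.857
```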
( logprob_normalization: lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None )
( gold_ixs: list choices_logprob: list unconditioned_logprob: list[float] | None choices_tokens: list[list[int]] | None formatted_doc: Doc **kwargs ) → int
Returns
int: The eval score: 1 if the best log-prob choice is in gold, 0 otherwise.
Computes the log-likelihood accuracy: is the choice with the highest logprob in choices_logprob present in gold_ixs?
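Stripped of normalization, the check is an argmax over the choice log-probabilities followed by a membership test in the gold indices; with a character-length normalization, each log-probability is first divided by the choice length. A standalone sketch:

```python
import numpy as np


def loglikelihood_acc(gold_ixs: list[int], choices_logprob: list[float],
                      choices_text: list[str] | None = None,
                      char_normalize: bool = False) -> int:
    """Illustrative only: 1 if the highest-scoring choice is a gold choice, else 0."""
    scores = np.array(choices_logprob, dtype=float)
    if char_normalize and choices_text is not None:
        scores = scores / np.array([max(len(c), 1) for c in choices_text])
    best = int(np.argmax(scores))
    return int(best in gold_ixs)


print(loglikelihood_acc(gold_ixs=[2], choices_logprob=[-4.2, -3.9, -1.1, -5.0]))  # 1
```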
( log_prob_normalization: lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None aggregation_function: typing.Callable[[numpy.ndarray], float] = max )
( gold_ixs: list choices_logprob: list unconditioned_logprob: list[float] | None choices_tokens: list[list[int]] | None formatted_doc: Doc **kwargs ) → float
Returns
float: The probability of the best log-prob choice being a gold choice.
Computes the log likelihood probability: chance of choosing the best choice.
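The probability variant softmaxes the choice log-probabilities and reports the mass assigned to the gold choice(s), combined with the configured aggregation function (max by default). A standalone sketch:

```python
import numpy as np


def gold_choice_probability(gold_ixs: list[int], choices_logprob: list[float]) -> float:
    """Illustrative only: probability mass on the gold choices after a softmax."""
    logprobs = np.array(choices_logprob, dtype=float)
    probs = np.exp(logprobs - logprobs.max())
    probs /= probs.sum()
    return float(max(probs[i] for i in gold_ixs))


print(gold_choice_probability(gold_ixs=[0], choices_logprob=[-1.0, -2.0, -3.0]))  # ~0.665
```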
( normalization: lighteval.metrics.normalizations.LogProbTokenNorm | None = None aggregation_function: typing.Callable[[numpy.ndarray], float] = max )
( logprobs: list target_tokens: list **kwargs ) → float
Returns
float: The probability of the best log-prob choice being a gold choice.
Computes the log likelihood probability: chance of choosing the best choice.
( choices_logprob: list gold_ixs: list **kwargs ) → int
Computes the recall at the requested depth level: looks at the n best predicted choices (those with the highest log probabilities) and checks whether an actual gold is among them.
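Equivalently: sort the choices by log-probability, keep the top k, and check whether any gold index survived the cut. A standalone sketch:

```python
import numpy as np


def recall_at_k(gold_ixs: list[int], choices_logprob: list[float], k: int = 2) -> int:
    """Illustrative only: 1 if a gold choice is among the k most likely choices."""
    top_k = np.argsort(choices_logprob)[::-1][:k]  # indices of the k highest logprobs
    return int(any(ix in gold_ixs for ix in top_k))


print(recall_at_k(gold_ixs=[3], choices_logprob=[-1.0, -0.5, -2.0, -0.8], k=2))  # 1
```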
( choices_logprob: list gold_ixs: list formatted_doc: Doc **kwargs ) → float
Returns
float: The MRR score.
Mean reciprocal rank. Measures the quality of a ranking of choices (ordered by correctness).
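MRR is one over the rank of the best-ranked gold choice when choices are sorted by decreasing log-probability. A standalone sketch:

```python
import numpy as np


def mrr(gold_ixs: list[int], choices_logprob: list[float]) -> float:
    """Illustrative only: reciprocal rank of the best-ranked gold choice."""
    ranking = np.argsort(choices_logprob)[::-1]  # best choice first
    ranks = [int(np.where(ranking == g)[0][0]) + 1 for g in gold_ixs]
    return 1.0 / min(ranks)


print(mrr(gold_ixs=[2], choices_logprob=[-0.2, -1.5, -0.4]))  # gold ranked 2nd -> 0.5
```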
( methods: str | list[str] multiple_golds: bool = False bootstrap: bool = False normalize_gold: callable = None normalize_pred: callable = None aggregation_function: callable = None tokenizer: object = None )
( golds: list predictions: list **kwargs ) → float or dict
Computes the metric(s) over a list of golds and predictions for one single sample.
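Under the hood this family of metrics relies on a ROUGE scorer; a minimal standalone example with the rouge_score package (an illustrative configuration, not necessarily lighteval's exact one):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="the cat sat on the mat",
    prediction="the cat is on the mat",
)
print(scores["rougeL"].fmeasure)
```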
( normalize_gold: callable = None normalize_pred: callable = None )
( golds: list predictions: list **kwargs ) → dict
Computes the precision, recall and F1 score using the BERT scorer.
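A minimal standalone example with the bert-score package (the model choice and options are illustrative, not necessarily what lighteval configures):

```python
from bert_score import BERTScorer

# Downloads a model on first use; lang="en" selects a default English model.
scorer = BERTScorer(lang="en", rescale_with_baseline=False)
P, R, F1 = scorer.score(
    cands=["the cat sat on the mat"],
    refs=["a cat was sitting on the mat"],
)
print(P.item(), R.item(), F1.item())  # precision, recall, F1
```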
( normalize_input: callable = remove_braces normalize_pred: callable = remove_braces_and_strip input_column: str = 'text' )
( predictions: list formatted_doc: Doc **kwargs ) → dict[str, float]
Compute the extractiveness of the predictions.
This method calculates coverage, density, and compression scores for a single prediction against the input text.
( normalize_input: callable = remove_braces normalize_pred: callable = remove_braces_and_strip input_column: str = 'text' )
( predictions: list formatted_doc: Doc **kwargs ) → dict[str, float]
Compute the faithfulness of the predictions.
The SummaCZS (Summary Content Zero-Shot) model is used with configurable granularity and model variation.
( golds: list predictions: list **kwargs ) → float
Uses the stored BLEURT scorer to compute the score on the current sample.
( golds: list predictions: list **kwargs ) → float
Computes the sentence level BLEU between the golds and each prediction, then takes the average.
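For one sample, this amounts to scoring each prediction against the golds with sentence-level BLEU and averaging the results; a sketch with sacrebleu (illustrative configuration):

```python
import statistics

import sacrebleu

golds = ["the cat sat on the mat"]
predictions = ["the cat is on the mat", "a dog sat on the mat"]

# Score each prediction against the golds, then average over predictions.
scores = [sacrebleu.sentence_bleu(pred, golds).score for pred in predictions]
print(statistics.mean(scores))
```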
( metric_types: list[str] | str strip_prediction: bool = True )
( golds: list predictions: list **kwargs ) → dict
Computes all the requested metrics on the golds and prediction.
Compute the edit similarity between two lists of strings.
Edit similarity is also used in the paper Lee, Katherine, et al. “Deduplicating training data makes language models better.” arXiv preprint arXiv:2107.06499 (2021).
Compute the length of the longest common prefix.
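Both quantities can be computed directly on a pair of strings; this standalone sketch (not the library's implementation) shows edit similarity as one minus the normalized Levenshtein distance, plus the longest common prefix length:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def edit_similarity(a: str, b: str) -> float:
    """1 - normalized edit distance; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))


def longest_common_prefix_len(a: str, b: str) -> int:
    n = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        n += 1
    return n


print(edit_similarity("kitten", "sitting"))               # 1 - 3/7 = 0.571...
print(longest_common_prefix_len("prefix_a", "prefix_b"))  # 7
```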
( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )
( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )
Compute the score of a generative task using an LLM as a judge. The generative task can be multi-turn with at most two turns; in that case, scores are returned for turns 1 and 2. Also returns user_prompt and judgement, which are ignored later by the aggregator.
( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['litellm', 'openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )
Compute the score of a generative task using an LLM as a judge. The generative task can be multi-turn with at most two turns; in that case, scores are returned for turns 1 and 2. Also returns user_prompt and judgement, which are ignored later by the aggregator.
( k: int normalize_gold: <built-in function callable> = None normalize_pred: <built-in function callable> = None strip_strings: bool = False type_exact_match: str = 'full' )
( golds: list predictions: list **kwargs ) → float
Computes the metric over a list of golds and predictions for one single sample. It applies normalization (if needed) to the model predictions and the golds, takes the most frequent answer among all the available predictions, then compares it to the gold.
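The majority-vote step boils down to normalizing the k sampled predictions, picking the most frequent one, and exact-matching it against the gold; a standalone sketch with a hypothetical lowercasing normalization:

```python
from collections import Counter


def maj_at_k(gold: str, predictions: list[str]) -> float:
    """Illustrative only: majority vote over k predictions, then exact match."""
    normalized = [p.strip().lower() for p in predictions]
    majority_answer, _ = Counter(normalized).most_common(1)[0]
    return float(majority_answer == gold.strip().lower())


print(maj_at_k("paris", ["Paris", "Lyon", "paris "]))  # 1.0
```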
( model: str templates: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['litellm', 'openai', 'transformers', 'tgi', 'vllm'] url: str | None = None api_key: str | None = None )
A class representing a judge for evaluating answers using either the OpenAI or Transformers library.
Methods:
evaluate_answer: Evaluates an answer using the OpenAI API or Transformers library.
lazy_load_client: Lazy loads the OpenAI client or Transformers pipeline.
call_api: Calls the API to get the judge's response.
call_transformers: Calls the Transformers pipeline to get the judge's response.
call_vllm: Calls the vLLM pipeline to get the judge's response.
( question: str answer: str options: list[str] | None = None gold: str | None = None )
Evaluates an answer using either the Transformers library or the OpenAI API.