( output_dir: str, save_details: bool = True, push_to_hub: bool = False, push_to_tensorboard: bool = False, hub_results_org: str | None = '', tensorboard_metric_prefix: str = 'eval', public: bool = False, nanotron_run_info: GeneralArgs = None )
Parameters
output_dir (str) — Local folder path where you want results to be saved.
save_details (bool, defaults to True) — If True, details are saved to the output_dir.
push_to_hub (bool, defaults to False) — If True, details are pushed to the hub. Results are pushed to {hub_results_org}/details__{sanitized model_name} for the model model_name, as a public dataset, if public is True, else to {hub_results_org}/details__{sanitized model_name}_private, a private dataset.
push_to_tensorboard (bool, defaults to False) — If True, creates and pushes the results to a tensorboard folder on the hub.
hub_results_org (str, optional) — The organisation to push the results to. See more details about the datasets organisation in EvaluationTracker.save.
tensorboard_metric_prefix (str, defaults to "eval") — Prefix for the metrics in the tensorboard logs.
public (bool, defaults to False) — If True, results and details are pushed to public orgs.
nanotron_run_info (~nanotron.config.GeneralArgs, optional) — Reference to information about Nanotron model runs.
Keeps track of the overall evaluation process and relevant information.
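The repository naming rule used by push_to_hub can be sketched as follows. This is a minimal illustration of the pattern described above; the sanitization function shown here is an assumption, not lighteval's actual implementation:

```python
def details_repo_id(hub_results_org: str, model_name: str, public: bool) -> str:
    # Hypothetical sanitization: replace "/" so "org/model" becomes a valid repo name.
    sanitized = model_name.replace("/", "__")
    # Private details datasets get the "_private" suffix.
    suffix = "" if public else "_private"
    return f"{hub_results_org}/details__{sanitized}{suffix}"

print(details_repo_id("my-org", "meta-llama/Llama-2-7b", public=True))
# my-org/details__meta-llama__Llama-2-7b
print(details_repo_id("my-org", "meta-llama/Llama-2-7b", public=False))
# my-org/details__meta-llama__Llama-2-7b_private
```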
The EvaluationTracker contains specific loggers for the experiment details (DetailsLogger), metrics (MetricsLogger), and task versions (VersionsLogger), as well as for the general configuration of both each specific task (TaskConfigLogger) and the overall evaluation run (GeneralConfigLogger). It compiles the data from these loggers and writes it to files, which can be published to the Hugging Face hub if requested.
generate_final_dict ( )

Aggregates and returns all the loggers' experiment information in a dictionary. This function should be used to gather and display said information at the end of an evaluation run.

push_to_hub ( )

Pushes the experiment details (all the model predictions for every step) to the hub.

recreate_metadata_card ( repo_id: str )

Fully updates the details repository metadata card for the currently evaluated model.

save ( )

Saves the experiment information and results to files, and to the hub if requested.
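The compile-then-save flow can be mocked with a toy stand-in. The class and method names below mirror the ones described here, but the bodies are simplified illustrations, not lighteval's actual code:

```python
import json
import tempfile
from pathlib import Path

class MiniTracker:
    """Toy stand-in for EvaluationTracker: compiles logger data and saves it."""
    def __init__(self, output_dir: str):
        self.output_dir = Path(output_dir)
        # Stand-ins for MetricsLogger / VersionsLogger contents.
        self.metrics = {"winogrande|0": {"acc": 0.72}}
        self.versions = {"winogrande|0": 0}

    def generate_final_dict(self) -> dict:
        # Aggregate every logger's information into one dictionary.
        return {"results": self.metrics, "versions": self.versions}

    def save(self) -> Path:
        # Write the compiled results to a file in output_dir.
        self.output_dir.mkdir(parents=True, exist_ok=True)
        path = self.output_dir / "results.json"
        path.write_text(json.dumps(self.generate_final_dict(), indent=2))
        return path

tracker = MiniTracker(tempfile.mkdtemp())
saved = tracker.save()
print(json.loads(saved.read_text())["results"]["winogrande|0"]["acc"])  # 0.72
```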
GeneralConfigLogger ( )

Logger for the evaluation parameters.

log_args_info ( num_fewshot_seeds: int, override_batch_size: typing.Optional[int], max_samples: typing.Optional[int], job_id: str, config: Config = None )

Parameters

num_fewshot_seeds (int) — Number of seeds used for few-shot sampling.
override_batch_size (int, optional) — If set, overrides the automatically computed batch size.
max_samples (int, optional) — Maximum number of samples to evaluate per task. Note: This should only be used for debugging purposes!
job_id (str) — Job ID, used to retrieve logs.
config (Config, optional) — Nanotron config, if any.

Logs the information about the arguments passed to the method.

log_end_time ( )

Logs the end time of the evaluation run.

log_model_info ( model_info: ModelInfo )

Logs the model information.
DetailsLogger ( hashes: dict = <factory>, compiled_hashes: dict = <factory>, details: dict = <factory>, compiled_details: dict = <factory>, compiled_details_over_all_tasks: DetailsLogger.CompiledDetailOverAllTasks = <factory> )

Parameters

hashes (dict[str, list[Hash]]) — Maps each task name to the list of all its samples' Hash.
compiled_hashes (dict[str, CompiledHash]) — Maps each task name to its CompiledHash, an aggregation of all the individual sample hashes.
details (dict[str, list[Detail]]) — Maps each task name to the list of its samples' details. Example: winogrande: [sample1_details, sample2_details, …]
compiled_details (dict[str, CompiledDetail]) — Maps each task name to its samples' compiled details.
compiled_details_over_all_tasks (DetailsLogger.CompiledDetailOverAllTasks) — Compiled details aggregated over all tasks.

Logger for the experiment details. Stores and logs experiment information both at the task and at the sample level.
aggregate ( )

Aggregates the details and hashes for each task, then over all tasks. We end up with a dict of compiled details for each task and a dict of compiled details over all tasks.
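The two-level aggregation (per task, then over all tasks) can be illustrated with plain dicts. The field compiled here (mean prediction length) is an invented stand-in for the real compiled fields:

```python
# Per-sample details, keyed by task name (shaped like DetailsLogger.details).
details = {
    "winogrande": [{"pred_len": 3}, {"pred_len": 5}],
    "hellaswag": [{"pred_len": 4}],
}

# Step 1: compile each task's samples into one summary per task.
compiled_details = {
    task: {
        "num_samples": len(samples),
        "mean_pred_len": sum(s["pred_len"] for s in samples) / len(samples),
    }
    for task, samples in details.items()
}

# Step 2: compile one summary over all tasks.
all_samples = [s for samples in details.values() for s in samples]
compiled_over_all_tasks = {
    "num_samples": len(all_samples),
    "mean_pred_len": sum(s["pred_len"] for s in all_samples) / len(all_samples),
}

print(compiled_details["winogrande"]["mean_pred_len"])  # 4.0
print(compiled_over_all_tasks["num_samples"])           # 3
```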
log ( task_name: str, task: LightevalTask, doc: Doc, outputs: list, metrics: dict, llm_as_prompt_judgement: typing.Optional[tuple[str, str]] = None )

Parameters

task_name (str) — Name of the current task.
task (LightevalTask) — Current task.
doc (Doc) — Current sample.
outputs (list) — Model outputs for the current sample.
metrics (dict) — Metric values computed for the current sample.
llm_as_prompt_judgement (tuple[str, str], optional) — Prompt passed to the judge and its judgement, when using an LLM-as-judge metric.

Stores the relevant information for one sample of one task in the total list of samples stored in the DetailsLogger.
MetricsLogger ( metrics_values: dict = <factory>, metric_aggregated: dict = <factory> )

Parameters

metrics_values (dict) — Maps each task to the per-sample values of each of its metrics.
metric_aggregated (dict) — Maps each task to the aggregated value of each of its metrics.

Logs the actual scores for each metric of each task.
aggregate ( task_dict: dict, bootstrap_iters: int = 1000 )

Parameters

task_dict (dict) — Maps task names to their LightevalTask, used to select each metric's aggregation function.
bootstrap_iters (int, defaults to 1000) — Number of bootstrap iterations used when estimating the standard error of the metrics.

Aggregates the metrics for each task, then over all tasks.
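The role of bootstrap_iters can be shown with a generic bootstrap standard-error estimate. Bootstrapping the mean is a standard technique; this sketch is not lighteval's exact aggregation code:

```python
import random
import statistics

def bootstrap_stderr(values, bootstrap_iters=1000, seed=0):
    """Estimate the standard error of the mean by resampling with replacement."""
    rng = random.Random(seed)
    means = []
    for _ in range(bootstrap_iters):
        # Draw a resample of the same size, with replacement, and record its mean.
        resample = [rng.choice(values) for _ in values]
        means.append(sum(resample) / len(resample))
    # The spread of the resampled means estimates the standard error.
    return statistics.stdev(means)

scores = [1, 0, 1, 1, 0, 1, 0, 1]  # per-sample accuracy for one metric
mean = sum(scores) / len(scores)
err = bootstrap_stderr(scores, bootstrap_iters=1000)
print(f"acc = {mean:.3f} +/- {err:.3f}")
```

More iterations give a more stable error estimate at the cost of compute, which is why the parameter is exposed.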
VersionsLogger ( versions: dict = <factory> )

Parameters

versions (dict) — Maps each task name to its version.

Logger of the tasks' versions. Tasks can have a version number/date, which indicates the precise metric definition and dataset version used for an evaluation.
TaskConfigLogger ( tasks_configs: dict = <factory> )

Parameters

tasks_configs (dict) — Maps each task name to its configuration.

Logs the different parameters of the current LightevalTask of interest.