EvaluationTracker

class lighteval.logging.evaluation_tracker.EvaluationTracker

( output_dir: str, save_details: bool = True, push_to_hub: bool = False, push_to_tensorboard: bool = False, hub_results_org: str | None = '', tensorboard_metric_prefix: str = 'eval', public: bool = False, nanotron_run_info: GeneralArgs = None )

Keeps track of the overall evaluation process and relevant information.

The EvaluationTracker contains specific loggers for experiment details (DetailsLogger), metrics (MetricsLogger), and task versions (VersionsLogger), as well as for the general configuration of both the specific task (TaskConfigLogger) and the overall evaluation run (GeneralConfigLogger). It compiles the data from these loggers and writes it to files, which can be published to the Hugging Face Hub if requested.
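
A minimal usage sketch, using only the constructor parameters documented above; the output directory and organization name are hypothetical placeholders.

```python
from lighteval.logging.evaluation_tracker import EvaluationTracker

# Track an evaluation run locally and, optionally, publish it to the Hugging Face Hub.
evaluation_tracker = EvaluationTracker(
    output_dir="./eval_results",   # hypothetical local directory for results and details files
    save_details=True,             # keep the per-sample prediction details
    push_to_hub=False,             # set to True to publish results to the Hub
    hub_results_org="my-org",      # hypothetical organization; only used when pushing to the Hub
    public=False,                  # keep any pushed datasets private
)
```

In a typical run the tracker is handed to the evaluation pipeline, which calls the logging methods below as the evaluation proceeds.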

generate_final_dict

( )

Aggregates and returns all the loggers’ experiment information in a dictionary.

This method should be used to gather and display this information at the end of an evaluation run.
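
A short sketch, assuming the evaluation_tracker instance from the example above has finished logging a run:

```python
# Aggregate everything the individual loggers collected into one dictionary.
final_dict = evaluation_tracker.generate_final_dict()

# Inspect which sections (metrics, versions, configurations, ...) were produced.
print(list(final_dict))
```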

push_to_hub

( date_id: str, details: dict, results_dict: dict )

Pushes the experiment details (all the model predictions for every step) to the hub.
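
This method is usually invoked for you when saving with push_to_hub enabled; the sketch below only illustrates the call shape, with hypothetical placeholder values standing in for the details and results that the tracker’s own loggers normally assemble.

```python
from datetime import datetime

date_id = datetime.now().isoformat().replace(":", "-")  # identifier for this run
details = {"my_task": []}        # placeholder; built by the DetailsLogger in practice
results_dict = {"results": {}}   # placeholder; built by the MetricsLogger in practice

evaluation_tracker.push_to_hub(date_id=date_id, details=details, results_dict=results_dict)
```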

recreate_metadata_card

( repo_id: str )

Parameters

  • repo_id (str) — Details dataset repository path on the hub (org/dataset)

Fully updates the metadata card of the details repository for the currently evaluated model.
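
A one-line sketch; the repo_id below is a hypothetical details dataset path of the org/dataset form described above.

```python
# Regenerate the metadata card of an existing details dataset on the Hub.
evaluation_tracker.recreate_metadata_card(repo_id="my-org/details_my-model")
```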

save

( )

Saves the experiment information and results to files, and to the hub if requested.
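
A minimal sketch, continuing with the tracker instance from the first example:

```python
# Write the collected experiment information and results to output_dir,
# and push them to the Hub if the tracker was created with push_to_hub=True.
evaluation_tracker.save()
```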
