( output_dir: str, save_details: bool = True, push_to_hub: bool = False, push_to_tensorboard: bool = False, hub_results_org: str | None = '', tensorboard_metric_prefix: str = 'eval', public: bool = False, nanotron_run_info: GeneralArgs = None )
Keeps track of the overall evaluation process and relevant information.
The EvaluationTracker contains specific loggers for experiment details (DetailsLogger), metrics (MetricsLogger), and task versions (VersionsLogger), as well as for the general configuration of both the specific task (TaskConfigLogger) and the overall evaluation run (GeneralConfigLogger). It compiles the data from these loggers and writes it to files, which can be pushed to the Hugging Face Hub if requested.
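A minimal construction sketch, assuming the import path used in the lighteval source layout (it may differ between versions); the organisation name and output directory are hypothetical placeholders:

```python
from lighteval.logging.evaluation_tracker import EvaluationTracker

evaluation_tracker = EvaluationTracker(
    output_dir="./results",      # local directory for result and detail files
    save_details=True,           # keep per-sample predictions alongside the metrics
    push_to_hub=False,           # set True (with hub_results_org) to publish results
    hub_results_org="my-org",    # hypothetical organisation name
    public=False,                # keep any pushed repositories private
)
```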
Aggregates and returns all the loggers' experiment information in a dictionary. This function should be used to gather and display said information at the end of an evaluation run.
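A short sketch of gathering this dictionary at the end of a run; the method name `generate_final_dict()` and the `"results"` key are taken from the lighteval source and should be treated as assumptions if your version differs:

```python
# Collect everything the loggers recorded during the run.
final_dict = evaluation_tracker.generate_final_dict()

# Aggregated metrics per task (assumed key layout).
print(final_dict["results"])
```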
Pushes the experiment details (all the model predictions for every step) to the hub.
( repo_id: str )
Fully updates the details repository metadata card for the currently evaluated model.
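The metadata-card update takes the repo_id of the details repository, as shown in the signature above. A hedged sketch, assuming the method name `recreate_metadata_card` from the lighteval source and a hypothetical repository id:

```python
# Rebuild the dataset card of an existing details repository on the Hub.
evaluation_tracker.recreate_metadata_card(repo_id="my-org/details_my-model")
```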
Saves the experiment information and results to files, and to the hub if requested.
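Saving is typically the last step of a run. A minimal sketch, assuming the no-argument `save()` method from the lighteval source (the constructor already carries the output directory and Hub options):

```python
# Write results (and details, if save_details=True) to output_dir;
# when push_to_hub=True the same artifacts are uploaded to the Hub.
evaluation_tracker.save()
```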