---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: HealthPrincipalPegasusLargeModel
  results: []
---

# HealthPrincipalPegasusLargeModel

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0009
- ROUGE-1: 51.2684
- ROUGE-2: 17.3059
- ROUGE-L: 33.9682
- ROUGE-Lsum: 47.9417
- BERTScore Precision: 80.1764
- BERTScore Recall: 82.3653
- BERTScore F1: 81.2525
- BLEU: 0.1263
- Gen Len: 235.1606

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

These settings map one-to-one onto standard `Seq2SeqTrainingArguments` fields; see the configuration sketch at the end of this card.

### Training results

| Training Loss | Epoch  | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | BERTScore Precision | BERTScore Recall | BERTScore F1 | BLEU   | Gen Len  |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------------------:|:----------------:|:------------:|:------:|:--------:|
| 6.4911        | 0.0826 | 100  | 6.0799          | 39.9533 | 10.9966 | 25.4783 | 36.614     | 76.4664             | 80.1259          | 78.2467      | 0.0798 | 235.1606 |
| 5.9704        | 0.1653 | 200  | 5.7449          | 44.7107 | 13.7317 | 29.3188 | 41.7669    | 78.3865             | 81.0296          | 79.6811      | 0.0985 | 235.1606 |
| 5.7855        | 0.2479 | 300  | 5.5879          | 45.798  | 14.5707 | 30.3632 | 42.7848    | 78.6676             | 81.319           | 79.9669      | 0.1056 | 235.1606 |
| 5.679         | 0.3305 | 400  | 5.4498          | 46.4083 | 15.0208 | 30.9687 | 43.4256    | 78.8922             | 81.4895          | 80.1659      | 0.1086 | 235.1606 |
| 5.5132        | 0.4131 | 500  | 5.3107          | 48.8581 | 15.965  | 32.298  | 45.7224    | 79.3363             | 81.749           | 80.5209      | 0.1158 | 235.1606 |
| 5.4462        | 0.4958 | 600  | 5.2137          | 49.3647 | 16.3083 | 32.6541 | 46.1675    | 79.4978             | 81.9205          | 80.6871      | 0.1196 | 235.1606 |
| 5.4811        | 0.5784 | 700  | 5.1333          | 49.6995 | 16.5538 | 33.0588 | 46.5791    | 79.7496             | 82.0476          | 80.8784      | 0.1200 | 235.1606 |
| 5.3819        | 0.6610 | 800  | 5.0847          | 49.9273 | 16.6235 | 33.3042 | 46.6683    | 79.8845             | 82.1754          | 81.01        | 0.1216 | 235.1606 |
| 5.2029        | 0.7436 | 900  | 5.0461          | 50.8755 | 16.9213 | 33.649  | 47.5007    | 80.0059             | 82.2579          | 81.1127      | 0.1236 | 235.1606 |
| 5.2703        | 0.8263 | 1000 | 5.0225          | 51.0187 | 17.1644 | 33.8249 | 47.7395    | 80.1442             | 82.3293          | 81.2185      | 0.1254 | 235.1606 |
| 5.2121        | 0.9089 | 1100 | 5.0116          | 50.8382 | 17.0946 | 33.8088 | 47.5529    | 80.1459             | 82.3297          | 81.2196      | 0.1251 | 235.1606 |
| 5.3128        | 0.9915 | 1200 | 5.0009          | 51.2684 | 17.3059 | 33.9682 | 47.9417    | 80.1764             | 82.3653          | 81.2525      | 0.1263 | 235.1606 |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.1.2
- Datasets 2.2.1
- Tokenizers 0.19.1
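
### Training configuration sketch

The hyperparameters listed above correspond directly to standard `Seq2SeqTrainingArguments` fields in Transformers. A minimal sketch, assuming the usual `Seq2SeqTrainer` workflow; the actual training script, dataset, and preprocessing are not included in this card, and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="HealthPrincipalPegasusLargeModel",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=16,  # total_train_batch_size: 16
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,  # compute ROUGE/BLEU on generated summaries
)
```

`predict_with_generate=True` is an assumption on my part, but it is the standard way to obtain the generation-based metrics (ROUGE, BLEU, BERTScore, Gen Len) reported in the results table.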
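
## How to use

Since PEGASUS is a sequence-to-sequence summarization model, the fine-tuned checkpoint can be loaded with the standard seq2seq classes. A minimal inference sketch; the Hub repo id and the generation settings below are illustrative placeholders, not values taken from this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/HealthPrincipalPegasusLargeModel"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # a health-domain document to summarize
inputs = tokenizer(document, return_tensors="pt", truncation=True)

# Beam search with a generous length budget, since the evaluation runs
# above averaged ~235 generated tokens ("Gen Len").
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```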