---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
base_model: google/flan-t5-base
model-index:
- name: results
  results: []
---

# results

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1549
- Rouge1: 0.0992
- Rouge2: 0.0185
- Rougel: 0.0769
- Rougelsum: 0.0898

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.4344        | 1.0   | 1498  | 3.2018          | 0.1012 | 0.0173 | 0.0788 | 0.0920    |
| 3.4054        | 2.0   | 2996  | 3.1860          | 0.0966 | 0.0176 | 0.0749 | 0.0876    |
| 3.3838        | 3.0   | 4494  | 3.1767          | 0.0984 | 0.0180 | 0.0766 | 0.0892    |
| 3.368         | 4.0   | 5992  | 3.1712          | 0.1013 | 0.0191 | 0.0785 | 0.0915    |
| 3.3534        | 5.0   | 7490  | 3.1647          | 0.1007 | 0.0188 | 0.0781 | 0.0910    |
| 3.3573        | 6.0   | 8988  | 3.1613          | 0.0992 | 0.0189 | 0.0770 | 0.0897    |
| 3.3392        | 7.0   | 10486 | 3.1581          | 0.0999 | 0.0186 | 0.0778 | 0.0904    |
| 3.3327        | 8.0   | 11984 | 3.1568          | 0.1006 | 0.0187 | 0.0778 | 0.0909    |
| 3.3275        | 9.0   | 13482 | 3.1554          | 0.0983 | 0.0181 | 0.0763 | 0.0890    |
| 3.3267        | 10.0  | 14980 | 3.1549          | 0.0992 | 0.0185 | 0.0769 | 0.0898    |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
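
### Loading the adapter (sketch)

Because this is a PEFT adapter trained on top of `google/flan-t5-base` rather than a full model, inference requires loading the base model first and then attaching the adapter. The following is a minimal sketch, assuming the adapter weights are hosted in this repository; `path/to/adapter` is a placeholder to replace with the actual repo id or a local directory.

```python
# Minimal sketch: load the base model and attach this PEFT adapter for inference.
# "path/to/adapter" is a placeholder; point it at this repository or a local
# directory containing the adapter weights.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, "path/to/adapter")
model.eval()

inputs = tokenizer("summarize: <your input text>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```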
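
### Reproducing the training configuration (sketch)

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as shown below. The dataset, tokenization, and PEFT/LoRA configuration are not documented in this card, so this is only the argument mapping, not a complete training script; the evaluation strategy and `predict_with_generate` flag are assumptions inferred from the per-epoch ROUGE scores in the results table.

```python
# Sketch of the training configuration implied by the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="results",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
    predict_with_generate=True,   # assumption: generation needed to compute ROUGE during eval
)
```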