# t5-xl-trivia-gpu-ca2q

This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.6788
- Validation Loss: 1.0558
- Epoch: 1
Detailed metrics from the final evaluation run:
- Eval loss: 0.9859
- BLEU: 22.38
- ROUGE-1: 60.21
- ROUGE-2: 37.25
- ROUGE-L: 52.72
- ROUGE-Lsum: 52.75
- Exact match: 0.0316
- Eval runtime: 1598.43 s (6.438 samples/s, 0.805 steps/s)
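As a usage sketch (not part of the original card): the checkpoint can be loaded with the TensorFlow auto classes from Transformers. The repo id below is taken from this model's Hub page; the prompt format is an assumption, since the fine-tuning data is not documented.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Repo id as listed on the Hub page; adjust if the repo name differs.
model_id = "tilyupo/t5-xl-trivia-ca2q"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# The exact input format used during fine-tuning is not documented;
# a plain trivia-style question is an assumption.
inputs = tokenizer("Who wrote 'Pride and Prejudice'?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```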
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adafactor
  - learning_rate: 0.0002
  - beta_2_decay: -0.8
  - epsilon_1: 1e-30
  - epsilon_2: 0.001
  - clip_threshold: 1.0
  - relative_step: False
  - weight_decay: None
  - clipnorm / global_clipnorm / clipvalue: None
  - use_ema: False (ema_momentum: 0.99, ema_overwrite_frequency: None)
  - jit_compile: True
  - is_legacy_optimizer: False
- training_precision: mixed_bfloat16
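For reference, a minimal sketch of recreating this optimizer and precision setting with the `tf.keras` API in TensorFlow 2.13; only the values above come from the card, the rest of the training setup is assumed:

```python
import tensorflow as tf

# Train in mixed bfloat16, as listed under training_precision.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# Adafactor configured with the hyperparameters from the card;
# relative_step=False means a fixed learning rate of 2e-4 is used.
optimizer = tf.keras.optimizers.Adafactor(
    learning_rate=2e-4,
    beta_2_decay=-0.8,
    epsilon_1=1e-30,
    epsilon_2=1e-3,
    clip_threshold=1.0,
    relative_step=False,
    jit_compile=True,
)
```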
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0151     | 0.9828          | 0     |
| 0.6788     | 1.0558          | 1     |
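Between epochs 0 and 1 the training loss drops from 1.0151 to 0.6788 while the validation loss rises from 0.9828 to 1.0558, which suggests the model begins to overfit after the first epoch.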
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.3
- Tokenizers 0.13.3