# TisaleoGPT2Bot
This model is a fine-tuned version of DeepESP/gpt2-spanish on an unspecified dataset. It achieves the following results on the evaluation set (a recipe for reproducing the loss and perplexity is sketched after the list):
- Loss: 0.5536
- Perplexity: 13564.8888
- Exact Match: 0.6372
- F1 Score: 0.0009
- Coverage Rate: 0.0
- Context Errors: 13888
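The card does not describe how these metrics were computed, and the reported perplexity does not correspond to exp(loss), so it presumably follows a custom definition. For reference, below is a standard recipe for measuring causal-LM loss and perplexity with this checkpoint; since the evaluation data is unspecified, `sample_text` is only a stand-in.

```python
# Standard loss -> perplexity recipe for a causal LM; the evaluation data is
# not documented, so the sample text here is only a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IAMRS23/TisaleoGPT2Bot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

sample_text = "Texto de ejemplo para evaluar el modelo."
inputs = tokenizer(sample_text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    outputs = model(**inputs, labels=inputs["input_ids"])

loss = outputs.loss
print(f"loss = {loss.item():.4f}, perplexity = {torch.exp(loss).item():.2f}")
```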
## Model description
More information needed
## Intended uses & limitations
More information needed
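Pending further documentation, here is a minimal generation sketch. It assumes the checkpoint is hosted on the Hub as IAMRS23/TisaleoGPT2Bot; the prompt and decoding settings are illustrative, not documented defaults for this model.

```python
# Minimal text-generation sketch; prompt and sampling parameters are
# illustrative choices, not settings documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IAMRS23/TisaleoGPT2Bot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hola, ¿en qué puedo ayudarte?"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```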
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
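For reference, these settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch only, since the card does not include the actual training script, dataset, or the custom metric implementations.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Dataset loading, model setup, and the custom metrics (exact match, F1,
# coverage rate, context errors) are omitted: the card does not define them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="TisaleoGPT2Bot",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",       # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                 # "Native AMP" mixed-precision training
)
```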
### Training results
| Training Loss | Epoch | Step | Validation Loss | Perplexity | Exact Match | F1 Score | Coverage Rate | Context Errors |
|:-------------:|:------:|:----:|:---------------:|:----------:|:-----------:|:--------:|:-------------:|:--------------:|
| 1.1224 | 1.7921 | 500 | 0.6410 | 13079.0079 | 0.6372 | 0.0011 | 0.0 | 13888 |
| 0.4790 | 3.5842 | 1000 | 0.5560 | 13364.5681 | 0.6372 | 0.0009 | 0.0 | 13888 |
| 0.3067 | 5.3763 | 1500 | 0.5441 | 13318.3878 | 0.6372 | 0.0009 | 0.0 | 13888 |
| 0.2065 | 7.1685 | 2000 | 0.5526 | 13633.0373 | 0.6372 | 0.0009 | 0.0 | 13888 |
| 0.1561 | 8.9606 | 2500 | 0.5534 | 13571.6876 | 0.6372 | 0.0009 | 0.0 | 13888 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0