---
library_name: transformers
license: mit
base_model: ai4bharat/indic-bert
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Paraphrase_indicBERT_onfull_FT3
  results: []
---

# Paraphrase_indicBERT_onfull_FT3

This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0320
- Accuracy: 0.7890
- F1: 0.7885

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of these settings as `TrainingArguments` appears at the end of this card):
- learning_rate: 1.4638638566821256e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 14

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5707        | 1.0   | 157  | 0.5842          | 0.6895   | 0.6630 |
| 0.4831        | 2.0   | 314  | 0.5444          | 0.7435   | 0.7420 |
| 0.4363        | 3.0   | 471  | 0.4700          | 0.7750   | 0.7730 |
| 0.3548        | 4.0   | 628  | 0.4781          | 0.7765   | 0.7763 |
| 0.2468        | 5.0   | 785  | 0.5416          | 0.7860   | 0.7858 |
| 0.2046        | 6.0   | 942  | 0.6293          | 0.7775   | 0.7768 |
| 0.1270        | 7.0   | 1099 | 0.6558          | 0.7815   | 0.7802 |
| 0.1042        | 8.0   | 1256 | 0.9524          | 0.7420   | 0.7381 |
| 0.0653        | 9.0   | 1413 | 1.0619          | 0.7485   | 0.7450 |
| 0.0253        | 10.0  | 1570 | 1.0320          | 0.7890   | 0.7885 |
| 0.0405        | 11.0  | 1727 | 1.1028          | 0.7795   | 0.7794 |
| 0.0106        | 12.0  | 1884 | 1.1150          | 0.7840   | 0.7840 |
| 0.0098        | 13.0  | 2041 | 1.1362          | 0.7850   | 0.7850 |
| 0.0331        | 14.0  | 2198 | 1.1453          | 0.7850   | 0.7850 |

The headline evaluation results above correspond to the epoch 10 checkpoint, which achieved the highest accuracy.

### Framework versions

- Transformers 4.49.0
- PyTorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
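
### Training configuration sketch

For reference, the hyperparameters listed above can be expressed as `TrainingArguments`. This is a minimal sketch, not the original training script: the `output_dir` and the per-epoch evaluation strategy are assumptions (the latter inferred from the one-row-per-epoch results table).

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above.
# output_dir and eval_strategy are assumptions for illustration.
training_args = TrainingArguments(
    output_dir="Paraphrase_indicBERT_onfull_FT3",
    learning_rate=1.4638638566821256e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",       # AdamW (torch), betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=14,
    eval_strategy="epoch",     # assumption: evaluation once per epoch
)
```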
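
## How to use

A minimal inference sketch, assuming the fine-tuned checkpoint is published on the Hub under the model name above and treats paraphrase detection as binary sentence-pair classification; the repo id and the label mapping are assumptions, not taken from the original card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint is available under this repo id.
model_id = "Paraphrase_indicBERT_onfull_FT3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Paraphrase detection is sentence-pair classification: encode both
# sentences in a single input and take the argmax over the logits.
sentence1 = "The weather is nice today."
sentence2 = "It is a pleasant day outside."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()
print(prediction)  # assumption: 1 = paraphrase, 0 = not a paraphrase
```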