---
library_name: transformers
license: cc-by-4.0
base_model: vesteinn/DanskBERT
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: danskbert_indirect_speech
  results: []
---

# danskbert_indirect_speech

This model is a fine-tuned version of [vesteinn/DanskBERT](https://huggingface.co/vesteinn/DanskBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.6570
- Precision: 0.6586
- Recall: 0.6570
- F1: 0.6525
- Loss: 0.8216

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Accuracy | Precision | Recall | F1     | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------:|:------:|:------:|:---------------:|
| No log        | 1.0   | 13   | 0.5457   | 0.5231    | 0.5457 | 0.4294 | 1.0285          |
| No log        | 2.0   | 26   | 0.4100   | 0.1876    | 0.4100 | 0.2461 | 1.0252          |
| No log        | 3.0   | 39   | 0.5023   | 0.6704    | 0.5023 | 0.4244 | 0.8830          |
| No log        | 4.0   | 52   | 0.4991   | 0.7051    | 0.4991 | 0.4072 | 1.2035          |
| No log        | 5.0   | 65   | 0.4291   | 0.7156    | 0.4291 | 0.2829 | 1.3921          |
| No log        | 6.0   | 78   | 0.5491   | 0.7090    | 0.5491 | 0.4926 | 0.9938          |
| No log        | 7.0   | 91   | 0.5763   | 0.7044    | 0.5763 | 0.5365 | 1.0815          |
| No log        | 8.0   | 104  | 0.6624   | 0.6520    | 0.6624 | 0.6503 | 0.7027          |
| No log        | 9.0   | 117  | 0.6374   | 0.6696    | 0.6374 | 0.6308 | 0.9327          |
| No log        | 10.0  | 130  | 0.6549   | 0.6680    | 0.6549 | 0.6484 | 0.7493          |
| No log        | 11.0  | 143  | 0.6552   | 0.6597    | 0.6552 | 0.6499 | 0.7772          |
| No log        | 12.0  | 156  | 0.6604   | 0.6500    | 0.6604 | 0.6536 | 0.7496          |
| No log        | 13.0  | 169  | 0.6556   | 0.6643    | 0.6556 | 0.6502 | 0.8145          |
| No log        | 14.0  | 182  | 0.6588   | 0.6591    | 0.6588 | 0.6536 | 0.7831          |
| No log        | 15.0  | 195  | 0.6622   | 0.6604    | 0.6622 | 0.6572 | 0.7797          |
| No log        | 16.0  | 208  | 0.6588   | 0.6661    | 0.6588 | 0.6532 | 0.8084          |
| No log        | 17.0  | 221  | 0.6624   | 0.6672    | 0.6624 | 0.6578 | 0.8129          |
| No log        | 18.0  | 234  | 0.6597   | 0.6617    | 0.6597 | 0.6550 | 0.8171          |
| No log        | 18.48 | 240  | 0.6570   | 0.6586    | 0.6570 | 0.6525 | 0.8216          |

### Framework versions

- Transformers 4.48.2
- PyTorch 2.5.1+cu124
- Tokenizers 0.21.0
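### Training configuration sketch

For reference, the hyperparameters listed above map directly onto `TrainingArguments`. This is a sketch rather than the original training script: `output_dir` is a placeholder, and the per-epoch evaluation cadence is inferred from the results table, not stated on the card.

```python
from transformers import TrainingArguments

# Sketch only: output_dir is a placeholder, and the eval cadence is
# inferred from the per-epoch rows in the results table above.
training_args = TrainingArguments(
    output_dir="danskbert_indirect_speech",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 8
    optim="adamw_torch",            # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=20,            # training log stops at step 240 (epoch 18.48)
    fp16=True,                      # native AMP mixed precision
    eval_strategy="epoch",          # assumed from the per-epoch eval rows
)
```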
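## How to use

A minimal inference sketch, assuming the checkpoint was saved as a sequence-classification model. The model path below is a placeholder for wherever the fine-tuned weights are stored, and the label names returned depend on the fine-tuning configuration, which this card does not document.

```python
from transformers import pipeline

# Placeholder model path; substitute the actual Hub id or local directory
# containing this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="danskbert_indirect_speech")

# Danish example: "He said that he would come tomorrow."
print(classifier("Han sagde, at han ville komme i morgen."))
```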