---
library_name: transformers
license: cc-by-4.0
base_model: vesteinn/DanskBERT
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: danskbert_indirect_speech
  results: []
---

# danskbert_indirect_speech

This model is a fine-tuned version of [vesteinn/DanskBERT](https://huggingface.co/vesteinn/DanskBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.6577
- Precision: 0.6982
- Recall: 0.6577
- F1: 0.6486
- Loss: 0.8890

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Accuracy | Precision | Recall | F1     | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------:|:------:|:------:|:---------------:|
| No log        | 1.0   | 13   | 0.5310   | 0.2819    | 0.5310 | 0.3683 | 1.1024          |
| No log        | 2.0   | 26   | 0.5329   | 0.5102    | 0.5329 | 0.3785 | 0.9938          |
| No log        | 3.0   | 39   | 0.4206   | 0.5351    | 0.4206 | 0.2658 | 1.1366          |
| No log        | 4.0   | 52   | 0.6118   | 0.6521    | 0.6118 | 0.5410 | 0.8100          |
| No log        | 5.0   | 65   | 0.6691   | 0.6700    | 0.6691 | 0.6430 | 0.7886          |
| No log        | 6.0   | 78   | 0.6214   | 0.6756    | 0.6214 | 0.5529 | 0.8290          |
| No log        | 7.0   | 91   | 0.6182   | 0.7008    | 0.6182 | 0.5988 | 0.9558          |
| No log        | 8.0   | 104  | 0.4476   | 0.7400    | 0.4476 | 0.3122 | 1.3500          |
| No log        | 9.0   | 117  | 0.5997   | 0.7008    | 0.5997 | 0.5757 | 1.0460          |
| No log        | 10.0  | 130  | 0.5717   | 0.7244    | 0.5717 | 0.5303 | 1.0733          |
| No log        | 11.0  | 143  | 0.6566   | 0.7000    | 0.6566 | 0.6473 | 0.8322          |
| No log        | 12.0  | 156  | 0.6562   | 0.7050    | 0.6562 | 0.6488 | 0.8855          |
| No log        | 13.0  | 169  | 0.6711   | 0.6900    | 0.6711 | 0.6647 | 0.8431          |
| No log        | 14.0  | 182  | 0.6208   | 0.7001    | 0.6208 | 0.6025 | 0.9622          |
| No log        | 15.0  | 195  | 0.6823   | 0.7000    | 0.6823 | 0.6758 | 0.7811          |
| No log        | 16.0  | 208  | 0.6611   | 0.6971    | 0.6611 | 0.6533 | 0.8808          |
| No log        | 17.0  | 221  | 0.6677   | 0.6960    | 0.6677 | 0.6604 | 0.8277          |
| No log        | 18.0  | 234  | 0.6579   | 0.6991    | 0.6579 | 0.6488 | 0.8897          |
| No log        | 18.48 | 240  | 0.6577   | 0.6982    | 0.6577 | 0.6486 | 0.8890          |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
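
For reference, the hyperparameters listed above map onto a `TrainingArguments` configuration roughly as follows. This is a sketch for reproducibility, not the exact training script: the dataset, preprocessing, and evaluation setup are not documented in this card, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="danskbert_indirect_speech",  # placeholder; original run dir unknown
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective total train batch size of 8
    num_train_epochs=20,
    lr_scheduler_type="linear",
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-08
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
)
```

## How to use

A minimal inference sketch, assuming the checkpoint carries a sequence-classification head for detecting indirect speech in Danish. The repository id below is an assumption (substitute the actual Hub path of this model), and the predicted label name comes from whatever `id2label` mapping was saved with the checkpoint, which this card does not document.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "danskbert_indirect_speech"  # placeholder Hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Danish example: "She said that she would come tomorrow."
text = "Hun sagde, at hun ville komme i morgen."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```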