KoModernBERT-chp-05

This model is a fine-tuned version of CocoRoF/KoModernBERT-chp-04 (the training dataset is not specified in this card). It achieves the following results on the evaluation set:

  • Loss: 2.0829
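
If this loss is the usual mean cross-entropy in nats (the card does not say, but that is what the Trainer reports for masked-language-modeling checkpoints like this one appears to be), it corresponds to a perplexity of exp(2.0829) ≈ 8.0. A minimal usage sketch follows; it assumes the checkpoint is published on the Hub as CocoRoF/KoModernBERT-chp-05 and loads as a standard masked LM, neither of which the card confirms:

```python
# Hedged sketch: assumes an MLM checkpoint hosted as
# "CocoRoF/KoModernBERT-chp-05"; neither is confirmed by the card.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "CocoRoF/KoModernBERT-chp-05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill a masked token in a Korean sentence ("The capital of South Korea is [MASK].").
text = f"대한민국의 수도는 {tokenizer.mask_token}입니다."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and decode the highest-scoring prediction.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```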

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 512
  • total_eval_batch_size: 64
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 1.0
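
These values map onto Hugging Face TrainingArguments roughly as below. This is a hedged reconstruction, assuming the standard Trainer API (which the auto-generated card implies); the output_dir name and fp16 flag are assumptions, the latter based on the FP16 tensor type noted at the end of the card:

```python
# Hedged reconstruction of the training configuration listed above.
# Launch with torchrun/accelerate across 8 GPUs so that
# 8 (per device) x 8 (devices) x 8 (accumulation) = 512 total train batch size.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="KoModernBERT-chp-05",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # assumption, based on the FP16 tensor type
)
```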

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|---------------|--------|-------|-----------------|
| 17.9483       | 0.0904 | 5000  | 2.1860          |
| 17.4604       | 0.1808 | 10000 | 2.1737          |
| 17.2613       | 0.2712 | 15000 | 2.1614          |
| 17.3945       | 0.3616 | 20000 | 2.1502          |
| 16.9544       | 0.4520 | 25000 | 2.1386          |
| 16.8142       | 0.5424 | 30000 | 2.1271          |
| 16.7899       | 0.6329 | 35000 | 2.1153          |
| 16.9125       | 0.7233 | 40000 | 2.1080          |
| 16.9540       | 0.8137 | 45000 | 2.1012          |
| 16.6773       | 0.9041 | 50000 | 2.0931          |
| 16.8028       | 0.9945 | 55000 | 2.0829          |

Note that the training-loss column sits on a much larger scale than the validation loss, which suggests a difference in reduction or logging (e.g., a summed rather than mean loss); the validation column is the comparable figure.

Framework versions

  • Transformers 4.48.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
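
To reproduce this environment, the versions above can be pinned (assuming CUDA 12.4-compatible PyTorch wheels are available for your platform): `pip install transformers==4.48.1 torch==2.5.1 datasets==3.2.0 tokenizers==0.21.0`.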
Model size: 153M parameters (Safetensors, FP16)