# Hubert-noisy-cv-kakeiken-C
This model is a fine-tuned version of [rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base) on the ORIGINAL_NOISY_KAKEIKEN - JA dataset. It achieves the following results on the evaluation set:
- Loss: 0.0056
- Wer: 0.9994
- Cer: 0.0878
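
The WER/CER metrics imply a CTC-based speech recognition head. Below is a minimal transcription sketch, assuming a `HubertForCTC` model with a bundled processor; the model class, processor resolution, and file names are assumptions, so verify them against the repository files before use:

```python
# Minimal inference sketch (assumed setup, not confirmed by this card).
import torch
import librosa
from transformers import AutoProcessor, HubertForCTC

model_id = "utakumi/Hubert-noisy-cv-kakeiken-C"
processor = AutoProcessor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id).eval()

# HuBERT models expect 16 kHz mono audio; "sample.wav" is a placeholder.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: best token per frame, then collapse repeats/blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```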
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 12500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
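
A hypothetical reconstruction of these settings as `transformers.TrainingArguments`; the field names follow the Trainer API, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Hubert-noisy-cv-kakeiken-C",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch: 16 * 2 = 32
    lr_scheduler_type="cosine",
    warmup_steps=12_500,
    num_train_epochs=30.0,
    fp16=True,                      # Native AMP mixed precision
)
```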
### Training results
| Training Loss | Epoch   | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 0.1814        | 1.0     | 2732  | 0.0770          | 0.9997 | 0.1012 |
| 0.0374        | 2.0     | 5464  | 0.0266          | 0.9994 | 0.0931 |
| 0.0255        | 3.0     | 8196  | 0.0484          | 0.9997 | 0.0971 |
| 0.0288        | 4.0     | 10928 | 0.0423          | 0.9997 | 0.0975 |
| 0.0338        | 5.0     | 13660 | 0.0334          | 0.9999 | 0.0930 |
| 0.0286        | 6.0     | 16392 | 0.1452          | 1.0    | 0.1171 |
| 0.0274        | 7.0     | 19124 | 0.0618          | 0.9998 | 0.0984 |
| 0.0236        | 8.0     | 21856 | 0.0551          | 0.9998 | 0.1011 |
| 0.0205        | 9.0     | 24588 | 0.0237          | 0.9996 | 0.0923 |
| 0.0182        | 10.0    | 27320 | 0.0226          | 0.9996 | 0.0915 |
| 0.022         | 11.0    | 30052 | 0.0289          | 0.9997 | 0.0923 |
| 0.016         | 12.0    | 32784 | 0.0203          | 0.9996 | 0.0919 |
| 0.0115        | 13.0    | 35516 | 0.0295          | 0.9996 | 0.0947 |
| 0.0191        | 14.0    | 38248 | 0.0145          | 0.9997 | 0.0897 |
| 0.0077        | 15.0    | 40980 | 0.0301          | 0.9996 | 0.0933 |
| 0.0091        | 16.0    | 43712 | 0.0114          | 0.9994 | 0.0892 |
| 0.0085        | 17.0    | 46444 | 0.0107          | 0.9996 | 0.0890 |
| 0.006         | 18.0    | 49176 | 0.0169          | 0.9995 | 0.0905 |
| 0.0071        | 19.0    | 51908 | 0.0095          | 0.9994 | 0.0886 |
| 0.0061        | 20.0    | 54640 | 0.0077          | 0.9995 | 0.0885 |
| 0.0048        | 21.0    | 57372 | 0.0110          | 0.9995 | 0.0890 |
| 0.003         | 22.0    | 60104 | 0.0073          | 0.9995 | 0.0882 |
| 0.0022        | 23.0    | 62836 | 0.0067          | 0.9994 | 0.0880 |
| 0.0019        | 24.0    | 65568 | 0.0074          | 0.9994 | 0.0881 |
| 0.0018        | 25.0    | 68300 | 0.0059          | 0.9994 | 0.0878 |
| 0.0018        | 26.0    | 71032 | 0.0057          | 0.9994 | 0.0879 |
| 0.0016        | 27.0    | 73764 | 0.0056          | 0.9994 | 0.0877 |
| 0.0023        | 28.0    | 76496 | 0.0058          | 0.9994 | 0.0878 |
| 0.0015        | 29.0    | 79228 | 0.0056          | 0.9994 | 0.0878 |
| 0.0013        | 29.9892 | 81930 | 0.0056          | 0.9994 | 0.0878 |
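
A WER this close to 1.0 alongside a low CER is expected for Japanese references written without whitespace: each unsegmented sentence counts as a single "word", so any character error fails the whole word while CER stays informative. A sketch of the usual metric computation with the `evaluate` library (this setup is an assumption, not taken from the training script):

```python
# Assumed metric setup; the training script's exact computation may differ.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

refs = ["今日はいい天気です"]  # reference transcript, no spaces
hyps = ["今日はいい天気てす"]  # hypothesis with one character error

# Without whitespace each sentence is one "word", so one character error
# makes the whole word wrong: WER saturates at 1.0, CER stays ~0.11.
print(wer_metric.compute(references=refs, predictions=hyps))  # 1.0
print(cer_metric.compute(references=refs, predictions=hyps))  # ~0.11
```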
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3