romainjeff committed
Commit a78e283 · verified · 1 Parent(s): 708991a

End of training

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: answerdotai/ModernBERT-base
+base_model: google-bert/bert-base-uncased
 tags:
 - generated_from_trainer
 metrics:
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # modernbert-llm-router
 
-This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
+This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: nan
-- F1: 0.0003
+- Loss: 0.2868
+- F1: 0.9285
 
 ## Model description
 
@@ -38,29 +38,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-06
-- train_batch_size: 16
+- learning_rate: 5e-05
+- train_batch_size: 32
 - eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
-- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 2
-- mixed_precision_training: Native AMP
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 0.0           | 1.0   | 313  | nan             | 0.0003 |
-| 0.0           | 2.0   | 626  | nan             | 0.0003 |
+| 1.5656        | 1.0   | 313  | 1.1336          | 0.7859 |
+| 0.5183        | 2.0   | 626  | 0.4624          | 0.9075 |
+| 0.2383        | 3.0   | 939  | 0.3468          | 0.9185 |
+| 0.1281        | 4.0   | 1252 | 0.2912          | 0.9302 |
+| 0.0714        | 5.0   | 1565 | 0.2868          | 0.9285 |
 
 
 ### Framework versions
 
 - Transformers 4.49.0
-- Pytorch 2.4.0+cu121
+- Pytorch 2.4.1+cu121
 - Datasets 3.1.0
 - Tokenizers 0.21.0
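
For reference, the updated hyperparameters map onto a standard `transformers` Trainer setup roughly as follows. This is a minimal sketch, not the author's training script: the dataset, `num_labels`, tokenization, and F1 averaging mode are assumptions (the card itself calls the dataset unknown); only the hyperparameter values are taken from the diff above.

```python
# Sketch of a Trainer configuration matching the updated card.
# Hypothetical pieces are marked; hyperparameter values come from the diff.
import numpy as np
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "google-bert/bert-base-uncased"  # new base_model in the diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=2  # num_labels is a placeholder; the card does not say
)

# Hypothetical stand-in data; the card's actual dataset is unknown.
raw = Dataset.from_dict({"text": ["example prompt"] * 64, "label": [0, 1] * 32})
encoded = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # The card reports a single F1 score; the averaging mode is an assumption.
    return {"f1": f1_score(labels, preds, average="weighted")}

args = TrainingArguments(
    output_dir="modernbert-llm-router",
    learning_rate=5e-5,              # learning_rate: 5e-05
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    optim="adamw_torch_fused",       # OptimizerNames.ADAMW_TORCH_FUSED (needs CUDA)
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=5,              # num_epochs: 5
    eval_strategy="epoch",           # assumed: metrics are logged once per epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,  # stand-in; a real run would hold out a separate split
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

Consistent with this configuration, the 313 steps logged per epoch at a train batch size of 32 imply roughly 10,000 training examples. The AdamW betas and epsilon in the card are the `TrainingArguments` defaults, so they need no explicit setting.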
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:777175c6408de31f8f6dbbaf7f70c0bbf165cf6dc042ca86c0342a25ffaff407
+oid sha256:3b24c72d885ade23affe7ee68934f87701a3587537ca1633948b41b08eb99a6f
 size 438189348
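
The binary files in this commit are stored via Git LFS, so the diff only touches the small pointer file: `oid` is the SHA-256 of the actual payload and `size` its length in bytes. That makes a download easy to verify; a minimal sketch, assuming the new weights have already been fetched to a local `model.safetensors`:

```python
# Minimal integrity check for a Git LFS-tracked file: recompute the SHA-256
# and byte size and compare them with the pointer's `oid`/`size` fields.
# The local path is an assumption; the expected values come from the diff above.
import hashlib
from pathlib import Path

EXPECTED_OID = "3b24c72d885ade23affe7ee68934f87701a3587537ca1633948b41b08eb99a6f"
EXPECTED_SIZE = 438189348

path = Path("model.safetensors")
digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("pointer and payload agree")
```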
runs/Feb26_19-51-07_modal/events.out.tfevents.1740599469.modal.2.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab48a3a5f07955f6f19f11f13a8f50eb26782a72c6c96c4b719b36178b2764d1
-size 14830
+oid sha256:17691f908aa7404b5b6a987641f1e21b5b5b1a4d321011ef440b3b0427d93237
+size 15501
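
The tfevents file holds the TensorBoard scalars behind the results table above. With the run directory synced locally, they can be read back through TensorBoard's event accumulator; a sketch, where the tag names (`eval/f1` and so on) are assumptions based on the Trainer's usual logging rather than anything stated in this commit:

```python
# Sketch: read scalars back out of the updated tfevents file.
# Tag names like "eval/f1" are assumptions; print ea.Tags()["scalars"]
# to see what the file actually contains.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Feb26_19-51-07_modal")  # assumed local copy of the run dir
ea.Reload()

print(ea.Tags()["scalars"])  # available scalar tags
for event in ea.Scalars("eval/f1"):
    print(event.step, event.value)
```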