bobbyw committed
Commit 6102bb3 · verified · 1 Parent(s): 778db01

End of training

Files changed (1)
  1. README.md +8 -15
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
-license: apache-2.0
-base_model: distilbert-base-uncased
+license: mit
+base_model: microsoft/deberta-v3-small
 tags:
 - generated_from_trainer
 model-index:
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->

 # copilot_relex_v1

-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.0067

@@ -40,27 +40,20 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 3

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log | 1.0 | 172 | 0.0133 |
-| No log | 2.0 | 344 | 0.0106 |
-| 0.0204 | 3.0 | 516 | 0.0090 |
-| 0.0204 | 4.0 | 688 | 0.0078 |
-| 0.0204 | 5.0 | 860 | 0.0072 |
-| 0.0125 | 6.0 | 1032 | 0.0071 |
-| 0.0125 | 7.0 | 1204 | 0.0072 |
-| 0.0125 | 8.0 | 1376 | 0.0068 |
-| 0.0113 | 9.0 | 1548 | 0.0066 |
-| 0.0113 | 10.0 | 1720 | 0.0067 |
+| No log | 1.0 | 172 | 0.0072 |
+| No log | 2.0 | 344 | 0.0068 |
+| 0.0129 | 3.0 | 516 | 0.0067 |


 ### Framework versions

 - Transformers 4.35.2
 - Pytorch 2.1.0+cu121
-- Datasets 2.16.1
+- Datasets 2.17.0
 - Tokenizers 0.15.1
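
For reference, a minimal sketch of loading the checkpoint described by this card for inference with the Transformers release listed above. The repo id (`bobbyw/copilot_relex_v1`) and the sequence-classification head are assumptions inferred from the commit author and model name; the card does not state the task or dataset, so adjust both to match the actual configuration.

```python
# Minimal sketch; repo id and task head are assumptions, not stated in the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "bobbyw/copilot_relex_v1"  # assumed from the commit author and model name

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # DeBERTa-v3 tokenizers need sentencepiece installed
model = AutoModelForSequenceClassification.from_pretrained(repo_id)  # head type assumed
model.eval()

# Run a single example through the fine-tuned model.
inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.argmax(dim=-1).item())  # index of the highest-scoring label
```

If the head type is wrong, `AutoModel.from_pretrained(repo_id)` will still load the base encoder weights, and inspecting `model.config.architectures` is a quick way to confirm which head the checkpoint actually carries.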