baelamri committed
Commit 9eb99bb · verified · 1 Parent(s): 1b74bd9

exec_date=2025-05-27T12:22:36.961936 -- model_name=microsoft/mpnet-base -- dataset_path=sentence-transformers/all-nli -- dataset_name=triplet -- train_size=1000
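The commit message above packs the run configuration into ` -- `-separated `key=value` fields. A minimal sketch of pulling those fields apart (the separator convention is inferred from this one message, not from any documented format):

```python
# Run-configuration string copied from the commit message above.
meta = (
    "exec_date=2025-05-27T12:22:36.961936 -- model_name=microsoft/mpnet-base"
    " -- dataset_path=sentence-transformers/all-nli -- dataset_name=triplet"
    " -- train_size=1000"
)

# Split on the " -- " separator, then on the first "=" of each field.
params = dict(field.split("=", 1) for field in meta.split(" -- "))

print(params["model_name"])  # microsoft/mpnet-base
print(params["train_size"])  # 1000
```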

Files changed (2)
  1. README.md +31 -16
  2. model.safetensors +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
  - nli
  - tutorial
  - generated_from_trainer
- - dataset_size:10000
+ - dataset_size:1000
  - loss:MultipleNegativesRankingLoss
  base_model: microsoft/mpnet-base
  widget:
@@ -47,6 +47,16 @@ pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  metrics:
  - cosine_accuracy
+ co2_eq_emissions:
+ emissions: 0.006544502824422758
+ energy_consumed: 0.00011678478960050603
+ source: codecarbon
+ training_type: fine-tuning
+ on_cloud: false
+ cpu_model: Apple M4
+ ram_total_size: 24.0
+ hours_used: 0.02
+ hardware_used: Apple M4
  model-index:
  - name: microsoft/mpnet-base
  results:
@@ -58,7 +68,7 @@ model-index:
  type: all-nli-eval
  metrics:
  - type: cosine_accuracy
- value: 0.7859963774681091
+ value: 0.621051013469696
  name: Cosine Accuracy
  - task:
  type: triplet
@@ -68,7 +78,7 @@ model-index:
  type: all-nli-test
  metrics:
  - type: cosine_accuracy
- value: 0.8060221076011658
+ value: 0.8116205334663391
  name: Cosine Accuracy
  ---

@@ -171,7 +181,7 @@ You can finetune this model on your own dataset.

  | Metric | all-nli-eval | all-nli-test |
  |:--------------------|:-------------|:-------------|
- | **cosine_accuracy** | **0.786** | **0.806** |
+ | **cosine_accuracy** | **0.6211** | **0.8116** |

  <!--
  ## Bias, Risks and Limitations
@@ -192,7 +202,7 @@ You can finetune this model on your own dataset.
  #### all-nli

  * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
- * Size: 10,000 training samples
+ * Size: 1,000 training samples
  * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
  | | anchor | positive | negative |
@@ -370,17 +380,22 @@ You can finetune this model on your own dataset.
  </details>

  ### Training Logs
- | Epoch | Step | Training Loss | Validation Loss | all-nli-eval_cosine_accuracy | all-nli-test_cosine_accuracy |
- |:-----:|:----:|:-------------:|:---------------:|:----------------------------:|:----------------------------:|
- | -1 | -1 | - | - | 0.6211 | - |
- | 0.16 | 100 | 1.7157 | 0.7149 | 0.8109 | - |
- | 0.32 | 200 | 0.831 | 0.7081 | 0.8164 | - |
- | 0.48 | 300 | 0.7266 | 0.8863 | 0.8056 | - |
- | 0.64 | 400 | 0.6932 | 1.0048 | 0.7942 | - |
- | 0.8 | 500 | 0.5364 | 0.9695 | 0.7852 | - |
- | 0.96 | 600 | 0.3202 | 0.9368 | 0.7860 | - |
- | -1 | -1 | - | - | - | 0.8060 |
-
+ | Epoch | Step | all-nli-eval_cosine_accuracy | all-nli-test_cosine_accuracy |
+ |:-----:|:----:|:----------------------------:|:----------------------------:|
+ | -1 | -1 | 0.6211 | 0.8116 |
+
+
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 0.000 kWh
+ - **Carbon Emitted**: 0.000 kg of CO2
+ - **Hours Used**: 0.02 hours
+
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: Apple M4
+ - **CPU Model**: Apple M4
+ - **RAM Size**: 24.00 GB

  ### Framework Versions
  - Python: 3.12.4
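For reference on the `cosine_accuracy` values reported throughout this diff: in sentence-transformers' triplet evaluation, a triplet counts as correct when the anchor embedding is closer (by cosine similarity) to the positive than to the negative. A toy NumPy sketch with made-up 2-D embeddings (not the library's implementation):

```python
import numpy as np

def cosine_accuracy(anchors, positives, negatives):
    """Fraction of triplets with cos(anchor, positive) > cos(anchor, negative)."""
    def row_cos(x, y):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        y = y / np.linalg.norm(y, axis=1, keepdims=True)
        return (x * y).sum(axis=1)
    return float(np.mean(row_cos(anchors, positives) > row_cos(anchors, negatives)))

# Hypothetical embeddings: each anchor points roughly at its positive.
anchors   = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
positives = np.array([[0.9, 0.1], [0.1, 0.9], [1.0, 0.9]])
negatives = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.2]])
print(cosine_accuracy(anchors, positives, negatives))  # 1.0
```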
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a2d3e226416df060d8221227a5beaaf77dd7ca6f690859f914d91e790f25695b
+ oid sha256:876e98ba961ccbb0b15e849ece50e7802b9ae9d2226f48813367d94455a9d5af
  size 437967672
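The `model.safetensors` entry above is a Git LFS pointer file, not the weights themselves: three `key value` lines naming the spec version, the SHA-256 of the real blob, and its size in bytes. A small sketch of reading one (a hand-rolled parser for illustration, not the `git lfs` tool):

```python
# Pointer text as it appears after this commit (new oid).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:876e98ba961ccbb0b15e849ece50e7802b9ae9d2226f48813367d94455a9d5af
size 437967672
"""

# Each line is "key value"; the oid is "<algorithm>:<hex digest>".
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo)                         # sha256
print(int(fields["size"]) / 2**20)  # blob size in MiB (~417.7)
```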