Tien09 committed on
Commit 088cbec · verified · 1 Parent(s): a1676ee

Add new SentenceTransformer model
1_Pooling/config.json ADDED
{
  "word_embedding_dimension": 128,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
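The configuration above enables mean-token pooling: the model averages the token embeddings (ignoring padding) to produce one 128-dimensional sentence vector. A minimal NumPy sketch of that computation, using toy data rather than the model's actual internals:

```python
import numpy as np

def mean_pooling(token_embeddings, attention_mask):
    """Average token embeddings over the sequence axis, ignoring padding positions."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid division by zero
    return summed / counts

# Toy batch: 1 sentence, 3 token slots (the last is padding), 128-dim embeddings
tokens = np.ones((1, 3, 128))
tokens[0, 1] = 3.0                # second token differs from the first
mask = np.array([[1, 1, 0]])      # padding position excluded from the mean
sentence_embedding = mean_pooling(tokens, mask)
print(sentence_embedding.shape)   # (1, 128)
print(sentence_embedding[0, 0])   # 2.0, the mean of 1.0 and 3.0
```

Because the padding slot is masked out, only the two real tokens contribute to the average.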
README.md ADDED
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8959
- loss:CoSENTLoss
base_model: prajjwal1/bert-tiny
widget:
- source_sentence: 'This card is treated as a Normal Monster while face-up on the
    field or in the GY. While this card is face-up on the field, you can Normal Summon
    it to have it become an Effect Monster with this effect. During your End Phase:
    You can target 1 Equip Spell Card in your GY; add that target to your hand. You
    can only use this effect of "Knight Day Grepher" once per turn.'
  sentences:
  - 'You can Ritual Summon this card with "Primal Cry". Once per turn: You can reveal
    1 monster in your hand, then target 1 face-up monster on the field; that target''s
    Level becomes equal to the Level the revealed monster had, until the end of this
    turn. Once per turn, if another monster is Tributed from your hand or field (except
    during the Damage Step): You can target 1 monster in your GY; add it to your hand.'
  - 'If you control an Insect monster: You can Special Summon this card from your
    hand. During your Main Phase: You can inflict 200 damage to your opponent for
    each "Battlewasp - Pin the Bullseye" you control. You can only use each effect
    of "Battlewasp - Pin the Bullseye" once per turn.'
  - "Add 1 \"Great Sand Sea - Gold Golgonda\" from your Deck to your hand. If \"Great\
    \ Sand Sea - Gold Golgonda\" is in your Field Zone, you can apply this effect\
    \ instead.\r\n● Add 1 \"Springans\" monster from your Deck to your hand, and if\
    \ you do, send 1 \"Springans\" monster from your Deck to the GY.\r\nYou can only\
    \ activate 1 \"Springans Watch\" per turn."
- source_sentence: 'You can target 1 face-up card you control; destroy it, and if
    you do, Special Summon 1 "Zoodiac" monster from your Deck. You can only use this
    effect of "Zoodiac Barrage" once per turn. If this card is destroyed by a card
    effect and sent to the GY: You can target 1 "Zoodiac" Xyz Monster you control;
    attach this card from your GY to that Xyz Monster as Xyz Material.'
  sentences:
  - While you control a monster(s), you take no Battle Damage.
  - 1 Dragon-Type Tuner + 1 or more non-Tuner Winged Beast-Type monsters If this card
    attacks or is attacked, during the Damage Step you can remove from play 1 (only)
    Winged Beast-Type monster from your GY, to have this card gain the ATK of that
    monster until the End Phase.
  - 'Activate this card by discarding 1 card: Special Summon as many copies of "Harpie
    Lady" as possible from your GY. When this face-up card leaves the field, destroy
    those monsters.'
- source_sentence: Fusion Summon 1 Fusion Monster from your Extra Deck, using monsters
    from your hand or your side of the field as Fusion Materials.
  sentences:
  - '1 "Elemental HERO" monster + 1 WIND monster

    Must be Fusion Summoned and cannot be Special Summoned by other ways. When this
    card is Fusion Summoned: Halve the ATK and DEF of all face-up monsters your opponent
    controls.'
  - When a "roid" monster you control is destroyed by battle and sent to the GY, you
    can return that monster to its owner's hand.
  - Target 1 "Raidraptor" monster you control; Special Summon 1 monster with the same
    name as that monster on the field from your hand or Deck in Defense Position.
    You can only activate 1 "Raidraptor - Call" per turn. You cannot Special Summon
    monsters during the turn you activate this card, except "Raidraptor" monsters.
- source_sentence: '2 Cyberse monsters If this card is Link Summoned: You can add
    1 "Cynet Fusion" from your Deck to your hand. If a monster(s) is Special Summoned
    to a zone(s) this card points to (except during the Damage Step): You can target
    1 Level 4 or lower Cyberse monster in your GY; Special Summon it, but negate its
    effects, also you cannot Special Summon monsters from the Extra Deck for the rest
    of this turn, except Fusion Monsters. You can only use each effect of "Clock Spartoi"
    once per turn.'
  sentences:
  - A zombie shark that can deliver its lethal curse with a spell.
  - Pay 500 Life Points. Destroy a face-up "Blaze Accelerator" card you control and
    destroy all monsters on the field. Then, Special Summon 1 "Wild Fire Token" (Pyro-Type/FIRE/LEVEL
    3/ATK 1000/DEF 1000) in Attack Position. Also, you cannot declare an attack this
    turn.
  - You can banish 1 "Virtual World" card from your GY, then target 1 face-up monster
    on the field; negate its effects until the end of this turn (even if this card
    leaves the field). You can banish this card from your GY; add 1 "Virtual World"
    monster from your Deck to your hand, then send 1 card from your hand to the GY.
    You can only use each effect of "Virtual World Gate - Qinglong" once per turn.
- source_sentence: 'When this card destroys an opponent''s monster by battle and sends
    it to the GY: You can discard 1 WATER monster to the GY; Special Summon 1 "Mermail"
    monster from your Deck in face-up Defense Position. You can only use the effect
    of "Mermail Abyssnose" once per turn.'
  sentences:
  - 'If "Obsidim, the Ashened City" is in the Field Zone, you can Special Summon this
    card (from your hand). You can only Special Summon "King of the Ashened City"
    once per turn this way. During your Main Phase: You can Special Summon 1 "Ashened"
    monster from your hand, except "King of the Ashened City", or if your opponent
    controls a monster with 2800 or more ATK, you can Special Summon it from your
    Deck instead. You can only use this effect of "King of the Ashened City" once
    per turn.'
  - 'You can Tribute this card; Special Summon 1 Level 7 or lower "Red-Eyes" monster
    from your Deck, except "Red-Eyes B. Chick". If this card is in your GY: You can
    target 1 Level 7 or lower "Red-Eyes" monster in your GY, except "Red-Eyes B. Chick";
    shuffle it into the Deck, and if you do, add this card to your hand. You can only
    use 1 "The Black Stone of Legend" effect per turn, and only once that turn.'
  - Activate only when your opponent declares a direct attack and you control no monsters.
    Special Summon 1 Level 4 or lower Beast-Type monster from your hand in face-up
    Attack Position.
datasets:
- Tien09/pair_similarity_new_1
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on prajjwal1/bert-tiny

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [pair_similarity_new_1](https://huggingface.co/datasets/Tien09/pair_similarity_new_1) dataset. It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) <!-- at revision 6f75de8b60a9f8a2fdf7b69cbd86d9e64bcb3837 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [pair_similarity_new_1](https://huggingface.co/datasets/Tien09/pair_similarity_new_1)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_2")
# Run inference
sentences = [
    'When this card destroys an opponent\'s monster by battle and sends it to the GY: You can discard 1 WATER monster to the GY; Special Summon 1 "Mermail" monster from your Deck in face-up Defense Position. You can only use the effect of "Mermail Abyssnose" once per turn.',
    'If "Obsidim, the Ashened City" is in the Field Zone, you can Special Summon this card (from your hand). You can only Special Summon "King of the Ashened City" once per turn this way. During your Main Phase: You can Special Summon 1 "Ashened" monster from your hand, except "King of the Ashened City", or if your opponent controls a monster with 2800 or more ATK, you can Special Summon it from your Deck instead. You can only use this effect of "King of the Ashened City" once per turn.',
    'Activate only when your opponent declares a direct attack and you control no monsters. Special Summon 1 Level 4 or lower Beast-Type monster from your hand in face-up Attack Position.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
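With `similarity_fn_name` set to `cosine` (see `config_sentence_transformers.json`), `model.similarity` scores each pair of embeddings by cosine similarity. A self-contained NumPy sketch of that computation, using random vectors as a stand-in for `model.encode(...)` output rather than the library's actual implementation:

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Pairwise cosine similarity between two batches of embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalize rows
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T                                    # dot products of unit vectors

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 128))                # stand-in for 3 encoded sentences
similarities = cosine_similarity_matrix(embeddings, embeddings)
print(similarities.shape)                             # (3, 3)
print(np.allclose(np.diag(similarities), 1.0))        # each vector matches itself: True
```

The diagonal is always 1.0 because every embedding has cosine similarity 1 with itself; the matrix is symmetric for the same reason.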

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### pair_similarity_new_1

* Dataset: [pair_similarity_new_1](https://huggingface.co/datasets/Tien09/pair_similarity_new_1) at [a250d43](https://huggingface.co/datasets/Tien09/pair_similarity_new_1/tree/a250d43cce6282901c98621038f78ed7bd1b8b2c)
* Size: 8,959 training samples
* Columns: <code>effect_text</code>, <code>score</code>, and <code>effect_text2</code>
* Approximate statistics based on the first 1000 samples:
  |         | effect_text | score | effect_text2 |
  |:--------|:------------|:------|:-------------|
  | type    | string | float | string |
  | details | <ul><li>min: 9 tokens</li><li>mean: 73.57 tokens</li><li>max: 204 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.43</li><li>max: 1.0</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 71.17 tokens</li><li>max: 193 tokens</li></ul> |
* Samples:
  | effect_text | score | effect_text2 |
  |:------------|:------|:-------------|
  | <code>When your opponent's monster attacks a face-up Level 4 or lower Toon Monster on your side of the field, you can make the attack a direct attack to your Life Points.</code> | <code>0.0</code> | <code>During either player's Main Phase: Special Summon this card as a Normal Monster (Reptile-Type/EARTH/Level 4/ATK 1600/DEF 1800). (This card is also still a Trap Card.)</code> |
  | <code>When your opponent Special Summons a monster, you can discard 1 card to Special Summon this card from your hand. Your opponent cannot remove cards from play.</code> | <code>1.0</code> | <code>Activate this card by discarding 1 monster, then target 1 monster in your GY whose Level is lower than the discarded monster's original Level; Special Summon it and equip it with this card. The equipped monster has its effects negated. You can only activate 1 "Overdone Burial" per turn.</code> |
  | <code>Mystical Elf" + "Curtain of the Dark Ones</code> | <code>0.0</code> | <code>A lost dog that wandered off 1000 years ago. He's still waiting for his master to come for him.</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
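CoSENTLoss trains the cosine similarities to rank correctly: for every pair of training examples where one has a higher gold score than the other, it penalizes the lower-scored example's cosine similarity exceeding the higher-scored one's, with `scale` sharpening the penalty. A rough NumPy sketch of the objective (an illustration of the formula, not the sentence-transformers implementation):

```python
import numpy as np

def cosent_loss(cos_sims, labels, scale=20.0):
    """CoSENT objective: log(1 + sum of exp(scale * (cos_i - cos_j)))
    over all pairs (i, j) where labels[i] < labels[j]."""
    cos_sims = np.asarray(cos_sims, dtype=float) * scale
    labels = np.asarray(labels, dtype=float)
    # differences cos_i - cos_j for every mis-orderable pair
    diffs = [cos_sims[i] - cos_sims[j]
             for i in range(len(labels)) for j in range(len(labels))
             if labels[i] < labels[j]]
    return float(np.log(1.0 + np.sum(np.exp(diffs))))

# A correctly ordered batch (high-score pair already more similar) gives a low loss
good = cosent_loss([0.9, 0.1], [1.0, 0.0])
bad = cosent_loss([0.1, 0.9], [1.0, 0.0])
print(good < bad)  # True
```

When the batch is perfectly ordered the exponent terms are large and negative, so the loss approaches log(1) = 0; inverted orderings blow the terms up and the loss grows roughly linearly in `scale`.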

### Evaluation Dataset

#### pair_similarity_new_1

* Dataset: [pair_similarity_new_1](https://huggingface.co/datasets/Tien09/pair_similarity_new_1) at [a250d43](https://huggingface.co/datasets/Tien09/pair_similarity_new_1/tree/a250d43cce6282901c98621038f78ed7bd1b8b2c)
* Size: 1,920 evaluation samples
* Columns: <code>effect_text</code>, <code>score</code>, and <code>effect_text2</code>
* Approximate statistics based on the first 1000 samples:
  |         | effect_text | score | effect_text2 |
  |:--------|:------------|:------|:-------------|
  | type    | string | float | string |
  | details | <ul><li>min: 6 tokens</li><li>mean: 72.29 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.43</li><li>max: 1.0</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 71.47 tokens</li><li>max: 199 tokens</li></ul> |
* Samples:
  | effect_text | score | effect_text2 |
  |:------------|:------|:-------------|
  | <code>2+ Level 4 monsters<br>This Xyz Summoned card gains 500 ATK x the total Link Rating of Link Monsters linked to this card. You can detach 2 materials from this card, then target 1 4 Cyberse Link Monster in your GY; Special Summon it to your field so it points to this card, also you cannot Special Summon other monsters or attack directly for the rest of this turn.</code> | <code>1.0</code> | <code>3 Level 4 monsters Once per turn, you can also Xyz Summon "Zoodiac Tigermortar" by using 1 "Zoodiac" monster you control with a different name as Xyz Material. (If you used an Xyz Monster, any Xyz Materials attached to it also become Xyz Materials on this card.) This card gains ATK and DEF equal to the ATK and DEF of all "Zoodiac" monsters attached to it as Materials. Once per turn: You can detach 1 Xyz Material from this card, then target 1 Xyz Monster you control and 1 "Zoodiac" monster in your GY; attach that "Zoodiac" monster to that Xyz Monster as Xyz Material.</code> |
  | <code>1 Tuner + 1 or more non-Tuner Pendulum Monsters Once per turn: You can target 1 Pendulum Monster on the field or 1 card in the Pendulum Zone; destroy it, and if you do, shuffle 1 card on the field into the Deck. Once per turn: You can Special Summon 1 "Dracoslayer" monster from your Deck in Defense Position, but it cannot be used as a Synchro Material for a Summon.</code> | <code>0.0</code> | <code>If this card is Special Summoned: You can add 1 "Performapal" monster from your Deck to your hand, except a Pendulum Monster. You can only use this effect of "Performapal Longphone Bull" once per turn.</code> |
  | <code>If you control an Illusion or Spellcaster monster: Add 1 "White Forest" monster from your Deck to your hand. If this card is sent to the GY to activate a monster effect: You can Set this card. You can only use each effect of "Tales of the White Forest" once per turn.</code> | <code>0.25</code> | <code>Cannot be destroyed by your opponent's card effects while "Multi-Universe" is on the field. You can only use each of the following effects of "Krishnerd Witch" once per turn. If a card(s) in the Field Zone leaves the field by card effect (except during the Damage Step): You can Special Summon this card from your hand. When a Field Spell that is already face-up on the field activates its effect (Quick Effect): You can shuffle 1 of your monsters that is banished or in your GY into the Deck, or if that monster mentions that Field Spell, you can Special Summon it instead.</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
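These hyperparameters determine the length of the run: 8,959 samples at a batch size of 16 give 560 optimizer steps per epoch, so 5 epochs come to 2,800 total steps, and `warmup_ratio: 0.1` corresponds to 280 linear-warmup steps. A quick arithmetic check (assuming a single device and no gradient accumulation, which matches the settings above):

```python
import math

samples, batch_size, epochs, warmup_ratio = 8959, 16, 5, 0.1

# dataloader_drop_last is False, so the final partial batch still counts as a step
steps_per_epoch = math.ceil(samples / batch_size)   # 560
total_steps = steps_per_epoch * epochs              # 2800
warmup_steps = int(total_steps * warmup_ratio)      # 280

print(steps_per_epoch, total_steps, warmup_steps)   # 560 2800 280
```

The 2,800-step total agrees with the final row of the training log in this card.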

### Training Logs
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1786 | 100  | 4.5375        | 4.3247          |
| 0.3571 | 200  | 4.3729        | 4.2788          |
| 0.5357 | 300  | 4.2836        | 4.2434          |
| 0.7143 | 400  | 4.243         | 4.2069          |
| 0.8929 | 500  | 4.2876        | 4.1737          |
| 1.0714 | 600  | 4.2072        | 4.1358          |
| 1.25   | 700  | 4.2417        | 4.0977          |
| 1.4286 | 800  | 4.0938        | 4.0738          |
| 1.6071 | 900  | 4.016         | 4.0435          |
| 1.7857 | 1000 | 4.0259        | 4.0422          |
| 1.9643 | 1100 | 4.0137        | 4.0215          |
| 2.1429 | 1200 | 4.0241        | 4.0208          |
| 2.3214 | 1300 | 3.9952        | 3.9968          |
| 2.5    | 1400 | 3.9033        | 3.9860          |
| 2.6786 | 1500 | 3.8599        | 3.9398          |
| 2.8571 | 1600 | 3.8683        | 3.9286          |
| 3.0357 | 1700 | 3.8999        | 3.9003          |
| 3.2143 | 1800 | 3.899         | 3.9110          |
| 3.3929 | 1900 | 3.8398        | 3.8958          |
| 3.5714 | 2000 | 3.7397        | 3.8939          |
| 3.75   | 2100 | 3.8227        | 3.8797          |
| 3.9286 | 2200 | 3.7507        | 3.9167          |
| 4.1071 | 2300 | 3.7835        | 3.8933          |
| 4.2857 | 2400 | 3.8219        | 3.9044          |
| 4.4643 | 2500 | 3.7115        | 3.9060          |
| 4.6429 | 2600 | 3.7014        | 3.8887          |
| 4.8214 | 2700 | 3.7751        | 3.8855          |
| 5.0    | 2800 | 3.7999        | 3.8872          |

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title = {CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author = {Su Jianlin},
    year = {2022},
    month = {Jan},
    url = {https://kexue.fm/archives/8847},
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "prajjwal1/bert-tiny",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 128,
  "initializer_range": 0.02,
  "intermediate_size": 512,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 2,
  "num_hidden_layers": 2,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.47.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.3.1",
    "transformers": "4.47.1",
    "pytorch": "2.5.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ef39596ddd071a8e2c612a54d36ef3ae584ea5dd97e65cc9b03dd08d3c150455
size 17547912
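As a sanity check, the checkpoint size is consistent with bert-tiny's parameter count: the weights are stored as float32 (4 bytes each, per `torch_dtype` in config.json), so 17,547,912 bytes correspond to roughly 4.4M parameters, with the safetensors header accounting for a small remainder:

```python
size_bytes = 17_547_912      # from the LFS pointer above
bytes_per_param = 4          # float32, per "torch_dtype": "float32" in config.json
approx_params = size_bytes / bytes_per_param
print(round(approx_params / 1e6, 2))  # approximately 4.39 million parameters
```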
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 1000000000000000019884624838656,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff