Commit 110a511 (verified) by surajvbangera · Parent: 6bef63f

Add new SentenceTransformer model
1_Pooling/config.json ADDED

```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
README.md ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:956
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
widget:
- source_sentence: Does my insurance policy exclude medical costs for the first 30
    days' illness, but cover accident-related claims?
  sentences:
  - "any notice for renewal. \nb. Renewal shall not be denied on the ground that\
    \ the insured person had made a claim or claims in the preceding \npolicy years."
  - '• Minimum entry age for proposer/ spouse/ dependent parents - 18 years

    • Maximum Entry Age for proposer/ spouse/ dependent parents - 80 years

    • Minimum Entry age for dependent Children - 3 months

    • Maximum Entry Age for dependent Children - 25 years'
  - "a. Expenses related to the treatment of any illness within 30 days from the\
    \ first policy commencement date shall \nbe excluded except claims arising due\
    \ to an accident, provided the same are covered."
- source_sentence: I have a pre-authorization for a procedure, what should I bring
    along when I get admitted to the hospital to avoid paying the medical bills?
  sentences:
  - "Obesity/ Weight Control \nChange of Gender treatments\nCosmetic or plastic\
    \ Surgery \nHazardous or Adventure sports \nBreach of law \nExcluded Providers\n\
    Substance Abuse and Alcohol \nWellness and Rejuvenation \nDietary Supplements\
    \ & \nSubstances"
  - '56-60 11,950 12,760 7,874 18,887 13,573 9,243 17,848 13,162 21,348 16,437 11,308
    24,345 18,177 13,206 35,360 29,906 24,726

    61-65 14,352 15,319 9,444 22,688 16,298 11,089 21,442 15,804 25,652 19,744 13,571
    29,256 21,833 15,852 42,495 35,932 29,699'
  - "specified must be produced to the Network Hospital identified in the pre-authorization\
    \ letter at the time of Y our \nadmission to the same.\niii. If the procedure\
    \ above is followed, Y ou will not be required to directly pay for the Medical\
    \ Expenses above"
- source_sentence: Can you tell me the range of insured sum for a 4 member family
    in INR?
  sentences:
  - "i. Obesity-related cardiomyopathy\n ii. Coronary heart disease\n iii. Severe\
    \ Sleep Apnea\n iv. Uncontrolled T ype2 Diabetes\n7. Change-of-gender treatments:\
    \ (Excl07)"
  - 'Age/

    deduc-

    tible

    200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000
    300000 500000 1000000 300000 500000 1000000

    21-25 5,010 5,361 3,326 7,906 5,695 3,899 7,466 5,523 8,918 6,882 4,759 10,163
    7,610 5,553 14,756 12,498 10,354'
  - "CIN: U66010PN2000PLC015329, UIN:BAJHLIP23069V032223 13\nFAMILY SIZE: 4 MEMBER\n\
    Sum \nInsured \n(in INR)\n300000 500000 1000000 1500000 2000000 2500000 5000000\n\
    Age/\ndeduc-\ntible"
- source_sentence: Does IRDAI have rules on portability that let someone who's been
    continuously insured under any health policy from an Indian general or health
    insurer carry over waiting period benefits?
  sentences:
  - '◼ WHAT ARE THE EXCLUSIONS AND WAITING PERIOD UNDER THE POLICY?

    I. Waiting Period

    A. Pre-Existing Diseases - Code- Excl01

    a. Expenses related to the treatment of a pre-existing Disease (PED) and its
    direct complications shall be excluded'
  - "has been continuously covered without any lapses under any health insurance policy\
    \ with an Indian General/\nHealth insurer, the proposed insured person will get\
    \ the accrued continuity benefits in waiting periods as per \nIRDAI guidelines\
    \ on portability."
  - "Cumulative Bonus:\n For every claim free policy year, there will be increase\
    \ of 10% of \nthe Sum Insured, maximum up to 100%. If a claim is made in any \n\
    particular Policy Year, the Cumulative Bonus accrued shall not be \nreduced.\n\
    SBIG Health Super T op-Up,"
- source_sentence: what kind of coverage is provided by insurance for medical expenses
    that go beyond the normal amount?
  sentences:
  - "Enhances any existing health policy from any insurance provider \n- corporate\
    \ or personal"
  - 'Age/

    deduc-

    tible

    200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000
    300000 500000 1000000 300000 500000 1000000

    21-25 6,544 7,011 4,345 10,389 7,490 5,127 9,839 7,283 11,767 9,087 6,289 13,419
    10,054 7,343 19,518 16,543 13,717'
  - "health insurance cover and provides wider health protection for you and your\
    \ family. In case of higher expenses \ndue to illness or accidents, Extra Care\
    \ Plus policy takes care of the additional expenses. It is important to consider"
datasets:
- surajvbangera/mediclaim
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-cos-v1
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.3020833333333333
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8020833333333334
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.875
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9583333333333334
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.3020833333333333
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2673611111111111
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17499999999999996
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09583333333333333
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.3020833333333333
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8020833333333334
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.875
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9583333333333334
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6497808285407043
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5484209656084658
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5512795209742883
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.28125
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.78125
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.875
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9479166666666666
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.28125
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2604166666666667
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.17499999999999996
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09479166666666665
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.28125
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.78125
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.875
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9479166666666666
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6294431516700937
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5250578703703704
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5287000615125614
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.3020833333333333
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7916666666666666
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8854166666666666
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9375
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.3020833333333333
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2638888888888889
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1770833333333333
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09375
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.3020833333333333
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7916666666666666
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8854166666666666
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9375
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6396822227743622
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5409846230158731
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5445532958553793
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.2708333333333333
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.78125
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.84375
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9479166666666666
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.2708333333333333
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2604166666666667
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16874999999999996
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09479166666666666
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.2708333333333333
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.78125
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.84375
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9479166666666666
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6229142362169651
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5167080026455027
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5187267142104471
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.25
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7291666666666666
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.8333333333333334
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9166666666666666
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.25
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.24305555555555558
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.16666666666666666
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09166666666666666
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.25
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7291666666666666
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.8333333333333334
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9166666666666666
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5921613565527261
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.486338458994709
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.49077409326175775
      name: Cosine Map@100
---

# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-cos-v1

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) on the [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) <!-- at revision 822dbc9732879fe45b5d79fdb372f2ccec4c76b5 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
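
As a rough illustration of what the Pooling and Normalize modules above compute, here is a minimal pure-Python sketch (toy 2-dimensional vectors, not the real 768-dimensional model output): mean-pool the token vectors under an attention mask, then L2-normalize the result.

```python
import math

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1
    (this mirrors pooling_mode_mean_tokens in the Pooling config)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / count for s in sums]

def l2_normalize(vec):
    """Scale the vector to unit length (the Normalize module)."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Toy "sentence": 3 token vectors of dimension 2; the last position is padding.
tokens = [[1.0, 0.0], [0.0, 1.0], [9.9, 9.9]]
mask = [1, 1, 0]
pooled = mean_pool(tokens, mask)   # padding token is ignored -> [0.5, 0.5]
embedding = l2_normalize(pooled)   # unit-length sentence embedding
```

Because the final embedding is unit-length, a plain dot product between two embeddings equals their cosine similarity.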

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("surajvbangera/mediclaim_embedding")
# Run inference
sentences = [
    'what kind of coverage is provided by insurance for medical expenses that go beyond the normal amount?',
    'health insurance cover and provides wider health protection for you and your family. In case of higher expenses \ndue to illness or accidents, Extra Care Plus policy takes care of the additional expenses. It is important to consider',
    'Age/\ndeduc-\ntible\n200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000 300000 500000 1000000 300000 500000 1000000\n21-25 6,544 7,011 4,345 10,389 7,490 5,127 9,839 7,283 11,767 9,087 6,289 13,419 10,054 7,343 19,518 16,543 13,717',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
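
Because the model was trained with MatryoshkaLoss at dimensions 768/512/256/128/64, its embeddings can also be truncated to a prefix and renormalized with little quality loss (recent sentence-transformers versions expose this via a `truncate_dim` argument when loading the model). A toy sketch of what that truncation involves, using plain Python and 8-dimensional stand-ins rather than the real 768-dimensional embeddings:

```python
import math

def truncate_and_renormalize(embedding, dim):
    """Keep the first `dim` components of a Matryoshka-style embedding and
    rescale to unit length so cosine similarity stays well-defined."""
    head = embedding[:dim]
    norm = math.sqrt(sum(v * v for v in head))
    return [v / norm for v in head]

def cosine(a, b):
    # Both inputs are unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Toy 8-dim "embeddings" standing in for the model's 768-dim output.
e1 = truncate_and_renormalize([0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0], 4)
e2 = truncate_and_renormalize([0.5, 0.5, 0.0, 0.0, 0.5, 0.5, 0.0, 0.0], 4)
score = cosine(e1, e2)
```

Smaller truncation dimensions trade a little retrieval quality (see the per-dimension NDCG figures in the Evaluation section) for much cheaper vector storage and search.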

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | dim_768    | dim_512    | dim_256    | dim_128    | dim_64     |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1   | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_accuracy@3   | 0.8021     | 0.7812     | 0.7917     | 0.7812     | 0.7292     |
| cosine_accuracy@5   | 0.875      | 0.875      | 0.8854     | 0.8438     | 0.8333     |
| cosine_accuracy@10  | 0.9583     | 0.9479     | 0.9375     | 0.9479     | 0.9167     |
| cosine_precision@1  | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_precision@3  | 0.2674     | 0.2604     | 0.2639     | 0.2604     | 0.2431     |
| cosine_precision@5  | 0.175      | 0.175      | 0.1771     | 0.1687     | 0.1667     |
| cosine_precision@10 | 0.0958     | 0.0948     | 0.0938     | 0.0948     | 0.0917     |
| cosine_recall@1     | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_recall@3     | 0.8021     | 0.7812     | 0.7917     | 0.7812     | 0.7292     |
| cosine_recall@5     | 0.875      | 0.875      | 0.8854     | 0.8438     | 0.8333     |
| cosine_recall@10    | 0.9583     | 0.9479     | 0.9375     | 0.9479     | 0.9167     |
| **cosine_ndcg@10**  | **0.6498** | **0.6294** | **0.6397** | **0.6229** | **0.5922** |
| cosine_mrr@10       | 0.5484     | 0.5251     | 0.541      | 0.5167     | 0.4863     |
| cosine_map@100      | 0.5513     | 0.5287     | 0.5446     | 0.5187     | 0.4908     |
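
To make the table above concrete, here is a small pure-Python illustration of how accuracy@k and MRR@10 are defined for a retrieval run with one relevant passage per query (a sketch of the metric definitions, not the `InformationRetrievalEvaluator` implementation; the document IDs below are made up):

```python
def accuracy_at_k(ranked_ids, relevant_id, k):
    """1.0 if the relevant document appears in the top-k results, else 0.0.
    With one relevant document per query, recall@k gives the same number,
    which is why the accuracy and recall rows in the table match."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def reciprocal_rank(ranked_ids, relevant_id, k=10):
    """1/rank of the relevant document within the top-k, else 0.0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# Toy run: three queries, each with a ranked result list and one relevant id.
runs = [(["d3", "d1", "d7"], "d1"),  # relevant at rank 2 -> RR 0.5
        (["d2", "d9", "d4"], "d2"),  # relevant at rank 1 -> RR 1.0
        (["d5", "d6", "d8"], "d0")]  # relevant not retrieved -> RR 0.0
mrr_at_10 = sum(reciprocal_rank(r, rel) for r, rel in runs) / len(runs)
acc_at_1 = sum(accuracy_at_k(r, rel, 1) for r, rel in runs) / len(runs)
```

Averaging these per-query scores over the whole evaluation split yields the figures reported per truncation dimension.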

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### mediclaim

* Dataset: [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) at [943cab1](https://huggingface.co/datasets/surajvbangera/mediclaim/tree/943cab115f9a1d649d8a886fb35668e54ad0e1f7)
* Size: 956 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 956 samples:
  |         | anchor                                                                             | positive                                                                          |
  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                            |
  | details | <ul><li>min: 10 tokens</li><li>mean: 23.14 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 57.2 tokens</li><li>max: 135 tokens</li></ul> |
* Samples:
  | anchor | positive |
  |:---------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>Can I get a preventive health check-up covered under my insurance, and if yes, is there a limit to it?</code> | <code>by the Medical Practitioner.<br> vii. The Deductible shall not be applicable on this benefit.<br> Stay Fit Health Check Up<br> The Insured may avail a health check-up, only for Preventive <br>Test, up to a limit specified in the Policy Schedule, provided</code> |
  | <code>Which claims are excluded if they don't follow the Transplantation of Human Organs Amendment Bill 2011?</code> | <code>4 CIN: U66010PN2000PLC015329, UIN: BAJHLIP23069V032223<br> Specific exclusions:<br> 1. Claims which have NOT been admitted under Medical expenses section<br> 2. Claims not in compliance with THE TRANSPLANTATION OF HUMAN ORGANS (AMENDMENT) BILL, 2011</code> |
  | <code>Will the insurance pay for lawful abortion and related hospital stays?</code> | <code>ii. We will also cover expenses towards lawful medical termination of pregnancy during the Policy period.<br> iii. In patient Hospitalization Expenses of pre-natal and post-natal hospitalization</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
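
Conceptually, MatryoshkaLoss applies the wrapped base loss to prefix-truncated copies of each embedding, one term per dimension in `matryoshka_dims`, weighted by `matryoshka_weights`. A toy pure-Python sketch of that idea (with a stub squared-distance "loss" in place of MultipleNegativesRankingLoss, and made-up 4-dimensional vectors; this is not the sentence-transformers implementation):

```python
def matryoshka_loss(embedding_pairs, base_loss, dims, weights):
    """Sum the base loss over prefix-truncated copies of the embeddings,
    one term per Matryoshka dimension."""
    total = 0.0
    for dim, w in zip(dims, weights):
        truncated = [(a[:dim], p[:dim]) for a, p in embedding_pairs]
        total += w * base_loss(truncated)
    return total

def mse_pairs(pairs):
    """Stub base loss: mean squared distance between anchor and positive."""
    return sum(sum((x - y) ** 2 for x, y in zip(a, p))
               for a, p in pairs) / len(pairs)

# One (anchor, positive) pair of toy 4-dim embeddings.
pairs = [([1.0, 0.0, 1.0, 0.0], [1.0, 1.0, 1.0, 1.0])]
loss = matryoshka_loss(pairs, mse_pairs, dims=[4, 2], weights=[1, 1])
```

Because every prefix is pushed toward the same ranking objective, the front of the embedding stays useful on its own, which is what makes the 512/256/128/64-dim truncations above viable.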
556
+
557
+ ### Evaluation Dataset
558
+
559
+ #### mediclaim
560
+
561
+ * Dataset: [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) at [943cab1](https://huggingface.co/datasets/surajvbangera/mediclaim/tree/943cab115f9a1d649d8a886fb35668e54ad0e1f7)
562
+ * Size: 956 evaluation samples
563
+ * Columns: <code>anchor</code> and <code>positive</code>
564
+ * Approximate statistics based on the first 956 samples:
565
+ | | anchor | positive |
566
+ |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
567
+ | type | string | string |
568
+ | details | <ul><li>min: 10 tokens</li><li>mean: 22.4 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 56.76 tokens</li><li>max: 133 tokens</li></ul> |
569
+ * Samples:
570
+ | anchor | positive |
571
+ |:---------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
572
+ | <code>Is there any refund for medical exams if I get a policy and it's accepted?</code> | <code>• If pre-policy checkup is conducted, 50% of the medical tests charges would be reimbursed, subject to acceptance <br>of proposal and policy issuance.<br>Age of the person <br>to be insured<br>Sum Insured Medical Examination</code> |
573
+ | <code>Are there any exclusions for coverage of substance abuse treatment or its consequences?</code> | <code>are payable but not the complete claim. <br>12. T reatment for Alcoholism, drug or substance abuse or any addictive condition and consequences thereof. <br>(Excl12)</code> |
574
+ | <code>Can you tell me about the medical bills I might have within 90 days after being discharged?</code> | <code>CIN: U66010PN2000PLC015329, UIN:BAJHLIP23069V032223 3<br> c. Post-hospitalisation expenses<br> The medical expenses incurred in the 90 days immediately after you were discharged, provided that:</code> |
575
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
576
+ ```json
577
+ {
578
+ "loss": "MultipleNegativesRankingLoss",
579
+ "matryoshka_dims": [
580
+ 768,
581
+ 512,
582
+ 256,
583
+ 128,
584
+ 64
585
+ ],
586
+ "matryoshka_weights": [
587
+ 1,
588
+ 1,
589
+ 1,
590
+ 1,
591
+ 1
592
+ ],
593
+ "n_dims_per_step": -1
594
+ }
595
+ ```
596
+
597
+ ### Training Hyperparameters
598
+ #### Non-Default Hyperparameters
599
+
600
+ - `eval_strategy`: epoch
601
+ - `per_device_train_batch_size`: 32
602
+ - `per_device_eval_batch_size`: 16
603
+ - `gradient_accumulation_steps`: 16
604
+ - `learning_rate`: 2e-05
605
+ - `num_train_epochs`: 40
606
+ - `lr_scheduler_type`: cosine
607
+ - `warmup_ratio`: 0.1
608
+ - `fp16`: True
609
+ - `load_best_model_at_end`: True
610
+ - `optim`: adamw_torch_fused
611
+ - `batch_sampler`: no_duplicates
612
+
613
+ #### All Hyperparameters
614
+ <details><summary>Click to expand</summary>
615
+
616
+ - `overwrite_output_dir`: False
617
+ - `do_predict`: False
618
+ - `eval_strategy`: epoch
619
+ - `prediction_loss_only`: True
620
+ - `per_device_train_batch_size`: 32
621
+ - `per_device_eval_batch_size`: 16
622
+ - `per_gpu_train_batch_size`: None
623
+ - `per_gpu_eval_batch_size`: None
624
+ - `gradient_accumulation_steps`: 16
625
+ - `eval_accumulation_steps`: None
626
+ - `torch_empty_cache_steps`: None
627
+ - `learning_rate`: 2e-05
628
+ - `weight_decay`: 0.0
629
+ - `adam_beta1`: 0.9
630
+ - `adam_beta2`: 0.999
631
+ - `adam_epsilon`: 1e-08
632
+ - `max_grad_norm`: 1.0
633
+ - `num_train_epochs`: 40
634
+ - `max_steps`: -1
635
+ - `lr_scheduler_type`: cosine
636
+ - `lr_scheduler_kwargs`: {}
637
+ - `warmup_ratio`: 0.1
638
+ - `warmup_steps`: 0
639
+ - `log_level`: passive
640
+ - `log_level_replica`: warning
641
+ - `log_on_each_node`: True
642
+ - `logging_nan_inf_filter`: True
643
+ - `save_safetensors`: True
644
+ - `save_on_each_node`: False
645
+ - `save_only_model`: False
646
+ - `restore_callback_states_from_checkpoint`: False
647
+ - `no_cuda`: False
648
+ - `use_cpu`: False
649
+ - `use_mps_device`: False
650
+ - `seed`: 42
651
+ - `data_seed`: None
652
+ - `jit_mode_eval`: False
653
+ - `use_ipex`: False
654
+ - `bf16`: False
655
+ - `fp16`: True
656
+ - `fp16_opt_level`: O1
657
+ - `half_precision_backend`: auto
658
+ - `bf16_full_eval`: False
659
+ - `fp16_full_eval`: False
660
+ - `tf32`: None
661
+ - `local_rank`: 0
662
+ - `ddp_backend`: None
663
+ - `tpu_num_cores`: None
664
+ - `tpu_metrics_debug`: False
665
+ - `debug`: []
666
+ - `dataloader_drop_last`: False
667
+ - `dataloader_num_workers`: 0
668
+ - `dataloader_prefetch_factor`: None
669
+ - `past_index`: -1
670
+ - `disable_tqdm`: False
671
+ - `remove_unused_columns`: True
672
+ - `label_names`: None
673
+ - `load_best_model_at_end`: True
674
+ - `ignore_data_skip`: False
675
+ - `fsdp`: []
676
+ - `fsdp_min_num_params`: 0
677
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
678
+ - `fsdp_transformer_layer_cls_to_wrap`: None
679
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
680
+ - `deepspeed`: None
681
+ - `label_smoothing_factor`: 0.0
682
+ - `optim`: adamw_torch_fused
683
+ - `optim_args`: None
684
+ - `adafactor`: False
685
+ - `group_by_length`: False
686
+ - `length_column_name`: length
687
+ - `ddp_find_unused_parameters`: None
688
+ - `ddp_bucket_cap_mb`: None
689
+ - `ddp_broadcast_buffers`: False
690
+ - `dataloader_pin_memory`: True
691
+ - `dataloader_persistent_workers`: False
692
+ - `skip_memory_metrics`: True
693
+ - `use_legacy_prediction_loop`: False
694
+ - `push_to_hub`: False
695
+ - `resume_from_checkpoint`: None
696
+ - `hub_model_id`: None
697
+ - `hub_strategy`: every_save
698
+ - `hub_private_repo`: None
699
+ - `hub_always_push`: False
700
+ - `gradient_checkpointing`: False
701
+ - `gradient_checkpointing_kwargs`: None
702
+ - `include_inputs_for_metrics`: False
703
+ - `include_for_metrics`: []
704
+ - `eval_do_concat_batches`: True
705
+ - `fp16_backend`: auto
706
+ - `push_to_hub_model_id`: None
707
+ - `push_to_hub_organization`: None
708
+ - `mp_parameters`:
709
+ - `auto_find_batch_size`: False
710
+ - `full_determinism`: False
711
+ - `torchdynamo`: None
712
+ - `ray_scope`: last
713
+ - `ddp_timeout`: 1800
714
+ - `torch_compile`: False
715
+ - `torch_compile_backend`: None
716
+ - `torch_compile_mode`: None
717
+ - `dispatch_batches`: None
718
+ - `split_batches`: None
719
+ - `include_tokens_per_second`: False
720
+ - `include_num_input_tokens_seen`: False
721
+ - `neftune_noise_alpha`: None
722
+ - `optim_target_modules`: None
723
+ - `batch_eval_metrics`: False
724
+ - `eval_on_start`: False
725
+ - `use_liger_kernel`: False
726
+ - `eval_use_gather_object`: False
727
+ - `average_tokens_across_devices`: False
728
+ - `prompts`: None
729
+ - `batch_sampler`: no_duplicates
730
+ - `multi_dataset_batch_sampler`: proportional
731
+
732
+ </details>
733
+ 
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
+ |:--------:|:------:|:-------------:|:---------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
+ | -1 | -1 | - | - | 0.4723 | 0.4748 | 0.5015 | 0.4589 | 0.3867 |
+ | 1.0 | 2 | - | 1.5925 | 0.4821 | 0.4846 | 0.5122 | 0.4604 | 0.3971 |
+ | 2.0 | 4 | - | 1.5925 | 0.4821 | 0.4846 | 0.5122 | 0.4604 | 0.3971 |
+ | 3.0 | 6 | - | 1.0402 | 0.5431 | 0.5468 | 0.5530 | 0.5009 | 0.4435 |
+ | 4.0 | 8 | - | 0.7900 | 0.5876 | 0.5926 | 0.6075 | 0.5484 | 0.4726 |
+ | 5.0 | 10 | 33.0646 | 0.6077 | 0.5890 | 0.6039 | 0.6270 | 0.5779 | 0.5072 |
+ | 6.0 | 12 | - | 0.5213 | 0.6357 | 0.6379 | 0.6522 | 0.5966 | 0.5417 |
+ | 7.0 | 14 | - | 0.4735 | 0.6425 | 0.6395 | 0.6286 | 0.5995 | 0.5795 |
+ | 8.0 | 16 | - | 0.4416 | 0.6253 | 0.6387 | 0.6227 | 0.5903 | 0.5738 |
+ | 9.0 | 18 | - | 0.4236 | 0.6303 | 0.6489 | 0.6387 | 0.6179 | 0.5670 |
+ | **10.0** | **20** | **8.8456** | **0.4115** | **0.6465** | **0.6519** | **0.6369** | **0.6112** | **0.5720** |
+ | 11.0 | 22 | - | 0.4059 | 0.6447 | 0.6270 | 0.6318 | 0.6169 | 0.5950 |
+ | 12.0 | 24 | - | 0.4036 | 0.6382 | 0.6318 | 0.6346 | 0.6063 | 0.6026 |
+ | 13.0 | 26 | - | 0.4022 | 0.6485 | 0.6410 | 0.6441 | 0.6163 | 0.5900 |
+ | 14.0 | 28 | - | 0.4022 | 0.6520 | 0.6426 | 0.6597 | 0.6225 | 0.6001 |
+ | 15.0 | 30 | 4.4602 | 0.4033 | 0.6507 | 0.6363 | 0.6576 | 0.6217 | 0.6134 |
+ | 16.0 | 32 | - | 0.4047 | 0.6530 | 0.6389 | 0.6609 | 0.6350 | 0.6068 |
+ | 17.0 | 34 | - | 0.4058 | 0.6501 | 0.6344 | 0.6501 | 0.6281 | 0.5997 |
+ | 18.0 | 36 | - | 0.4067 | 0.6509 | 0.6333 | 0.6553 | 0.6360 | 0.6050 |
+ | 19.0 | 38 | - | 0.4070 | 0.6561 | 0.6331 | 0.6602 | 0.6397 | 0.6051 |
+ | 20.0 | 40 | 3.9605 | 0.4071 | 0.6498 | 0.6294 | 0.6397 | 0.6229 | 0.5922 |
+ 
+ * The bold row denotes the saved checkpoint.
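
The per-dimension NDCG columns in the table come from Matryoshka Representation Learning: the 768-dim embedding can be truncated to its first 512/256/128/64 dimensions and re-normalized, trading accuracy for storage and speed. A minimal numpy sketch of that truncation step (toy random vectors stand in for real embeddings; with Sentence Transformers this is typically handled by the model's `truncate_dim` option):

```python
import numpy as np

def truncate_and_normalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka dimensions and re-normalize to unit length."""
    truncated = embeddings[..., :dim]
    norms = np.linalg.norm(truncated, axis=-1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

# Toy stand-in for two 768-dim sentence embeddings (real ones come from model.encode).
rng = np.random.default_rng(0)
full = rng.normal(size=(2, 768))
full /= np.linalg.norm(full, axis=-1, keepdims=True)

small = truncate_and_normalize(full, 256)        # shape (2, 256), unit norm
cosine = float(small[0] @ small[1])              # cosine similarity in 256-dim space
```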
+ 
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.3.2
+ - Tokenizers: 0.21.0
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "sentence-transformers/multi-qa-mpnet-base-cos-v1",
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "vocab_size": 30527
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b298339ee929e261c35c20edf7b7b4c63292540d12491a0e76b267a90f2e2da
+ size 437967672
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,73 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_length": 250,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff