tabesink92 committed
Commit 604f6e4 · verified · 1 Parent(s): b3cc9a9

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
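Note: this pooling config selects CLS-token pooling over the 1024-dimensional token embeddings. A minimal sketch of the module it describes, built by hand purely for illustration (sentence-transformers loads it automatically from `1_Pooling/` when the model is loaded):

```python
from sentence_transformers.models import Pooling

# Mirrors 1_Pooling/config.json: only the [CLS] token embedding is used
# as the sentence embedding; all other pooling modes are disabled.
pooling = Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
print(pooling.get_pooling_mode_str())  # -> "cls"
```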
README.md ADDED
@@ -0,0 +1,772 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:156
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: 'content=''1. What are the names of the three best available models
+   that were freely accessible for a few months this year? \n2. In what year does
+   the author expect the prompt-driven custom interface feature to be widely integrated
+   into products?'' additional_kwargs={''refusal'': None} response_metadata={''token_usage'':
+   {''completion_tokens'': 47, ''prompt_tokens'': 161, ''total_tokens'': 208, ''completion_tokens_details'':
+   {''accepted_prediction_tokens'': 0, ''audio_tokens'': 0, ''reasoning_tokens'':
+   0, ''rejected_prediction_tokens'': 0}, ''prompt_tokens_details'': {''audio_tokens'':
+   0, ''cached_tokens'': 0}}, ''model_name'': ''gpt-4o-mini-2024-07-18'', ''system_fingerprint'':
+   ''fp_13eed4fce1'', ''finish_reason'': ''stop'', ''logprobs'': None} id=''run-97f0e521-5a7a-4195-80c1-6bcbe1888a21-0''
+   usage_metadata={''input_tokens'': 161, ''output_tokens'': 47, ''total_tokens'':
+   208, ''input_token_details'': {''audio'': 0, ''cache_read'': 0}, ''output_token_details'':
+   {''audio'': 0, ''reasoning'': 0}}'
+   sentences:
+   - 'Against this photo of butterflies at the California Academy of Sciences:
+
+
+
+   A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange
+   slices of fruit are visible inside the dish.
+
+   Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
+   with white/cream-colored markings. The other is a large, brown butterfly with
+   patterns of lighter brown, beige, and black markings, including prominent eye
+   spots. The larger brown butterfly appears to be feeding on the fruit.'
+   - 'This prompt-driven custom interface feature is so powerful and easy to build
+   (once you’ve figured out the gnarly details of browser sandboxing) that I expect
+   it to show up as a feature in a wide range of products in 2025.
+
+   Universal access to the best models lasted for just a few short months
+
+   For a few short months this year all three of the best available models—GPT-4o,
+   Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
+   - 'Terminology aside, I remain skeptical as to their utility based, once again,
+   on the challenge of gullibility. LLMs believe anything you tell them. Any systems
+   that attempts to make meaningful decisions on your behalf will run into the same
+   roadblock: how good is a travel agent, or a digital assistant, or even a research
+   tool if it can’t distinguish truth from fiction?
+
+   Just the other day Google Search was caught serving up an entirely fake description
+   of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
+   movie listing from a fan fiction wiki.'
+ - source_sentence: 'content="1. What are the limitations of Apple''s LLM features
+   compared to frontier LLMs, according to the context?\n2. What new shape of LLM
+   was introduced in the final quarter of 2024, and what were the names of the initial
+   models released?" additional_kwargs={''refusal'': None} response_metadata={''token_usage'':
+   {''completion_tokens'': 54, ''prompt_tokens'': 195, ''total_tokens'': 249, ''completion_tokens_details'':
+   {''accepted_prediction_tokens'': 0, ''audio_tokens'': 0, ''reasoning_tokens'':
+   0, ''rejected_prediction_tokens'': 0}, ''prompt_tokens_details'': {''audio_tokens'':
+   0, ''cached_tokens'': 0}}, ''model_name'': ''gpt-4o-mini-2024-07-18'', ''system_fingerprint'':
+   ''fp_13eed4fce1'', ''finish_reason'': ''stop'', ''logprobs'': None} id=''run-f630b70d-5485-465d-b603-7832f2d1dfe6-0''
+   usage_metadata={''input_tokens'': 195, ''output_tokens'': 54, ''total_tokens'':
+   249, ''input_token_details'': {''audio'': 0, ''cache_read'': 0}, ''output_token_details'':
+   {''audio'': 0, ''reasoning'': 0}}'
+   sentences:
+   - '17th: AI for Data Journalism: demonstrating what we can do with this stuff right
+   now
+
+
+   22nd: Options for accessing Llama 3 from the terminal using LLM
+
+
+
+
+   May
+
+
+   8th: Slop is the new name for unwanted AI-generated content
+
+
+   15th: ChatGPT in “4o” mode is not running the new features yet
+
+
+   29th: Training is not the same as chatting: ChatGPT and other LLMs don’t remember
+   everything you say
+
+
+
+
+   June
+
+
+   6th: Accidental prompt injection against RAG applications
+
+
+   10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence
+
+
+   17th: Language models on the command-line
+
+
+   21st: Building search-based RAG using Claude, Datasette and Val Town
+
+
+   27th: Open challenges for AI engineering
+
+
+
+
+   July
+
+
+   14th: Imitation Intelligence, my keynote for PyCon US 2024'
+   - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user
+   I know what these models are capable of, and Apple’s LLM features offer a pale
+   imitation of what a frontier LLM can do. Instead we’re getting notification summaries
+   that misrepresent news headlines and writing assistant tools that I’ve not found
+   useful at all. Genmoji are kind of fun though.
+
+   The rise of inference-scaling “reasoning” models
+
+   The most interesting development in the final quarter of 2024 was the introduction
+   of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as
+   o1-preview and o1-mini on September 12th.'
+   - 'The models may have got more capable, but most of the limitations remained the
+   same. OpenAI’s o1 may finally be able to (mostly) count the Rs in strawberry,
+   but its abilities are still limited by its nature as an LLM and the constraints
+   placed on it by the harness it’s running in. o1 can’t run web searches or use
+   Code Interpreter, but GPT-4o can—both in that same ChatGPT UI. (o1 will pretend
+   to do those things if you ask it to, a regression to the URL hallucinations bug
+   from early 2023).
+
+   What are we doing about this? Not much. Most users are thrown in at the deep end.
+   The default LLM chat UI is like taking brand new computer users, dropping them
+   into a Linux terminal and expecting them to figure it all out.'
+ - source_sentence: 'content=''1. What is the significance of the cost reduction mentioned
+   in the context regarding LLMs in 2024? \n2. How does the emergence of multi-modal
+   LLMs, such as GPT-4 Vision and Google’s Gemini 1.0, reflect trends in technology
+   for 2024?'' additional_kwargs={''refusal'': None} response_metadata={''token_usage'':
+   {''completion_tokens'': 62, ''prompt_tokens'': 228, ''total_tokens'': 290, ''completion_tokens_details'':
+   {''accepted_prediction_tokens'': 0, ''audio_tokens'': 0, ''reasoning_tokens'':
+   0, ''rejected_prediction_tokens'': 0}, ''prompt_tokens_details'': {''audio_tokens'':
+   0, ''cached_tokens'': 0}}, ''model_name'': ''gpt-4o-mini-2024-07-18'', ''system_fingerprint'':
+   ''fp_00428b782a'', ''finish_reason'': ''stop'', ''logprobs'': None} id=''run-adc45d9c-270d-48b7-a80a-3f2de9b1f50c-0''
+   usage_metadata={''input_tokens'': 228, ''output_tokens'': 62, ''total_tokens'':
+   290, ''input_token_details'': {''audio'': 0, ''cache_read'': 0}, ''output_token_details'':
+   {''audio'': 0, ''reasoning'': 0}}'
+   sentences:
+   - The most recent twist, again from December (December was a lot) is live video.
+   ChatGPT voice mode now provides the option to share your camera feed with the
+   model and talk about what you can see in real time. Google Gemini have a preview
+   of the same feature, which they managed to ship the day before ChatGPT did.
+   - '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less
+   than a 400th of a cent).
+
+   This increase in efficiency and reduction in price is my single favourite trend
+   from 2024. I want the utility of LLMs at a fraction of the energy cost and it
+   looks like that’s what we’re getting.
+
+   Multimodal vision is common, audio and video are starting to emerge
+
+   My butterfly example above illustrates another key trend from 2024: the rise of
+   multi-modal LLMs.
+
+   A year ago the single most notable example of these was GPT-4 Vision, released
+   at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced
+   on December 7th 2023 so it also (just) makes it into the 2023 window.'
+   - 'Stuff we figured out about AI in 2023
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+   Simon Willison’s Weblog
+
+   Subscribe
+
+
+
+
+
+
+
+   Stuff we figured out about AI in 2023
+
+   31st December 2023
+
+   2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
+   OK to call these AI—they’re the latest and (currently) most interesting development
+   in the academic field of Artificial Intelligence that dates back to the 1950s.
+
+   Here’s my attempt to round up the highlights in one place!'
+ - source_sentence: 'content="1. When did Google release their gemini-2.0-flash-thinking-exp
+   model?\n2. What is the license under which Alibaba''s QwQ model was released?"
+   additional_kwargs={''refusal'': None} response_metadata={''token_usage'': {''completion_tokens'':
+   37, ''prompt_tokens'': 204, ''total_tokens'': 241, ''completion_tokens_details'':
+   {''accepted_prediction_tokens'': 0, ''audio_tokens'': 0, ''reasoning_tokens'':
+   0, ''rejected_prediction_tokens'': 0}, ''prompt_tokens_details'': {''audio_tokens'':
+   0, ''cached_tokens'': 0}}, ''model_name'': ''gpt-4o-mini-2024-07-18'', ''system_fingerprint'':
+   ''fp_00428b782a'', ''finish_reason'': ''stop'', ''logprobs'': None} id=''run-e70ae064-7316-41f9-9ee6-881279b91ba6-0''
+   usage_metadata={''input_tokens'': 204, ''output_tokens'': 37, ''total_tokens'':
+   241, ''input_token_details'': {''audio'': 0, ''cache_read'': 0}, ''output_token_details'':
+   {''audio'': 0, ''reasoning'': 0}}'
+   sentences:
+   - Structured and Gradual Learning. In organic datasets, the relationship between
+   tokens is often complex and indirect. Many reasoning steps may be required to
+   connect the current token to the next, making it challenging for the model to
+   learn effectively from next-token prediction. By contrast, each token generated
+   by a language model is by definition predicted by the preceding tokens, making
+   it easier for a model to follow the resulting reasoning patterns.
+   - 'I think people who complain that LLM improvement has slowed are often missing
+   the enormous advances in these multi-modal models. Being able to run prompts against
+   images (and audio and video) is a fascinating new way to apply these models.
+
+   Voice and live camera mode are science fiction come to life
+
+   The audio and live video modes that have started to emerge deserve a special mention.
+
+   The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
+   an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
+   text-to-speech model (creatively named tts-1) to enable conversations with the
+   ChatGPT mobile apps, but the actual model just saw text.'
+   - 'OpenAI are not the only game in town here. Google released their first entrant
+   in the category, gemini-2.0-flash-thinking-exp, on December 19th.
+
+   Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
+   2.0 license, and that one I could run on my own machine. They followed that up
+   with a vision reasoning model called QvQ on December 24th, which I also ran locally.
+
+   DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
+   their chat interface on November 20th.
+
+   To understand more about inference scaling I recommend Is AI progress slowing
+   down? by Arvind Narayanan and Sayash Kapoor.'
+ - source_sentence: 'content=''1. What challenges did the author face last year regarding
+   their choice of platform for machine learning models? \n2. How does the author
+   feel about their current experience as a Mac user compared to the previous year?''
+   additional_kwargs={''refusal'': None} response_metadata={''token_usage'': {''completion_tokens'':
+   43, ''prompt_tokens'': 188, ''total_tokens'': 231, ''completion_tokens_details'':
+   {''accepted_prediction_tokens'': 0, ''audio_tokens'': 0, ''reasoning_tokens'':
+   0, ''rejected_prediction_tokens'': 0}, ''prompt_tokens_details'': {''audio_tokens'':
+   0, ''cached_tokens'': 0}}, ''model_name'': ''gpt-4o-mini-2024-07-18'', ''system_fingerprint'':
+   ''fp_00428b782a'', ''finish_reason'': ''stop'', ''logprobs'': None} id=''run-0a628013-970b-4c1e-a336-e835eea1979e-0''
+   usage_metadata={''input_tokens'': 188, ''output_tokens'': 43, ''total_tokens'':
+   231, ''input_token_details'': {''audio'': 0, ''cache_read'': 0}, ''output_token_details'':
+   {''audio'': 0, ''reasoning'': 0}}'
+   sentences:
+   - 'I’m still trying to figure out the best patterns for doing this for my own work.
+   Everyone knows that evals are important, but there remains a lack of great guidance
+   for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
+   riding a bicycle benchmark is a pale imitation of what a real eval suite should
+   look like.
+
+   Apple Intelligence is bad, Apple’s MLX library is excellent
+
+   As a Mac user I’ve been feeling a lot better about my choice of platform this
+   year.
+
+   Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
+   was a huge disadvantage in terms of trying out new models.'
+   - 'The GPT-4 barrier was comprehensively broken
+
+   Some of those GPT-4 models run on my laptop
+
+   LLM prices crashed, thanks to competition and increased efficiency
+
+   Multimodal vision is common, audio and video are starting to emerge
+
+   Voice and live camera mode are science fiction come to life
+
+   Prompt driven app generation is a commodity already
+
+   Universal access to the best models lasted for just a few short months
+
+   “Agents” still haven’t really happened yet
+
+   Evals really matter
+
+   Apple Intelligence is bad, Apple’s MLX library is excellent
+
+   The rise of inference-scaling “reasoning” models
+
+   Was the best currently available LLM trained in China for less than $6m?
+
+   The environmental impact got better
+
+   The environmental impact got much, much worse'
+   - 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode,
+   where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio
+   input and output incredibly realistic sounding speech without needing separate
+   TTS or STT models.
+
+   The demo also sounded conspicuously similar to Scarlett Johansson... and after
+   she complained the voice from the demo, Skye, never made it to a production product.
+
+   The delay in releasing the new voice mode after the initial demo caused quite
+   a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running
+   the new features yet.'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.9166666666666666
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.9166666666666666
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 1.0
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.9166666666666666
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.3055555555555555
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.20000000000000004
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.10000000000000002
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.9166666666666666
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.9166666666666666
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 1.0
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.9507303902211639
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.9354166666666667
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.9354166666666667
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("tabesink92/legal-ft-v0")
+ # Run inference
+ sentences = [
+     "content='1. What challenges did the author face last year regarding their choice of platform for machine learning models? \\n2. How does the author feel about their current experience as a Mac user compared to the previous year?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 43, 'prompt_tokens': 188, 'total_tokens': 231, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_00428b782a', 'finish_reason': 'stop', 'logprobs': None} id='run-0a628013-970b-4c1e-a336-e835eea1979e-0' usage_metadata={'input_tokens': 188, 'output_tokens': 43, 'total_tokens': 231, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}",
+     'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.\nApple Intelligence is bad, Apple’s MLX library is excellent\nAs a Mac user I’ve been feeling a lot better about my choice of platform this year.\nLast year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.',
+     'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.\nThe demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product.\nThe delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
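Note: this checkpoint also ships a `query` prompt (see `config_sentence_transformers.json` further down in this commit), so retrieval-style usage would typically pass `prompt_name="query"` when encoding queries. A sketch, with invented query and passage strings:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tabesink92/legal-ft-v0")

# "Represent this sentence for searching relevant passages: " is prepended
# automatically when prompt_name="query" is given; passages are encoded as-is.
query_emb = model.encode(
    ["What new shape of LLM appeared in late 2024?"],  # invented example query
    prompt_name="query",
)
doc_emb = model.encode(
    ["OpenAI's o1 models introduced inference-scaling reasoning."],  # invented passage
)
print(model.similarity(query_emb, doc_emb))
```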
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.9167     |
+ | cosine_accuracy@3   | 0.9167     |
+ | cosine_accuracy@5   | 1.0        |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.9167     |
+ | cosine_precision@3  | 0.3056     |
+ | cosine_precision@5  | 0.2        |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.9167     |
+ | cosine_recall@3     | 0.9167     |
+ | cosine_recall@5     | 1.0        |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.9507** |
+ | cosine_mrr@10       | 0.9354     |
+ | cosine_map@100      | 0.9354     |
+
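Note: the table above comes from `InformationRetrievalEvaluator`. A minimal sketch of re-running such an evaluation; the queries, corpus, and relevance judgments below are hypothetical placeholders, since the actual evaluation set is not published with this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tabesink92/legal-ft-v0")

# Hypothetical placeholder data: ids -> text, plus query id -> relevant doc ids.
queries = {"q1": "Under which license was Alibaba's QwQ model released?"}
corpus = {"d1": "Alibaba's Qwen team released their QwQ model under an Apache 2.0 license."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict with cosine_accuracy@k, cosine_ndcg@10, ...
print(results)
```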
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 156 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 156 samples:
+   |         | sentence_0 | sentence_1 |
+   |:--------|:-----------|:-----------|
+   | type    | string     | string     |
+   | details | <ul><li>min: 343 tokens</li><li>mean: 356.73 tokens</li><li>max: 381 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.5 tokens</li><li>max: 204 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>content='1. What key themes and pivotal moments in the field of Large Language Models were identified in 2024? \n2. How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 159, 'total_tokens': 208, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_00428b782a', 'finish_reason': 'stop', 'logprobs': None} id='run-17232a8b-9c7d-4707-92bf-2701b977df50-0' usage_metadata={'input_tokens': 159, 'output_tokens': 49, 'total_tokens': 208, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+   | <code>content='1. What key themes and pivotal moments in the field of Large Language Models were identified in 2024? \n2. How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 159, 'total_tokens': 208, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_00428b782a', 'finish_reason': 'stop', 'logprobs': None} id='run-d5124245-02eb-4bc7-8286-0e8ebff06093-0' usage_metadata={'input_tokens': 159, 'output_tokens': 49, 'total_tokens': 208, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
+   | <code>content='1. What advancements have been made in multimodal vision and audio/video capabilities in LLMs?\n2. How has the competition affected the pricing of LLMs and what impact did it have on universal access to the best models?' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 48, 'prompt_tokens': 210, 'total_tokens': 258, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_13eed4fce1', 'finish_reason': 'stop', 'logprobs': None} id='run-fb650155-719f-43bf-9b42-d01e282a04d5-0' usage_metadata={'input_tokens': 210, 'output_tokens': 48, 'total_tokens': 258, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
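Note: a minimal sketch of how this loss pairing is typically constructed in sentence-transformers; the training pair below is an invented placeholder for the real 156-sample dataset:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# (sentence_0, sentence_1) pairs; the other rows in a batch act as negatives.
train_dataset = Dataset.from_dict({
    "sentence_0": ["hypothetical question about LLMs in 2024"],
    "sentence_1": ["hypothetical passage that answers it"],
})

base_loss = MultipleNegativesRankingLoss(model)
# Train the leading 768/512/256/128/64 dimensions to be useful on their own.
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```

One practical consequence of the Matryoshka objective is that embeddings can be truncated at load time, e.g. `SentenceTransformer("tabesink92/legal-ft-v0", truncate_dim=256)`, trading some accuracy for smaller, faster vectors.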
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
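Note: a sketch of how the non-default values above map onto the trainer API, reusing `model`, `train_dataset`, `loss`, and `evaluator` from the sketches earlier in this card; the output directory is hypothetical:

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="models/legal-ft-v0",  # hypothetical output directory
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()
```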
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | cosine_ndcg@10 |
+ |:-----:|:----:|:--------------:|
+ | 1.0   | 16   | 0.9638         |
+ | 2.0   | 32   | 0.9330         |
+ | 3.0   | 48   | 0.9156         |
+ | 3.125 | 50   | 0.9156         |
+ | 4.0   | 64   | 0.9156         |
+ | 5.0   | 80   | 0.9489         |
+ | 6.0   | 96   | 0.9507         |
+ | 6.25  | 100  | 0.9489         |
+ | 7.0   | 112  | 0.9507         |
+ | 8.0   | 128  | 0.9526         |
+ | 9.0   | 144  | 0.9507         |
+ | 9.375 | 150  | 0.9507         |
+ | 10.0  | 160  | 0.9507         |
+
+
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.3.1
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:013af2ed23a5f24e6b92c3fd34e8e365f48b56d17f432a8413303b280514a77e
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render.