Commit 6888589 (verified) by deepali1021 · 1 parent: eed25d7

Add new SentenceTransformer model
1_Pooling/config.json ADDED
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
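This pooling configuration selects the [CLS] token of the 1024-dimensional token embeddings as the sentence embedding. For reference only, a minimal sketch of constructing an equivalent module with the sentence-transformers `models` API (the keyword arguments simply mirror the JSON above):

```python
from sentence_transformers import models

# CLS-token pooling over 1024-dimensional token embeddings,
# mirroring 1_Pooling/config.json
pooling = models.Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
print(pooling)
```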
README.md ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:48
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What types of training did the drivers complete in the past year
  to enhance their skills?
  sentences:
  - "department. It provides guidelines to ensure safe, efficient, and customer-focused\
    \ transportation \nservices. Please read this manual carefully and consult with\
    \ your supervisor or the department \nmanager if you have any questions or need\
    \ further clarification. \n \nDepartment Overview \nThe Transportation Department\
    \ plays a critical role in providing reliable transportation services to \nour\
    \ customers. Our department consists of 50 drivers, 10 dispatchers, and 5 maintenance\
    \ \ntechnicians. In the past year, we transported over 500,000 passengers across\
    \ various routes, ensuring \ntheir safety and satisfaction. \n \nSafety and Vehicle\
    \ Maintenance \nSafety is our top priority. All vehicles undergo regular inspections\
    \ and maintenance to ensure they"
  - "Compliance with local, state, and federal regulations is crucial. Our drivers\
    \ are required to maintain \nup-to-date knowledge of transportation laws and regulations.\
    \ In the past year, we conducted 20 \ncompliance audits to ensure adherence to\
    \ regulatory requirements. \n \nTraining and Development \nContinuous training\
    \ and development are vital for our department's success. In the past year, our\
    \ \ndrivers completed over 100 hours of professional development training, focusing\
    \ on defensive \ndriving, customer service, and emergency preparedness. \n \n\
    Communication and Collaboration \nEffective communication and collaboration are\
    \ essential within the Transportation Department and"
  - "Customer Service \nWe prioritize exceptional customer service. Our drivers are\
    \ trained to provide a friendly and \nrespectful experience to all passengers.\
    \ In the past year, we received an average customer \nsatisfaction rating of 4.5\
    \ out of 5, demonstrating our commitment to meeting customer needs and \nexceeding\
    \ their expectations. \n \nIncident Reporting and Investigation \nAccidents or\
    \ incidents may occur during transportation operations. In such cases, our drivers\
    \ are \ntrained to promptly report incidents to their supervisor or the incident\
    \ response team. In the past \nyear, we reported and investigated 10 incidents,\
    \ implementing corrective actions to prevent future \noccurrences. \n \nCompliance\
    \ with Regulations"
- source_sentence: Who should be contacted for questions or further information regarding
  the HR Policy Manual?
  sentences:
  - "responsible for familiarizing themselves with the latest version of the manual.\
    \ \n \nConclusion \nThank you for reviewing our HR Policy Manual. It serves as\
    \ a guide to ensure a positive and inclusive \nwork environment. If you have any\
    \ questions or need further information, please reach out to the HR \ndepartment.\
    \ We value your contributions and commitment to our company's success."
  - "for familiarizing themselves with the latest version of the manual. \n \nConclusion\
    \ \nThank you for reviewing the Transportation Department Policy Manual. Your\
    \ commitment to safety, \ncustomer service, and compliance plays a crucial role\
    \ in our department's success. If you have any \nquestions or need further information,\
    \ please reach out to your supervisor or the department \nmanager. Your dedication\
    \ and professionalism are appreciated."
  - "Leaves of Absence \nWe provide various types of leaves of absence, including\
    \ vacation leave, sick leave, parental leave, \nand bereavement leave. Employees\
    \ are entitled to 15 days of paid vacation leave per year. The \naverage sick\
    \ leave utilization in 2022 was 4.2 days per employee. We offer flexible parental\
    \ leave \npolicies, allowing employees to take up to 12 weeks of leave after the\
    \ birth or adoption of a child. \n \nCompensation and Benefits \nOur employees\
    \ receive competitive compensation packages. In 2022, the average annual salary\
    \ \nacross all positions was $60,000. We offer a comprehensive benefits package,\
    \ including health \ninsurance, dental coverage, retirement plans, and employee\
    \ assistance programs. On average, our"
- source_sentence: How much did the average route duration decrease in the past year
  due to route planning and optimization?
  sentences:
  - "Our drivers are responsible for operating vehicles safely, following traffic\
    \ rules and regulations. They \nare required to hold a valid driver's license\
    \ and maintain a clean driving record. In the past year, our \ndrivers completed\
    \ over 2,000 hours of driving training to enhance their skills and knowledge.\
    \ \n \nRoute Planning and Optimization \nEfficient route planning is essential\
    \ for timely transportation services. Our department utilizes \nadvanced routing\
    \ software to optimize routes and minimize travel time. In the past year, we reduced\
    \ \nour average route duration by 15% through effective route planning and optimization\
    \ strategies. \n \nCustomer Service"
  - "Our fare collection system ensures fair and consistent fee collection from passengers.\
    \ The current fee \nstructure is as follows: \n \nRegular fare: $2.50 \nSenior\
    \ citizens and students: $1.50 \nChildren under 5 years old: Free \nFee collection\
    \ is primarily done through electronic payment methods, such as smart cards and\
    \ \nmobile payment apps. Drivers are responsible for ensuring correct fare collection\
    \ and providing \nreceipts upon request. \nRoute Information and Rules \nOur transportation\
    \ department operates multiple routes within the city. Route information, including\
    \ \nmaps, schedules, and stops, is available on our website and at designated\
    \ information centers."
  - "manual carefully and contact the HR department if you have any questions or need\
    \ further \nclarification. \n \nEqual Employment Opportunity \nOur company is\
    \ committed to providing equal employment opportunities to all individuals. We\
    \ strive \nto create a diverse and inclusive workplace. In 2022, our workforce\
    \ comprised 55% male and 45% \nfemale employees. We actively recruit and promote\
    \ individuals from different backgrounds, including \nracial and ethnic minorities.\
    \ Our goal is to maintain a workforce that reflects the diverse \ncommunities\
    \ we serve. \n \nAnti-Harassment and Anti-Discrimination \nWe maintain a zero-tolerance\
    \ policy for harassment and discrimination. In the past year, we received"
- source_sentence: How many employees are served by the organization's email system?
  sentences:
  - "only two reports of harassment, which were promptly investigated and resolved.\
    \ We provide training \nto all employees on recognizing and preventing harassment.\
    \ We encourage employees to report any \nincidents of harassment or discrimination\
    \ and ensure confidentiality throughout the investigation \nprocess."
  - "Passengers are expected to follow the rules and regulations while utilizing our\
    \ transportation \nservices, including: \n \nBoarding and exiting the vehicle\
    \ in an orderly manner. \nYielding seats to elderly, disabled, and pregnant passengers.\
    \ \nKeeping noise levels to a minimum. \nRefraining from eating, drinking, or\
    \ smoking onboard. \nUsing designated safety equipment, such as seat belts, if\
    \ available. \nReporting any suspicious activity or unattended items to the driver.\
    \ \nAmendments to the Policy Manual \nThis policy manual is subject to periodic\
    \ review and amendments. Any updates or changes will be \ncommunicated to employees\
    \ through email or departmental meetings. Employees are responsible"
  - "Network and Systems Access \nAccess to the organization's network and systems\
    \ is granted based on job roles and responsibilities. \nEmployees must adhere\
    \ to the network access policies and protect their login credentials. In the past\
    \ \nyear, we reviewed and updated access privileges for 300 employees to align\
    \ with their job functions. \n \nEmail and Communication \nThe organization's\
    \ email system is to be used for official communication purposes. Employees are\
    \ \nexpected to follow email etiquette and avoid the use of offensive or inappropriate\
    \ language. The \nemail system is monitored for security purposes and to ensure\
    \ compliance with policies. We manage \nand maintain an email system that serves\
    \ 500 employees. \n \nData Security and Confidentiality"
- source_sentence: How often were departmental meetings conducted to address information
  sharing and problem-solving?
  sentences:
  - "Leaves of Absence \nWe provide various types of leaves of absence, including\
    \ vacation leave, sick leave, parental leave, \nand bereavement leave. Employees\
    \ are entitled to 15 days of paid vacation leave per year. The \naverage sick\
    \ leave utilization in 2022 was 4.2 days per employee. We offer flexible parental\
    \ leave \npolicies, allowing employees to take up to 12 weeks of leave after the\
    \ birth or adoption of a child. \n \nCompensation and Benefits \nOur employees\
    \ receive competitive compensation packages. In 2022, the average annual salary\
    \ \nacross all positions was $60,000. We offer a comprehensive benefits package,\
    \ including health \ninsurance, dental coverage, retirement plans, and employee\
    \ assistance programs. On average, our"
  - "responsible for familiarizing themselves with the latest version of the manual.\
    \ \n \nConclusion \nThank you for reviewing our HR Policy Manual. It serves as\
    \ a guide to ensure a positive and inclusive \nwork environment. If you have any\
    \ questions or need further information, please reach out to the HR \ndepartment.\
    \ We value your contributions and commitment to our company's success."
  - "with other departments. In the past year, we conducted monthly departmental meetings\
    \ and \nestablished communication channels to facilitate information sharing and\
    \ problem-solving. \n \nFare Collection and Fee Structure"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 1.0
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 1.0
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33333333333333337
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.2
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.1
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 1.0
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 1.0
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 1.0
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 1.0
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("deepali1021/finetuned_arctic_ft-v2")
# Run inference
sentences = [
    'How often were departmental meetings conducted to address information sharing and problem-solving?',
    'with other departments. In the past year, we conducted monthly departmental meetings and \nestablished communication channels to facilitate information sharing and problem-solving. \n \nFare Collection and Fee Structure',
    "responsible for familiarizing themselves with the latest version of the manual. \n \nConclusion \nThank you for reviewing our HR Policy Manual. It serves as a guide to ensure a positive and inclusive \nwork environment. If you have any questions or need further information, please reach out to the HR \ndepartment. We value your contributions and commitment to our company's success.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
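
The model was trained with a retrieval-style query prompt (`config_sentence_transformers.json` defines a `query` prompt, "Represent this sentence for searching relevant passages: "), so for search-style use it is worth encoding queries and passages asymmetrically. A minimal sketch, assuming the standard `prompt_name` argument of `encode`; the example texts are drawn from the widget above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("deepali1021/finetuned_arctic_ft-v2")

queries = ["How many employees are served by the organization's email system?"]
passages = [
    "We manage and maintain an email system that serves 500 employees.",
    "Employees are entitled to 15 days of paid vacation leave per year.",
]

# Prepend the "query" prompt from config_sentence_transformers.json to queries only
query_embeddings = model.encode(queries, prompt_name="query")
passage_embeddings = model.encode(passages)

# Cosine similarity (the configured similarity function) ranks passages per query
scores = model.similarity(query_embeddings, passage_embeddings)
print(scores)  # shape [1, 2]; the first passage should score highest
```

Because training used `MatryoshkaLoss` over dimensions 768/512/256/128/64, embeddings can also be truncated, for example by loading the model with `truncate_dim=256`, when a smaller index footprint matters; expect some quality trade-off at the lowest dimensions.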

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **1.0** |
| cosine_mrr@10 | 1.0 |
| cosine_map@100 | 1.0 |
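
Note that with a single relevant passage per query, precision@k is capped at 1/k, so the 0.3333, 0.2, and 0.1 precision values above correspond to perfect retrieval. The numbers were produced by the `InformationRetrievalEvaluator` linked above; a rough sketch of running a comparable evaluation (the queries, corpus, and relevance judgments below are placeholders, not the original evaluation split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("deepali1021/finetuned_arctic_ft-v2")

# Placeholder data: query id -> text, doc id -> text, query id -> ids of relevant docs
queries = {"q1": "How much did the average route duration decrease in the past year?"}
corpus = {
    "d1": "In the past year, we reduced our average route duration by 15% through effective route planning.",
    "d2": "Employees are entitled to 15 days of paid vacation leave per year.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # a dict containing the metrics listed in the table above
```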

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 48 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 48 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------|:-----------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 16.25 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 99.96 tokens</li><li>max: 143 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------|:-----------|
| <code>What topics are covered in the Transportation Department Policy Manual?</code> | <code>Transportation Department Policy Manual <br> <br>Table of Contents: <br> <br>• <br>Introduction <br>• <br>Department Overview <br>• <br>Safety and Vehicle Maintenance <br>• <br>Driver Responsibilities <br>• <br>Route Planning and Optimization <br>• <br>Customer Service <br>• <br>Incident Reporting and Investigation <br>• <br>Compliance with Regulations <br>• <br>Training and Development <br>• <br>Communication and Collaboration <br>• <br>Fare Collection and Fee Structure <br>• <br>Route Information and Rules <br>• <br>Amendments to the Policy Manual <br>• <br>Conclusion <br>Introduction <br>Welcome to the Transportation Department Policy Manual! This manual serves as a comprehensive <br>guide to the policies, procedures, and expectations for employees working in the transportation</code> |
| <code>What is the purpose of the Transportation Department Policy Manual?</code> | <code>Transportation Department Policy Manual <br> <br>Table of Contents: <br> <br>• <br>Introduction <br>• <br>Department Overview <br>• <br>Safety and Vehicle Maintenance <br>• <br>Driver Responsibilities <br>• <br>Route Planning and Optimization <br>• <br>Customer Service <br>• <br>Incident Reporting and Investigation <br>• <br>Compliance with Regulations <br>• <br>Training and Development <br>• <br>Communication and Collaboration <br>• <br>Fare Collection and Fee Structure <br>• <br>Route Information and Rules <br>• <br>Amendments to the Policy Manual <br>• <br>Conclusion <br>Introduction <br>Welcome to the Transportation Department Policy Manual! This manual serves as a comprehensive <br>guide to the policies, procedures, and expectations for employees working in the transportation</code> |
| <code>What is the primary focus of the Transportation Department as outlined in the manual?</code> | <code>department. It provides guidelines to ensure safe, efficient, and customer-focused transportation <br>services. Please read this manual carefully and consult with your supervisor or the department <br>manager if you have any questions or need further clarification. <br> <br>Department Overview <br>The Transportation Department plays a critical role in providing reliable transportation services to <br>our customers. Our department consists of 50 drivers, 10 dispatchers, and 5 maintenance <br>technicians. In the past year, we transported over 500,000 passengers across various routes, ensuring <br>their safety and satisfaction. <br> <br>Safety and Vehicle Maintenance <br>Safety is our top priority. All vehicles undergo regular inspections and maintenance to ensure they</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        768,
        512,
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
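
For reference, a minimal sketch of how this configuration is typically assembled in sentence-transformers: `MultipleNegativesRankingLoss` supplies the in-batch-negatives contrastive objective, and `MatryoshkaLoss` re-applies it at each truncated embedding size listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Contrastive loss over (sentence_0, sentence_1) pairs with in-batch negatives
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same objective at several truncated embedding sizes so that
# shortened vectors remain useful at inference time
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```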

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
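
A sketch of a fine-tuning run with these non-default settings, using the `SentenceTransformerTrainer` API from the framework versions listed below (the dataset construction is illustrative; the 48 (sentence_0, sentence_1) pairs are not published as a standalone dataset):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Illustrative stand-in for the 48 (sentence_0, sentence_1) training pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["What topics are covered in the Transportation Department Policy Manual?"],
    "sentence_1": ["Transportation Department Policy Manual ... Table of Contents ..."],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic_ft-v2",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    # the original run also set eval_strategy="steps" with an IR evaluator
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```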

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 5 | 0.9431 |
| 2.0 | 10 | 1.0 |
| 3.0 | 15 | 1.0 |
| 4.0 | 20 | 1.0 |
| 5.0 | 25 | 1.0 |
| 6.0 | 30 | 1.0 |
| 7.0 | 35 | 1.0 |
| 8.0 | 40 | 1.0 |
| 9.0 | 45 | 1.0 |
| 10.0 | 50 | 1.0 |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
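This is the standard BERT-large-style configuration inherited from the base model. For completeness, a sketch of reproducing the sentence embeddings with plain `transformers`, mirroring the module stack in `modules.json` (CLS pooling followed by L2 normalization; the query prefix comes from `config_sentence_transformers.json`):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "deepali1021/finetuned_arctic_ft-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)  # BertModel backbone
model.eval()

query_prefix = "Represent this sentence for searching relevant passages: "
texts = [query_prefix + "How many employees are served by the organization's email system?"]

inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state

# CLS pooling (1_Pooling) followed by L2 normalization (2_Normalize)
embeddings = torch.nn.functional.normalize(token_embeddings[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 1024])
```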
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.3",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:23171febd2763edcd337d7f22a942dea6523efa22c25252a4625e6c3b802a11c
size 1336413848
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff