--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:956 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1 widget: - source_sentence: Does my insurance policy exclude medical costs for the first 30 days' illness, but cover accident-related claims? sentences: - "any notice for renewal. \nb. Renewal shall not be denied on the ground that\ \ the insured person had made a claim or claims in the preceding \npolicy years." - '• Minimum entry age for proposer/ spouse/ dependent parents - 18 years • Maximum Entry Age for proposer/ spouse/ dependent parents - 80 years • Minimum Entry age for dependent Children - 3 months • Maximum Entry Age for dependent Children - 25 years' - "a. Expenses related to the treatment of any illness within 30 days from the\ \ first policy commencement date shall \nbe excluded except claims arising due\ \ to an accident, provided the same are covered." - source_sentence: I have a pre-authorization for a procedure, what should I bring along when I get admitted to the hospital to avoid paying the medical bills? sentences: - "Obesity/ Weight Control \nChange of Gender treatments\nCosmetic or plastic\ \ Surgery \nHazardous or Adventure sports \nBreach of law \nExcluded Providers\n\ Substance Abuse and Alcohol \nWellness and Rejuvenation \nDietary Supplements\ \ & \nSubstances" - '56-60 11,950 12,760 7,874 18,887 13,573 9,243 17,848 13,162 21,348 16,437 11,308 24,345 18,177 13,206 35,360 29,906 24,726 61-65 14,352 15,319 9,444 22,688 16,298 11,089 21,442 15,804 25,652 19,744 13,571 29,256 21,833 15,852 42,495 35,932 29,699' - "specified must be produced to the Network Hospital identified in the pre-authorization\ \ letter at the time of Y our \nadmission to the same.\niii. If the procedure\ \ above is followed, Y ou will not be required to directly pay for the Medical\ \ Expenses above" - source_sentence: Can you tell me the range of insured sum for a 4 member family in INR? sentences: - "i. Obesity-related cardiomyopathy\n ii. Coronary heart disease\n iii. Severe\ \ Sleep Apnea\n iv. Uncontrolled T ype2 Diabetes\n7. Change-of-gender treatments:\ \ (Excl07)" - 'Age/ deduc- tible 200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000 300000 500000 1000000 300000 500000 1000000 21-25 5,010 5,361 3,326 7,906 5,695 3,899 7,466 5,523 8,918 6,882 4,759 10,163 7,610 5,553 14,756 12,498 10,354' - "CIN: U66010PN2000PLC015329, UIN:BAJHLIP23069V032223 13\nFAMILY SIZE: 4 MEMBER\n\ Sum \nInsured \n(in INR)\n300000 500000 1000000 1500000 2000000 2500000 5000000\n\ Age/\ndeduc-\ntible" - source_sentence: Does IRDAI have rules on portability that let someone who's been continuously insured under any health policy from an Indian general or health insurer carry over waiting period benefits? sentences: - '◼ WHAT ARE THE EXCLUSIONS AND WAITING PERIOD UNDER THE POLICY? I. Waiting Period A. Pre-Existing Diseases - Code- Excl01 a. Expenses related to the treatment of a pre-existing Disease (PED) and its direct complications shall be excluded' - "has been continuously covered without any lapses under any health insurance policy\ \ with an Indian General/\nHealth insurer, the proposed insured person will get\ \ the accrued continuity benefits in waiting periods as per \nIRDAI guidelines\ \ on portability." - "Cumulative Bonus:\n For every claim free policy year, there will be increase\ \ of 10% of \nthe Sum Insured, maximum up to 100%. 
If a claim is made in any \n\ particular Policy Year, the Cumulative Bonus accrued shall not be \nreduced.\n\ SBIG Health Super T op-Up," - source_sentence: what kind of coverage is provided by insurance for medical expenses that go beyond the normal amount? sentences: - "Enhances any existing health policy from any insurance provider \n- corporate\ \ or personal" - 'Age/ deduc- tible 200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000 300000 500000 1000000 300000 500000 1000000 21-25 6,544 7,011 4,345 10,389 7,490 5,127 9,839 7,283 11,767 9,087 6,289 13,419 10,054 7,343 19,518 16,543 13,717' - "health insurance cover and provides wider health protection for you and your\ \ family. In case of higher expenses \ndue to illness or accidents, Extra Care\ \ Plus policy takes care of the additional expenses. It is important to consider" datasets: - surajvbangera/mediclaim pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-cos-v1 results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.3020833333333333 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8020833333333334 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.875 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9583333333333334 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3020833333333333 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2673611111111111 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17499999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09583333333333333 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.3020833333333333 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8020833333333334 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.875 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9583333333333334 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6497808285407043 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5484209656084658 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5512795209742883 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.28125 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.78125 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.875 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9479166666666666 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.28125 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2604166666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17499999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09479166666666665 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.28125 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.78125 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.875 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9479166666666666 name: Cosine Recall@10 - type: cosine_ndcg@10 
value: 0.6294431516700937 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5250578703703704 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5287000615125614 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.3020833333333333 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7916666666666666 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8854166666666666 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9375 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3020833333333333 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2638888888888889 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1770833333333333 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09375 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.3020833333333333 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7916666666666666 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8854166666666666 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9375 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6396822227743622 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5409846230158731 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5445532958553793 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.2708333333333333 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.78125 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.84375 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9479166666666666 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2708333333333333 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2604166666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.16874999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09479166666666666 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.2708333333333333 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.78125 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.84375 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9479166666666666 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6229142362169651 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5167080026455027 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5187267142104471 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.25 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7291666666666666 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8333333333333334 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9166666666666666 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.25 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.24305555555555558 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.16666666666666666 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09166666666666666 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.25 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7291666666666666 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8333333333333334 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9166666666666666 name: Cosine Recall@10 - type: 
cosine_ndcg@10 value: 0.5921613565527261 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.486338458994709 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.49077409326175775 name: Cosine Map@100 ---

# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-cos-v1

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) on the [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim)

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("surajvbangera/mediclaim_embedding")
# Run inference
sentences = [
    'what kind of coverage is provided by insurance for medical expenses that go beyond the normal amount?',
    'health insurance cover and provides wider health protection for you and your\nfamily. In case of higher expenses \ndue to illness or accidents, Extra Care\nPlus policy takes care of the additional expenses. It is important to consider',
    'Age/\ndeduc-\ntible\n200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000 300000 500000 1000000 300000 500000 1000000\n21-25 6,544 7,011 4,345 10,389 7,490 5,127 9,839 7,283 11,767 9,087 6,289 13,419 10,054 7,343 19,518 16,543 13,717',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
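Since the model was trained with Matryoshka dimensions of 768, 512, 256, 128 and 64, the embeddings can also be truncated at load time. A minimal sketch, assuming a sentence-transformers version that supports the `truncate_dim` argument; the query text is only illustrative:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions.
# 256 is one of the Matryoshka dimensions this model was trained with.
model = SentenceTransformer("surajvbangera/mediclaim_embedding", truncate_dim=256)

queries = ["Is a preventive health check-up covered under the policy?"]
embeddings = model.encode(queries)
print(embeddings.shape)  # (1, 256) - smaller vectors, cheaper to store and search
```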
## Evaluation

### Metrics

#### Information Retrieval

* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | dim_768    | dim_512    | dim_256    | dim_128    | dim_64     |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1   | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_accuracy@3   | 0.8021     | 0.7812     | 0.7917     | 0.7812     | 0.7292     |
| cosine_accuracy@5   | 0.875      | 0.875      | 0.8854     | 0.8438     | 0.8333     |
| cosine_accuracy@10  | 0.9583     | 0.9479     | 0.9375     | 0.9479     | 0.9167     |
| cosine_precision@1  | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_precision@3  | 0.2674     | 0.2604     | 0.2639     | 0.2604     | 0.2431     |
| cosine_precision@5  | 0.175      | 0.175      | 0.1771     | 0.1687     | 0.1667     |
| cosine_precision@10 | 0.0958     | 0.0948     | 0.0938     | 0.0948     | 0.0917     |
| cosine_recall@1     | 0.3021     | 0.2812     | 0.3021     | 0.2708     | 0.25       |
| cosine_recall@3     | 0.8021     | 0.7812     | 0.7917     | 0.7812     | 0.7292     |
| cosine_recall@5     | 0.875      | 0.875      | 0.8854     | 0.8438     | 0.8333     |
| cosine_recall@10    | 0.9583     | 0.9479     | 0.9375     | 0.9479     | 0.9167     |
| **cosine_ndcg@10**  | **0.6498** | **0.6294** | **0.6397** | **0.6229** | **0.5922** |
| cosine_mrr@10       | 0.5484     | 0.5251     | 0.541      | 0.5167     | 0.4863     |
| cosine_map@100      | 0.5513     | 0.5287     | 0.5446     | 0.5187     | 0.4908     |
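The per-dimension scores above come from running the same retrieval evaluation at each Matryoshka dimension. Below is a minimal sketch of how such an evaluation could be set up, not the exact script used for this card; the `queries`, `corpus` and `relevant_docs` dictionaries are placeholders you would build from the mediclaim anchor/positive pairs, and it assumes a sentence-transformers version whose evaluators accept `truncate_dim`:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("surajvbangera/mediclaim_embedding")

# Placeholder data: query id -> text, doc id -> text, query id -> set of relevant doc ids.
queries = {"q1": "Is a preventive health check-up covered under the policy?"}
corpus = {"d1": "The Insured may avail a health check-up, only for Preventive Test ..."}
relevant_docs = {"q1": {"d1"}}

for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # evaluate on embeddings truncated to this dimension
    )
    print(evaluator(model))
```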
## Training Details

### Training Dataset

#### mediclaim

* Dataset: [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) at [943cab1](https://huggingface.co/datasets/surajvbangera/mediclaim/tree/943cab115f9a1d649d8a886fb35668e54ad0e1f7)
* Size: 956 training samples
* Columns: anchor and positive
* Approximate statistics based on the first 956 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | Can I get a preventive health check-up covered under my insurance, and if yes, is there a limit to it? | by the Medical Practitioner.<br>vii. The Deductible shall not be applicable on this benefit.<br>Stay Fit Health Check Up<br>The Insured may avail a health check-up, only for Preventive<br>Test, up to a limit specified in the Policy Schedule, provided |
  | Which claims are excluded if they don't follow the Transplantation of Human Organs Amendment Bill 2011? | 4 CIN: U66010PN2000PLC015329, UIN: BAJHLIP23069V032223<br>Specific exclusions:<br>1. Claims which have NOT been admitted under Medical expenses section<br>2. Claims not in compliance with THE TRANSPLANTATION OF HUMAN ORGANS (AMENDMENT) BILL, 2011 |
  | Will the insurance pay for lawful abortion and related hospital stays? | ii. We will also cover expenses towards lawful medical termination of pregnancy during the Policy period.<br>iii. In patient Hospitalization Expenses of pre-natal and post-natal hospitalization |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
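The parameters above correspond to wrapping `MultipleNegativesRankingLoss` inside `MatryoshkaLoss`. A minimal sketch of how this loss can be constructed with the sentence-transformers API (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1")

# In-batch negatives ranking loss over (anchor, positive) pairs ...
base_loss = MultipleNegativesRankingLoss(model)

# ... applied at every Matryoshka dimension with equal weights.
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```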
### Evaluation Dataset

#### mediclaim

* Dataset: [mediclaim](https://huggingface.co/datasets/surajvbangera/mediclaim) at [943cab1](https://huggingface.co/datasets/surajvbangera/mediclaim/tree/943cab115f9a1d649d8a886fb35668e54ad0e1f7)
* Size: 956 evaluation samples
* Columns: anchor and positive
* Approximate statistics based on the first 956 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | Is there any refund for medical exams if I get a policy and it's accepted? | • If pre-policy checkup is conducted, 50% of the medical tests charges would be reimbursed, subject to acceptance<br>of proposal and policy issuance.<br>Age of the person<br>to be insured<br>Sum Insured Medical Examination |
  | Are there any exclusions for coverage of substance abuse treatment or its consequences? | are payable but not the complete claim.<br>12. Treatment for Alcoholism, drug or substance abuse or any addictive condition and consequences thereof.<br>(Excl12) |
  | Can you tell me about the medical bills I might have within 90 days after being discharged? | CIN: U66010PN2000PLC015329, UIN:BAJHLIP23069V032223 3<br>c. Post-hospitalisation expenses<br>The medical expenses incurred in the 90 days immediately after you were discharged, provided that: |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
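Both the training and evaluation samples come from the same mediclaim dataset of (anchor, positive) pairs. A minimal sketch of loading it and holding out a portion for evaluation; the split name and the 90/10 ratio are assumptions rather than details taken from this card:

```python
from datasets import load_dataset

# Load the anchor/positive pairs (assumes a single "train" split of 956 rows).
dataset = load_dataset("surajvbangera/mediclaim", split="train")
print(dataset.column_names)  # expected: ['anchor', 'positive']

# Hold out 10% of the pairs for evaluation (illustrative choice).
split = dataset.train_test_split(test_size=0.1, seed=42)
train_dataset, eval_dataset = split["train"], split["test"]
```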
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 40
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters
Click to expand - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 40 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional
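These hyperparameters map directly onto the sentence-transformers v3 training API. The sketch below shows how a comparable run could be configured, not the exact script behind this card; the output directory is a placeholder, `save_strategy="epoch"` is added so the best checkpoint can be reloaded, and `train_dataset`, `eval_dataset` and `loss` refer to the earlier sketches:

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1")

args = SentenceTransformerTrainingArguments(
    output_dir="mediclaim_embedding",  # placeholder output path
    num_train_epochs=40,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; needed so load_best_model_at_end can restore a checkpoint
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate positives within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # (anchor, positive) pairs from the mediclaim dataset
    eval_dataset=eval_dataset,
    loss=loss,                    # the MatryoshkaLoss sketched earlier
)
trainer.train()
```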
### Training Logs

| Epoch    | Step   | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:--------:|:------:|:-------------:|:---------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| -1       | -1     | -             | -               | 0.4723                 | 0.4748                 | 0.5015                 | 0.4589                 | 0.3867                |
| 1.0      | 2      | -             | 1.5925          | 0.4821                 | 0.4846                 | 0.5122                 | 0.4604                 | 0.3971                |
| 2.0      | 4      | -             | 1.5925          | 0.4821                 | 0.4846                 | 0.5122                 | 0.4604                 | 0.3971                |
| 3.0      | 6      | -             | 1.0402          | 0.5431                 | 0.5468                 | 0.5530                 | 0.5009                 | 0.4435                |
| 4.0      | 8      | -             | 0.7900          | 0.5876                 | 0.5926                 | 0.6075                 | 0.5484                 | 0.4726                |
| 5.0      | 10     | 33.0646       | 0.6077          | 0.5890                 | 0.6039                 | 0.6270                 | 0.5779                 | 0.5072                |
| 6.0      | 12     | -             | 0.5213          | 0.6357                 | 0.6379                 | 0.6522                 | 0.5966                 | 0.5417                |
| 7.0      | 14     | -             | 0.4735          | 0.6425                 | 0.6395                 | 0.6286                 | 0.5995                 | 0.5795                |
| 8.0      | 16     | -             | 0.4416          | 0.6253                 | 0.6387                 | 0.6227                 | 0.5903                 | 0.5738                |
| 9.0      | 18     | -             | 0.4236          | 0.6303                 | 0.6489                 | 0.6387                 | 0.6179                 | 0.5670                |
| **10.0** | **20** | **8.8456**    | **0.4115**      | **0.6465**             | **0.6519**             | **0.6369**             | **0.6112**             | **0.572**             |
| 11.0     | 22     | -             | 0.4059          | 0.6447                 | 0.6270                 | 0.6318                 | 0.6169                 | 0.5950                |
| 12.0     | 24     | -             | 0.4036          | 0.6382                 | 0.6318                 | 0.6346                 | 0.6063                 | 0.6026                |
| 13.0     | 26     | -             | 0.4022          | 0.6485                 | 0.6410                 | 0.6441                 | 0.6163                 | 0.5900                |
| 14.0     | 28     | -             | 0.4022          | 0.6520                 | 0.6426                 | 0.6597                 | 0.6225                 | 0.6001                |
| 15.0     | 30     | 4.4602        | 0.4033          | 0.6507                 | 0.6363                 | 0.6576                 | 0.6217                 | 0.6134                |
| 16.0     | 32     | -             | 0.4047          | 0.6530                 | 0.6389                 | 0.6609                 | 0.6350                 | 0.6068                |
| 17.0     | 34     | -             | 0.4058          | 0.6501                 | 0.6344                 | 0.6501                 | 0.6281                 | 0.5997                |
| 18.0     | 36     | -             | 0.4067          | 0.6509                 | 0.6333                 | 0.6553                 | 0.6360                 | 0.6050                |
| 19.0     | 38     | -             | 0.4070          | 0.6561                 | 0.6331                 | 0.6602                 | 0.6397                 | 0.6051                |
| 20.0     | 40     | 3.9605        | 0.4071          | 0.6498                 | 0.6294                 | 0.6397                 | 0.6229                 | 0.5922                |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```