SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-cos-v1

This is a sentence-transformers model finetuned from sentence-transformers/multi-qa-mpnet-base-cos-v1 on the mediclaim dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/multi-qa-mpnet-base-cos-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: mediclaim

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
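
The three modules compute, in order: (0) MPNet token embeddings over up to 512 tokens, (1) mean pooling across non-padding tokens, and (2) L2 normalization, so the dot product of two embeddings equals their cosine similarity. For illustration only, here is a minimal sketch of the same pipeline with the raw transformers API (assuming the checkpoint loads via AutoModel, as is typical for Sentence Transformers repositories; prefer SentenceTransformer for real use, as shown under Usage below):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("surajvbangera/mediclaim_embedding")
encoder = AutoModel.from_pretrained("surajvbangera/mediclaim_embedding")

# Module (0): tokenize and encode (max_seq_length: 512)
batch = tokenizer(["some policy text"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # [1, seq_len, 768]

# Module (1): mean pooling over non-padding tokens
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Module (2): L2-normalize, so dot product == cosine similarity
embedding = F.normalize(embedding, p=2, dim=1)  # [1, 768]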

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("surajvbangera/mediclaim_embedding")
# Run inference
sentences = [
    'what kind of coverage is provided by insurance for medical expenses that go beyond the normal amount?',
    'health insurance cover and provides wider health protection for you and your family. In case of higher expenses \ndue to illness or accidents, Extra Care Plus policy takes care of the additional expenses. It is important to consider',
    'Age/\ndeduc-\ntible\n200000 200000 300000 200000 300000 500000 300000 500000 300000 500000 1000000 300000 500000 1000000 300000 500000 1000000\n21-25 6,544 7,011 4,345 10,389 7,490 5,127 9,839 7,283 11,767 9,087 6,289 13,419 10,054 7,343 19,518 16,543 13,717',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
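
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to 512, 256, 128, or 64 dimensions at a modest quality cost; the per-dimension metrics below quantify the trade-off. A sketch using the truncate_dim argument available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Reload with Matryoshka truncation; any of 768, 512, 256, 128, 64 works
model = SentenceTransformer("surajvbangera/mediclaim_embedding", truncate_dim=256)
embeddings = model.encode(sentences)  # same sentences as above
print(embeddings.shape)
# [3, 256]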

Evaluation

Metrics

Information Retrieval

Metric dim_768 dim_512 dim_256 dim_128 dim_64
cosine_accuracy@1 0.3021 0.2812 0.3021 0.2708 0.25
cosine_accuracy@3 0.8021 0.7812 0.7917 0.7812 0.7292
cosine_accuracy@5 0.875 0.875 0.8854 0.8438 0.8333
cosine_accuracy@10 0.9583 0.9479 0.9375 0.9479 0.9167
cosine_precision@1 0.3021 0.2812 0.3021 0.2708 0.25
cosine_precision@3 0.2674 0.2604 0.2639 0.2604 0.2431
cosine_precision@5 0.175 0.175 0.1771 0.1687 0.1667
cosine_precision@10 0.0958 0.0948 0.0938 0.0948 0.0917
cosine_recall@1 0.3021 0.2812 0.3021 0.2708 0.25
cosine_recall@3 0.8021 0.7812 0.7917 0.7812 0.7292
cosine_recall@5 0.875 0.875 0.8854 0.8438 0.8333
cosine_recall@10 0.9583 0.9479 0.9375 0.9479 0.9167
cosine_ndcg@10 0.6498 0.6294 0.6397 0.6229 0.5922
cosine_mrr@10 0.5484 0.5251 0.541 0.5167 0.4863
cosine_map@100 0.5513 0.5287 0.5446 0.5187 0.4908
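
All figures are cosine-similarity retrieval metrics over anchor/positive pairs, computed with embeddings truncated to each Matryoshka dimension. This mirrors the model's intended use as a query-to-policy-clause retriever; a minimal sketch, with a hypothetical two-clause corpus standing in for real policy documents:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("surajvbangera/mediclaim_embedding")

# Hypothetical policy clauses standing in for a real corpus
corpus = [
    "Post-hospitalisation expenses are covered for 90 days after discharge.",
    "Treatment for alcoholism or substance abuse is excluded (Excl12).",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("Is rehab for drug addiction covered?",
                               convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")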

Training Details

Training Dataset

mediclaim

  • Dataset: mediclaim at 943cab1
  • Size: 956 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 956 samples:
    anchor: string (min 10, mean 23.14, max 85 tokens)
    positive: string (min 6, mean 57.2, max 135 tokens)
  • Samples:
    • anchor: Can I get a preventive health check-up covered under my insurance, and if yes, is there a limit to it?
      positive: by the Medical Practitioner. vii. The Deductible shall not be applicable on this benefit. Stay Fit Health Check Up The Insured may avail a health check-up, only for Preventive Test, up to a limit specified in the Policy Schedule, provided
    • anchor: Which claims are excluded if they don't follow the Transplantation of Human Organs Amendment Bill 2011?
      positive: 4 CIN: U66010PN2000PLC015329, UIN: BAJHLIP23069V032223 Specific exclusions: 1. Claims which have NOT been admitted under Medical expenses section 2. Claims not in compliance with THE TRANSPLANTATION OF HUMAN ORGANS (AMENDMENT) BILL, 2011
    • anchor: Will the insurance pay for lawful abortion and related hospital stays?
      positive: ii. We will also cover expenses towards lawful medical termination of pregnancy during the Policy period. iii. In patient Hospitalization Expenses of pre-natal and post-natal hospitalization
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
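
The loss configuration above corresponds to the following construction (a sketch; data loading and the training loop are omitted):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1")

# MultipleNegativesRankingLoss treats the other positives in a batch as
# negatives; MatryoshkaLoss applies it at each truncated dimension,
# here with equal weight per dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)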
    

Evaluation Dataset

mediclaim

  • Dataset: mediclaim at 943cab1
  • Size: 956 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 956 samples:
    anchor: string (min 10, mean 22.4, max 62 tokens)
    positive: string (min 6, mean 56.76, max 133 tokens)
  • Samples:
    • anchor: Is there any refund for medical exams if I get a policy and it's accepted?
      positive: • If pre-policy checkup is conducted, 50% of the medical tests charges would be reimbursed, subject to acceptance of proposal and policy issuance. Age of the person to be insured Sum Insured Medical Examination
    • anchor: Are there any exclusions for coverage of substance abuse treatment or its consequences?
      positive: are payable but not the complete claim. 12. Treatment for Alcoholism, drug or substance abuse or any addictive condition and consequences thereof. (Excl12)
    • anchor: Can you tell me about the medical bills I might have within 90 days after being discharged?
      positive: CIN: U66010PN2000PLC015329, UIN:BAJHLIP23069V032223 3 c. Post-hospitalisation expenses The medical expenses incurred in the 90 days immediately after you were discharged, provided that:
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 40
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
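
These map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir is hypothetical, and save_strategy is assumed to match eval_strategy, as load_best_model_at_end requires):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="mediclaim_embedding",  # hypothetical output path
    num_train_epochs=40,
    per_device_train_batch_size=32,    # effective batch: 32 * 16 accumulation = 512
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",             # assumed; required by load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)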

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 40
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
-1 -1 - - 0.4723 0.4748 0.5015 0.4589 0.3867
1.0 2 - 1.5925 0.4821 0.4846 0.5122 0.4604 0.3971
2.0 4 - 1.5925 0.4821 0.4846 0.5122 0.4604 0.3971
3.0 6 - 1.0402 0.5431 0.5468 0.5530 0.5009 0.4435
4.0 8 - 0.7900 0.5876 0.5926 0.6075 0.5484 0.4726
5.0 10 33.0646 0.6077 0.5890 0.6039 0.6270 0.5779 0.5072
6.0 12 - 0.5213 0.6357 0.6379 0.6522 0.5966 0.5417
7.0 14 - 0.4735 0.6425 0.6395 0.6286 0.5995 0.5795
8.0 16 - 0.4416 0.6253 0.6387 0.6227 0.5903 0.5738
9.0 18 - 0.4236 0.6303 0.6489 0.6387 0.6179 0.5670
10.0 20 8.8456 0.4115 0.6465 0.6519 0.6369 0.6112 0.5720
11.0 22 - 0.4059 0.6447 0.6270 0.6318 0.6169 0.5950
12.0 24 - 0.4036 0.6382 0.6318 0.6346 0.6063 0.6026
13.0 26 - 0.4022 0.6485 0.6410 0.6441 0.6163 0.5900
14.0 28 - 0.4022 0.6520 0.6426 0.6597 0.6225 0.6001
15.0 30 4.4602 0.4033 0.6507 0.6363 0.6576 0.6217 0.6134
16.0 32 - 0.4047 0.6530 0.6389 0.6609 0.6350 0.6068
17.0 34 - 0.4058 0.6501 0.6344 0.6501 0.6281 0.5997
18.0 36 - 0.4067 0.6509 0.6333 0.6553 0.6360 0.6050
19.0 38 - 0.4070 0.6561 0.6331 0.6602 0.6397 0.6051
20.0 40 3.9605 0.4071 0.6498 0.6294 0.6397 0.6229 0.5922
  • The final row (epoch 20) denotes the saved checkpoint; its metrics match those reported under Evaluation above.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}