---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:7960
  - loss:CoSENTLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
  - source_sentence: >-
      Okay, I got it. So just to give you the second price if ever for the
      Samsung Galaxy is ##. It comes with a ## this one. Five gigabyte of data
      or ## gigabyte it will only it will only give you a £39.05. That is for
      that is for the #### G but I do suggest that you go with the equipment
      before because that is only around £31.
    sentences:
      - I can provide to you . Are you happy to go ahead with this?
      - Thank you for calling over to my name is how can I help you.
      - Thank you and could you please confirm to me what is your full name.
  - source_sentence: His number well, so you're looking to travel abroad anytime soon.
    sentences:
      - >-
        I'm now going to read out some terms and conditions to complete the
        order.
      - >-
        Can you provide me with character number one of your security answer
        please?
      - >-
        So looking at your usage of your mobile data. I just wanna share with
        you that your usage for the past six months. It says here it's up to
        gigabytes of mobile data. Okay and in order for us to.
  - source_sentence: >-
      Hello. Hi, thank you so much for patiently waiting. So, I'd look into our
      accessory so for the airbags the one that we have an ongoing promotion
      right now for the accessories is the airport second generation. So you
      can.
    sentences:
      - >-
        The same discounts you can have been added as an additional line and do
        into your account. It needs be entitled to % discount off of the costs.
      - Are you planning to get a new sim only plan or a new phone?
      - >-
        I'm now going to send you a one time code. The first message is a
        warning to not give the code to scammers pretending to work for O2. The
        second message is the code to continue with your request.
  - source_sentence: >-
      Okay, so you can know just spend. Yeah, but anytime via web chat or
      customer Services. Okay.
    sentences:
      - >-
        So looking at your usage of your mobile data. I just wanna share with
        you that your usage for the past six months. It says here it's up to
        gigabytes of mobile data. Okay and in order for us to.
      - >-
        Checking your account I can see you are on the and you have been paying
        £ per month. Is that correct?
      - >-
        So looking at your usage of your mobile data. I just wanna share with
        you that your usage for the past six months. It says here it's up to
        gigabytes of mobile data. Okay and in order for us to.
  - source_sentence: 'Oh, okay, so just the iPhone ## only.'
    sentences:
      - >-
        So I'm actually now checking here just for me to get this deal that you
        had seen.
      - >-
        I'm now going to send you a one time code. The first message is a
        warning to not give the code to scammers pretending to work for O2. The
        second message is the code to continue with your request.
      - >-
        Yes, that's correct for know. Our price is £ and then it won't go down
        to £ after you apply the discount.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts dev
          type: sts_dev
        metrics:
          - type: pearson_cosine
            value: 0.5906538719225906
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.2789361723892506
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.630943535003128
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.27814879203445947
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.6348761842006896
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.2789361726048565
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.5906538598201696
            name: Pearson Dot
          - type: spearman_dot
            value: 0.2789361717424329
            name: Spearman Dot
          - type: pearson_max
            value: 0.6348761842006896
            name: Pearson Max
          - type: spearman_max
            value: 0.2789361726048565
            name: Spearman Max
---

SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
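
If you want to see what each of the three modules contributes, the pipeline can be reproduced with plain transformers and torch. This is a minimal sketch under stated assumptions (both libraries installed; the repo's tokenizer and MPNet weights load via Auto classes, as is standard for sentence-transformers repos), not the official API: a 384-token MPNet encoder, attention-mask-aware mean pooling, then L2 normalization, which is also why the dot-product and cosine metrics reported below are nearly identical.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

repo = "enochlev/xlm-similarity-large"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

sentences = ["Are you planning to get a new sim only plan or a new phone?"]

# (0) Transformer: tokenize (truncating to 384 tokens) and encode
batch = tokenizer(sentences, padding=True, truncation=True, max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # [batch, seq_len, 768]

# (1) Pooling: mean over real (non-padding) tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])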

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("enochlev/xlm-similarity-large")
# Run inference
sentences = [
    'Oh, okay, so just the iPhone ## only.',
    "Yes, that's correct for know. Our price is £ and then it won't go down to £ after you apply the discount.",
    "I'm now going to send you a one time code. The first message is a warning to not give the code to scammers pretending to work for O2. The second message is the code to continue with your request.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
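
Because the training pairs match live agent utterances against reference script lines, a natural use is ranking: embed one utterance and a list of candidate lines, then pick the highest-scoring candidate. A small sketch, reusing the widget examples above (variable names are illustrative):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("enochlev/xlm-similarity-large")

utterance = "His number well, so you're looking to travel abroad anytime soon."
script_lines = [
    "I'm now going to read out some terms and conditions to complete the order.",
    "Can you provide me with character number one of your security answer please?",
    "Are you planning to get a new sim only plan or a new phone?",
]

# similarity() accepts the numpy arrays returned by encode()
scores = model.similarity(model.encode([utterance]), model.encode(script_lines))  # shape [1, 3]
best = scores.argmax().item()
print(script_lines[best], scores[0, best].item())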

Evaluation

Metrics

Semantic Similarity

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.5907 |
| spearman_cosine    | 0.2789 |
| pearson_manhattan  | 0.6309 |
| spearman_manhattan | 0.2781 |
| pearson_euclidean  | 0.6349 |
| spearman_euclidean | 0.2789 |
| pearson_dot        | 0.5907 |
| spearman_dot       | 0.2789 |
| pearson_max        | 0.6349 |
| spearman_max       | 0.2789 |
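
These metric names match the output of Sentence Transformers' EmbeddingSimilarityEvaluator on the sts_dev split. To recompute comparable numbers on your own labelled pairs, a minimal sketch (the three pairs below are just the training samples shown later in this card; a meaningful evaluation needs far more data):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("enochlev/xlm-similarity-large")

sentences1 = ["Hello, welcome to O2. My name is __ How can I help you today?"] * 3
sentences2 = [
    "Thank you for calling over to my name is how can I help you.",
    "I was about to ask us to confirm the email address that we have on the account or on your file. So what I can you tell me your email address.",
    "Are you planning to get a new sim only plan or a new phone?",
]
gold_scores = [1.0, 0.2, 0.2]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts_dev")
print(evaluator(model))  # dict of pearson/spearman values per similarity function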

Training Details

Training Dataset

Unnamed Dataset

  • Size: 7,960 training samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 1000 samples:
    |         | text1                                             | text2                                              | label                          |
    |:--------|:--------------------------------------------------|:---------------------------------------------------|:-------------------------------|
    | type    | string                                            | string                                             | float                          |
    | details | min: 5 tokens, mean: 20.94 tokens, max: 66 tokens | min: 13 tokens, mean: 28.35 tokens, max: 71 tokens | min: 0.2, mean: 0.22, max: 1.0 |
  • Samples:
    | text1 | text2 | label |
    |:------|:------|:------|
    | Hello, welcome to O2. My name is __ How can I help you today? | Thank you for calling over to my name is how can I help you. | 1.0 |
    | Hello, welcome to O2. My name is __ How can I help you today? | I was about to ask us to confirm the email address that we have on the account or on your file. So what I can you tell me your email address. | 0.2 |
    | Hello, welcome to O2. My name is __ How can I help you today? | Are you planning to get a new sim only plan or a new phone? | 0.2 |
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,980 evaluation samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 1000 samples:
    |         | text1                                              | text2                                              | label                          |
    |:--------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------|
    | type    | string                                             | string                                             | float                          |
    | details | min: 8 tokens, mean: 36.02 tokens, max: 241 tokens | min: 13 tokens, mean: 28.35 tokens, max: 71 tokens | min: 0.2, mean: 0.22, max: 1.0 |
  • Samples:
    | text1 | text2 | label |
    |:------|:------|:------|
    | So for example, since this is for the 2nd line bro more. So if you have any family that you want to add on your account. Yeah, we do have a same offer plan. This offer promo today. | The same discounts you can have been added as an additional line and do into your account. It needs be entitled to % discount off of the costs. | 1.0 |
    | So for example, since this is for the 2nd line bro more. So if you have any family that you want to add on your account. Yeah, we do have a same offer plan. This offer promo today. | I was about to ask us to confirm the email address that we have on the account or on your file. So what I can you tell me your email address. | 0.2 |
    | So for example, since this is for the 2nd line bro more. So if you have any family that you want to add on your account. Yeah, we do have a same offer plan. This offer promo today. | Are you planning to get a new sim only plan or a new phone? | 0.2 |
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • batch_sampler: no_duplicates
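
Reassembled as code, training can be reproduced in outline with the Sentence Transformers trainer. The dataset below is a hypothetical stand-in with the card's column layout (the actual 7,960/1,980-pair splits are not published); the hyperparameters mirror the non-default values above, and CoSENTLoss uses the scale and pairwise_cos_sim settings listed earlier.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Stand-in data with the card's columns: text1, text2, float label
train_dataset = Dataset.from_dict({
    "text1": ["Hello, welcome to O2. My name is __ How can I help you today?"] * 3,
    "text2": [
        "Thank you for calling over to my name is how can I help you.",
        "I was about to ask us to confirm the email address that we have on the account or on your file. So what I can you tell me your email address.",
        "Are you planning to get a new sim only plan or a new phone?",
    ],
    "label": [1.0, 0.2, 0.2],
})
eval_dataset = train_dataset  # placeholder; the card used a separate 1,980-pair split

loss = CoSENTLoss(model, scale=20.0)  # similarity_fct defaults to pairwise_cos_sim

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="epoch",
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()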

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Validation Loss | sts_dev_spearman_max |
|:-----:|:----:|:---------------:|:--------------------:|
| 1.0   | 160  | 0.1772          | 0.2789               |

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.2.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.1
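
To approximate this environment, pinning the listed versions is a reasonable starting point (PyTorch 2.5.1+cu124 is a CUDA build; install the matching wheel for your platform separately from pytorch.org):

pip install sentence-transformers==3.2.1 transformers==4.45.2 accelerate==1.1.1 datasets==3.1.0 tokenizers==0.20.1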

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}