SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
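
Concretely, stage (0) is a BertModel forward pass, stage (1) keeps only the [CLS] token embedding, and stage (2) L2-normalizes it. Below is a minimal sketch of the same computation in plain transformers, assuming the Hub checkpoint loads as a BertModel together with its tokenizer:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "vin00d/snowflake-arctic-legal-ft-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

batch = tokenizer(
    ["What is a fraudulent transfer?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    out = model(**batch)            # (0) Transformer: BertModel forward pass
cls = out.last_hidden_state[:, 0]   # (1) Pooling: keep only the [CLS] token
emb = F.normalize(cls, p=2, dim=1)  # (2) Normalize: unit-length embeddings
print(emb.shape)                    # torch.Size([1, 1024])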

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("vin00d/snowflake-arctic-legal-ft-1")
# Run inference
sentences = [
    "How does a fraudulent transfer relate to a debtor's intent in bankruptcy cases?",
    "A serious crime, usually punishable by at least one year in prison.\nFile\nTo place a paper in the official custody of the clerk of court to enter into the files or records\nof a case.\nFraudulent transfer\nA transfer of a debtor's property made with intent to defraud or for which the debtor\nreceives less than the transferred property's value.\nFresh start\nThe characterization of a debtor's status after bankruptcy, i.e., free of most debts. (Giving\ndebtors a fresh start is one purpose of the Bankruptcy Code.)\nG\nGrand jury\nA body of 16-23 citizens who listen to evidence of criminal allegations, which is presented by\nthe prosecutors, and determine whether there is probable cause to believe an individual",
    '-3-\nArgument:  A reason given in proof or rebuttal to persuade a judge or jury.\nAt Issue:  Whenever the parties to an action come to a point in the pleadings or argument which\nis affirmed on one side and denied on the other, the points  are said to be "at issue".\nAttachment:  The taking of property into legal custody by an enforcement officer (See specialty\nsection: Recovery of Chattel).\nAttestation:  The act of witnessing an instrument in writing at the request of the party making the\ninstrument and signing  it as a witness.\nAttorney of Record:  Attorney whose name appears in the court’s  records or files of a case.\nAward:  A decision of an Arbitrator, judge or jury.\n-B-',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
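
Because the output vectors are unit-normalized, cosine similarity reduces to a dot product, so the model drops directly into a retrieval loop. Here is a small semantic-search sketch using the library's util.semantic_search helper; the corpus strings are illustrative, not taken from the training set:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("vin00d/snowflake-arctic-legal-ft-1")

# Illustrative corpus; any list of legal-glossary passages would do.
corpus = [
    "Fraudulent transfer: a transfer of a debtor's property made with intent to defraud.",
    "Fresh start: the characterization of a debtor's status after bankruptcy.",
    "Attachment: the taking of property into legal custody by an enforcement officer.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode(
    "What happens to a debtor's property if they try to defraud creditors?",
    convert_to_tensor=True,
)
# Returns, per query, a ranked list of {"corpus_id", "score"} dicts.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")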

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9318
cosine_accuracy@3 0.9318
cosine_accuracy@5 0.9545
cosine_accuracy@10 1.0
cosine_precision@1 0.9318
cosine_precision@3 0.3106
cosine_precision@5 0.1909
cosine_precision@10 0.1
cosine_recall@1 0.9318
cosine_recall@3 0.9318
cosine_recall@5 0.9545
cosine_recall@10 1.0
cosine_ndcg@10 0.9565
cosine_mrr@10 0.9438
cosine_map@100 0.9438
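
These figures come from the library's standard IR evaluator. As a hedged sketch of how such numbers are produced, the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("vin00d/snowflake-arctic-legal-ft-1")

# Illustrative evaluation data: id -> text, and query id -> relevant doc ids.
queries = {"q1": "How does a fraudulent transfer relate to a debtor's intent?"}
corpus = {
    "d1": "Fraudulent transfer: a transfer of a debtor's property made with intent to defraud.",
    "d2": "Award: a decision of an arbitrator, judge or jury.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="legal-ir")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)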

Training Details

Training Dataset

Unnamed Dataset

  • Size: 210 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 210 samples:
    sentence_0: string, min: 9 tokens, mean: 17.36 tokens, max: 33 tokens
    sentence_1: string, min: 4 tokens, mean: 122.9 tokens, max: 192 tokens
  • Samples (sentence_0 is the query, sentence_1 the paired passage):

    sentence_0: What is the purpose of the glossary of common legal terms provided in the context?
    sentence_1: GLOSSARY ‐ COMMON LEGAL TERMS
      NOTE: The following definitions are not legal definitions. Rather, these definitions are
      intended to give you a general idea of the meanings of common legal words. For
      comprehensive Definitions of legal terms, you may wish to consult a legal dictionary.
      “Black’s Law Dictionary” is one such legal dictionary which is usually available at
      most law libraries.
      This glossary of common legal terms is also available on‐line at:
      http://www.nycourts.gov/lawlibraries/glossary.shtml
      ADDITIONAL ON‐LINE RESOURCES:
      http://www.nolo.com/glossary.cfm (Nolo’s on‐line legal dictionary)
      http://www.law‐dictionary.org/ (free on‐line legal dictionary search engine)
      http://www.law.cornell.edu/wex

    sentence_0: Where can one find a comprehensive legal dictionary for more detailed definitions of legal terms?
    sentence_1: (the same glossary passage as above)

    sentence_0: What organization maintains the legal dictionary and encyclopedia mentioned in the context?
    sentence_1: Legal dictionary and encyclopedia maintained by the Legal Information Institute at Cornell Law School.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
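
A practical payoff of this loss: the trained embeddings tolerate truncation to any of the listed dimensions. A sketch of loading the model at a reduced dimensionality via the standard truncate_dim argument:

from sentence_transformers import SentenceTransformer

# 256 is one of the matryoshka_dims listed above; smaller vectors trade a
# little accuracy for cheaper storage and faster search.
model_256 = SentenceTransformer("vin00d/snowflake-arctic-legal-ft-1", truncate_dim=256)
embeddings = model_256.encode(["What is a fraudulent transfer?"])
print(embeddings.shape)
# (1, 256)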
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
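
Expressed with the sentence-transformers v3 trainer API, this setup corresponds roughly to the sketch below. The training pair shown is illustrative (the real dataset had 210 question/passage pairs), and the eval configuration is elided:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Columns match the dataset description above; one illustrative pair shown.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is a fraudulent transfer?"],
    "sentence_1": ["A transfer of a debtor's property made with intent to defraud."],
})

# MultipleNegativesRankingLoss applied at each truncated dimensionality.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="snowflake-arctic-legal-ft-1",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
    # eval_strategy="steps" was also used; it additionally requires an
    # eval_dataset or evaluator, omitted here for brevity.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()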

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 21 0.9240
2.0 42 0.9628
2.3810 50 0.9628
3.0 63 0.9502
4.0 84 0.9569
4.7619 100 0.9563
5.0 105 0.9556
6.0 126 0.9569
7.0 147 0.9555
7.1429 150 0.9555
8.0 168 0.9565
9.0 189 0.9565
9.5238 200 0.9565
10.0 210 0.9565

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}