---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      What are some of the tools that different systems can apply to problems,
      as mentioned in the context?
    sentences:
      - >-
        Synthetic data as a substantial component of pretraining is becoming
        increasingly common, and the Phi series of models has consistently
        emphasized the importance of synthetic data. Rather than serving as a
        cheap substitute for organic data, synthetic data has several direct
        advantages over organic data.
      - >-
        The number of available systems has exploded. Different systems have
        different tools they can apply to your problems—like Python and
        JavaScript and web search and image generation and maybe even database
        lookups... so you’d better understand what those tools are, what they
        can do and how to tell if the LLM used them or not.

        Did you know ChatGPT has two entirely different ways of running Python
        now?

        Want to build a Claude Artifact that talks to an external API? You’d
        better understand CSP and CORS HTTP headers first.
      - >-
        29th: NotebookLM’s automatically generated podcasts are surprisingly
        effective


        30th: Weeknotes: Three podcasts, two trips and a new plugin system




        October


        1st: OpenAI DevDay 2024 live blog


        2nd: OpenAI DevDay: Let’s build developer tools, not digital God


        15th: ChatGPT will happily write you a thinly disguised horoscope


        17th: Video scraping: extracting JSON data from a 35 second screen
        capture for less than 1/10th of a cent


        18th: Experimenting with audio input and output for the OpenAI Chat
        Completion API


        19th: Running Llama 3.2 Vision and Phi-3.5 Vision on a Mac with
        mistral.rs


        21st: Everything I built with Claude Artifacts this week


        22nd: Initial explorations of Anthropic’s new Computer Use capability
  - source_sentence: >-
      What key themes and pivotal moments in the field of Large Language Models
      were identified in 2024?
    sentences:
      - >-
        One way to think about these models is an extension of the
        chain-of-thought prompting trick, first explored in the May 2022 paper
        Large Language Models are Zero-Shot Reasoners.

        This is that trick where, if you get a model to talk out loud about a
        problem it’s solving, you often get a result which the model would not
        have achieved otherwise.

        o1 takes this process and further bakes it into the model itself. The
        details are somewhat obfuscated: o1 models spend “reasoning tokens”
        thinking through the problem that are not directly visible to the user
        (though the ChatGPT UI shows a summary of them), then outputs a final
        result.
      - >-
        Things we learned about LLMs in 2024






















        Simon Willison’s Weblog

        Subscribe







        Things we learned about LLMs in 2024

        31st December 2024

        A lot has happened in the world of Large Language Models over the course
        of 2024. Here’s a review of things we figured out about the field in the
        past twelve months, plus my attempt at identifying key themes and
        pivotal moments.

        This is a sequel to my review of 2023.

        In this article:
      - >-
        The number of available systems has exploded. Different systems have
        different tools they can apply to your problems—like Python and
        JavaScript and web search and image generation and maybe even database
        lookups... so you’d better understand what those tools are, what they
        can do and how to tell if the LLM used them or not.

        Did you know ChatGPT has two entirely different ways of running Python
        now?

        Want to build a Claude Artifact that talks to an external API? You’d
        better understand CSP and CORS HTTP headers first.
  - source_sentence: Which organizations have models that scored higher than GPT-4-0314?
    sentences:
      - >-
        This prompt-driven custom interface feature is so powerful and easy to
        build (once you’ve figured out the gnarly details of browser sandboxing)
        that I expect it to show up as a feature in a wide range of products in
        2025.

        Universal access to the best models lasted for just a few short months

        For a few short months this year all three of the best available
        models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely
        available to most of the world.
      - >-
        Then there’s the rest. If you browse the Chatbot Arena leaderboard
        today—still the most useful single place to get a vibes-based evaluation
        of models—you’ll see that GPT-4-0314 has fallen to around 70th place.
        The 18 organizations with higher scoring models are Google, OpenAI,
        Alibaba, Anthropic, Meta, Reka AI, 01 AI, Amazon, Cohere, DeepSeek,
        Nvidia, Mistral, NexusFlow, Zhipu AI, xAI, AI21 Labs, Princeton and
        Tencent.

        Training a GPT-4 beating model was a huge deal in 2023. In 2024 it’s an
        achievement that isn’t even particularly notable, though I personally
        still celebrate any time a new organization joins that list.

        Some of those GPT-4 models run on my laptop
      - >-
        This remains astonishing to me. I thought a model with the capabilities
        and output quality of GPT-4 needed a datacenter class server with one or
        more $40,000+ GPUs.

        These models take up enough of my 64GB of RAM that I don’t run them
        often—they don’t leave much room for anything else.

        The fact that they run at all is a testament to the incredible training
        and inference performance gains that we’ve figured out over the past
        year. It turns out there was a lot of low-hanging fruit to be harvested
        in terms of model efficiency. I expect there’s still more to come.
  - source_sentence: What does the term "slop" refer to in the context of generative AI usage?
    sentences:
      - >-
        I think this means that, as individual users, we don’t need to feel any
        guilt at all for the energy consumed by the vast majority of our
        prompts. The impact is likely neglible compared to driving a car down
        the street or maybe even watching a video on YouTube.

        Likewise, training. DeepSeek v3 training for less than $6m is a
        fantastic sign that training costs can and should continue to drop.

        For less efficient models I find it useful to compare their energy usage
        to commercial flights. The largest Llama 3 model cost about the same as
        a single digit number of fully loaded passenger flights from New York to
        London. That’s certainly not nothing, but once trained that model can be
        used by millions of people at no extra training cost.
      - >-
        A lot of people absolutely hate this stuff. In some of the spaces I hang
        out (Mastodon, Bluesky, Lobste.rs, even Hacker News on occasion) even
        suggesting that “LLMs are useful” can be enough to kick off a huge
        fight.

        I get it. There are plenty of reasons to dislike this technology—the
        environmental impact, the (lack of) ethics of the training data, the
        lack of reliability, the negative applications, the potential impact on
        people’s jobs.

        LLMs absolutely warrant criticism. We need to be talking through these
        problems, finding ways to mitigate them and helping people learn how to
        use these tools responsibly in ways where the positive applications
        outweigh the negative.
      - >-
        I love the term “slop” because it so succinctly captures one of the ways
        we should not be using generative AI!

        Slop was even in the running for Oxford Word of the Year 2024, but it
        lost to brain rot.

        Synthetic training data works great

        An idea that surprisingly seems to have stuck in the public
        consciousness is that of “model collapse”. This was first described in
        the paper The Curse of Recursion: Training on Generated Data Makes
        Models Forget in May 2023, and repeated in Nature in July 2024 with the
        more eye-catching headline AI models collapse when trained on
        recursively generated data.
  - source_sentence: >-
      What are the dates of the articles listed as more recent articles in the
      context?
    sentences:
      - >-
        Posted 31st December 2024 at 6:07 pm · Follow me on Mastodon or Twitter
        or subscribe to my newsletter



        More recent articles


        Run LLMs on macOS using llm-mlx and Apple's MLX framework - 15th
        February 2025

        URL-addressable Pyodide Python environments - 13th February 2025

        Using pip to install a Large Language Model that's under 100MB - 7th
        February 2025


         


        This is Things we learned about LLMs in 2024 by Simon Willison, posted
        on 31st December 2024.


        Part of series LLMs annual review


        Stuff we figured out about AI in 2023 - Dec. 31, 2023, 11:59 p.m. 

        Things we learned about LLMs in 2024 - Dec. 31, 2024, 6:07 p.m. 



                    google
                    347


                    ai
                    1098


                    openai
                    255
      - >-
        OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was
        freely available from its launch in June. This was a momentus change,
        because for the previous year free users had mostly been restricted to
        GPT-3.5 level models, meaning new users got a very inaccurate mental
        model of what a capable LLM could actually do.

        That era appears to have ended, likely permanently, with OpenAI’s launch
        of ChatGPT Pro. This $200/month subscription service is the only way to
        access their most capable model, o1 Pro.

        Since the trick behind the o1 series (and the future models it will
        undoubtedly inspire) is to expend more compute time to get better
        results, I don’t think those days of free access to the best available
        models are likely to return.
      - >-
        Against this photo of butterflies at the California Academy of Sciences:



        A shallow dish, likely a hummingbird or butterfly feeder, is red. 
        Pieces of orange slices of fruit are visible inside the dish.

        Two butterflies are positioned in the feeder, one is a dark brown/black
        butterfly with white/cream-colored markings.  The other is a large,
        brown butterfly with patterns of lighter brown, beige, and black
        markings, including prominent eye spots. The larger brown butterfly
        appears to be feeding on the fruit.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.75
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.75
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.75
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.8968216255952429
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.861111111111111
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.861111111111111
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
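The architecture above embeds text by running a BERT encoder, taking the CLS token as the pooled representation, and L2-normalizing the result. The snippet below is a minimal sketch of that same pipeline written against plain transformers, purely to illustrate what the three modules do; the SentenceTransformer usage in the next section is the supported path.

from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("ernestobs7/legal-ft-v0")
encoder = AutoModel.from_pretrained("ernestobs7/legal-ft-v0")  # the underlying BertModel

batch = tokenizer(
    ["An example sentence to embed"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state    # [batch, seq_len, 1024]
cls_embedding = token_embeddings[:, 0]                        # (1) CLS-token pooling
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)  # (2) Normalize()
print(embedding.shape)  # torch.Size([1, 1024])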

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ernestobs7/legal-ft-v0")
# Run inference
sentences = [
    'What are the dates of the articles listed as more recent articles in the context?',
    "Posted 31st December 2024 at 6:07 pm · Follow me on Mastodon or Twitter or subscribe to my newsletter\n\n\nMore recent articles\n\nRun LLMs on macOS using llm-mlx and Apple's MLX framework - 15th February 2025\nURL-addressable Pyodide Python environments - 13th February 2025\nUsing pip to install a Large Language Model that's under 100MB - 7th February 2025\n\n\n \n\n\nThis is Things we learned about LLMs in 2024 by Simon Willison, posted on 31st December 2024.\n\nPart of series LLMs annual review\n\nStuff we figured out about AI in 2023 - Dec. 31, 2023, 11:59 p.m. \nThings we learned about LLMs in 2024 - Dec. 31, 2024, 6:07 p.m. \n\n\n\n            google\n            347\n\n\n            ai\n            1098\n\n\n            openai\n            255",
    'Against this photo of butterflies at the California Academy of Sciences:\n\n\nA shallow dish, likely a hummingbird or butterfly feeder, is red.  Pieces of orange slices of fruit are visible inside the dish.\nTwo butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings.  The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
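For retrieval-style use, embed a query and a small corpus with the same model and rank the passages by cosine similarity. The query and passages below are illustrative snippets taken from the widget examples above.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ernestobs7/legal-ft-v0")

query = "Which organizations have models that scored higher than GPT-4-0314?"
passages = [
    "Then there’s the rest. If you browse the Chatbot Arena leaderboard today—still the most useful single place to get a vibes-based evaluation of models—you’ll see that GPT-4-0314 has fallen to around 70th place.",
    "Synthetic data as a substantial component of pretraining is becoming increasingly common, and the Phi series of models has consistently emphasized the importance of synthetic data.",
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)

# Rank the passages by cosine similarity to the query
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(scores)
print(passages[best])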

Evaluation

Metrics

Information Retrieval

| Metric               | Value  |
|:---------------------|:-------|
| cosine_accuracy@1    | 0.75   |
| cosine_accuracy@3    | 1.0    |
| cosine_accuracy@5    | 1.0    |
| cosine_accuracy@10   | 1.0    |
| cosine_precision@1   | 0.75   |
| cosine_precision@3   | 0.3333 |
| cosine_precision@5   | 0.2    |
| cosine_precision@10  | 0.1    |
| cosine_recall@1      | 0.75   |
| cosine_recall@3      | 1.0    |
| cosine_recall@5      | 1.0    |
| cosine_recall@10     | 1.0    |
| cosine_ndcg@10       | 0.8968 |
| cosine_mrr@10        | 0.8611 |
| cosine_map@100       | 0.8611 |
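Metrics of this kind are typically produced with the InformationRetrievalEvaluator from Sentence Transformers. The evaluation split itself is not included in this card, so the queries, corpus and relevance judgments below are placeholders that only sketch the setup.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ernestobs7/legal-ft-v0")

# Placeholder data: query ids -> queries, doc ids -> passages,
# and each query id -> the set of relevant doc ids.
queries = {"q1": "What does the term 'slop' refer to in the context of generative AI usage?"}
corpus = {"d1": "I love the term “slop” because it so succinctly captures one of the ways we should not be using generative AI!"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # per-metric scores such as cosine_accuracy@1, cosine_ndcg@10, ...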

Training Details

Training Dataset

Unnamed Dataset

  • Size: 156 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 156 samples:
    | Column     | Type   | Min       | Mean          | Max        |
    |:-----------|:-------|:----------|:--------------|:-----------|
    | sentence_0 | string | 13 tokens | 20.12 tokens  | 33 tokens  |
    | sentence_1 | string | 43 tokens | 130.53 tokens | 204 tokens |
  • Samples:
    sentence_0: What are the hardware requirements mentioned for running models like GPT-4?
    sentence_1: This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.
      These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else.
      The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.
    sentence_0: What does the author attribute the ability to run these models on less powerful hardware to?
    sentence_1: This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.
      These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else.
      The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.
    sentence_0: What challenges are associated with using LLMs in 2024?
    sentence_1: The year of slop
      Synthetic training data works great
      LLMs somehow got even harder to use
      Knowledge is incredibly unevenly distributed
      LLMs need better criticism
      Everything tagged “llms” on my blog in 2024
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
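As a sketch, a loss with the parameters listed above would typically be constructed as shown below; because of the Matryoshka objective, the finished model’s embeddings can also be truncated at inference time with only a modest quality trade-off.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

inner_loss = MultipleNegativesRankingLoss(model)   # in-batch negatives over (question, passage) pairs
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],      # each dimension weighted equally, as listed above
)

# At inference time, smaller embeddings can be requested by truncating the output:
compact_model = SentenceTransformer("ernestobs7/legal-ft-v0", truncate_dim=256)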
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
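The exact training script is not part of this card, but with the Sentence Transformers v3 trainer the non-default hyperparameters above roughly translate to the setup below. The tiny inline dataset is a placeholder standing in for the 156 question-passage pairs.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder standing in for the real 156-pair dataset (columns sentence_0, sentence_1)
train_dataset = Dataset.from_dict({
    "sentence_0": ["What does the term 'slop' refer to?"],
    "sentence_1": ["I love the term “slop” because it so succinctly captures one of the ways we should not be using generative AI!"],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",  # string form of the round-robin sampler
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,   # placeholder; the real run evaluated on a held-out split
    loss=loss,
)
trainer.train()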

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch | Step | cosine_ndcg@10 |
|:------|:-----|:---------------|
| 1.0   | 16   | 0.8885         |
| 2.0   | 32   | 0.8939         |
| 3.0   | 48   | 0.8939         |
| 3.125 | 50   | 0.8994         |
| 4.0   | 64   | 0.8939         |
| 5.0   | 80   | 0.8939         |
| 6.0   | 96   | 0.8968         |
| 6.25  | 100  | 0.8968         |
| 7.0   | 112  | 0.8968         |
| 8.0   | 128  | 0.8968         |
| 9.0   | 144  | 0.8968         |
| 9.375 | 150  | 0.8968         |
| 10.0  | 160  | 0.8968         |

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}