metadata
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:78
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      1. What role does synthetic data play in the pretraining of models,
      particularly in the Phi series?  

      2. How does synthetic data compare to organic data in terms of advantages?
    sentences:
      - >-
        Synthetic data as a substantial component of pretraining is becoming
        increasingly common, and the Phi series of models has consistently
        emphasized the importance of synthetic data. Rather than serving as a
        cheap substitute for organic data, synthetic data has several direct
        advantages over organic data.
      - >-
        The two main categories I see are people who think AI agents are
        obviously things that go and act on your behalf—the travel agent
        model—and people who think in terms of LLMs that have been given access
        to tools which they can run in a loop as part of solving a problem. The
        term “autonomy” is often thrown into the mix too, again without
        including a clear definition.

        (I also collected 211 definitions on Twitter a few months ago—here they
        are in Datasette Lite—and had gemini-exp-1206 attempt to summarize
        them.)

        Whatever the term may mean, agents still have that feeling of
        perpetually “coming soon”.
      - >-
        Terminology aside, I remain skeptical as to their utility based, once
        again, on the challenge of gullibility. LLMs believe anything you tell
        them. Any systems that attempts to make meaningful decisions on your
        behalf will run into the same roadblock: how good is a travel agent, or
        a digital assistant, or even a research tool if it can’t distinguish
        truth from fiction?

        Just the other day Google Search was caught serving up an entirely fake
        description of the non-existant movie “Encanto 2”. It turned out to be
        summarizing an imagined movie listing from a fan fiction wiki.
  - source_sentence: >-
      1. What is the mlx-vlm project and how does it relate to vision LLMs on
      Apple Silicon?  

      2. What were the author's initial thoughts on Apple's "Apple Intelligence"
      features following their announcement in June?
    sentences:
      - >-
        The GPT-4 barrier was comprehensively broken

        In my December 2023 review I wrote about how We don’t yet know how to
        build GPT-4—OpenAI’s best model was almost a year old at that point, yet
        no other AI lab had produced anything better. What did OpenAI know that
        the rest of us didn’t?

        I’m relieved that this has changed completely in the past twelve months.
        18 organizations now have models on the Chatbot Arena Leaderboard that
        rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
        board)—70 models in total.
      - |-
        The year of slop
        Synthetic training data works great
        LLMs somehow got even harder to use
        Knowledge is incredibly unevenly distributed
        LLMs need better criticism
        Everything tagged “llms” on my blog in 2024
      - >-
        Prince Canuma’s excellent, fast moving mlx-vlm project brings vision
        LLMs to Apple Silicon as well. I used that recently to run Qwen’s QvQ.

        While MLX is a game changer, Apple’s own “Apple Intelligence” features
        have mostly been a disappointment. I wrote about their initial
        announcement in June, and I was optimistic that Apple had focused hard
        on the subset of LLM applications that preserve user privacy and
        minimize the chance of users getting mislead by confusing features.
  - source_sentence: >-
      1. What improvements were noted in the intonation of ChatGPT Advanced
      Voice mode during its rollout?  

      2. How did the user experiment with accents in the Advanced Voice mode?
    sentences:
      - >-
        When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
        August through September) it was spectacular. I’ve been using it
        extensively on walks with my dog and it’s amazing how much the
        improvement in intonation elevates the material. I’ve also had a lot of
        fun experimenting with the OpenAI audio APIs.

        Even more fun: Advanced Voice mode can do accents! Here’s what happened
        when I told it I need you to pretend to be a California brown pelican
        with a very thick Russian accent, but you talk to me exclusively in
        Spanish.
      - >-
        One way to think about these models is an extension of the
        chain-of-thought prompting trick, first explored in the May 2022 paper
        Large Language Models are Zero-Shot Reasoners.

        This is that trick where, if you get a model to talk out loud about a
        problem it’s solving, you often get a result which the model would not
        have achieved otherwise.

        o1 takes this process and further bakes it into the model itself. The
        details are somewhat obfuscated: o1 models spend “reasoning tokens”
        thinking through the problem that are not directly visible to the user
        (though the ChatGPT UI shows a summary of them), then outputs a final
        result.
      - >-
        The May 13th announcement of GPT-4o included a demo of a brand new voice
        mode, where the true multi-modal GPT-4o (the o is for “omni”) model
        could accept audio input and output incredibly realistic sounding speech
        without needing separate TTS or STT models.

        The demo also sounded conspicuously similar to Scarlett Johansson... and
        after she complained the voice from the demo, Skye, never made it to a
        production product.

        The delay in releasing the new voice mode after the initial demo caused
        quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
        not running the new features yet.
  - source_sentence: >-
      1. What advantages does a 64GB Mac have for running models compared to
      other machines?

      2. How does the mlx-lm Python library enhance the performance of
      MLX-compatible models on a Mac?
    sentences:
      - >-
        On paper, a 64GB Mac should be a great machine for running models due to
        the way the CPU and GPU can share the same memory. In practice, many
        models are released as model weights and libraries that reward NVIDIA’s
        CUDA over other platforms.

        The llama.cpp ecosystem helped a lot here, but the real breakthrough has
        been Apple’s MLX library, “an array framework for Apple Silicon”. It’s
        fantastic.

        Apple’s mlx-lm Python library supports running a wide range of
        MLX-compatible models on my Mac, with excellent performance.
        mlx-community on Hugging Face offers more than 1,000 models that have
        been converted to the necessary format.
      - >-
        The earliest of those was Google’s Gemini 1.5 Pro, released in February.
        In addition to producing GPT-4 level outputs, it introduced several
        brand new capabilities to the field—most notably its 1 million (and then
        later 2 million) token input context length, and the ability to input
        video.

        I wrote about this at the time in The killer app of Gemini Pro 1.5 is
        video, which earned me a short appearance as a talking head in the
        Google I/O opening keynote in May.
      - >-
        The biggest innovation here is that it opens up a new way to scale a
        model: instead of improving model performance purely through additional
        compute at training time, models can now take on harder problems by
        spending more compute on inference.

        The sequel to o1, o3 (they skipped “o2” for European trademark reasons)
        was announced on 20th December with an impressive result against the
        ARC-AGI benchmark, albeit one that likely involved more than $1,000,000
        of compute time expense!

        o3 is expected to ship in January. I doubt many people have real-world
        problems that would benefit from that level of compute expenditure—I
        certainly don’t!—but it appears to be a genuine next step in LLM
        architecture for taking on much harder problems.
  - source_sentence: >-
      1. What technique is being used by labs to create training data for
      smaller models?

      2. How many synthetically generated examples were used in Meta’s Llama 3.3
      70B fine-tuning?
    sentences:
      - >-
        The number of available systems has exploded. Different systems have
        different tools they can apply to your problems—like Python and
        JavaScript and web search and image generation and maybe even database
        lookups... so you’d better understand what those tools are, what they
        can do and how to tell if the LLM used them or not.

        Did you know ChatGPT has two entirely different ways of running Python
        now?

        Want to build a Claude Artifact that talks to an external API? You’d
        better understand CSP and CORS HTTP headers first.
      - >-
        7th: Prompts.js


        9th: I can now run a GPT-4 class model on my laptop


        10th: ChatGPT Canvas can make API requests now, but it’s complicated


        11th: Gemini 2.0 Flash: An outstanding multi-modal LLM with a sci-fi
        streaming mode


        19th: Building Python tools with a one-shot prompt using uv run and
        Claude Projects


        19th: Gemini 2.0 Flash “Thinking mode”


        20th: December in LLMs has been a lot


        20th: Live blog: the 12th day of OpenAI—“Early evals for OpenAI o3”


        24th: Trying out QvQ—Qwen’s new visual reasoning model


        31st: Things we learned about LLMs in 2024





        (This list generated using Django SQL Dashboard with a SQL query written
        for me by Claude.)
      - >-
        Another common technique is to use larger models to help create training
        data for their smaller, cheaper alternatives—a trick used by an
        increasing number of labs. DeepSeek v3 used “reasoning” data created by
        DeepSeek-R1. Meta’s Llama 3.3 70B fine-tuning used over 25M
        synthetically generated examples.

        Careful design of the training data that goes into an LLM appears to be
        the entire game for creating these models. The days of just grabbing a
        full scrape of the web and indiscriminately dumping it into a training
        run are long gone.

        LLMs somehow got even harder to use
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.8333333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.8333333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.8333333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9384882922619097
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9166666666666666
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9166666666666666
            name: Cosine Map@100

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
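
The properties listed under Model Description can be confirmed directly on the loaded model. A minimal sketch (the output comments assume the configuration above):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Rsr2425/legal-ft-2")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
print(model.similarity_fn_name)                  # cosine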

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Rsr2425/legal-ft-2")
# Run inference
sentences = [
    '1. What technique is being used by labs to create training data for smaller models?\n2. How many synthetically generated examples were used in Meta’s Llama 3.3 70B fine-tuning?',
    'Another common technique is to use larger models to help create training data for their smaller, cheaper alternatives—a trick used by an increasing number of labs. DeepSeek v3 used “reasoning” data created by DeepSeek-R1. Meta’s Llama 3.3 70B fine-tuning used over 25M synthetically generated examples.\nCareful design of the training data that goes into an LLM appears to be the entire game for creating these models. The days of just grabbing a full scrape of the web and indiscriminately dumping it into a training run are long gone.\nLLMs somehow got even harder to use',
    '7th: Prompts.js\n\n9th: I can now run a GPT-4 class model on my laptop\n\n10th: ChatGPT Canvas can make API requests now, but it’s complicated\n\n11th: Gemini 2.0 Flash: An outstanding multi-modal LLM with a sci-fi streaming mode\n\n19th: Building Python tools with a one-shot prompt using uv run and Claude Projects\n\n19th: Gemini 2.0 Flash “Thinking mode”\n\n20th: December in LLMs has been a lot\n\n20th: Live blog: the 12th day of OpenAI—“Early evals for OpenAI o3”\n\n24th: Trying out QvQ—Qwen’s new visual reasoning model\n\n31st: Things we learned about LLMs in 2024\n\n\n\n\n(This list generated using Django SQL Dashboard with a SQL query written for me by Claude.)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8333
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.8333
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.8333
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9385
cosine_mrr@10 0.9167
cosine_map@100 0.9167
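
These are the metrics reported by Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of how comparable numbers could be computed; the queries, corpus, and relevance judgments below are illustrative placeholders rather than the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Rsr2425/legal-ft-2")

# Map query IDs and document IDs to text, plus the relevant document IDs per query
queries = {"q1": "What technique is being used by labs to create training data for smaller models?"}
corpus = {"d1": "Another common technique is to use larger models to help create training data for their smaller, cheaper alternatives."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="eval")
results = evaluator(model)
print(results)  # dict with keys like "eval_cosine_ndcg@10", "eval_cosine_map@100", ...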

Training Details

Training Dataset

Unnamed Dataset

  • Size: 78 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 78 samples:
    • sentence_0 (string): min: 30 tokens, mean: 42.76 tokens, max: 59 tokens
    • sentence_1 (string): min: 43 tokens, mean: 130.5 tokens, max: 204 tokens
  • Samples:
    • sentence_0: 1. What key themes and pivotal moments in the field of Large Language Models were identified in 2024?
      2. How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?
      sentence_1: Things we learned about LLMs in 2024
      Simon Willison’s Weblog
      Subscribe
      Things we learned about LLMs in 2024
      31st December 2024
      A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.
      This is a sequel to my review of 2023.
      In this article:
    • sentence_0: 1. What advancements in multimodal capabilities have been observed in LLMs, particularly regarding audio and video?
      2. How has the competition among LLMs affected their pricing and accessibility over time?
      sentence_1: The GPT-4 barrier was comprehensively broken
      Some of those GPT-4 models run on my laptop
      LLM prices crashed, thanks to competition and increased efficiency
      Multimodal vision is common, audio and video are starting to emerge
      Voice and live camera mode are science fiction come to life
      Prompt driven app generation is a commodity already
      Universal access to the best models lasted for just a few short months
      “Agents” still haven’t really happened yet
      Evals really matter
      Apple Intelligence is bad, Apple’s MLX library is excellent
      The rise of inference-scaling “reasoning” models
      Was the best currently available LLM trained in China for less than $6m?
      The environmental impact got better
      The environmental impact got much, much worse
    • sentence_0: 1. What challenges are associated with using LLMs in 2024?
      2. How is knowledge distribution described in the context of LLMs?
      sentence_1: The year of slop
      Synthetic training data works great
      LLMs somehow got even harder to use
      Knowledge is incredibly unevenly distributed
      LLMs need better criticism
      Everything tagged “llms” on my blog in 2024
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
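
    A minimal sketch of how a loss with these parameters is constructed in Sentence Transformers (the base model here is the one this card fine-tunes):

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

    model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

    # MultipleNegativesRankingLoss applied at each Matryoshka dimension
    inner_loss = MultipleNegativesRankingLoss(model)
    loss = MatryoshkaLoss(
        model,
        inner_loss,
        matryoshka_dims=[768, 512, 256, 128, 64],
        matryoshka_weights=[1, 1, 1, 1, 1],
        n_dims_per_step=-1,
    )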
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
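
For reference, a minimal sketch of a fine-tuning run that uses these non-default values with the Sentence Transformers trainer; the output directory and the tiny in-line dataset are placeholders, not the actual training data:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder dataset with the same column names as the training data above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What role does synthetic data play in the pretraining of models?"],
    "sentence_1": ["Synthetic data as a substantial component of pretraining is becoming increasingly common."],
})

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="models/legal-ft-2",   # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,       # placeholder; a real run would use a held-out split
    loss=loss,
)
trainer.train()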

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 8 1.0
2.0 16 0.9583
3.0 24 0.9276
4.0 32 0.9385
5.0 40 0.9385
6.0 48 0.9385
6.25 50 0.9385
7.0 56 0.9385
8.0 64 0.9385
9.0 72 0.9385
10.0 80 0.9385

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}