---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: How do longer inputs enhance the problem-solving capabilities of an LLM?
    sentences:
      - >-
        This remains astonishing to me. I thought a model with the capabilities
        and output quality of GPT-4 needed a datacenter class server with one or
        more $40,000+ GPUs.

        These models take up enough of my 64GB of RAM that I don’t run them
        often—they don’t leave much room for anything else.

        The fact that they run at all is a testament to the incredible training
        and inference performance gains that we’ve figured out over the past
        year. It turns out there was a lot of low-hanging fruit to be harvested
        in terms of model efficiency. I expect there’s still more to come.
      - >-
        Longer inputs dramatically increase the scope of problems that can be
        solved with an LLM: you can now throw in an entire book and ask
        questions about its contents, but more importantly you can feed in a lot
        of example code to help the model correctly solve a coding problem. LLM
        use-cases that involve long inputs are far more interesting to me than
        short prompts that rely purely on the information already baked into the
        model weights. Many of my tools were built using this pattern.
      - >-
        Nothing yet from Anthropic or Meta but I would be very surprised if they
        don’t have their own inference-scaling models in the works. Meta
        published a relevant paper Training Large Language Models to Reason in a
        Continuous Latent Space in December.

        Was the best currently available LLM trained in China for less than $6m?

        Not quite, but almost! It does make for a great attention-grabbing
        headline.

        The big news to end the year was the release of DeepSeek v3—dropped on
        Hugging Face on Christmas Day without so much as a README file, then
        followed by documentation and a paper the day after that.
  - source_sentence: >-
      What issue does the author highlight regarding the communication of
      information when someone claims to be building "agents"?
    sentences:
      - >-
        Things we learned about LLMs in 2024

        Simon Willison’s Weblog

        Subscribe

        Things we learned about LLMs in 2024

        31st December 2024

        A lot has happened in the world of Large Language Models over the course
        of 2024. Here’s a review of things we figured out about the field in the
        past twelve months, plus my attempt at identifying key themes and
        pivotal moments.

        This is a sequel to my review of 2023.

        In this article:
      - >-
        “Agents” still haven’t really happened yet

        I find the term “agents” extremely frustrating. It lacks a single, clear
        and widely understood meaning... but the people who use the term never
        seem to acknowledge that.

        If you tell me that you are building “agents”, you’ve conveyed almost no
        information to me at all. Without reading your mind I have no way of
        telling which of the dozens of possible definitions you are talking
        about.
      - >-
        Prince Canuma’s excellent, fast moving mlx-vlm project brings vision
        LLMs to Apple Silicon as well. I used that recently to run Qwen’s QvQ.

        While MLX is a game changer, Apple’s own “Apple Intelligence” features
        have mostly been a disappointment. I wrote about their initial
        announcement in June, and I was optimistic that Apple had focused hard
        on the subset of LLM applications that preserve user privacy and
        minimize the chance of users getting misled by confusing features.
  - source_sentence: >-
      How does the author feel about their choice of platform this year compared
      to last year?
    sentences:
      - >-
        On the one hand, we keep on finding new things that LLMs can do that we
        didn’t expect—and that the people who trained the models didn’t expect
        either. That’s usually really fun!

        But on the other hand, the things you sometimes have to do to get the
        models to behave are often incredibly dumb.

        Does ChatGPT get lazy in December, because its hidden system prompt
        includes the current date and its training data shows that people
        provide less useful answers coming up to the holidays?

        The honest answer is “maybe”! No-one is entirely sure, but if you give
        it a different date its answers may skew slightly longer.
      - >-
        I’m still trying to figure out the best patterns for doing this for my
        own work. Everyone knows that evals are important, but there remains a
        lack of great guidance for how to best implement them—I’m tracking this
        under my evals tag. My SVG pelican riding a bicycle benchmark is a pale
        imitation of what a real eval suite should look like.

        Apple Intelligence is bad, Apple’s MLX library is excellent

        As a Mac user I’ve been feeling a lot better about my choice of platform
        this year.

        Last year it felt like my lack of a Linux/Windows machine with an
        NVIDIA GPU was a huge disadvantage in terms of trying out new models.
      - >-
        One way to think about these models is an extension of the
        chain-of-thought prompting trick, first explored in the May 2022 paper
        Large Language Models are Zero-Shot Reasoners.

        This is that trick where, if you get a model to talk out loud about a
        problem it’s solving, you often get a result which the model would not
        have achieved otherwise.

        o1 takes this process and further bakes it into the model itself. The
        details are somewhat obfuscated: o1 models spend “reasoning tokens”
        thinking through the problem that are not directly visible to the user
        (though the ChatGPT UI shows a summary of them), then outputs a final
        result.
  - source_sentence: >-
      What are the implications of having a Code Interpreter equivalent for
      fact-checking natural language?
    sentences:
      - >-
        I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly
        great model) on my iPhone. You can install several different apps to get
        your own, local, completely private LLM. My own LLM project provides a
        CLI tool for running an array of different models via plugins.

        You can even run them entirely in your browser using WebAssembly and the
        latest Chrome!

        Hobbyists can build their own fine-tuned models

        I said earlier that building an LLM was still out of reach of hobbyists.
        That may be true for training from scratch, but fine-tuning one of those
        models is another matter entirely.
      - >-
        Now add a walrus: Prompt engineering in DALL-E 3

        32.8k

        41.2k



        Web LLM runs the vicuna-7b Large Language Model entirely in your
        browser, and it’s very impressive

        32.5k

        38.2k



        ChatGPT can’t access the internet, even though it really looks like it
        can

        30.5k

        34.2k



        Stanford Alpaca, and the acceleration of on-device large language model
        development

        29.7k

        35.7k



        Run Llama 2 on your own Mac using LLM and Homebrew

        27.9k

        33.6k



        Midjourney 5.1

        26.7k

        33.4k



        Think of language models like ChatGPT as a “calculator for words”

        25k

        31.8k



        Multi-modal prompt injection image attacks against GPT-4V

        23.7k

        27.4k
      - >-
        Except... you can run generated code to see if it’s correct. And with
        patterns like ChatGPT Code Interpreter the LLM can execute the code
        itself, process the error message, then rewrite it and keep trying until
        it works!

        So hallucination is a much lesser problem for code generation than for
        anything else. If only we had the equivalent of Code Interpreter for
        fact-checking natural language!

        How should we feel about this as software engineers?

        On the one hand, this feels like a threat: who needs a programmer if
        ChatGPT can write code for you?
  - source_sentence: >-
      How does the author compare a prompt without evals, models, and UX to an
      ASML machine?
    sentences:
      - >-
        When @v0 first came out we were paranoid about protecting the prompt
        with all kinds of pre and post processing complexity.

        We completely pivoted to let it rip. A prompt without the evals, models,
        and especially UX is like getting a broken ASML machine without a manual
      - >-
        Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks
        about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!


        I can now run a GPT-4 class model on my laptop talks about running
        Meta’s Llama 3.3 70B (released in December)
      - >-
        On the other hand, as software engineers we are better placed to take
        advantage of this than anyone else. We’ve all been given weird coding
        interns—we can use our deep knowledge to prompt them to solve coding
        problems more effectively than anyone else can.

        The ethics of this space remain diabolically complex

        In September last year Andy Baio and I produced the first major story on
        the unlicensed training data behind Stable Diffusion.

        Since then, almost every major LLM (and most of the image generation
        models) have also been trained on unlicensed data.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9166666666666666
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9166666666666666
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.2
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.1
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9166666666666666
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9692441461309548
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9583333333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9583333333333334
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
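
In other words, module (0) runs the BERT encoder over up to 512 tokens, module (1) keeps only the [CLS] token embedding (pooling_mode_cls_token: True), and module (2) L2-normalizes the result, which is why dot product and cosine similarity coincide for these embeddings. As an illustration only, here is a rough hand-rolled equivalent using the transformers API directly (for real use, load the model through Sentence Transformers as shown in the Usage section below):

import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative sketch of the three modules above, written out by hand.
tokenizer = AutoTokenizer.from_pretrained("drewgenai/legal-ft-2")
bert = AutoModel.from_pretrained("drewgenai/legal-ft-2")

inputs = tokenizer(["example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state   # (0) Transformer
cls = token_embeddings[:, 0]                              # (1) Pooling: CLS token only
embedding = torch.nn.functional.normalize(cls, dim=1)     # (2) Normalize to unit length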

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("drewgenai/legal-ft-2")
# Run inference
sentences = [
    'How does the author compare a prompt without evals, models, and UX to an ASML machine?',
    'When @v0 first came out we were paranoid about protecting the prompt with all kinds of pre and post processing complexity.\nWe completely pivoted to let it rip. A prompt without the evals, models, and especially UX is like getting a broken ASML machine without a manual',
    'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can.\nThe ethics of this space remain diabolically complex\nIn September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion.\nSince then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
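
Because the model was trained on question/passage pairs, a natural next step is ranking candidate passages against a query. A small illustrative example, reusing the model loaded above (the query and passages are made up for demonstration):

query = "How do longer inputs enhance the problem-solving capabilities of an LLM?"
passages = [
    "Longer inputs dramatically increase the scope of problems that can be solved with an LLM.",
    "Apple Intelligence is bad, Apple's MLX library is excellent.",
]

query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)

# Cosine scores, shape [1, len(passages)]; the highest score is the best match
scores = model.similarity(query_embedding, passage_embeddings)
print(passages[scores.argmax().item()])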

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9167
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.9167
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.9167
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9692
cosine_mrr@10 0.9583
cosine_map@100 0.9583
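
The pattern of values (recall@3 of 1.0 alongside precision@3 of 1/3) is consistent with an evaluation where each query has exactly one relevant passage. Below is a sketch of how such metrics are typically computed with the sentence-transformers InformationRetrievalEvaluator; the queries, corpus, and IDs are placeholders, not the actual held-out split:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: the real evaluation used held-out question/passage pairs.
queries = {"q1": "How do longer inputs enhance the problem-solving capabilities of an LLM?"}
corpus = {"d1": "Longer inputs dramatically increase the scope of problems that can be solved with an LLM."}
relevant_docs = {"q1": {"d1"}}  # each query maps to the IDs of its relevant passages

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict including cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, ...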

Training Details

Training Dataset

Unnamed Dataset

  • Size: 156 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 156 samples:
    • sentence_0: string; min: 11 tokens, mean: 20.29 tokens, max: 37 tokens
    • sentence_1: string; min: 43 tokens, mean: 134.95 tokens, max: 214 tokens
  • Samples:
    • sentence_0: What are some examples of programming languages mentioned in the context?
      sentence_1: If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English. It’s still astonishing to me how effective they are though. One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.
    • sentence_0: What is one of the major weaknesses of LLMs as described in the context?
      sentence_1: If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English. It’s still astonishing to me how effective they are though. One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.
    • sentence_0: What is the significance of prompt engineering in DALL-E 3?
      sentence_1: Now add a walrus: Prompt engineering in DALL-E 3 (32.8k / 41.2k); Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and it’s very impressive (32.5k / 38.2k); ChatGPT can’t access the internet, even though it really looks like it can (30.5k / 34.2k); Stanford Alpaca, and the acceleration of on-device large language model development (29.7k / 35.7k); Run Llama 2 on your own Mac using LLM and Homebrew (27.9k / 33.6k); Midjourney 5.1 (26.7k / 33.4k); Think of language models like ChatGPT as a “calculator for words” (25k / 31.8k); Multi-modal prompt injection image attacks against GPT-4V (23.7k / 27.4k)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
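
In code, this configuration wraps MultipleNegativesRankingLoss in MatryoshkaLoss so that each listed prefix of the 1024-dimensional embedding (768, 512, 256, 128, and 64 dimensions) is trained, with equal weight, to work as a standalone embedding. A minimal sketch, assuming the base model named above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Train each listed prefix of the embedding as a usable embedding in its own right.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

The payoff comes at inference time: loading with SentenceTransformer("drewgenai/legal-ft-2", truncate_dim=256) makes encode return 256-dimensional vectors, trading a little accuracy for smaller, faster embeddings.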
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
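
Put together, a training run with these non-default settings would look roughly like the following sketch. The dataset rows and output_dir are placeholders, and the training set doubles as a stand-in eval set here; the real run used the 156 question/passage pairs and evaluation described above.

from datasets import Dataset
from sentence_transformers import (SentenceTransformer,
                                   SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

# Hypothetical rows standing in for the 156 question/passage pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How do longer inputs enhance the problem-solving capabilities of an LLM?"],
    "sentence_1": ["Longer inputs dramatically increase the scope of problems that can be solved with an LLM."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder so eval_strategy="steps" has data to evaluate
    loss=loss,
)
trainer.train()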

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 20 0.9638
2.0 40 0.9539
2.5 50 0.9539
3.0 60 0.9539
4.0 80 0.9692
5.0 100 0.9692
6.0 120 0.9692
7.0 140 0.9692
7.5 150 0.9692
8.0 160 0.9692
9.0 180 0.9692
10.0 200 0.9692

Framework Versions

  • Python: 3.13.1
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
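
To recreate this environment, the listed versions can be pinned at install time (a sketch; the PyTorch 2.6.0+cu124 build additionally requires the matching CUDA wheel index):

pip install sentence-transformers==3.4.1 transformers==4.48.3 accelerate==1.3.0 datasets==3.2.0 tokenizers==0.21.0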

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}