---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      What is the significance of Claude Artifacts in the context of LLMs and
      application development?
    sentences:
      - >-
        The environmental impact got much, much worse

        The much bigger problem here is the enormous competitive buildout of the
        infrastructure that is imagined to be necessary for these models in the
        future.

        Companies like Google, Meta, Microsoft and Amazon are all spending
        billions of dollars rolling out new datacenters, with a very material
        impact on the electricity grid and the environment. There’s even talk of
        spinning up new nuclear power stations, but those can take decades.

        Is this infrastructure necessary? DeepSeek v3’s $6m training cost and
        the continued crash in LLM prices might hint that it’s not. But would
        you want to be the big tech executive that argued NOT to build out this
        infrastructure only to be proven wrong in a few years’ time?
      - >-
        We already knew LLMs were spookily good at writing code. If you prompt
        them right, it turns out they can build you a full interactive
        application using HTML, CSS and JavaScript (and tools like React if you
        wire up some extra supporting build mechanisms)—often in a single
        prompt.

        Anthropic kicked this idea into high gear when they released Claude
        Artifacts, a groundbreaking new feature that was initially slightly lost
        in the noise due to being described half way through their announcement
        of the incredible Claude 3.5 Sonnet.

        With Artifacts, Claude can write you an on-demand interactive
        application and then let you use it directly inside the Claude
        interface.

        Here’s my Extract URLs app, entirely generated by Claude:
      - >-
        This prompt-driven custom interface feature is so powerful and easy to
        build (once you’ve figured out the gnarly details of browser sandboxing)
        that I expect it to show up as a feature in a wide range of products in
        2025.

        Universal access to the best models lasted for just a few short months

        For a few short months this year all three of the best available
        models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely
        available to most of the world.
  - source_sentence: What challenges are associated with using LLMs in the year of slop?
    sentences:
      - >-
        I also gave a bunch of talks and podcast appearances. I’ve started
        habitually turning my talks into annotated presentations—here are my
        best from 2023:


        Prompt injection explained, with video, slides, and a transcript

        Catching up on the weird world of LLMs

        Making Large Language Models work for you

        Open questions for AI engineering

        Embeddings: What they are and why they matter

        Financial sustainability for open source projects at GitHub Universe


        And in podcasts:



        What AI can do for you on the Theory of Change


        Working in public on Path to Citus Con


        LLMs break the internet on the Changelog


        Talking Large Language Models on Rooftop Ruby


        Thoughts on the OpenAI board situation on Newsroom Robots
      - |-
        The year of slop
        Synthetic training data works great
        LLMs somehow got even harder to use
        Knowledge is incredibly unevenly distributed
        LLMs need better criticism
        Everything tagged “llms” on my blog in 2024
      - >-
        The boring yet crucial secret behind good system prompts is test-driven
        development. You don’t write down a system prompt and find ways to test
        it. You write down tests and find a system prompt that passes them.


        It’s become abundantly clear over the course of 2024 that writing good
        automated evals for LLM-powered systems is the skill that’s most needed
        to build useful applications on top of these models. If you have a
        strong eval suite you can adopt new models faster, iterate better and
        build more reliable and useful product features than your competition.

        Vercel’s Malte Ubl:
  - source_sentence: >-
      What features did GitHub and Mistral Chat introduce in relation to the
      author's findings?
    sentences:
      - >-
        Except... you can run generated code to see if it’s correct. And with
        patterns like ChatGPT Code Interpreter the LLM can execute the code
        itself, process the error message, then rewrite it and keep trying until
        it works!

        So hallucination is a much lesser problem for code generation than for
        anything else. If only we had the equivalent of Code Interpreter for
        fact-checking natural language!

        How should we feel about this as software engineers?

        On the one hand, this feels like a threat: who needs a programmer if
        ChatGPT can write code for you?
      - >-
        I’ve found myself using this a lot. I noticed how much I was relying on
        it in October and wrote Everything I built with Claude Artifacts this
        week, describing 14 little tools I had put together in a seven day
        period.

        Since then, a whole bunch of other teams have built similar systems.
        GitHub announced their version of this—GitHub Spark—in October. Mistral
        Chat added it as a feature called Canvas in November.

        Steve Krouse from Val Town built a version of it against Cerebras,
        showcasing how a 2,000 token/second LLM can iterate on an application
        with changes visible in less than a second.
      - >-
        This remains astonishing to me. I thought a model with the capabilities
        and output quality of GPT-4 needed a datacenter class server with one or
        more $40,000+ GPUs.

        These models take up enough of my 64GB of RAM that I don’t run them
        often—they don’t leave much room for anything else.

        The fact that they run at all is a testament to the incredible training
        and inference performance gains that we’ve figured out over the past
        year. It turns out there was a lot of low-hanging fruit to be harvested
        in terms of model efficiency. I expect there’s still more to come.
  - source_sentence: >-
      Why did the voice from the demo, named Skye, not make it to a production
      product?
    sentences:
      - >-
        A lot of people are excited about AI agents—an infuriatingly vague term
        that seems to be converging on “AI systems that can go away and act on
        your behalf”. We’ve been talking about them all year, but I’ve seen few
        if any examples of them running in production, despite lots of exciting
        prototypes.

        I think this is because of gullibility.

        Can we solve this? Honestly, I’m beginning to suspect that you can’t
        fully solve gullibility without achieving AGI. So it may be quite a
        while before those agent dreams can really start to come true!

        Code may be the best application

        Over the course of the year, it’s become increasingly clear that writing
        code is one of the things LLMs are most capable of.
      - >-
        Embeddings: What they are and why they matter

        61.7k

        79.3k



        Catching up on the weird world of LLMs

        61.6k

        85.9k



        llamafile is the new best way to run an LLM on your own computer

        52k

        66k



        Prompt injection explained, with video, slides, and a transcript

        51k

        61.9k



        AI-enhanced development makes me more ambitious with my projects

        49.6k

        60.1k



        Understanding GPT tokenizers

        49.5k

        61.1k



        Exploring GPTs: ChatGPT in a trench coat?

        46.4k

        58.5k



        Could you train a ChatGPT-beating model for $85,000 and run it in a
        browser?

        40.5k

        49.2k



        How to implement Q&A against your documentation with GPT3, embeddings
        and Datasette

        37.3k

        44.9k



        Lawyer cites fake cases invented by ChatGPT, judge is not amused

        37.1k

        47.4k
      - >-
        The May 13th announcement of GPT-4o included a demo of a brand new voice
        mode, where the true multi-modal GPT-4o (the o is for “omni”) model
        could accept audio input and output incredibly realistic sounding speech
        without needing separate TTS or STT models.

        The demo also sounded conspicuously similar to Scarlett Johansson... and
        after she complained the voice from the demo, Skye, never made it to a
        production product.

        The delay in releasing the new voice mode after the initial demo caused
        quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
        not running the new features yet.
  - source_sentence: >-
      What are some of the new features introduced in multi-modal models that
      enhance their capabilities beyond text?
    sentences:
      - >-
        I think people who complain that LLM improvement has slowed are often
        missing the enormous advances in these multi-modal models. Being able to
        run prompts against images (and audio and video) is a fascinating new
        way to apply these models.

        Voice and live camera mode are science fiction come to life

        The audio and live video modes that have started to emerge deserve a
        special mention.

        The ability to talk to ChatGPT first arrived in September 2023, but it
        was mostly an illusion: OpenAI used their excellent Whisper
        speech-to-text model and a new text-to-speech model (creatively named
        tts-1) to enable conversations with the ChatGPT mobile apps, but the
        actual model just saw text.
      - >-
        Then in February, Meta released Llama. And a few weeks later in March,
        Georgi Gerganov released code that got it working on a MacBook.

        I wrote about how Large language models are having their Stable
        Diffusion moment, and with hindsight that was a very good call!

        This unleashed a whirlwind of innovation, which was accelerated further
        in July when Meta released Llama 2—an improved version which, crucially,
        included permission for commercial use.

        Today there are literally thousands of LLMs that can be run locally, on
        all manner of different devices.
      - >-
        260 input tokens, 92 output tokens. Cost approximately 0.0024 cents
        (that’s less than a 400th of a cent).

        This increase in efficiency and reduction in price is my single
        favourite trend from 2024. I want the utility of LLMs at a fraction of
        the energy cost and it looks like that’s what we’re getting.

        Multimodal vision is common, audio and video are starting to emerge

        My butterfly example above illustrates another key trend from 2024: the
        rise of multi-modal LLMs.

        A year ago the single most notable example of these was GPT-4 Vision,
        released at OpenAI’s DevDay in November 2023. Google’s multi-modal
        Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it
        into the 2023 window.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9166666666666666
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9166666666666666
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9166666666666666
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9692441461309548
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9583333333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9583333333333334
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
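
If useful, these details can also be checked programmatically once the model is loaded; a minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("njhaveri/legal-ft-2")

# The printed repr matches the module stack above: Transformer -> Pooling (CLS) -> Normalize
print(model)

# Key properties from the Model Description
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024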

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("njhaveri/legal-ft-2")
# Run inference
sentences = [
    'What are some of the new features introduced in multi-modal models that enhance their capabilities beyond text?',
    'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.\nVoice and live camera mode are science fiction come to life\nThe audio and live video modes that have started to emerge deserve a special mention.\nThe ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.',
    '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less than a 400th of a cent).\nThis increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that’s what we’re getting.\nMultimodal vision is common, audio and video are starting to emerge\nMy butterfly example above illustrates another key trend from 2024: the rise of multi-modal LLMs.\nA year ago the single most notable example of these was GPT-4 Vision, released at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it into the 2023 window.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
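
Because the model was trained with a Matryoshka objective over the dimensions listed under Training Details below, its embeddings can also be truncated to a smaller size at load time. A minimal sketch using the library's truncate_dim option (the choice of 256 dimensions here is illustrative, not a recommendation from this card):

from sentence_transformers import SentenceTransformer

# Same model, but keep only the first 256 embedding dimensions
model = SentenceTransformer("njhaveri/legal-ft-2", truncate_dim=256)

embeddings = model.encode([
    "What are some of the new features introduced in multi-modal models that enhance their capabilities beyond text?",
])
print(embeddings.shape)
# (1, 256)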

Evaluation

Metrics

Information Retrieval

| Metric               | Value  |
|:---------------------|:-------|
| cosine_accuracy@1    | 0.9167 |
| cosine_accuracy@3    | 1.0    |
| cosine_accuracy@5    | 1.0    |
| cosine_accuracy@10   | 1.0    |
| cosine_precision@1   | 0.9167 |
| cosine_precision@3   | 0.3333 |
| cosine_precision@5   | 0.2    |
| cosine_precision@10  | 0.1    |
| cosine_recall@1      | 0.9167 |
| cosine_recall@3      | 1.0    |
| cosine_recall@5      | 1.0    |
| cosine_recall@10     | 1.0    |
| cosine_ndcg@10       | 0.9692 |
| cosine_mrr@10        | 0.9583 |
| cosine_map@100       | 0.9583 |
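
These figures come from the standard sentence-transformers information-retrieval evaluation. A minimal sketch of how such metrics are typically computed with the library's InformationRetrievalEvaluator (the queries, corpus and relevance mapping below are illustrative placeholders, not the actual evaluation split behind the numbers above):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("njhaveri/legal-ft-2")

# Illustrative placeholders: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What challenges are associated with using LLMs in the year of slop?"}
corpus = {
    "d1": "The year of slop. Synthetic training data works great. LLMs somehow got even harder to use.",
    "d2": "LLM prices crashed, thanks to competition and increased efficiency.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100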

Training Details

Training Dataset

Unnamed Dataset

  • Size: 156 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 156 samples:
    |         | sentence_0                                        | sentence_1                                          |
    |:--------|:---------------------------------------------------|:----------------------------------------------------|
    | type    | string                                             | string                                              |
    | details | min: 12 tokens, mean: 20.29 tokens, max: 31 tokens | min: 43 tokens, mean: 135.13 tokens, max: 214 tokens |
  • Samples:
    | sentence_0 | sentence_1 |
    |:-----------|:-----------|
    | Why is it important for language models to believe the information provided to them? | Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.<br>In order to be useful tools for us, we need them to believe what we feed them!<br>But it turns out a lot of the things we want to build need them not to be gullible.<br>Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited. |
    | What are the potential drawbacks of having a language model that is overly gullible? | Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.<br>In order to be useful tools for us, we need them to believe what we feed them!<br>But it turns out a lot of the things we want to build need them not to be gullible.<br>Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited. |
    | What significant change occurred in LLM pricing over the past twelve months? | Here’s the rest of the transcript. It’s bland and generic, but my phone can pitch bland and generic Christmas movies to Netflix now!<br>LLM prices crashed, thanks to competition and increased efficiency<br>The past twelve months have seen a dramatic collapse in the cost of running a prompt through the top tier hosted LLMs.<br>In December 2023 (here’s the Internet Archive for the OpenAI pricing page) OpenAI were charging $30/million input tokens for GPT-4, $10/mTok for the then-new GPT-4 Turbo and $1/mTok for GPT-3.5 Turbo. |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
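
For reference, this configuration corresponds to wrapping MultipleNegativesRankingLoss inside MatryoshkaLoss. A minimal reconstruction sketch, not the exact training script used for this model:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)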
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
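
A minimal sketch of how these non-default values map onto the library's SentenceTransformerTrainer API; the tiny dataset, output directory and evaluation split here are illustrative stand-ins for the real 156-pair setup described above:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Tiny stand-in for the 156 (sentence_0, sentence_1) pairs described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What significant change occurred in LLM pricing over the past twelve months?"],
    "sentence_1": ["LLM prices crashed, thanks to competition and increased efficiency."],
})
eval_dataset = train_dataset  # stand-in so eval_strategy="steps" has something to evaluate

loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",  # illustrative output path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()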

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch | Step | cosine_ndcg@10 |
|:------|:-----|:---------------|
| 1.0   | 16   | 0.9692         |
| 2.0   | 32   | 0.9692         |
| 3.0   | 48   | 0.9692         |
| 3.125 | 50   | 0.9692         |
| 4.0   | 64   | 0.9692         |
| 5.0   | 80   | 0.9692         |
| 6.0   | 96   | 0.9692         |
| 6.25  | 100  | 0.9692         |
| 7.0   | 112  | 0.9692         |
| 8.0   | 128  | 0.9692         |
| 9.0   | 144  | 0.9692         |
| 9.375 | 150  | 0.9692         |
| 10.0  | 160  | 0.9692         |

Framework Versions

  • Python: 3.13.1
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.6.0
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
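
To roughly reproduce this environment, the listed versions can be pinned at install time (a sketch; the exact PyTorch wheel will depend on your platform and CUDA setup):

pip install "sentence-transformers==3.4.1" "transformers==4.48.3" "torch==2.6.0" "accelerate==1.3.0" "datasets==3.2.0" "tokenizers==0.21.0"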

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}