---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What are some of the tools that different systems can apply to
problems, as mentioned in the context?
sentences:
- Synthetic data as a substantial component of pretraining is becoming increasingly
common, and the Phi series of models has consistently emphasized the importance
of synthetic data. Rather than serving as a cheap substitute for organic data,
synthetic data has several direct advantages over organic data.
- 'The number of available systems has exploded. Different systems have different
tools they can apply to your problems—like Python and JavaScript and web search
and image generation and maybe even database lookups... so you’d better understand
what those tools are, what they can do and how to tell if the LLM used them or
not.
Did you know ChatGPT has two entirely different ways of running Python now?
Want to build a Claude Artifact that talks to an external API? You’d better understand
CSP and CORS HTTP headers first.'
- '29th: NotebookLM’s automatically generated podcasts are surprisingly effective
30th: Weeknotes: Three podcasts, two trips and a new plugin system
October
1st: OpenAI DevDay 2024 live blog
2nd: OpenAI DevDay: Let’s build developer tools, not digital God
15th: ChatGPT will happily write you a thinly disguised horoscope
17th: Video scraping: extracting JSON data from a 35 second screen capture for
less than 1/10th of a cent
18th: Experimenting with audio input and output for the OpenAI Chat Completion
API
19th: Running Llama 3.2 Vision and Phi-3.5 Vision on a Mac with mistral.rs
21st: Everything I built with Claude Artifacts this week
22nd: Initial explorations of Anthropic’s new Computer Use capability'
- source_sentence: What key themes and pivotal moments in the field of Large Language
Models were identified in 2024?
sentences:
- 'One way to think about these models is an extension of the chain-of-thought prompting
trick, first explored in the May 2022 paper Large Language Models are Zero-Shot
Reasoners.
This is that trick where, if you get a model to talk out loud about a problem
it’s solving, you often get a result which the model would not have achieved otherwise.
o1 takes this process and further bakes it into the model itself. The details
are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the
problem that are not directly visible to the user (though the ChatGPT UI shows
a summary of them), then outputs a final result.'
- 'Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course of 2024.
Here’s a review of things we figured out about the field in the past twelve months,
plus my attempt at identifying key themes and pivotal moments.
This is a sequel to my review of 2023.
In this article:'
- 'The number of available systems has exploded. Different systems have different
tools they can apply to your problems—like Python and JavaScript and web search
and image generation and maybe even database lookups... so you’d better understand
what those tools are, what they can do and how to tell if the LLM used them or
not.
Did you know ChatGPT has two entirely different ways of running Python now?
Want to build a Claude Artifact that talks to an external API? You’d better understand
CSP and CORS HTTP headers first.'
- source_sentence: Which organizations have models that scored higher than GPT-4-0314?
sentences:
- 'This prompt-driven custom interface feature is so powerful and easy to build
(once you’ve figured out the gnarly details of browser sandboxing) that I expect
it to show up as a feature in a wide range of products in 2025.
Universal access to the best models lasted for just a few short months
For a few short months this year all three of the best available models—GPT-4o,
Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
- 'Then there’s the rest. If you browse the Chatbot Arena leaderboard today—still
the most useful single place to get a vibes-based evaluation of models—you’ll
see that GPT-4-0314 has fallen to around 70th place. The 18 organizations with
higher scoring models are Google, OpenAI, Alibaba, Anthropic, Meta, Reka AI, 01
AI, Amazon, Cohere, DeepSeek, Nvidia, Mistral, NexusFlow, Zhipu AI, xAI, AI21
Labs, Princeton and Tencent.
Training a GPT-4 beating model was a huge deal in 2023. In 2024 it’s an achievement
that isn’t even particularly notable, though I personally still celebrate any
time a new organization joins that list.
Some of those GPT-4 models run on my laptop'
- 'This remains astonishing to me. I thought a model with the capabilities and output
quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.
These models take up enough of my 64GB of RAM that I don’t run them often—they
don’t leave much room for anything else.
The fact that they run at all is a testament to the incredible training and inference
performance gains that we’ve figured out over the past year. It turns out there
was a lot of low-hanging fruit to be harvested in terms of model efficiency. I
expect there’s still more to come.'
- source_sentence: What does the term "slop" refer to in the context of generative
AI usage?
sentences:
- 'I think this means that, as individual users, we don’t need to feel any guilt
at all for the energy consumed by the vast majority of our prompts. The impact
is likely neglible compared to driving a car down the street or maybe even watching
a video on YouTube.
Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
that training costs can and should continue to drop.
For less efficient models I find it useful to compare their energy usage to commercial
flights. The largest Llama 3 model cost about the same as a single digit number
of fully loaded passenger flights from New York to London. That’s certainly not
nothing, but once trained that model can be used by millions of people at no extra
training cost.'
- 'A lot of people absolutely hate this stuff. In some of the spaces I hang out
(Mastodon, Bluesky, Lobste.rs, even Hacker News on occasion) even suggesting that
“LLMs are useful” can be enough to kick off a huge fight.
I get it. There are plenty of reasons to dislike this technology—the environmental
impact, the (lack of) ethics of the training data, the lack of reliability, the
negative applications, the potential impact on people’s jobs.
LLMs absolutely warrant criticism. We need to be talking through these problems,
finding ways to mitigate them and helping people learn how to use these tools
responsibly in ways where the positive applications outweigh the negative.'
- 'I love the term “slop” because it so succinctly captures one of the ways we should
not be using generative AI!
Slop was even in the running for Oxford Word of the Year 2024, but it lost to
brain rot.
Synthetic training data works great
An idea that surprisingly seems to have stuck in the public consciousness is that
of “model collapse”. This was first described in the paper The Curse of Recursion:
Training on Generated Data Makes Models Forget in May 2023, and repeated in Nature
in July 2024 with the more eye-catching headline AI models collapse when trained
on recursively generated data.'
- source_sentence: What are the dates of the articles listed as more recent articles
in the context?
sentences:
- "Posted 31st December 2024 at 6:07 pm · Follow me on Mastodon or Twitter or subscribe\
\ to my newsletter\n\n\nMore recent articles\n\nRun LLMs on macOS using llm-mlx\
\ and Apple's MLX framework - 15th February 2025\nURL-addressable Pyodide Python\
\ environments - 13th February 2025\nUsing pip to install a Large Language Model\
\ that's under 100MB - 7th February 2025\n\n\n \n\n\nThis is Things we learned\
\ about LLMs in 2024 by Simon Willison, posted on 31st December 2024.\n\nPart\
\ of series LLMs annual review\n\nStuff we figured out about AI in 2023 - Dec.\
\ 31, 2023, 11:59 p.m. \nThings we learned about LLMs in 2024 - Dec. 31, 2024,\
\ 6:07 p.m. \n\n\n\n google\n 347\n\n\n ai\n\
\ 1098\n\n\n openai\n 255"
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- 'Against this photo of butterflies at the California Academy of Sciences:
A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange
slices of fruit are visible inside the dish.
Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
with white/cream-colored markings. The other is a large, brown butterfly with
patterns of lighter brown, beige, and black markings, including prominent eye
spots. The larger brown butterfly appears to be feeding on the fruit.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.75
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.75
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.75
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8968216255952429
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.861111111111111
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.861111111111111
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
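The architecture's last two modules (CLS-token pooling followed by L2 normalization) can be sketched in plain NumPy. This is an illustration on mock data, not the real model weights; the random tensor stands in for the `BertModel` token embeddings:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Mirror Pooling(pooling_mode_cls_token=True) + Normalize():
    take the first ([CLS]) token vector per text, then scale to unit length."""
    cls = token_embeddings[:, 0, :]  # (batch, seq_len, hidden) -> (batch, hidden)
    norms = np.linalg.norm(cls, axis=1, keepdims=True)
    return cls / norms

# Mock transformer output: 2 texts, 8 tokens each, 1024 hidden dims
rng = np.random.default_rng(0)
mock_tokens = rng.normal(size=(2, 8, 1024))

emb = cls_pool_and_normalize(mock_tokens)
print(emb.shape)  # (2, 1024)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True
```

Because the final `Normalize()` module produces unit-length vectors, cosine similarity between two embeddings reduces to a plain dot product.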
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ernestobs7/legal-ft-v0")
# Run inference
sentences = [
'What are the dates of the articles listed as more recent articles in the context?',
"Posted 31st December 2024 at 6:07 pm · Follow me on Mastodon or Twitter or subscribe to my newsletter\n\n\nMore recent articles\n\nRun LLMs on macOS using llm-mlx and Apple's MLX framework - 15th February 2025\nURL-addressable Pyodide Python environments - 13th February 2025\nUsing pip to install a Large Language Model that's under 100MB - 7th February 2025\n\n\n \n\n\nThis is Things we learned about LLMs in 2024 by Simon Willison, posted on 31st December 2024.\n\nPart of series LLMs annual review\n\nStuff we figured out about AI in 2023 - Dec. 31, 2023, 11:59 p.m. \nThings we learned about LLMs in 2024 - Dec. 31, 2024, 6:07 p.m. \n\n\n\n google\n 347\n\n\n ai\n 1098\n\n\n openai\n 255",
'Against this photo of butterflies at the California Academy of Sciences:\n\n\nA shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish.\nTwo butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.75 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.75 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.75 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.8968** |
| cosine_mrr@10 | 0.8611 |
| cosine_map@100 | 0.8611 |
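The pattern in the table (precision@3 ≈ 1/3 while recall@3 = 1.0) follows from each query having a single relevant document. A minimal sketch with hypothetical document IDs makes the relationship concrete:

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are relevant."""
    hits = sum(1 for doc in ranked_ids[:k] if doc in relevant_ids)
    return hits / k

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of all relevant documents found in the top k."""
    hits = sum(1 for doc in ranked_ids[:k] if doc in relevant_ids)
    return hits / len(relevant_ids)

ranked = ["d3", "d7", "d1"]  # retrieval order for one query (hypothetical IDs)
relevant = {"d3"}            # exactly one relevant document, as in this eval

print(precision_at_k(ranked, relevant, 3))  # 0.3333333333333333
print(recall_at_k(ranked, relevant, 3))     # 1.0
```

With one relevant document per query, precision@k is capped at 1/k, so the 0.3333 and 0.1 values at k=3 and k=10 indicate the relevant passage was retrieved for every query, not poor precision.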
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |      | sentence_0 | sentence_1 |
  |:-----|:-----------|:-----------|
  | type | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What are the hardware requirements mentioned for running models like GPT-4?</code> | <code>This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.<br>These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else.<br>The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.</code> |
  | <code>What does the author attribute the ability to run these models on less powerful hardware to?</code> | <code>This remains astonishing to me. I thought a model with the capabilities and output quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.<br>These models take up enough of my 64GB of RAM that I don’t run them often—they don’t leave much room for anything else.<br>The fact that they run at all is a testament to the incredible training and inference performance gains that we’ve figured out over the past year. It turns out there was a lot of low-hanging fruit to be harvested in terms of model efficiency. I expect there’s still more to come.</code> |
  | <code>What challenges are associated with using LLMs in 2024?</code> | <code>The year of slop<br>Synthetic training data works great<br>LLMs somehow got even harder to use<br>Knowledge is incredibly unevenly distributed<br>LLMs need better criticism<br>Everything tagged “llms” on my blog in 2024</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
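Because training used MatryoshkaLoss at dimensions [768, 512, 256, 128, 64], the 1024-dim output embeddings should tolerate truncation to any of those sizes with modest quality loss. A minimal sketch of the truncate-then-renormalize step, using a mock vector in place of a real model output:

```python
import numpy as np

def truncate_embedding(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and rescale to unit length,
    so cosine similarity remains a plain dot product."""
    truncated = embedding[:dim]
    return truncated / np.linalg.norm(truncated)

# Mock full-size embedding standing in for a real 1024-dim model output
rng = np.random.default_rng(0)
full = rng.normal(size=1024)
full /= np.linalg.norm(full)

small = truncate_embedding(full, 256)
print(small.shape)  # (256,)
```

In practice Sentence Transformers can do this for you via the `truncate_dim` argument, e.g. `SentenceTransformer("ernestobs7/legal-ft-v0", truncate_dim=256)` (assuming a recent library version that supports it).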
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters