tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:78
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
1. What role does synthetic data play in the pretraining of models,
particularly in the Phi series?
2. How does synthetic data compare to organic data in terms of advantages?
sentences:
- >-
Synthetic data as a substantial component of pretraining is becoming
increasingly common, and the Phi series of models has consistently
emphasized the importance of synthetic data. Rather than serving as a
cheap substitute for organic data, synthetic data has several direct
advantages over organic data.
- >-
The two main categories I see are people who think AI agents are
obviously things that go and act on your behalf—the travel agent
model—and people who think in terms of LLMs that have been given access
to tools which they can run in a loop as part of solving a problem. The
term “autonomy” is often thrown into the mix too, again without
including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they
are in Datasette Lite—and had gemini-exp-1206 attempt to summarize
them.)
Whatever the term may mean, agents still have that feeling of
perpetually “coming soon”.
- >-
Terminology aside, I remain skeptical as to their utility based, once
again, on the challenge of gullibility. LLMs believe anything you tell
them. Any systems that attempts to make meaningful decisions on your
behalf will run into the same roadblock: how good is a travel agent, or
a digital assistant, or even a research tool if it can’t distinguish
truth from fiction?
Just the other day Google Search was caught serving up an entirely fake
description of the non-existant movie “Encanto 2”. It turned out to be
summarizing an imagined movie listing from a fan fiction wiki.
- source_sentence: >-
1. What is the mlx-vlm project and how does it relate to vision LLMs on
Apple Silicon?
2. What were the author's initial thoughts on Apple's "Apple Intelligence"
features following their announcement in June?
sentences:
- >-
The GPT-4 barrier was comprehensively broken
In my December 2023 review I wrote about how We don’t yet know how to
build GPT-4—OpenAI’s best model was almost a year old at that point, yet
no other AI lab had produced anything better. What did OpenAI know that
the rest of us didn’t?
I’m relieved that this has changed completely in the past twelve months.
18 organizations now have models on the Chatbot Arena Leaderboard that
rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
board)—70 models in total.
- |-
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged “llms” on my blog in 2024
- >-
Prince Canuma’s excellent, fast moving mlx-vlm project brings vision
LLMs to Apple Silicon as well. I used that recently to run Qwen’s QvQ.
While MLX is a game changer, Apple’s own “Apple Intelligence” features
have mostly been a disappointment. I wrote about their initial
announcement in June, and I was optimistic that Apple had focused hard
on the subset of LLM applications that preserve user privacy and
minimize the chance of users getting mislead by confusing features.
- source_sentence: >-
1. What improvements were noted in the intonation of ChatGPT Advanced
Voice mode during its rollout?
2. How did the user experiment with accents in the Advanced Voice mode?
sentences:
- >-
When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
August through September) it was spectacular. I’ve been using it
extensively on walks with my dog and it’s amazing how much the
improvement in intonation elevates the material. I’ve also had a lot of
fun experimenting with the OpenAI audio APIs.
Even more fun: Advanced Voice mode can do accents! Here’s what happened
when I told it I need you to pretend to be a California brown pelican
with a very thick Russian accent, but you talk to me exclusively in
Spanish.
- >-
One way to think about these models is an extension of the
chain-of-thought prompting trick, first explored in the May 2022 paper
Large Language Models are Zero-Shot Reasoners.
This is that trick where, if you get a model to talk out loud about a
problem it’s solving, you often get a result which the model would not
have achieved otherwise.
o1 takes this process and further bakes it into the model itself. The
details are somewhat obfuscated: o1 models spend “reasoning tokens”
thinking through the problem that are not directly visible to the user
(though the ChatGPT UI shows a summary of them), then outputs a final
result.
- >-
The May 13th announcement of GPT-4o included a demo of a brand new voice
mode, where the true multi-modal GPT-4o (the o is for “omni”) model
could accept audio input and output incredibly realistic sounding speech
without needing separate TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and
after she complained the voice from the demo, Skye, never made it to a
production product.
The delay in releasing the new voice mode after the initial demo caused
quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
not running the new features yet.
- source_sentence: >-
1. What advantages does a 64GB Mac have for running models compared to
other machines?
2. How does the mlx-lm Python library enhance the performance of
MLX-compatible models on a Mac?
sentences:
- >-
On paper, a 64GB Mac should be a great machine for running models due to
the way the CPU and GPU can share the same memory. In practice, many
models are released as model weights and libraries that reward NVIDIA’s
CUDA over other platforms.
The llama.cpp ecosystem helped a lot here, but the real breakthrough has
been Apple’s MLX library, “an array framework for Apple Silicon”. It’s
fantastic.
Apple’s mlx-lm Python library supports running a wide range of
MLX-compatible models on my Mac, with excellent performance.
mlx-community on Hugging Face offers more than 1,000 models that have
been converted to the necessary format.
- >-
The earliest of those was Google’s Gemini 1.5 Pro, released in February.
In addition to producing GPT-4 level outputs, it introduced several
brand new capabilities to the field—most notably its 1 million (and then
later 2 million) token input context length, and the ability to input
video.
I wrote about this at the time in The killer app of Gemini Pro 1.5 is
video, which earned me a short appearance as a talking head in the
Google I/O opening keynote in May.
- >-
The biggest innovation here is that it opens up a new way to scale a
model: instead of improving model performance purely through additional
compute at training time, models can now take on harder problems by
spending more compute on inference.
The sequel to o1, o3 (they skipped “o2” for European trademark reasons)
was announced on 20th December with an impressive result against the
ARC-AGI benchmark, albeit one that likely involved more than $1,000,000
of compute time expense!
o3 is expected to ship in January. I doubt many people have real-world
problems that would benefit from that level of compute expenditure—I
certainly don’t!—but it appears to be a genuine next step in LLM
architecture for taking on much harder problems.
- source_sentence: >-
1. What technique is being used by labs to create training data for
smaller models?
2. How many synthetically generated examples were used in Meta’s Llama 3.3
70B fine-tuning?
sentences:
- >-
The number of available systems has exploded. Different systems have
different tools they can apply to your problems—like Python and
JavaScript and web search and image generation and maybe even database
lookups... so you’d better understand what those tools are, what they
can do and how to tell if the LLM used them or not.
Did you know ChatGPT has two entirely different ways of running Python
now?
Want to build a Claude Artifact that talks to an external API? You’d
better understand CSP and CORS HTTP headers first.
- >-
7th: Prompts.js
9th: I can now run a GPT-4 class model on my laptop
10th: ChatGPT Canvas can make API requests now, but it’s complicated
11th: Gemini 2.0 Flash: An outstanding multi-modal LLM with a sci-fi
streaming mode
19th: Building Python tools with a one-shot prompt using uv run and
Claude Projects
19th: Gemini 2.0 Flash “Thinking mode”
20th: December in LLMs has been a lot
20th: Live blog: the 12th day of OpenAI—“Early evals for OpenAI o3”
24th: Trying out QvQ—Qwen’s new visual reasoning model
31st: Things we learned about LLMs in 2024
(This list generated using Django SQL Dashboard with a SQL query written
for me by Claude.)
- >-
Another common technique is to use larger models to help create training
data for their smaller, cheaper alternatives—a trick used by an
increasing number of labs. DeepSeek v3 used “reasoning” data created by
DeepSeek-R1. Meta’s Llama 3.3 70B fine-tuning used over 25M
synthetically generated examples.
Careful design of the training data that goes into an LLM appears to be
the entire game for creating these models. The days of just grabbing a
full scrape of the web and indiscriminately dumping it into a training
run are long gone.
LLMs somehow got even harder to use
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8333333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8333333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8333333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9384882922619097
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9166666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9166666666666666
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
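The modules above mean that embeddings are taken from the CLS token of the final layer and then L2-normalized, so dot products between embeddings are already cosine similarities. As a rough, hedged illustration (not part of the original card), the same pipeline can be approximated with the `transformers` library directly; it assumes the transformer weights are loadable from the repository root, as is standard for Sentence Transformers checkpoints:

```python
# Illustrative only: approximates Transformer -> CLS pooling -> Normalize()
# with plain transformers. Use the SentenceTransformer API below for real work.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "Rsr2425/legal-ft-2"  # assumption: transformer weights live at the repo root
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state   # (batch, seq_len, 1024)
cls_embedding = hidden[:, 0]                   # CLS-token pooling
embedding = F.normalize(cls_embedding, dim=1)  # Normalize() module
print(embedding.shape)                         # torch.Size([1, 1024])
```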
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Rsr2425/legal-ft-2")
# Run inference
sentences = [
    '1. What technique is being used by labs to create training data for smaller models?\n2. How many synthetically generated examples were used in Meta’s Llama 3.3 70B fine-tuning?',
    'Another common technique is to use larger models to help create training data for their smaller, cheaper alternatives—a trick used by an increasing number of labs. DeepSeek v3 used “reasoning” data created by DeepSeek-R1. Meta’s Llama 3.3 70B fine-tuning used over 25M synthetically generated examples.\nCareful design of the training data that goes into an LLM appears to be the entire game for creating these models. The days of just grabbing a full scrape of the web and indiscriminately dumping it into a training run are long gone.\nLLMs somehow got even harder to use',
    '7th: Prompts.js\n\n9th: I can now run a GPT-4 class model on my laptop\n\n10th: ChatGPT Canvas can make API requests now, but it’s complicated\n\n11th: Gemini 2.0 Flash: An outstanding multi-modal LLM with a sci-fi streaming mode\n\n19th: Building Python tools with a one-shot prompt using uv run and Claude Projects\n\n19th: Gemini 2.0 Flash “Thinking mode”\n\n20th: December in LLMs has been a lot\n\n20th: Live blog: the 12th day of OpenAI—“Early evals for OpenAI o3”\n\n24th: Trying out QvQ—Qwen’s new visual reasoning model\n\n31st: Things we learned about LLMs in 2024\n\n\n\n\n(This list generated using Django SQL Dashboard with a SQL query written for me by Claude.)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
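Because the model was trained with MatryoshkaLoss (see Training Details below), its embeddings can also be truncated to one of the trained dimensionalities and still compared with cosine similarity. A minimal sketch, assuming 256-dimensional vectors are wanted:

```python
from sentence_transformers import SentenceTransformer

# truncate_dim should be one of the Matryoshka dimensions used in training:
# 768, 512, 256, 128 or 64 (see Training Details below).
model_256 = SentenceTransformer("Rsr2425/legal-ft-2", truncate_dim=256)

embeddings = model_256.encode([
    "1. What role does synthetic data play in the pretraining of models?",
    "Synthetic data as a substantial component of pretraining is becoming increasingly common.",
])
print(embeddings.shape)  # (2, 256)

similarities = model_256.similarity(embeddings, embeddings)
print(similarities.shape)  # [2, 2]
```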
Evaluation
Metrics
Information Retrieval
- Evaluated with `InformationRetrievalEvaluator`
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8333 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.8333 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.8333 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9385 |
cosine_mrr@10 | 0.9167 |
cosine_map@100 | 0.9167 |
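The table above was produced by the library's `InformationRetrievalEvaluator`. A minimal sketch of running the same evaluator on your own data; the queries, corpus, and relevance judgments below are placeholders, not the held-out set used for these numbers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Rsr2425/legal-ft-2")

# Placeholder data: query id -> text, document id -> text, query id -> relevant doc ids
queries = {"q1": "What technique is used to create training data for smaller models?"}
corpus = {
    "d1": "Larger models are used to help create training data for smaller, cheaper alternatives.",
    "d2": "The GPT-4 barrier was comprehensively broken.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100
```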
Training Details
Training Dataset
Unnamed Dataset
- Size: 78 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 78 samples:
  | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 30 tokens, mean: 42.76 tokens, max: 59 tokens | min: 43 tokens, mean: 130.5 tokens, max: 204 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
1. What key themes and pivotal moments in the field of Large Language Models were identified in 2024?<br>2. How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs? | Things we learned about LLMs in 2024<br>Simon Willison’s Weblog<br>Subscribe<br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article: |
1. What advancements in multimodal capabilities have been observed in LLMs, particularly regarding audio and video?<br>2. How has the competition among LLMs affected their pricing and accessibility over time? | The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse |
1. What challenges are associated with using LLMs in 2024?<br>2. How is knowledge distribution described in the context of LLMs? | The year of slop<br>Synthetic training data works great<br>LLMs somehow got even harder to use<br>Knowledge is incredibly unevenly distributed<br>LLMs need better criticism<br>Everything tagged “llms” on my blog in 2024 |
- Loss: `MatryoshkaLoss` with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
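For reference, a loss configured this way is typically built by wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`. A hedged sketch (the training dataset variable is a placeholder, not the actual 78-pair dataset):

```python
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)

# train_dataset: a placeholder for the (sentence_0, sentence_1) pairs described above
# trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
# trainer.train()
```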
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
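Expressed with the `SentenceTransformerTrainingArguments` API, these non-default values correspond roughly to the following (a sketch; `output_dir` is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
```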
All Hyperparameters
Click to expand
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 8 | 1.0 |
2.0 | 16 | 0.9583 |
3.0 | 24 | 0.9276 |
4.0 | 32 | 0.9385 |
5.0 | 40 | 0.9385 |
6.0 | 48 | 0.9385 |
6.25 | 50 | 0.9385 |
7.0 | 56 | 0.9385 |
8.0 | 64 | 0.9385 |
9.0 | 72 | 0.9385 |
10.0 | 80 | 0.9385 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```