---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
What is the significance of Claude Artifacts in the context of LLMs and
application development?
sentences:
- >-
The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the
infrastructure that is imagined to be necessary for these models in the
future.
Companies like Google, Meta, Microsoft and Amazon are all spending
billions of dollars rolling out new datacenters, with a very material
impact on the electricity grid and the environment. There’s even talk of
spinning up new nuclear power stations, but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and
the continued crash in LLM prices might hint that it’s not. But would
you want to be the big tech executive that argued NOT to build out this
infrastructure only to be proven wrong in a few years’ time?
- >-
We already knew LLMs were spookily good at writing code. If you prompt
them right, it turns out they can build you a full interactive
application using HTML, CSS and JavaScript (and tools like React if you
wire up some extra supporting build mechanisms)—often in a single
prompt.
Anthropic kicked this idea into high gear when they released Claude
Artifacts, a groundbreaking new feature that was initially slightly lost
in the noise due to being described half way through their announcement
of the incredible Claude 3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive
application and then let you use it directly inside the Claude
interface.
Here’s my Extract URLs app, entirely generated by Claude:
- >-
This prompt-driven custom interface feature is so powerful and easy to
build (once you’ve figured out the gnarly details of browser sandboxing)
that I expect it to show up as a feature in a wide range of products in
2025.
Universal access to the best models lasted for just a few short months
For a few short months this year all three of the best available
models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely
available to most of the world.
- source_sentence: What challenges are associated with using LLMs in the year of slop?
sentences:
- >-
I also gave a bunch of talks and podcast appearances. I’ve started
habitually turning my talks into annotated presentations—here are my
best from 2023:
Prompt injection explained, with video, slides, and a transcript
Catching up on the weird world of LLMs
Making Large Language Models work for you
Open questions for AI engineering
Embeddings: What they are and why they matter
Financial sustainability for open source projects at GitHub Universe
And in podcasts:
What AI can do for you on the Theory of Change
Working in public on Path to Citus Con
LLMs break the internet on the Changelog
Talking Large Language Models on Rooftop Ruby
Thoughts on the OpenAI board situation on Newsroom Robots
- |-
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged “llms” on my blog in 2024
- >-
The boring yet crucial secret behind good system prompts is test-driven
development. You don’t write down a system prompt and find ways to test
it. You write down tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good
automated evals for LLM-powered systems is the skill that’s most needed
to build useful applications on top of these models. If you have a
strong eval suite you can adopt new models faster, iterate better and
build more reliable and useful product features than your competition.
Vercel’s Malte Ubl:
- source_sentence: >-
What features did GitHub and Mistral Chat introduce in relation to the
author's findings?
sentences:
- >-
Except... you can run generated code to see if it’s correct. And with
patterns like ChatGPT Code Interpreter the LLM can execute the code
itself, process the error message, then rewrite it and keep trying until
it works!
So hallucination is a much lesser problem for code generation than for
anything else. If only we had the equivalent of Code Interpreter for
fact-checking natural language!
How should we feel about this as software engineers?
On the one hand, this feels like a threat: who needs a programmer if
ChatGPT can write code for you?
- >-
I’ve found myself using this a lot. I noticed how much I was relying on
it in October and wrote Everything I built with Claude Artifacts this
week, describing 14 little tools I had put together in a seven day
period.
Since then, a whole bunch of other teams have built similar systems.
GitHub announced their version of this—GitHub Spark—in October. Mistral
Chat added it as a feature called Canvas in November.
Steve Krouse from Val Town built a version of it against Cerebras,
showcasing how a 2,000 token/second LLM can iterate on an application
with changes visible in less than a second.
- >-
This remains astonishing to me. I thought a model with the capabilities
and output quality of GPT-4 needed a datacenter class server with one or
more $40,000+ GPUs.
These models take up enough of my 64GB of RAM that I don’t run them
often—they don’t leave much room for anything else.
The fact that they run at all is a testament to the incredible training
and inference performance gains that we’ve figured out over the past
year. It turns out there was a lot of low-hanging fruit to be harvested
in terms of model efficiency. I expect there’s still more to come.
- source_sentence: >-
Why did the voice from the demo, named Skye, not make it to a production
product?
sentences:
- >-
A lot of people are excited about AI agents—an infuriatingly vague term
that seems to be converging on “AI systems that can go away and act on
your behalf”. We’ve been talking about them all year, but I’ve seen few
if any examples of them running in production, despite lots of exciting
prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t
fully solve gullibility without achieving AGI. So it may be quite a
while before those agent dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing
code is one of the things LLMs are most capable of.
- >-
Embeddings: What they are and why they matter
61.7k
79.3k
Catching up on the weird world of LLMs
61.6k
85.9k
llamafile is the new best way to run an LLM on your own computer
52k
66k
Prompt injection explained, with video, slides, and a transcript
51k
61.9k
AI-enhanced development makes me more ambitious with my projects
49.6k
60.1k
Understanding GPT tokenizers
49.5k
61.1k
Exploring GPTs: ChatGPT in a trench coat?
46.4k
58.5k
Could you train a ChatGPT-beating model for $85,000 and run it in a
browser?
40.5k
49.2k
How to implement Q&A against your documentation with GPT3, embeddings
and Datasette
37.3k
44.9k
Lawyer cites fake cases invented by ChatGPT, judge is not amused
37.1k
47.4k
- >-
The May 13th announcement of GPT-4o included a demo of a brand new voice
mode, where the true multi-modal GPT-4o (the o is for “omni”) model
could accept audio input and output incredibly realistic sounding speech
without needing separate TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and
after she complained the voice from the demo, Skye, never made it to a
production product.
The delay in releasing the new voice mode after the initial demo caused
quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
not running the new features yet.
- source_sentence: >-
What are some of the new features introduced in multi-modal models that
enhance their capabilities beyond text?
sentences:
- >-
I think people who complain that LLM improvement has slowed are often
missing the enormous advances in these multi-modal models. Being able to
run prompts against images (and audio and video) is a fascinating new
way to apply these models.
Voice and live camera mode are science fiction come to life
The audio and live video modes that have started to emerge deserve a
special mention.
The ability to talk to ChatGPT first arrived in September 2023, but it
was mostly an illusion: OpenAI used their excellent Whisper
speech-to-text model and a new text-to-speech model (creatively named
tts-1) to enable conversations with the ChatGPT mobile apps, but the
actual model just saw text.
- >-
Then in February, Meta released Llama. And a few weeks later in March,
Georgi Gerganov released code that got it working on a MacBook.
I wrote about how Large language models are having their Stable
Diffusion moment, and with hindsight that was a very good call!
This unleashed a whirlwind of innovation, which was accelerated further
in July when Meta released Llama 2—an improved version which, crucially,
included permission for commercial use.
Today there are literally thousands of LLMs that can be run locally, on
all manner of different devices.
- >-
260 input tokens, 92 output tokens. Cost approximately 0.0024 cents
(that’s less than a 400th of a cent).
This increase in efficiency and reduction in price is my single
favourite trend from 2024. I want the utility of LLMs at a fraction of
the energy cost and it looks like that’s what we’re getting.
Multimodal vision is common, audio and video are starting to emerge
My butterfly example above illustrates another key trend from 2024: the
rise of multi-modal LLMs.
A year ago the single most notable example of these was GPT-4 Vision,
released at OpenAI’s DevDay in November 2023. Google’s multi-modal
Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it
into the 2023 window.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("njhaveri/legal-ft-2")
# Run inference
sentences = [
'What are some of the new features introduced in multi-modal models that enhance their capabilities beyond text?',
'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.\nVoice and live camera mode are science fiction come to life\nThe audio and live video modes that have started to emerge deserve a special mention.\nThe ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.',
'260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less than a 400th of a cent).\nThis increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that’s what we’re getting.\nMultimodal vision is common, audio and video are starting to emerge\nMy butterfly example above illustrates another key trend from 2024: the rise of multi-modal LLMs.\nA year ago the single most notable example of these was GPT-4 Vision, released at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it into the 2023 window.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9167 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.9167 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.9167 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9692 |
cosine_mrr@10 | 0.9583 |
cosine_map@100 | 0.9583 |
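The table above comes from the InformationRetrievalEvaluator. A minimal sketch of running such an evaluation yourself, using hypothetical placeholder queries, corpus, and relevance judgments rather than the actual held-out split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("njhaveri/legal-ft-2")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What significant change occurred in LLM pricing over the past twelve months?"}
corpus = {
    "d1": "The past twelve months have seen a dramatic collapse in the cost of running a prompt through the top tier hosted LLMs.",
    "d2": "With Artifacts, Claude can write you an on-demand interactive application.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # includes keys such as example_cosine_ndcg@10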
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 12 tokens, mean: 20.29 tokens, max: 31 tokens | min: 43 tokens, mean: 135.13 tokens, max: 214 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
Why is it important for language models to believe the information provided to them? | Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt. In order to be useful tools for us, we need them to believe what we feed them! But it turns out a lot of the things we want to build need them not to be gullible. Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited. |
What are the potential drawbacks of having a language model that is overly gullible? | Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt. In order to be useful tools for us, we need them to believe what we feed them! But it turns out a lot of the things we want to build need them not to be gullible. Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited. |
What significant change occurred in LLM pricing over the past twelve months? | Here’s the rest of the transcript. It’s bland and generic, but my phone can pitch bland and generic Christmas movies to Netflix now! LLM prices crashed, thanks to competition and increased efficiency. The past twelve months have seen a dramatic collapse in the cost of running a prompt through the top tier hosted LLMs. In December 2023 (here’s the Internet Archive for the OpenAI pricing page) OpenAI were charging $30/million input tokens for GPT-4, $10/mTok for the then-new GPT-4 Turbo and $1/mTok for GPT-3.5 Turbo. |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }
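A minimal sketch of how a loss with these parameters can be constructed, assuming the base model named above (the training data and trainer setup are omitted):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Wrap the ranking loss so it is also applied to embeddings truncated to smaller sizes
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)

Because of this objective, the embeddings can be truncated to any of the listed dimensions (for example by loading the model with truncate_dim=256) with only a modest loss in retrieval quality.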
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
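A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments (the output_dir is a placeholder; all other arguments keep their defaults, as listed below):

from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/legal-ft-2",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)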
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.9692 |
2.0 | 32 | 0.9692 |
3.0 | 48 | 0.9692 |
3.125 | 50 | 0.9692 |
4.0 | 64 | 0.9692 |
5.0 | 80 | 0.9692 |
6.0 | 96 | 0.9692 |
6.25 | 100 | 0.9692 |
7.0 | 112 | 0.9692 |
8.0 | 128 | 0.9692 |
9.0 | 144 | 0.9692 |
9.375 | 150 | 0.9692 |
10.0 | 160 | 0.9692 |
Framework Versions
- Python: 3.13.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}