Add new SentenceTransformer model

Files changed:
  README.md          +97 −97
  model.safetensors  +1 −1

README.md (CHANGED)
@@ -31,33 +31,30 @@ widget:
 
   In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
     It now has plugins for a whole collection of different vision models.'
-  - '
-
-    The environmental impact got better
 
-    of running a prompt has dropped enormously over the past couple of years.
 
-
- source_sentence: How did the construction of railways in the 1800s impact the environment?
   sentences:
-  - '
-
-    new users got a very inaccurate mental model of what a capable LLM could actually
-    do.
 
-    That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
-    Pro. This $200/month subscription service is the only way to access their most
-    capable model, o1 Pro.
 
-
  - 'An interesting point of comparison here could be the way railways rolled out
    around the world in the 1800s. Constructing these required enormous investments
    and had a massive environmental impact, and many of the lines that were built
@@ -70,18 +67,19 @@ widget:
    environmental damage.
 
    The year of slop'
-  - '
-
 
-    new models faster, iterate better and build more reliable and useful product features
-    than your competition.
 
-
- source_sentence: Why does the author believe that gullibility may hinder the development
     of AI agents?
   sentences:
@@ -112,6 +110,23 @@ widget:
 
    Over the course of the year, it’s become increasingly clear that writing code
    is one of the things LLMs are most capable of.'
  - 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
    models currently available, significantly bigger than the largest of Meta’s Llama
    series, Llama 3.1 405B.
@@ -124,9 +139,6 @@ widget:
    was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
    3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
    that benchmarks slightly worse.'
-- source_sentence: How did the approach to handling prompts change after the initial
-    release of @v0?
-  sentences:
  - 'So far, I think they’re a net positive. I’ve used them on a personal level to
    improve my productivity (and entertain myself) in all sorts of different ways.
    I think people who learn how to use them effectively can gain a significant boost
@@ -140,38 +152,26 @@ widget:
 
    The most surprising thing we’ve learned about LLMs this year is that they’re actually
    quite easy to build.'
-  - 'The environmental impact got much, much worse
-
-    The much bigger problem here is the enormous competitive buildout of the infrastructure
-    that is imagined to be necessary for these models in the future.
-
-    Companies like Google, Meta, Microsoft and Amazon are all spending billions of
-    dollars rolling out new datacenters, with a very material impact on the electricity
-    grid and the environment. There’s even talk of spinning up new nuclear power stations,
-    but those can take decades.
-
-    Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
-    crash in LLM prices might hint that it’s not. But would you want to be the big
-    tech executive that argued NOT to build out this infrastructure only to be proven
-    wrong in a few years’ time?'
  - 'When @v0 first came out we were paranoid about protecting the prompt with all
    kinds of pre and post processing complexity.
 
    We completely pivoted to let it rip. A prompt without the evals, models, and especially
    UX is like getting a broken ASML machine without a manual'
-- source_sentence: What
-
   sentences:
-  - '
-
 
-    don’t leave much room for anything else.
 
-
  - 'Terminology aside, I remain skeptical as to their utility based, once again,
    on the challenge of gullibility. LLMs believe anything you tell them. Any systems
    that attempts to make meaningful decisions on your behalf will run into the same
@@ -221,7 +221,7 @@ model-index:
       type: unknown
       metrics:
       - type: cosine_accuracy@1
-        value:
         name: Cosine Accuracy@1
       - type: cosine_accuracy@3
         value: 1.0
@@ -233,7 +233,7 @@ model-index:
         value: 1.0
         name: Cosine Accuracy@10
       - type: cosine_precision@1
-        value:
         name: Cosine Precision@1
       - type: cosine_precision@3
         value: 0.3333333333333333
@@ -245,7 +245,7 @@ model-index:
         value: 0.10000000000000002
         name: Cosine Precision@10
       - type: cosine_recall@1
-        value:
         name: Cosine Recall@1
       - type: cosine_recall@3
         value: 1.0
@@ -257,13 +257,13 @@ model-index:
         value: 1.0
         name: Cosine Recall@10
       - type: cosine_ndcg@10
-        value:
         name: Cosine Ndcg@10
       - type: cosine_mrr@10
-        value:
         name: Cosine Mrr@10
       - type: cosine_map@100
-        value:
         name: Cosine Map@100
 ---
 
@@ -317,8 +317,8 @@ from sentence_transformers import SentenceTransformer
 model = SentenceTransformer("lsy9874205/legal-ft-2")
 # Run inference
 sentences = [
-    'What
-    '
     'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
 ]
 embeddings = model.encode(sentences)
@@ -363,23 +363,23 @@ You can finetune this model on your own dataset.
 
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
 
-| Metric              | Value
-|:--------------------|:------
-| cosine_accuracy@1   |
-| cosine_accuracy@3   | 1.0
-| cosine_accuracy@5   | 1.0
-| cosine_accuracy@10  | 1.0
-| cosine_precision@1  |
-| cosine_precision@3  | 0.3333
-| cosine_precision@5  | 0.2
-| cosine_precision@10 | 0.1
-| cosine_recall@1     |
-| cosine_recall@3     | 1.0
-| cosine_recall@5     | 1.0
-| cosine_recall@10    | 1.0
-| **cosine_ndcg@10**  | **
-| cosine_mrr@10       |
-| cosine_map@100      |
 
 <!--
 ## Bias, Risks and Limitations
@@ -405,7 +405,7 @@ You can finetune this model on your own dataset.
 |         | sentence_0 | sentence_1 |
 |:--------|:-----------|:-----------|
 | type    | string     | string     |
-| details | <ul><li>min: 12 tokens</li><li>mean: 20.
 * Samples:
 | sentence_0 | sentence_1 |
 |:-----------|:-----------|
@@ -567,19 +567,19 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch | Step | cosine_ndcg@10 |
 |:-----:|:----:|:--------------:|
-| 1.0   | 16   |
-| 2.0   | 32   | 0.
-| 3.0   | 48   |
-| 3.125 | 50   |
-| 4.0   | 64   |
-| 5.0   | 80   |
-| 6.0   | 96   |
-| 6.25  | 100  |
-| 7.0   | 112  |
-| 8.0   | 128  |
-| 9.0   | 144  |
-| 9.375 | 150  |
-| 10.0  | 160  |
 
 ### Framework Versions
 
   In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
     It now has plugins for a whole collection of different vision models.'
+  - 'This remains astonishing to me. I thought a model with the capabilities and output
+    quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.
+
+    These models take up enough of my 64GB of RAM that I don’t run them often—they
+    don’t leave much room for anything else.
+
+    The fact that they run at all is a testament to the incredible training and inference
+    performance gains that we’ve figured out over the past year. It turns out there
+    was a lot of low-hanging fruit to be harvested in terms of model efficiency. I
+    expect there’s still more to come.'
 - source_sentence: How did the construction of railways in the 1800s impact the environment?
   sentences:
+  - 'The boring yet crucial secret behind good system prompts is test-driven development.
+    You don’t write down a system prompt and find ways to test it. You write down
+    tests and find a system prompt that passes them.
+
+    It’s become abundantly clear over the course of 2024 that writing good automated
+    evals for LLM-powered systems is the skill that’s most needed to build useful
+    applications on top of these models. If you have a strong eval suite you can adopt
+    new models faster, iterate better and build more reliable and useful product features
+    than your competition.
+
+    Vercel’s Malte Ubl:'
  - 'An interesting point of comparison here could be the way railways rolled out
    around the world in the 1800s. Constructing these required enormous investments
    and had a massive environmental impact, and many of the lines that were built
    environmental damage.
 
    The year of slop'
+  - 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
+    available from its launch in June. This was a momentus change, because for the
+    previous year free users had mostly been restricted to GPT-3.5 level models, meaning
+    new users got a very inaccurate mental model of what a capable LLM could actually
+    do.
+
+    That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
+    Pro. This $200/month subscription service is the only way to access their most
+    capable model, o1 Pro.
+
+    Since the trick behind the o1 series (and the future models it will undoubtedly
+    inspire) is to expend more compute time to get better results, I don’t think those
+    days of free access to the best available models are likely to return.'
 - source_sentence: Why does the author believe that gullibility may hinder the development
     of AI agents?
   sentences:
 
    Over the course of the year, it’s become increasingly clear that writing code
    is one of the things LLMs are most capable of.'
+  - 'The environmental impact got much, much worse
+
+    The much bigger problem here is the enormous competitive buildout of the infrastructure
+    that is imagined to be necessary for these models in the future.
+
+    Companies like Google, Meta, Microsoft and Amazon are all spending billions of
+    dollars rolling out new datacenters, with a very material impact on the electricity
+    grid and the environment. There’s even talk of spinning up new nuclear power stations,
+    but those can take decades.
+
+    Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
+    crash in LLM prices might hint that it’s not. But would you want to be the big
+    tech executive that argued NOT to build out this infrastructure only to be proven
+    wrong in a few years’ time?'
+- source_sentence: How did the approach to handling prompts change after the initial
+    release of @v0?
+  sentences:
  - 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
    models currently available, significantly bigger than the largest of Meta’s Llama
    series, Llama 3.1 405B.
...
    was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
    3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
    that benchmarks slightly worse.'
  - 'So far, I think they’re a net positive. I’ve used them on a personal level to
    improve my productivity (and entertain myself) in all sorts of different ways.
    I think people who learn how to use them effectively can gain a significant boost
...
 
    The most surprising thing we’ve learned about LLMs this year is that they’re actually
    quite easy to build.'
  - 'When @v0 first came out we were paranoid about protecting the prompt with all
    kinds of pre and post processing complexity.
 
    We completely pivoted to let it rip. A prompt without the evals, models, and especially
    UX is like getting a broken ASML machine without a manual'
+- source_sentence: What changes have occurred in the energy usage and environmental
+    impact of running AI prompts over the past couple of years?
   sentences:
+  - 'Those US export regulations on GPUs to China seem to have inspired some very
+    effective training optimizations!
+
+    The environmental impact got better
+
+    A welcome result of the increased efficiency of the models—both the hosted ones
+    and the ones I can run locally—is that the energy usage and environmental impact
+    of running a prompt has dropped enormously over the past couple of years.
+
+    OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days.
+    I have it on good authority that neither Google Gemini nor Amazon Nova (two of
+    the least expensive model providers) are running prompts at a loss.'
  - 'Terminology aside, I remain skeptical as to their utility based, once again,
    on the challenge of gullibility. LLMs believe anything you tell them. Any systems
    that attempts to make meaningful decisions on your behalf will run into the same
       type: unknown
       metrics:
       - type: cosine_accuracy@1
+        value: 0.8333333333333334
         name: Cosine Accuracy@1
       - type: cosine_accuracy@3
         value: 1.0
...
         value: 1.0
         name: Cosine Accuracy@10
       - type: cosine_precision@1
+        value: 0.8333333333333334
         name: Cosine Precision@1
       - type: cosine_precision@3
         value: 0.3333333333333333
...
         value: 0.10000000000000002
         name: Cosine Precision@10
       - type: cosine_recall@1
+        value: 0.8333333333333334
         name: Cosine Recall@1
       - type: cosine_recall@3
         value: 1.0
...
         value: 1.0
         name: Cosine Recall@10
       - type: cosine_ndcg@10
+        value: 0.9330328858630988
         name: Cosine Ndcg@10
       - type: cosine_mrr@10
+        value: 0.9097222222222222
         name: Cosine Mrr@10
       - type: cosine_map@100
+        value: 0.9097222222222223
         name: Cosine Map@100
 ---
 
 model = SentenceTransformer("lsy9874205/legal-ft-2")
 # Run inference
 sentences = [
+    'What changes have occurred in the energy usage and environmental impact of running AI prompts over the past couple of years?',
+    'Those US export regulations on GPUs to China seem to have inspired some very effective training optimizations!\nThe environmental impact got better\nA welcome result of the increased efficiency of the models—both the hosted ones and the ones I can run locally—is that the energy usage and environmental impact of running a prompt has dropped enormously over the past couple of years.\nOpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days. I have it on good authority that neither Google Gemini nor Amazon Nova (two of the least expensive model providers) are running prompts at a loss.',
     'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
 ]
 embeddings = model.encode(sentences)
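The inference snippet in this hunk stops at `model.encode`. For retrieval, the resulting row vectors are then compared by cosine similarity (recent sentence-transformers releases also expose a `model.similarity` helper for this). A minimal numpy sketch of that comparison, using toy vectors in place of real embeddings:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms  # unit-length rows
    return normalized @ normalized.T

# Toy 3x4 "embeddings" standing in for model.encode(...) output.
vecs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])
sims = cosine_similarity_matrix(vecs)  # identical rows score 1.0, orthogonal rows 0.0
```

The retrieval metrics reported in this card come from ranking corpus chunks by exactly this kind of score against each query embedding.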
 
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
 
+| Metric              | Value     |
+|:--------------------|:----------|
+| cosine_accuracy@1   | 0.8333    |
+| cosine_accuracy@3   | 1.0       |
+| cosine_accuracy@5   | 1.0       |
+| cosine_accuracy@10  | 1.0       |
+| cosine_precision@1  | 0.8333    |
+| cosine_precision@3  | 0.3333    |
+| cosine_precision@5  | 0.2       |
+| cosine_precision@10 | 0.1       |
+| cosine_recall@1     | 0.8333    |
+| cosine_recall@3     | 1.0       |
+| cosine_recall@5     | 1.0       |
+| cosine_recall@10    | 1.0       |
+| **cosine_ndcg@10**  | **0.933** |
+| cosine_mrr@10       | 0.9097    |
+| cosine_map@100      | 0.9097    |
 
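These are the standard InformationRetrievalEvaluator metrics. When each query has exactly one relevant chunk, as the values above suggest, they reduce to simple functions of the rank at which that chunk is retrieved. An illustrative sketch (the ranks below are hypothetical, not the evaluator's actual per-query output):

```python
def retrieval_metrics(ranks, k=10):
    """Accuracy@k, MRR@k and precision@k when each query has exactly one
    relevant document. `ranks` holds the 1-based rank of that document per
    query, or None if it was not retrieved at all."""
    n = len(ranks)
    hits = [r for r in ranks if r is not None and r <= k]
    accuracy = len(hits) / n              # fraction of queries with the doc in the top k
    mrr = sum(1.0 / r for r in hits) / n  # mean reciprocal rank
    precision = len(hits) / (n * k)       # at most one relevant doc per query
    return accuracy, mrr, precision

# Hypothetical ranks: five of six queries put the relevant chunk first.
ranks = [1, 1, 1, 1, 1, 2]
acc_at_1, _, _ = retrieval_metrics(ranks, k=1)  # 5/6, i.e. about 0.8333
acc_at_3, _, _ = retrieval_metrics(ranks, k=3)  # 1.0
```

This single-relevant-document structure also explains why precision@k in the table is accuracy@k divided by k (0.3333 at 3, 0.2 at 5, 0.1 at 10) and why recall@1 equals accuracy@1.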
 <!--
 ## Bias, Risks and Limitations
 |         | sentence_0 | sentence_1 |
 |:--------|:-----------|:-----------|
 | type    | string     | string     |
+| details | <ul><li>min: 12 tokens</li><li>mean: 20.55 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.01 tokens</li><li>max: 214 tokens</li></ul> |
 * Samples:
 | sentence_0 | sentence_1 |
 |:-----------|:-----------|
 ### Training Logs
 | Epoch | Step | cosine_ndcg@10 |
 |:-----:|:----:|:--------------:|
+| 1.0   | 16   | 0.9692         |
+| 2.0   | 32   | 0.9484         |
+| 3.0   | 48   | 0.9385         |
+| 3.125 | 50   | 0.9385         |
+| 4.0   | 64   | 0.9385         |
+| 5.0   | 80   | 0.9330         |
+| 6.0   | 96   | 0.9330         |
+| 6.25  | 100  | 0.9330         |
+| 7.0   | 112  | 0.9385         |
+| 8.0   | 128  | 0.9330         |
+| 9.0   | 144  | 0.9330         |
+| 9.375 | 150  | 0.9330         |
+| 10.0  | 160  | 0.9330         |
 
 ### Framework Versions
model.safetensors (CHANGED)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2d84fb4212e72c10493029eef22123c233d6bcb67bd049afeb843b20287ba7cd
 size 1336413848