Update README.md
README.md
CHANGED
@@ -30,10 +30,11 @@ With a compact size of just 35 million parameters,
 the model enables lightning-fast inference while still delivering impressive performance.
 Additionally, we provide the following options:

+- `jina-embedding-s-en-v1`: 35 million parameters **(you are here)**.
 - `jina-embedding-b-en-v1`: 110 million parameters.
-- `jina-embedding-l-en-v1`:
-- `jina-embedding-
-- `jina-embedding-
+- `jina-embedding-l-en-v1`: 330 million parameters.
+- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10x bert-base size (soon).
+- `jina-embedding-6b-en-v1`: 6 billion parameters, 30x bert-base size (soon).

 ## Data & Parameters

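The options above are checkpoint names only, so for orientation here is a minimal usage sketch. It assumes the checkpoints are published under the `jinaai` organization on Hugging Face and load through the standard `sentence-transformers` API; neither detail appears in this diff.

```python
# Minimal sketch: encode two sentences and compare them.
# Assumptions (not shown in this diff): the `jinaai/jina-embedding-s-en-v1`
# repo id and sentence-transformers compatibility.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jinaai/jina-embedding-s-en-v1")

# Encode a pair of sentences into dense vectors.
embeddings = model.encode([
    "How is the weather today?",
    "What is the current weather like today?",
])

# Cosine similarity of the two embeddings (higher means more similar).
print(util.cos_sim(embeddings[0], embeddings[1]))
```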
@@ -41,15 +42,16 @@ More info will be released together with the technical report.

 ## Metrics

-We compared the model against `all-minilm-l6-v2` from sbert and `text-embeddings-ada-002` from OpenAI:
+We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from SBERT and `text-embeddings-ada-002` from OpenAI:

 |Name|Params|Context|
 |------------------------------|-----|------|
 |all-minilm-l6-v2|33m|128|
-|all-mpnet
+|all-mpnet-base-v2|110m|128|
 |ada-embedding-002|Unknown/API-based|8192|
 |jina-embedding-s-en-v1|35m|512|
 |jina-embedding-b-en-v1|110m|512|
+|jina-embedding-l-en-v1|330m|512|


 |Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact|
@@ -59,6 +61,7 @@ We compared the model against `all-minilm-l6-v2` from sbert and `text-embeddings
 |ada-embedding-002|0.698|0.833|0.761|0.861|0.86|0.903|0.685|0.876|0.726|
 |jina-embedding-s-en-v1|0.738|0.781|0.732|0.833|0.785|0.859|0.471|0.852|0.567|
 |jina-embedding-b-en-v1|0.736|0.804|0.745|0.844|0.793|0.873|0.481|0.87|0.616|
+|jina-embedding-l-en-v1|0.735|0.829|0.759|0.844|0.8|0.888|0.465|0.876|0.645|

 For more tasks and metrics, please check out the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) benchmark.
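Since the score table above reports a subset of MTEB tasks, here is a sketch of how one of those numbers could be re-run with the `mteb` package; the `MTEB(tasks=...).run(...)` interface is an assumption based on that package's common usage, not something this README documents.

```python
# Sketch: re-run one STS task from the table above for a single model.
# Assumption: the `mteb` package's MTEB(tasks=...).run(model) interface.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jinaai/jina-embedding-s-en-v1")

# Evaluate only STS12; results are written as JSON to the output folder.
evaluation = MTEB(tasks=["STS12"])
evaluation.run(model, output_folder="results/jina-embedding-s-en-v1")
```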