The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

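The retrieve-and-re-rank flow above amounts to scoring (query, passage) pairs with the cross-encoder and sorting the passages by score. A minimal sketch of the sorting step (the passages and scores below are illustrative placeholders; in practice the scores come from the model, as shown in the usage sections):

```python
# Sketch of the re-ranking step: given cross-encoder scores for
# (query, passage) pairs, return the passages ordered by decreasing score.
# The passages and scores here are illustrative placeholders.

def rerank(passages, scores):
    """Sort passages by decreasing cross-encoder score."""
    order = sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)
    return [passages[i] for i in order]

passages = [
    "Berlin is well known for its museums.",
    "Berlin had a population of 3,520,031 registered inhabitants.",
]
scores = [-4.86, 8.51]  # e.g. obtained from the cross-encoder's predictions
print(rerank(passages, scores))
```

In a full retrieve-and-re-rank pipeline, a fast retriever (e.g. ElasticSearch) produces the candidate passages and only those candidates are scored by the cross-encoder.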
## Usage with SentenceTransformers
The model is easy to use when you have [SentenceTransformers](https://www.sbert.net/) installed. You can then score query–passage pairs like this:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L2-v2')
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
# [ 8.510401 -4.860082]
```
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L2-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L2-v2')

features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'Berlin is well known for its museums.'],
    padding=True, truncation=True, return_tensors='pt'
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
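The model outputs raw logits, not probabilities. If scores in (0, 1) are more convenient, one common post-processing option (an illustrative step, not part of the original card) is to apply a sigmoid to each logit:

```python
import math

def sigmoid(x):
    """Map a raw cross-encoder logit to a relevance score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# e.g. applied to the example logits from the SentenceTransformers section
print([round(sigmoid(s), 4) for s in (8.510401, -4.860082)])
```

The relative ordering of passages is unchanged by the sigmoid, since it is monotonic; only the scale of the scores differs.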
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) datasets.