Update README.md
README.md CHANGED
@@ -11,17 +11,7 @@ metrics:
 license: apache-2.0
 ---
 
-# Cross-Encoder
-This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
-
-## Training Data
-The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
-
-## Performance
-- Accuracy on SNLI-test dataset: 92.38
-- Accuracy on MNLI mismatched set: 90.04
-
-For further evaluation results, see [SBERT.net - Pretrained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
+# Hebrew Cross-Encoder Model
 
 ## Usage
 
@@ -36,25 +26,6 @@ label_mapping = ['contradiction', 'entailment', 'neutral']
 labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
 ```
 
-## Usage with Transformers AutoModel
-You can also use the model directly with the Transformers library (without the SentenceTransformers library):
-```python
-from transformers import AutoTokenizer, AutoModelForSequenceClassification
-import torch
-
-model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-base')
-tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-base')
-
-features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
-
-model.eval()
-with torch.no_grad():
-    scores = model(**features).logits
-    label_mapping = ['contradiction', 'entailment', 'neutral']
-    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
-    print(labels)
-```
-
 ## Zero-Shot Classification
 This model can also be used for zero-shot-classification:
 ```python
@@ -66,4 +37,6 @@ sent = "Apple just announced the newest iPhone X"
 candidate_labels = ["technology", "sports", "politics"]
 res = classifier(sent, candidate_labels)
 print(res)
-```
+```
+
+Sequence
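The "## Usage" section kept by this change is collapsed in the diff; only its last two lines appear as hunk context. As a reference point, here is a minimal sketch of how a SentenceTransformers CrossEncoder is typically scored. The model id and sentence pairs below are placeholders, not values from the card:

```python
from sentence_transformers import CrossEncoder

# Placeholder model id; the card's actual checkpoint name is not shown in this diff.
model = CrossEncoder('your-org/your-cross-encoder-nli-model')

# Each pair is (premise, hypothesis); predict() returns one row of label scores per pair.
scores = model.predict([
    ('A man is eating pizza', 'A man eats something'),
    ('A man is eating pizza', 'A man is sleeping'),
])

label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
print(labels)
```

Because `predict` returns one score per label for each pair, `argmax(axis=1)` selects the highest-scoring label, which is what the context lines retained in the hunk above do.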
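Similarly, the zero-shot example's pipeline construction sits in the collapsed lines between the last two hunks; only the sentence, candidate labels, and the call are visible. A minimal sketch of how such a pipeline is usually built, again with a placeholder model id:

```python
from transformers import pipeline

# Placeholder model id; substitute the checkpoint published with this card.
classifier = pipeline("zero-shot-classification", model="your-org/your-cross-encoder-nli-model")

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```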