Then you need to initialize a model and a pipeline:

|
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer

model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')

text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # [0] because we passed a single text

for result in results:
    print(result["label"], "=>", result["score"])
```
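
For each input text, the pipeline returns a list of `{"label": ..., "score": ...}` dicts. As a small illustration of working with that output, here is a hypothetical helper (not part of gliclass) that keeps only the highest-scoring labels; it is shown on mock output so it runs without loading the model:

```python
def top_k_labels(results, k=2):
    """Return the k highest-scoring label dicts, best first."""
    return sorted(results, key=lambda r: r["score"], reverse=True)[:k]

# Mock pipeline output for one text (real scores come from the model):
results = [
    {"label": "travel", "score": 0.91},
    {"label": "sport", "score": 0.12},
    {"label": "dreams", "score": 0.78},
]
print(top_k_labels(results))  # "travel" and "dreams" come first
```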

If you want to use it for NLI-type tasks, we recommend representing your premise as the text and each hypothesis as a label; you can pass several hypotheses, but the model works best with a single one:

```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
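
With a single hypothesis, the returned score can be read as an entailment probability and thresholded. A minimal sketch, assuming the output shape shown above (the helper name and the 0.5 cutoff are illustrative, not part of gliclass):

```python
def is_entailed(results, threshold=0.5):
    """Treat the single hypothesis score as an entailment probability."""
    return results[0]["score"] >= threshold

# Mock output shaped like the pipeline's return value for one text:
mock = [{"label": "The cat was awake and playing outside.", "score": 0.03}]
print(is_entailed(mock))  # False: the premise contradicts this hypothesis
```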

### Benchmarks:
Below, you can see the F1 score on several text classification datasets. All tested models were not fine-tuned on those datasets and were tested in a zero-shot setting.

| Model | IMDB | AG_NEWS | Emotions |