Update README.md

README.md CHANGED
@@ -19,8 +19,11 @@ This model is built on the [DistilBERT](https://huggingface.co/distilbert/distil
The model processes input text to determine whether it is a statement or a question. It is used in the ilert Search Algorithm.

### Training Data

-The model was trained on a diverse dataset containing examples of both statements and questions. The training process involved fine-tuning the pre-trained DistilBERT model on this specific classification task. The dataset included various types of questions
-
+The model was trained on a diverse dataset containing examples of both statements and questions. The training process involved fine-tuning the pre-trained DistilBERT model on this specific classification task. The dataset included various types of questions and statements from different contexts to ensure robustness.
+
+- Quora Question Keyword Pairs
+- Questions vs Statements Classification
+- ilert-related Questions

### Performance
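The diff only describes the fine-tune in prose. As a rough illustration of what a DistilBERT fine-tune for this statement-vs-question task could look like with the `transformers` Trainer, here is a minimal sketch; the base checkpoint, file names, column names, and hyperparameters are assumptions for illustration, not the actual ilert training setup.

```python
# Hypothetical fine-tuning sketch (not the actual ilert training script).
# Assumes a CSV with a "text" column and a binary "label" column
# (0 = statement, 1 = question).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "distilbert-base-uncased"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Load and tokenize the (hypothetical) statement/question dataset
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# Standard Trainer setup; hyperparameters are placeholders
args = TrainingArguments(
    output_dir="soqbert-finetune",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```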
@@ -34,7 +37,7 @@ To use this model, you can load it through the Hugging Face `transformers` libra
from transformers import pipeline

# Load the model and tokenizer
-classifier = pipeline("text-classification", model="
+classifier = pipeline("text-classification", model="ilert/SoQbert")

# Example texts
texts = ["Is it going to rain today?", "It is a sunny day."]
@@ -45,5 +48,5 @@ results = classifier(texts)
# Output the results
for text, result in zip(texts, results):
    print(f"Text: {text}")
-    print(f"Classification: {result['label']}
+    print(f"Classification: {result['label']}")
```
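One thing the snippet leaves implicit is which pipeline label corresponds to "question" and which to "statement". A small, hypothetical check could look like the following; the actual label names are not documented in this diff, so inspect the mapping rather than assuming it.

```python
# Inspect how the checkpoint maps numeric class ids to label names;
# the README describes the classes as statement vs question, but the
# stored names may be generic (e.g. "LABEL_0"/"LABEL_1").
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ilert/SoQbert")
print(config.id2label)
```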