roberta-base has been fine-tuned on the SQuAD dataset with a question-answering LM head.
## Intended uses & limitations

The model is intended for the Q&A task: given a context and a question, it attempts to infer the answer text, the answer span, and probability scores.

The model can be put to use simply as:

```
from transformers import pipeline

model_checkpoint = "anshoomehra/roberta-base-fineTuned-squadV1QA"

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"

question_answerer = pipeline("question-answering", model=model_checkpoint)
question_answerer(question=question, context=context)
```
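Under the hood, an extractive QA head like this one produces a start logit and an end logit for every token, and the answer span is the pair of positions that maximizes the joint start/end probability. The sketch below illustrates that selection step with made-up logits; it is a generic illustration of span extraction, not code taken from this model.

```python
import math

# Illustrative start/end logits for a 6-token context
# (made-up values, not produced by the actual model).
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end_logits = [0.0, 0.1, 0.2, 4.5, 0.2, 0.1]

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

start_p = softmax(start_logits)
end_p = softmax(end_logits)

# Pick the (start, end) pair with the highest joint probability,
# subject to start <= end so the span is well-formed.
best = max(
    ((i, j) for i in range(len(start_p)) for j in range(i, len(end_p))),
    key=lambda ij: start_p[ij[0]] * end_p[ij[1]],
)
score = start_p[best[0]] * end_p[best[1]]
print(best, score)
```

The `score` computed this way is what the pipeline reports as the answer's probability, alongside the decoded answer text and its character span in the context.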

## Training and evaluation data