---
library_name: transformers
tags:
- medical
datasets:
- kodetr/stunting-qa
language:
- id
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
---
|
|
|
|
|
### Model Description |
|
|
|
|
|
|
|
|
|
|
A consultation model for questions about stunting in children.
|
|
|
|
|
- **Developed by:** Tanwir |
|
|
- **Language(s) (NLP):** Indonesian
|
|
|
|
|
### Evaluation |
|
|
|
|
|
**Evaluations:** GLUE |
|
|
|
|
|
|
|
|
|
|
### Use with transformers |
|
|
|
|
|
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or the Auto classes with the `generate()` function.
|
|
|
|
|
Make sure to update your transformers installation via `pip install --upgrade transformers`.
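If you want to confirm programmatically that your installed version satisfies the requirement, a minimal sketch is below (the `meets_requirement` helper is illustrative, not part of this model or of transformers):

```python
# Minimal sketch: check that the installed transformers version is >= 4.43.0
# without importing the (heavy) library itself.
import re
from importlib.metadata import version, PackageNotFoundError

def meets_requirement(installed: str, required: str = "4.43.0") -> bool:
    # Compare the leading numeric segments of two dotted version strings.
    numeric = lambda v: tuple(int(x) for x in re.findall(r"\d+", v)[:3])
    return numeric(installed) >= numeric(required)

try:
    print(meets_requirement(version("transformers")))
except PackageNotFoundError:
    print("transformers is not installed")
```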
|
|
|
|
|
```python
import torch
from transformers import pipeline

model_id = "kodetr/stunting-qa-v3"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style messages; the model answers in Indonesian.
messages = [
    {"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
    {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The last message in the returned conversation is the model's reply.
print(outputs[0]["generated_text"][-1])
```
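The same inference can also be run with the Auto classes and `generate()` directly. A minimal sketch under the same model id is below; the `ask` and `build_messages` helpers are illustrative names, and the imports are deferred inside the function so that defining the helpers does not itself require torch:

```python
# Sketch of conversational inference via AutoModelForCausalLM / generate(),
# mirroring the pipeline example above. Helper names are illustrative.

def build_messages(question: str) -> list[dict]:
    # Chat-format messages as expected by tokenizer.apply_chat_template.
    return [{"role": "user", "content": question}]

def ask(question: str,
        model_id: str = "kodetr/stunting-qa-v3",
        max_new_tokens: int = 256) -> str:
    # Imports deferred so merely defining the helper does not load torch.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Render the chat into input ids, appending the generation prompt.
    input_ids = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

# Example call (downloads the model on first use):
# print(ask("Apa itu 1000 hari pertama kehidupan?"))
```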