---
library_name: transformers
base_model: dicta-il/dictalm2.0-instruct
license: apache-2.0
language:
- he
pipeline_tag: text-generation
---

# Model Card for Guysh1805/dictalm2-it-qa-fine-tune

This is a fine-tuned version of the Dicta-IL dictalm2.0-instruct model, specifically tailored for generating question-answer pairs in Hebrew.

## Model Details

### Model Description

The model, Guysh1805/dictalm2-it-qa-fine-tune, is a fine-tuned version of dictalm2.0-instruct, trained on a mix of synthetically generated datasets and pre-existing Q&A datasets wrapped in instruction prompts.

- **Developed by:** Guy Shapira
- **Model type:** Transformer-based, fine-tuned Dicta-IL dictalm2.0-instruct
- **Language(s) (NLP):** Hebrew
- **Finetuned from:** dicta-il/dictalm2.0-instruct
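
The exact instruction template used to wrap the Q&A data during fine-tuning is not documented in this card; the snippet below is only a hypothetical sketch of what wrapping a single Q&A pair in an instruction prompt can look like, using made-up Hebrew text.

```python
# Hypothetical example only: the real fine-tuning template is not specified in this card.
context = "תל אביב נוסדה בשנת 1909."   # "Tel Aviv was founded in 1909."
question = "באיזו שנה נוסדה תל אביב?"   # "In what year was Tel Aviv founded?"
answer = "בשנת 1909."                   # "In 1909."

# Wrap the Q&A pair in an instruction-style prompt/response record.
sample = {
    "prompt": f"קרא את הקטע הבא וענה על השאלה.\nקטע: {context}\nשאלה: {question}",
    "response": answer,
}

print(sample["prompt"])
print(sample["response"])
```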

## How to Get Started with the Model

To get started, load the model and tokenizer with the Hugging Face Transformers library. Since this is an instruction-tuned generative model (its pipeline tag is `text-generation`), it is loaded with `AutoModelForCausalLM` rather than an extractive question-answering class:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Guysh1805/dictalm2-it-qa-fine-tune"

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
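
Once loaded, the model can be prompted like any instruction-tuned causal LM. The following is a minimal generation sketch, not an official recipe: it assumes the tokenizer inherits the chat template of the dictalm2.0-instruct base model, and the Hebrew prompt (asking for a question-answer pair about a short passage) is only an illustrative example.

```python
import torch

# Illustrative Hebrew instruction: "Create a question-and-answer pair about the
# following passage: Tel Aviv was founded in 1909."
messages = [
    {"role": "user", "content": "צור זוג של שאלה ותשובה על הקטע הבא: תל אביב נוסדה בשנת 1909."}
]

# Assumes a chat template is available (as in the dictalm2.0-instruct base model).
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling settings such as `temperature` and `max_new_tokens` are only starting points and can be adjusted for longer or more deterministic Q&A pairs.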