---
library_name: transformers
base_model: dicta-il/dictalm2.0-instruct
license: apache-2.0
language:
- he
pipeline_tag: text-generation
---
# Model Card for Guysh1805/dictalm2-it-qa-fine-tune
This is a fine-tuned version of the Dicta-IL dictalm2.0-instruct model, specifically tailored for generating question-answer pairs in Hebrew.
## Model Details
### Model Description
The model, Guysh1805/dictalm2-it-qa-fine-tune, is a fine-tuned version of dictalm2.0-instruct, trained on a mix of synthetically generated datasets and existing Q&A datasets wrapped in instruction prompts (a hypothetical illustration of such wrapping appears after the list below).
- **Developed by:** Guy Shapira
- **Model type:** Transformer-based, fine-tuned Dicta-IL dictalm2.0-instruct
- **Language(s) (NLP):** Hebrew
- **Finetuned from:** dicta-il/dictalm2.0-instruct
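
As a rough illustration only, the snippet below shows one hypothetical way a Q&A pair could be wrapped in an instruction prompt for fine-tuning; the actual prompt template used to build the training data is not documented in this card.

```python
# Hypothetical instruction-wrapping of an existing Q&A pair (illustrative only;
# the real prompt format used for fine-tuning is not documented here).
def wrap_qa_pair(context: str, question: str, answer: str) -> dict:
    prompt = (
        # Hebrew: "Read the following passage and write a suitable question and answer:"
        "קרא את הקטע הבא וכתוב שאלה ותשובה מתאימות:\n"
        f"{context}"
    )
    # Hebrew: "Question: ... / Answer: ..."
    completion = f"שאלה: {question}\nתשובה: {answer}"
    return {"prompt": prompt, "completion": completion}
```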
## How to Get Started with the Model
To get started, load the model using the Transformers library by Hugging Face:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Guysh1805/dictalm2-it-qa-fine-tune"

# The model is a causal text-generation LM (not an extractive QA model),
# so it is loaded with AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
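
A minimal generation sketch follows, assuming the model keeps the chat template of its base model dictalm2.0-instruct; the Hebrew prompt and the generation parameters are illustrative, not values documented for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Guysh1805/dictalm2-it-qa-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.to("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative Hebrew instruction: "Write a question and an answer about the city of Jerusalem."
messages = [{"role": "user", "content": "כתוב שאלה ותשובה על העיר ירושלים."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```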