---
license: cc-by-4.0
language:
- he
inference: false
---
# **DictaLM**: A Large Generative Language Model for Modern Hebrew

A large generative pretrained transformer (GPT) language model for Hebrew, released [here](https://arxiv.org/abs/2309.14568).

- This is an alpha version of the model, and there are many improvements to come.
- We are actively working on improving the model, so stay tuned.

This model was fine-tuned for instructions. Here are a few examples of the different types of instructions the model was trained on:
- General questions:

```
מה זה בית ספר?
```

```
קיבלתי חתך קל באצבע. מהי הדרך הנכונה לטפל בזה?
```

*(Translations: "What is a school?"; "I got a small cut on my finger. What is the right way to treat it?")*
|
- Simple tasks: |
|
``` |
|
ืชืฆืืข ืืื ืจืขืืื ืืช ืืคืขืืืืช ืขื ืืืืื ืื ื 5: |
|
``` |
- Information retrieval from a paragraph context:

```
המסיק הידני הוא הדרך המסורתית והעתיקה לקטיף זיתים. שיטה זו מושרשת מזה זמן רב בקהילות ישראל ובמקומות רבים בעולם. שיטות מסיק ידני מאפשרות לחסוך עלויות ומקלות על בעיות שיש בשיטות הממוכנות. בזיתים המיועדים למאכל (בכבישה, או כמו זיתים בשמן) מתאים יותר מסיק ידני, כיוון שהפרי פחות נפגע במהלך המסיק בשיטה זו (פגיעות בקליפת הפרי ובזיתים לשמן פחות משמעותיות). כמו כן, מועדף מסיק ידני באזורים בהם הטופוגרפיה והחלקות צפופות והעצים לא מאפשרים גישה נוחה לכלים מכניים. השיטה הידנית מאפשרת גם למסוק עצים שונים במועדים שונים, בהתאם לקצב הבשלת הפרי הטבעי בכל עץ.

על בסיס הפסקה הזאת, מה הוא היתרון של מסיק ידני מבחינת קצב הבשלת הפרי?
```

*(Translation: a passage about traditional manual olive harvesting, followed by the question "Based on this paragraph, what is the advantage of manual harvesting in terms of the pace of fruit ripening?")*
## Sample usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictalm-7b-instruct')
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct', trust_remote_code=True).cuda()

model.eval()

with torch.inference_mode():
    prompt = 'תציע כמה רעיונות לפעילות עם ילדים בני 5:\n'
    kwargs = dict(
        inputs=tokenizer(prompt, return_tensors='pt').input_ids.to(model.device),
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=0.75,
        max_length=100,
        min_new_tokens=5
    )
    print(tokenizer.batch_decode(model.generate(**kwargs), skip_special_tokens=True))
```
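Note that `batch_decode` above returns the prompt together with the completion. If you only want the newly generated text, you can slice the output at the prompt length. A minimal sketch, reusing the `tokenizer`, `model`, and `prompt` defined above (using `max_new_tokens` here instead of `max_length` is our choice, not part of the card's original snippet):

```python
with torch.inference_mode():
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)
    output_ids = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95,
                                temperature=0.75, max_new_tokens=100)
    # drop the prompt tokens at the start of the sequence and decode only the completion
    completion = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True)
    print(completion)
```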
There are many different parameters you can pass in `kwargs` for different results (greedy decoding, beam search, different sampling configurations, longer/shorter responses, etc.).

You can view the full list of parameters you can pass to the `generate` function [here](https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/text_generation#transformers.GenerationMixin.generate).

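For example, here is a sketch of two common alternatives to the sampling setup above: plain greedy decoding and beam search. It uses only standard `generate` arguments from the linked documentation, and the values are illustrative rather than tuned for DictaLM:

```python
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)

# greedy decoding: deterministic, always takes the highest-probability next token
greedy_out = model.generate(input_ids, do_sample=False, max_new_tokens=100)

# beam search: keeps several candidate continuations and returns the best-scoring one
beam_out = model.generate(input_ids, do_sample=False, num_beams=4,
                          early_stopping=True, max_new_tokens=100)

print(tokenizer.batch_decode(beam_out, skip_special_tokens=True))
```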
### Alternative ways to initialize the model:

If you have multiple smaller GPUs and the `accelerate` package is installed, you can initialize the model split across the devices:
```python
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct', trust_remote_code=True, device_map='auto')
```
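With `device_map='auto'`, `accelerate` decides how to shard the layers across your GPUs (and CPU, if needed). If you want to verify where each part of the model ended up, `transformers` records the assignment. A quick check, assuming the model was loaded as above:

```python
# mapping of module names to the device each was placed on by accelerate
print(model.hf_device_map)
```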
If you are running on Linux and have the `bitsandbytes` package installed, you can initialize the model in 4/8-bit inference mode:

```python
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct', trust_remote_code=True, load_in_8bit=True)
```
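The snippet above uses 8-bit mode. For 4-bit inference, recent `transformers` releases expose the same `bitsandbytes` integration through `BitsAndBytesConfig`; this is a sketch of that standard API, not something specific to this model card:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute, handled by bitsandbytes
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct',
                                             trust_remote_code=True,
                                             quantization_config=quant_config)
```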
If you have [FlashAttention](https://github.com/Dao-AILab/flash-attention) installed in your environment, you can instruct the model to use the flash attention implementation (either V1 or V2, whichever is installed):

```python
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct', trust_remote_code=True, use_flash_attention=True)
```
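If you are not sure whether FlashAttention is available in a given environment, you can check before passing the flag. A small sketch, assuming the package's standard import name `flash_attn`:

```python
import importlib.util

# enable the flag only if the flash_attn package is importable in this environment
has_flash_attention = importlib.util.find_spec('flash_attn') is not None
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct',
                                             trust_remote_code=True,
                                             use_flash_attention=has_flash_attention)
```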
## Colab notebook demos

You can try the model on a free-tier Google Colab instance using the following notebooks:

* [Streamlit based](https://colab.research.google.com/drive/1hn23eA4m7ISW2e40DsAB6sbLRok4RKqS?usp=sharing) - you will first need to log in to https://ngrok.com/ and get an authtoken, then paste it into the notebook ([screenshot][screen-shot-streamlit]).
* [Gradio based](https://gist.github.com/Norod/11997c0c9a330d0eeb9a6d4791b9aa2f) - uses DeepSpeed for faster inference and a text streamer to show results as they are being generated; the UI is a widget embedded in the notebook ([screenshot][screen-shot-gradio]).
## Citation

If you use DictaLM in your research, please cite "Introducing DictaLM -- A Large Generative Language Model for Modern Hebrew":

**BibTeX:**
```bibtex
@misc{shmidman2023introducing,
      title={Introducing DictaLM -- A Large Generative Language Model for Modern Hebrew},
      author={Shaltiel Shmidman and Avi Shmidman and Amir David Nissan Cohen and Moshe Koppel},
      year={2023},
      eprint={2309.14568},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## License

Shield: [![CC BY 4.0][cc-by-shield]][cc-by]

This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]

[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
[screen-shot-streamlit]: https://mitmachim.top/assets/uploads/files/1696309842384-36f732e3-d168-4f4c-8eaf-e78ec23e6bf6-image.png
[screen-shot-gradio]: https://scontent.ftlv2-1.fna.fbcdn.net/v/t39.30808-6/384119874_10160090112344007_6111524432595263230_n.jpg?_nc_cat=111&ccb=1-7&_nc_sid=5f2048&_nc_ohc=tWYAm8uz7T4AX9DNlRD&_nc_ht=scontent.ftlv2-1.fna&cb_e2o_trans=t&oh=00_AfBocAujdLaWJqYWrNGtgdz99Cdz8JEqO_ez70SXqlf_2Q&oe=654D476D