Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


RoLlama2-7b-Chat - GGUF
- Model creator: https://huggingface.co/OpenLLM-Ro/
- Original model: https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RoLlama2-7b-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q2_K.gguf) | Q2_K | 2.36GB |
| [RoLlama2-7b-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [RoLlama2-7b-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [RoLlama2-7b-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [RoLlama2-7b-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [RoLlama2-7b-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q3_K.gguf) | Q3_K | 2.7GB |
| [RoLlama2-7b-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [RoLlama2-7b-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [RoLlama2-7b-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [RoLlama2-7b-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q4_0.gguf) | Q4_0 | 3.56GB |
| [RoLlama2-7b-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [RoLlama2-7b-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [RoLlama2-7b-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q4_K.gguf) | Q4_K | 3.8GB |
| [RoLlama2-7b-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [RoLlama2-7b-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q4_1.gguf) | Q4_1 | 3.95GB |
| [RoLlama2-7b-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q5_0.gguf) | Q5_0 | 4.33GB |
| [RoLlama2-7b-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [RoLlama2-7b-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q5_K.gguf) | Q5_K | 4.45GB |
| [RoLlama2-7b-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [RoLlama2-7b-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q5_1.gguf) | Q5_1 | 4.72GB |
| [RoLlama2-7b-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q6_K.gguf) | Q6_K | 5.15GB |
| [RoLlama2-7b-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf/blob/main/RoLlama2-7b-Chat.Q8_0.gguf) | Q8_0 | 6.67GB |

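The table links point at the Hub's `blob` viewer pages; the raw files themselves live behind the corresponding `resolve/main` URLs. As a minimal sketch (the repo id and filenames are taken from the table above; `hf_hub_download` from the `huggingface_hub` package is assumed to be available for the optional cached download):

```python
# Build the direct download URL for one of the quantized files listed above.
repo_id = "RichardErkhov/OpenLLM-Ro_-_RoLlama2-7b-Chat-gguf"
filename = "RoLlama2-7b-Chat.Q4_K_M.gguf"  # any "Name" from the table works

# Hub convention: /blob/main/ pages render the file, /resolve/main/ serves it.
url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)

# Uncomment to actually download (several GB; assumes huggingface_hub installed):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id=repo_id, filename=filename)
```

Any row in the table can be substituted for `filename`; smaller quants trade accuracy for memory.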


Original model description:
---
license: cc-by-nc-4.0
language:
- ro
base_model:
- OpenLLM-Ro/RoLlama2-7b-Base
new_version: OpenLLM-Ro/RoLlama2-7b-Instruct
model-index:
- name: OpenLLM-Ro/RoLlama2-7b-Chat
  results:
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_arc_challenge
      type: RoARC
    metrics:
    - name: Average
      type: accuracy
      value: 41.92
    - name: 0-shot
      type: accuracy
      value: 39.59
    - name: 1-shot
      type: accuracy
      value: 41.05
    - name: 3-shot
      type: accuracy
      value: 42.42
    - name: 5-shot
      type: accuracy
      value: 42.16
    - name: 10-shot
      type: accuracy
      value: 43.36
    - name: 25-shot
      type: accuracy
      value: 42.93
---

# Model Card for RoLlama2-7b-Chat

<!-- Provide a quick summary of what the model is/does. -->

RoLlama2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **chat 7B model**. Links to other models can be found at the bottom of this page.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.


- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** https://arxiv.org/abs/2405.07703

## Intended Use

### Intended Use Cases

RoLlama2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.



## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Chat")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Chat")

# "What is the highest mountain peak in Romania?"
instruction = "Care este cel mai înalt vârf muntos din România?"
chat = [
    # System prompt: "You are a helpful, respectful and honest assistant. Try to
    # help as much as possible, excluding toxic, racist, sexist, dangerous and
    # illegal answers."
    {"role": "system", "content": "Ești un asistent folositor, respectuos și onest. Încearcă să ajuți cât mai mult prin informațiile oferite, excluzând răspunsuri toxice, rasiste, sexiste, periculoase și ilegale."},
    {"role": "user", "content": instruction},
]
# Render the conversation with the model's built-in chat template
prompt = tokenizer.apply_chat_template(chat, tokenize=False)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

## Benchmarks

| Model | Average | ARC | MMLU | Winogrande | HellaSwag | GSM8k | TruthfulQA |
|--------------------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Llama-2-7b-chat | 36.84 | 37.03 | 33.81 | 55.87 | 45.36 | 4.90 | 44.09 |
| RoLlama2-7b-Instruct | **45.71** | **43.66** | **39.70** | **70.34** | 57.36 | **18.78** | 44.44 |
| *RoLlama2-7b-Chat* | *43.82* | *41.92* | *37.29* | *66.68* | ***57.91*** | *13.47* | ***45.65*** |



## Romanian MT-Bench

| Model | Average | 1st turn | 2nd turn | Answers in Ro |
|--------------------|:-------:|:-------:|:-------:|:-------:|
| Llama-2-7b-chat | 1.08 | 1.44 | 0.73 | 45 / 160 |
| RoLlama2-7b-Instruct | **3.86** | **4.68** | **3.04** | **160 / 160** |
| *RoLlama2-7b-Chat* | *TBC* | *TBC* | *TBC* | *TBC* |

## RoCulturaBench

| Model | Score | Answers in Ro |
|--------------------|:-------:|:-------:|
| Llama-2-7b-chat | 1.21 | 33 / 100 |
| RoLlama2-7b-Instruct | **3.77** | **100 / 100** |
| *RoLlama2-7b-Chat* | *TBC* | *TBC* |



## RoLlama2 Model Family

| Model | Link |
|--------------------|:--------:|
| RoLlama2-7b-Base | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base) |
| RoLlama2-7b-Instruct | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) |
| *RoLlama2-7b-Chat* | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) |


## Citation

```
@misc{masala2024openllmrotechnicalreport,
  title={OpenLLM-Ro -- Technical Report on Open-source Romanian LLMs},
  author={Mihai Masala and Denis C. Ilie-Ablachim and Dragos Corlatescu and Miruna Zavelca and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
  year={2024},
  eprint={2405.07703},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2405.07703},
}
```