Safetensors · Romanian · gemma · Eval Results

mihaimasala committed · d114d0f · verified · 1 Parent(s): 2b520cb

Update README.md

Files changed (1): README.md (+741, -3)
README.md CHANGED
@@ -1,3 +1,741 @@
- ---
- license: cc-by-nc-4.0
- ---
---
license: cc-by-nc-4.0
language:
- ro
base_model:
- OpenLLM-Ro/RoGemma-7b-Instruct-2024-10-09
datasets:
- OpenLLM-Ro/ro_dpo_helpsteer
model-index:
- name: OpenLLM-Ro/RoGemma-7b-Instruct-DPO-2024-10-09
  results:
  - task:
      type: text-generation
    dataset:
      name: RoMT-Bench
      type: RoMT-Bench
    metrics:
    - name: Score
      type: Score
      value: 5.47
  - task:
      type: text-generation
    dataset:
      name: RoCulturaBench
      type: RoCulturaBench
    metrics:
    - name: Score
      type: Score
      value: 3.94
  - task:
      type: text-generation
    dataset:
      name: Romanian_Academic_Benchmarks
      type: Romanian_Academic_Benchmarks
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 48.27
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_arc_challenge
      type: OpenLLM-Ro/ro_arc_challenge
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 46.66
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_mmlu
      type: OpenLLM-Ro/ro_mmlu
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 54.45
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_winogrande
      type: OpenLLM-Ro/ro_winogrande
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 63.73
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_hellaswag
      type: OpenLLM-Ro/ro_hellaswag
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 49.33
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_gsm8k
      type: OpenLLM-Ro/ro_gsm8k
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 34.98
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_truthfulqa
      type: OpenLLM-Ro/ro_truthfulqa
    metrics:
    - name: Average accuracy
      type: accuracy
      value: 40.45
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_binary
      type: LaRoSeDa_binary
    metrics:
    - name: Average macro-f1
      type: macro-f1
      value: 96.45
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_multiclass
      type: LaRoSeDa_multiclass
    metrics:
    - name: Average macro-f1
      type: macro-f1
      value: 63.23
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_binary_finetuned
      type: LaRoSeDa_binary_finetuned
    metrics:
    - name: Average macro-f1
      type: macro-f1
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_multiclass_finetuned
      type: LaRoSeDa_multiclass_finetuned
    metrics:
    - name: Average macro-f1
      type: macro-f1
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: WMT_EN-RO
      type: WMT_EN-RO
    metrics:
    - name: Average bleu
      type: bleu
      value: 20.73
  - task:
      type: text-generation
    dataset:
      name: WMT_RO-EN
      type: WMT_RO-EN
    metrics:
    - name: Average bleu
      type: bleu
      value: 7.87
  - task:
      type: text-generation
    dataset:
      name: WMT_EN-RO_finetuned
      type: WMT_EN-RO_finetuned
    metrics:
    - name: Average bleu
      type: bleu
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: WMT_RO-EN_finetuned
      type: WMT_RO-EN_finetuned
    metrics:
    - name: Average bleu
      type: bleu
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: XQuAD
      type: XQuAD
    metrics:
    - name: Average exact_match
      type: exact_match
      value: 19.14
  - task:
      type: text-generation
    dataset:
      name: XQuAD
      type: XQuAD
    metrics:
    - name: Average f1
      type: f1
      value: 38.10
  - task:
      type: text-generation
    dataset:
      name: XQuAD_finetuned
      type: XQuAD_finetuned
    metrics:
    - name: Average exact_match
      type: exact_match
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: XQuAD_finetuned
      type: XQuAD_finetuned
    metrics:
    - name: Average f1
      type: f1
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: STS
      type: STS
    metrics:
    - name: Average spearman
      type: spearman
      value: 69.38
  - task:
      type: text-generation
    dataset:
      name: STS
      type: STS
    metrics:
    - name: Average pearson
      type: pearson
      value: 69.34
  - task:
      type: text-generation
    dataset:
      name: STS_finetuned
      type: STS_finetuned
    metrics:
    - name: Average spearman
      type: spearman
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: STS_finetuned
      type: STS_finetuned
    metrics:
    - name: Average pearson
      type: pearson
      value: 0.00
  - task:
      type: text-generation
    dataset:
      name: RoMT-Bench
      type: RoMT-Bench
    metrics:
    - name: First turn
      type: Score
      value: 5.92
    - name: Second turn
      type: Score
      value: 5.03
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_arc_challenge
      type: OpenLLM-Ro/ro_arc_challenge
    metrics:
    - name: 0-shot
      type: accuracy
      value: 48.84
    - name: 1-shot
      type: accuracy
      value: 46.27
    - name: 3-shot
      type: accuracy
      value: 44.64
    - name: 5-shot
      type: accuracy
      value: 45.76
    - name: 10-shot
      type: accuracy
      value: 46.62
    - name: 25-shot
      type: accuracy
      value: 47.81
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_mmlu
      type: OpenLLM-Ro/ro_mmlu
    metrics:
    - name: 0-shot
      type: accuracy
      value: 52.47
    - name: 1-shot
      type: accuracy
      value: 54.40
    - name: 3-shot
      type: accuracy
      value: 55.63
    - name: 5-shot
      type: accuracy
      value: 55.30
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_winogrande
      type: OpenLLM-Ro/ro_winogrande
    metrics:
    - name: 0-shot
      type: accuracy
      value: 60.54
    - name: 1-shot
      type: accuracy
      value: 63.54
    - name: 3-shot
      type: accuracy
      value: 63.46
    - name: 5-shot
      type: accuracy
      value: 67.40
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_hellaswag
      type: OpenLLM-Ro/ro_hellaswag
    metrics:
    - name: 0-shot
      type: accuracy
      value: 52.67
    - name: 1-shot
      type: accuracy
      value: 50.89
    - name: 3-shot
      type: accuracy
      value: 47.85
    - name: 5-shot
      type: accuracy
      value: 45.98
    - name: 10-shot
      type: accuracy
      value: 49.26
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_gsm8k
      type: OpenLLM-Ro/ro_gsm8k
    metrics:
    - name: 1-shot
      type: accuracy
      value: 27.45
    - name: 3-shot
      type: accuracy
      value: 36.32
    - name: 5-shot
      type: accuracy
      value: 41.17
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_binary
      type: LaRoSeDa_binary
    metrics:
    - name: 0-shot
      type: macro-f1
      value: 95.90
    - name: 1-shot
      type: macro-f1
      value: 95.36
    - name: 3-shot
      type: macro-f1
      value: 97.13
    - name: 5-shot
      type: macro-f1
      value: 97.43
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_multiclass
      type: LaRoSeDa_multiclass
    metrics:
    - name: 0-shot
      type: macro-f1
      value: 66.82
    - name: 1-shot
      type: macro-f1
      value: 59.47
    - name: 3-shot
      type: macro-f1
      value: 62.88
    - name: 5-shot
      type: macro-f1
      value: 63.77
  - task:
      type: text-generation
    dataset:
      name: WMT_EN-RO
      type: WMT_EN-RO
    metrics:
    - name: 0-shot
      type: bleu
      value: 8.00
    - name: 1-shot
      type: bleu
      value: 24.37
    - name: 3-shot
      type: bleu
      value: 26.19
    - name: 5-shot
      type: bleu
      value: 24.36
  - task:
      type: text-generation
    dataset:
      name: WMT_RO-EN
      type: WMT_RO-EN
    metrics:
    - name: 0-shot
      type: bleu
      value: 0.76
    - name: 1-shot
      type: bleu
      value: 4.67
    - name: 3-shot
      type: bleu
      value: 13.33
    - name: 5-shot
      type: bleu
      value: 12.73
  - task:
      type: text-generation
    dataset:
      name: XQuAD_EM
      type: XQuAD_EM
    metrics:
    - name: 0-shot
      type: exact_match
      value: 14.37
    - name: 1-shot
      type: exact_match
      value: 19.08
    - name: 3-shot
      type: exact_match
      value: 17.73
    - name: 5-shot
      type: exact_match
      value: 25.38
  - task:
      type: text-generation
    dataset:
      name: XQuAD_F1
      type: XQuAD_F1
    metrics:
    - name: 0-shot
      type: f1
      value: 33.52
    - name: 1-shot
      type: f1
      value: 37.27
    - name: 3-shot
      type: f1
      value: 35.77
    - name: 5-shot
      type: f1
      value: 45.84
  - task:
      type: text-generation
    dataset:
      name: STS_Spearman
      type: STS_Spearman
    metrics:
    - name: 1-shot
      type: spearman
      value: 54.50
    - name: 3-shot
      type: spearman
      value: 74.93
    - name: 5-shot
      type: spearman
      value: 78.70
  - task:
      type: text-generation
    dataset:
      name: STS_Pearson
      type: STS_Pearson
    metrics:
    - name: 1-shot
      type: pearson
      value: 54.91
    - name: 3-shot
      type: pearson
      value: 74.98
    - name: 5-shot
      type: pearson
      value: 78.13
---

# Model Card for RoGemma-7b-Instruct-DPO

<!-- Provide a quick summary of what the model is/does. -->
This model is identical to [RoGemma-7b-Instruct-DPO-2024-10-09](https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct-DPO-2024-10-09); this repository simply points to that versioned release.

RoGemma is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **human-aligned 7B instruct model**. Links to other models can be found at the bottom of this page.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
OpenLLM-Ro represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro has developed and publicly released a collection of Romanian LLMs, comprising both foundational models and instruct and chat variants.

- **Developed by:** OpenLLM-Ro
<!-- - **Funded by [optional]:** [More Information Needed] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [RoGemma-7b-Instruct-2024-10-09](https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct-2024-10-09)
- **Trained using:** [RoHelpSteer](https://huggingface.co/datasets/OpenLLM-Ro/ro_dpo_helpsteer)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/OpenLLM-Ro/LLaMA-Factory
- **Paper:** https://arxiv.org/abs/2406.18266
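
The human-alignment step behind this checkpoint is DPO on the RoHelpSteer preference data, with training run through the LLaMA-Factory repository above. Purely as an illustration of what that step looks like, and not the authors' actual configuration, here is a minimal sketch using TRL's `DPOTrainer`; the hyperparameters and the dataset's prompt/chosen/rejected column layout are assumptions:

```python
# Illustrative sketch only (trl 0.9-style API); the released model was trained
# with LLaMA-Factory, and every hyperparameter below is an assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "OpenLLM-Ro/RoGemma-7b-Instruct-2024-10-09"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumption: the preference dataset exposes prompt/chosen/rejected columns.
dataset = load_dataset("OpenLLM-Ro/ro_dpo_helpsteer", split="train")

args = DPOConfig(
    output_dir="rogemma-dpo",
    beta=0.1,                        # assumed preference-loss temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # TRL keeps a frozen copy of the policy as reference
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```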
## Intended Use

### Intended Use Cases

RoGemma is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoGemma-7b-Instruct-DPO")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoGemma-7b-Instruct-DPO")

instruction = "Ce jocuri de societate pot juca cu prietenii mei?"  # "What board games can I play with my friends?"
chat = [
    {"role": "user", "content": instruction},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, system_message="")

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
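
Continuing from the snippet above: it decodes greedily and prints the full sequence, prompt included. The variant below is a sketch rather than part of the original card; it assumes this repository's chat template honours `add_generation_prompt`, and shows sampling plus prompt-stripped decoding:

```python
# Sketch (not from the original card): tokenize via the chat template,
# sample instead of greedy decoding, and decode only the new tokens.
input_ids = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,  # assumption: template appends the assistant turn marker
    return_tensors="pt",
)
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```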

## Academic Benchmarks

<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>ARC</center></strong></td>
<td><strong><center>MMLU</center></strong></td>
<td><strong><center>Winogrande</center></strong></td>
<td><strong><center>Hellaswag</center></strong></td>
<td><strong><center>GSM8k</center></strong></td>
<td><strong><center>TruthfulQA</center></strong></td>
</tr>
<tr>
<td>gemma-1.1-7b-it</td><td><center>41.44</center></td><td><center>40.32</center></td><td><center>47.22</center></td><td><center>55.01</center></td><td><center>47.03</center></td><td><center>9.50</center></td><td><center>49.58</center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-06-28</td><td><center><strong>53.41</strong></center></td><td><center><strong>52.44</strong></center></td><td><center>54.44</center></td><td><center><strong>69.36</strong></center></td><td><center><strong>61.96</strong></center></td><td><center>31.06</center></td><td><center><strong>51.23</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-10-09</td><td><center>50.48</center></td><td><center>52.01</center></td><td><center>52.37</center></td><td><center>66.97</center></td><td><center>56.34</center></td><td><center>25.98</center></td><td><center>49.18</center></td>
</tr>
<tr>
<td><em>RoGemma-7b-Instruct-DPO-2024-10-09</em></td><td><center><em>48.27</em></center></td><td><center><em>46.66</em></center></td><td><center><em><strong>54.45</strong></em></center></td><td><center><em>63.73</em></center></td><td><center><em>49.33</em></center></td><td><center><em><strong>34.98</strong></em></center></td><td><center><em>40.45</em></center></td>
</tr>
</tbody>
</table>
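
The averages above aggregate several few-shot settings; the per-shot scores are listed in this card's YAML metadata. As a hedged sketch of reproducing one setting with EleutherAI's lm-evaluation-harness (lm-eval 0.4.x Python API): the task name `ro_arc_challenge` is assumed to be registered by the OpenLLM-Ro evaluation setup and is not part of the upstream harness.

```python
# Hedged sketch: one few-shot run with lm-evaluation-harness. The Romanian
# task name is an assumption (OpenLLM-Ro fork), not an upstream lm-eval task.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OpenLLM-Ro/RoGemma-7b-Instruct-DPO-2024-10-09",
    tasks=["ro_arc_challenge"],
    num_fewshot=5,   # the ARC average in this card spans 0/1/3/5/10/25-shot runs
    batch_size=8,
)
print(results["results"])
```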

## Downstream Tasks

<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>LaRoSeDa</strong></center></td>
<td colspan="4"><center><strong>WMT</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>Binary<br>(Macro F1)</strong></center></td>
<td><center><strong>Multiclass<br>(Macro F1)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
<td><center><strong>EN-RO<br>(Bleu)</strong></center></td>
<td><center><strong>RO-EN<br>(Bleu)</strong></center></td>
</tr>
<tr>
<td>gemma-1.1-7b-it</td><td><center>87.54</center></td><td><center>51.48</center></td><td><center>83.87</center></td><td><center>85.61</center></td><td><center>17.96</center></td><td><center><strong>27.74</strong></center></td><td><center>25.48</center></td><td><center>36.11</center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-06-28</td><td><center><strong>97.86</strong></center></td><td><center><strong>65.70</strong></center></td><td><center>98.43</center></td><td><center><strong>87.17</strong></center></td><td><center><strong>27.91</strong></center></td><td><center>23.08</center></td><td><center><strong>27.99</strong></center></td><td><center><strong>39.51</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-10-09</td><td><center>86.96</center></td><td><center>56.72</center></td><td><center><strong>98.80</strong></center></td><td><center>85.81</center></td><td><center>24.45</center></td><td><center>14.20</center></td><td><center>25.96</center></td><td><center>39.07</center></td>
</tr>
<tr>
<td><em>RoGemma-7b-Instruct-DPO-2024-10-09</em></td><td><center><em>96.45</em></center></td><td><center><em>63.23</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td><td><center><em>20.73</em></center></td><td><center><em>7.87</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td>
</tr>
</tbody>
</table>

<table>
<tbody>
<tr>
<td></td>
<td colspan="4"><center><strong>XQuAD</strong></center></td>
<td colspan="4"><center><strong>STS</strong></center></td>
</tr>
<tr>
<td></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
<td colspan="2"><center><strong>Few-shot</strong></center></td>
<td colspan="2"><center><strong>Finetuned</strong></center></td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(EM)</strong></center></td>
<td><center><strong>(F1)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
<td><center><strong>(Spearman)</strong></center></td>
<td><center><strong>(Pearson)</strong></center></td>
</tr>
<tr>
<td>gemma-1.1-7b-it</td><td><center><strong>42.10</strong></center></td><td><center><strong>62.30</strong></center></td><td><center><strong>60.34</strong></center></td><td><center><strong>77.40</strong></center></td><td><center>49.10</center></td><td><center>50.23</center></td><td><center>83.43</center></td><td><center>83.64</center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-06-28</td><td><center>17.75</center></td><td><center>28.11</center></td><td><center>52.02</center></td><td><center>68.43</center></td><td><center><strong>73.96</strong></center></td><td><center><strong>75.16</strong></center></td><td><center>86.45</center></td><td><center>86.31</center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-10-09</td><td><center>26.03</center></td><td><center>41.58</center></td><td><center>46.72</center></td><td><center>60.79</center></td><td><center>73.23</center></td><td><center>71.58</center></td><td><center><strong>88.42</strong></center></td><td><center><strong>88.45</strong></center></td>
</tr>
<tr>
<td><em>RoGemma-7b-Instruct-DPO-2024-10-09</em></td><td><center><em>19.14</em></center></td><td><center><em>38.10</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td><td><center><em>69.38</em></center></td><td><center><em>69.34</em></center></td><td><center><em>-</em></center></td><td><center><em>-</em></center></td>
</tr>
</tbody>
</table>

## MT-Bench

<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>1st turn</center></strong></td>
<td><strong><center>2nd turn</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>gemma-1.1-7b-it</td><td><center>4.83</center></td><td><center>5.11</center></td><td><center>4.55</center></td><td><center><strong>160/160</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-06-28</td><td><center>5.26</center></td><td><center><strong>5.92</strong></center></td><td><center>4.60</center></td><td><center><strong>160/160</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-10-09</td><td><center>5.24</center></td><td><center>5.55</center></td><td><center>4.94</center></td><td><center><strong>160/160</strong></center></td>
</tr>
<tr>
<td><em>RoGemma-7b-Instruct-DPO-2024-10-09</em></td><td><center><em><strong>5.47</strong></em></center></td><td><center><em><strong>5.92</strong></em></center></td><td><center><em><strong>5.03</strong></em></center></td><td><center><em><strong>160/160</strong></em></center></td>
</tr>
</tbody>
</table>

## RoCulturaBench

<table>
<tbody>
<tr>
<td><strong>Model</strong></td>
<td><strong><center>Average</center></strong></td>
<td><strong><center>Answers in Ro</center></strong></td>
</tr>
<tr>
<td>gemma-1.1-7b-it</td><td><center>3.38</center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-06-28</td><td><center>3.26</center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td>RoGemma-7b-Instruct-2024-10-09</td><td><center>3.51</center></td><td><center><strong>100/100</strong></center></td>
</tr>
<tr>
<td><em>RoGemma-7b-Instruct-DPO-2024-10-09</em></td><td><center><em><strong>3.94</strong></em></center></td><td><center><em><strong>100/100</strong></em></center></td>
</tr>
</tbody>
</table>

## RoGemma Model Family

| Model | Link |
|--------------------|:--------:|
|RoGemma-7b-Instruct-2024-06-28| [link](https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct-2024-06-28) |
|RoGemma-7b-Instruct-2024-10-09| [link](https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct-2024-10-09) |
|*RoGemma-7b-Instruct-DPO-2024-10-09*| [link](https://huggingface.co/OpenLLM-Ro/RoGemma-7b-Instruct-DPO-2024-10-09) |

## Citation

```bibtex
@misc{masala2024vorbecstiromanecsterecipetrain,
      title={"Vorbe\c{s}ti Rom\^ane\c{s}te?" A Recipe to Train Powerful Romanian LLMs with English Instructions},
      author={Mihai Masala and Denis C. Ilie-Ablachim and Alexandru Dima and Dragos Corlatescu and Miruna Zavelca and Ovio Olaru and Simina Terian-Dan and Andrei Terian-Dan and Marius Leordeanu and Horia Velicu and Marius Popescu and Mihai Dascalu and Traian Rebedea},
      year={2024},
      eprint={2406.18266},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.18266},
}
```
<!-- **APA:**

[More Information Needed] -->