
Add pipeline tag, library name and link to paper

#2
by nielsr (HF staff) - opened
Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -1,17 +1,21 @@
  ---
- license: apache-2.0
- language:
- - en
- - fi
  base_model:
  - LumiOpen/Poro-34B
  datasets:
  - sablo/oasst2_curated
  - LumiOpen/instruction-collection-fin
+ language:
+ - en
+ - fi
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
  ---

  This is an SFT-tuned model of [Poro-34B](https://huggingface.co/LumiOpen/Poro-34B) with English and Finnish data. We trained this model as part of our experiments on the impact of multilingual instruction-tuning on Poro-34B. For a better chat experience, we recommend using [Poro-34B-chat](https://huggingface.co/LumiOpen/Poro-34B-chat) instead.

+ The model was presented in the paper [Poro 34B and the Blessing of Multilinguality](https://huggingface.co/papers/2404.01856).
+
  ## Datasets

  ### SFT
@@ -90,7 +94,6 @@ seed: 42
  warmup_ratio: 0.1
  ```

-
  ## Evaluation

  We use [IFEval](https://huggingface.co/datasets/google/IFEval) to evaluate the performance of the model in English. For Finnish, we translated the IFEval prompts to [Finnish](https://huggingface.co/datasets/LumiOpen/ifeval_mt) with DeepL. We report the instruction-level strict accuracy:
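
For reference, the `library_name: transformers` and `pipeline_tag: text-generation` metadata added in this PR correspond to the standard `transformers` text-generation workflow. The sketch below is illustrative only: the repository ID is a placeholder (this page does not spell it out), and the `torch_dtype` / `device_map` settings are assumptions for running a 34B-parameter model, not part of the model card.

```python
# Illustrative sketch of the usage implied by library_name: transformers and
# pipeline_tag: text-generation. "LumiOpen/<this-model>" is a placeholder for
# the actual repository ID; dtype/device settings are assumptions.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LumiOpen/<this-model>",  # placeholder -- replace with the real repo ID
    torch_dtype=torch.bfloat16,     # assumption: bf16 to fit a 34B model in memory
    device_map="auto",              # assumption: spread weights across available GPUs
)

# English and Finnish prompts both apply, since the model is tuned on both languages.
output = generator("Kirjoita lyhyt runo Suomen kesästä.", max_new_tokens=100)
print(output[0]["generated_text"])
```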