Update README.md
README.md CHANGED
@@ -8,13 +8,11 @@ pipeline_tag: text-generation
 
 Use in the same way as IlyaGusev/saiga2_7b_lora.
 
-WARNING! Load the tokenizer with AutoTokenizer.from_pretrained(model_path, use_fast=True)
-
 Up to 60% faster generation and 35% faster training (on identical Russian text sequences!) with HF, because of the different tokenizer.
 
-
+rccmsu/ruadapt_mistral_7b_v0.1 trained on the Saiga corpora.
+The quality is slightly worse than IlyaGusev/saiga_mistral_7b_lora, but faster because of the tokenizer.
 
-
+WARNING! Load the tokenizer with AutoTokenizer.from_pretrained(model_path, use_fast=True)
 
-
-The quality is slightly worse than IlyaGusev/saiga_mistral_7b_lora, but faster because of the tokenizer.
+Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // arXiv preprint arXiv:2312.02598. – 2023.
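
The card defers to IlyaGusev/saiga2_7b_lora for usage and requires use_fast=True when loading the tokenizer. Below is a minimal loading sketch under those assumptions: it treats this repo as a LoRA adapter on top of rccmsu/ruadapt_mistral_7b_v0.1 and uses a Saiga-style chat prompt. `model_path` is a placeholder for this repository's id, and the adapter-vs-merged-checkpoint detail is an assumption, not something the card confirms; if the repo ships a merged model, load it directly with AutoModelForCausalLM instead.

```python
# Minimal loading sketch (assumptions: this repo is a LoRA adapter like
# IlyaGusev/saiga2_7b_lora on top of rccmsu/ruadapt_mistral_7b_v0.1;
# `model_path` is a placeholder for this repository's id).
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "model_path"  # replace with this repo's id

# Per the card: the tokenizer must be loaded with use_fast=True
# (the original Saiga cards load theirs with use_fast=False).
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)

config = PeftConfig.from_pretrained(model_path)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,  # expected to resolve to the ruadapt Mistral base
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, model_path, torch_dtype=torch.float16)
model.eval()

# Saiga-style chat prompt (assumed; same convention as saiga2_7b_lora).
prompt = (
    "<s>system\nТы — Сайга, русскоязычный автоматический ассистент.</s>\n"
    "<s>user\nПривет! Расскажи о себе.</s>\n"
    "<s>bot\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.5, top_p=0.9
    )
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```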
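The speed numbers quoted above come from the adapted vocabulary producing fewer tokens per Russian character than the stock Mistral vocabulary, so generation and training process shorter sequences. A quick way to see this is to count tokens for the same text with both tokenizers; the repo ids below are illustrative (the ruadapt base named in the card versus the standard Mistral tokenizer used by saiga_mistral_7b_lora), and the exact percentages will vary with the text.

```python
# Compare token counts for the same Russian text: fewer tokens per sequence is
# what drives the faster generation/training figures quoted in the card.
from transformers import AutoTokenizer

text = "Токенизация заметно влияет на скорость генерации русскоязычного текста."

adapted = AutoTokenizer.from_pretrained("rccmsu/ruadapt_mistral_7b_v0.1", use_fast=True)
stock = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

print("adapted tokenizer tokens:", len(adapted(text)["input_ids"]))
print("stock Mistral tokens:   ", len(stock(text)["input_ids"]))
```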