Update README.md

README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-license:
+license: apache-2.0
 language:
 - ru
 base_model:
@@ -7,6 +7,9 @@ base_model:
 datasets:
 - pomelk1n/RuadaptQwen-Quantization-Dataset
 pipeline_tag: text-generation
+tags:
+- AWQ
+- GEMM
 ---
 
 This model is a quantized version.
@@ -31,4 +34,4 @@ model = AutoAWQForCausalLM.from_pretrained(
 model.quantize(tokenizer, quant_config=quant_config, calib_data="/data/scripts/RuadaptQwen-Quantization-Dataset", text_column='text')
 
 model.save_quantized(quant_path, safetensors=True, shard_size="5GB")
-```
+```