PepBun committed
Commit cf56b72 · 1 Parent(s): 116d280

Upload model

Files changed (2)
  1. README.md +18 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -324,4 +324,22 @@ The following `bitsandbytes` quantization config was used during training:
  ### Framework versions
 
 
+ - PEFT 0.6.2
+ ## Training procedure
+
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: True
+ - load_in_4bit: False
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
+ ### Framework versions
+
+
  - PEFT 0.6.2
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1ac1f5a2595956596e42a818e2b222dfd04d0826ccf6f3a7eab94ee02ba1d42a
+ oid sha256:53028f846e968626fccddb28ed3aa2b661c017773b08b91ace7eb8bd94d3fbc3
  size 21020682
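
The README addition above records the `bitsandbytes` quantization settings used during training. As a minimal sketch (not part of this commit), those settings map onto transformers' `BitsAndBytesConfig` roughly as follows; `BASE_MODEL_ID` and `ADAPTER_REPO_ID` are placeholders, since the diff names neither the base model nor the adapter repository.

```python
# Minimal sketch: recreate the quantization config listed in the README
# addition and attach the uploaded PEFT adapter weights (adapter_model.bin).
# BASE_MODEL_ID and ADAPTER_REPO_ID are placeholders, not values from the diff.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    load_in_4bit=False,                      # load_in_4bit: False
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="fp4",               # bnb_4bit_quant_type: fp4 (inert in 8-bit mode)
    bnb_4bit_use_double_quant=False,         # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float32,    # bnb_4bit_compute_dtype: float32
)

# Load the 8-bit quantized base model, then apply the PEFT adapter on top.
base_model = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "ADAPTER_REPO_ID")
```

Since `load_in_8bit` is True, the `bnb_4bit_*` entries have no effect at load time; they are included only to mirror the config recorded in the README.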