PepBun committed
Commit 12ad66f · 1 Parent(s): 09b495d

Upload model

Files changed (2)
  1. README.md +18 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -414,4 +414,22 @@ The following `bitsandbytes` quantization config was used during training:
  ### Framework versions
 
 
+ - PEFT 0.6.2
+ ## Training procedure
+
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: True
+ - load_in_4bit: False
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: fp4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
+ ### Framework versions
+
+
  - PEFT 0.6.2
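For reference, the values added to the README correspond to a `transformers` `BitsAndBytesConfig` roughly as in the sketch below. This is only an illustration of the listed settings, not code taken from the commit; it assumes the standard `transformers`/`bitsandbytes` integration.

```python
# Sketch only: mirrors the quantization values listed in the README diff above.
# Assumes transformers' BitsAndBytesConfig; the commit itself contains no code.
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,  # README lists float32 as the 4-bit compute dtype
)
```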
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:37e2f1f3d0b5455233f4314efe5519d4aa65670296108f3c8f083d9c8b40e8fa
+ oid sha256:69ed818ae42965cf65305ec99e26116d070cbb77d143c06a0946a8afe4cca67e
  size 21020682
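Since the commit uploads a PEFT adapter (`adapter_model.bin`) trained with the 8-bit config above, loading it would look roughly like the sketch below. The base model ID and adapter repo ID are placeholders, as neither is named in this diff; the calls assume the `peft` (0.6.2) and `transformers` APIs.

```python
# Usage sketch only. "base-model-id" and "PepBun/adapter-repo" are hypothetical
# placeholders: the diff does not name the base model or the adapter repository.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                                           # hypothetical
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches the README config
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "PepBun/adapter-repo")  # hypothetical repo ID
model.eval()
```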