---
library_name: peft
tags:
- falcon
- causal-lm
- text-generation
license: apache-2.0
inference: false
---
# Falcon 7B Tatts Merged Model

## Model Description

This model merges LoRA adapter weights into the Falcon 7B base model, producing a standalone checkpoint for causal language modeling and text-generation tasks.
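Because the adapters are already merged, the checkpoint can be loaded with plain `transformers` and no PEFT wrapper. The sketch below shows one way to do that; `REPO_ID` is a placeholder, not the actual repository id, which this card does not state.

```python
# Sketch: loading the merged checkpoint for generation.
# Assumption: the merged weights live in a Hugging Face Hub repo; REPO_ID
# below is a placeholder, not the real repo id.

REPO_ID = "your-username/falcon-7b-tatts-merged"  # placeholder (assumption)


def load_merged_model(repo_id: str = REPO_ID):
    """Return (tokenizer, model) for text generation with the merged weights."""
    # Imported lazily so this sketch can be defined without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # matches the fp16 compute dtype used in training
        device_map="auto",
        trust_remote_code=True,     # Falcon checkpoints may ship custom modeling code
    )
    return tokenizer, model
```

After loading, generation works as with any causal LM: tokenize a prompt, call `model.generate(...)`, and decode the result with the tokenizer.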
## Training procedure

The following bitsandbytes quantization config was used during training:

- `load_in_8bit`: False
- `load_in_4bit`: True
- `llm_int8_threshold`: 6.0
- `llm_int8_skip_modules`: None
- `llm_int8_enable_fp32_cpu_offload`: False
- `llm_int8_has_fp16_weight`: False
- `bnb_4bit_quant_type`: nf4
- `bnb_4bit_use_double_quant`: False
- `bnb_4bit_compute_dtype`: float16
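To reproduce this setup, the settings above map directly onto the keyword arguments of `transformers.BitsAndBytesConfig`. The following is a sketch of that mapping; the values are taken verbatim from the list above.

```python
# The quantization settings above, expressed as BitsAndBytesConfig kwargs.
# Values are copied from the training config listed in this card.
bnb_config_kwargs = dict(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float16",  # accepted as a string; resolved to torch.float16
)


def make_bnb_config():
    """Build the quantization config for AutoModelForCausalLM.from_pretrained."""
    # Imported lazily so the kwargs above can be inspected without transformers.
    from transformers import BitsAndBytesConfig

    return BitsAndBytesConfig(**bnb_config_kwargs)
```

Passing the result as `quantization_config=make_bnb_config()` to `from_pretrained` would load a base model under the same 4-bit NF4 scheme used here.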
## Framework versions

- PEFT 0.4.0