
An experimental model fine-tuned on command-r-v01 using the "multiplicative-LoRA" method. This repository is a 4.5 bpw EXL2 quantization.

NOTE: You MUST use a small min-p value such as 0.01 with this model (as with the original command-r-v01), or it will output gibberish!
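To illustrate, here is a minimal sketch of a request payload that sets min_p. It assumes an OpenAI-compatible local server that supports the non-standard `min_p` sampling parameter (e.g. many EXL2/llama.cpp-based backends); the model name and prompt are placeholders, not part of this model card.

```python
import json

# Hypothetical request payload for a local, OpenAI-compatible completion
# endpoint that accepts "min_p". Only min_p is essential here: without a
# small min_p this model is reported to output gibberish.
payload = {
    "model": "creative-writer-35b-preview-exl2-4.5bpw",  # placeholder name
    "prompt": "Write the opening line of a mystery novel.",
    "max_tokens": 256,
    "temperature": 1.0,
    "min_p": 0.01,  # REQUIRED: small non-zero min-p, per the note above
}

print(json.dumps(payload, indent=2))
```

Whether `min_p` is accepted at the top level of the request depends on the serving backend; check your server's documentation for where sampler overrides go.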


Model tree for gghfez/jukofyork_creative-writer-35b-preview-exl2-4.5bpw
