Generated from https://github.com/yhyu13/AutoGPTQ.git, branch `cuda_dev`.

Original weights: https://huggingface.co/tiiuae/falcon-7b

Note: this is a quantization of the base model, which has not yet been fine-tuned with chat instructions.
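Because this repo contains custom modeling code, it is not served by the hosted inference API, so loading it locally is the expected path. Below is a minimal sketch assuming the AutoGPTQ and Transformers packages are installed and the quantized weights have been downloaded; `model_dir`, the device string, and the generation settings are placeholders for illustration, not values from this repo.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Hypothetical local path to this repo's quantized weights -- adjust as needed.
model_dir = "falcon-7b-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_dir)

# trust_remote_code is required because Falcon ships custom modeling code.
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    trust_remote_code=True,
)

prompt = "The falcon is a bird that"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since this is the base (non-chat) model, plain text-completion prompts like the one above work better than instruction-style prompts.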