| Name | Quant method | Size |
|---|---|---|
| Ae-calem-mistral-7b-v0.2_f16.gguf | fp16 | 14.6 GB |
| Ae-calem-mistral-7b-v0.2_8bit.gguf | q8_0 | 7.75 GB |
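A minimal usage sketch for the files above, assuming llama.cpp is installed and the chosen GGUF file has been downloaded locally (the prompt and token count are illustrative):

```shell
# Run the q8_0 quant from the table with llama.cpp's CLI.
# Assumes `llama-cli` is on PATH and the .gguf file is in the current directory.
llama-cli -m Ae-calem-mistral-7b-v0.2_8bit.gguf -p "Hello, world" -n 64
```

The q8_0 file trades a small amount of quality for roughly half the memory footprint of the fp16 file, so it is usually the better starting point on consumer hardware.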