# L3.1-70B-Celeste-v0.1-GGUF

GGUF quants of nothingiisreal/L3.1-70B-Celeste-V0.1-BF16

Made with AutoGGUF

| Quantization | Size    |
|--------------|---------|
| q5_k_m       | 49.9 GB |
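
To run the quant locally, you can fetch it straight from the Hub. A minimal sketch, assuming the file is named `L3.1-70B-Celeste-v0.1-q5_k_m.gguf` (check the repo's file list; quants this large are sometimes split into parts):

```python
# Download the q5_k_m GGUF from this repo to the local HF cache.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="leafspark/L3.1-70B-Celeste-v0.1-GGUF",
    filename="L3.1-70B-Celeste-v0.1-q5_k_m.gguf",  # assumed filename
)
print(gguf_path)  # local path to the .gguf file
```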
Model size: 70.6B params
Architecture: llama
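
Since the architecture is llama, the file should load in any llama.cpp-based runtime. A minimal inference sketch using the llama-cpp-python bindings (the context size and GPU-offload values are assumptions; tune them to your hardware):

```python
from llama_cpp import Llama

llm = Llama(
    model_path=gguf_path,  # path returned by the download sketch above
    n_ctx=8192,            # assumed context window; Llama 3.1 supports more
    n_gpu_layers=-1,       # offload all layers to the GPU if one is available
)

out = llm("Write a short scene set in a lighthouse.", max_tokens=256)
print(out["choices"][0]["text"])
```

Note that the q5_k_m file is roughly 50 GB, so plan RAM/VRAM accordingly.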

Quantizations are available at 3-bit, 4-bit, and 5-bit widths.

