GGUF quants of MN-12B-Mag-Mell-R1. Q4_K_M, Q6_K, Q8_0 and F16 are available. Let me know if you need more.
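As a usage sketch: one common way to run these files is with llama.cpp. The exact `.gguf` filename below is an assumption (check the repo's file list), and `huggingface-cli`/`llama-cli` must already be installed.

```shell
# Download one quant from this repo
# (the .gguf filename is an assumption; check the repo's Files tab)
huggingface-cli download inflatebot/MN-12B-Mag-Mell-R1-GGUF \
  MN-12B-Mag-Mell-R1.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (generate 64 tokens from a short prompt)
./llama-cli -m MN-12B-Mag-Mell-R1.Q4_K_M.gguf -p "Hello" -n 64
```

Larger quants (Q6_K, Q8_0, F16) trade more memory for less quantization loss; swap the filename accordingly.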

Downloads last month: 4,298

Model size: 12.2B params
Architecture: llama
Available precisions: 4-bit (Q4_K_M), 6-bit (Q6_K), 8-bit (Q8_0), 16-bit (F16)


Model tree for inflatebot/MN-12B-Mag-Mell-R1-GGUF: quantized (21 models), including this model.