owao/Falcon3-Mamba-R1-v0-Q6_K-GGUF

This model was converted to GGUF format from hanzla/Falcon3-Mamba-R1-v0 using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
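
Because the weights ship as a single Q6_K GGUF file, the model can be loaded directly from the Hub with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the `*q6_k.gguf` glob is an assumption about the exact filename in this repo, and it requires a llama.cpp build recent enough to support the mamba architecture.

```python
# Minimal sketch: pull the Q6_K GGUF from the Hub and run a chat completion
# with llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="owao/Falcon3-Mamba-R1-v0-Q6_K-GGUF",
    filename="*q6_k.gguf",  # assumed glob; adjust to the repo's actual .gguf name
    n_ctx=2048,             # context window; raise for longer prompts
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain state-space models in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Equivalently, llama.cpp's `llama-cli` and `llama-server` binaries can fetch the same file with their `--hf-repo` and `--hf-file` flags.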

Format: GGUF
Model size: 7.27B params
Architecture: mamba
Quantization: 6-bit (Q6_K)

