afrideva/Ferret-3B-GGUF

Quantized GGUF model files for Ferret-3B from euclaise

Name                   Quant method  Size
ferret-3b.fp16.gguf    fp16          5.59 GB
ferret-3b.q2_k.gguf    q2_k          1.20 GB
ferret-3b.q3_k_m.gguf  q3_k_m        1.39 GB
ferret-3b.q4_k_m.gguf  q4_k_m        1.71 GB
ferret-3b.q5_k_m.gguf  q5_k_m        1.99 GB
ferret-3b.q6_k.gguf    q6_k          2.30 GB
ferret-3b.q8_0.gguf    q8_0          2.97 GB
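As a rough sanity check on the table above, the effective bits per weight of each quant can be estimated from its file size and the 2.8B parameter count. This is a back-of-the-envelope sketch: it assumes decimal gigabytes and ignores GGUF metadata overhead, so the numbers are approximate.

```python
# Estimate bits per weight for each quant from file size.
# Assumptions (from the card above): 2.8e9 params, sizes in decimal GB.
N_PARAMS = 2.8e9

sizes_gb = {
    "fp16": 5.59, "q2_k": 1.20, "q3_k_m": 1.39,
    "q4_k_m": 1.71, "q5_k_m": 1.99, "q6_k": 2.30, "q8_0": 2.97,
}

# bits/weight = (bytes * 8 bits/byte) / number of parameters
bpw = {q: gb * 1e9 * 8 / N_PARAMS for q, gb in sizes_gb.items()}

for quant, bits in bpw.items():
    print(f"{quant:8s} ~{bits:.1f} bits/weight")
```

The fp16 file works out to roughly 16 bits per weight, as expected, and q4_k_m lands near 5 bits per weight; the k-quants run slightly above their nominal bit width because they store per-block scale factors alongside the weights.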

Original Model Card:

Model size: 2.8B params
Architecture: stablelm


Model tree for afrideva/Ferret-3B-GGUF:
Base model: euclaise/Ferret-3B (this repository is its quantized version)
