Quantized from the Unsloth BF16 weights.
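For intuition, a 2-bit quantization stores each weight as one of four code points per block, plus a per-block scale. The sketch below is a toy illustration of that precision trade-off only; it is not the actual GGUF Q2_K scheme, which packs blocks with additional scales and mins:

```python
# Toy 2-bit (4-level) block quantizer -- an illustration of the
# precision loss involved, NOT the real GGUF Q2_K format.

def quantize_2bit(weights, block_size=4):
    """Map each block of floats to 2-bit codes {0,1,2,3} plus a per-block scale."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        scale = max(abs(w) for w in block) or 1.0
        # map [-scale, scale] onto the four code points 0..3
        codes = [min(3, max(0, round((w / scale + 1) * 1.5))) for w in block]
        blocks.append((scale, codes))
    return blocks

def dequantize_2bit(blocks):
    """Reconstruct approximate floats from (scale, codes) blocks."""
    out = []
    for scale, codes in blocks:
        out.extend((c / 1.5 - 1) * scale for c in codes)
    return out

w = [0.9, -0.2, 0.05, -0.8]
restored = dequantize_2bit(quantize_2bit(w))
```

With only four representable levels per block, small weights like `0.05` land far from their original value, which is why low-bit quants trade quality for the large size reduction seen here.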

Downloads last month: 55
Format: GGUF
Model size: 36.2B params
Architecture: seed_oss

Quantization: 2-bit


Model tree for lovedheart/Seed-OSS-36B-Instruct-GGUF: this model is one of 34 quantized versions of the base model.