GGUFs for Fireplace 34b - https://huggingface.co/ValiantLabs/Fireplace-34b

iMatrix (importance matrix) GGUFs, generated with Kalomaze's semi-random groups_merged.txt as the calibration data.

Files larger than 50 GB have been split with PeaZip. Recombine with PeaZip, 7-Zip, or a simple concatenation command; a sketch is shown below.
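
If you want to recombine by hand, here is a minimal Python sketch that streams the parts back together in order. The part filenames are hypothetical placeholders; substitute the actual names from this repository (PeaZip-style splits typically end in .001, .002, ...).

```python
# Recombine split GGUF parts by simple byte concatenation.
# NOTE: the filenames below are hypothetical examples; use the actual part
# names from this repository (PeaZip-style splits usually end in .001, .002, ...).
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("fireplace-34b.q8_0.gguf.*"))
assert parts, "no split parts found in the current directory"

with open("fireplace-34b.q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy; avoids loading >50 GB into RAM
```

On Linux or macOS the equivalent one-liner is `cat <file>.gguf.* > <file>.gguf`, again with the real filenames substituted.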
