https://huggingface.co/nyunai/nyun-c2-llama3-50B
#109
by jpsequeira - opened
This model presents a good opportunity to run decent quants within 24 GB of VRAM. I tried to convert it with llama.cpp, but I get an error on `num_attention_heads` in the config.json: llama.cpp expects an int, while the config contains an int[].
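The mismatch can be reproduced with a small sketch. The config excerpt below is hypothetical (the exact per-layer values are illustrative), but it shows the shape difference: pruned models like this one may store a list of head counts, one per layer, where a standard Llama config has a single integer.

```python
import json

# Hypothetical config.json excerpt: a pruned model storing a per-layer
# list of attention-head counts instead of a single integer.
config_json = """
{
  "model_type": "llama",
  "num_hidden_layers": 4,
  "num_attention_heads": [32, 32, 24, 16]
}
"""

config = json.loads(config_json)
heads = config["num_attention_heads"]

# A converter that assumes a uniform architecture reads this field as one
# int; a list (heads varying per layer) cannot be represented that way,
# so conversion fails.
if isinstance(heads, int):
    print(f"uniform head count: {heads}")
else:
    print(f"per-layer head counts: {heads} (not a single int)")
```

This is only a sketch of the type mismatch, not of llama.cpp's actual converter code.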
Yeah, it's not supported by llama.cpp at the moment, unfortunately.
mradermacher changed discussion status to closed
Thank you.