https://huggingface.co/mergekit-community/AngelSlayer-Slush-12B
#606 opened by Darkhells
If you can, please do it in Q4_K_M.gguf and Q5_K_M.gguf.
Sorry for the late reply. The model failed, and we likely forgot it was even requested by someone. I have now forcefully requeued it to see why. Apparently there is an error loading the GGUF in llama.cpp because token_embd.weight has the wrong shape:
llama_model_load: error loading model: check_tensor_dims: tensor 'token_embd.weight' has wrong shape; expected 5120, 131074, got 5120, 131072, 1, 1
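In llama.cpp, token_embd.weight is the token-embedding matrix, stored as [n_embd, n_vocab], so a shape check failure like this usually means the tensor's vocabulary dimension disagrees with the vocabulary size declared in the model's metadata. As a rough sketch (the parsing code and variable names here are illustrative, not part of any tool), the error line itself can be decoded to show the mismatch:

```python
import re

# The llama.cpp error line reported above for AngelSlayer-Slush-12B.
error = ("llama_model_load: error loading model: check_tensor_dims: "
         "tensor 'token_embd.weight' has wrong shape; "
         "expected  5120, 131074, got  5120, 131072,     1,     1")

# Pull the expected and actual dimension lists out of the message.
m = re.search(r"expected\s+([\d,\s]+?), got\s+([\d,\s]+)", error)
expected = [int(x) for x in m.group(1).split(",") if x.strip()]
actual = [int(x) for x in m.group(2).split(",") if x.strip()]

# token_embd.weight is [n_embd, n_vocab]; the trailing 1s are the
# unused dimensions of ggml's 4-D tensor layout.
n_vocab_expected = expected[1]
n_vocab_actual = actual[1]

print(f"embedding width: expected {expected[0]}, got {actual[0]}")
print(f"vocab size:      expected {n_vocab_expected}, got {n_vocab_actual}")
print(f"vocab mismatch:  {n_vocab_expected - n_vocab_actual} tokens")
```

Here the embedding width (5120) matches, but the metadata declares 131074 vocabulary entries while the embedding tensor only has rows for 131072, a 2-token gap of the kind that can appear when added special tokens are counted in the vocabulary but missing from the merged embedding matrix.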
Although the static quants had started to upload, I'm almost certain llama.cpp would not be able to load them, so I decided to cancel the job and delete all the repositories using llmc nukeall AngelSlayer-Slush-12B.
mradermacher changed discussion status to closed