No support for making GGUF of HuggingFaceTB/SmolVLM-500M-Instruct #148
by TimexPeachtree - opened
Error converting to fp16:
INFO:hf-to-gguf:Loading model: SmolVLM-500M-Instruct
ERROR:hf-to-gguf:Model Idefics3ForConditionalGeneration is not supported
The model architecture is not yet supported by llama.cpp, so quantizing it will not work until support is implemented.
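The converter dispatches on the `architectures` field of the model's config.json and aborts when the name is not in its registry. A minimal sketch of pre-checking that field before attempting a conversion; the supported-architecture set below is an illustrative subset, not llama.cpp's actual registry (which lives in convert_hf_to_gguf.py):

```python
import json

# Illustrative subset of architecture names the converter recognizes
# (assumption: real list is maintained inside llama.cpp's convert_hf_to_gguf.py).
SUPPORTED_ARCHITECTURES = {
    "LlamaForCausalLM",
    "MistralForCausalLM",
    "Qwen2ForCausalLM",
}

def is_convertible(config_json: str) -> bool:
    """Return True if every architecture listed in config.json is recognized."""
    config = json.loads(config_json)
    archs = config.get("architectures", [])
    return bool(archs) and all(a in SUPPORTED_ARCHITECTURES for a in archs)

# SmolVLM-500M-Instruct reports Idefics3ForConditionalGeneration,
# which is missing from the registry, hence the "not supported" error.
print(is_convertible('{"architectures": ["Idefics3ForConditionalGeneration"]}'))
print(is_convertible('{"architectures": ["LlamaForCausalLM"]}'))
```

Running a check like this before uploading to a quantization Space saves a failed conversion round-trip.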
Two related issues on GitHub:
https://github.com/ggerganov/llama.cpp/issues/10877
https://github.com/ggerganov/llama.cpp/issues/11682