Support with vLLM
#4 · by rip-syntax · opened
Right now, the FastVLM-0.5B model is not supported by vLLM, due to a fundamental mismatch in the model's vision encoder architecture. Are there any future plans to make it work with vLLM? Is there any workaround to serve it using vLLM in the meantime?
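
For context, this is a minimal sketch of the kind of serving call being asked about, using vLLM's Python API. The multimodal prompt format is an assumption, and loading this checkpoint currently fails because the vision encoder architecture is not recognized by vLLM:

```python
# Sketch only: what serving FastVLM-0.5B through vLLM's Python API would look
# like if the architecture were supported. The prompt template is a placeholder
# assumption; the LLM(...) call currently fails with an unsupported-architecture error.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="apple/FastVLM-0.5B", trust_remote_code=True)

image = Image.open("example.jpg")
prompt = "<image>\nDescribe this image."  # placeholder prompt format

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```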