support tokenised prompt (online vllm)

#17
by Payoto - opened

Online vLLM inference passes an already pre-processed (tokenised) prompt to the multimodal preprocessor, so the preprocessor needs to accept token IDs rather than only raw text.
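A minimal sketch of the idea, assuming a hypothetical `preprocess_prompt` helper (the names and signature are illustrative, not vLLM's actual API): the preprocessor only tokenises when it actually receives a string, and passes pre-tokenised input through unchanged.

```python
from typing import Callable, List, Union

def preprocess_prompt(
    prompt: Union[str, List[int]],
    tokenize: Callable[[str], List[int]],
) -> List[int]:
    """Accept either raw text or an already tokenised prompt.

    Online serving may hand the preprocessor token IDs directly,
    so tokenise only when the input is still a string.
    """
    if isinstance(prompt, str):
        return tokenize(prompt)   # raw text: tokenise it here
    return list(prompt)           # already token IDs: pass through as-is

# Toy tokenizer standing in for a real one, for illustration only.
toy_tokenize = lambda s: [ord(c) for c in s]

text_ids = preprocess_prompt("hi", toy_tokenize)
token_ids = preprocess_prompt([104, 105], toy_tokenize)
```

Both calls yield the same token IDs, which is the behaviour the online path needs: it should not matter whether the server tokenised the prompt before reaching the preprocessor.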

Payoto changed pull request status to open
Payoto changed pull request status to closed