https://github.com/zixi01chen/llama.cpp_internvl2_bpu

Introduction: This vision-language model (VLM) is packaged in GGUF format and runs on the BPU via the llama.cpp fork linked above.
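As a minimal sketch of how to try the model locally, assuming the fork builds like upstream llama.cpp (the binary name `llama-cli` and the model filename below are placeholders; older llama.cpp checkouts name the binary `main`, so check both repositories for the exact names):

```shell
# Sketch, not verified: clone and build the companion llama.cpp fork.
git clone https://github.com/zixi01chen/llama.cpp_internvl2_bpu
cd llama.cpp_internvl2_bpu
make

# Placeholder model filename -- download the actual .gguf file from the
# QIANCHEN100/InternVL2_5-1B-GGUF-BPU repository on Hugging Face first.
./llama-cli -m InternVL2_5-1B.gguf -p "Describe the image."
```

BPU execution additionally depends on the target hardware and the fork's own setup instructions, so consult the GitHub README for device-specific steps.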

GGUF model details:
- Model size: 630M params
- Architecture: qwen2
- Precision: 16-bit


Model tree for QIANCHEN100/InternVL2_5-1B-GGUF-BPU: this model is the quantized variant.