Jialiang Kang (JLKang)
AI & ML interests: Vision Language Models
Organizations: None yet
ViSpec
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct • Image-Text-to-Text • 0.4B • Updated • 55
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct • Image-Text-to-Text • 0.9B • Updated • 52
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf • Image-Text-to-Text • 0.5B • Updated • 12
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf • Image-Text-to-Text • 0.7B • Updated • 7
models 5
- JLKang/ViSpec-llava-1.5-7b-hf • Image-Text-to-Text • 0.5B • Updated • 9
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf • Image-Text-to-Text • 0.7B • Updated • 7
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf • Image-Text-to-Text • 0.5B • Updated • 12
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct • Image-Text-to-Text • 0.9B • Updated • 52
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct • Image-Text-to-Text • 0.4B • Updated • 55
datasets 0
None public yet