- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 27
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 13
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 43
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 22
Collections
Collections including paper arxiv:2403.11703
- DocLLM: A layout-aware generative language model for multimodal document understanding
  Paper • 2401.00908 • Published • 182
- COSMO: COntrastive Streamlined MultimOdal Model with Interleaved Pre-Training
  Paper • 2401.00849 • Published • 17
- LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
  Paper • 2311.05437 • Published • 50
- LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing
  Paper • 2311.00571 • Published • 41
- Improved Baselines with Visual Instruction Tuning
  Paper • 2310.03744 • Published • 37
- DeepSeek-VL: Towards Real-World Vision-Language Understanding
  Paper • 2403.05525 • Published • 44
- Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities
  Paper • 2308.12966 • Published • 8
- LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model
  Paper • 2404.01331 • Published • 26
- LLaVA-OneVision: Easy Visual Task Transfer
  Paper • 2408.03326 • Published • 60
- VILA^2: VILA Augmented VILA
  Paper • 2407.17453 • Published • 41
- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 70
- openbmb/MiniCPM-V-2_6
  Image-Text-to-Text • Updated • 260k • 964
- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 16
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 9
- To See is to Believe: Prompting GPT-4V for Better Visual Instruction Tuning
  Paper • 2311.07574 • Published • 16
- MyVLM: Personalizing VLMs for User-Specific Queries
  Paper • 2403.14599 • Published • 17
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training
  Paper • 2403.09611 • Published • 127
- Evolutionary Optimization of Model Merging Recipes
  Paper • 2403.13187 • Published • 53
- MobileVLM V2: Faster and Stronger Baseline for Vision Language Model
  Paper • 2402.03766 • Published • 14
- LLM Agent Operating System
  Paper • 2403.16971 • Published • 68
- Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset
  Paper • 2403.09029 • Published • 55
- Cleaner Pretraining Corpus Curation with Neural Web Scraping
  Paper • 2402.14652 • Published
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
  Paper • 2403.11703 • Published • 17
- How Far Are We from Intelligent Visual Deductive Reasoning?
  Paper • 2403.04732 • Published • 22
- MoAI: Mixture of All Intelligence for Large Language and Vision Models
  Paper • 2403.07508 • Published • 76
- DragAnything: Motion Control for Anything using Entity Representation
  Paper • 2403.07420 • Published • 15
- Learning and Leveraging World Models in Visual Representation Learning
  Paper • 2403.00504 • Published • 33