- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 26
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 13
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 43
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 22
Collections including paper arxiv:2408.08872
- sentence-transformers/all-mpnet-base-v2
  Sentence Similarity • Updated • 34.6M • 1.01k
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 11
- google-t5/t5-base
  Translation • Updated • 3.26M • 676
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55

- What matters when building vision-language models?
  Paper • 2405.02246 • Published • 102
- MUMU: Bootstrapping Multimodal Image Generation from Text-to-Image Data
  Paper • 2406.18790 • Published • 34
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 126
- Show-o: One Single Transformer to Unify Multimodal Understanding and Generation
  Paper • 2408.12528 • Published • 51

- LongVILA: Scaling Long-Context Visual Language Models for Long Videos
  Paper • 2408.10188 • Published • 52
- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
  Paper • 2408.08872 • Published • 99
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 126
- Show-o: One Single Transformer to Unify Multimodal Understanding and Generation
  Paper • 2408.12528 • Published • 51

- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
  Paper • 2408.08872 • Published • 99
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
  Paper • 2408.11039 • Published • 59
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 126