- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 27
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 13
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 43
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 22
Collections including paper arxiv:2503.11576
- Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs
  Paper • 2503.16870 • Published • 5
- Gemma 3 Technical Report
  Paper • 2503.19786 • Published • 40
- Qwen2.5-Omni Technical Report
  Paper • 2503.20215 • Published • 112
- Think Twice: Enhancing LLM Reasoning by Scaling Multi-round Test-time Thinking
  Paper • 2503.19855 • Published • 24
- SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion
  Paper • 2503.11576 • Published • 81
- OmniDocBench: Benchmarking Diverse PDF Document Parsing with Comprehensive Annotations
  Paper • 2412.07626 • Published • 22
- ds4sd/docling-models
  Updated • 553k • 116
- ds4sd/SmolDocling-256M-preview
  Image-Text-to-Text • Updated • 63.4k • 1.12k