C4AI Aya Vision Collection (5 items): Aya Vision is a state-of-the-art family of vision models that brings multimodal capabilities to 23 languages.
Article: A Deepdive into Aya Vision: Advancing the Frontier of Multilingual Multimodality
Unified Reward Model for Multimodal Understanding and Generation • Paper • 2503.05236
Token-Efficient Long Video Understanding for Multimodal LLMs • Paper • 2503.04130
SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution • Paper • 2502.18449
DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks • Paper • 2502.17157
SurveyX: Academic Survey Automation via Large Language Models • Paper • 2502.14776
Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning • Paper • 2502.14768
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features • Paper • 2502.14786
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation • Paper • 2502.13143
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU • Paper • 2502.08910 • Published Feb 13
mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data • Paper • 2502.08468 • Published Feb 12
Analyze Feature Flow to Enhance Interpretation and Steering in Language Models • Paper • 2502.03032 • Published Feb 5
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model • Paper • 2502.02737 • Published Feb 4