arxiv:2507.04590

VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents

Published on Jul 7
Submitted by ziyjiang on Jul 8
AI-generated summary

VLM2Vec-V2, a unified framework for multimodal embedding, supports diverse visual forms, including videos and visual documents, and improves performance across a range of tasks and benchmarks.

Abstract

Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embedding models such as VLM2Vec, E5-V, and GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multimodal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification, and video question answering, spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 not only achieves strong performance on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through extensive evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.
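To make the retrieval setting concrete, below is a minimal sketch of how a unified multimodal embedding model of this kind is typically used: every input, whatever its modality, is mapped into one shared vector space, and candidates are ranked against a query by cosine similarity. The `encode` stub, the embedding dimension, and the file names are illustrative placeholders, not the VLM2Vec-V2 API; a real encoder and its preprocessing would replace the stub.

```python
# Minimal sketch of cross-modal retrieval with a unified embedding model.
# NOTE: `encode()` is a placeholder (random unit vectors) standing in for a
# real multimodal encoder; it only illustrates the shared-space retrieval flow.
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # assumed embedding dimension, chosen for illustration


def encode(inputs: list[str]) -> np.ndarray:
    """Placeholder encoder: maps any input (text, image path, video path,
    or document page) to a unit-normalized vector in a shared space."""
    vecs = rng.normal(size=(len(inputs), DIM))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


# A text query searched against a mixed pool of candidates
# (a document page, a video clip, a natural image) -- hypothetical file names.
query = encode(["a lecture slide explaining contrastive learning"])
candidates = encode([
    "slides/lecture03_page07.png",   # visual document page
    "videos/course_intro.mp4",       # video clip
    "images/whiteboard_photo.jpg",   # natural image
])

# Because embeddings are unit-normalized, cosine similarity is a dot product.
scores = candidates @ query[0]
ranking = np.argsort(-scores)
print("ranked candidate indices:", ranking, "scores:", scores[ranking])
```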

Community

Paper author and submitter

We introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types spanning the video and visual document domains. We then train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs.

What is the size of this model?

Paper author

The currently released version is 2B parameters in size.


Models citing this paper 1

Datasets citing this paper 2

Spaces citing this paper 2

Collections including this paper 9