MMTok: Multimodal Coverage Maximization for Efficient Inference of VLMs
Abstract
A multimodal method leverages both vision and text tokens to optimize vision token selection, improving inference efficiency in vision-language models.
Vision-Language Models (VLMs) demonstrate impressive performance in understanding visual content with language instructions by converting visual inputs into vision tokens. However, redundancy among vision tokens degrades the inference efficiency of VLMs. While many algorithms have been proposed to reduce the number of vision tokens, most of them rely on unimodal information only (i.e., vision or text) for pruning and ignore the inherent multimodal nature of vision-language tasks. Moreover, a generic criterion that can be applied across modalities is lacking. To mitigate this limitation, we propose to leverage both vision and text tokens to select informative vision tokens under the criterion of coverage. We first formulate the subset selection problem as a maximum coverage problem. A subset of vision tokens is then optimized to cover the text tokens and the original set of vision tokens simultaneously. Finally, a VLM agent can be adopted to further improve the quality of the text tokens that guide vision pruning. The proposed method, MMTok, is extensively evaluated on benchmark datasets with different VLMs. The comparison illustrates that vision and text information are complementary, and that combining multimodal information surpasses unimodal baselines by a clear margin. Moreover, under the maximum coverage criterion on the POPE dataset, our method achieves a 1.87× speedup while maintaining 98.7% of the original performance on LLaVA-NeXT-13B. Furthermore, with only four vision tokens, it still preserves 87.7% of the original performance on LLaVA-1.5-7B. These results highlight the effectiveness of coverage in token selection.
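The abstract frames token selection as a maximum coverage problem over both modalities but gives no implementation details here. Below is a minimal greedy sketch under my own assumptions, not the paper's actual method: cosine similarity as the coverage score, a weight `alpha` balancing text versus vision coverage, and the standard greedy heuristic for maximum coverage. Function and variable names are hypothetical.

```python
# Hypothetical sketch of greedy maximum-coverage vision-token selection.
# Shapes, the cosine-similarity coverage score, and the text/vision weight
# `alpha` are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def select_vision_tokens(vision_tokens, text_tokens, k, alpha=0.5):
    """Greedily pick k vision tokens whose coverage of the text tokens
    and of the full vision-token set is (approximately) maximized.

    vision_tokens: (N, d) vision token embeddings
    text_tokens:   (M, d) text token embeddings
    k:             number of vision tokens to keep
    alpha:         weight balancing text coverage vs. vision coverage
    Returns a tensor of selected vision-token indices.
    """
    v = F.normalize(vision_tokens, dim=-1)
    t = F.normalize(text_tokens, dim=-1)

    sim_vt = v @ t.T  # (N, M): candidate-to-text-token similarity
    sim_vv = v @ v.T  # (N, N): candidate-to-vision-token similarity

    # Coverage of each target token = best similarity to any selected token so far.
    cov_text = torch.zeros(t.shape[0], device=v.device)
    cov_vis = torch.zeros(v.shape[0], device=v.device)
    selected = []

    for _ in range(min(k, v.shape[0])):
        # Marginal gain of adding each candidate to the coverage objective.
        gain_text = torch.clamp(sim_vt - cov_text, min=0).sum(dim=1)
        gain_vis = torch.clamp(sim_vv - cov_vis, min=0).sum(dim=1)
        gain = alpha * gain_text + (1 - alpha) * gain_vis
        if selected:
            gain[selected] = float("-inf")  # do not re-select tokens

        best = int(gain.argmax())
        selected.append(best)
        cov_text = torch.maximum(cov_text, sim_vt[best])
        cov_vis = torch.maximum(cov_vis, sim_vv[best])

    return torch.tensor(selected)
```

A coverage objective of this form (a sum over targets of the best similarity to any selected token) is monotone submodular, so the greedy choice carries the usual (1 − 1/e) approximation guarantee, which is one common reason to cast token selection as maximum coverage.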
Community
Efficient token pruning using multimodal coverage maximization: MMTok achieves up to a 1.87× speedup on H100 while preserving 98.7% accuracy, and retains 87.7% F1 with just 4 vision tokens on POPE.
Project page: https://project.ironieser.cc/mmtok
Impressive work!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- HiPrune: Training-Free Visual Token Pruning via Hierarchical Attention in Vision-Language Models (2025)
- CoViPAL: Layer-wise Contextualized Visual Token Pruning for Large Vision-Language Models (2025)
- AdaptInfer: Adaptive Token Pruning for Vision-Language Model Inference with Dynamical Text Guidance (2025)
- Fourier-VLM: Compressing Vision Tokens in the Frequency Domain for Large Vision-Language Models (2025)
- CATP: Contextually Adaptive Token Pruning for Efficient and Enhanced Multimodal In-Context Learning (2025)
- TransPrune: Token Transition Pruning for Efficient Large Vision-Language Model (2025)
- MoCHA: Advanced Vision-Language Reasoning with MoE Connector and Hierarchical Group Attention (2025)
Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0