arxiv:2509.01563

Kwai Keye-VL 1.5 Technical Report

Published on Sep 1 · Submitted by taesiri on Sep 3
Abstract

AI-generated summary

Keye-VL-1.5 enhances video understanding through a Slow-Fast encoding strategy, progressive pre-training, and post-training reasoning improvements, outperforming existing models on video tasks while maintaining general multimodal performance.

In recent years, the development of Large Language Models (LLMs) has significantly advanced, extending their capabilities to multimodal tasks through Multimodal Large Language Models (MLLMs). However, video understanding remains a challenging area due to the dynamic and information-dense nature of videos. Existing models struggle with the trade-off between spatial resolution and temporal coverage when processing video content. We present Keye-VL-1.5, which addresses fundamental challenges in video comprehension through three key innovations. First, we introduce a novel Slow-Fast video encoding strategy that dynamically allocates computational resources based on inter-frame similarity, processing key frames with significant visual changes at higher resolution (Slow pathway) while handling relatively static frames with increased temporal coverage at lower resolution (Fast pathway). Second, we implement a progressive four-stage pre-training methodology that systematically extends the model's context length from 8K to 128K tokens, enabling processing of longer videos and more complex visual content. Third, we develop a comprehensive post-training pipeline focusing on reasoning enhancement and human preference alignment, incorporating a 5-step chain-of-thought data construction process, iterative GSPO-based reinforcement learning with progressive prompt hinting for difficult cases, and alignment training. Through extensive evaluation on public benchmarks and rigorous internal human assessment, Keye-VL-1.5 demonstrates significant improvements over existing models, particularly excelling in video understanding tasks while maintaining competitive performance on general multimodal benchmarks.
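To make the Slow-Fast routing idea concrete, here is a minimal sketch of how frames might be split between the two pathways based on inter-frame similarity, as described in the abstract. It is not the paper's implementation: the similarity metric (cosine similarity over raw pixels), the 0.95 threshold, and the function names `frame_similarity` and `route_frames` are all placeholder assumptions for illustration.

```python
# Illustrative sketch (assumptions, not the paper's method): route video frames
# to a high-resolution "Slow" pathway or a low-resolution "Fast" pathway based
# on inter-frame similarity.
import numpy as np


def frame_similarity(prev: np.ndarray, curr: np.ndarray) -> float:
    """Cosine similarity between flattened frames (assumed metric)."""
    a = prev.astype(np.float32).ravel()
    b = curr.astype(np.float32).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def route_frames(frames: list[np.ndarray], sim_threshold: float = 0.95):
    """Return (slow_indices, fast_indices).

    Frames that differ significantly from the last kept key frame
    (similarity below the threshold) go to the Slow pathway, to be encoded
    at higher resolution; near-static frames go to the Fast pathway at
    lower resolution, keeping temporal coverage cheap.
    """
    slow, fast = [0], []          # treat the first frame as a key frame
    last_key = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        if frame_similarity(last_key, frame) < sim_threshold:
            slow.append(i)        # significant visual change -> Slow pathway
            last_key = frame
        else:
            fast.append(i)        # mostly static -> Fast pathway
    return slow, fast
```

Under this sketch, the computational budget adapts to the video: visually dynamic clips send more frames through the expensive Slow pathway, while static footage is covered cheaply by the Fast pathway.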

