arxiv:2502.14866

LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention

Published on Feb 20 · Submitted by vansin on Feb 21

Abstract

Large language models (LLMs) have shown remarkable potential in processing long sequences, yet efficiently serving these long-context models remains challenging due to the quadratic computational complexity of attention in the prefilling stage and the large memory footprint of the KV cache in the decoding stage. To address these issues, we introduce LServe, an efficient system that accelerates long-sequence LLM serving via hybrid sparse attention. This method unifies different hardware-friendly, structured sparsity patterns for both prefilling and decoding attention into a single framework, where computations on less important tokens are skipped block-wise. LServe demonstrates the compatibility of static and dynamic sparsity in long-context LLM attention. This design enables multiplicative speedups by combining these optimizations. Specifically, we convert half of the attention heads to nearly free streaming heads in both the prefilling and decoding stages. Additionally, we find that only a constant number of KV pages is required to preserve long-context capabilities, irrespective of context length. We then design a hierarchical KV page selection policy that dynamically prunes KV pages based on query-centric similarity. On average, LServe accelerates LLM prefilling by up to 2.9x and decoding by 1.3-2.1x over vLLM, maintaining long-context accuracy. Code is released at https://github.com/mit-han-lab/omniserve.
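To make the dynamic side of this design concrete, here is a minimal sketch of query-centric KV page selection, assuming each KV page keeps per-channel min/max summaries of its keys and a fixed page budget `num_keep`. It is a hypothetical PyTorch illustration written for this page, not the released LServe kernels; the summary statistics, scoring rule, and budget are assumptions made for the example, and the repository linked above contains the actual implementation.

```python
import torch

def score_pages(query, page_key_max, page_key_min):
    """Estimate each KV page's relevance to the current decoding query.

    query:        [num_heads, head_dim]            current query vector per head
    page_key_max: [num_pages, num_heads, head_dim] per-channel max of keys in each page
    page_key_min: [num_pages, num_heads, head_dim] per-channel min of keys in each page
    """
    # Upper-bound the dot product q . k for any key inside a page: for each
    # channel, pick whichever key bound matches the sign of the query component.
    contrib = torch.where(query > 0, query * page_key_max, query * page_key_min)
    return contrib.sum(dim=-1)  # [num_pages, num_heads]


def select_pages(query, page_key_max, page_key_min, num_keep):
    """Keep a constant budget of pages per head, independent of context length."""
    scores = score_pages(query, page_key_max, page_key_min)  # [num_pages, num_heads]
    num_keep = min(num_keep, scores.shape[0])
    return scores.topk(num_keep, dim=0).indices              # [num_keep, num_heads]


# Example: 64 KV pages, 8 heads, head_dim 128; attend to only 16 pages per head.
q = torch.randn(8, 128)
k_max = torch.randn(64, 8, 128)
k_min = k_max - torch.rand(64, 8, 128)  # ensure min <= max per channel
kept = select_pages(q, k_max, k_min, num_keep=16)
print(kept.shape)  # torch.Size([16, 8])
```

Because the budget is fixed, each decoded token touches a bounded slice of the KV cache no matter how long the context grows, which is how a constant number of pages can preserve long-context accuracy while keeping decoding attention cost roughly flat in sequence length.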

Community

Paper submitter

Accepted by MLSys 2025. Code available at:
https://github.com/mit-han-lab/omniserve

