XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization Paper • 2508.10395 • Published 20 days ago • 40
Overcoming Simplicity Bias in Deep Networks using a Feature Sieve Paper • 2301.13293 • Published Jan 30, 2023
QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache Paper • 2502.10424 • Published Feb 5 • 1