The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? Paper • 2502.17535 • Published Feb 24 • 8 • 2
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research Paper • 2502.12669 • Published Feb 18 • 2 • 2
Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing Paper • 2502.04411 • Published Feb 6 • 4 • 2
Can LLMs Maintain Fundamental Abilities under KV Cache Compression? Paper • 2502.01941 • Published Feb 4 • 15 • 2
ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference Paper • 2502.00299 • Published Feb 1 • 2 • 2
Should We Really Edit Language Models? On the Evaluation of Edited Language Models Paper • 2410.18785 • Published Oct 24, 2024 • 7 • 2
3D Question Answering for City Scene Understanding Paper • 2407.17398 • Published Jul 24, 2024 • 22 • 5