Efficient Data Selection at Scale via Influence Distillation — arXiv:2505.19051 (May 25, 2025)
Quartet: Native FP4 Training Can Be Optimal for Large Language Models — arXiv:2505.14669 (May 20, 2025)
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation — arXiv:2401.04679 (Jan 9, 2024)
SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks — arXiv:2302.04852 (Feb 9, 2023)
Panza: A Personalized Text Writing Assistant via Data Playback and Local Fine-Tuning — arXiv:2407.10994 (Jun 24, 2024)
HALO: Hadamard-Assisted Lossless Optimization for Efficient Low-Precision LLM Training and Fine-Tuning — arXiv:2501.02625 (Jan 5, 2025)
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations — arXiv:2502.05003 (Feb 7, 2025)