Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs (arXiv:2412.21187, published Dec 30, 2024)
CodeUnlearn: Amortized Zero-Shot Machine Unlearning in Language Models Using Discrete Concept (arXiv:2410.10866, published Oct 8, 2024)
Byte Latent Transformer: Patches Scale Better Than Tokens (arXiv:2412.09871, published Dec 13, 2024)