Collections
Collections including paper arxiv:2412.06769
Collection 1

- Training Large Language Models to Reason in a Continuous Latent Space (Paper • 2412.06769 • Published • 77)
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (Paper • 2408.03314 • Published • 54)
- ICAL: Continual Learning of Multimodal Agents by Transforming Trajectories into Actionable Insights (Paper • 2406.14596 • Published • 5)
- A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More (Paper • 2407.16216 • Published)

Collection 2

- Training Large Language Models to Reason in a Continuous Latent Space (Paper • 2412.06769 • Published • 77)
- Byte Latent Transformer: Patches Scale Better Than Tokens (Paper • 2412.09871 • Published • 90)
- Qwen2.5 Technical Report (Paper • 2412.15115 • Published • 345)
- YuLan-Mini: An Open Data-efficient Language Model (Paper • 2412.17743 • Published • 64)

Collection 3

- Training Large Language Models to Reason in a Continuous Latent Space (Paper • 2412.06769 • Published • 77)
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (Paper • 2408.03314 • Published • 54)
- Evolving Deeper LLM Thinking (Paper • 2501.09891 • Published • 105)
- Kimi k1.5: Scaling Reinforcement Learning with LLMs (Paper • 2501.12599 • Published • 87)

Collection 4

- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering (Paper • 2411.11504 • Published • 20)
- Top-nσ: Not All Logits Are You Need (Paper • 2411.07641 • Published • 20)
- Adaptive Decoding via Latent Preference Optimization (Paper • 2411.09661 • Published • 10)
- When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training (Paper • 2411.13476 • Published • 15)

Collection 5

- Large Language Models Can Self-Improve in Long-context Reasoning (Paper • 2411.08147 • Published • 63)
- Reverse Thinking Makes LLMs Stronger Reasoners (Paper • 2411.19865 • Published • 20)
- Training Large Language Models to Reason in a Continuous Latent Space (Paper • 2412.06769 • Published • 77)
- HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs (Paper • 2412.18925 • Published • 98)

Collection 6

- On Memorization of Large Language Models in Logical Reasoning (Paper • 2410.23123 • Published • 18)
- LLMs Do Not Think Step-by-step In Implicit Reasoning (Paper • 2411.15862 • Published • 8)
- Training Large Language Models to Reason in a Continuous Latent Space (Paper • 2412.06769 • Published • 77)
- Deliberation in Latent Space via Differentiable Cache Augmentation (Paper • 2412.17747 • Published • 30)