Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks Paper • 2307.02477 • Published Jul 5, 2023
Transparency Helps Reveal When Language Models Learn Meaning Paper • 2210.07468 • Published Oct 14, 2022
Continued Pretraining for Better Zero- and Few-Shot Promptability Paper • 2210.10258 • Published Oct 19, 2022
The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities Paper • 2411.04986 • Published Nov 7, 2024 • 6
Sparkle: Mastering Basic Spatial Capabilities in Vision Language Models Elicits Generalization to Composite Spatial Reasoning Paper • 2410.16162 • Published Oct 21, 2024
ITINERA: Integrating Spatial Optimization with Large Language Models for Open-domain Urban Itinerary Planning Paper • 2402.07204 • Published Feb 11, 2024
SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models Paper • 2502.09604 • Published Feb 13, 2025 • 33
Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment Paper • 2404.12318 • Published Apr 18, 2024 • 15