SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models Paper • 2502.09604 • Published Feb 13, 2025 • 33
Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps Paper • 2407.07071 • Published Jul 9, 2024 • 12
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks Paper • 2307.02477 • Published Jul 5, 2023
Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement Paper • 2310.08559 • Published Oct 12, 2023 • 1
Learning to Reason via Program Generation, Emulation, and Search Paper • 2405.16337 • Published May 25, 2024