Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation
Abstract
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability - creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only <9% of incorrect solutions in REFUTE, even though ratings indicate its ability to solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions - a capability that is crucial for both accelerating research and making models self-improve through reliable reflective reasoning.
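The automatic evaluation described above boils down to differential testing: a candidate counterexample is accepted if it is a valid input on which the flawed submission's output diverges from a trusted reference solution. Below is a minimal sketch of that check, not the paper's released harness; the helper names (`is_valid_counterexample`, `run`) are hypothetical, and it assumes both solutions are standalone Python scripts that read stdin and write stdout. A full harness would also need to validate input-format constraints, treat runtime errors and timeouts as failing verdicts, and handle problems with multiple accepted outputs.

```python
import subprocess

def is_valid_counterexample(candidate_input: str,
                            incorrect_solution: str,
                            correct_solution: str,
                            timeout: float = 5.0) -> bool:
    """Return True if candidate_input makes the incorrect solution's
    output diverge from the reference solution's output."""

    def run(source_path: str) -> str:
        # Execute one solution script on the candidate input, capture stdout.
        result = subprocess.run(
            ["python", source_path],
            input=candidate_input,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout.strip()

    try:
        expected = run(correct_solution)
        observed = run(incorrect_solution)
    except subprocess.TimeoutExpired:
        # A timeout may also expose a bug (e.g. an infinite loop),
        # but this sketch conservatively rejects the candidate.
        return False

    return expected != observed
```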
Community
Brandolini's law says that bs is far easier to produce than to refute. As AI-generated hypotheses proliferate in scientific discourse, can AI help us falsify some of them? Our paper finds that falsification capabilities lag far behind generation: o3-mini (high) succeeds on fewer than 9% of cases, even though its rating suggests it could solve roughly half of these problems from scratch.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- A Tool for In-depth Analysis of Code Execution Reasoning of Large Language Models (2025)
- Language Models Struggle to Achieve a Consistent Temporal Representation of Facts (2025)
- CounterBench: A Benchmark for Counterfactuals Reasoning in Large Language Models (2025)
- Instantiation-based Formalization of Logical Reasoning Tasks using Language Models and Logical Solvers (2025)
- LLM-ProS: Analyzing Large Language Models' Performance in Competitive Problem Solving (2025)
- JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models (2025)
- Token-by-Token Regeneration and Domain Biases: A Benchmark of LLMs on Advanced Mathematical Problem-Solving (2025)