Papers
arxiv:2501.17703

Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate

Published on Jan 29
· Submitted by ubowang on Jan 30
#1 Paper of the day

Abstract

Supervised Fine-Tuning (SFT) is commonly used to train language models to imitate annotated responses for given instructions. In this paper, we challenge this paradigm and propose Critique Fine-Tuning (CFT), a strategy where models learn to critique noisy responses rather than simply imitate correct ones. Inspired by human learning processes that emphasize critical thinking, CFT encourages deeper analysis and nuanced understanding, traits often overlooked by standard SFT. To validate the effectiveness of CFT, we construct a 50K-sample dataset from WebInstruct, using GPT-4o as the teacher to generate critiques in the form of (input=[query; noisy response], output=critique). CFT on this dataset yields a consistent 4-10% improvement over SFT on six math benchmarks with different base models such as Qwen2.5, Qwen2.5-Math, and DeepSeek-Math. We further expand to the MetaMath and NuminaMath datasets and observe similar gains over SFT. Notably, our Qwen2.5-Math-CFT model, trained on just 50K samples, matches or outperforms competitive models such as AceMath and Qwen2.5-Math-Instruct on most benchmarks, both of which use over 2M samples. Ablation studies show that CFT is robust to the source of the noisy responses and to the teacher critique model. Through these findings, we argue that critique-based training offers a more effective alternative for advancing the reasoning of language models.
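As a rough illustration of the (input=[query; noisy response], output=critique) format described above, here is a minimal sketch of how one CFT training example might be assembled. The function name, prompt wording, and field names are assumptions for illustration, not taken from the paper's released code.

```python
# Sketch: packing a query and a (possibly wrong) candidate response into a
# single training input whose target is the critique, per the CFT format.
# All names and prompt phrasing here are hypothetical.

def make_cft_example(query: str, noisy_response: str, critique: str) -> dict:
    """Build one CFT training pair: the model sees the query plus a noisy
    response, and learns to emit the critique as its output."""
    prompt = (
        f"Question: {query}\n"
        f"Candidate solution: {noisy_response}\n"
        "Critique the candidate solution above, pointing out any errors."
    )
    return {"input": prompt, "output": critique}

example = make_cft_example(
    query="What is 17 * 23?",
    noisy_response="17 * 23 = 381",
    critique="Incorrect: 17 * 23 = 391, so the final answer 381 is wrong.",
)
```

In contrast, a standard SFT pair would use the query alone as input and the correct response as output; CFT instead trains the model to analyze a given (noisy) solution.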

Community

Paper author · Paper submitter

Introducing Critique Fine-Tuning (CFT), a paradigm shift in LLM training where models learn to critique rather than imitate! With just 50K samples, we achieved 4-10% gains over SFT across math benchmarks, matching models trained on 2M+ samples. Critical thinking > simple imitation! 🚀 The improvements hold across all base models: with Qwen2.5-Math, CFT reaches 79.4% on MATH and 41.6% on OlympiadBench, showing that teaching critical thinking beats simple imitation even with small datasets.


Is there an implementation of CFT available anywhere? I'd like to fine-tune and test the results first-hand.


We made a deep dive video for this paper: https://www.youtube.com/watch?v=qaSFCjPjPXg. SFT(🦜) vs CFT(🦉). Happy learning together!


Models citing this paper: 2
Datasets citing this paper: 1
Spaces citing this paper: 0
Collections including this paper: 10