Improve model card: Add license, usage, overview, and links

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +112 -33
README.md CHANGED
@@ -1,19 +1,72 @@
  ---
  datasets:
  - yolay/RAIF-ComplexInstruction-DeepSeek
  library_name: transformers
  pipeline_tag: text-generation
- base_model:
- - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  ---

  This model belongs to the official implementation of the paper "[Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://huggingface.co/papers/2506.01413)".

- Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions.

- To this end, we propose a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to a 8B LLM.

- The model DeepSeek-Qwen-7B is our optimized model for its advanced instruction-following capabilities under complex instructions. It corresponds to the **DeepSeek-Qwen7B (Ours)** in the Table 1.

  **Table 1** Performance on seven instruction benchmarks. Best/2nd best are marked **bold**/<u>underlined</u>.

@@ -24,39 +77,65 @@ The model DeepSeek-Qwen-7B is our optimized model for its advanced instruction-f
  | Qwen2.5-1.5B-Instruct | SDC | 41.95 | 66.10 | 30.00 | 41.70 | 36.52 | 37.39 | 67.55 | 45.89 (-4.71%) |
  | Qwen2.5-1.5B-Instruct | SFT | 65.61 | 71.20 | 48.00 | 57.46 | 42.75 | 56.47 | 76.22 | 59.67 (+9.06%) |
  | Qwen2.5-1.5B-Instruct | Ours | 44.91 | 73.50 | 53.66 | 63.92 | 58.67 | 59.82 | 81.95 | 62.35 (+11.74%) |
- | DeepSeek-Qwen1.5B | I/O† | 36.04 | 62.50 | 27.99 | 39.89 | 34.51 | 20.29 | 52.00 | 39.03 |
- | DeepSeek-Qwen1.5B | SFT | 45.29 | 63.20 | 25.33 | 35.53 | 37.59 | 22.18 | 51.96 | 40.15 (+1.12%) |
- | DeepSeek-Qwen1.5B | Ours | 57.67 | 69.00 | 40.00 | 44.38 | 37.78 | 37.79 | 60.48 | 49.58 (+10.54%) |
- | DeepScaleR-1.5B | I/O† | 41.77 | 65.00 | 30.00 | 40.70 | 40.24 | 26.01 | 60.31 | 43.43 |
- | DeepScaleR-1.5B | SFT | 48.24 | 62.90 | 28.00 | 36.68 | 35.72 | 26.50 | 54.22 | 41.75 (-1.67%) |
- | DeepScaleR-1.5B | Ours | 55.63 | 67.30 | 39.33 | 43.23 | 37.81 | 36.80 | 60.08 | 48.60 (+5.17%) |
- | Qwen2.5-7B-Instruct | I/O | 72.82 | 76.50 | 64.33 | 74.47 | 59.29 | 75.03 | <u>85.60</u> | <u>72.58</u> |
- | Qwen2.5-7B-Instruct | CoT | 69.50 | 75.20 | 61.66 | 72.00 | 42.65 | 74.86 | 82.13 | 68.28 (-4.29%) |
- | Qwen2.5-7B-Instruct | SDC | 60.44 | 72.60 | **65.66**| <u>76.53</u> | <u>60.07</u> | **76.09** | **86.88** | 71.18 (-1.39%) |
- | Qwen2.5-7B-Instruct | SFT | 72.45 | <u>77.50</u> | 63.33 | 74.23 | 58.76 | 75.92 | 84.31 | 72.36 (-0.21%) |
- | Qwen2.5-7B-Instruct | Ours | 70.06 | **79.20** | <u>65.00</u> | **77.40** | **64.45** | 75.32 | 82.67 | **73.44** (+0.85%) |
- | LLaMA3.1-8B-Instruct | I/O | <u>77.63</u> | 75.20 | 56.99 | 69.11 | 46.92 | 53.52 | 71.52 | 67.01 |
- | LLaMA3.1-8B-Instruct | CoT | 60.44 | 65.50 | 47.66 | 56.54 | 32.34 | 37.36 | 58.48 | 54.53 (-12.48%) |
- | LLaMA3.1-8B-Instruct | SDC | **80.22** | 71.00 | 58.33 | 68.73 | 38.36 | 48.92 | 72.89 | 65.24 (-1.77%) |
- | LLaMA3.1-8B-Instruct | SFT | 77.26 | 75.80 | 54.00 | 65.24 | 40.16 | 59.56 | 65.30 | 64.92 (-2.09%) |
- | LLaMA3.1-8B-Instruct | Ours | 13.49 | 4.6 | 1.33 | 2.71 | 7.14 | 1.08 | 0.51 | 4.06 (-62.95%) |
- | Ministral-8B-Instruct | I/O | 59.51 | 76.20 | 62.33 | 70.03 | 54.54 | 73.49 | 84.00 | 68.58 |
- | Ministral-8B-Instruct | CoT | 48.79 | 61.90 | 49.66 | 61.31 | 39.17 | 61.75 | 79.73 | 57.47 (-11.11%) |
- | Ministral-8B-Instruct | SDC | 58.59 | 63.60 | 56.99 | 68.32 | 48.06 | 69.37 | 84.08 | 64.14 (-4.43%) |
- | Ministral-8B-Instruct | SFT | 68.57 | 66.30 | 48.66 | 67.20 | 37.26 | 54.37 | 76.62 | 59.85 (-8.72%) |
- | Ministral-8B-Instruct | Ours | 72.64 | 72.6 | 59.33 | 70.45 | 54.35 | <u>76.08</u> | 75.33 | 68.68 (+0.10%) |
- | DeepSeek-Qwen7B | I/O† | 60.81 | 72.39 | 57.99 | 66.86 | 59.59 | 62.80 | 79.64 | 65.73 |
- | DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
  | DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

- Code: https://github.com/yuleiqin/RAIF

- ---

- 🎓 If you find this work useful, please consider the following citation:
- ```
  @article{qin2025incentivizingreasoningadvancedinstructionfollowing,
- title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
  author={Yulei Qin and Gang Li and Zongyi Li and Zihan Xu and Yuchen Shi and Zhekai Lin and Xiao Cui and Ke Li and Xing Sun},
  year={2025},
  eprint={2506.01413},

  ---
+ base_model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  datasets:
  - yolay/RAIF-ComplexInstruction-DeepSeek
  library_name: transformers
  pipeline_tag: text-generation
+ license: cc-by-4.0
  ---

  This model belongs to the official implementation of the paper "[Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://huggingface.co/papers/2506.01413)".

+ Existing large language models (LLMs) face challenges in following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationships across hierarchies of types and dimensions. To this end, we propose **RAIF**, a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate a steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to an 8B LLM. Evaluation on OOD constraints also confirms the generalizability of RAIF.
+
+ The model DeepSeek-Qwen-7B is our model optimized for advanced instruction-following capabilities under complex instructions. It corresponds to **DeepSeek-Qwen7B (Ours)** in Table 1 of the paper.
+
+ **Project Page**: Not available
+ **Code**: [https://github.com/yuleiqin/RAIF](https://github.com/yuleiqin/RAIF)
+
+ ## Overview and Framework
+
+ Our preliminary experiments confirm that the reasoning (e.g., triggered by CoT prompting) of fast-thinking LLMs (instructed models) is often shallow and superficial. Such reasoning only briefly repeats parts of the input request and fails to extract key components from complex instructions, which are often composed of various sub-instructions, constraints, and rules. In contrast, existing slow-thinking LLMs (reasoning models) demonstrate superior performance: their deep, organized reasoning truly helps the analysis of complex instructions and provides decomposed action steps toward the final answer. Consequently, it is important to incentivize the authentic reasoning of LLMs for solving complex instructions.
+
+ ![The shallow, superficial reasoning of LLMs cannot improve but instead degrades performance under complex instructions.](./images/intro.png)
+
+ In this project, we present a reinforcement learning-based method for cultivating the deep reasoning of LLMs.
+
+ ![The proposed method is aimed at incentivizing the deep reasoning capabilities of instructed LLMs for solving complex instructions.](./images/framework.png)
+
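+ To make the verifiable, rule-centric reward signals mentioned above concrete, the following is a minimal, hypothetical sketch of how such a reward could score a response against decomposed constraints. The constraint rules here are illustrative assumptions, not the reward implementation used in RAIF.
+
+ ```python
+ import re
+
+ # Hypothetical rule-centric reward: each rule is a verifiable check derived
+ # from a decomposed constraint of a complex instruction. These rules are
+ # illustrative only and do not come from the RAIF codebase.
+ def rule_centric_reward(response: str) -> float:
+     rules = [
+         ("exactly_three_bullets", lambda r: len(re.findall(r"^- ", r, re.MULTILINE)) == 3),
+         ("under_120_words", lambda r: len(r.split()) <= 120),
+         ("mentions_keyword", lambda r: "entangle" in r.lower()),
+         ("avoids_forbidden_word", lambda r: "basically" not in r.lower()),
+     ]
+     passed = sum(1 for _, check in rules if check(response))
+     return passed / len(rules)  # fraction of constraints satisfied
+
+ example = "- Entangled qubits share one joint state.\n- Measuring one constrains the other.\n- The correlation holds at any distance."
+ print(rule_centric_reward(example))  # 1.0: all four rules are satisfied
+ ```
+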
+ ## Usage
+
+ You can use this model with the `transformers` library.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ model_id = "yolay/RAIF-Deepseek-Qwen-7B"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "Explain the concept of quantum entanglement in simple terms."},
+ ]
+
+ # Format the conversation with the model's chat template and append the generation prompt.
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+
+ # Drop the prompt tokens so that only the newly generated text is decoded.
+ generated_ids = [
+     output_id[len(input_id):]
+     for input_id, output_id in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
+
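+ Alternatively, the high-level `pipeline` API should also work; this is a brief sketch, and the prompt and generation settings below are illustrative choices rather than official recommendations:
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ # Chat-style message lists are accepted by the text-generation pipeline in recent transformers versions.
+ pipe = pipeline(
+     "text-generation",
+     model="yolay/RAIF-Deepseek-Qwen-7B",
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "user", "content": "List exactly three constraints that a travel-itinerary request might impose."},
+ ]
+
+ outputs = pipe(messages, max_new_tokens=512)
+ # The pipeline returns the whole conversation; the last message is the model's reply.
+ print(outputs[0]["generated_text"][-1]["content"])
+ ```
+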
+ ## Performance

  **Table 1** Performance on seven instruction benchmarks. Best/2nd best are marked **bold**/<u>underlined</u>.

  | Qwen2.5-1.5B-Instruct | SDC | 41.95 | 66.10 | 30.00 | 41.70 | 36.52 | 37.39 | 67.55 | 45.89 (-4.71%) |
  | Qwen2.5-1.5B-Instruct | SFT | 65.61 | 71.20 | 48.00 | 57.46 | 42.75 | 56.47 | 76.22 | 59.67 (+9.06%) |
  | Qwen2.5-1.5B-Instruct | Ours | 44.91 | 73.50 | 53.66 | 63.92 | 58.67 | 59.82 | 81.95 | 62.35 (+11.74%) |
+ | DeepSeek-Qwen1.5B | I/O† | 36.04 | 62.50 | 27.99 | 39.89 | 34.51 | 20.29 | 52.00 | 39.03 |
+ | DeepSeek-Qwen1.5B | SFT | 45.29 | 63.20 | 25.33 | 35.53 | 37.59 | 22.18 | 51.96 | 40.15 (+1.12%) |
+ | DeepSeek-Qwen1.5B | Ours | 57.67 | 69.00 | 40.00 | 44.38 | 37.78 | 37.79 | 60.48 | 49.58 (+10.54%) |
+ | DeepScaleR-1.5B | I/O† | 41.77 | 65.00 | 30.00 | 40.70 | 40.24 | 26.01 | 60.31 | 43.43 |
+ | DeepScaleR-1.5B | SFT | 48.24 | 62.90 | 28.00 | 36.68 | 35.72 | 26.50 | 54.22 | 41.75 (-1.67%) |
+ | DeepScaleR-1.5B | Ours | 55.63 | 67.30 | 39.33 | 43.23 | 37.81 | 36.80 | 60.08 | 48.60 (+5.17%) |
+ | Qwen2.5-7B-Instruct | I/O | 72.82 | 76.50 | 64.33 | 74.47 | 59.29 | 75.03 | <u>85.60</u> | <u>72.58</u> |
+ | Qwen2.5-7B-Instruct | CoT | 69.50 | 75.20 | 61.66 | 72.00 | 42.65 | 74.86 | 82.13 | 68.28 (-4.29%) |
+ | Qwen2.5-7B-Instruct | SDC | 60.44 | 72.60 | **65.66** | <u>76.53</u> | <u>60.07</u> | **76.09** | **86.88** | 71.18 (-1.39%) |
+ | Qwen2.5-7B-Instruct | SFT | 72.45 | <u>77.50</u> | 63.33 | 74.23 | 58.76 | 75.92 | 84.31 | 72.36 (-0.21%) |
+ | Qwen2.5-7B-Instruct | Ours | 70.06 | **79.20** | <u>65.00</u> | **77.40** | **64.45** | 75.32 | 82.67 | **73.44** (+0.85%) |
+ | LLaMA3.1-8B-Instruct | I/O | <u>77.63</u> | 75.20 | 56.99 | 69.11 | 46.92 | 53.52 | 71.52 | 67.01 |
+ | LLaMA3.1-8B-Instruct | CoT | 60.44 | 65.50 | 47.66 | 56.54 | 32.34 | 37.36 | 58.48 | 54.53 (-12.48%) |
+ | LLaMA3.1-8B-Instruct | SDC | **80.22** | 71.00 | 58.33 | 68.73 | 38.36 | 48.92 | 72.89 | 65.24 (-1.77%) |
+ | LLaMA3.1-8B-Instruct | SFT | 77.26 | 75.80 | 54.00 | 65.24 | 40.16 | 59.56 | 65.30 | 64.92 (-2.09%) |
+ | LLaMA3.1-8B-Instruct | Ours | 13.49 | 4.6 | 1.33 | 2.71 | 7.14 | 1.08 | 0.51 | 4.06 (-62.95%) |
+ | Ministral-8B-Instruct | I/O | 59.51 | 76.20 | 62.33 | 70.03 | 54.54 | 73.49 | 84.00 | 68.58 |
+ | Ministral-8B-Instruct | CoT | 48.79 | 61.90 | 49.66 | 61.31 | 39.17 | 61.75 | 79.73 | 57.47 (-11.11%) |
+ | Ministral-8B-Instruct | SDC | 58.59 | 63.60 | 56.99 | 68.32 | 48.06 | 69.37 | 84.08 | 64.14 (-4.43%) |
+ | Ministral-8B-Instruct | SFT | 68.57 | 66.30 | 48.66 | 67.20 | 37.26 | 54.37 | 76.62 | 59.85 (-8.72%) |
+ | Ministral-8B-Instruct | Ours | 72.64 | 72.6 | 59.33 | 70.45 | 54.35 | <u>76.08</u> | 75.33 | 68.68 (+0.10%) |
+ | DeepSeek-Qwen7B | I/O† | 60.81 | 72.39 | 57.99 | 66.86 | 59.59 | 62.80 | 79.64 | 65.73 |
+ | DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
  | DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

+ **Table 2** Performance on ComplexBench (Qwen2.5-7B-Instruct). Best/2nd best are marked **bold**/<u>underlined</u>. OD, SC, CNFR, FC, and SR stand for Oracle Decomposition, Self-Consistency, Conifer, FollowComplex, and Self-Refine.

+ | Category | ND | I/O | OD | SC | CNFR | FC | SR | Ours |
+ |------------------|------|--------|--------|--------|--------|--------|--------|---------|
+ | And | 1 | <u>85.85</u> | 84.27 | 84.03 | 75.10 | 84.77 | 85.66 | **86.57** |
+ | **Chain** | | | | | | | | |
+ | | 1 | 72.18 | <u>74.68</u> | 73.54 | 60.95 | 66.27 | **75.25** | 73.96 |
+ | | 2 | 70.56 | 72.70 | 69.63 | 64.43 | 70.66 | <u>73.07</u> | **76.88** |
+ | *Avg.* | - | 70.96 | 73.18 | 70.57 | 63.59 | 69.60 | <u>73.59</u> | **76.18** |
+ | **Selection** | | | | | | | | |
+ | | 1 | **77.25** | <u>76.61</u> | 72.08 | 60.52 | 71.67 | 69.61 | 73.39 |
+ | | 2 | 65.61 | <u>71.83</u> | 68.23 | 53.25 | 61.96 | 64.34 | **72.92** |
+ | | 3 | <u>63.39</u> | **68.45** | 56.13 | 46.04 | 51.70 | 58.67 | 60.75 |
+ | *Avg.* | - | 65.67 | **70.49** | 65.83 | 51.92 | 60.92 | 62.69 | <u>69.16</u> |
+ | **Selection & Chain** | | | | | | | | |
+ | | 2 | <u>65.64</u> | **65.94** | 60.81 | 47.33 | 61.07 | 52.01 | 61.06 |
+ | | 3 | 59.70 | **65.77** | 64.08 | 48.53 | 57.65 | 60.41 | <u>65.00</u> |
+ | *Avg.* | - | 62.68 | **65.85** | 62.44 | 47.93 | 59.36 | 56.20 | <u>63.03</u> |
+ | **Overall** | - | 74.47 | <u>76.26</u> | 73.76 | 63.51 | 71.97 | 74.00 | **77.40** |

+ ## Acknowledgement
+
+ In this project, we follow the SimpleRL and OpenRLHF frameworks to prepare our codebase. We acknowledge their great work in open-sourcing implementations of reinforcement learning algorithms.
+ * [SimpleRL](https://github.com/hkust-nlp/simpleRL-reason/)
+ * [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF)
+
+ We would also like to express our gratitude to the research community that organizes the existing benchmarks for validating LLMs on solving complex instructions.
+
+ ## Citation
+
+ If our work has been helpful to you, please consider citing it. Your citation serves as encouragement for our research.
+ ```bibtex
  @article{qin2025incentivizingreasoningadvancedinstructionfollowing,
+ title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
  author={Yulei Qin and Gang Li and Zongyi Li and Zihan Xu and Yuchen Shi and Zhekai Lin and Xiao Cui and Ke Li and Xing Sun},
  year={2025},
  eprint={2506.01413},