nielsr (HF Staff) committed
Commit 9a1bfba · verified · 1 Parent(s): 77a17fe

Improve model card with full abstract, additional metrics, and sample usage


Hi! This PR significantly improves the model card for `yolay/RAIF-DeepScaleR-1.5B` by adding:

- Comprehensive metadata tags including `license`, `language`, and `metrics` for better discoverability.
- The full abstract of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".
- A high-level overview of the RAIF method from the project's GitHub README.
- The detailed "Table 2" performance metrics on ComplexBench, which were previously missing.
- A "Sample Usage" section with Python code snippets using the `transformers` library, demonstrating both basic text generation and chat completion with the model's specific chat template.
- A link to the related Hugging Face collection.

These updates aim to provide users with a more complete understanding of the model, its capabilities, and how to use it effectively.

Files changed (1)
  1. README.md +100 -8
README.md CHANGED
@@ -1,19 +1,34 @@
---
datasets:
- yolay/RAIF-ComplexInstruction-DeepSeek
library_name: transformers
pipeline_tag: text-generation
- base_model:
- - agentica-org/DeepScaleR-1.5B-Preview
---

This model belongs to the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".

- Paper: [Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://huggingface.co/papers/2506.01413)

- Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions.

- To this end, we propose a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to a 8B LLM.

The model DeepScaleR-1.5B is our optimized model for its advanced instruction-following capabilities under complex instructions. It corresponds to the **DeepScaleR-1.5B (Ours)** in the Table 1.
@@ -51,9 +66,82 @@ The model DeepScaleR-1.5B is our optimized model for its advanced instruction-fo
| DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
| DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

- Code: https://github.com/yuleiqin/RAIF

- 🎓 If you find this work useful, please consider the following citation:
```
@article{qin2025incentivizingreasoningadvancedinstructionfollowing,
      title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
@@ -64,4 +152,8 @@ Code: https://github.com/yuleiqin/RAIF
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01413}
}
- ```
---
+ base_model:
+ - agentica-org/DeepScaleR-1.5B-Preview
datasets:
- yolay/RAIF-ComplexInstruction-DeepSeek
library_name: transformers
pipeline_tag: text-generation
+ license: cc-by-4.0
+ language:
+ - en
+ metrics:
+ - accuracy
---

This model belongs to the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".

+ **Paper**: [Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://huggingface.co/papers/2506.01413)
+ **Code**: [https://github.com/yuleiqin/RAIF](https://github.com/yuleiqin/RAIF)
+ **Hugging Face Collection**: [RAIF Models & Datasets](https://huggingface.co/collections/yolay/raif-682b16e5c0c2fa9b73811369)
+
+ ### Abstract
+
+ Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions. To this end, we propose RAIF, a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to an 8B LLM. Evaluation on OOD constraints also confirms the generalizability of our RAIF.
+
+ ### Overview
+
+ Our preliminary experiments confirm that the reasoning (e.g., triggered by CoT prompting) of fast-thinking LLMs (instructed models) is often shallow and superficial. Such reasoning only briefly repeats parts of the input request and fails to extract key components from complex instructions, which are often composed of various sub-instructions, constraints, and rules. On the contrary, existing slow-thinking LLMs (reasoning models) demonstrate superior performance: their deep, organized reasoning truly helps the analysis of complex instructions and yields decomposed action steps toward the final answer. Consequently, it is important to incentivize the authentic reasoning of LLMs to solve complex instructions.
+
+ In this project, we present a reinforcement learning-based method for cultivating the deep reasoning of LLMs.
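For intuition, the "verifiable rule-centric reward signals" mentioned above can be pictured as automatic checks over the constraints contained in an instruction. The sketch below is a hypothetical illustration only: the constraint helpers, their names, and the scoring are assumptions, not the released RAIF reward implementation.

```python
# Toy sketch of a rule-verifiable reward for instruction following.
# The constraint schema and scoring here are illustrative assumptions, not RAIF's actual reward.
import re
from typing import Callable, List

def max_words(limit: int) -> Callable[[str], bool]:
    # Constraint: the response must contain at most `limit` words.
    return lambda response: len(response.split()) <= limit

def must_contain(keyword: str) -> Callable[[str], bool]:
    # Constraint: the response must mention a required keyword.
    return lambda response: keyword.lower() in response.lower()

def bullet_count(n: int) -> Callable[[str], bool]:
    # Constraint: the response must contain exactly `n` bullet points.
    return lambda response: len(re.findall(r"^\s*[-*] ", response, flags=re.MULTILINE)) == n

def rule_reward(response: str, rules: List[Callable[[str], bool]]) -> float:
    """Return the fraction of verifiable constraints satisfied by the response (0.0-1.0)."""
    if not rules:
        return 0.0
    return sum(rule(response) for rule in rules) / len(rules)

# Example complex instruction: "answer in at most 60 words, mention photosynthesis, use exactly 3 bullets".
rules = [max_words(60), must_contain("photosynthesis"), bullet_count(3)]
response = "- Plants catch sunlight.\n- Photosynthesis turns light, water, and air into sugar.\n- That sugar is the plant's food."
print(rule_reward(response, rules))  # 1.0 when every constraint is satisfied
```

RAIF's actual rewards are derived from the decomposition of complex instructions under existing taxonomies (see the abstract above); this toy version only conveys the general shape, i.e., a score proportional to how many constraints can be verified as satisfied.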

+ ### Model Details and Performance
+
The model DeepScaleR-1.5B is our optimized model for its advanced instruction-following capabilities under complex instructions. It corresponds to the **DeepScaleR-1.5B (Ours)** in Table 1.

| DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
| DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

+ ---
+
+ **Table 2** Performance on ComplexBench (Qwen2.5-7B-Instruct). Best/2nd-best results are marked **bold**/<u>underlined</u>. OD, SC, CNFR, FC, and SR stand for Oracle Decomposition, Self-Consistency, Conifer, FollowComplex, and Self-Refine.
+
+ | Category | ND | I/O | OD | SC | CNFR | FC | SR | Ours |
+ |------------------|------|--------|--------|--------|--------|--------|--------|---------|
+ | And | 1 | __85.85__ | 84.27 | 84.03 | 75.10 | 84.77 | 85.66 | **86.57** |
+ | **Chain** | | | | | | | | |
+ | | 1 | 72.18 | __74.68__ | 73.54 | 60.95 | 66.27 | **75.25** | 73.96 |
+ | | 2 | 70.56 | 72.70 | 69.63 | 64.43 | 70.66 | __73.07__ | **76.88** |
+ | *Avg.* | - | 70.96 | 73.18 | 70.57 | 63.59 | 69.60 | __73.59__ | **76.18** |
+ | **Selection** | | | | | | | | |
+ | | 1 | **77.25** | __76.61__ | 72.08 | 60.52 | 71.67 | 69.61 | 73.39 |
+ | | 2 | 65.61 | __71.83__ | 68.23 | 53.25 | 61.96 | 64.34 | **72.92** |
+ | | 3 | __63.39__ | **68.45** | 56.13 | 46.04 | 51.70 | 58.67 | 60.75 |
+ | *Avg.* | - | 65.67 | **70.49** | 65.83 | 51.92 | 60.92 | 62.69 | __69.16__ |
+ | **Selection & Chain** | | | | | | | | |
+ | | 2 | __65.64__ | **65.94** | 60.81 | 47.33 | 61.06 | 52.01 | 61.06 |
+ | | 3 | 59.70 | **65.77** | 64.08 | 48.53 | 57.65 | 60.41 | __65.00__ |
+ | *Avg.* | - | 62.68 | **65.85** | 62.44 | 47.93 | 59.36 | 56.20 | __63.03__ |
+ | **Overall** | - | 74.47 | __76.26__ | 73.76 | 63.51 | 71.97 | 74.00 | **77.40** |
+
+ ---
+
+ ### Sample Usage
+
+ You can use this model with the `transformers` library for text generation.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_id = "yolay/RAIF-DeepScaleR-1.5B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
+
+ # Example for text generation (basic prompt)
+ input_text = "What is the capital of France?"
+ inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=50)
+ print("Basic Generation:")
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+
+ # Example for chat completion (using the model's chat template)
+ # The chat template is defined in the model's tokenizer configuration.
+ messages = [
+     {"role": "user", "content": "Explain the concept of photosynthesis to a 5-year-old."},
+     {"role": "assistant", "content": "Imagine a plant is like a tiny chef, and it wants to make its own food. It uses sunshine, water, and air to cook up yummy sugary food. This cooking process is called photosynthesis!"},
+     {"role": "user", "content": "That's cool! What does it use to 'cook'?"}
+ ]
+
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ chat_inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ chat_outputs = model.generate(**chat_inputs, max_new_tokens=200)
+ print("\nChat Completion:")
+ print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))
+ ```
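As an alternative to calling `generate` directly, the high-level `pipeline` API can serve the same chat-style requests. This is a minimal sketch assuming a recent `transformers` release whose text-generation pipeline accepts a list of chat messages; the prompt is only an example.

```python
# Minimal sketch using the high-level pipeline API.
# Assumes a recent transformers release whose text-generation pipeline accepts chat message lists.
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="yolay/RAIF-DeepScaleR-1.5B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "List three constraints that a complex instruction might impose on an answer."}
]
result = pipe(messages, max_new_tokens=200)
# The pipeline returns the full conversation; the last message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```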
+
+ ### Acknowledgement
+
+ In this project, we build our codebase on the SimpleRL and OpenRLHF frameworks. We acknowledge their great work in open-sourcing implementations of reinforcement learning algorithms.
+ * [SimpleRL](https://github.com/hkust-nlp/simpleRL-reason/)
+ * [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF)
+
+ We would also like to express our gratitude to the research community for organizing the benchmarks used to validate LLMs on solving complex instructions.
+
+ ### Citation
+
+ If you find this work useful, please consider the following citation:

```
@article{qin2025incentivizingreasoningadvancedinstructionfollowing,
      title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},

      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.01413}
}
+ ```
+
+ ### License
+
+ Please refer to `License_RAIF` for the license of this project.