nielsr (HF Staff) committed
Commit ba89773 · verified · 1 Parent(s): 998c6bf

Improve model card with full abstract, usage example, and metadata


This PR significantly improves the model card for `yolay/RAIF-DeepSeek-Qwen-1.5B` by:

* **Enhancing Metadata**:
  * Adding `license: apache-2.0` based on the project's GitHub repository.
  * Including `language: en` to accurately reflect the model's language (a sketch of the resulting front matter follows this list).
* **Expanding Content**:
  * Adding the full research paper abstract for a more comprehensive overview.
  * Including a detailed `transformers` usage example with a conversational prompt, making it easier for users to interact with the model. The example now correctly uses `tokenizer.apply_chat_template` and generation parameters from the model's configuration (the full snippet appears in the diff below).
  * Incorporating Table 2 from the paper/GitHub repository, which provides additional performance metrics.
  * Restructuring the model card with clearer markdown headers for improved readability and navigation.
  * Adding "Acknowledgement" and "License" sections from the GitHub README.

These changes aim to provide researchers and users with more complete and accessible information about the model and its proper usage.

Files changed (1)
README.md (+90 -7)

README.md CHANGED
@@ -7,15 +7,62 @@ library_name: transformers
metrics:
- accuracy
pipeline_tag: text-generation
+ license: apache-2.0
+ language: en
---

- This model belongs to the official implementation of the paper "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models".
+ # Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models

- You can find the paper at https://huggingface.co/papers/2506.01413.
+ This model is the official implementation of the paper [Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models](https://huggingface.co/papers/2506.01413).

- Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions.
+ You can find the official code and more details on the [GitHub repository](https://github.com/yuleiqin/RAIF).

- To this end, we propose a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to a 8B LLM.
+ ## Abstract
+
+ Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions. To this end, we propose RAIF, a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to a 8B LLM. Evaluation on OOD constraints also confirms the generalizability of our RAIF.
+
+ ## Usage
+
+ This model can be loaded and used directly with the Hugging Face `transformers` library.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ model_name = "yolay/RAIF-DeepSeek-Qwen-1.5B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+
+ # Example for instruction following
+ messages = [
+     {"role": "user", "content": "Explain the concept of quantum entanglement in simple terms, using an analogy from everyday life."}
+ ]
+
+ # Apply chat template for proper input formatting
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt"
+ ).to(model.device)
+
+ # Generate response using parameters from generation_config.json
+ outputs = model.generate(
+     input_ids,
+     max_new_tokens=256,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.9
+ )
+
+ generated_text = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(generated_text)
+ ```
+
+ ## Model Details

The model DeepSeek-Qwen2.5-1.5B is our optimized model for its advanced instruction-following capabilities under complex instructions. It corresponds to the **DeepSeek-Qwen1.5B (Ours)** in the Table 1.

@@ -53,12 +100,48 @@ The model DeepSeek-Qwen2.5-1.5B is our optimized model for its advanced instruct
| DeepSeek-Qwen7B | SFT | 67.09 | 69.10 | 58.66 | 58.42 | 55.60 | 65.96 | 79.15 | 64.85 (-0.88%) |
| DeepSeek-Qwen7B | Ours | 71.35 | 71.40 | 58.67 | 62.04 | 59.65 | 59.38 | 82.00 | 66.35 (+0.62%) |

- Github: https://github.com/yuleiqin/RAIF
+ ---
+
+ **Table 2** Performance on ComplexBench (Qwen2.5-7B-Instruct). Best/2nd best are marked **bold**/<u>underlined</u>. OD, SC, CNFR, FC, and SR stand for Oracle Decomposition, Self-Consistency, Conifer, FollowComplex, and Self-Refine.
+
+ | Category | ND | I/O | OD | SC | CNFR | FC | SR | Ours |
+ |------------------|------|--------|--------|--------|--------|--------|--------|---------|
+ | And | 1 | __85.85__ | 84.27 | 84.03 | 75.10 | 84.77 | 85.66 | **86.57** |
+ | **Chain** | | | | | | | | |
+ | | 1 | 72.18| __74.68__ | 73.54 | 60.95 | 66.27 | **75.25** | 73.96 |
+ | | 2 | 70.56| 72.70 | 69.63 | 64.43 | 70.66 | __73.07__ | **76.88** |
+ | *Avg.* | - | 70.96 | 73.18 | 70.57 | 63.59 | 69.60 | __73.59__ | **76.18** |
+ | **Selection** | | | | | | | | |
+ | | 1 | **77.25** | __76.61__ | 72.08 | 60.52 | 71.67 | 69.61 | 73.39 |
+ | | 2 | 65.61| __71.83__ | 68.23 | 53.25 | 61.96 | 64.34 | **72.92** |
+ | | 3 | __63.39__ | **68.45** | 56.13 | 46.04 | 51.70 | 58.67 | 60.75 |
+ | *Avg.* | - | 65.67 | **70.49** | 65.83 | 51.92 | 60.92 | 62.69 | __69.16__ |
+ | **Selection & Chain** | | | | | | | | |
+ | | 2 | __65.64__ | **65.94** | 60.81 | 47.33 | 61.07 | 52.01 | 61.06 |
+ | | 3 | 59.70| **65.77** | 64.08 | 48.53 | 57.65 | 60.41 | __65.00__ |
+ | *Avg.* | - | 62.68 | **65.85** | 62.44 | 47.93 | 59.36 | 56.20 | __63.03__ |
+ | **Overall** | - | 74.47 | __76.26__ | 73.76 | 63.51 | 71.97 | 74.00 | **77.40** |
+
+ ---
+
+ ## Acknowledgement🫡
+
+ In this project, we follow the SimpleRL and the OpenRLHF framework to prepare the codebase. We acknowledge their great work for open-sourcing the implementations of reinforcement learning algorithms.
+ * [[SimpleRL](https://github.com/hkust-nlp/simpleRL-reason/)]
+ * [[OpenRLHF](https://github.com/OpenRLHF/OpenRLHF)]
+
+ We also would like to express gratitude to the research community that organize the existing benchmarks for validating the LLMs of solving complex instructions.
+
+ ## License🪪
+
+ Please refer to `License_RAIF` for the license of this project.
+
+ ## Citation🎓

- 🎓 If you find this work useful, please consider the following citation:
+ If you find this work useful, please consider the following citation:
```
@article{qin2025incentivizingreasoningadvancedinstructionfollowing,
- title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
+ title={Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models},
author={Yulei Qin and Gang Li and Zongyi Li and Zihan Xu and Yuchen Shi and Zhekai Lin and Xiao Cui and Ke Li and Xing Sun},
year={2025},
eprint={2506.01413},