Commit f87d110 (verified) by aashish1904 · 1 parent: d90ba40

Upload README.md with huggingface_hub

Files changed (1): README.md (added, +128 lines)
---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets:
- open-thoughts/OpenThoughts3-1.2M
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker3-7B
  results: []
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/OpenThinker3-7B-GGUF
This is a quantized version of [open-thoughts/OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B) created using llama.cpp.
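The GGUF files in this repo are intended for llama.cpp-compatible runtimes. As a rough starting point, here is a minimal sketch using the llama-cpp-python bindings; the bindings, the specific quantization filename, and the generation settings are illustrative assumptions rather than an official recipe, so substitute the GGUF file you actually download from this repo.

```python
# A minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a GGUF file from this repo has been
# downloaded locally. The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenThinker3-7B.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=8192,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 27 * 43? Think step by step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Any other GGUF-aware runtime (the llama.cpp CLI, Ollama, LM Studio, and similar tools) should work equally well.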

# Original Model Card

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2506.04178" style="margin-right: 24px;">paper</a> |
  <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M" style="margin-right: 24px; margin-left: 24px;">dataset</a> |
  <a href="https://huggingface.co/open-thoughts/OpenThinker3-7B" style="margin-left: 24px;">model</a>
</p>

> [!NOTE]
> We have released a paper for OpenThoughts! See our paper [here](https://arxiv.org/abs/2506.04178).

# OpenThinker3-7B

State-of-the-art open-data 7B reasoning model. πŸš€

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
It represents a notable improvement over our previous models, [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) and [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B), and it outperforms several other strong reasoning 7B models such as [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1), despite being trained only with SFT, without any RL.

This time, we also released a paper! See our [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3) for more details. OpenThinker3-32B to follow! πŸ‘€
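For hands-on use of the original (non-GGUF) checkpoint, the sketch below shows plain transformers inference. It assumes transformers and accelerate are installed and that enough GPU memory is available for a 7B model; the prompt and generation settings are illustrative, not an official recipe from the model authors.

```python
# A minimal inference sketch with transformers; assumes a recent transformers
# release, accelerate for device_map="auto", and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker3-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```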
# Evaluation Results
The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
In the table below, we bold values in each column that are within 2 standard errors of the best (one reading of this criterion is sketched after the table).

| Model | Data | AIME24 | AIME25 | AMC23 | MATH500 | HMMT O2/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
| ----- | ---- | ------ | ------ | ----- | ------- | ---------- | --------------- | ------- | ---------- | ------ | -------- |
| [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) | βœ… | 30.7 | 22.0 | 72.5 | 82.8 | 15.7 | 26.1 | 11.1 | 14.9 | 38.6 | 45.3 |
| [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B) | βœ… | 60.7 | 38.7 | 89.8 | 87.6 | 24.7 | 40.6 | 22.8 | 26.6 | 47.0 | 65.1 |
| **[OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)** | βœ… | **69.0** | **53.3** | **93.5** | **90.0** | **42.7** | **51.7** | 31.0 | **32.2** | 53.7 | **72.4** |
| [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | ❌ | 51.3 | 38.0 | 92.0 | 88.0 | 25.0 | 34.5 | 19.9 | 21.1 | 33.2 | 50.4 |
| [OpenR1-Distill-7B](https://huggingface.co/open-r1/OpenR1-Distill-7B) | βœ… | 57.7 | 39.7 | 87.0 | 88.0 | 25.7 | 30.7 | 30.1 | 29.3 | **58.9** | 68.7 |
| [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) | βœ… | 62.0 | 48.0 | **94.0** | 89.4 | 26.7 | **50.9** | 30.9 | **32.9** | 52.9 | 70.7 |
| [AceReason-Nemotron-7B](https://huggingface.co/nvidia/AceReason-Nemotron-7B) | βœ… | **71.0** | 50.7 | **93.8** | 89.8 | 33.3 | 44.3 | **32.9** | **30.9** | 52.9 | 64.3 |

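The card does not spell out how the standard errors are computed. As one plausible reading, the sketch below treats each score as a mean over n graded attempts and bolds any model whose score is within two standard errors of the column's best; the binomial-style error estimate and the sample size are assumptions for illustration only.

```python
# Illustrative sketch of a "within 2 standard errors of the best" rule.
# Assumption: each accuracy is a mean of n binary outcomes, so its standard
# error is approximated as sqrt(p * (1 - p) / n). n below is hypothetical.
import math

def within_two_se(scores_pct: dict[str, float], n: int) -> set[str]:
    """Return the models whose score is within 2 SE of the best score."""
    best = max(scores_pct.values())
    p = best / 100.0
    se = math.sqrt(p * (1.0 - p) / n) * 100.0  # SE of the best score, in points
    return {m for m, s in scores_pct.items() if best - s <= 2.0 * se}

# Example with three AIME24 scores from the table.
aime24 = {"OpenThinker3-7B": 69.0, "AceReason-Nemotron-7B": 71.0, "OpenThinker2-7B": 60.7}
print(within_two_se(aime24, n=150))  # n=150 is hypothetical, e.g. 30 problems x 5 runs
```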
# Data

This model was trained on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.

The key to the model's strong performance is our comprehensive data pipeline and more than 1,000 ablation experiments.
This led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions.
Reasoning traces are generated with QwQ-32B.

See the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset page or our [paper](https://arxiv.org/abs/2506.04178) for additional information.
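To inspect the training data directly, a minimal sketch with the datasets library is shown below. It assumes the default configuration exposes a train split; streaming is used only to avoid downloading all ~1.2M examples, and the snippet prints the schema rather than assuming specific field names.

```python
# A minimal sketch, assuming the `datasets` library is installed.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example.keys())  # inspect the schema rather than assuming field names
    if i >= 2:             # peek at the first three examples only
        break
```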

# Intended uses & limitations

This model is released under the Apache 2.0 License.

## Training procedure

We used 512 A100 nodes to train the model for 48 hours.

## Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding optimizer and schedule setup follows the list):
- learning_rate: 8e-05
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- gradient_accumulation_steps: 1
- total_train_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- weight_decay: 0.0

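As a rough consistency check, here is a minimal sketch of how these settings map onto an optimizer and cosine warmup schedule in PyTorch/transformers. The stand-in module and step counts are hypothetical placeholders (they depend on dataset size and the 512-sequence global batch); only the hyperparameter values mirror the list above, and this is not the actual LLaMA-Factory training code.

```python
# A minimal sketch, not the actual training loop.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(16, 16)        # stand-in for the 7B model

num_epochs = 5
steps_per_epoch = 1_000                # hypothetical; ~len(dataset) / 512 in practice
total_steps = num_epochs * steps_per_epoch
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio: 0.1

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=8e-5,                           # learning_rate
    betas=(0.9, 0.999),
    eps=1e-08,
    weight_decay=0.0,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,    # cosine decay over 5 epochs
)
```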
## Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More information can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Links
- πŸ“ [OpenThoughts Paper](https://arxiv.org/abs/2506.04178)
- πŸ“Š [OpenThoughts3-1.2M and OpenThinker3-7B Blog Post](https://www.open-thoughts.ai/blog/ot3)
- πŸ’» [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts3-1.2M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M)
- πŸ€– [OpenThinker3-7B model](https://huggingface.co/open-thoughts/OpenThinker3-7B) - this model.

# Citation
```
@misc{guha2025openthoughtsdatarecipesreasoning,
      title={OpenThoughts: Data Recipes for Reasoning Models},
      author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
      year={2025},
      eprint={2506.04178},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2506.04178},
}
```