---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker-7B
  results: []
datasets:
- open-thoughts/open-thoughts-114k
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/OpenThinker-7B-GGUF
This is a quantized version of [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B), created using llama.cpp.

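To try the quantized checkpoint quickly, here is a minimal sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder, so substitute whichever quantization level you download from this repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a quant file from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenThinker-7B.Q4_K_M.gguf",  # placeholder filename; use the quant you downloaded
    n_ctx=4096,        # context window; long reasoning traces may need more
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many positive divisors does 360 have?"}],
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```
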
# Original Model Card

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

# OpenThinker-7B

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k).

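For reference, a minimal usage sketch for the original, unquantized checkpoint with the Transformers library (the versions listed under framework versions below); it assumes a GPU with enough memory for a 7B model in bfloat16 and the `accelerate` package for automatic device placement.

```python
# Minimal sketch: load the unquantized OpenThinker-7B with Transformers.
# Assumes a GPU with enough memory for a 7B model in bfloat16 and `accelerate` installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Prove that the product of two odd integers is odd."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
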
The dataset was derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts).
More information about the dataset can be found on the dataset card at [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).

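To inspect the training data directly, it can be pulled from the Hub with the `datasets` library; a brief sketch, assuming the default `train` split:

```python
# Sketch: download OpenThoughts-114k and look at one distilled example.
# Assumes the `datasets` library is installed and the split is named "train".
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # one distilled reasoning example
```
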
This model improves upon the [Bespoke-Stratos-7B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-7B), which used 17k examples ([Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)).
The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

|                             | AIME24 | MATH500 | GPQA-Diamond | LCBv2 Easy | LCBv2 Medium | LCBv2 Hard | LCBv2 All |
| --------------------------- | ------ | ------- | ------------ | ---------- | ------------ | ---------- | --------- |
| OpenThinker-7B              | 31.3   | 83.0    | 42.4         | 75.3       | 28.6         | 6.5        | 39.9      |
| Bespoke-Stratos-7B          | 22.7   | 79.6    | 38.9         | 71.4       | 25.2         | 0.8        | 35.8      |
| DeepSeek-R1-Distill-Qwen-7B | 60.0   | 88.2    | 46.9         | 79.7       | 45.1         | 14.6       | 50.1      |
| gpt-4o-0513                 | 8.7    | 75.8    | 46.5         | 87.4       | 42.7         | 8.9        | 50.5      |
| o1-mini                     | 64.0   | 85.6    | 60.0         | 92.8       | 74.7         | 39.8       | 72.8      |

We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.

|                             | Open Weights | Open Data | Open Code |
| --------------------------- | ------------ | --------- | --------- |
| OpenThinker-7B              | βœ… | [βœ…](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) | [βœ…](https://github.com/open-thoughts/open-thoughts) |
| Bespoke-Stratos-7B          | βœ… | [βœ…](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | [βœ…](https://github.com/bespokelabsai/curator/tree/main/examples/bespoke-stratos-data-generation) |
| DeepSeek-R1-Distill-Qwen-7B | βœ… | ❌ | ❌ |
| gpt-4o-0513                 | ❌ | ❌ | ❌ |
| o1-mini                     | ❌ | ❌ | ❌ |

## Intended uses & limitations

Apache 2.0 License

## Training procedure

We used four 8xH100 nodes to train the model for 20 hours.

### Training hyperparameters

The following hyperparameters were used during training (the total train batch size is sanity-checked in the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0

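As noted above, the total train batch size follows directly from the per-device batch size, the gradient accumulation steps, and the number of devices:

```python
# Total train batch size = per-device batch size Γ— gradient accumulation steps Γ— number of devices
per_device_train_batch_size = 1
gradient_accumulation_steps = 3
num_devices = 32

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 96, matching the value reported above
```
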
### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More information can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Citation

```
@misc{openthoughts,
  author = {Team, OpenThoughts},
  month = jan,
  title = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year = {2025}
}
```

# Links
- πŸ“Š [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- πŸ’» [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- πŸ€– [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B) - this model.
- πŸ“Š [Bespoke-Stratos Blog Post](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation)
- 🧠 [Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)
- πŸ€– [Bespoke-Stratos-32B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-32B)
- πŸ€– [Bespoke-Stratos-7B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-7B)