tgriggs committed on
Commit fdf9e1e · 2 Parent(s): 66beced fc9e1a0

Merge branch 'main' of https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash

Files changed (1):
  1. README.md +70 -0
README.md ADDED
@@ -0,0 +1,70 @@
---
library_name: transformers
datasets:
- BAAI/TACO
- tasksource/PRM800K
language:
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
- NovaSky-AI/Sky-T1-32B-Preview
license: apache-2.0
---

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is a 32B reasoning model preference-optimized on top of Sky-T1-32B-Preview to significantly reduce generation lengths while maintaining accuracy. Its performance is on par with the o1-preview model in both math and coding, while generation lengths are reduced by up to 57% relative to Sky-T1-32B-Preview.
Please see our [blog post](https://novasky-ai.github.io/posts/reduce-overthinking/) for more details.

- **Developed by:** NovaSky Team from Sky Computing Lab at UC Berkeley.

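Since the card lists `library_name: transformers`, here is a minimal usage sketch for loading the model with Hugging Face Transformers; the prompt and `max_new_tokens` value are illustrative assumptions, not recommended settings.

```python
# Minimal usage sketch: load Sky-T1-32B-Flash with Hugging Face Transformers.
# The prompt and max_new_tokens below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
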
## Training Details

### Training Data

10K preference pairs in the math and coding domains, generated by Sky-T1-32B-Preview.

### Training Procedure
We perform Simple Preference Optimization (SimPO) with a batch size of 96, a learning rate of 5e-7, a gamma of 0.3, and a beta of 2.0.
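
For reference, here is a minimal sketch of a SimPO-style objective using the beta and gamma values above; the tensor layout (precomputed per-response summed log-probabilities and token counts) is our assumption, not the exact training code.

```python
# Minimal sketch of a SimPO-style objective with beta=2.0 and gamma=0.3.
# Assumes summed log-probabilities and token counts per response are precomputed;
# this is an illustrative reconstruction, not the exact training code.
import torch
import torch.nn.functional as F

def simpo_loss(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor,
               len_chosen: torch.Tensor, len_rejected: torch.Tensor,
               beta: float = 2.0, gamma: float = 0.3) -> torch.Tensor:
    # Length-normalized implicit rewards (SimPO uses no reference model).
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    # Bradley-Terry-style loss with a target reward margin gamma.
    return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()
```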

#### Speeds

We use Llama-Factory for training. On 8xH100, the SimPO training takes ~2.5 hours with DeepSpeed ZeRO-3 Offload.

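The exact DeepSpeed configuration is not included here; the sketch below shows a typical ZeRO-3 offload setup of the kind referenced, written as a Python dict for readability, with all field values being illustrative assumptions.

```python
# Illustrative DeepSpeed ZeRO-3 offload configuration (not the exact file used);
# in practice this would be saved as JSON and passed to the training launcher.
import json

ds_zero3_offload = {
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, and optimizer states
        "offload_optimizer": {"device": "cpu"},  # move optimizer states to CPU memory
        "offload_param": {"device": "cpu"},      # move parameters to CPU memory
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": True},
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
}

with open("ds_z3_offload.json", "w") as f:
    json.dump(ds_zero3_offload, f, indent=2)
```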

## Evaluation
| Benchmark | Metric | Sky-T1-32B-Preview | Sky-T1-32B-Flash | Qwen2.5-32B-Instruct | QwQ-32B-Base | DeepSeek-R1-Distill-Qwen-32B |
|--------------|---------|:------------------:|:----------------:|:--------------------:|:-------------:|:----------------------------:|
| Math500 | Acc | 88.6 | 88.6 | 76.2 | 89.2 | 90.8 |
| | Avg Len | 2124 | 1417 (-33%) | 522 | 2089 | 2010 |
| AIME24 | Acc | 43.3 | 43.3 | 16.7 | 50 | 66.7 |
| | Avg Len | 6881 | 4365 (-37%) | 970 | 7379 | 9173 |
| LCB Easy | Acc | 87.4 | 89 | 84.6 | 90.7 | 91.2 |
| | Avg Len | 3415 | 2265 (-34%) | 414 | 3255 | 2775 |
| LCB Medium | Acc | 56.8 | 56.3 | 40.8 | 56.3 | 76.7 |
| | Avg Len | 8263 | 4389 (-47%) | 535 | 6742 | 6324 |
| LCB Hard | Acc | 17.9 | 17.9 | 9.8 | 17.1 | 38.2 |
| | Avg Len | 14564 | 6199 (-57%) | 618 | 10450 | 10448 |
| MMLU | Acc | 82.4 | 81.7 | 80.1 | 85.2 | 82.1 |
| | Avg Len | 1087 | 799 (-26%) | 312 | 1041 | 774 |
| GPQA Diamond | Acc | 56.8 | 56.6 | 45.5 | 52.5 | 62.6 |
| | Avg Len | 3503 | 2148 (-39%) | 600 | 3302 | 5108 |

## Acknowledgement
We would like to thank [Lambda Lab](https://lambdalabs.com/service/gpu-cloud?srsltid=AfmBOop5FnmEFTkavVtdZDsLWvHWNg6peXtat-OXJ9MW5GMNsk756PE5) and [AnyScale](https://www.anyscale.com/) for the compute resources.

## Citation
Please consider citing our blog post if you find it useful for your research. Thank you!

```bibtex
@misc{reduce_overthinking_2025,
  author = {NovaSky Team},
  title = {Think Less, Achieve More: Cut Reasoning Costs by 50% Without Sacrificing Accuracy},
  howpublished = {https://novasky-ai.github.io/posts/reduce-overthinking},
  note = {Accessed: 2025-01-23},
  year = {2025}
}
```