PCL-Reasoner committed
Commit ab89f30 · verified · 1 Parent(s): 39d1d63

Update README.md

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -1,10 +1,9 @@
 # **PCL-Reasoner-V1 Model**
 
 ## Model Overview
-We release **PCL-Reasoner-V1**, a model trained based on **Qwen2.5-32B-Base** and undergoes high-performance supervised fine-tuning based on the **MindSpore framework** and **Ascend hardware**. After fine-tuning, the model demonstrates significant improvements in mathematical reasoning capabilities. PCL-Reasoner-V1 achieves 85.7% and 84.2 respectively on AIME 24 and AIME 25, which position PCL-Reasoner-V1 among the top-tier models in the 32B parameter class.
+We release PCL-Reasoner-V1, a model built on Qwen2.5-32B-Base and refined through high-performance supervised fine-tuning with the MindSpore framework on Ascend hardware. After fine-tuning, the model shows significant improvements in mathematical reasoning, scoring 85.7% on AIME 24 and 84.2% on AIME 25, which places it among the top-tier models in the 32B parameter class.
 
-To promote technical collaboration and application, we have fully open-sourced model weights, dataset and training code
-PCL-Reasoner-V1 not only represents a leading 32B mathematical reasoning model but also provides developers with valuable expertise in domain-specific supervised fine-tuning and post-training solutions. Follow the tutorial below to deploy and explore advanced post-training methodologies!
+We have fully open-sourced the model weights, dataset, and training code. Follow the tutorial below to deploy the model and explore post-training!
 
 ![eval_results](images/README/eval_results.png)
 