Update README.md
## Model Overview
We release PCL-Reasoner-V1, a model built on Qwen2.5-32B-Base and post-trained with high-performance supervised fine-tuning on the MindSpore framework and Ascend hardware. After fine-tuning, the model shows significant gains in mathematical reasoning, scoring 85.7% on AIME 24 and 84.2% on AIME 25, which places PCL-Reasoner-V1 among the top-tier models in the 32B parameter class on these benchmarks.
We have fully open-sourced the model weights, dataset, and training code. Follow the tutorial below to deploy the model and explore post-training!

## Code
https://github.com/PCL-Reasoner/V1
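
As a quick way to try the released weights before working through the full tutorial, here is a minimal inference sketch using the Hugging Face `transformers` API. It is an illustrative assumption rather than the official recipe: the repo id below is a placeholder, and the official checkpoints may also be distributed as MindSpore weights, so substitute the actual path from the release links above.

```python
# Minimal inference sketch (not the official tutorial). Assumes the released
# weights are available in a Hugging Face-compatible format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PCL-Reasoner/PCL-Reasoner-V1"  # hypothetical repo id or local checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread the 32B model across available devices
)

prompt = "Solve: if x + 2x = 12, what is x? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```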