Update README.md
---
datasets:
- Quinn777/AMATH-SFT
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
---

# Model Card for AtomThink-LlamaV

This model is post-trained from Llama-3.2-11B-Vision-Instruct with the AtomThink framework and can be used to solve complex multimodal mathematical problems.
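
Below is a minimal inference sketch, assuming the checkpoint follows the standard `transformers` interface for Llama-3.2-Vision models (`MllamaForConditionalGeneration` plus `AutoProcessor`). The repository id, image URL, and prompt are placeholders, not names confirmed by this card; replace them with the actual checkpoint location and your own input.

```python
# Minimal inference sketch (assumes the standard transformers Mllama API;
# the repository id, image URL, and prompt below are placeholders).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "path/to/AtomThink-LlamaV"  # placeholder: set to the actual repo id or local path

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image and question for a multimodal math problem.
image = Image.open(requests.get("https://example.com/problem.png", stream=True).raw)
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Solve the problem in the image step by step."},
    ]}
]

# Build the chat prompt and run generation.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))
```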

# Citation

If you use this model in your research, please cite:

```bibtex
@article{xiang2025can,
  title={Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models?},
  author={Xiang, Kun and Liu, Zhili and Jiang, Zihao and Nie, Yunshuang and Cai, Kaixin and Yin, Yiyang and Huang, Runhui and Fan, Haoxiang and Li, Hanhui and Huang, Weiran and others},
  journal={arXiv preprint arXiv:2503.06252},
  year={2025}
}

@article{xiang2024atomthink,
  title={AtomThink: A Slow Thinking Framework for Multimodal Mathematical Reasoning},
  author={Xiang, Kun and Liu, Zhili and Jiang, Zihao and Nie, Yunshuang and Huang, Runhui and Fan, Haoxiang and Li, Hanhui and Huang, Weiran and Zeng, Yihan and Han, Jianhua and others},
  journal={arXiv preprint arXiv:2411.11930},
  year={2024}
}
```

# License

The checkpoint is released under the Apache 2.0 license. Please ensure proper attribution when using it.