---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- question-answering
- text-generation
pretty_name: CMPhysBench
tags:
- Condensed Matter Physics
- physics
- benchmark
---
# CMPhysBench: A Benchmark for Evaluating Large Language Models in Condensed Matter Physics
We introduce CMPhysBench, a novel benchmark designed to assess the proficiency of Large Language Models (LLMs) in Condensed Matter Physics. CMPhysBench is composed of more than 520 meticulously curated, graduate-level questions covering both representative subfields and foundational theoretical frameworks of condensed matter physics, such as magnetism, superconductivity, and strongly correlated systems. To ensure a deep understanding of the problem-solving process, we focus exclusively on calculation problems, requiring LLMs to independently generate comprehensive solutions. Leveraging tree-based representations of expressions, we introduce the Scalable Expression Edit Distance (SEED) score, which provides fine-grained (non-binary) partial credit and yields a more accurate assessment of the similarity between a prediction and the ground truth. Our results show that even the best model, Grok-4, reaches only an average SEED score of 36 and 28% accuracy on CMPhysBench, underscoring a significant capability gap in this practical, frontier domain relative to traditional physics.
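A typical way to load a Hugging Face dataset like this one is shown below; note that the repository id and split name in this sketch are placeholder assumptions, so substitute the actual values for this dataset.

```python
# Minimal loading sketch. The repository id and split name are
# hypothetical placeholders; replace them with the dataset's
# actual Hugging Face id and split.
from datasets import load_dataset

ds = load_dataset("CMPhysBench/CMPhysBench", split="train")  # hypothetical id
print(len(ds))  # expected on the order of 520 questions
print(ds[0])    # inspect one graduate-level calculation problem
```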
## Acknowledgement
CMPhysBench was inspired by previous benchmark datasets, including PHYBench, PHYSICS, GPQA, and OlympiadBench. The Scalable Expression Edit Distance (SEED) builds on the Expression Edit Distance (EED) metric from PHYBench, which introduced edit distance to the evaluation of symbolic reasoning in physics. We extend and modify this idea with the SEED score, which supports more diverse answer types and provides fine-grained, more robust evaluation tailored to condensed matter physics (a minimal sketch of the underlying idea appears below).
We sincerely thank the PHYBench team for their open-source contribution. Their code is released under the MIT license and is available at https://github.com/phybench-official/phybench.
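For intuition, here is a minimal Python sketch (not the authors' implementation) of the tree-edit-distance idea behind EED/SEED: parse both expressions into trees, compute a Zhang-Shasha edit distance between them, and convert the distance into partial credit. The `sympy` and `zss` packages are real, but the `to_tree` and `seed_like_score` helpers and the normalization by the larger tree size are illustrative assumptions; see the paper for the exact SEED definition.

```python
# A minimal sketch of tree-based expression edit distance,
# the idea underlying the EED/SEED scores. Not the official code.
# Requires: pip install sympy zss
import sympy
from zss import Node, simple_distance


def to_tree(expr: sympy.Expr) -> Node:
    """Convert a SymPy expression into a zss tree.

    Leaves (symbols, numbers) are labeled by their string form;
    internal nodes by the operator name (Add, Mul, Pow, ...).
    """
    if not expr.args:  # atom: Symbol, Integer, Rational, ...
        return Node(str(expr))
    node = Node(expr.func.__name__)
    for arg in expr.args:
        node.addkid(to_tree(arg))
    return node


def tree_size(node: Node) -> int:
    """Count the nodes in a zss tree."""
    return 1 + sum(tree_size(c) for c in Node.get_children(node))


def seed_like_score(pred: str, truth: str) -> float:
    """Partial-credit similarity in [0, 100].

    The normalization below (1 - distance / larger tree size) is a
    placeholder assumption, not the paper's exact SEED formula.
    """
    t_pred = to_tree(sympy.sympify(pred))
    t_true = to_tree(sympy.sympify(truth))
    dist = simple_distance(t_pred, t_true)  # Zhang-Shasha edit distance
    denom = max(tree_size(t_pred), tree_size(t_true))
    return 100.0 * max(0.0, 1.0 - dist / denom)


# A nearly correct prediction earns partial credit instead of zero:
print(seed_like_score("mu_B * g * S", "mu_B * g * (S + 1)"))
```

In this example the prediction differs from the ground truth by a small subtree (the missing `+ 1`), so it receives high partial credit rather than the zero an exact-match metric would assign.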
## Citations
```bibtex
@misc{wang2025cmphysbench,
  title={{CMPhysBench}: A Benchmark for Evaluating Large Language Models in Condensed Matter Physics},
  author={Weida Wang and Dongchen Huang and Jiatong Li and Tengchao Yang and Ziyang Zheng and Di Zhang and Dong Han and Benteng Chen and Binzhao Luo and Zhiyu Liu and Kunling Liu and Zhiyuan Gao and Shiqi Geng and Wei Ma and Jiaming Su and Xin Li and Shuchen Pu and Yuhan Shui and Qianjia Cheng and Zhihao Dou and Dongfei Cui and Changyong He and Jin Zeng and Zeke Xie and Mao Su and Dongzhan Zhou and Yuqiang Li and Wanli Ouyang and Yunqi Cai and Xi Dai and Shufei Zhang and Lei Bai and Jinguang Cheng and Zhong Fang and Hongming Weng},
  year={2025},
  eprint={2508.18124},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18124},
}

@inproceedings{rein2024gpqa,
  title={{GPQA}: A graduate-level {Google}-proof {Q\&A} benchmark},
  author={Rein, David and Hou, Betty Li and Stickland, Asa Cooper and Petty, Jackson and Pang, Richard Yuanzhe and Dirani, Julien and Michael, Julian and Bowman, Samuel R},
  booktitle={First Conference on Language Modeling},
  year={2024}
}

@article{zheng2025scaling,
  title={Scaling physical reasoning with the {PHYSICS} dataset},
  author={Zheng, Shenghe and Cheng, Qianjia and Yao, Junchi and Wu, Mengsong and He, Haonan and Ding, Ning and Cheng, Yu and Hu, Shuyue and Bai, Lei and Zhou, Dongzhan and others},
  journal={arXiv preprint arXiv:2506.00022},
  year={2025}
}

@article{he2024olympiadbench,
  title={{OlympiadBench}: A challenging benchmark for promoting {AGI} with olympiad-level bilingual multimodal scientific problems},
  author={He, Chaoqun and Luo, Renjie and Bai, Yuzhuo and Hu, Shengding and Thai, Zhen Leng and Shen, Junhao and Hu, Jinyi and Han, Xu and Huang, Yujie and Zhang, Yuxiang and others},
  journal={arXiv preprint arXiv:2402.14008},
  year={2024}
}

@article{qiu2025phybench,
  title={{PHYBench}: Holistic evaluation of physical perception and reasoning in large language models},
  author={Qiu, Shi and Guo, Shaoyang and Song, Zhuo-Yang and Sun, Yunbo and Cai, Zeyu and Wei, Jiashen and Luo, Tianyu and Yin, Yixuan and Zhang, Haoxu and Hu, Yi and others},
  journal={arXiv preprint arXiv:2504.16074},
  year={2025}
}
```