---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- reasoning
- physics
size_categories:
- n<1K
arxiv:
- 2502.15815
configs:
- config_name: default
  data_files:
  - split: public
    path: data/public-*
dataset_info:
  features:
  - name: problem_id
    dtype: string
  - name: domain
    dtype: string
  - name: difficulty_level
    dtype: string
  - name: problem
    dtype: string
  - name: solution
    dtype: string
  - name: answer
    dtype: string
  - name: code_answer_requirements
    dtype: string
  - name: reference_implementation
    dtype: string
  splits:
  - name: public
    num_bytes: 52743
    num_examples: 10
  download_size: 29845
  dataset_size: 52743
---
# TPBench – Theoretical Physics Benchmark for AI
TPBench is a curated dataset and evaluation suite designed to measure the reasoning capabilities of AI models in theoretical physics. Our test problems span multiple difficulty levels—from undergraduate to frontier research—and cover topics such as cosmology, high-energy theory, general relativity, and more. By providing a unified framework for problem-solving and auto-verifiable answers, TPBench aims to drive progress in AI-based research assistance for theoretical physics.
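The `public` split can be loaded with the 🤗 `datasets` library. The sketch below is illustrative; the repository id `ZhiqiGao/TPBench` is an assumption and should be replaced with the actual Hub path of this dataset if it differs.

```python
# Minimal sketch: load the public split of TPBench and inspect one problem.
# Assumption: the dataset is hosted under the repository id "ZhiqiGao/TPBench";
# substitute the actual Hub path of this dataset card if it differs.
from datasets import load_dataset

ds = load_dataset("ZhiqiGao/TPBench", split="public")
print(ds)  # 10 examples with the features listed in dataset_info above

example = ds[0]
print(example["problem_id"], "|", example["domain"], "|", example["difficulty_level"])
print(example["problem"][:500])              # statement of the physics problem
print(example["code_answer_requirements"])   # format expected for the auto-verifiable answer
```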
## Dataset Sources
- Paper: https://arxiv.org/abs/2502.15815
- Website: https://tpbench.org/
## Citation

If you find our dataset helpful, please cite our paper.
**BibTeX:**

```bibtex
@misc{chung2025theoreticalphysicsbenchmarktpbench,
      title={Theoretical Physics Benchmark (TPBench) -- a Dataset and Study of AI Reasoning Capabilities in Theoretical Physics},
      author={Daniel J. H. Chung and Zhiqi Gao and Yurii Kvasiuk and Tianyi Li and Moritz Münchmeyer and Maja Rudolph and Frederic Sala and Sai Chaitanya Tadepalli},
      year={2025},
      eprint={2502.15815},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.15815},
}
```