---
license: other
license_name: utqa25-eval-only
license_link: LICENSE
pretty_name: UTQA Thermodynamics Benchmark
paperswithcode_id: utqa-thermodynamics-benchmark
tags:
- thermodynamics
- benchmark
- multiple-choice
- education
- chemistry
- physics
task_categories:
- multiple-choice
language:
- en
# Tell the Viewer exactly which file(s) to display
configs:
- config_name: default
default: true
data_files:
- split: test
path: "data.csv"
# (Optional but recommended) Define schema so the Viewer renders image thumbnails
dataset_info:
features:
- name: question_number
dtype: int32
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: image
dtype: string # <- this makes the viewer show thumbnails for paths like images/fig01.png
- name: correct_answer
dtype: string
- name: explanation
dtype: string
- name: image_explain
dtype: string
description: >
UTQA is a 50-item benchmark on undergraduate-level thermodynamics
covering state functions, entropy, reversibility, and diagram
interpretation. Each question is multiple choice with exactly one
correct answer and comes with an explanation and, where
applicable, a figure reference.
---
# UTQA: Undergraduate Thermodynamics Question Answering Benchmark
UTQA is a 50-item multiple-choice benchmark designed to evaluate large language models on **undergraduate-level thermodynamics**.
Each item presents four options with exactly one correct answer; 17 of the 50 questions include an associated figure.
The benchmark targets **reasoning about state functions, reversibility, entropy, and diagram interpretation**, rather than simple plug-and-chug calculations.
---
## Dataset Structure
- **Number of items**: 50
- **Format**: CSV table + images folder
- **Columns**:
- `question_number`: integer identifier
- `question`: question stem (text)
- `option_a`–`option_d`: four answer choices
- `image`: relative path to figure (if applicable; empty otherwise)
- `correct_answer`: one of {a, b, c, d}
- `explanation`: text explanation of the solution
- `image_explain`: additional explanation of the solution using a figure (if applicable)
Images are provided in the `/images/` directory and linked from the `image` column.
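For a quick look at the data, the sketch below loads the test split with the Hugging Face `datasets` library and pulls out the figure-based items. The Hub repo id `herteltm/UTQA` is an assumption here; substitute the actual path, or point `load_dataset("csv", data_files="data.csv")` at a local clone instead.

```python
from datasets import load_dataset

# Assumption: the dataset is hosted at herteltm/UTQA; adjust the repo id if needed.
ds = load_dataset("herteltm/UTQA", split="test")

print(ds[0]["question"])        # question stem
print(ds[0]["correct_answer"])  # one of "a", "b", "c", "d"

# 17 of the 50 items reference a figure via a non-empty `image` path.
with_figures = ds.filter(lambda row: bool(row["image"]))
print(f"{len(with_figures)} items include a figure")
```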
---
## Intended Use
UTQA is released as a **benchmark for evaluating** language models and as a resource for **educators and researchers** studying AI capabilities in unsupervised teaching contexts.
It is **not** intended for use in pretraining or fine-tuning language models.
### ✅ Allowed
- Model evaluation and benchmarking (see the scoring sketch below)
- Academic research
- Teaching and educational analysis
- Publication of results with proper citation
### ❌ Not Allowed
- Pretraining or fine-tuning of models on UTQA (in whole or in part)
- Incorporation into larger training corpora
- Commercial use for training or data augmentation
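As a concrete illustration of the evaluation use case, here is a minimal scoring sketch under stated assumptions: `answer_fn` is a hypothetical stand-in for whatever model call you use, and the naive regex parse of the reply is an illustration, not the answer-extraction protocol used in the paper.

```python
import re

def build_prompt(row: dict) -> str:
    """Format one UTQA item as a zero-shot multiple-choice prompt."""
    return (
        f"{row['question']}\n"
        f"(a) {row['option_a']}\n"
        f"(b) {row['option_b']}\n"
        f"(c) {row['option_c']}\n"
        f"(d) {row['option_d']}\n"
        "Answer with a single letter (a-d)."
    )

def accuracy(rows, answer_fn) -> float:
    """Fraction of items where the first standalone letter a-d in the
    model reply matches `correct_answer`. `answer_fn` is any callable
    mapping a prompt string to model output text."""
    correct = 0
    for row in rows:
        reply = answer_fn(build_prompt(row)).lower()
        match = re.search(r"\b([abcd])\b", reply)
        if match and match.group(1) == row["correct_answer"].strip().lower():
            correct += 1
    return correct / len(rows)
```

Note that the 17 figure-based items additionally require passing the file referenced by `image` to a vision-capable model; this text-only sketch ignores that.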
---
## License
This dataset is released under a **Custom Evaluation-Only License** (see `LICENSE`).
- Free to use for **research, education, and evaluation**.
- **Training use is prohibited**.
- Attribution is required in all uses and publications.
---
## Citation
If you use UTQA in your research, please cite:
Geißler, A., Bien, L.-S., Schöppler, F., & Hertel, T. (2025).
*From Canonical to Complex: Benchmarking LLM Capabilities in Undergraduate Thermodynamics*.
arXiv:2508.21452 [physics.ed-ph]. https://arxiv.org/abs/2508.21452
```bibtex
@misc{Geissler2025,
  title         = {From Canonical to Complex: Benchmarking LLM Capabilities in Undergraduate Thermodynamics},
  author        = {Anna Geißler and Luca-Sophie Bien and Friedrich Schöppler and Tobias Hertel},
  year          = {2025},
  eprint        = {2508.21452},
  archivePrefix = {arXiv},
  primaryClass  = {physics.ed-ph},
  url           = {https://arxiv.org/abs/2508.21452},
}
```