---
license: mit
pretty_name: U-MATH
task_categories:
- text-generation
language:
- en
tags:
- math
- reasoning
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: uuid
    dtype: string
  - name: subject
    dtype: string
  - name: has_image
    dtype: bool
  - name: image
    dtype: string
  - name: problem_statement
    dtype: string
  - name: golden_answer
    dtype: string
  splits:
  - name: test
    num_examples: 1100
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

**U-MATH** is a comprehensive benchmark of 1,100 unpublished university-level problems sourced from real teaching materials. 

It is designed to evaluate the mathematical reasoning capabilities of Large Language Models (LLMs). \
The dataset is balanced across six core mathematical topics, and 20% of the problems are multimodal (involving visual elements such as graphs and diagrams).

For fine-grained performance evaluation results and detailed discussion, check out our [paper](https://arxiv.org/abs/2412.03205).

* 📊 [U-MATH benchmark at Hugging Face](https://huggingface.co/datasets/toloka/u-math)
* 🔎 [μ-MATH benchmark at Hugging Face](https://huggingface.co/datasets/toloka/mu-math)
* 🗞️ [Paper](https://arxiv.org/abs/2412.03205)
* 👾 [Evaluation Code at GitHub](https://github.com/Toloka/u-math/)

### Key Features
* **Topics Covered**: Precalculus, Algebra, Differential Calculus, Integral Calculus, Multivariable Calculus, Sequences & Series.
* **Problem Format**: Free-form answers, graded by an LLM judge.
* **Evaluation Metrics**: Accuracy, reported overall and split by subject and by text-only vs. multimodal problem type.
* **Curation**: Original problems composed by math professors and used in university curricula, with samples validated by math experts at [Toloka AI](https://toloka.ai) and [Gradarius](https://www.gradarius.com).
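
Since results are reported as accuracy with splits by subject and by modality, the aggregation can be sketched as follows. This is a minimal illustration with hypothetical judge verdicts; the `correct` flag is an assumed output of the LLM judge, not a dataset field (the official evaluation code is in the GitHub repository linked above).

```python
import pandas as pd

# Hypothetical judged results; `correct` is an assumed judge verdict flag
results = pd.DataFrame([
    {"subject": "Algebra", "has_image": False, "correct": True},
    {"subject": "Algebra", "has_image": True,  "correct": False},
    {"subject": "Integral Calculus", "has_image": False, "correct": True},
    {"subject": "Integral Calculus", "has_image": False, "correct": False},
])

# Overall accuracy plus the two splits described above
overall = results["correct"].mean()
by_subject = results.groupby("subject")["correct"].mean()
by_modality = results.groupby("has_image")["correct"].mean()
print(overall)  # 0.5
```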

### Use it

```python
from datasets import load_dataset
ds = load_dataset('toloka/u-math', split='test')
```

### Dataset Fields
`uuid`: unique problem ID \
`subject`: subject tag marking the topic the problem belongs to \
`has_image`: a boolean flag indicating whether the problem is multimodal \
`image`: binary data encoding the accompanying image; empty for text-only problems \
`problem_statement`: the problem formulation, written in natural language \
`golden_answer`: the reference answer to the problem, written in natural language

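As a minimal sketch of working with these fields, the `has_image` flag can be used to separate the text-only and multimodal subsets. The records below are hypothetical examples mimicking the schema; in practice they come from `load_dataset('toloka/u-math', split='test')`.

```python
# Hypothetical records following the field schema above
records = [
    {"uuid": "p1", "subject": "Precalculus", "has_image": False,
     "image": "", "problem_statement": "Solve x + 1 = 3.", "golden_answer": "x = 2"},
    {"uuid": "p2", "subject": "Differential Calculus", "has_image": True,
     "image": "<binary>", "problem_statement": "...", "golden_answer": "..."},
]

# Split into text-only and multimodal subsets via the has_image flag
text_only = [r for r in records if not r["has_image"]]
multimodal = [r for r in records if r["has_image"]]
print(len(text_only), len(multimodal))  # 1 1
```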
For meta-evaluation (evaluating the quality of LLM judges), refer to the [µ-MATH dataset](https://huggingface.co/datasets/toloka/mu-math).

### Evaluation Results

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/650238063e61bc019201e3e2/beMyOikpKfp3My2vu5Mjc.png" alt="umath-table" width="800"/>
</div>

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/650238063e61bc019201e3e2/7_VZXidxMHG7PiDM983lS.png" alt="umath-bar" width="950"/>
</div>

The prompt used for inference:

```
{problem_statement}
Please reason step by step, and put your final answer within \boxed{}
```

### Licensing Information
All the dataset contents are available under the MIT license.

### Citation
If you use U-MATH or μ-MATH in your research, please cite the paper:
```bibtex
@misc{umath2024,
  title={U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs},
  author={Konstantin Chernyshev and Vitaliy Polshkov and Ekaterina Artemova and Alex Myasnikov and Vlad Stepanov and Alexei Miasnikov and Sergei Tilga},
  year={2024},
  eprint={2412.03205},
  archivePrefix={arXiv}
}
```

### Contact
For inquiries, please contact [email protected]