---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: image
    dtype: string
  - name: decoded_image
    dtype: image
  - name: ground_truth
    dtype: string
  - name: answer_type
    dtype: string
  - name: subject
    dtype: string
  - name: knowledge_level
    dtype: string
  splits:
  - name: train
    num_bytes: 37409713.77245509
    num_examples: 1000
  download_size: 43769777
  dataset_size: 37409713.77245509
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
(Unofficial) Mini version of the DynaMath dataset (https://huggingface.co/datasets/DynaMath/DynaMath_Sample).
We sampled 100 random question ids and kept all 10 variants of each from the original DynaMath dataset.
This allows faster evaluation on 1,000 examples instead of the full 5,010, which can be slow and limiting for large model runs.
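The subsampling described above can be sketched as follows. This is a minimal illustration, not the exact script used to build this dataset: the field name `id` matches the feature list above, but the `variant` field, the helper name `subsample`, and the seed are assumptions for the example.

```python
import random

def subsample(examples, n_ids=100, seed=0):
    """Keep every variant of n_ids randomly chosen question ids.

    Applied to the original DynaMath sample set (10 variants per
    question id), n_ids=100 yields 100 x 10 = 1000 examples.
    """
    ids = sorted({ex["id"] for ex in examples})
    keep = set(random.Random(seed).sample(ids, n_ids))
    return [ex for ex in examples if ex["id"] in keep]

# Toy data standing in for the full dataset: 5 ids x 10 variants.
data = [{"id": f"q{i}", "variant": v} for i in range(5) for v in range(10)]
mini = subsample(data, n_ids=2)
print(len(mini))  # 2 ids x 10 variants = 20 examples
```

Sampling by question id rather than by individual example keeps all 10 variants of each chosen question together, which preserves DynaMath's per-question variant structure in the mini split.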