---
license: mit
---

# RealCQA: Real-World Complex Question Answering Dataset

This repository contains the dataset used in the paper "[RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic](https://arxiv.org/pdf/2308.01979)" (ICDAR 2023). The dataset is designed to facilitate research in complex question answering and comprises a diverse set of real-world chart images with associated textual question-answer pairs.

## Dataset Overview

The RealCQA dataset consists of 28,266 images and roughly 2 million corresponding question-answer pairs, organized into three complementary subsets. Each image is accompanied by a JSON file containing one or more question blocks. The dataset is structured to address a range of question-answering tasks that require an understanding of visual content.

### Dataset Structure

The dataset is organized into the following folders:

- **Images**
  - `images`: Contains the first 10,000 images.
  - `images2`: Contains the next 10,000 images.
  - `images3`: Contains the remaining 8,266 images.
- **JSON Files**
  - `jsons`: Contains the JSON files corresponding to the images in the `images` folder.
  - `jsons2`: Contains the JSON files corresponding to the images in the `images2` folder.
  - `jsons3`: Contains the JSON files corresponding to the images in the `images3` folder.
- **QA Files**
  - `qa`: Contains the QA files corresponding to the images in the `images` folder.
  - `qa2`: Contains the QA files corresponding to the images in the `images2` folder.
  - `qa3`: Contains the QA files corresponding to the images in the `images3` folder.

### File Details

- **Images**: JPEG files named in the format `PMCxxxxxx_abc.jpg`, where `xxxxxx` represents the PubMed Central ID and `abc` represents an identifier specific to the image.
- **JSON Files**: JSON files named in the same format as the images. Each JSON file is a list of question blocks associated with the corresponding image.
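Because images and annotations share a filename and the folder layout is parallel (`images` ↔ `jsons`, `images2` ↔ `jsons2`, `images3` ↔ `jsons3`), the annotation path for any image can be derived from its path alone. The helper below is a minimal sketch of that mapping; the function name `json_path_for_image` and the `data/` root are illustrative, not part of the dataset's tooling:

```python
from pathlib import Path

# Image folder -> annotation folder, per the layout described above.
FOLDER_MAP = {"images": "jsons", "images2": "jsons2", "images3": "jsons3"}

def json_path_for_image(image_path: str) -> Path:
    """Derive the JSON annotation path for a given image path."""
    p = Path(image_path)
    json_dir = FOLDER_MAP[p.parent.name]  # e.g. "images2" -> "jsons2"
    return p.parent.parent / json_dir / (p.stem + ".json")

print(json_path_for_image("data/images2/PMC8439477___g003.jpg").as_posix())
# data/jsons2/PMC8439477___g003.json
```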
#### JSON Structure

Each JSON file contains a list of question blocks in the following format:

```json
[
  {
    "taxonomy id": "2j",
    "QID": "16",
    "question": "Are all the bars in the chart visually horizontal?",
    "answer": "no",
    "answer_type": "Binary",
    "qa_id": "XbUzFtjqsEOF",
    "PMC_ID": "PMC8439477___g003"
  },
  {
    "taxonomy id": "1a",
    "QID": "7a",
    "question": "What is the type of chart?",
    "answer": "Vertical Bar chart",
    "answer_type": "String",
    "qa_id": "wzcdDijkrHtt",
    "PMC_ID": "PMC8439477___g003"
  }
]
```

- **QA Files**: Contain additional question-answer metadata relevant to the dataset.

### Dataset Loader

To facilitate loading and using the dataset, we provide a custom dataset loader script, `dataset.py`. This script defines a PyTorch `Dataset` class that handles loading, preprocessing, and batching of the images and question-answer pairs.

#### How to Use the Dataset Loader

1. **Setup and Requirements**

   Ensure you have the following Python packages installed:

   ```bash
   pip install torch torchvision Pillow
   ```

2. **Dataset Loader Script**

   Use the provided `dataset.py` to load the dataset. The script is designed to load the dataset efficiently and to handle both training and testing cases.
   ```python
   from dataset import RQADataset
   from torch.utils.data import DataLoader

   # Define the data configuration
   class DataConfig:
       img_dir = '/home/jupyter/data/RQA/images'  # Update with actual image directory path
       json_dir = '/home/jupyter/data/RQA/jsons'  # Update with actual JSON directory path
       filter_list = '/home/jupyter/data/RQA_V0/test_filenames.txt'  # Path to the file containing test filenames
       train = False  # Set to True for training, False for testing

   # Initialize dataset
   dataset = RQADataset(DataConfig)

   # Initialize DataLoader with the dataset's custom collate function
   dataloader = DataLoader(dataset, batch_size=4, collate_fn=RQADataset.custom_collate)

   # Iterate through the dataset
   for batch in dataloader:
       print(batch)
   ```

### Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@InProceedings{10.1007/978-3-031-41682-8_5,
  author="Ahmed, Saleem and Jawade, Bhavin and Pandey, Shubham and Setlur, Srirangaraj and Govindaraju, Venu",
  editor="Fink, Gernot A. and Jain, Rajiv and Kise, Koichi and Zanibbi, Richard",
  title="RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic",
  booktitle="Document Analysis and Recognition - ICDAR 2023",
  year="2023",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="66--83",
  abstract="We present a comprehensive study of the chart visual question-answering (QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. Despite efforts to tackle this problem using synthetic charts, solutions are limited by the shortage of annotated real-world data. To fill this gap, we introduce a benchmark and dataset for chart visual QA on real-world charts, offering a systematic analysis of the task and a novel taxonomy for template-based chart question creation. Our contribution includes the introduction of a new answer type, `list', with both ranked and unranked variations. Our study is conducted on a real-world chart dataset from scientific literature, showcasing higher visual complexity compared to other works. Our focus is on template-based QA and how it can serve as a standard for evaluating the first-order logic capabilities of models. The results of our experiments, conducted on a real-world out-of-distribution dataset, provide a robust evaluation of large-scale pre-trained models and advance the field of chart visual QA and formal logic verification for neural networks in general. Our code and dataset are publicly available (https://github.com/cse-ai-lab/RealCQA).",
  isbn="978-3-031-41682-8"
}
```

### License

This dataset is licensed under the [MIT License](LICENSE). By using this dataset, you agree to abide by its terms and conditions.

### Contact

For any questions or issues, please contact the authors of the paper or open an issue in this repository.