---
language:
  - en
dataset_info:
  features:
    - name: context
      dtype: string
    - name: questions
      sequence: string
    - name: answers
      sequence:
        sequence: string
  splits:
    - name: train
      num_bytes: 21609736
      num_examples: 888
    - name: validation
      num_bytes: 6445050
      num_examples: 281
  download_size: 14182695
  dataset_size: 28054786
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

Qasper (Question Answering on Scientific Research Papers)

This dataset card describes Qasper, a question answering dataset released by the Allen Institute for AI; it was generated from the standard Hugging Face dataset card template. Qasper is a dataset of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners, who also provide supporting evidence for their answers.

Dataset Details - Abstract of the Paper

Readers of academic research papers often read with the goal of answering specific questions. Question Answering systems that can answer those questions can make consumption of the content much more efficient. However, building such tools requires data that reflect the difficulty of the task arising from complex reasoning about claims made in multiple parts of a paper. In contrast, existing information-seeking question answering datasets usually contain questions about generic factoid-type information. We therefore present QASPER, a dataset of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence to answers. We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers, motivating further research in document-grounded, information-seeking QA, which our dataset is designed to facilitate.
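
The "F1 points" in the abstract refer to an overlap score between predicted and reference answers. As a rough illustration only, here is a SQuAD-style token-level F1 with the maximum taken over a question's reference answers; the paper's official evaluation script may normalize text differently, so treat this as a hedged sketch rather than the exact metric:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and one reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # two empty answers count as a perfect match, otherwise zero
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def answer_f1(prediction: str, references: list[str]) -> float:
    """Score a prediction against the best-matching reference answer."""
    return max(token_f1(prediction, ref) for ref in references)
```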

Dataset Description

  • Curated by: Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner
  • Shared by: Allen Institute for AI, Paul G. Allen School of CSE, University of Washington

Dataset Sources

  • Paper: https://arxiv.org/abs/2105.03011

Dataset Structure

Each example pairs the full text of one paper with the questions asked about it and the reference answers collected for each question:

{
  "context": "<full text of the paper>",
  "questions": ["question1", "question2", ...],
  "answers": [
    ["answer1_to_question1", "answer2_to_question1", ...],
    ["answer1_to_question2", "answer2_to_question2", ...],
    ...
  ]
}
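
The dataset can be consumed directly with the Hugging Face datasets library. A minimal loading sketch, assuming the dataset is published on the Hub under a repo id like "allenai_qasper" (the exact namespace is not stated on this card and may need adjusting):

```python
from datasets import load_dataset

# Splits follow the metadata above: "train" (888 examples) and "validation" (281 examples).
ds = load_dataset("allenai_qasper")

example = ds["train"][0]
paper_text = example["context"]            # full text of one NLP paper
for question, reference_answers in zip(example["questions"], example["answers"]):
    # each question comes with one or more reference answers
    print(question, "->", reference_answers)
```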

Citation

BibTeX:

@misc{dasigi2021datasetinformationseekingquestionsanswers,
  title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
  author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
  year={2021},
  eprint={2105.03011},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2105.03011},
}

Dataset Card Author

Hulki Çıray, researcher at GGLab

Dataset Card Contact

Email