---
license: mit
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: context
      dtype: string
    - name: choices
      sequence: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 63920351
      num_examples: 2523
    - name: validation
      num_bytes: 52064930
      num_examples: 2086
  download_size: 5955070
  dataset_size: 115985281
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---
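
To load the processed splits directly, something like the following should work; the repository id `rbiswasfc/quality` is an assumption based on this card's location, so adjust it if the dataset lives elsewhere.

```python
from datasets import load_dataset

# Repository id is assumed; point this at wherever the dataset is hosted.
ds = load_dataset("rbiswasfc/quality")

train_ds = ds["train"]            # 2523 examples
validation_ds = ds["validation"]  # 2086 examples

example = validation_ds[0]
print(example["question"])   # question prompt
print(example["choices"])    # list of 4 answer options
print(example["label"])      # index of the correct option
```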

This dataset is derived from the `tau/scrolls` dataset by running the following script:

```python
import re

from datasets import load_dataset


def _normalize_answer(text):
    return " ".join(text.split()).strip()


def _drop_duplicates_in_input(untokenized_dataset):
    # from scrolls/evaluator/dataset_evaluator.py
    # Keep one row per "id" and collect all of its "output" values
    # into a new "outputs" list column.

    indices_to_keep = []
    id_to_idx = {}
    outputs = []
    for i, (id_, output) in enumerate(
        zip(untokenized_dataset["id"], untokenized_dataset["output"])
    ):
        if id_ in id_to_idx:
            outputs[id_to_idx[id_]].append(output)
            continue
        indices_to_keep.append(i)
        id_to_idx[id_] = len(outputs)
        outputs.append([output])
    untokenized_dataset = untokenized_dataset.select(indices_to_keep).flatten_indices()
    untokenized_dataset = untokenized_dataset.remove_columns("output")
    untokenized_dataset = untokenized_dataset.add_column("outputs", outputs)
    return untokenized_dataset


def _process_doc_prepended_question(doc):
    input = doc["input"]
    split = input.find("\n\n")
    return {
        "id": doc["id"],
        "pid": doc["pid"],
        "input": input,
        "outputs": doc["outputs"],
        "question": input[0:split],
        "text": input[split + 2 :],
    }


def process_doc(doc):
    quality_multiple_choice_pattern = re.compile(r" *\([A-D]\) *")
    doc = _process_doc_prepended_question(doc)

    split = doc["text"].find("\n\n", doc["text"].find("(D)"))
    choices_text = doc["text"][:split]

    doc["text"] = doc["text"][split:].strip()
    doc["choices"] = [
        _normalize_answer(choice)
        for choice in re.split(quality_multiple_choice_pattern, choices_text)[1:]
    ]
    doc["gold"] = doc["choices"].index(_normalize_answer(doc["outputs"][0]))
    return doc
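
# Illustrative layout of a raw QuALITY "input" (reconstructed from the parsing
# logic above, not an actual record):
#
#   <question text>\n\n
#    (A) <choice A> (B) <choice B> (C) <choice C> (D) <choice D>\n\n
#   <full story / article text>
#
# _process_doc_prepended_question splits the question off at the first "\n\n";
# process_doc then separates the choice block from the story at the first "\n\n"
# after "(D)" and stores the index of the gold answer under "gold".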


def get_quality_dataset():
    """
    Download and process the QuALITY dataset following the lm-evaluation-harness
    scrolls_quality task.

    The processed dataset has train and validation splits with 2523 and 2086
    examples, respectively. Fields to be used during evaluation:
    - question: the question prompt
    - text: the context
    - choices: list of choices (4 in total)
    - gold: index of the correct choice
    """
    quality_dataset = load_dataset("tau/scrolls", "quality")
    del quality_dataset["test"]  # drop test split -> no ground truths
    for split in quality_dataset:
        quality_dataset[split] = _drop_duplicates_in_input(quality_dataset[split])
    quality_dataset = quality_dataset.map(process_doc)
    return quality_dataset


quality_dataset = get_quality_dataset()
quality_dataset = quality_dataset.rename_columns({"text": "context", "gold": "label"})
quality_dataset = quality_dataset.remove_columns(["pid", "input", "outputs"])
train_ds = quality_dataset["train"]
validation_ds = quality_dataset["validation"]
```
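
As a rough sketch (not part of the derivation script above), the processed fields can be rendered into a multiple-choice prompt for evaluation, for example:

```python
# Illustrative only: builds a prompt from the columns produced by the script
# above (question, context, choices, label), using `validation_ds` from it.
LETTERS = ["A", "B", "C", "D"]


def build_prompt(example):
    options = "\n".join(
        f"({letter}) {choice}"
        for letter, choice in zip(LETTERS, example["choices"])
    )
    return f"{example['context']}\n\nQuestion: {example['question']}\n{options}\nAnswer:"


example = validation_ds[0]
print(build_prompt(example))
print("Gold answer:", LETTERS[example["label"]])
```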

The processing code is adapted from the lm-evaluation-harness scrolls task.


Relevant section from the SCROLLS (Standardized CompaRison Over Long Language Sequences) paper:

QuALITY (Pang et al., 2021): A multiple-choice question answering dataset over stories and articles sourced from Project Gutenberg, the Open American National Corpus (Fillmore et al., 1998; Ide and Suderman, 2004), and more. Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that, in order to correctly answer them, human annotators must read large portions of the given document. To measure the difficulty of their questions, Pang et al. conducted a speed validation process, where another set of annotators were asked to answer questions given only a short period of time to skim through the document. As a result, 50% of the questions in QuALITY are labeled as hard, i.e., the majority of the annotators in the speed validation setting chose the wrong answer.