---
dataset_info:
  features:
    - name: prompt
      dtype: string
    - name: completion
      dtype: string
    - name: task_source
      dtype: string
    - name: task_name
      dtype: string
    - name: template_type
      dtype: string
  splits:
    - name: train
      num_bytes: 548560141
      num_examples: 350023
  download_size: 322568508
  dataset_size: 548560141
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - flan
  - instructions
  - subsample
pretty_name: Flan-2022-350K
size_categories:
  - 100K<n<1M
---

# Dataset Card for Flan-2022 Subsample (Flan 350K)

## Dataset Description

### Dataset Summary

This dataset was used in "Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs" (https://arxiv.org/abs/2507.07186), a study of where cognitive biases in LLMs originate.

This is a 350,000-example subsample of the original FLAN 2022 instruction-tuning dataset (https://arxiv.org/abs/2210.11416), created to provide a balanced, computationally efficient variant for evaluating bias robustness in language model finetuning. The subset maintains the original task diversity and structure while substantially reducing size.

The dataset contains English-language instructions and responses across hundreds of NLP tasks. It supports text-to-text generation and instruction-following tasks in a format suitable for training encoder-decoder or decoder-only LLMs.
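
For reference, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository id below is inferred from this card's name and may differ from the actual hub path:

```python
# Minimal loading sketch; the repository id is an assumption
# inferred from this card's name.
from datasets import load_dataset

ds = load_dataset("itay1itzhak/flan_2022_350k", split="train")
print(ds)               # features: prompt, completion, task_source, task_name, template_type
print(ds[0]["prompt"])  # first instruction
```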

### Supported Tasks and Leaderboards

- `text2text-generation`: The dataset is designed for instruction tuning: a model is trained to generate a helpful, accurate, and concise response (`completion`) to a user instruction (`prompt`).
- Metrics: exact match, BLEU, ROUGE-L, or task-specific metrics, depending on the downstream evaluation.
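
As a rough illustration, generated completions could be scored with the Hugging Face `evaluate` package; the predictions and references below are invented for the example, and the right metric depends on the task:

```python
# Sketch of scoring model outputs against reference completions.
import evaluate

predictions = ["Paris is the capital of France."]
references = ["The capital of France is Paris."]

exact_match = evaluate.load("exact_match")
rouge = evaluate.load("rouge")

print(exact_match.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```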

### Languages

- English (BCP-47: `en`)

## Dataset Structure

### Data Instances

Each instance is structured as:

```json
{
  "prompt": "Explain what overfitting means in machine learning.",
  "completion": "Overfitting occurs when a model learns the training data too well...",
  "task_source": "science",
  "task_name": "definition",
  "template_type": "instruction"
}
```

### Data Fields

- `prompt` (string): the natural language instruction (input)
- `completion` (string): the natural language output (target)
- `task_source` (string): the domain or source of the task
- `task_name` (string): the name of the specific NLP task
- `template_type` (string): the format of the prompt (e.g., instruction, question-answer)
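
For decoder-only finetuning, the two text fields are typically concatenated into a single training sequence. A minimal sketch follows; the newline separator and EOS token are illustrative choices, not prescribed by the dataset:

```python
def to_training_text(example, eos_token="</s>"):
    """Join an instruction and its target into one training string.

    The newline separator and EOS token are illustrative defaults,
    not something the dataset prescribes.
    """
    return f"{example['prompt']}\n{example['completion']}{eos_token}"

# Example using the instance shown above:
example = {
    "prompt": "Explain what overfitting means in machine learning.",
    "completion": "Overfitting occurs when a model learns the training data too well...",
}
print(to_training_text(example))
```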

### Data Splits

| Split | Examples |
|-------|----------|
| train | 350,023  |

Only a training split is provided. The subsample was drawn randomly while preserving task diversity.

## Dataset Creation

### Curation Rationale

This dataset was created to reduce the size of FLAN 2022 while preserving representative task coverage. The goal was to make bias evaluation tractable on resource-constrained hardware.

### Source Data

The original data was collected by Google Research and is described in https://arxiv.org/abs/2210.11416.

The subset used here was sampled using uniform per-task subsampling.
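
A minimal sketch of what uniform per-task subsampling can look like; this is an illustrative reconstruction assuming a list of dicts with this dataset's `task_name` field, not the authors' actual sampling script:

```python
import random
from collections import defaultdict

def per_task_subsample(examples, target_size, seed=0):
    """Draw roughly `target_size` examples with an equal budget per task.

    `examples` is an iterable of dicts with a "task_name" key, matching
    this dataset's schema; the function is illustrative only.
    """
    rng = random.Random(seed)
    by_task = defaultdict(list)
    for ex in examples:
        by_task[ex["task_name"]].append(ex)

    cap = max(1, target_size // len(by_task))  # equal per-task budget
    sampled = []
    for task_examples in by_task.values():
        rng.shuffle(task_examples)
        sampled.extend(task_examples[:cap])

    rng.shuffle(sampled)  # avoid task-ordered output
    return sampled
```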

#### Who are the source language producers?

The original tasks were crowdsourced or derived from public NLP benchmarks and curated by researchers; the subset preserves this composition.

### Annotations

There are no additional annotations.

### Personal and Sensitive Information

The dataset does not include personally identifying or otherwise sensitive user information.

## Considerations for Using the Data

### Social Impact of Dataset

Instruction datasets like Flan support generalization and alignment of language models. They also risk amplifying human biases embedded in task design.

### Discussion of Biases

Biases may stem from:

- Uneven task formulation
- Cultural assumptions in prompts or completions

## Additional Information

### Dataset Curators

- Google Research (original FLAN dataset)
- The authors of "Planted in Pretraining, Swayed by Finetuning" (350K subsample)

### Licensing Information

Apache License 2.0 (`apache-2.0`)

### Citation Information

```bibtex
@misc{itzhak2025plantedpretrainingswayedfinetuning,
      title={Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs},
      author={Itay Itzhak and Yonatan Belinkov and Gabriel Stanovsky},
      year={2025},
      eprint={2507.07186},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.07186},
}
```

### Contributions

Thanks to [@itay1itzhak](https://huggingface.co/itay1itzhak) for preparing the 350K subset.