---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 24500165181
      num_examples: 80462898
  download_size: 14400389487
  dataset_size: 24500165181
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for "bert_pretrain_datasets"

This dataset is a concatenation of the training split of English Wikipedia (`wikipedia`, config `20220301.en`) and the training split of BookCorpus (`bookcorpus`).

This is exactly how the dataset was built:

```python
from datasets import load_dataset, concatenate_datasets

cache_dir = "/data/haob2/cache/"

# BookCorpus
bookcorpus = load_dataset("bookcorpus", split="train", cache_dir=cache_dir)

# English Wikipedia; keep only the "text" column so the schema matches BookCorpus
wiki = load_dataset("wikipedia", "20220301.en", split="train", cache_dir=cache_dir)
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"])

# concatenation
concat = concatenate_datasets([bookcorpus, wiki])

concat.push_to_hub("JackBAI/bert_pretrain_datasets")
```

Note that this is a naive reproduction of the data BERT was pretrained on. We believe the official BERT checkpoint was pretrained on a much more carefully engineered corpus.
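
For reference, a minimal sketch of loading the resulting dataset back from the Hub with the standard `datasets` API (pass `streaming=True` if you want to avoid the full ~14 GB download):

```python
from datasets import load_dataset

# Single "train" split with one "text" column
ds = load_dataset("JackBAI/bert_pretrain_datasets", split="train")
print(ds[0]["text"])
```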