JackBAI committed
Commit c5cacb6 · 1 Parent(s): 412e6ec

Update README.md

Files changed (1): README.md +23 -1
README.md CHANGED
@@ -17,4 +17,26 @@ configs:
---
# Dataset Card for "bert_pretrain_datasets"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ This dataset is essentially a concatenation of the training splits of English Wikipedia (wikipedia.20220301.en.train) and BookCorpus (bookcorpus.train).
+
+ This is exactly how I built this dataset:
+
+ ```python
+ from datasets import load_dataset, concatenate_datasets
+
+ cache_dir = "/data/haob2/cache/"
+
+ # BookCorpus
+ bookcorpus = load_dataset("bookcorpus", split="train", cache_dir=cache_dir)
+
+ # English Wikipedia (keep only the "text" column so the schemas match)
+ wiki = load_dataset("wikipedia", "20220301.en", split="train", cache_dir=cache_dir)
+ wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"])
+
+ # concatenate the two corpora
+ concat = concatenate_datasets([bookcorpus, wiki])
+
+ # push the result to the Hub
+ concat.push_to_hub("JackBAI/bert_pretrain_datasets")
+ ```
+
+ Note that this is a naive reproduction of the dataset BERT was pretrained on. We believe the official BERT checkpoint was pretrained on a much more carefully engineered dataset.
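As a quick sanity check, here is a minimal sketch of loading the published dataset back from the Hub; it assumes the `push_to_hub` call above completed and created the default `train` split:

```python
from datasets import load_dataset

# Load the concatenated Wikipedia + BookCorpus dataset pushed above.
# push_to_hub on a single Dataset object creates a "train" split by default.
dataset = load_dataset("JackBAI/bert_pretrain_datasets", split="train")

# Every example carries a single "text" column, matching the two source corpora.
print(dataset)
print(dataset[0]["text"][:200])
```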