---

# Dataset Card for "bert_pretrain_datasets"

This dataset is essentially a concatenation of the training set of the English Wikipedia (wikipedia.20220301.en.train) and the Book Corpus (bookcorpus.train).

This is exactly how I created this dataset:

```python
from datasets import load_dataset, concatenate_datasets

cache_dir = "/data/haob2/cache/"

# BookCorpus (train split)
bookcorpus = load_dataset("bookcorpus", split="train", cache_dir=cache_dir)

# English Wikipedia (20220301.en, train split); keep only the "text" column
wiki = load_dataset("wikipedia", "20220301.en", split="train", cache_dir=cache_dir)
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"])

# concatenate the two corpora and push the result to the Hub
concat = concatenate_datasets([bookcorpus, wiki])

concat.push_to_hub("JackBAI/bert_pretrain_datasets")
```

Note that this is a naive reproduction of the dataset that BERT was trained on. We believe the official BERT checkpoint was pretrained on a much more heavily engineered dataset.
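
If you just want to use the result, the concatenated corpus can be loaded straight from the Hub. Here is a minimal sketch (it assumes the default `train` split created by `push_to_hub` and a single `text` column):

```python
from datasets import load_dataset

# Load the concatenated corpus back from the Hub
data = load_dataset("JackBAI/bert_pretrain_datasets", split="train")

print(data)             # features should be ['text']
print(data[0]["text"])  # first example
```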