Resources used to produce this version of the dataset?

#1 opened by spate141

Can you provide any details about the resources (CPUs, memory, storage, time) used to produce this dataset?

From the OLM/CC GitHub README, I can estimate that getting and processing 20% of the August 2022 CC snapshot, which yields about 1.45 TB of data, requires roughly 15 TB of disk storage, and that full deduplication needs about 700 to 900 GB of memory. But I can't find any details about how many CPUs were used or how long the processing took. Was this data processed on a single machine with a single disk?
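To make that back-of-envelope math explicit, here is a minimal sketch of the scaling I'm assuming. The figures are the ones quoted above; the linear scaling (and the function itself) are my own assumptions, not anything from the README, and dedup memory in particular may not actually scale linearly:

```python
# Rough resource estimate, scaling linearly from the figures quoted above.
# All ratios are assumptions, not measured values.

CC_FRACTION = 0.20          # fraction of the August 2022 CC snapshot processed
PROCESSED_TB = 1.45         # resulting dataset size at that fraction
DISK_TB = 15.0              # working disk needed at that fraction
DEDUP_MEM_GB = (700, 900)   # memory range quoted for full deduplication

def estimate(fraction: float) -> dict:
    """Scale the quoted figures to another snapshot fraction (assumed linear)."""
    scale = fraction / CC_FRACTION
    return {
        "output_tb": PROCESSED_TB * scale,
        "disk_tb": DISK_TB * scale,
        "dedup_mem_gb": tuple(round(m * scale) for m in DEDUP_MEM_GB),
    }

if __name__ == "__main__":
    # e.g. what a full (100%) snapshot might need under these assumptions
    print(estimate(1.0))
```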

Tristan (Online Language Modelling org)

Closing because I think your question was answered here: https://github.com/huggingface/olm-datasets/issues/4. Feel free to open another issue if something else comes up, or if you want further clarification!

Tristan changed discussion status to closed