E621 2024 tags only in 1k tar
A dedicated dataset that aligns both NebulaeWis/e621-2024-webp-4Mpixel and deepghs/e621_newest-webp-4Mpixel.
How to use it / why I created this: my speedrun to build the dataset
Core logic
Artist and character tags come first, literally as-is. No fancy "quality" score for a pretraining-focused dataset.
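For context, a minimal sketch of where `df11` and `df21` could come from, assuming `posts-2024-04-07.parquet` carries the NebulaeWis-style metadata (space-separated `tag_string`) and `table.parquet` the deepghs-style metadata (`tags` as a list column); the file and column names are assumptions:

```python
import pandas as pd

# Assumed inputs; adjust file names to whatever you downloaded.
df11 = pd.read_parquet("posts-2024-04-07.parquet")  # has id, rating, tag_string
df21 = pd.read_parquet("table.parquet")             # has id, rating, tags (list)
```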
```python
# Concatenate both sources and normalize the two tag formats
# into one comma-separated tag_string per post.
df3 = pd.DataFrame(data={
    'id': df11["id"].to_list() + df21["id"].to_list(),
    'rating': df11["rating"].to_list() + df21["rating"].to_list(),
    'tag_string': [x.replace(" ", ", ") for x in df11["tag_string"].to_list()]
                  + [", ".join(x) for x in df21["tags"].to_list()],
})
```
How to build the "dataset" with speed
- Get at least 4 TB of storage and around 75 GB of RAM. Always create a venv / conda environment for each task.
- (Optional) Download these directly: posts-2024-04-07.parquet and table.parquet (a sketch follows).
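If you take the direct route, a minimal sketch with `huggingface_hub` (the repo id below is hypothetical; substitute this dataset's actual id):

```python
from huggingface_hub import hf_hub_download

# "user/this-dataset" is a placeholder, not the real repo id.
for filename in ["posts-2024-04-07.parquet", "table.parquet"]:
    print(hf_hub_download(repo_id="user/this-dataset",
                          filename=filename, repo_type="dataset"))
```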
- Download all 1k tarfiles with webp images via dl-e621-hfhub-nw.py and dl-e621-hfhub-dgs.py.
- Rerun that script for this repo (another 1k tarfiles); a rough equivalent is sketched below.
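Those scripts aren't reproduced here; a rough equivalent using `huggingface_hub.snapshot_download` (the `*.tar` pattern and local paths are assumptions about the repo layouts):

```python
from huggingface_hub import snapshot_download

# Grab all tarfiles from both image repos into local per-repo directories.
for repo_id in ["NebulaeWis/e621-2024-webp-4Mpixel", "deepghs/e621_newest-webp-4Mpixel"]:
    snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=["*.tar"],
        local_dir=repo_id.split("/")[-1],
    )
```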
- Run extract-e621-parallel.py to extract all tars into a single directory; a sketch of the idea follows the log below.
```
> python extract-e621-parallel.py
100%|██████████████████████████████████████| 1000/1000 [2:30:51<00:00, 9.05s/it]
Extracted: 1000 iters
Delta: 744438 files
```
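The script itself isn't shown above; a minimal sketch of the same idea (parallel tar extraction into one directory), with the worker count and paths as assumptions:

```python
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from tqdm import tqdm

SRC = Path("tars")             # assumed: directory holding the 1000 tarfiles
DST = Path("kohyas_finetune")  # single output directory, as counted below

def extract_one(tar_path: Path) -> None:
    # Each worker extracts one tar into the shared output directory.
    with tarfile.open(tar_path) as tf:
        tf.extractall(DST)

if __name__ == "__main__":
    tars = sorted(SRC.glob("*.tar"))
    with ProcessPoolExecutor(max_workers=8) as pool:
        list(tqdm(pool.map(extract_one, tars), total=len(tars)))
```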
```
PS H:\e621_newest-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./kohyas_finetune").length);
8883320
```
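Listing 8.8M entries in one array is memory-hungry (part of why the RAM requirement above is so high); in Python, `os.scandir` can count them lazily instead:

```python
import os

# Stream directory entries instead of materializing an 8.8M-item list.
print(sum(1 for _ in os.scandir("./kohyas_finetune")))
```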
- (Done?) Finally, instead of the official guide (which is a bit messy), follow this reddit post to make the metadata JSON file (with ARB, aspect-ratio bucketing) and start finetuning. A sketch of the metadata step follows.
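As a hedged sketch of that metadata step, assuming the common kohya sd-scripts layout of `{image_key: {"tags": "..."}}` with one entry per image (verify the exact format against the reddit post); ARB itself happens in a later latents-preparation step:

```python
import json

# Assumed metadata layout: one entry per image, keyed by post id.
meta = {str(row.id): {"tags": row.tag_string} for row in df3.itertuples()}
with open("meta_cap.json", "w", encoding="utf-8") as f:
    json.dump(meta, f, ensure_ascii=False)
```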