6DammK9 committed
Commit 573e32c · verified · 1 Parent(s): 5b5006f

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED

```diff
@@ -24,7 +24,7 @@ source_datasets:
 - Dedicated dataset to align both [NebulaeWis/e621-2024-webp-4Mpixel](https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel) and [deepghs/e621_newest-webp-4Mpixel](https://huggingface.co/datasets/deepghs/e621_newest-webp-4Mpixel). "4MP-Focus" refers to the average raw image resolution.
 - Latents are ARB (aspect-ratio bucketed) with a maximum size of 1024x1024, the recommended setting in kohya's sd-scripts. The major reason is to ensure I can finetune on an RTX 3090. *VRAM usage rises drastically above 1024.*
 - Generated from [prepare_buckets_latents_v2.py](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch06/sd-scripts-runtime/prepare_buckets_latents_v2.py), modified from [prepare_buckets_latents.py](https://github.com/kohya-ss/sd-scripts/blob/sd3/finetune/prepare_buckets_latents.py).
-- Used for [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts/blob/sd3/docs/train_README-ja.md#latents%E3%81%AE%E4%BA%8B%E5%89%8D%E5%8F%96%E5%BE%97). In theory it may replace `*.webp` and `*.txt` along with [meta_lat.json](https://huggingface.co/datasets/6DammK9/e621_2024-latents-sdxl-1ktar/blob/main/meta_lat.tar.gz).
+- Used for [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts/blob/sd3/docs/train_README-ja.md#latents%E3%81%AE%E4%BA%8B%E5%89%8D%E5%8F%96%E5%BE%97). In theory it may replace `*.webp` and `*.txt` along with [meta_lat.json](https://huggingface.co/datasets/6DammK9/e621_2024-latents-sdxl-1ktar/blob/main/meta_lat.tar.gz). **Raw data is no longer required.**
 - It took me around 10 days with 4x RTX 3090 to generate (with many PSU trips and I/O deadlocks). The ideal case would be only 3-4 days (18 it/s).
 - Download along with [webp / txt](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch06/cheesechaser-runtime/e621_newest-webp-4Mpixel/dl-e621-hfhub-dgs.py), then [extract them all to a single directory](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch06/cheesechaser-runtime/e621_newest-webp-4Mpixel/extract-e621-parallel-ex.py), and [you are good to go](https://github.com/6DammK9/nai-anime-pure-negative-prompt/blob/main/ch06/sd-scripts-runtime/readme.md#finetune-stage). Tags are available in [6DammK9/e621_2024-tags-1ktar](https://huggingface.co/datasets/6DammK9/e621_2024-tags-1ktar).
 - *I still don't know how to make multi-GPU training work on Windows.* Ultimately I may need to switch trainers. Use this repo if your setup already works.
```
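The added note, "Raw data is no longer required", can be illustrated with a minimal sketch of how pre-computed latents plus `meta_lat.json` might be consumed without the original `*.webp` / `*.txt` files. This is a rough sketch, not the repo's actual format: the `.npz` key name (`latents`), the latent shape (SDXL VAE output: 4 channels at 1/8 the pixel resolution), and the `meta_lat.json` schema shown here are all assumptions.

```python
# Sketch: consuming pre-computed latents instead of raw images.
# ASSUMPTIONS (hypothetical, not confirmed by this dataset): the .npz
# stores the latent under the key "latents", SDXL latents are 4 channels
# at 1/8 resolution (1024 / 8 = 128), and meta_lat.json maps an image
# key to its caption and bucketed training resolution.
import json
import numpy as np

# Fabricate one 1024x1024-bucket latent as a stand-in for real output.
latents = np.random.randn(4, 128, 128).astype(np.float32)
np.savez("example.npz", latents=latents)

# Hypothetical meta_lat.json entry; the real schema may differ.
meta = {"example": {"caption": "tag1, tag2", "train_resolution": [1024, 1024]}}
with open("meta_lat.json", "w") as f:
    json.dump(meta, f)

# A trainer can now read latent + caption + bucket with no raw .webp/.txt.
with np.load("example.npz") as npz:
    lat = npz["latents"]
with open("meta_lat.json") as f:
    entry = json.load(f)["example"]
print(lat.shape, entry["train_resolution"])  # (4, 128, 128) [1024, 1024]
```

The point of the sketch is only that the latent array and the metadata entry together carry everything the finetune stage reads, which is why the diff can claim the raw images become optional.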