Trained for 23 epochs and 1800 steps.
Trained with datasets ['text-embeds', 'little-tinies']
Learning rate 0.0001, batch size 8, and 2 gradient accumulation steps.
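With 2 gradient accumulation steps, gradients from two micro-batches of 8 are accumulated before each optimizer update. A minimal sketch of the resulting effective batch size, assuming a single training device:

```python
# Values taken from the training summary above; single-GPU setup assumed.
per_device_batch_size = 8
gradient_accumulation_steps = 2

# Each optimizer update averages gradients over this many samples.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 16
```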
Used the DDPM noise scheduler for training, with the epsilon prediction type, 'trailing' timestep spacing, and rescaled_betas_zero_snr=False.
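A minimal sketch of a matching noise scheduler in diffusers; any value not listed above (such as the timestep count and beta schedule) is an assumed library default, not confirmed by this summary:

```python
from diffusers import DDPMScheduler

# Mirrors the training settings listed above.
noise_scheduler = DDPMScheduler(
    num_train_timesteps=1000,        # assumed default
    prediction_type="epsilon",
    timestep_spacing="trailing",
    rescale_betas_zero_snr=False,
)
```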
Base model: /workspace/SimpleTuner/FLUX.1-dev/
VAE: None
LoRA weights: pytorch_lora_weights.safetensors (Git LFS, 149474080 bytes, sha256 91b01ebc7ba1a3f6cabe03c9be859ebdab2da8c2b5a03754621214fa5327ccf5)
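A minimal inference sketch with diffusers, assuming the base model path above points to a local copy of FLUX.1-dev and that the LoRA file sits in the current directory; the prompt is a placeholder, not taken from the training data:

```python
import torch
from diffusers import FluxPipeline

# Base model path from the summary above; the Hub ID
# "black-forest-labs/FLUX.1-dev" can be used instead if no local copy exists.
pipe = FluxPipeline.from_pretrained(
    "/workspace/SimpleTuner/FLUX.1-dev/", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(".", weight_name="pytorch_lora_weights.safetensors")
pipe.to("cuda")

# Placeholder prompt; step count and guidance are typical FLUX.1-dev values.
image = pipe(
    "a photo in the style of little-tinies",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```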