Realistic photo in 1 step
[Please keep in mind I am very very new to using AI (just 3 days)]
I'm really confused by Schnell. Using ComfyUI with 1 step, it makes a very realistic image, which gets worse with more steps:
1 step (9.09s) vs 6 steps (38.24s), same settings. (Seed for this image: 12284149850765; the behavior seems consistent across seeds.)
Is this model using a real picture from the dataset as a starting point or something?
Is there any reason to go beyond 1 step when it looks this good?
Prompt: "realistic, high quality, photograph of close-up of ginger haired woman with natural freckles"
Sampler: 1 step, 1.0 cfg, euler sampler, karras scheduler, 1 denoise
Dev is the guidance-distilled version, while Schnell is the timestep-distilled version, so this is normal: Schnell is built to produce a good image in very few steps, and adding more steps doesn't necessarily improve it. There are several other distillation methods that converge in few steps, such as LCM, Hyper-SD, and DMD2.
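If you want to reproduce the comparison outside ComfyUI, here's a minimal diffusers sketch. The prompt, seed, and step counts come from your post; the model ID, dtype, and CPU offloading are my assumptions:

```python
import torch
from diffusers import FluxPipeline

# Sketch: FLUX.1-schnell at 1 step vs. 6 steps with a fixed seed.
# Assumes the standard Hugging Face model ID and enough memory for offloading.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

prompt = ("realistic, high quality, photograph of close-up of "
          "ginger haired woman with natural freckles")

for steps in (1, 6):
    image = pipe(
        prompt,
        num_inference_steps=steps,
        guidance_scale=0.0,  # schnell does not use guidance; 0.0 matches the docs example
        generator=torch.Generator("cpu").manual_seed(12284149850765),
    ).images[0]
    image.save(f"schnell_{steps}_steps.png")
```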
Schnell
https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux
https://huggingface.co/sayakpaul/FLUX.1-merged/discussions/1
https://huggingface.co/spaces/multimodalart/low-step-flux-comparison
Few steps
https://huggingface.co/ByteDance/Hyper-SD
https://huggingface.co/latent-consistency/lcm-lora-sdxl
https://huggingface.co/tianweiy/DMD2
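For comparison, the LCM-LoRA route linked above does a similar trick on SDXL: swap in the LCM scheduler, load the distilled LoRA, and sample in about 4 steps. A rough sketch based on that model card; the prompt and step count here are just placeholders:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Sketch: few-step SDXL sampling with the LCM-LoRA linked above.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the LCM scheduler and load the distilled LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

image = pipe(
    "photograph of a ginger haired woman with natural freckles",  # placeholder prompt
    num_inference_steps=4,
    guidance_scale=1.0,  # LCM works best with little or no CFG
).images[0]
image.save("lcm_lora_4_steps.png")
```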
Thanks for the reply. It was strange to me because more complicated prompts produce odd lines across parts of the image and mutations at 1 step, but for some reason this prompt's first step was consistently near-perfect.