Model weights converted to fp8_e4m3fn. Everything else is the same as the upstream model: https://huggingface.co/ashen0209/Flux-Dev2Pro

Colab Flux-dev2pro fp8 script
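
For reference, below is a minimal sketch of how such a conversion can be done; it is an illustration under assumptions (a single safetensors file, placeholder paths), not necessarily what the Colab script above does.

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder paths; the upstream checkpoint may be sharded across several files.
src = "Flux-Dev2Pro/diffusion_pytorch_model.safetensors"
dst = "Flux-Dev2Pro-fp8/diffusion_pytorch_model.safetensors"

state_dict = load_file(src)

# Cast floating-point tensors to fp8_e4m3fn; leave any non-float tensors untouched.
converted = {
    name: tensor.to(torch.float8_e4m3fn) if tensor.is_floating_point() else tensor
    for name, tensor in state_dict.items()
}

save_file(converted, dst)
```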

Flux-Dev2Pro

Flux-Dev2Pro fine-tunes the transformer of Flux-Dev to improve LoRA training.

As discussed in this blog post (https://medium.com/@zhiwangshi28/why-flux-lora-so-hard-to-train-and-how-to-overcome-it-a0c70bc59eaf), LoRAs trained directly on Flux-Dev often yield poor results because, without guidance distillation, the LoRA training diverges from the original training process. Flux-Dev2Pro recovers Flux-Pro from Flux-Dev by fine-tuning the model for many steps: two epochs over 3M high-quality images.

A LoRA trained on Flux-Dev2Pro yields much better results when applied to Flux-Dev, just as a LoRA trained on SDXL can be applied to SDXL-Turbo/Lightning.
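
For example, a LoRA trained against Flux-Dev2Pro can be loaded onto the stock Flux-Dev pipeline in the usual way. A minimal sketch (the LoRA path and prompt are placeholders):

```python
import torch
from diffusers import FluxPipeline

# Standard Flux-Dev pipeline
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder path: a LoRA that was trained with Flux-Dev2Pro as the base
pipe.load_lora_weights("path/to/your_dev2pro_lora.safetensors")

image = pipe("a photo of a corgi wearing sunglasses", num_inference_steps=28).images[0]
image.save("corgi.png")
```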

To use this model, run:

```python
from diffusers import FluxTransformer2DModel

# Load the fp8_e4m3fn transformer weights from this repository
transformer = FluxTransformer2DModel.from_pretrained("rockerBOO/Flux-Dev2Pro-fp8_e4m3fn")
```
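
The transformer can then serve as the base model for LoRA training or inference in a Flux pipeline. A minimal sketch, assuming the fp8 weights are upcast to bfloat16 (most kernels do not operate on fp8 weights directly; exact dtype handling depends on your torch/diffusers versions):

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the fp8 checkpoint and upcast to bfloat16
transformer = FluxTransformer2DModel.from_pretrained(
    "rockerBOO/Flux-Dev2Pro-fp8_e4m3fn", torch_dtype=torch.bfloat16
)

# Drop it into the standard Flux-Dev pipeline (e.g. as the base for LoRA training)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```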

“The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.”
