---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
widget:
- text: >-
    Serene raven-haired woman, moonlit lilies, swirling botanicals, yarn art
    style
  output:
    url: images/yarn_merged1.png
- text: a puppy in a pond, yarn art style
  output:
    url: images/yarn_merged2.png
- text: >-
    Ornate fox with a collar of autumn leaves and berries, amidst a tapestry
    of forest foliage, yarn art style
  output:
    url: images/yarn_merged3.png
instance_prompt: null
datasets:
- Norod78/Yarn-art-style
---
# LoRA for FLUX.1-dev - Yarn Art Style

This repository contains a LoRA (Low-Rank Adaptation) fine-tuned on [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) to generate images in the artistic style of yarn art. This work is part of the blog post "Fine-Tuning FLUX.1-dev on consumer hardware and in FP8".

<Gallery />

## Inference

There are two main ways to use this LoRA for inference: loading the adapter on the fly, or merging it into the base model.

### Option 1: Loading LoRA Adapters
This approach offers flexibility, allowing you to easily switch between different LoRA styles (see the adapter-switching sketch after the code block below).

```python
from diffusers import FluxPipeline
import torch

ckpt_id = "black-forest-labs/FLUX.1-dev"
pipeline = FluxPipeline.from_pretrained(
    ckpt_id, torch_dtype=torch.float16
)

# Load the LoRA adapter on top of the base model
pipeline.load_lora_weights("derekl35/yarn-qlora-flux", weight_name="pytorch_lora_weights.safetensors")

# Offload idle components to CPU to reduce VRAM usage
pipeline.enable_model_cpu_offload()

image = pipeline(
    "a puppy in a pond, yarn art style",
    num_inference_steps=28,
    guidance_scale=3.5,
    height=768,
    generator=torch.manual_seed(0),
).images[0]
image.save("yarn_loaded.png")
```
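
Because the adapter stays separate from the base weights, you can swap or combine styles without rebuilding the pipeline. A minimal sketch using diffusers' PEFT integration; instead of the single `load_lora_weights` call above, register adapters under explicit names (the second repository id here is a hypothetical placeholder for any other FLUX LoRA):

```python
# Register adapters under explicit names (the second repo id is a placeholder)
pipeline.load_lora_weights("derekl35/yarn-qlora-flux", weight_name="pytorch_lora_weights.safetensors", adapter_name="yarn")
pipeline.load_lora_weights("some-user/another-flux-lora", adapter_name="other")

# Activate only the yarn-art adapter...
pipeline.set_adapters("yarn")

# ...or blend both styles with per-adapter weights
pipeline.set_adapters(["yarn", "other"], adapter_weights=[1.0, 0.6])

# Remove all adapters to get the base model back
pipeline.unload_lora_weights()
```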
### Option 2: Merging LoRA into Base Model

Merging the LoRA into the base model can yield slightly faster inference and is useful when you want to use a single style consistently.

```python
from diffusers import FluxPipeline
import torch

ckpt_id = "black-forest-labs/FLUX.1-dev"
pipeline = FluxPipeline.from_pretrained(
    ckpt_id, torch_dtype=torch.float16
)

# Fuse the LoRA weights into the transformer, then drop the adapter
pipeline.load_lora_weights("derekl35/yarn-qlora-flux", weight_name="pytorch_lora_weights.safetensors")
pipeline.fuse_lora()
pipeline.unload_lora_weights()

# You can save the fused transformer for later use
# pipeline.transformer.save_pretrained("fused_transformer")

pipeline.enable_model_cpu_offload()

image = pipeline(
    "a puppy in a pond, yarn art style",
    num_inference_steps=28,
    guidance_scale=3.5,
    height=768,
    generator=torch.manual_seed(0),
).images[0]
image.save("yarn_merged.png")
```
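
If you saved the fused transformer as shown in the commented line above, you can later build a pipeline directly from it and skip the fuse step entirely. A minimal sketch, assuming the fused weights were saved to a local "fused_transformer" directory:

```python
from diffusers import FluxPipeline, FluxTransformer2DModel
import torch

# Load the previously saved fused transformer (assumes the directory exists)
transformer = FluxTransformer2DModel.from_pretrained(
    "fused_transformer", torch_dtype=torch.float16
)

# Build the pipeline around it; all other components come from the base checkpoint
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```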
You can also requantize the fused model to NF4 to cut its memory footprint:

```python
from diffusers import FluxPipeline, AutoPipelineForText2Image, FluxTransformer2DModel, BitsAndBytesConfig
import torch

ckpt_id = "black-forest-labs/FLUX.1-dev"

# Fuse the LoRA into the transformer and save it.
# The text encoders are skipped here because no prompts are encoded at this stage.
pipeline = FluxPipeline.from_pretrained(
    ckpt_id, text_encoder=None, text_encoder_2=None, torch_dtype=torch.float16
)
pipeline.load_lora_weights("derekl35/yarn-qlora-flux", weight_name="pytorch_lora_weights.safetensors")
pipeline.fuse_lora()
pipeline.unload_lora_weights()
pipeline.transformer.save_pretrained("fused_transformer")

# Reload the fused transformer in 4-bit NF4
bnb_4bit_compute_dtype = torch.bfloat16
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=bnb_4bit_compute_dtype,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "fused_transformer",
    quantization_config=nf4_config,
    torch_dtype=bnb_4bit_compute_dtype,
)

# Build a full pipeline (including text encoders) around the quantized transformer
pipeline = AutoPipelineForText2Image.from_pretrained(
    ckpt_id, transformer=transformer, torch_dtype=bnb_4bit_compute_dtype
)
pipeline.enable_model_cpu_offload()

image = pipeline(
    "a puppy in a pond, yarn art style",
    num_inference_steps=28,
    guidance_scale=3.5,
    height=768,
    generator=torch.manual_seed(0),
).images[0]
image.save("yarn_merged.png")
```
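
NF4 stores the transformer weights in 4 bits, so its memory footprint drops to roughly a quarter of the float16 version at the cost of a small amount of quality; compute still runs in bfloat16 as configured above.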