Setup

Install the latest version of diffusers:

pip install git+https://github.com/huggingface/diffusers.git
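
Since this is a bitsandbytes 4-bit checkpoint loaded with a device map, you will likely also need bitsandbytes and accelerate installed; this is an assumption based on the checkpoint name rather than a documented requirement:

pip install bitsandbytes accelerate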

Log in to your Hugging Face account:

hf auth login
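
If you are working in a notebook or cannot use the CLI, the huggingface_hub Python API provides an equivalent login:

from huggingface_hub import login

# Prompts interactively for a token; alternatively pass token="hf_..." directly.
login()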

How to use

The following code snippet demonstrates how to use the FLUX.2 modular pipeline with a remote text encoder and a 4-bit quantized version of the DiT. Generating an image requires approximately 19 GB of VRAM.

import torch
from diffusers.modular_pipelines.flux2 import ALL_BLOCKS
from diffusers.modular_pipelines import SequentialPipelineBlocks

# Assemble the pipeline from the "remote" block preset, which uses a remote
# text encoder instead of loading the encoder locally.
blocks = SequentialPipelineBlocks.from_blocks_dict(ALL_BLOCKS["remote"])
pipe = blocks.init_pipeline("diffusers/flux2-bnb-4bit-modular")

# Load the 4-bit quantized components onto the GPU.
pipe.load_components(torch_dtype=torch.bfloat16, device_map="cuda")

prompt = "a photo of a cat"
outputs = pipe(prompt=prompt, num_inference_steps=28, output="images")
outputs[0].save("flux2-bnb-modular.png")
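
For reproducible outputs, standard diffusers pipelines accept a generator argument; a minimal sketch, assuming the modular pipeline forwards it the same way:

# Seed a generator for deterministic sampling (assumes the modular pipeline
# accepts `generator` like standard diffusers pipelines).
generator = torch.Generator(device="cuda").manual_seed(0)
outputs = pipe(prompt=prompt, num_inference_steps=28, generator=generator, output="images")
outputs[0].save("flux2-bnb-modular-seeded.png")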