cartoon-control-lr_1e-4-wd_1e-4-gs_10.0-cd_0.1
These are Flux control weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning. The instruction-tuning-sd/cartoonization dataset was used for training. You can find some example images below.
License
Please adhere to the licensing terms as described here.
Intended uses & limitations
How to use
import torch
from diffusers import FluxControlPipeline, FluxTransformer2DModel
from diffusers.utils import load_image

path = "sayakpaul/cartoon-control-lr_1e-4-wd_1e-4-gs_10.0-cd_0.1"

# Load the fine-tuned control transformer and plug it into the Flux Control pipeline.
transformer = FluxTransformer2DModel.from_pretrained(path, torch_dtype=torch.bfloat16)
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Generate a cartoonized version of the image"
url = "https://huggingface.co/sayakpaul/cartoon-control-lr_1e-4-wd_1e-4-gs_10.0-cd_0.1/resolve/main/taj.jpg"
# Download the conditioning image and resize it to the resolution used during training.
image = load_image(url).resize((1024, 1024))

gen_image = pipe(
    prompt=prompt,
    control_image=image,
    guidance_scale=10.0,
    num_inference_steps=50,
    generator=torch.manual_seed(0),
    max_sequence_length=512,
).images[0]
gen_image.save("output.png")
Refer to the Flux Control docs here.
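If GPU memory is tight, you can let diffusers offload idle submodules to the CPU instead of keeping the whole pipeline on the GPU. This is an optional sketch using the standard enable_model_cpu_offload() utility; it is not specific to these weights and trades some speed for lower VRAM usage.

# Optional: lower VRAM usage by offloading idle submodules to the CPU.
# Use this instead of calling .to("cuda") on the pipeline above.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()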
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details