Usage

See demo.py for full demonstrations. This model generates 256×256 histopathology images.

Load Model

from diffusers import DiffusionPipeline
import torch

# Select a GPU if one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# trust_remote_code is required because the custom pipeline code is loaded from the Hub
pipeline = DiffusionPipeline.from_pretrained(
    "xiangjx/MuPaD-256",
    custom_pipeline="xiangjx/MuPaD-256",
    trust_remote_code=True,
)
pipeline.to(device)

Text-to-Image Generation

Generate histopathology images from a text prompt.

# Text-to-Image generation
prompt = "lung adenocarcinoma"

output_t2i = pipeline(
    prompt=prompt,
    modality="text",
    num_images_per_prompt=4,
    num_inference_steps=250,
    guidance_scale=2.5,
    guidance_high=0.75,
    guidance_low=0.0,
    mode="sde",
    path_type="linear",
    seed=42
)

for i, img in enumerate(output_t2i["images"]):
    img.save(f"text2image_{i}.png")
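To inspect the four samples side by side instead of as separate files, they can be tiled into one image with Pillow. This helper is a generic sketch, not part of the MuPaD pipeline; it assumes all images share the same size, as the pipeline's outputs do.

```python
from PIL import Image

def make_grid(images, cols=2):
    """Tile equally sized PIL images into a single grid image."""
    w, h = images[0].size
    rows = (len(images) + cols - 1) // cols
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        # Paste each image at its (column, row) offset
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# Usage with the text-to-image outputs above:
# make_grid(output_t2i["images"], cols=2).save("text2image_grid.png")
```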

Example text-to-image results

Image-to-Image Generation

Generate images conditioned on a reference image.

from PIL import Image

# Load a reference image (replace the path with your own)
raw_image = Image.open("test_image.png").convert("RGB")

output_i2i = pipeline(
    image=raw_image,
    modality="image",
    num_images_per_prompt=4,
    num_inference_steps=250,
    guidance_scale=2.5,
    guidance_high=0.75,
    guidance_low=0.0,
    mode="sde",
    path_type="linear",
    seed=42
)

for i, img in enumerate(output_i2i["images"]):
    img.save(f"image2image_{i}.png")

Example image-to-image results

Software Dependencies

  • torch>=2.0.0
  • diffusers>=0.35.1
  • timm>=0.9.0
  • pillow
  • huggingface-hub
  • dictdot
  • einops
  • fairscale
  • transformers==4.57.3
  • sentencepiece
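The pins above map directly onto a requirements.txt (versions copied from the list; no extra packages assumed):

```
torch>=2.0.0
diffusers>=0.35.1
timm>=0.9.0
pillow
huggingface-hub
dictdot
einops
fairscale
transformers==4.57.3
sentencepiece
```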