---
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-image
---
# Model Card for GeoSynth-OSM

This is a ControlNet-based model that synthesizes satellite images given OpenStreetMap tiles. The base Stable Diffusion model used is stable-diffusion-2-1-base (`v2-1_512-ema-pruned.ckpt`).
- Use it with 🧨 diffusers
- Use it with controlnet repository
## Model Sources
- Repository: stable-diffusion
- Paper: Adding Conditional Control to Text-to-Image Diffusion Models
## Examples
```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from PIL import Image

# Load the OSM tile used as the ControlNet conditioning image
img = Image.open("osm_tile_18_42048_101323.jpeg")

controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")
scheduler = UniPCMultistepScheduler.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", subfolder="scheduler"
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, scheduler=scheduler
)

pipe.enable_xformers_memory_efficient_attention()  # optional; requires xformers to be installed
pipe.enable_model_cpu_offload()  # requires accelerate; offloads submodules to save GPU memory

# generate image
generator = torch.manual_seed(10345340)
image = pipe(
    "Satellite image features a city neighborhood",
    num_inference_steps=50,
    generator=generator,
    image=img,
    controlnet_conditioning_scale=1.0,
).images[0]

image.save("generated_city.jpg")
```
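The conditioning image passed as `image=` should match the base model's 512×512 training resolution. A minimal sketch of preparing a tile with PIL (the synthetic placeholder image here stands in for an actual OSM tile, which is an assumption for illustration):

```python
from PIL import Image

# Synthetic placeholder standing in for an OSM tile of arbitrary size.
img = Image.new("RGB", (1024, 768), color=(240, 237, 229))

# Resize to the 512x512 resolution expected by stable-diffusion-2-1-base.
img = img.resize((512, 512), Image.LANCZOS)
print(img.size)  # (512, 512)
```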
## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
## More Information

[More Information Needed]

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]