Time to host your own demo! In this section, you'll load your trained model, wrap it in a Gradio demo, and deploy it.
In the notebook, replace the model name with your own model name:
```python
import torch
from diffusers import DiffusionPipeline

multi_view_diffusion_pipeline = DiffusionPipeline.from_pretrained(
    "<your-user-name>/<your-model-name>",
    custom_pipeline="dylanebert/multi-view-diffusion",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")
```
Then, re-run the notebook. You should see the same results as before.
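If you want to sanity-check the pipeline outside of the notebook cells, the snippet below mirrors the same call. The input and output file names are placeholders; the pipeline parameters match the ones used in the demo later in this section.

```python
import numpy as np
from PIL import Image

# Load a test image and normalize it to [0, 1], as the pipeline expects
# ("input.png" is a placeholder path)
image = np.array(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0

# Generate four views; parameters match the Gradio demo below
views = multi_view_diffusion_pipeline(
    "", image, guidance_scale=5, num_inference_steps=30, elevation=0
)

# Save each view for inspection (output names are placeholders)
for i, view in enumerate(views):
    Image.fromarray((view * 255).astype("uint8")).save(f"view_{i}.png")
```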
Now, let’s create a Gradio demo:
```python
import gradio as gr
import numpy as np
from PIL import Image

def run(image):
    # Normalize the input image to [0, 1]
    image = np.array(image, dtype=np.float32) / 255.0
    # Generate four views of the object
    images = multi_view_diffusion_pipeline(
        "", image, guidance_scale=5, num_inference_steps=30, elevation=0
    )
    images = [Image.fromarray((img * 255).astype("uint8")) for img in images]
    # Arrange the four views in a 2x2 grid
    width, height = images[0].size
    grid_img = Image.new("RGB", (2 * width, 2 * height))
    grid_img.paste(images[0], (0, 0))
    grid_img.paste(images[1], (width, 0))
    grid_img.paste(images[2], (0, height))
    grid_img.paste(images[3], (width, height))
    return grid_img

demo = gr.Interface(fn=run, inputs="image", outputs="image")
demo.launch()
```
The `run` function combines all the code from earlier into a single function. `gr.Interface` then uses this function to create a demo with image inputs and image outputs.
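If you want more control over the interface, you can pass explicit Gradio components instead of the `"image"` shortcuts. This is an optional variation rather than part of the original demo; the title and description strings are illustrative.

```python
# Optional variation: explicit components instead of the "image" shortcuts.
# The title and description strings are illustrative placeholders.
demo = gr.Interface(
    fn=run,
    inputs=gr.Image(type="pil", label="Input image"),
    outputs=gr.Image(label="Generated views"),
    title="Multi-view diffusion",
    description="Generate four views of an object from a single image.",
)
demo.launch()
```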
Congratulations! You’ve created a Gradio demo for your model.
You probably want to run your demo outside of Colab. Here are three options for doing so:
Option 1: Hugging Face Spaces. Go to Hugging Face Spaces and create a new Space, choosing the Gradio Space SDK. Create a new file in the Space called `app.py` and paste in the code from the Gradio demo, then copy the demo's `requirements.txt` into the Space. For a complete example, check out this Space, then click `Files` in the top right to view the source code.
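If you prefer to create the Space from code rather than the web UI, the `huggingface_hub` client can do it. This is an optional sketch; the Space name below is a placeholder, and it assumes `app.py` and `requirements.txt` are in your working directory.

```python
# Optional: create the Space and upload the demo files from Python.
# The Space id below is a placeholder; replace it with your own.
from huggingface_hub import HfApi

api = HfApi()
space_id = "<your-user-name>/<your-space-name>"
api.create_repo(space_id, repo_type="space", space_sdk="gradio")
api.upload_file(path_or_fileobj="app.py", path_in_repo="app.py",
                repo_id=space_id, repo_type="space")
api.upload_file(path_or_fileobj="requirements.txt", path_in_repo="requirements.txt",
                repo_id=space_id, repo_type="space")
```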
Note: This approach requires a GPU to host publicly, which costs money. However, you can run the demo locally for free, following the instructions in Option 3.
Option 2: Deploy with Gradio. Gradio makes it easy to deploy your demo to a server using the `gradio deploy` command. For more details, check out the Gradio documentation.
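For example, from the directory containing `app.py` and `requirements.txt`, something like the following should work; `gradio deploy` walks you through the Space details interactively.

```bash
pip install gradio
gradio deploy
```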
Option 3: Run locally. To run the demo locally, copy the code into a Python file (e.g. `app.py`) and run it on your machine. The full source file should look like this:
```python
import gradio as gr
import numpy as np
import torch
from diffusers import DiffusionPipeline
from PIL import Image

# Load the custom multi-view diffusion pipeline
multi_view_diffusion_pipeline = DiffusionPipeline.from_pretrained(
    "dylanebert/multi-view-diffusion",
    custom_pipeline="dylanebert/multi-view-diffusion",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")

def run(image):
    # Normalize the input image to [0, 1]
    image = np.array(image, dtype=np.float32) / 255.0
    # Generate four views of the object
    images = multi_view_diffusion_pipeline(
        "", image, guidance_scale=5, num_inference_steps=30, elevation=0
    )
    images = [Image.fromarray((img * 255).astype("uint8")) for img in images]
    # Arrange the four views in a 2x2 grid
    width, height = images[0].size
    grid_img = Image.new("RGB", (2 * width, 2 * height))
    grid_img.paste(images[0], (0, 0))
    grid_img.paste(images[1], (width, 0))
    grid_img.paste(images[2], (0, height))
    grid_img.paste(images[3], (width, height))
    return grid_img

demo = gr.Interface(fn=run, inputs="image", outputs="image")
demo.launch()
```
To set up and run this demo in a virtual Python environment, run the following:
```bash
# Setup
python -m venv venv
source venv/bin/activate
pip install -r https://huggingface.co/spaces/dylanebert/multi-view-diffusion/raw/main/requirements.txt

# Run
python app.py
```
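When the script starts, Gradio prints a local URL (by default http://127.0.0.1:7860) that you can open in your browser to use the demo.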
Note: This was tested using Python 3.10.12 and CUDA 12.1 on an NVIDIA RTX 4090.