---
license: creativeml-openrail-m
---
# Overview 📃✏️
This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237). See the original page for more information.

Keep in mind that this is an SDXL-Lightning-based model,
so fewer inference steps (around 12 to 25) and a low guidance
scale (around 4 to 6) are recommended for the best results.
A clip skip of 2 is also recommended.

This repo uses DPM++ 2M Karras as its sampler (Diffusers only). 

# Diffusers Installation 🧨
### Dependencies Installation 📁
First, you'll need to install a few dependencies. This is a one-time operation; you only need to run the code once.
```py
!pip install -q diffusers transformers accelerate
```
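If you want to confirm the packages are available in your environment, you can optionally print their versions; this is just a quick sanity check and is not required:
```py
# Optional: confirm the installed package versions.
import diffusers, transformers, accelerate
print(diffusers.__version__, transformers.__version__, accelerate.__version__)
```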
### Model Installation 💿
After the installation, you can run SDXL with the Yiffymix v51 model using the code below:
```py
from diffusers import StableDiffusionXLPipeline
import torch

# Load the model in half precision and move it to the GPU.
model = "IDK-ab0ut/Yiffymix_v51-XL"
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, torch_dtype=torch.float16
).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25          # recommended range: 12 to 25
guidance_scale = 4  # recommended range: 4 to 6

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=steps,
    guidance_scale=guidance_scale,
    clip_skip=2,
).images[0]
image  # in a notebook, this displays the generated image
```
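The DPM++ 2M Karras sampler mentioned above should be picked up automatically from the repo's scheduler config. If you want to set it explicitly, here is a minimal sketch using Diffusers' `DPMSolverMultistepScheduler`; the keyword arguments shown are standard Diffusers options, not anything specific to this repo:
```py
from diffusers import DPMSolverMultistepScheduler

# Explicitly switch the pipeline to DPM++ 2M Karras.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)
```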

Feel free to adjust the generation settings to your liking.
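For example, you could try the lower end of the recommended step range with a slightly higher guidance scale, then save the result to disk (the file name below is just a placeholder):
```py
# Fewer steps and higher guidance, still within the recommended ranges.
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=12,
    guidance_scale=6,
    clip_skip=2,
).images[0]
image.save("yiffymix_v51_sample.png")
```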