---
base_model: runwayml/stable-diffusion-v1-5
license: creativeml-openrail-m
language:
  - en
library_name: diffusers
pipeline_tag: text-to-image
inference: true
---

[Original model page](https://civitai.com/models/25694/epicrealism)
Examples

|                                 |                                 |
|---------------------------------|---------------------------------|
| ![](.hf/readme/examples/1.jpeg) | ![](.hf/readme/examples/2.jpeg) |
| ![](.hf/readme/examples/3.jpeg) | ![](.hf/readme/examples/4.jpeg) |
| ![](.hf/readme/examples/5.jpeg) | ![](.hf/readme/examples/6.jpeg) |
| ![](.hf/readme/examples/7.jpeg) | ![](.hf/readme/examples/8.jpeg) |

-----------
## Natural Sin Final and last of epiCRealism

~~Since SDXL is right around the corner~~, let's say this is the final version for now, since I put a lot of effort into it and probably cannot do much more. I tried to refine the understanding of prompts, hands and, of course, the realism.\
Let's see what you guys can do with it.

Advice:

- use simple prompts
- no need for keywords like "masterpiece, best quality, 8k, intricate, high detail" or "(extremely detailed face), (extremely detailed hands), (extremely detailed hair)", since they don't produce an appreciable change
- use simple negatives or small negative embeddings; this gives the most realistic look (check the samples to get an idea of the negatives I used)
- add "asian, chinese" to the negative prompt if you're looking for ethnicities other than Asian
- light, shadows and details are excellent without extra keywords
- if you're looking for a natural effect, avoid "cinematic"
- avoid "1girl", since it pushes things toward a render/anime style
- too much description of the face will mostly turn out badly
- no extra noise offset is needed, but you can add one if you like 😉

## How to use?

Prompt: a simple description of the image (try first without extra keywords)\
Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"\
Steps: >50 (if the image has errors or artefacts, use more steps)\
CFG Scale: 5.0 (a higher CFG scale can lose realism, depending on the prompt, sampler and steps)\
Size: 512x768 or 768x512

## Use it with 🧨 diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Diffusers documentation](https://huggingface.co/docs/diffusers/index).

```python
import torch
from diffusers import DiffusionPipeline

# Load the model in half precision and move it to the GPU
pipe = DiffusionPipeline.from_pretrained(
    'kamillaova/epiCRealism',
    torch_dtype=torch.float16,
).to('cuda')

# Generate with the recommended settings: CFG 5.0, 50 steps, 512x768
img = pipe(
    prompt='positive prompt',
    negative_prompt='negative prompt',
    guidance_scale=5.0,
    num_inference_steps=50,
    width=512,
    height=768,
).images[0]

img.save('image.png')
```
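As a filled-in variant of the snippet above, the sketch below plugs in the negative prompt and settings recommended in the "How to use?" section; the positive prompt and the fixed seed are illustrative placeholders, not values from the original model page.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    'kamillaova/epiCRealism',
    torch_dtype=torch.float16,
).to('cuda')

# Negative prompt recommended in the "How to use?" section
negative = 'cartoon, painting, illustration, (worst quality, low quality, normal quality:2)'

# Placeholder seed, fixed only so the result is reproducible
generator = torch.Generator(device='cuda').manual_seed(42)

img = pipe(
    prompt='photo of an elderly fisherman repairing a net at the harbor',  # placeholder prompt
    negative_prompt=negative,
    guidance_scale=5.0,        # higher values can reduce realism
    num_inference_steps=50,    # increase if the image shows errors or artefacts
    width=512,
    height=768,
    generator=generator,
).images[0]

img.save('example.png')
```

Per the advice above, the prompt stays simple and skips quality keywords; if the result looks off, raising `num_inference_steps` is usually enough.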