---
license: creativeml-openrail-m
library_name: diffusers
---

## Model Details

- **Repository:** [https://github.com/LetterLiGo/SafeGen_CCS2024](https://github.com/LetterLiGo/SafeGen_CCS2024)
- **Paper [SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models](https://arxiv.org/abs/2404.06666):** To appear in ACM CCS 2024.
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Cite as:**

```bibtex
@inproceedings{li2024safegen,
  author    = {Li, Xinfeng and Yang, Yuchen and Deng, Jiangyi and Yan, Chen and Chen, Yanjiao and Ji, Xiaoyu and Xu, Wenyuan},
  title     = {{SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models}},
  booktitle = {Proceedings of the 2024 {ACM} {SIGSAC} Conference on Computer and Communications Security (CCS)},
  year      = {2024},
}
```

# How to use the pretrained weights for inference?

### Bash Script

```Bash
#!/bin/bash
safety_config="MAX"
prompts_path=""
model_name="SafeGen_SLD_max"
image_nums=1
evaluation_folder="/"${safety_config}_${image_nums}
model_version=""

python3 SafeGen_SLD_inference.py \
    --model_name ${model_name} \
    --model_version ${model_version} \
    --prompts_path ${prompts_path} \
    --save_path ${evaluation_folder} \
    --safety_config ${safety_config} \
    --num_samples ${image_nums} \
    --from_case 0
```
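The `prompts_path` argument points to a CSV file of prompts. Based on the column names documented in the Python script below (`case_number`, `prompt`, `seed`), a minimal prompts file could be built as in the following sketch; the file name, prompt texts, and seeds are placeholders, not part of the original release.

```python
# Hypothetical example: create a minimal prompts CSV with the columns the
# inference script expects. Prompts and seeds below are placeholders.
import pandas as pd

pd.DataFrame({
    "case_number": [0, 1],
    "prompt": ["a photo of a cat", "a watercolor landscape"],
    "seed": [42, 43],
}).to_csv("prompts.csv", index=False)
```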
### Python Script

```python
'''
@filename: SafeGen_SLD_inference.py
@author: Xinfeng Li
@function: SafeGen can be integrated seamlessly with text-dependent defenses, such as Safe Latent Diffusion (Schramowski et al., CVPR 2023).
'''
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig
import argparse
import pandas as pd
import os
import torch
from PIL import Image

device = "cuda"

def image_grid(imgs, rows=2, cols=3):
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

row, col = 2, 3

def generate_images(model_name, prompts_path, save_path, device='cuda:0', safety_config="MAX",
                    guidance_scale=7.5, from_case=0, num_samples=10,
                    model_version="AIML-TUDA/stable-diffusion-safe"):
    '''
    Function to generate images from diffusers code

    The program requires the prompts to be in a csv format with headers
        1. 'case_number' (used for file naming of image)
        2. 'prompt' (the prompt used to generate image)
        3. 'seed' (the initial seed to generate Gaussian noise for diffusion input)

    Parameters
    ----------
    model_name : str
        name of the model to load.
    prompts_path : str
        path for the csv file with prompts and corresponding seeds.
    save_path : str
        save directory for images.
    device : str, optional
        device to be used to load the model. The default is 'cuda:0'.
    num_samples : int, optional
        number of samples generated per prompt. The default is 10.
    from_case : int, optional
        The starting offset in csv to generate images. The default is 0.

    Returns
    -------
    None.

    '''
    pipeline = StableDiffusionPipelineSafe.from_pretrained(model_version)
    print(pipeline.safety_concept)
    pipeline = pipeline.to(device)

    df = pd.read_csv(prompts_path)
    folder_path = f'{save_path}/{model_name}'
    os.makedirs(folder_path, exist_ok=True)

    for _, row in df.iterrows():
        prompt = [str(row.prompt)] * num_samples
        case_number = row.case_number
        if case_number < from_case:
            continue

        # The original excerpt is truncated here; the lines below are a hedged
        # reconstruction of the remaining loop body: generate num_samples images
        # per prompt under the chosen SLD safety configuration and save them.
        generator = torch.manual_seed(int(row.seed))
        out = pipeline(
            prompt=prompt,
            generator=generator,
            guidance_scale=guidance_scale,
            **getattr(SafetyConfig, safety_config),
        )
        for num, im in enumerate(out.images):
            im.save(f"{folder_path}/{case_number}_{num}.png")
```

The snippet below shows how the built-in image safety checker can be disabled in the installed `diffusers` package, by editing `/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py` under your Python environment and commenting out the checker call:

```Python
# 8. Post-processing
image = self.decode_latents(latents)

# 9. Run safety checker
# image, has_nsfw_concept, flagged_images = self.run_safety_checker(
#     image, device, prompt_embeds.dtype, enable_safety_guidance
# )
has_nsfw_concept = None; flagged_images = None
```
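The bash script above calls `SafeGen_SLD_inference.py` as a command-line program, but the excerpt of that script omits the argument-parsing entry point. A minimal sketch consistent with the flags used in the bash script could look like the following; the flag names come from the bash script, while the defaults and structure here are assumptions rather than the authors' original code.

```python
# Hypothetical entry point for SafeGen_SLD_inference.py, reconstructed from the
# flags passed by the bash script above; not part of the original release.
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="SafeGen + SLD inference")
    parser.add_argument("--model_name", type=str, required=True)
    parser.add_argument("--model_version", type=str, default="AIML-TUDA/stable-diffusion-safe")
    parser.add_argument("--prompts_path", type=str, required=True)
    parser.add_argument("--save_path", type=str, required=True)
    # Choices mirror the SafetyConfig presets shipped with diffusers.
    parser.add_argument("--safety_config", type=str, default="MAX",
                        choices=["WEAK", "MEDIUM", "STRONG", "MAX"])
    parser.add_argument("--num_samples", type=int, default=1)
    parser.add_argument("--from_case", type=int, default=0)
    args = parser.parse_args()

    generate_images(
        model_name=args.model_name,
        prompts_path=args.prompts_path,
        save_path=args.save_path,
        safety_config=args.safety_config,
        num_samples=args.num_samples,
        from_case=args.from_case,
        model_version=args.model_version,
    )
```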