Update README.md
README.md
CHANGED
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
---

# **Guard-Against-Unsafe-Content-Siglip2**

**Guard-Against-Unsafe-Content-Siglip2** is an image classification model fine-tuned from the vision-language encoder **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to detect **NSFW content**, including **vulgarity and nudity**, using the **SiglipForImageClassification** architecture.

The model categorizes images into two classes:

- **Class 0:** "Unsafe Content" – the image contains vulgarity, nudity, or other explicit content.
- **Class 1:** "Safe Content" – the image is appropriate and does not contain unsafe elements.

```python
!pip install -q transformers torch pillow gradio
```

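Before wiring up the demo, it can be useful to confirm the class-index-to-label mapping described above. The snippet below is a minimal sketch; it assumes the dependencies above are installed, and the expected output simply mirrors the two classes listed earlier.

```python
from transformers import AutoConfig

# Inspect the label mapping stored in the model configuration
config = AutoConfig.from_pretrained("prithivMLmods/Guard-Against-Unsafe-Content-Siglip2")
print(config.id2label)
# Expected, per the class list above: {0: 'Unsafe Content', 1: 'Safe Content'}
```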

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Guard-Against-Unsafe-Content-Siglip2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def nsfw_detection(image):
    """Predicts NSFW probability scores for an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = model.config.id2label
    predictions = {labels[i]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=nsfw_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="NSFW Content Detection"),
    title="NSFW Image Detection",
    description="Upload an image to check if it contains unsafe content such as vulgarity or nudity."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
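
For quick, non-interactive checks, the same checkpoint can also be run through the `transformers` `pipeline` API. The snippet below is a minimal sketch: `"example.jpg"` is a placeholder path, and the labels and scores shown in the comment are illustrative (the real label strings come from the model's `id2label` mapping).

```python
from transformers import pipeline

# Build an image-classification pipeline on the same checkpoint used above
classifier = pipeline(
    "image-classification",
    model="prithivMLmods/Guard-Against-Unsafe-Content-Siglip2",
)

# "example.jpg" is a placeholder; a PIL.Image or an image URL also works
results = classifier("example.jpg")
print(results)
# Illustrative output format:
# [{'label': 'Safe Content', 'score': 0.98}, {'label': 'Unsafe Content', 'score': 0.02}]
```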

# **Intended Use:**

The **Guard-Against-Unsafe-Content-Siglip2** model is designed to detect **inappropriate and explicit content** in images. It helps distinguish between **safe** and **unsafe** images based on the presence of **vulgarity, nudity, or other NSFW elements**.

### Potential Use Cases:

- **NSFW Content Detection:** Identifying images that contain explicit content so inappropriate material can be filtered out.
- **Content Moderation:** Assisting platforms in filtering unsafe images before they are shared publicly (see the thresholding sketch below).
- **Parental Controls:** Enabling automated filtering of explicit images in child-friendly environments.
- **Safe Image Classification:** Helping AI-powered applications distinguish between safe and unsafe content.

This model is intended for **research, content moderation, and automated safety applications** rather than for **real-time detection** of explicit content.
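
As a rough illustration of the content-moderation use case, the `nsfw_detection` helper from the Gradio example can be wrapped in a simple threshold check. This is only a sketch: the 0.5 cutoff is an assumed default that should be tuned per deployment, and the `"Unsafe Content"` key is assumed to match the label names in `model.config.id2label`.

```python
import numpy as np
from PIL import Image

def is_unsafe(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the 'Unsafe Content' score exceeds the threshold.

    Relies on the nsfw_detection() function defined in the Gradio example above.
    The 0.5 default threshold is an assumption and should be tuned per use case.
    """
    image = np.array(Image.open(image_path).convert("RGB"))
    scores = nsfw_detection(image)  # e.g. {"Unsafe Content": 0.91, "Safe Content": 0.09}
    return scores.get("Unsafe Content", 0.0) >= threshold

# Hypothetical usage with a placeholder path:
# if is_unsafe("upload.jpg"):
#     print("Blocked: image flagged as unsafe")
```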