Update README.md
README.md CHANGED
@@ -62,6 +62,8 @@ if __name__ == "__main__":
     iface.launch()
 ```
 
+TrainOutput(global_step=376, training_loss=0.11756020403922872, metrics={'train_runtime': 597.6963, 'train_samples_per_second': 20.077, 'train_steps_per_second': 0.629, 'total_flos': 1.005065949855744e+18, 'train_loss': 0.11756020403922872, 'epoch': 2.0})
+
 # **Intended Use:**
 
 The **Guard-Against-Unsafe-Content-Siglip2** model is designed to detect **inappropriate and explicit content** in images. It helps distinguish between **safe** and **unsafe** images based on the presence of **vulgarity, nudity, or other NSFW elements**.
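The `TrainOutput` line added above is the summary the Hugging Face `Trainer` returns from `trainer.train()`. As a quick sanity check (not part of the README), its numbers are internally consistent: steps per second times runtime recovers the total step count, and samples per second divided by steps per second approximates the effective batch size. A minimal sketch using only the values quoted in the diff:

```python
# Values copied from the TrainOutput line in the diff above.
metrics = {
    "global_step": 376,
    "train_runtime": 597.6963,           # seconds
    "train_samples_per_second": 20.077,
    "train_steps_per_second": 0.629,
    "epoch": 2.0,
}

# steps/s * runtime should reproduce the total number of optimizer steps
print(round(metrics["train_steps_per_second"] * metrics["train_runtime"]))  # 376

# samples/s divided by steps/s approximates the effective per-step batch size
print(round(metrics["train_samples_per_second"] / metrics["train_steps_per_second"]))  # ~32
```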
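For the intended use described above, inference would typically go through the standard `transformers` image-classification pipeline. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub; the repo id and image path below are hypothetical placeholders, not taken from the README:

```python
from PIL import Image
from transformers import pipeline

# Hypothetical repo id; substitute the actual Hub path of the checkpoint.
classifier = pipeline(
    "image-classification",
    model="Guard-Against-Unsafe-Content-Siglip2",
)

image = Image.open("example.jpg")  # placeholder input image
for pred in classifier(image):
    # Each prediction carries a label (e.g. safe / unsafe) and a confidence score.
    print(f"{pred['label']}: {pred['score']:.4f}")
```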