
Deepfake-Quality-Classifier2-SigLIP2

Deepfake-Quality-Classifier2-SigLIP2 is an image classification model fine-tuned from the google/siglip2-base-patch16-224 vision-language encoder for a single-label classification task. It is designed to assess the quality of deepfake images using the SiglipForImageClassification architecture.

Classification Report:
                       precision    recall  f1-score   support

    Issue In Deepfake     0.8352    0.7800    0.8067      5000
High Quality Deepfake     0.7951    0.8500    0.8217      5000

             accuracy                         0.8245     10000
            macro avg     0.8152    0.8245    0.8142     10000
         weighted avg     0.8152    0.8245    0.8142     10000

The model categorizes images into two classes:

  • Class 0: "Issue In Deepfake" – indicating that the deepfake image has noticeable flaws or inconsistencies.
  • Class 1: "High Quality Deepfake" – indicating that the deepfake image is of high quality and appears more realistic.
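
The same mapping is stored with the checkpoint's configuration; a quick sketch to confirm it, assuming the config ships an id2label table matching the two classes above:

from transformers import AutoConfig

# Inspect the label mapping saved with the checkpoint (keys are class indices)
config = AutoConfig.from_pretrained("prithivMLmods/Deepfake-Quality-Classifier2-SigLIP2")
print(config.id2label)  # expected: {0: 'Issue In Deepfake', 1: 'High Quality Deepfake'}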

Run with Transformers🤗

!pip install -q transformers torch pillow gradio

import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Deepfake-Quality-Classifier2-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def deepfake_detection(image):
    """Predicts deepfake quality scores for an image."""
    image = Image.fromarray(image).convert("RGB")  # Gradio passes the image as a NumPy array
    inputs = processor(images=image, return_tensors="pt")
    
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    
    labels = {"0": "Issue In Deepfake", "1": "High Quality Deepfake"}
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=deepfake_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Deepfake Quality Detection",
    description="Upload an image to check its deepfake quality scores."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
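
For a quick check without the Gradio app, the same classification can be run through the high-level pipeline API; a minimal sketch, where "sample.jpg" is only a placeholder path:

from transformers import pipeline

# The pipeline resolves the processor and label names from the checkpoint itself
clf = pipeline("image-classification", model="prithivMLmods/Deepfake-Quality-Classifier2-SigLIP2")
print(clf("sample.jpg"))  # e.g. [{'label': 'High Quality Deepfake', 'score': ...}, ...]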

Intended Use:

The Deepfake-Quality-Classifier2-SigLIP2 model is designed to evaluate the quality of deepfake images. It helps distinguish between high-quality deepfakes and those with noticeable issues. Potential use cases include:

  • Deepfake Quality Assessment: Identifying whether a generated deepfake meets high-quality standards or contains artifacts and inconsistencies.
  • Content Moderation: Assisting in filtering low-quality deepfake images in digital media platforms.
  • Forensic Analysis: Supporting researchers and analysts in assessing the credibility of synthetic images.
  • Deepfake Model Benchmarking: Helping developers compare and improve deepfake generation models (a batch-scoring sketch follows this list).
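
For the benchmarking use case, the sketch below scores every image in a folder and reports the share rated "High Quality Deepfake". It reuses the model and processor loaded above; the folder paths and the 0.5 threshold are assumptions, not part of the model card.

import glob

import torch
from PIL import Image

def high_quality_rate(folder, threshold=0.5):
    """Fraction of images in `folder` scored as 'High Quality Deepfake'."""
    paths = glob.glob(f"{folder}/*.jpg") + glob.glob(f"{folder}/*.png")
    hits = 0
    for path in paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1).squeeze()
        if probs[1].item() >= threshold:  # index 1 = "High Quality Deepfake"
            hits += 1
    return hits / max(len(paths), 1)

# Compare two generators by the share of outputs rated high quality, e.g.:
# print(high_quality_rate("outputs/model_a"), high_quality_rate("outputs/model_b"))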