[Paper] [GitHub]

Robust perceptual metric, based on the DINOv2 model facebook/dinov2-base. Adversarially fine-tuned with FARE (Schlarmann et al., 2024) on ImageNet, using an ℓ∞ threat model with radius 4/255.
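In rough terms, FARE fine-tunes the image encoder so that embeddings of adversarially perturbed inputs stay close to the frozen original encoder's embeddings of the clean inputs. The sketch below only illustrates that objective; the PGD settings, variable names, and the pooler_output interface are assumptions, not the released training code.

import torch

def fare_loss(model, frozen_model, x, eps=4/255, alpha=1/255, steps=10):
    # Clean reference embedding from the frozen, original encoder.
    with torch.no_grad():
        target = frozen_model(pixel_values=x).pooler_output

    # PGD in the l_inf ball of radius eps: find a perturbation that pushes
    # the fine-tuned encoder's embedding away from the clean reference.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        emb = model(pixel_values=x + delta).pooler_output
        adv_obj = ((emb - target) ** 2).sum(dim=-1).mean()
        grad, = torch.autograd.grad(adv_obj, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    # FARE loss: squared distance between adversarial embeddings and the clean
    # reference, minimized with respect to the fine-tuned model's weights.
    emb = model(pixel_values=x + delta.detach()).pooler_output
    return ((emb - target) ** 2).sum(dim=-1).mean()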

Usage

from torchvision import transforms
from transformers import AutoModel

preprocessor = transforms.Compose([
    transforms.Resize(256, interpolation=3),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
model = AutoModel.from_pretrained("chs20/dinov2-base-fare4")
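To use the embeddings as a perceptual metric, one option is to compare features of two images, e.g. via cosine similarity. A minimal sketch (the file names are placeholders, and the pooled CLS embedding is one reasonable feature choice; the model card itself does not prescribe one):

import torch
from PIL import Image

img_a = preprocessor(Image.open("image_a.png").convert("RGB")).unsqueeze(0)
img_b = preprocessor(Image.open("image_b.png").convert("RGB")).unsqueeze(0)

model.eval()
with torch.no_grad():
    emb_a = model(pixel_values=img_a).pooler_output
    emb_b = model(pixel_values=img_b).pooler_output

# Higher similarity = perceptually closer; 1 - similarity can serve as a distance.
similarity = torch.nn.functional.cosine_similarity(emb_a, emb_b).item()
print(f"perceptual similarity: {similarity:.4f}")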

Citation

If you find this model useful, please consider citing our papers:

@inproceedings{croce2024adversarially,
  title={Adversarially Robust CLIP Models Can Induce Better (Robust) Perceptual Metrics},
  author={Croce, Francesco and Schlarmann, Christian and Singh, Naman Deep and Hein, Matthias},
  year={2025},
  booktitle={{SaTML}}
}
@inproceedings{schlarmann2024robustclip,
  title={Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models},
  author={Schlarmann, Christian and Singh, Naman Deep and Croce, Francesco and Hein, Matthias},
  year={2024},
  booktitle={{ICML}}
}