FSFM-3C Models (Pre-trained/Fine-tuned Vision Transformers)
A self-supervised pre-training framework to learn a transferable facial representation that boosts various downstream face security tasks.
Environment
Git clone our repository, create a Python environment, and activate it via the following commands:
conda create -n fsfm3c python=3.9
conda activate fsfm3c
pip install -r requirements.txt
Model Loading
import os
import torch

import models_vit  # provided in the FSFM-3C repository
from huggingface_hub import hf_hub_download

CKPT_SAVE_PATH = [your checkpoint storage path]  # local directory to store the checkpoint
CKPT_NAME = [checkpoint name]                    # name of the checkpoint file in the repo

# Download the checkpoint from the Hugging Face Hub
hf_hub_download(local_dir=CKPT_SAVE_PATH,
                repo_id='Wolowolo/fsfm-3c',
                filename=CKPT_NAME)

# Build a ViT-B/16 backbone with a 2-class head
model = models_vit.__dict__['vit_base_patch16'](
    num_classes=2,
    drop_path_rate=0.1,
    global_pool=True,
)

# Load the downloaded weights
checkpoint = torch.load(os.path.join(CKPT_SAVE_PATH, CKPT_NAME), map_location='cpu')
model.load_state_dict(checkpoint['model'])
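After loading, the model can be run as a standard binary classifier. The snippet below is a minimal sketch: it assumes ImageNet-style preprocessing at 224x224, 'face.png' is a placeholder path, and the order of the two class indices (real vs. fake) depends on the fine-tuning setup; consult the evaluation transforms in the FSFM-3C repository for exact settings.

from PIL import Image
import torch
from torchvision import transforms

# Assumed preprocessing; match the repository's eval transforms for reliable results.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model.eval()
img = Image.open('face.png').convert('RGB')   # placeholder input image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))   # shape: [1, 2]
    probs = torch.softmax(logits, dim=-1)          # class probabilities
print(probs)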
Downstream Use
The pre-trained models can be fine-tuned for face security tasks such as deepfake detection, face anti-spoofing, and diffusion-based facial forgery detection, as sketched below.
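As a rough illustration of downstream fine-tuning (not the training recipe from the paper), a minimal binary-classification loop might look as follows; `train_loader` is a hypothetical DataLoader yielding (image, label) batches with labels in {0, 1}, and the hyperparameters are illustrative only.

import torch

# Minimal fine-tuning sketch: cross-entropy over the 2-class head.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:   # hypothetical DataLoader of face crops
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()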
BibTeX entry and citation info
@article{wang2024fsfm,
  title={FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning},
  author={Wang, Gaojian and Lin, Feng and Wu, Tong and Liu, Zhenguang and Ba, Zhongjie and Ren, Kui},
  journal={arXiv preprint arXiv:2412.12032},
  year={2024}
}