---
license: cc-by-nc-4.0
---
# FSFM-3C Models (Pre-trained Vision Transformers)
A self-supervised pre-training framework to learn a transferable facial representation that boosts various face security tasks.
- **Paper** [FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning](https://arxiv.org/abs/2412.12032)
- **Repository** [GitHub](TODO)
- **Project** [fsfm-3c.github.io](https://fsfm-3c.github.io/)
## Environment
Clone our repository, then create and activate a Python environment with the following commands:
```bash
conda create -n fsfm3c python=3.9
conda activate fsfm3c
pip install -r requirements.txt
```
## Model Loading
```python
import os

import torch
from huggingface_hub import hf_hub_download

import models_vit  # provided in our repository

CKPT_SAVE_PATH = '<your checkpoint storage path>'  # local directory to store the checkpoint
CKPT_NAME = '<checkpoint name>'                    # name of a checkpoint file in this repo

# download the checkpoint from the Hugging Face Hub
hf_hub_download(local_dir=CKPT_SAVE_PATH,
                repo_id='Wolowolo/fsfm-3c',
                filename=CKPT_NAME)

# build the ViT-B/16 backbone with a 2-class head
model = models_vit.__dict__['vit_base_patch16'](
    num_classes=2,
    drop_path_rate=0.1,
    global_pool=True,
)

# load the pre-trained weights
checkpoint = torch.load(os.path.join(CKPT_SAVE_PATH, CKPT_NAME), map_location='cpu')
model.load_state_dict(checkpoint['model'])
```
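As a quick sanity check, the loaded model can be run on a dummy input. This is a minimal sketch that assumes the standard 224×224 input resolution of `vit_base_patch16`; real inputs should be aligned face crops preprocessed as in our repository.

```python
import torch

model.eval()
# dummy batch of one 224x224 RGB image (assumed input size of vit_base_patch16)
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)  # shape: (1, 2), from the 2-class head defined above
print(logits.softmax(dim=-1))
```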
## Downstream Use
The pre-trained models can be fine-tuned for face security tasks such as deepfake detection, face anti-spoofing, and diffusion facial forgery detection.
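Below is a minimal fine-tuning sketch for such a binary task (e.g., real vs. fake). It assumes `model` was loaded as above and that `my_face_dataset` is a placeholder for your own PyTorch dataset of labeled face crops; the optimizer settings are illustrative, not the exact recipe from the paper.

```python
import torch
from torch.utils.data import DataLoader

# hypothetical dataset yielding (face_crop_tensor, label) pairs: label 0 = real, 1 = fake/spoof
train_loader = DataLoader(my_face_dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```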
## BibTeX entry and citation info
```bibtex
@article{wang2024fsfm,
title={FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning},
author={Wang, Gaojian and Lin, Feng and Wu, Tong and Liu, Zhenguang and Ba, Zhongjie and Ren, Kui},
journal={arXiv preprint arXiv:2412.12032},
year={2024}
}
```