---
license: cc-by-nc-4.0
---

# FSFM-3C Models (Pre-trained/Fine-tuned Vision Transformers)


A self-supervised pre-training framework that learns a transferable facial representation to boost various downstream face security tasks.

**Paper:** [FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning](https://arxiv.org/abs/2412.12032)

**Project:** [https://fsfm-3c.github.io/](https://fsfm-3c.github.io/)

**GitHub Repository:** [https://github.com/wolo-wolo/FSFM](https://github.com/wolo-wolo/FSFM)
  
## Environment
Clone our repository, then create and activate a Python environment via the following commands:
```bash
git clone https://github.com/wolo-wolo/FSFM
cd FSFM
conda create -n fsfm3c python=3.9
conda activate fsfm3c
pip install -r requirements.txt
```

## Model Loading
```python
import os

import torch

import models_vit
from huggingface_hub import hf_hub_download

CKPT_SAVE_PATH = 'path/to/checkpoints'  # your local checkpoint storage path
CKPT_NAME = 'checkpoint-name.pth'       # name of the checkpoint file in the repo

# Download the checkpoint from the Hugging Face Hub
hf_hub_download(local_dir=CKPT_SAVE_PATH,
                repo_id='Wolowolo/fsfm-3c',
                filename=CKPT_NAME)

# Build the ViT-B/16 backbone with a 2-class head and load the weights
model = models_vit.__dict__['vit_base_patch16'](
    num_classes=2,
    drop_path_rate=0.1,
    global_pool=True,
)
checkpoint = torch.load(os.path.join(CKPT_SAVE_PATH, CKPT_NAME), map_location='cpu')
model.load_state_dict(checkpoint['model'])
```
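
Once loaded, a fine-tuned checkpoint behaves as a standard binary classifier. The following is a minimal inference sketch; the 224×224 input size, ImageNet normalization, and `face.png` input are illustrative assumptions, so consult the GitHub repository for the exact preprocessing used during fine-tuning:
```python
import torch
from PIL import Image
from torchvision import transforms

# Illustrative preprocessing (assumption: 224x224 inputs with ImageNet
# statistics; verify against the repository's transforms).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model.eval()
image = Image.open('face.png').convert('RGB')  # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))  # shape: (1, 2)
    probs = torch.softmax(logits, dim=-1)           # per-class probabilities
print(probs)
```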

## Downstream Use
The released checkpoints support face security tasks such as deepfake detection, face anti-spoofing, and diffusion facial forgery detection, as sketched below.
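
To adapt the pre-trained backbone to one of these tasks, a minimal fine-tuning sketch follows; the optimizer settings are illustrative rather than the paper's recipe, and `train_loader` is a hypothetical `DataLoader` yielding (image, label) batches. The GitHub repository provides the official fine-tuning scripts:
```python
import torch

# Minimal fine-tuning loop (illustrative hyperparameters, not the
# paper's; see the GitHub repository for the official recipe).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # hypothetical DataLoader of (image, label) batches
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```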

## BibTeX entry and citation info

```bibtex
@article{wang2024fsfm,
  title={FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning},
  author={Wang, Gaojian and Lin, Feng and Wu, Tong and Liu, Zhenguang and Ba, Zhongjie and Ren, Kui},
  journal={arXiv preprint arXiv:2412.12032},
  year={2024}
}
```