Model card for CLIP ViT-B-16 trained with the CLIP-KD method on Laion-400M

Model Description

A CLIP ViT-B/16 model trained with the CLIP-KD method on Laion-400M. The weights were converted from the ViT_B_16-laion400m_e32.pt checkpoint in open_clip to the Hugging Face CLIP format.
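
Because the weights are in the Hugging Face CLIP format, the model should load with the standard transformers CLIP classes. Below is a minimal usage sketch; the repository id is taken from this page's collection listing and the image path is a placeholder, so adjust both as needed.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repository id assumed from the collection listing on this page.
model_id = "romrawinjp/clip-kd_ViT-B-16_Baseline-Laion400M"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Placeholder image and candidate captions for zero-shot classification.
image = Image.open("example.jpg")
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits, softmaxed over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```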

Reference

Please refer to the original work:

@inproceedings{yang2024clip,
  title={CLIP-KD: An Empirical Study of CLIP Model Distillation},
  author={Yang, Chuanguang and An, Zhulin and Huang, Libo and Bi, Junyu and Yu, Xinqiang and Yang, Han and Diao, Boyu and Xu, Yongjun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}