---
pipeline_tag: zero-shot-classification
base_model:
- openai/clip-vit-base-patch16
- openai/clip-vit-base-patch32
- openai/clip-vit-large-patch14
- openai/clip-vit-large-patch14-336
language:
- en
tags:
- transformers
- clip
- image
- dghs-realutils
library_name: dghs-realutils
---
ONNX-exported versions of the OpenAI CLIP models, for use with the `dghs-realutils` library.
# Models

4 models are exported in total.
| Name | Image Encoder (Params / FLOPs) | Image Size | Image Width (Enc / Emb) | Text Encoder (Params / FLOPs) | Text Width (Enc / Emb) | Created At |
|:-----------------------------------------------------------------------------------------------|:-------------------------------|-----------:|:------------------------|:------------------------------|:-----------------------|:-----------|
| [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336)   | 302.9M / 174.7G                |        336 | 1024 / 768              | 85.1M / 1.2G                  | 768 / 768              | 2022-04-22 |
| [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)           | 302.9M / 77.8G                 |        224 | 1024 / 768              | 85.1M / 1.2G                  | 768 / 768              | 2022-03-03 |
| [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16)             | 85.6M / 16.9G                  |        224 | 768 / 512               | 37.8M / 529.2M                | 512 / 512              | 2022-03-03 |
| [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)             | 87.4M / 4.4G                   |        224 | 768 / 512               | 37.8M / 529.2M                | 512 / 512              | 2022-03-03 |
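
The ONNX graphs here replicate the zero-shot classification computation of the upstream checkpoints: both encoders project into a shared embedding space, and label scores are temperature-scaled cosine similarities. As a reference for what the exports compute, below is a minimal sketch using the original `transformers` checkpoints (not the ONNX files); the image URL and candidate labels are illustrative only.

```python
# Zero-shot classification with an upstream CLIP checkpoint.
# The ONNX exports in this repository mirror this computation.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative test image and labels.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One text prompt per candidate label.
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)

# logits_per_image holds cosine similarities scaled by the learned
# temperature; softmax over labels yields class probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```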