nielsr (HF Staff) committed
Commit 2768a2e · verified · 1 Parent(s): 8f86a5e

Improve model card: Add InternVL3.5 paper, pipeline tag, library name, and project page


This Pull Request enhances the model card for `OpenGVLab/InternViT-300M-448px-V2_5` by:

- Adding a prominent link to the foundational paper for InternVL3.5: [InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency](https://huggingface.co/papers/2508.18265).
- Updating the `pipeline_tag` to `zero-shot-image-classification` to better reflect a key capability of the InternVL family, making it discoverable at https://huggingface.co/models?pipeline_tag=zero-shot-image-classification.
- Adding `library_name: transformers` so the Hub can surface automated `transformers` usage snippets, improving accessibility for users (a sketch of such a snippet follows below).
- Including a direct link to the project page (https://chat.intern-ai.org.cn/) in the main content for easier access to related resources and demos.
- Updating the Citation section to include the `InternVL3.5` paper.

These changes aim to improve the model's discoverability, usability, and contextual information on the Hugging Face Hub.
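
For example, with `library_name: transformers` set, the Hub can show a loading snippet along these lines. This is a minimal sketch assuming the model follows the usual InternViT quick-start pattern (`AutoModel` with `trust_remote_code=True` plus `CLIPImageProcessor`); the image path is a placeholder:

```python
# Minimal sketch of the kind of usage snippet that `library_name: transformers`
# enables on the Hub. Assumes this model follows the standard InternViT
# quick-start pattern; the image path below is a placeholder.
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model = AutoModel.from_pretrained(
    "OpenGVLab/InternViT-300M-448px-V2_5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # the encoder ships custom modeling code
).cuda().eval()

processor = CLIPImageProcessor.from_pretrained("OpenGVLab/InternViT-300M-448px-V2_5")

image = Image.open("image.jpg").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Forward pass: the encoder produces patch-level image features,
# consistent with its role as a vision backbone for MLLMs.
outputs = model(pixel_values)
```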

Files changed (1): README.md (+40, -11)
````diff
@@ -1,13 +1,16 @@
 ---
-license: mit
-pipeline_tag: image-feature-extraction
 base_model: OpenGVLab/InternViT-300M-448px
+license: mit
+pipeline_tag: zero-shot-image-classification
 base_model_relation: finetune
+library_name: transformers
 ---
 
 # InternViT-300M-448px-V2_5
 
-[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271)
+This vision encoder is a component of the **InternVL3.5** family of multimodal models, as presented in the paper [InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency](https://huggingface.co/papers/2508.18265).
+
+[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL3.5 Paper\]](https://huggingface.co/papers/2508.18265) [\[🌐 Project Page\]](https://chat.intern-ai.org.cn/) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271)
 
 [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
@@ -82,7 +85,7 @@ We present a comprehensive evaluation of the vision encoder’s performance acro
 
 ## Quick Start
 
-> \[!Warning\]
+> [!Warning]
 > 🚨 Note: In our experience, the InternViT V2.5 series is better suited for building MLLMs than traditional computer vision tasks.
 
 ```python
@@ -115,23 +118,49 @@ This project is released under the MIT License.
 If you find this project useful in your research, please consider citing:
 
 ```BibTeX
+@article{wang2025internvl3_5,
+  title={InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency},
+  author={Wang, Weiyun and Gao, Zhangwei and Gu, Lixin and Pu, Hengjun and Cui, Long and Wei, Xingguang and Liu, Zhaoyang and Jing, Linglin and Ye, Shenglong and Shao, Jie and others},
+  journal={arXiv preprint arXiv:2508.18265},
+  year={2025}
+}
+@article{zhu2025internvl3,
+  title={Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models},
+  author={Zhu, Jinguo and Wang, Weiyun and Chen, Zhe and Liu, Zhaoyang and Ye, Shenglong and Gu, Lixin and Tian, Hao and Duan, Yuchen and Su, Weijie and Shao, Jie and others},
+  journal={arXiv preprint arXiv:2504.10479},
+  year={2025}
+}
 @article{chen2024expanding,
   title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
   author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
   journal={arXiv preprint arXiv:2412.05271},
   year={2024}
 }
+@article{wang2024mpo,
+  title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
+  author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
+  journal={arXiv preprint arXiv:2411.10442},
+  year={2024}
+}
 @article{gao2024mini,
-  title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
+  title={Mini-InternVL: a flexible-transfer pocket multi-modal model with 5\% parameters and 90\% performance},
   author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
-  journal={arXiv preprint arXiv:2410.16261},
-  year={2024}
+  journal={Visual Intelligence},
+  volume={2},
+  number={1},
+  pages={1--17},
+  year={2024},
+  publisher={Springer}
 }
 @article{chen2024far,
-  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
+  title={How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites},
   author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
-  journal={arXiv preprint arXiv:2404.16821},
-  year={2024}
+  journal={Science China Information Sciences},
+  volume={67},
+  number={12},
+  pages={220101},
+  year={2024},
+  publisher={Springer}
 }
 @inproceedings{chen2024internvl,
   title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
@@ -140,4 +169,4 @@ If you find this project useful in your research, please consider citing:
   pages={24185--24198},
   year={2024}
 }
-```
+```
````