Weiyun1025 committed
Commit e7a151f · verified · 1 Parent(s): f49d3d8

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +25 -24
README.md CHANGED
@@ -7,6 +7,7 @@ base_model:
 base_model_relation: finetune
 datasets:
 - OpenGVLab/MMPR-v1.2
+- OpenGVLab/MMPR-Tiny
 language:
 - multilingual
 tags:
@@ -42,6 +43,9 @@ To maintain consistency with earlier generations, we provide two model formats:
 > If you want to convert the checkpoint between these two formats, please refer to the [custom2hf](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/tools/internvl_custom2hf.py) and [hf2custom](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/tools/internvl_hf2custom.py) scripts.
 
 
+### GitHub Format
+
+
 | Model | #Vision Param | #Language Param | #Total Param | HF Link | ModelScope Link |
 | --------------------- | ------------- | --------------- | ------------ | ------- | --------------- |
 | InternVL3.5-1B | 0.3B | 0.8B | 1.1B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-1B) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B) |
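For context, the two formats in this hunk load differently in practice. A minimal sketch, assuming the loading conventions of earlier InternVL releases; the repo IDs come from the tables in this diff, everything else is an assumption rather than anything stated in the commit:

```python
# Minimal loading sketch; the exact entry points are not specified in this diff.
import torch
from transformers import AutoModel, AutoModelForImageTextToText

# GitHub format: custom modeling code is bundled with the checkpoint.
model_custom = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3_5-8B",      # any row from the "GitHub Format" table
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,          # runs the repo's own modeling code
).eval()

# HuggingFace format: native transformers implementation, no remote code needed.
model_hf = AutoModelForImageTextToText.from_pretrained(
    "OpenGVLab/InternVL3_5-8B-HF",   # matching row from the "HuggingFace Format" table
    torch_dtype=torch.bfloat16,
).eval()
```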
@@ -55,6 +59,22 @@ To maintain consistency with earlier generations, we provide two model formats:
 | InternVL3.5-241B-A28B | 5.5B | 235.1B | 240.7B-A29B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B) |
 
 
+### HuggingFace Format
+
+
+| Model | #Vision Param | #Language Param | #Total Param | HF Link | ModelScope Link |
+| ------------------------ | ------------- | --------------- | ------------ | ------- | --------------- |
+| InternVL3.5-1B-HF | 0.3B | 0.8B | 1.1B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-1B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B-HF) |
+| InternVL3.5-2B-HF | 0.3B | 2.0B | 2.3B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-2B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B-HF) |
+| InternVL3.5-4B-HF | 0.3B | 4.4B | 4.7B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-4B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B-HF) |
+| InternVL3.5-8B-HF | 0.3B | 8.2B | 8.5B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-8B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B-HF) |
+| InternVL3.5-14B-HF | 0.3B | 14.8B | 15.1B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-14B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B-HF) |
+| InternVL3.5-38B-HF | 5.5B | 32.8B | 38.4B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-38B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B-HF) |
+| InternVL3.5-20B-A4B-HF | 0.3B | 20.9B | 21.2B-A4B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview-HF) |
+| InternVL3.5-30B-A3B-HF | 0.3B | 30.5B | 30.8B-A3B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B-HF) |
+| InternVL3.5-241B-A28B-HF | 5.5B | 235.1B | 240.7B-A29B | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-HF) | [🤖 link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B-HF) |
+
+
 ![image/jpg](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B/resolve/main/images/performance_overall.jpg)
 
 > We conduct the evaluation with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). ***To enable the Thinking mode of our model, please set the system prompt to [R1_SYSTEM_PROMPT](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/internvl/internvl_chat.py#L38).*** When enabling Thinking mode, we recommend setting `do_sample=True` and `temperature=0.6` to mitigate undesired repetition.
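The Thinking-mode note above translates to a small generation setup. A minimal sketch, assuming the chat-template API of the HF-format checkpoints; the system-prompt text itself must be copied from the linked VLMEvalKit source and is not reproduced here:

```python
# Sketch of enabling Thinking mode with an HF-format checkpoint; model ID,
# image URL, and the prompt placeholder are illustrative assumptions.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "OpenGVLab/InternVL3_5-8B-HF"  # any HF-format checkpoint
R1_SYSTEM_PROMPT = "..."  # paste from the VLMEvalKit link above

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": [{"type": "text", "text": R1_SYSTEM_PROMPT}]},
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/chart.jpg"},  # placeholder image
        {"type": "text", "text": "Describe this image."},
    ]},
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

# Recommended sampling settings when Thinking mode is on (see the note above).
output = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```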
@@ -802,29 +822,10 @@ This project is released under the apache-2.0 License. This project uses the pre
 If you find this project useful in your research, please consider citing:
 
 ```BibTeX
-@article{chen2024expanding,
-  title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
-  author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
-  journal={arXiv preprint arXiv:2412.05271},
-  year={2024}
-}
-@article{wang2024mpo,
-  title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
-  author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
-  journal={arXiv preprint arXiv:2411.10442},
-  year={2024}
-}
-@article{chen2024far,
-  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
-  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
-  journal={arXiv preprint arXiv:2404.16821},
-  year={2024}
-}
-@inproceedings{chen2024internvl,
-  title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
-  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
-  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
-  pages={24185--24198},
-  year={2024}
+@article{wang2025internvl3_5,
+  title={InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency},
+  author={Wang, Weiyun and Gao, Zhangwei and Gu, Lixin and Pu, Hengjun and Cui, Long and Wei, Xingguang and Liu, Zhaoyang and Jing, Linglin and Ye, Shenglong and Shao, Jie and others},
+  journal={arXiv preprint arXiv:2508.18265},
+  year={2025}
 }
 ```
 