BabyChou committed
Commit 30d21de · verified · 1 Parent(s): 5e13dcf

Update README.md

Files changed (1): README.md (+16 -197)
README.md CHANGED
@@ -4,101 +4,9 @@ license_name: yi-license
  license_link: LICENSE
  ---
 
- <div align="center">
-
- <picture>
- <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px">
- <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
- <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px">
- </picture>
-
- </div>
-
- <div align="center">
- <h1 align="center">Yi Vision Language Model</h1>
- </div>
-
-
- <div align="center">
- <h3 align="center">Better Bilingual Multimodal Model</h3>
- </div>
-
- <p align="center">
- 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • ✡️ <a href="https://wisemodel.cn/organization/01.AI" target="_blank">WiseModel</a>
- </p>
-
- <p align="center">
- 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>!
- </p>
-
- <p align="center">
- 👋 Join us 💬 <a href="https://github.com/01-ai/Yi/issues/43#issuecomment-1827285245" target="_blank"> WeChat (Chinese) </a>!
- </p>
-
- <p align="center">
- 📚 Grow at <a href="https://github.com/01-ai/Yi/blob/main/docs/learning_hub.md"> Yi Learning Hub </a>!
- </p>
-
- <hr>
-
- <!-- DO NOT REMOVE ME -->
-
- <details open>
- <summary><b>📕 Table of Contents</b></summary>
-
- - [What is Yi-VL?](#what-is-yi-vl)
-   - [Overview](#overview)
-   - [Models](#models)
-   - [Features](#features)
-   - [Architecture](#architecture)
-   - [Training](#training)
-   - [Limitations](#limitations)
- - [Why Yi-VL?](#why-yi-vl)
-   - [Benchmarks](#benchmarks)
-   - [Showcases](#showcases)
- - [How to use Yi-VL?](#how-to-use-yi-vl)
-   - [Quick start](#quick-start)
-   - [Hardware requirements](#hardware-requirements)
- - [Misc.](#misc)
-   - [Acknowledgements and attributions](#acknowledgements-and-attributions)
-     - [List of used open-source projects](#list-of-used-open-source-projects)
-   - [License](#license)
-
- </details>
-
- <hr>
 
  # What is Yi-VL?
 
- ## Overview
-
- - **Yi Vision Language (Yi-VL)** model is the open-source, multimodal version of the Yi **Large Language Model (LLM)** series, enabling content comprehension, recognition, and multi-round conversations about images.
-
- - Yi-VL demonstrates exceptional performance, **ranking first** among all existing open-source models in the latest benchmarks, including [MMMU](https://mmmu-benchmark.github.io/#leaderboard) in English and [CMMMU](https://cmmmu-benchmark.github.io/) in Chinese (based on data available up to January 2024).
-
- - Yi-VL-34B is the **first** open-source 34B vision language model worldwide.
-
- ## Models
-
- Yi-VL has released the following versions.
-
- Model | Download
- |---|---
- Yi-VL-34B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-34B/summary)
- Yi-VL-6B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-6B/summary)
-
- ## Features
-
- Yi-VL offers the following features:
-
- - Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
-
- - Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images.
-
- - Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
-
- - Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448&times;448.
-
  ## Architecture
 
  Yi-VL adopts the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture, which is composed of three primary components:
@@ -111,120 +19,31 @@ Yi-VL adopts the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture, which is composed of three primary components:
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/EGVHSWG4kAcX01xDaoeXS.png)
 
- ## Training
-
- ### Training process
-
- Yi-VL is trained to align visual information well with the semantic space of the Yi LLM, through a comprehensive three-stage training process:
-
- - Stage 1: The parameters of ViT and the projection module are trained using an image resolution of 224&times;224. The LLM weights are frozen. The training leverages an image caption dataset comprising 100 million image-text pairs from [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/). The primary objective is to enhance the ViT's knowledge acquisition within our specified architecture and to achieve better alignment between the ViT and the LLM.
-
- - Stage 2: The image resolution of ViT is scaled up to 448&times;448, and the parameters of ViT and the projection module are trained. This stage aims to further boost the model's capability for discerning intricate visual details. The dataset used in this stage includes about 25 million image-text pairs, such as [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/), [CLLaVA](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions), [LLaVAR](https://llavar.github.io/), [Flickr](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset), [VQAv2](https://paperswithcode.com/dataset/visual-question-answering-v2-0), [RefCOCO](https://github.com/lichengunc/refer/tree/master), [Visual7w](http://ai.stanford.edu/~yukez/visual7w/), and so on.
-
- - Stage 3: The parameters of the entire model (that is, ViT, projection module, and LLM) are trained. The primary goal is to enhance the model's proficiency in multimodal chat interactions, thereby endowing it with the ability to seamlessly integrate and interpret visual and linguistic inputs. To this end, the training dataset encompasses a diverse range of sources, totalling approximately 1 million image-text pairs, including [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html), [VizWiz VQA](https://vizwiz.org/tasks-and-datasets/vqa/), [TextCaps](https://opendatalab.com/OpenDataLab/TextCaps), [OCR-VQA](https://ocr-vqa.github.io/), [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/api.html), [LAION GPT4V](https://huggingface.co/datasets/laion/gpt4v-dataset), and so on. To ensure data balancing, we impose a cap on the maximum data contribution from any single source, restricting it to no more than 50,000 pairs.
-
- Below are the parameters configured for each stage.
-
- Stage | Global batch size | Learning rate | Gradient clip | Epochs
- |---|---|---|---|---
- Stage 1, 2 | 4096 | 1e-4 | 0.5 | 1
- Stage 3 | 256 | 2e-5 | 1.0 | 2
-
- ### Training resource consumption
-
- - The training uses 128 NVIDIA A800 (80 GB) GPUs.
-
- - The total training time amounted to approximately 10 days for Yi-VL-34B and 3 days for Yi-VL-6B.
-
- ## Limitations
-
- This is the initial release of Yi-VL, which comes with some known limitations. It is recommended to carefully evaluate potential risks before adopting any models.
-
- - Feature limitation
-
-   - Visual question answering is supported. Other features like text-to-3D and image-to-video are not yet supported.
-
-   - A single image rather than several images can be accepted as an input.
-
- - Hallucination problem
-
-   - There is a certain possibility of generating content that does not exist in the image.
-
-   - In scenes containing multiple objects, some objects might be incorrectly identified or described with insufficient detail.
-
- - Resolution issue
-
-   - Yi-VL is trained on images with a resolution of 448&times;448. During inference, inputs of any resolution are resized to 448&times;448. Low-resolution images may result in information loss, and more fine-grained images (above 448) do not bring in extra knowledge.
-
- - Other limitations of the Yi LLM.
-
- # Why Yi-VL?
-
- ## Benchmarks
-
- Yi-VL outperforms all existing open-source models in [MMMU](https://mmmu-benchmark.github.io) and [CMMMU](https://cmmmu-benchmark.github.io/), two advanced benchmarks that include massive multi-discipline multimodal questions (based on data available up to January 2024).
-
- - MMMU
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/kCmXuwLbLvequ93kjh3mg.png)
-
- - CMMMU
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/6YuSakMCg3D2AozixdoZ0.png)
-
- ## Showcases
-
- Below are some representative examples of detailed description and visual question answering, showcasing the capabilities of Yi-VL.
-
- - English
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64cc65d786d8dc0caa6ab3cd/F_2bIVwMtVamygbVqtb8E.png)
-
- - Chinese
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/l_tLzugFtHk1dkVsFJE7B.png)
 
  # How to use Yi-VL?
 
  ## Quick start
 
- Please refer to [Yi GitHub Repo](https://github.com/01-ai/Yi/tree/main/VL) for details.
-
- ## Hardware requirements
-
- For model inference, the recommended GPU examples are:
-
- - Yi-VL-6B: RTX 3090, RTX 4090, A10, A30
-
- - Yi-VL-34B: 4 &times; RTX 4090, A800 (80 GB)
-
- # Misc.
-
- ## Acknowledgements and attributions
-
- This project makes use of open-source software/components. We acknowledge and are grateful to these developers for their contributions to the open-source community.
-
- ### List of used open-source projects
-
- 1. LLaVA
-    - Authors: Haotian Liu, Chunyuan Li, Qingyang Wu, Yuheng Li, and Yong Jae Lee
-    - Source: https://github.com/haotian-liu/LLaVA
-    - License: Apache-2.0
-    - Description: The codebase is based on LLaVA code.
-
- 2. OpenClip
-    - Authors: Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt
-    - Source: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
-    - License: MIT
-    - Description: The ViT is initialized using the weights of OpenClip.
-
- **Notes**
-
- - This attribution does not claim to cover all open-source components used. Please check individual components and their respective licenses for full details.
-
- - The use of the open-source components is subject to the terms and conditions of the respective licenses.
-
- We appreciate the open-source community for their invaluable contributions to the technology world.
+ This model has been integrated into the SGLang codebase, so you can query it by defining a simple function:
+
+ ```python
+ import sglang as sgl
+
+ @sgl.function
+ def image_qa(s, image_path, question):
+     s += sgl.user(sgl.image(image_path) + question)
+     s += sgl.assistant(sgl.gen("answer"))
+
+ # Point the SGLang runtime at this checkpoint and make it the default backend.
+ runtime = sgl.Runtime(model_path="BabyChou/Yi-VL-6B",
+                       tokenizer_path="BabyChou/Yi-VL-6B")
+ sgl.set_default_backend(runtime)
+
+ # Single-image question answering
+ state = image_qa.run(
+     image_path="images/cat.jpeg",
+     question="What is this?",
+     max_new_tokens=64)
+ print(state["answer"], "\n")
+ ```
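
Editor's note, not part of this commit: if you want to answer questions about several images and then free the GPU, a minimal sketch that continues the snippet above might look like the following. It assumes SGLang's documented `run_batch` and `Runtime.shutdown()` frontend calls; the second image path is purely illustrative.

```python
# Hedged sketch: reuses image_qa and runtime from the Quick start snippet above.
# "images/dog.jpeg" is a hypothetical path, not a file shipped with this repo.
states = image_qa.run_batch(
    [
        {"image_path": "images/cat.jpeg", "question": "What is this?"},
        {"image_path": "images/dog.jpeg", "question": "Describe this image."},
    ],
    max_new_tokens=64,
)
for state in states:
    print(state["answer"])

# Release the model server once you are done.
runtime.shutdown()
```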
 
  ## License