# [YOLOE: Real-Time Seeing Anything](https://arxiv.org/abs/2503.07465)

Official PyTorch implementation of **YOLOE**.

<p align="center">
<img src="figures/comparison.svg" width=70%> <br>
Comparison of performance, training cost, and inference efficiency between YOLOE (Ours) and YOLO-Worldv2 in terms of open text prompts.
</p>

[YOLOE: Real-Time Seeing Anything](https://arxiv.org/abs/2503.07465).\
Ao Wang*, Lihao Liu*, Hui Chen, Zijia Lin, Jungong Han, and Guiguang Ding\
[![arXiv](https://img.shields.io/badge/arXiv-2503.07465-b31b1b.svg)](https://arxiv.org/abs/2503.07465) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/jameslahm/yoloe/tree/main) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/jameslahm/yoloe)

We introduce **YOLOE(ye)**, a highly **efficient**, **unified**, and **open** object detection and segmentation model that, like the human eye, can see anything under different prompt mechanisms, such as *texts*, *visual inputs*, and the *prompt-free paradigm*.

<!-- <p align="center">
<img src="figures/pipeline.svg" width=96%> <br>
</p> -->

<p align="center">
<img src="figures/visualization.svg" width=96%> <br>
</p>

<details>
<summary>
<font size="+1">Abstract</font>
</summary>
Object detection and segmentation are widely employed in computer vision applications, yet conventional models like YOLO series, while efficient and accurate, are limited by predefined categories, hindering adaptability in open scenarios. Recent open-set methods leverage text prompts, visual cues, or prompt-free paradigm to overcome this, but often compromise between performance and efficiency due to high computational demands or deployment complexity. In this work, we introduce YOLOE, which integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model, achieving real-time seeing anything. For text prompts, we propose Re-parameterizable Region-Text Alignment (RepRTA) strategy. It refines pretrained textual embeddings via a re-parameterizable lightweight auxiliary network and enhances visual-textual alignment with zero inference and transferring overhead. For visual prompts, we present Semantic-Activated Visual Prompt Encoder (SAVPE). It employs decoupled semantic and activation branches to bring improved visual embedding and accuracy with minimal complexity. For prompt-free scenario, we introduce Lazy Region-Prompt Contrast (LRPC) strategy. It utilizes a built-in large vocabulary and specialized embedding to identify all objects, avoiding costly language model dependency. Extensive experiments show YOLOE's exceptional zero-shot performance and transferability with high inference efficiency and low training cost. Notably, on LVIS, with $3\times$ less training cost and $1.4\times$ inference speedup, YOLOE-v8-S surpasses YOLO-Worldv2-S by 3.5 AP. When transferring to COCO, YOLOE-v8-L achieves 0.6 $AP^b$ and 0.4 $AP^m$ gains over closed-set YOLOv8-L with nearly $4\times$ less training time.
</details>
<p></p>
<p align="center">
<img src="figures/pipeline.svg" width=96%> <br>
</p>

## Performance

### Zero-shot detection evaluation

- *Fixed AP* is reported on the LVIS `minival` set with text (T) / visual (V) prompts.
- Training time is for text-prompt detection training on 8 Nvidia RTX 4090 GPUs.
- FPS is measured on an Nvidia T4 with TensorRT and on an iPhone 12 with CoreML, respectively.
- For training data, OG denotes Objects365v1 and GoldG.
- YOLOE can be re-parameterized into the corresponding YOLO architecture with **zero inference and transferring overhead**.

| Model | Size | Prompt | Params | Data | Time | FPS | $AP$ | $AP_r$ | $AP_c$ | $AP_f$ | Log |
|---|---|---|---|---|---|---|---|---|---|---|---|
| [YOLOE-v8-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8s-seg.pt) | 640 | T / V | 12M / 13M | OG | 12.0h | 305.8 / 64.3 | 27.9 / 26.2 | 22.3 / 21.3 | 27.8 / 27.7 | 29.0 / 25.7 | [T](./logs/yoloe-v8s-seg) / [V](./logs/yoloe-v8s-seg-vp) |
| [YOLOE-v8-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8m-seg.pt) | 640 | T / V | 27M / 30M | OG | 17.0h | 156.7 / 41.7 | 32.6 / 31.0 | 26.9 / 27.0 | 31.9 / 31.7 | 34.4 / 31.1 | [T](./logs/yoloe-v8m-seg) / [V](./logs/yoloe-v8m-seg-vp) |
| [YOLOE-v8-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8l-seg.pt) | 640 | T / V | 45M / 50M | OG | 22.5h | 102.5 / 27.2 | 35.9 / 34.2 | 33.2 / 33.2 | 34.8 / 34.6 | 37.3 / 34.1 | [T](./logs/yoloe-v8l-seg) / [V](./logs/yoloe-v8l-seg-vp) |
| [YOLOE-11-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11s-seg.pt) | 640 | T / V | 10M / 12M | OG | 13.0h | 301.2 / 73.3 | 27.5 / 26.3 | 21.4 / 22.5 | 26.8 / 27.1 | 29.3 / 26.4 | [T](./logs/yoloe-11s-seg) / [V](./logs/yoloe-11s-seg-vp) |
| [YOLOE-11-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11m-seg.pt) | 640 | T / V | 21M / 27M | OG | 18.5h | 168.3 / 39.2 | 33.0 / 31.4 | 26.9 / 27.1 | 32.5 / 31.9 | 34.5 / 31.7 | [T](./logs/yoloe-11m-seg) / [V](./logs/yoloe-11m-seg-vp) |
| [YOLOE-11-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11l-seg.pt) | 640 | T / V | 26M / 32M | OG | 23.5h | 130.5 / 35.1 | 35.2 / 33.7 | 29.1 / 28.1 | 35.0 / 34.6 | 36.5 / 33.8 | [T](./logs/yoloe-11l-seg) / [V](./logs/yoloe-11l-seg-vp) |

### Zero-shot segmentation evaluation

- The models are the same as those in [Zero-shot detection evaluation](#zero-shot-detection-evaluation).
- *Standard AP<sup>m</sup>* is reported on the LVIS `val` set with text (T) / visual (V) prompts.

| Model | Size | Prompt | $AP^m$ | $AP_r^m$ | $AP_c^m$ | $AP_f^m$ |
|---|---|---|---|---|---|---|
| YOLOE-v8-S | 640 | T / V | 17.7 / 16.8 | 15.5 / 13.5 | 16.3 / 16.7 | 20.3 / 18.2 |
| YOLOE-v8-M | 640 | T / V | 20.8 / 20.3 | 17.2 / 17.0 | 19.2 / 20.1 | 24.2 / 22.0 |
| YOLOE-v8-L | 640 | T / V | 23.5 / 22.0 | 21.9 / 16.5 | 21.6 / 22.1 | 26.4 / 24.3 |
| YOLOE-11-S | 640 | T / V | 17.6 / 17.1 | 16.1 / 14.4 | 15.6 / 16.8 | 20.5 / 18.6 |
| YOLOE-11-M | 640 | T / V | 21.1 / 21.0 | 17.2 / 18.3 | 19.6 / 20.6 | 24.4 / 22.6 |
| YOLOE-11-L | 640 | T / V | 22.6 / 22.5 | 19.3 / 20.5 | 20.9 / 21.7 | 26.0 / 24.1 |

### Prompt-free evaluation

- The models are the same as those in [Zero-shot detection evaluation](#zero-shot-detection-evaluation), except for the specialized prompt embedding.
- *Fixed AP* is reported on the LVIS `minival` set, and FPS is measured on an Nvidia T4 GPU with PyTorch.

| Model | Size | Params | $AP$ | $AP_r$ | $AP_c$ | $AP_f$ | FPS | Log |
|---|---|---|---|---|---|---|---|---|
| [YOLOE-v8-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8s-seg-pf.pt) | 640 | 13M | 21.0 | 19.1 | 21.3 | 21.0 | 95.8 | [PF](./logs/yoloe-v8s-seg-pf/) |
| [YOLOE-v8-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8m-seg-pf.pt) | 640 | 29M | 24.7 | 22.2 | 24.5 | 25.3 | 45.9 | [PF](./logs/yoloe-v8m-seg-pf/) |
| [YOLOE-v8-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8l-seg-pf.pt) | 640 | 47M | 27.2 | 23.5 | 27.0 | 28.0 | 25.3 | [PF](./logs/yoloe-v8l-seg-pf/) |
| [YOLOE-11-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11s-seg-pf.pt) | 640 | 11M | 20.6 | 18.4 | 20.2 | 21.3 | 93.0 | [PF](./logs/yoloe-11s-seg-pf/) |
| [YOLOE-11-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11m-seg-pf.pt) | 640 | 24M | 25.5 | 21.6 | 25.5 | 26.1 | 42.5 | [PF](./logs/yoloe-11m-seg-pf/) |
| [YOLOE-11-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11l-seg-pf.pt) | 640 | 29M | 26.3 | 22.7 | 25.8 | 27.5 | 34.9 | [PF](./logs/yoloe-11l-seg-pf/) |

### Downstream transfer on COCO

- During transferring, YOLOE-v8 / YOLOE-11 is **exactly the same** as YOLOv8 / YOLO11.
- For *Linear probing*, only the last conv layer in the classification head is trainable.
- For *Full tuning*, all parameters are trainable.

| Model | Size | Epochs | $AP^b$ | $AP^b_{50}$ | $AP^b_{75}$ | $AP^m$ | $AP^m_{50}$ | $AP^m_{75}$ | Log |
|---|---|---|---|---|---|---|---|---|---|
| Linear probing | | | | | | | | | |
| [YOLOE-v8-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8s-seg-coco-pe.pt) | 640 | 10 | 35.6 | 51.5 | 38.9 | 30.3 | 48.2 | 32.0 | [LP](./logs/yoloe-v8s-seg-coco-pe/) |
| [YOLOE-v8-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8m-seg-coco-pe.pt) | 640 | 10 | 42.2 | 59.2 | 46.3 | 35.5 | 55.6 | 37.7 | [LP](./logs/yoloe-v8m-seg-coco-pe/) |
| [YOLOE-v8-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8l-seg-coco-pe.pt) | 640 | 10 | 45.4 | 63.3 | 50.0 | 38.3 | 59.6 | 40.8 | [LP](./logs/yoloe-v8l-seg-coco-pe/) |
| [YOLOE-11-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11s-seg-coco-pe.pt) | 640 | 10 | 37.0 | 52.9 | 40.4 | 31.5 | 49.7 | 33.5 | [LP](./logs/yoloe-11s-seg-coco-pe/) |
| [YOLOE-11-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11m-seg-coco-pe.pt) | 640 | 10 | 43.1 | 60.6 | 47.4 | 36.5 | 56.9 | 39.0 | [LP](./logs/yoloe-11m-seg-coco-pe/) |
| [YOLOE-11-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11l-seg-coco-pe.pt) | 640 | 10 | 45.1 | 62.8 | 49.5 | 38.0 | 59.2 | 40.6 | [LP](./logs/yoloe-11l-seg-coco-pe/) |
| Full tuning | | | | | | | | | |
| [YOLOE-v8-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8s-seg-coco.pt) | 640 | 160 | 45.0 | 61.6 | 49.1 | 36.7 | 58.3 | 39.1 | [FT](./logs/yoloe-v8s-seg-coco/) |
| [YOLOE-v8-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8m-seg-coco.pt) | 640 | 80 | 50.4 | 67.0 | 55.2 | 40.9 | 63.7 | 43.5 | [FT](./logs/yoloe-v8m-seg-coco/) |
| [YOLOE-v8-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-v8l-seg-coco.pt) | 640 | 80 | 53.0 | 69.8 | 57.9 | 42.7 | 66.5 | 45.6 | [FT](./logs/yoloe-v8l-seg-coco/) |
| [YOLOE-11-S](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11s-seg-coco.pt) | 640 | 160 | 46.2 | 62.9 | 50.0 | 37.6 | 59.3 | 40.1 | [FT](./logs/yoloe-11s-seg-coco/) |
| [YOLOE-11-M](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11m-seg-coco.pt) | 640 | 80 | 51.3 | 68.3 | 56.0 | 41.5 | 64.8 | 44.3 | [FT](./logs/yoloe-11m-seg-coco/) |
| [YOLOE-11-L](https://huggingface.co/jameslahm/yoloe/blob/main/yoloe-11l-seg-coco.pt) | 640 | 80 | 52.6 | 69.7 | 57.5 | 42.4 | 66.2 | 45.2 | [FT](./logs/yoloe-11l-seg-coco/) |

## Installation
A `conda` virtual environment is recommended.
```bash
conda create -n yoloe python=3.10 -y
conda activate yoloe

pip install -r requirements.txt
pip install -e .
pip install -e lvis-api
pip install -e ml-mobileclip
pip install -e CLIP
```

## Demo
If desired objects are not identified, please set a **smaller** confidence threshold, e.g., for visual prompts with handcrafted shapes or cross-image prompts.
```bash
# Optional for mirror: export HF_ENDPOINT=https://hf-mirror.com
pip install gradio==4.42.0 gradio_image_prompter==0.1.0 fastapi==0.112.2
python app.py
# Please visit http://127.0.0.1:7860
```

## Prediction

### Text prompt
```bash
python predict.py
```
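
As a rough illustration of what a text-prompted call looks like, the sketch below assumes the bundled `ultralytics` fork exposes a `YOLOE` model class with `get_text_pe` and `set_classes`; the checkpoint path, class names, and image path are placeholders, and `predict.py` remains the reference entry point.
```python
# Hedged sketch of text-prompted inference; see predict.py for the actual script.
# Assumption: the bundled ultralytics fork provides YOLOE with get_text_pe()/set_classes().
from ultralytics import YOLOE

model = YOLOE("yoloe-v8l-seg.pt")                    # any text/visual-prompt checkpoint from the tables
names = ["person", "bus", "traffic light"]           # free-form class names used as the text prompt
model.set_classes(names, model.get_text_pe(names))   # bind the prompt embeddings to the head

results = model.predict("path/to/image.jpg")         # placeholder image path
results[0].show()
```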

### Visual prompt
```bash
python predict_vp.py
```

### Prompt free
```bash
python predict_pf.py
```
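
In the prompt-free setting, the built-in vocabulary is already baked into the `*-pf` checkpoints, so inference should reduce to a plain predict call; the sketch below is an assumption about how this could look, not a substitute for `predict_pf.py`.
```python
# Hedged sketch of prompt-free inference; predict_pf.py is the reference entry point.
from ultralytics import YOLOE

model = YOLOE("yoloe-v8l-seg-pf.pt")          # prompt-free checkpoint with the built-in vocabulary
results = model.predict("path/to/image.jpg")  # no prompts needed; placeholder image path
results[0].show()
```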

## Validation

### Data
- Please download LVIS following the instructions [here](https://docs.ultralytics.com/zh/datasets/detect/lvis/) or [lvis.yaml](./ultralytics/cfg/datasets/lvis.yaml).
- We use this [`minival.txt`](./tools/lvis/minival.txt) with background images for evaluation.

```bash
# For evaluation with visual prompts, please obtain the referring data.
python tools/generate_lvis_visual_prompt_data.py
```

### Zero-shot evaluation on LVIS
- For text prompts, run `python val.py`.
- For visual prompts, run `python val_vp.py`.

For *Fixed AP*, please refer to the comments in `val.py` and `val_vp.py`, and use `tools/eval_fixed_ap.py` for evaluation.

### Prompt-free evaluation
```bash
python val_pe_free.py
python tools/eval_open_ended.py --json ../datasets/lvis/annotations/lvis_v1_minival.json --pred runs/detect/val/predictions.json --fixed
```

### Downstream transfer on COCO
```bash
python val_coco.py
```

## Training

The training includes three stages:
- YOLOE is trained with text prompts for detection and segmentation for 30 epochs.
- Only the visual prompt encoder (SAVPE) is trained with visual prompts for 2 epochs.
- Only the specialized prompt embedding for the prompt-free paradigm is trained for 1 epoch.

### Data

| Images | Raw Annotations | Processed Annotations |
|---|---|---|
| [Objects365v1](https://opendatalab.com/OpenDataLab/Objects365_v1) | [objects365_train.json](https://opendatalab.com/OpenDataLab/Objects365_v1) | [objects365_train_segm.json](https://huggingface.co/datasets/jameslahm/yoloe/blob/main/objects365_train_segm.json) |
| [GQA](https://nlp.stanford.edu/data/gqa/images.zip) | [final_mixed_train_no_coco.json](https://huggingface.co/GLIPModel/GLIP/blob/main/mdetr_annotations/final_mixed_train_no_coco.json) | [final_mixed_train_no_coco_segm.json](https://huggingface.co/datasets/jameslahm/yoloe/blob/main/final_mixed_train_no_coco_segm.json) |
| [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [final_flickr_separateGT_train.json](https://huggingface.co/GLIPModel/GLIP/blob/main/mdetr_annotations/final_flickr_separateGT_train.json) | [final_flickr_separateGT_train_segm.json](https://huggingface.co/datasets/jameslahm/yoloe/blob/main/final_flickr_separateGT_train_segm.json) |

For annotations, you can directly use our preprocessed ones or use the following script to obtain the processed annotations with segmentation masks.
```bash
# Generate segmentation data
conda create -n sam2 python==3.10.16
conda activate sam2
pip install -r sam2/requirements.txt
pip install -e sam2/

python tools/generate_sam_masks.py --img-path ../datasets/Objects365v1/images/train --json-path ../datasets/Objects365v1/annotations/objects365_train.json --batch
python tools/generate_sam_masks.py --img-path ../datasets/flickr/full_images/ --json-path ../datasets/flickr/annotations/final_flickr_separateGT_train.json
python tools/generate_sam_masks.py --img-path ../datasets/mixed_grounding/gqa/images --json-path ../datasets/mixed_grounding/annotations/final_mixed_train_no_coco.json

# Generate objects365v1 labels
python tools/generate_objects365v1.py
```

Then, please generate the data and embedding cache for training.
```bash
# Generate grounding segmentation cache
python tools/generate_grounding_cache.py --img-path ../datasets/flickr/full_images/ --json-path ../datasets/flickr/annotations/final_flickr_separateGT_train_segm.json
python tools/generate_grounding_cache.py --img-path ../datasets/mixed_grounding/gqa/images --json-path ../datasets/mixed_grounding/annotations/final_mixed_train_no_coco_segm.json

# Generate train label embeddings
python tools/generate_label_embedding.py
python tools/generate_global_neg_cat.py
```
Finally, please download MobileCLIP-B(LT) for the text encoder.
```bash
wget https://docs-assets.developer.apple.com/ml-research/datasets/mobileclip/mobileclip_blt.pt
```
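
`tools/generate_label_embedding.py` is what actually builds the label-embedding cache; the sketch below only illustrates how class-name embeddings could be computed with the installed `ml-mobileclip` package, following the usage shown in its upstream README. The model name `mobileclip_b` and the checkpoint path are assumptions, not a description of the script's internals.
```python
# Illustrative only: embedding class names with MobileCLIP-B(LT), mirroring the
# upstream ml-mobileclip README usage. The repo's tools/generate_label_embedding.py
# is the actual script that produces the training cache.
import torch
import mobileclip

model, _, _ = mobileclip.create_model_and_transforms(
    "mobileclip_b", pretrained="mobileclip_blt.pt"  # checkpoint downloaded above
)
tokenizer = mobileclip.get_tokenizer("mobileclip_b")

names = ["person", "bus", "traffic light"]          # example class names
with torch.no_grad():
    text_features = model.encode_text(tokenizer(names))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print(text_features.shape)                          # (3, embedding_dim)
```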

### Text prompt
```bash
# For models with l scale, please change the initialization by referring to the comments in Line 549 of ultralytics/nn/modules/head.py
# If you want to train YOLOE only for detection, you can use `train.py`
python train_seg.py
```

### Visual prompt
```bash
# For visual prompts, because only SAVPE is trained, we can adopt the detection pipeline with less training time

# First, obtain the detection model
python tools/convert_segm2det.py
# Then, train the SAVPE module
python train_vp.py
# After training, please use tools/get_vp_segm.py to add the segmentation head
# python tools/get_vp_segm.py
```

### Prompt free
```bash
# Generate LVIS with a single class for evaluation during training
python tools/generate_lvis_sc.py

# Similar to visual prompts, because only the specialized prompt embedding is trained, we can adopt the detection pipeline with less training time
python tools/convert_segm2det.py
python train_pe_free.py
# After training, please use tools/get_pf_free_segm.py to add the segmentation head
# python tools/get_pf_free_segm.py
```

## Transferring
After pretraining, YOLOE-v8 / YOLOE-11 can be re-parameterized into the same architecture as YOLOv8 / YOLO11, with **zero overhead** for transferring.
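
To make the zero-overhead claim concrete, here is an illustrative sketch (not the repo's implementation): once the refined text embeddings for a fixed vocabulary are cached offline, matching region features against them is just a 1x1 convolution, i.e., the standard closed-set classification head. Shapes and the name `text_pe` below are assumptions for illustration.
```python
# Illustrative sketch: with a fixed vocabulary, region-text matching collapses
# into a standard 1x1 conv classification head, so inference cost is unchanged.
import torch

B, C, H, W, K = 2, 256, 40, 40, 80       # batch, channels, feature map size, number of classes
feats = torch.randn(B, C, H, W)          # region features from the head
text_pe = torch.randn(K, C)              # refined text embeddings, precomputed offline

# Open-vocabulary form: similarity between every location and every class embedding.
logits_open = torch.einsum("bchw,kc->bkhw", feats, text_pe)

# Re-parameterized form: the same embeddings baked in as 1x1 conv weights,
# matching the closed-set YOLOv8 / YOLO11 head.
conv = torch.nn.Conv2d(C, K, kernel_size=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(text_pe.view(K, C, 1, 1))
assert torch.allclose(logits_open, conv(feats), atol=1e-4)
```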

### Linear probing
Only the last conv, i.e., the prompt embedding, is trainable.
```bash
python train_pe.py
```
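
`train_pe.py` handles this through its own configuration; purely as an illustration of the linear-probing idea (freeze everything, then unfreeze only the final classification conv holding the prompt embedding), a generic PyTorch sketch might look like the following, with no claim about the repo's actual module names.
```python
# Generic linear-probing sketch (not the repo's code): freeze all parameters,
# then re-enable gradients only for the last conv of the classification head.
import torch.nn as nn

def freeze_for_linear_probing(model: nn.Module, cls_head: nn.Module) -> None:
    for p in model.parameters():
        p.requires_grad = False
    last_conv = [m for m in cls_head.modules() if isinstance(m, nn.Conv2d)][-1]
    for p in last_conv.parameters():
        p.requires_grad = True
```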

### Full tuning
All parameters are trainable for better performance.
```bash
# For models with s scale, please change the epochs to 160 for longer training
python train_pe_all.py
```

## Export
After re-parameterization, YOLOE-v8 / YOLOE-11 can be exported to the same format as YOLOv8 / YOLO11.
```bash
pip install onnx coremltools onnxslim
python export.py
```
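
`export.py` presumably wraps the standard ultralytics export call; as a hedged sketch under that assumption (the checkpoint path is a placeholder):
```python
# Hedged sketch of exporting a re-parameterized YOLOE checkpoint via the standard
# ultralytics export API; export.py is the reference script.
from ultralytics import YOLO

model = YOLO("path/to/reparameterized-yoloe.pt")  # placeholder checkpoint path
model.export(format="onnx")                       # or format="coreml" for iPhone deployment
```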

## Benchmark
- For TensorRT, please refer to `benchmark.sh`.
- For CoreML, please use the benchmark tool from [Xcode 14](https://developer.apple.com/videos/play/wwdc2022/10027/).
- For the prompt-free setting, please refer to `tools/benchmark_pf.py`.

## Acknowledgement

The code base is built with [ultralytics](https://github.com/ultralytics/ultralytics), [YOLO-World](https://github.com/AILab-CVC/YOLO-World), [MobileCLIP](https://github.com/apple/ml-mobileclip), [lvis-api](https://github.com/lvis-dataset/lvis-api), [CLIP](https://github.com/openai/CLIP), and [GenerateU](https://github.com/FoundationVision/GenerateU).

Thanks for the great implementations!

## Citation

If our code or models help your work, please cite our paper:
```BibTeX
@misc{wang2025yoloerealtimeseeing,
      title={YOLOE: Real-Time Seeing Anything},
      author={Ao Wang and Lihao Liu and Hui Chen and Zijia Lin and Jungong Han and Guiguang Ding},
      year={2025},
      eprint={2503.07465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.07465},
}
```