sdk: gradio
sdk_version: 4.42.0
---

# [YOLOE: Real-Time Seeing Anything](https://arxiv.org/abs/2503.07465)

Official PyTorch implementation of **YOLOE**.

<p align="center">
Comparison of performance, training cost, and inference efficiency between YOLOE (Ours) and YOLO-Worldv2 in terms of open text prompts.
</p>

[YOLOE: Real-Time Seeing Anything](https://arxiv.org/abs/2503.07465).\
Ao Wang*, Lihao Liu*, Hui Chen, Zijia Lin, Jungong Han, and Guiguang Ding\
[arXiv](https://arxiv.org/abs/2503.07465) [Models](https://huggingface.co/jameslahm/yoloe/tree/main) [Demo](https://huggingface.co/spaces/jameslahm/yoloe)

We introduce **YOLOE(ye)**, a highly **efficient**, **unified**, and **open** object detection and segmentation model that, like the human eye, sees under different prompt mechanisms, such as *texts*, *visual inputs*, and a *prompt-free paradigm*.

<details>
<summary>
<font size="+1">Abstract</font>
</summary>
Object detection and segmentation are widely employed in computer vision applications, yet conventional models like YOLO series, while efficient and accurate, are limited by predefined categories, hindering adaptability in open scenarios. Recent open-set methods leverage text prompts, visual cues, or prompt-free paradigm to overcome this, but often compromise between performance and efficiency due to high computational demands or deployment complexity. In this work, we introduce YOLOE, which integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model, achieving real-time seeing anything. For text prompts, we propose Re-parameterizable Region-Text Alignment (RepRTA) strategy. It refines pretrained textual embeddings via a re-parameterizable lightweight auxiliary network and enhances visual-textual alignment with zero inference and transferring overhead. For visual prompts, we present Semantic-Activated Visual Prompt Encoder (SAVPE). It employs decoupled semantic and activation branches to bring improved visual embedding and accuracy with minimal complexity. For prompt-free scenario, we introduce Lazy Region-Prompt Contrast (LRPC) strategy. It utilizes a built-in large vocabulary and specialized embedding to identify all objects, avoiding costly language model dependency. Extensive experiments show YOLOE's exceptional zero-shot performance and transferability with high inference efficiency and low training cost. Notably, on LVIS, with $3\times$ less training cost and $1.4\times$ inference speedup, YOLOE-v8-S surpasses YOLO-Worldv2-S by 3.5 AP. When transferring to COCO, YOLOE-v8-L achieves 0.6 $AP^b$ and 0.4 $AP^m$ gains over closed-set YOLOv8-L with nearly $4\times$ less training time.
</details>
<p></p>
<p align="center">
<img src="figures/pipeline.svg" width=96%> <br>
</p>
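
As a quick illustration of the text-prompt path, here is a minimal inference sketch. It assumes the repo's ultralytics-based `YOLOE` interface with the `get_text_pe`/`set_classes` helpers used by its prediction scripts; the checkpoint and image paths are illustrative.

```python
# Minimal text-prompt inference sketch; the YOLOE class and the
# get_text_pe/set_classes helpers follow the repo's ultralytics-based
# interface, and the checkpoint/image paths are illustrative assumptions.
from ultralytics import YOLOE

model = YOLOE("yoloe-v8l-seg.pt")  # assumed checkpoint name

# Free-form text prompts; RepRTA aligns their embeddings with visual features.
names = ["person", "bus", "traffic light"]
model.set_classes(names, model.get_text_pe(names))

results = model.predict("assets/example.jpg")
results[0].show()
```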

## Performance

## Installation

```bash
pip install -e CLIP
```

## Demo

If desired objects are not identified, please set a **smaller** confidence threshold, e.g., for visual prompts with handcrafted shapes or cross-image prompts.

```bash
# Optional, if you need a mirror: export HF_ENDPOINT=https://hf-mirror.com
pip install gradio==4.42.0 gradio_image_prompter==0.1.0 fastapi==0.112.2
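# Launch the demo (assumption: the entry script is app.py, the Gradio Space default)
python app.py
```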

### Visual prompt

```bash
python tools/convert_segm2det.py
# Then, train the SAVPE module
python train_vp.py
# After training, please use tools/get_vp_segm.py to add the segmentation head
# python tools/get_vp_segm.py
```
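
Once the segmentation head is attached, visual-prompt inference can be sketched as follows. The `YOLOEVPSegPredictor` import path, checkpoint name, and example box are assumptions modeled on the repo's ultralytics-based prediction scripts, not a verbatim recipe.

```python
# Visual-prompt inference sketch; all names here are assumptions.
import numpy as np
from ultralytics import YOLOE
from ultralytics.models.yolo.yoloe import YOLOEVPSegPredictor  # assumed path

model = YOLOE("yoloe-v8l-seg.pt")  # assumed checkpoint name

# One exemplar box (x1, y1, x2, y2) with its prompt-class index;
# SAVPE encodes it into a visual prompt embedding.
visual_prompts = dict(
    bboxes=np.array([[120.0, 80.0, 360.0, 420.0]]),
    cls=np.array([0]),
)

results = model.predict(
    "assets/example.jpg",  # illustrative image path
    visual_prompts=visual_prompts,
    predictor=YOLOEVPSegPredictor,
)
results[0].show()
```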

### Prompt free

```bash
python tools/generate_lvis_sc.py
# Similar to visual prompt, because only the specialized prompt embedding is trained, we can adopt the detection pipeline with less training time
python tools/convert_segm2det.py
python train_pe_free.py
# After training, please use tools/get_pf_free_segm.py to add the segmentation head
# python tools/get_pf_free_segm.py
```
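
For the prompt-free model, no prompts are supplied at inference; via LRPC it falls back to its built-in vocabulary. A minimal sketch, assuming the same ultralytics-based interface and a `-pf` checkpoint naming convention:

```python
# Prompt-free inference sketch; the checkpoint name is an assumed convention.
from ultralytics import YOLOE

model = YOLOE("yoloe-v8l-seg-pf.pt")  # built-in vocabulary, no prompts needed
results = model.predict("assets/example.jpg")  # illustrative image path
results[0].show()
```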

## Transferring

## Citation

If our code or models help your work, please cite our paper:

```BibTeX
@misc{wang2025yoloerealtimeseeing,
      title={YOLOE: Real-Time Seeing Anything},
      author={Ao Wang and Lihao Liu and Hui Chen and Zijia Lin and Jungong Han and Guiguang Ding},
      year={2025},
      eprint={2503.07465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.07465},
}
```