Update README.md

README.md CHANGED

@@ -24,7 +24,7 @@ datasets:
 - `2025/08/15`: We released our annotated DanceTrack-L, MOT17-L, and SportsMOT-L.
 - `2025/05/22`: Our work has been accepted by IEEE TCSVT.
 - `2024/06/14`: We released our trained models.
-- `2024/06/12`: We released our code.
+- `2024/06/12`: We released our code on [github](https://github.com/WesLee88524/LG-MOT).
 - `2024/06/11`: We released our technical report on [arxiv](https://arxiv.org/abs/2406.04844.pdf). Our code and models are coming soon!

 <br>

@@ -53,21 +53,9 @@ at both scene and instance levels.

 - **LG-MOT** is a new multi-object tracking framework that leverages language information at different granularities during training to enhance object association. During training, our **ISG** module aligns each node embedding $\phi(b_i^k)$ with the instance-level description embedding $\varphi_i$, while our **SPG** module aligns the edge embeddings $\hat{E}_{(u,v)}$ with the scene-level description embedding $\varphi_s$ to guide correlation estimation after message passing (a minimal illustrative sketch of this alignment follows the diff below). Our approach does not require language descriptions during inference.

-<div align=center>
-<img src="source/LG-MOT_overview.png" width="90%"/>
-</div>

 - **LG-MOT** improves IDF1 by 1.2% over the baseline SUSHI in the intra-domain setting and by a significant 11.2% in the cross-domain setting.

-<div align=center>
-<img src="source/performance.png" width="50%"/>
-</div>
-<!-- <img src="source/performance.png" width="50%"/> -->
-
-## Visualization
-<div align=center>
-<img src="source/visualization.png" width="90%"/>
-</div>

 ## Performance on Benchmarks
 ### MOT17 Challenge - Test Set
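
For context on the ISG/SPG paragraph kept in the second hunk above: both modules only add a training-time alignment between graph embeddings and language-description embeddings, so inference is unchanged. The sketch below shows one way such an alignment objective could look; the cosine-similarity form, the function name, and the embedding dimensions are illustrative assumptions, not the repository's actual implementation (see https://github.com/WesLee88524/LG-MOT for the real code).

```python
# Illustrative sketch only: loss form, names, and dimensions are assumptions,
# not the LG-MOT implementation.
import torch
import torch.nn.functional as F


def alignment_loss(graph_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Pull graph embeddings (node phi(b_i^k) for ISG, edge E_(u,v) for SPG)
    toward their paired description embeddings (varphi_i or varphi_s)."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # 1 - cosine similarity, averaged over all pairs; used only during training.
    return (1.0 - (g * t).sum(dim=-1)).mean()


# Toy usage: 8 node embeddings paired with 8 instance-level description
# embeddings, both projected to a shared 256-d space.
nodes = torch.randn(8, 256)
descriptions = torch.randn(8, 256)
print(alignment_loss(nodes, descriptions).item())
```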