Commit 747a7cb
Parent(s): 1730015
Update README.md

README.md CHANGED: the file previously contained only the `license: mit` front matter; the updated contents are below.
---
license: mit
tags:
- object-detection
- object-tracking
- video
- video-object-segmentation
inference: false
---

# unicorn_track_tiny_mask
## Table of Contents
- [unicorn_track_tiny_mask](#unicorn_track_tiny_mask)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
  - [Direct Use](#direct-use)
  - [Downstream Use](#downstream-use)
  - [Misuse and Out-of-scope Use](#misuse-and-out-of-scope-use)
- [Limitations and Biases](#limitations-and-biases)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation Results](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)
<model_details>

## Model Details

Unicorn unifies the network architecture and the learning paradigm across four tracking tasks, and achieves new state-of-the-art performance on many challenging tracking benchmarks with a single set of model parameters. This model has an input size of 800x1280.

- License: This model is licensed under the apache-2.0 license
- Resources for more information:
  - [Research Paper](https://arxiv.org/abs/2111.12085)
  - [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)

</model_details>
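Since the front matter sets `inference: false`, there is no hosted inference widget for this checkpoint. A minimal sketch of pulling the weights with `huggingface_hub` is shown below; the repository id and checkpoint filename are placeholders that are not stated in this card, so substitute the values listed in this repo's files.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename: neither is stated in this card, so check the
# "Files and versions" tab of this model repository for the real values.
checkpoint_path = hf_hub_download(
    repo_id="<namespace>/unicorn_track_tiny_mask",  # hypothetical repo id
    filename="unicorn_track_tiny_mask.pth",         # hypothetical checkpoint name
)
print(checkpoint_path)  # local cache path of the downloaded checkpoint
```

The checkpoint itself is consumed by the tracking and evaluation scripts in the [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn); see that repository for the supported entry points.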
<uses>

## Uses

#### Direct Use

This model can be used for:

* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)
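As a rough illustration of the 800x1280 input size noted under Model Details, the sketch below resizes a video frame to that resolution and packs it into a batch tensor. It is only an assumption-laden example: 800x1280 is interpreted here as height x width, and the actual Unicorn pipeline in the GitHub repo performs its own resizing, padding, and normalization, so this is not the official preprocessing code.

```python
import cv2
import torch

def preprocess_frame(frame_bgr):
    # Illustrative preprocessing only; the official pipeline in the GitHub repo
    # handles resizing, padding, and normalization itself.
    # cv2.resize expects (width, height); 800x1280 is read here as H=800, W=1280.
    resized = cv2.resize(frame_bgr, (1280, 800), interpolation=cv2.INTER_LINEAR)
    # HWC uint8 -> CHW float tensor in [0, 1], with a leading batch dimension.
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float().div(255.0)
    return tensor.unsqueeze(0)

cap = cv2.VideoCapture("input_video.mp4")  # hypothetical example path
ok, frame = cap.read()
if ok:
    batch = preprocess_frame(frame)
    print(batch.shape)  # torch.Size([1, 3, 800, 1280])
cap.release()
```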
<Eval_Results>

## Evaluation Results

| Benchmark | Metric | Score |
|---|---|---|
| LaSOT | AUC (%) | 67.7 |
| BDD100K | mMOTA (%) | 39.9 |
| DAVIS17 | J&F (%) | 68.0 |
| BDD100K MOTS | mMOTSA (%) | 29.7 |

</Eval_Results>
<Cite>

## Citation Information

```bibtex
@inproceedings{unicorn,
  title={Towards Grand Unification of Object Tracking},
  author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={ECCV},
  year={2022}
}
```

</Cite>