---
app_file: app.py
colorFrom: yellow
colorTo: green
description: 'TODO: add a description here'
emoji: "🤑"
pinned: false
runme:
id: 01HPS3ASFJXVQR88985QNSXVN1
version: v3
sdk: gradio
sdk_version: 4.36.0
tags:
- evaluate
- metric
title: user-friendly-metrics
---
# How to Use
```python {"id":"01HPS3ASFHPCECERTYN7Z4Z7MN"}
>>> import evaluate
>>> from seametrics.fo_utils.utils import fo_to_payload
>>> b = fo_to_payload(
...     dataset="SENTRY_VIDEOS_DATASET_QA",
...     gt_field="ground_truth_det",
...     models=['volcanic-sweep-3_02_2023_N_LN1_ep288_TRACKER'],
...     sequence_list=["Sentry_2022_11_PROACT_CELADON_7.5M_MOB_2022_11_25_12_12_39"],
...     tracking_mode=True,
... )
>>> module = evaluate.load("SEA-AI/user-friendly-metrics")
>>> res = module._calculate(b, max_iou=0.99, recognition_thresholds=[0.3, 0.5, 0.8])
>>> print(res)
```
```yaml
global:
ahoy-IR-b2-whales__XAVIER-AGX-JP46_TRACKER:
all:
f1: 0.8262651742077881
fn: 2045.0
fp: 159.0
num_gt_ids: 13
precision: 0.9705555555555555
recall: 0.7193247323634367
recognition_0.3: 0.9230769230769231
recognition_0.5: 0.8461538461538461
recognition_0.8: 0.46153846153846156
recognized_0.3: 12
recognized_0.5: 11
recognized_0.8: 6
tp: 5241.0
area:
large:
f1: 0.4053050397877984
fn: 612.0
fp: 3872.0
num_gt_ids: 6
precision: 0.28296296296296297
recall: 0.7140186915887851
recognition_0.3: 0.8333333333333334
recognition_0.5: 0.8333333333333334
recognition_0.8: 0.3333333333333333
recognized_0.3: 5
recognized_0.5: 5
recognized_0.8: 2
tp: 1528.0
medium:
f1: 0.7398209644816635
fn: 1146.0
fp: 1557.0
num_gt_ids: 10
precision: 0.7116666666666667
recall: 0.7702946482260974
recognition_0.3: 1.0
recognition_0.5: 0.8
recognition_0.8: 0.6
recognized_0.3: 10
recognized_0.5: 8
recognized_0.8: 6
tp: 3843.0
small:
f1: 0.10373582388258838
fn: 285.0
fp: 5089.0
num_gt_ids: 6
precision: 0.05759259259259259
recall: 0.5218120805369127
recognition_0.3: 0.3333333333333333
recognition_0.5: 0.3333333333333333
recognition_0.8: 0.16666666666666666
recognized_0.3: 2
recognized_0.5: 2
recognized_0.8: 1
tp: 311.0
per_sequence:
Sentry_2022_12_19_Romania_2022_12_19_17_09_34:
ahoy-IR-b2-whales__XAVIER-AGX-JP46_TRACKER:
all:
f1: 0.8262651742077881
fn: 2045.0
fp: 159.0
num_gt_ids: 13
precision: 0.9705555555555555
recall: 0.7193247323634367
recognition_0.3: 0.9230769230769231
recognition_0.5: 0.8461538461538461
recognition_0.8: 0.46153846153846156
recognized_0.3: 12
recognized_0.5: 11
recognized_0.8: 6
tp: 5241.0
area:
large:
f1: 0.4053050397877984
fn: 612.0
fp: 3872.0
num_gt_ids: 6
precision: 0.28296296296296297
recall: 0.7140186915887851
recognition_0.3: 0.8333333333333334
recognition_0.5: 0.8333333333333334
recognition_0.8: 0.3333333333333333
recognized_0.3: 5
recognized_0.5: 5
recognized_0.8: 2
tp: 1528.0
medium:
f1: 0.7398209644816635
fn: 1146.0
fp: 1557.0
num_gt_ids: 10
precision: 0.7116666666666667
recall: 0.7702946482260974
recognition_0.3: 1.0
recognition_0.5: 0.8
recognition_0.8: 0.6
recognized_0.3: 10
recognized_0.5: 8
recognized_0.8: 6
tp: 3843.0
small:
f1: 0.10373582388258838
fn: 285.0
fp: 5089.0
num_gt_ids: 6
precision: 0.05759259259259259
recall: 0.5218120805369127
recognition_0.3: 0.3333333333333333
recognition_0.5: 0.3333333333333333
recognition_0.8: 0.16666666666666666
recognized_0.3: 2
recognized_0.5: 2
recognized_0.8: 1
tp: 311.0
```
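The result is a plain nested Python dict, so individual metrics can be pulled out by key. A minimal illustration, using abridged values copied from the sample output above (the model name and structure are taken verbatim from that output):

```python
# Abridged copy of the nested result structure shown above.
res = {
    "global": {
        "ahoy-IR-b2-whales__XAVIER-AGX-JP46_TRACKER": {
            "all": {"recall": 0.7193247323634367},
            "area": {"small": {"f1": 0.10373582388258838}},
        }
    }
}

model = "ahoy-IR-b2-whales__XAVIER-AGX-JP46_TRACKER"
overall_recall = res["global"][model]["all"]["recall"]   # metrics over all sequences
small_f1 = res["global"][model]["area"]["small"]["f1"]   # per-area-bucket metrics
```

Per-sequence values live under `res["per_sequence"][sequence_name][model]` with the same inner layout.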
## Metric Settings
The `max_iou` parameter controls which predicted bounding boxes are eligible for association with a ground-truth box. Following the motmetrics convention, it is an upper bound on the IoU *distance* (1 − IoU): a predicted box is only considered for association when its IoU distance to a ground-truth box is at most `max_iou`. The default value is 0.5, i.e. pairs whose IoU is below 0.5 are discarded. Consequently, the higher the `max_iou` value, the more predicted bounding boxes are considered for association.
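The gating step can be sketched in plain Python. This is not the module's actual code, just a minimal illustration of the rule described above, assuming boxes are given as `(x, y, width, height)` tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def gated_distance(gt_box, pred_box, max_iou=0.5):
    """IoU distance (1 - IoU); pairs above max_iou are excluded (NaN)."""
    d = 1.0 - iou(gt_box, pred_box)
    return d if d <= max_iou else float("nan")
```

With the default `max_iou=0.5`, two boxes overlapping with IoU 1/3 (distance 2/3) are excluded; with the permissive `max_iou=0.99` used in the example above, the same pair is kept.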
## Output
The output is a dictionary containing the following metrics:
| Name | Description |
| :------------------- | :--------------------------------------------------------------------------------- |
| recall | Number of true positives over the number of ground-truth detections. |
| precision | Number of true positives over the sum of true and false positives. |
| f1 | Harmonic mean of precision and recall (F1 score). |
| num_gt_ids | Number of unique object ids in the ground truth. |
| fn | Number of false negatives. |
| fp | Number of false positives. |
| tp | Number of true positives. |
| recognized_th | Number of unique ground-truth objects that were seen in more than th% of their appearances. |
| recognition_th | `recognized_th` divided by the number of unique ground-truth objects (`num_gt_ids`). |
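The headline metrics follow directly from the raw counts; recomputing them from the `tp`/`fp`/`fn` values in the sample output above reproduces the reported `precision`, `recall`, and `f1`:

```python
# Counts taken from the "all" block of the sample output.
tp, fp, fn = 5241.0, 159.0, 2045.0

precision = tp / (tp + fp)                          # true positives / all detections
recall = tp / (tp + fn)                             # true positives / all ground truth
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
```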
## How it Works
We leverage `events`, an internal variable of the motmetrics `MOTAccumulator` class, which keeps track of detection hits and misses. These values are processed by the `track_ratios` function, which computes the ratio of assigned to total appearance count per unique object id. The `recognition` function then counts how many objects have been seen in more than the desired fraction of their appearances.
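The counting logic can be sketched as follows. This is a simplified stand-in for the `track_ratios`/`recognition` pair, not the module's actual code, and it assumes a hypothetical input format of `(gt_id, matched)` pairs, one per frame in which the object appears:

```python
from collections import Counter

def recognition_counts(events, thresholds):
    """Count ids matched in more than `th` of their appearances (cf. recognized_th)."""
    seen, hit = Counter(), Counter()
    for gt_id, matched in events:
        seen[gt_id] += 1      # total appearances of this id
        if matched:
            hit[gt_id] += 1   # appearances where it was associated with a detection
    return {th: sum(1 for i in seen if hit[i] / seen[i] > th) for th in thresholds}

# id 1 is matched in 2/3 of its frames, id 2 in 1/3.
events = [(1, True), (1, True), (1, False), (2, True), (2, False), (2, False)]
counts = recognition_counts(events, [0.3, 0.5, 0.8])
```

Here both ids clear the 0.3 threshold, only id 1 clears 0.5, and neither clears 0.8.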
## Citations
```bibtex {"id":"01HPS3ASFJXVQR88985KRT478N"}
@article{milan2016mot16,
title={MOT16: A benchmark for multi-object tracking},
author={Milan, Anton and Leal-Taix{\'e}, Laura and Reid, Ian and Roth, Stefan and Schindler, Konrad},
journal={arXiv preprint arXiv:1603.00831},
year={2016}}
```
## Further References
- [Github Repository - py-motmetrics](https://github.com/cheind/py-motmetrics/tree/develop)