---
title: Detection Metrics
tags:
- evaluate
- metric
description: >-
Compute multiple object detection metrics at different bounding box area
levels.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🕵️
---
# Metric Card for Detection Metric
## Metric Description
This metric computes object detection metrics. It can optionally calculate the metrics at different levels of bounding box sizes, providing more insight into the performance on objects of different sizes. It is adapted from the pycocotools metrics.
## How to Use
```
>>> import evaluate
>>> from seametrics.fo_to_payload.utils import fo_to_payload
>>> b = fo_to_payload(
...     dataset="SAILING_DATASET_QA",
...     gt_field="ground_truth_det",
...     models=["yolov5n6_RGB_D2304-v1_9C"],
...     sequence_list=["Trip_14_Seq_1"],
...     data_type="rgb"
... )
>>> module = evaluate.load("SEA-AI/det-metrics")
>>> module.add_batch(b)
>>> res = module.compute()
>>> print(res)
{'all': {'range': [0, 10000000000.0],
'iouThr': '0.00',
'maxDets': 100,
'tp': 89,
'fp': 13,
'fn': 15,
'duplicates': 1,
'precision': 0.8725490196078431,
'recall': 0.8557692307692307,
'f1': 0.8640776699029126,
'support': 104,
'fpi': 0,
'nImgs': 22}}
```
### Metric Settings
When loading the module via `module = evaluate.load("SEA-AI/det-metrics", **params)`, multiple parameters can be specified:
- **area_ranges_tuples** *List[Tuple[str, List[int]]]*: area range levels at which the metrics should be calculated. Each tuple in the list consists of the name of an area range and a list specifying the lower and upper limit of that area range. Defaults to `[("all", [0, 1e5**2])]`.
- **bbox_format** *Literal["xyxy", "xywh", "cxcywh"]*: bounding box format of predictions and ground truth. Defaults to `"xywh"`.
- **iou_threshold** *Optional[float]*: IOU threshold at which the metrics should be calculated. The IOU threshold defines the minimal overlap between a ground truth and a predicted bounding box for the prediction to count as correct. Defaults to `1e-10`.
- **class_agnostic** *bool*: whether class labels are ignored when matching predictions to ground truths. Defaults to `True`. Non-class-agnostic metrics are currently not supported.
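To illustrate how the three supported `bbox_format` conventions relate, here is a minimal sketch. These helper functions are hypothetical, not part of det-metrics or seametrics; they only document the coordinate layouts:

```python
# Illustration only: relationship between the supported bbox formats.
# These helpers are hypothetical and not part of the det-metrics API.

def xyxy_to_xywh(box):
    """(x_min, y_min, x_max, y_max) -> (x_min, y_min, width, height)."""
    x1, y1, x2, y2 = box
    return (x1, y1, x2 - x1, y2 - y1)

def cxcywh_to_xywh(box):
    """(center_x, center_y, width, height) -> (x_min, y_min, width, height)."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, w, h)

print(xyxy_to_xywh((10, 20, 30, 60)))    # (10, 20, 20, 40)
print(cxcywh_to_xywh((20, 40, 20, 40)))  # (10.0, 20.0, 20, 40)
```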
### Input Values
Add predictions and ground truths to the metric with `module.add_batch(payload)`.
The payload should have the format returned by the function `fo_to_payload()` defined in the seametrics library.
An example payload looks like this:
```
test_payload = {
'dataset': 'SAILING_DATASET_QA',
'models': ['yolov5n6_RGB_D2304-v1_9C'],
'gt_field_name': 'ground_truth_det',
'sequences': {
# sequence 1, 1 frame with 1 pred and 1 gt
'Trip_14_Seq_1': {
'resolution': (720, 1280),
'yolov5n6_RGB_D2304-v1_9C': [[fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.35107421875, 0.274658203125, 0.0048828125, 0.009765625], # tp nr1
confidence=0.153076171875
)]],
'ground_truth_det': [[fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.35107421875, 0.274658203125, 0.0048828125, 0.009765625]
)]]
},
# sequence 2, 2 frames with frame 1: 2 pred, 1 gt; frame 2: 1 pred 1 gt
'Trip_14_Seq_2': {
'resolution': (720, 1280),
'yolov5n6_RGB_D2304-v1_9C': [
[
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.389404296875,0.306640625,0.005126953125,0.0146484375], # tp nr 2
confidence=0.153076171875
),
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.50390625, 0.357666015625, 0.0048828125, 0.00976562], # fp nr 1
confidence=0.153076171875
),
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.455078125, 0.31494140625, 0.00390625, 0.0087890625], # fp nr 2
confidence=0.153076171875
)
],
[
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.455078125, 0.31494140625, 0.00390625, 0.0087890625], # tp nr 3
confidence=0.153076171875
)
],
[
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.455078125, 0.31494140625, 0.00390625, 0.0087890625], # fp nr 3
confidence=0.153076171875
)
]
],
'ground_truth_det': [
# frame nr 1
[
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.389404296875,0.306640625,0.005126953125,0.0146484375],
)
],
# frame nr 2
[
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.455078125, 0.31494140625, 0.00390625, 0.0087890625],
confidence=0.153076171875
),
fo.Detection(
label='FAR_AWAY_OBJECT',
bounding_box=[0.35107421875, 0.274658203125, 0.0048828125, 0.009765625], # missed nr 1
confidence=0.153076171875
)
],
# frame nr3
[
],
]
}
},
    'sequence_list': ['Trip_14_Seq_1', 'Trip_14_Seq_2']
}
```
Optionally, you can pass the model to be evaluated as a string via `model=model_str`. By default, the first model is evaluated, i.e. `model = payload["models"][0]`.
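The bounding boxes in the payload use relative coordinates. Assuming the relative width and height are scaled by the frame resolution before the box area is compared against `area_ranges_tuples` (an assumption about seametrics internals, not confirmed by this card), the first detection above would land in an area bucket like this:

```python
# Hypothetical sketch: converting a relative xywh box from the payload above
# into a pixel area (assumes width/height scale with the frame resolution).
rel_w, rel_h = 0.0048828125, 0.009765625  # from the first detection above
frame_h, frame_w = 720, 1280              # 'resolution' entry of the sequence

area_px = (rel_w * frame_w) * (rel_h * frame_h)
print(area_px)  # 43.9453125 -> would fall into a "medium" [36, 144] range
```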
### Output Values
The metric outputs a dictionary that contains one sub-dictionary per specified area range name.
Each sub-dictionary holds the performance metrics for that area range:
- **range**: corresponding area range
- **iouThr**: IOU-threshold used in calculating the metric
- **maxDets**: maximum number of detections in calculating the metrics
- **tp**: number of true positive predictions
- **fp**: number of false positive predictions
- **fn**: number of false negative predictions
- **duplicates**: number of duplicated bounding box predictions
- **precision**: ratio between true positive predictions and positive predictions (tp/(tp+fp))
- **recall**: ratio between true positive predictions and actual ground truths (tp/(tp+fn))
- **f1**: harmonic mean of precision and recall (2*(precision*recall)/(precision+recall))
- **support**: number of ground truth bounding boxes that are considered in the metric
- **fpi**: number of images with predictions but no ground truths
- **nImgs**: number of total images considered in calculating the metric
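The derived scores follow directly from the counts. Using the numbers from the first example above (tp=89, fp=13, fn=15), they can be checked by hand:

```python
# Counts taken from the example output in "How to Use".
tp, fp, fn = 89, 13, 15

precision = tp / (tp + fp)  # 89 / 102 ≈ 0.8725
recall = tp / (tp + fn)     # 89 / 104 ≈ 0.8558
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.8641

print(precision, recall, f1)
```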
### Examples
We can specify different area range levels at which we would like to compute the metrics.
```
>>> import evaluate
>>> from seametrics.fo_to_payload.utils import fo_to_payload
>>> area_ranges_tuples = [
("all", [0, 1e5 ** 2]),
("small", [0 ** 2, 6 ** 2]),
("medium", [6 ** 2, 12 ** 2]),
("large", [12 ** 2, 1e5 ** 2])
]
>>> payload = fo_to_payload(
dataset=dataset,
gt_field=gt_field,
models=model_list
)
>>> module = evaluate.load(
"./detection_metric.py",
    iou_threshold=0.9,
area_ranges_tuples=area_ranges_tuples
)
>>> module.add_batch(payload)
>>> result = module.compute()
>>> print(result)
{'all':
{'range': [0, 10000000000.0],
'iouThr': '0.00',
'maxDets': 100,
'tp': 0,
'fp': 3,
'fn': 1,
'duplicates': 0,
'precision': 0.0,
'recall': 0.0,
'f1': 0,
'support': 1,
'fpi': 1,
'nImgs': 2
},
'small': {
'range': [0, 36],
'iouThr': '0.00',
'maxDets': 100,
'tp': 0,
'fp': 1,
'fn': 1,
'duplicates': 0,
'precision': 0.0,
'recall': 0.0,
'f1': 0,
'support': 1,
'fpi': 1,
'nImgs': 2
},
'medium': {
'range': [36, 144],
'iouThr': '0.00',
'maxDets': 100,
'tp': 0,
'fp': 2,
'fn': 0,
'duplicates': 0,
'precision': 0.0,
'recall': 0,
'f1': 0,
'support': 0,
'fpi': 2,
'nImgs': 2
}, 'large': {
'range': [144, 10000000000.0],
'iouThr': '0.00',
'maxDets': 100,
'tp': -1,
'fp': -1,
'fn': -1,
'duplicates': -1,
'precision': -1,
'recall': -1,
'f1': -1,
'support': 0,
'fpi': 0,
'nImgs': 2
}
}
```
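Note the `-1` sentinel values in the `large` entry above: they indicate that no predictions and no ground truths fell into that area range. Since the result is a plain dictionary, such entries are easy to filter out; this sketch uses an abridged copy of the output shown above:

```python
# 'result' mirrors the structure of the printed output above (abridged).
result = {
    "all":   {"tp": 0,  "fp": 3,  "fn": 1,  "support": 1},
    "large": {"tp": -1, "fp": -1, "fn": -1, "support": 0},
}

# Keep only area ranges that were actually evaluated.
evaluated = {name: m for name, m in result.items() if m["tp"] != -1}
print(list(evaluated))  # ['all']
```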
## Further References
- *seametrics* library: https://github.com/SEA-AI/seametrics/tree/main
- The metric calculation is based on pycocotools: https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools
- Further information on precision and recall: https://www.analyticsvidhya.com/blog/2020/09/precision-recall-machine-learning/