---
title: det-metrics
tags:
- evaluate
- metric
description: >-
  Modified cocoeval.py wrapped into torchmetrics' mAP metric, with numpy
  instead of torch as a dependency.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🕵️
---
# SEA-AI/det-metrics
This Hugging Face metric uses `seametrics.detection.PrecisionRecallF1Support` under the hood to compute COCO-like metrics for object detection tasks. It is a [modified cocoeval.py](https://github.com/SEA-AI/seametrics/blob/develop/seametrics/detection/cocoeval.py) wrapped inside [torchmetrics' mAP metric](https://lightning.ai/docs/torchmetrics/stable/detection/mean_average_precision.html), but with numpy arrays instead of torch tensors.
## Getting Started
To get started with det-metrics, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics` libraries for metric calculation and integration with FiftyOne datasets.
### Installation
First, ensure you have Python 3.8 or later installed. Then, install det-metrics using pip:
```sh
pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
```
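Optionally, sanity-check the setup by loading the metric once; `evaluate.load` fetches the metric script from the Hugging Face Hub on first use:
```python
import evaluate

# First call downloads the metric script from the Hugging Face Hub;
# a successful load confirms the dependencies are installed correctly.
module = evaluate.load("SEA-AI/det-metrics")
```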
### Basic Usage
Here's how to quickly evaluate your object detection models using SEA-AI/det-metrics:
```python
import evaluate

# Define your predictions and references (dict values can also be numpy arrays)
predictions = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "scores": [0.153076171875, 0.72314453125],
    }
]
references = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "area": [132.2, 83.8],
    }
]
# Load SEA-AI/det-metrics and evaluate
module = evaluate.load("SEA-AI/det-metrics")
module.add(prediction=predictions, reference=references)
results = module.compute()
print(results)
```
This will output the evaluation metrics for your detection model.
```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 2,
         'fp': 0,
         'fn': 0,
         'duplicates': 0,
         'precision': 1.0,
         'recall': 1.0,
         'f1': 1.0,
         'support': 2,
         'fpi': 0,
         'nImgs': 1}}
```
## FiftyOne Integration
Integrate SEA-AI/det-metrics with FiftyOne datasets for enhanced analysis and visualization:
```python
import evaluate
import logging
from seametrics.payload import PayloadProcessor

logging.basicConfig(level=logging.WARNING)

# Configure your dataset and model details
processor = PayloadProcessor(
    dataset_name="SAILING_DATASET_QA",
    gt_field="ground_truth_det",
    models=["yolov5n6_RGB_D2304-v1_9C"],
    sequence_list=["Trip_14_Seq_1"],
    data_type="rgb",
)

# Evaluate using SEA-AI/det-metrics
module = evaluate.load("SEA-AI/det-metrics")
module.add_payload(processor.payload)
results = module.compute()
print(results)
```
```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 89,
         'fp': 13,
         'fn': 15,
         'duplicates': 1,
         'precision': 0.8725490196078431,
         'recall': 0.8557692307692307,
         'f1': 0.8640776699029126,
         'support': 104,
         'fpi': 0,
         'nImgs': 22}}
```
## Metric Settings
Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics (a combined sketch of all four settings follows the snippet below):
- **area_ranges_tuples**: Define different area ranges for metrics calculation.
- **bbox_format**: Set the bounding box format (e.g., `"xywh"`).
- **iou_thresholds**: Choose the IoU threshold(s) for determining correct detections.
- **class_agnostic**: Specify whether to calculate metrics disregarding class labels.
```python
area_ranges_tuples = [
    ("all", [0, 1e5**2]),
    ("small", [0**2, 6**2]),
    ("medium", [6**2, 12**2]),
    ("large", [12**2, 1e5**2]),
]

module = evaluate.load(
    "SEA-AI/det-metrics",
    iou_thresholds=[0.00001],
    area_ranges_tuples=area_ranges_tuples,
)
```
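As a minimal sketch only, assuming `bbox_format` and `class_agnostic` are passed as keyword arguments in the same way as the two parameters shown above, all four settings combined might look like this:
```python
# Sketch: all four settings from the list above. `bbox_format` and
# `class_agnostic` are assumed to be plain keyword arguments, like the
# two parameters demonstrated in the previous snippet.
module = evaluate.load(
    "SEA-AI/det-metrics",
    iou_thresholds=[0.5],
    area_ranges_tuples=area_ranges_tuples,
    bbox_format="xywh",    # boxes given as [x, y, width, height]
    class_agnostic=False,  # keep per-class matching
)
```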
## Output Values
SEA-AI/det-metrics provides a detailed breakdown of performance metrics for each specified area range (see the example after this list for how to read the result dict):
- **range**: The area range considered.
- **iouThr**: The IoU threshold applied.
- **maxDets**: The maximum number of detections evaluated.
- **tp/fp/fn**: Counts of true positives, false positives, and false negatives.
- **duplicates**: Number of duplicate detections.
- **precision/recall/f1**: Calculated precision, recall, and F1 score.
- **support**: Number of ground truth boxes considered.
- **fpi**: Number of images with predictions but no ground truths.
- **nImgs**: Total number of images evaluated.
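Since `compute()` returns a plain dictionary keyed by area-range name (as in the outputs above), the results can be post-processed directly, for example:
```python
# `results` is the dict returned by module.compute(), keyed by area-range name.
for range_name, metrics in results.items():
    print(f"{range_name}: precision={metrics['precision']:.3f}, "
          f"recall={metrics['recall']:.3f}, f1={metrics['f1']:.3f}")
```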
## Further References
- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **pycocotools**: SEA-AI/det-metrics calculations are based on [pycocotools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools), a widely used library for COCO dataset evaluation.
- **Understanding Metrics**: For a deeper understanding of precision, recall, and other metrics, read [this comprehensive guide](https://www.analyticsvidhya.com/blog/2020/09/precision-recall-machine-learning/).
## Contribution
Your contributions are welcome! If you'd like to improve SEA-AI/det-metrics or add new features, please feel free to fork the repository, make your changes, and submit a pull request.