---
title: CEA List FrugalAI Challenge
emoji: 🔥
colorFrom: red
colorTo: yellow
sdk: docker
pinned: false
---

# YOLO for Early Fire Detection

## Team

- Renato Sortino
- Aboubacar Tuo
- Charles Villard
- Nicolas Allezard
- Nicolas Granger
- Angélique Loesch
- Quoc-Cuong Pham

## Model Description

A YOLO model for early fire detection in forests, proposed as a solution to the image task of the Frugal AI Challenge 2025.

### Intended Use

- **Primary intended uses**:
- **Primary intended users**:
- **Out-of-scope use cases**:

## Training Data

The model uses the pyronear/pyro-sdis dataset (a loading sketch follows the list):
- Size: ~33,000 examples
- Split: 80% train, 20% test
- Images annotated with bounding boxes marking wildfire instances
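
As an illustration, the snippet below loads the dataset from the Hugging Face Hub and carves out an 80/20 train/test partition. The split name and the seed are assumptions for this sketch, not necessarily the exact preprocessing used for the submitted model.

```python
from datasets import load_dataset

# Load the pyronear/pyro-sdis dataset from the Hugging Face Hub.
# The split name below ("train") is an assumption; adjust it to the
# splits actually published with the dataset.
ds = load_dataset("pyronear/pyro-sdis", split="train")

# Reproduce an 80% train / 20% test partition (seed chosen arbitrarily).
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]

print(len(train_ds), len(test_ds))  # roughly 80/20 of the available examples
```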

### Labels
0. Smoke

## Performance

### Metrics
- **Accuracy**: ~83% (an illustrative computation is sketched after this list)
- **Environmental Impact**:
  - Emissions tracked in gCO2eq
  - Energy consumption tracked in Wh
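
The card does not spell out how an accuracy figure is obtained for a detector. A common convention for this task is image-level accuracy: the model is counted as correct when "any smoke detected" agrees with "any smoke annotated". The sketch below illustrates that convention only; the data format, names, and threshold are hypothetical.

```python
from typing import List

def image_level_accuracy(
    predictions: List[List[float]],   # per-image detection confidences (hypothetical format)
    ground_truths: List[bool],        # per-image flag: does the image contain annotated smoke?
    conf_threshold: float = 0.25,
) -> float:
    """Accuracy of the binary decision 'is smoke present in the image?'."""
    correct = 0
    for confs, has_smoke in zip(predictions, ground_truths):
        predicted_smoke = any(c >= conf_threshold for c in confs)
        correct += int(predicted_smoke == has_smoke)
    return correct / len(ground_truths)

# Toy example: one true positive, one true negative, one low-confidence rejection.
print(image_level_accuracy([[0.9], [], [0.1]], [True, False, False]))  # 1.0
```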

### Model Architecture
The model is a YOLO-based object detection model that does not depend on NMS (non-maximum suppression) at inference time.
Bypassing this operation allows for further optimization at inference time via tensor decomposition and quantization.
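
To make the NMS-free claim concrete, here is a minimal decoding sketch: raw predictions are kept by a plain confidence threshold instead of an NMS pass. The `(N, 6)` output layout assumed below (`[x1, y1, x2, y2, confidence, class_id]`) is illustrative and may differ from the model's actual export format.

```python
import torch

def decode_nms_free(raw_output: torch.Tensor, conf_threshold: float = 0.25) -> torch.Tensor:
    """Filter raw detector output by confidence only; no NMS step.

    raw_output: tensor of shape (N, 6) with rows [x1, y1, x2, y2, conf, class_id]
    (assumed layout for this sketch).
    """
    keep = raw_output[:, 4] >= conf_threshold
    return raw_output[keep]

# Toy example: two candidate boxes, only the confident one survives.
raw = torch.tensor([[10., 10., 50., 50., 0.90, 0.],
                    [12., 11., 49., 52., 0.10, 0.]])
print(decode_nms_free(raw))  # keeps only the first row
```

Because decoding stays a single tensor-in/tensor-out pass with no data-dependent NMS operator, post-training quantization or tensor decomposition can be applied to the graph without special handling.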

## Environmental Impact

Environmental impact is tracked using CodeCarbon (a usage sketch follows the list), measuring:
- Carbon emissions during inference
- Energy consumption during inference
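
A minimal sketch of how such tracking can be wired with CodeCarbon's `EmissionsTracker` is shown below. `run_inference` is a hypothetical stand-in for the model's prediction call, and the exact tracker configuration used for the challenge submission may differ.

```python
from codecarbon import EmissionsTracker

def run_inference(batch):
    """Hypothetical placeholder for the actual model prediction call."""
    ...

tracker = EmissionsTracker(project_name="frugal-ai-fire-detection")
tracker.start()
try:
    run_inference(batch=None)
finally:
    emissions_kg = tracker.stop()  # CodeCarbon reports emissions in kg CO2eq

print(f"Emissions: {emissions_kg * 1000:.2f} gCO2eq")
# Energy consumption (kWh) is available from the tracker's final emissions data,
# e.g. tracker.final_emissions_data.energy_consumed.
```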

This tracking helps establish a baseline for the environmental impact of model deployment and inference.

## Limitations

- It may fail to generalize to night scenes or foggy settings
- It is subject to false detections, especially at low confidence thresholds