---
title: CEA List FrugalAI Challenge
emoji: 🔥
colorFrom: red
colorTo: yellow
sdk: docker
pinned: false
license: apache-2.0
short_description: YOLO for low-emission Early Fire Detection
---
# YOLO for Early Fire Detection
## Team ([CEA List, LVA](https://kalisteo.cea.fr/index.php/ai/))
- Renato Sortino
- Aboubacar Tuo
- Charles Villard
- Nicolas Allezard
- Nicolas Granger
- Angélique Loesch
- Quoc-Cuong Pham
## Model Description
A YOLO model for early fire detection in forests, proposed as a solution to the image task of the [Frugal AI Challenge 2025](https://frugalaichallenge.org/).
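The snippet below is a minimal inference sketch, assuming the released checkpoint is an Ultralytics-compatible YOLOv10 weight file; the checkpoint name `yolov10m_fire.pt` and the image path are placeholders, not the actual artifacts shipped with this Space.

```python
# Minimal inference sketch. The checkpoint name and image path are
# placeholders; load the weights actually released with this Space.
from ultralytics import YOLO

model = YOLO("yolov10m_fire.pt")                      # hypothetical checkpoint
results = model.predict("forest_cam.jpg", conf=0.25)  # single test image

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(f"{label}: conf={float(box.conf):.2f}, box={box.xyxy[0].tolist()}")
```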
## Training Data
The model uses the following datasets:
| Dataset | Number of samples | Number of instances |
|----------|----------|----------|
| [pyronear/pyro-sdis](https://huggingface.co/datasets/pyronear/pyro-sdis) | 29,537 | 28,167 |
| [D-Fire](https://github.com/gaiasd/DFireDataset) | 10,525 | 11,865 |
| [Wildfire Smoke Dataset](https://www.kaggle.com/datasets/gloryvu/wildfire-smoke-detection/data) | ~12,300 | 11,539 |
| [Hard Negatives](https://github.com/aiformankind/wildfire-smoke-dataset) | ~5,000 | ~5,000 |
| Synthetic Dataset | ~5,000 | ~5,000 |
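As a hedged sketch of how one of the sources above can be pulled from the Hugging Face Hub, the snippet below loads pyro-sdis; the split name is an assumption, and the schema is inspected rather than hard-coded because the actual column names may differ.

```python
# Sketch of loading one of the training sources listed above; the split
# name "train" is an assumption, and the schema is inspected rather than
# hard-coded because column names may differ from this example.
from datasets import load_dataset

ds = load_dataset("pyronear/pyro-sdis", split="train")
print(ds.features)   # inspect image and annotation columns before use
print(ds[0])         # one sample: image plus its smoke/fire annotations
```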
## Performance
### Model Architecture
The model is a YOLO-based object detector (YOLOv10) that does not rely on non-maximum suppression (NMS) at inference time.
Removing this post-processing step allows further inference-time optimization via tensor decomposition.
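The sketch below illustrates the spatial-SVD idea in PyTorch: a k×k convolution is factorised into a k×1 convolution followed by a 1×k convolution of a chosen rank. It is a generic illustration of the technique under these assumptions, not the exact decomposition code behind the YOLOv10m + Spatial-SVD row below.

```python
# Generic spatial-SVD sketch: factor a (kh x kw) conv into a (kh x 1) conv
# followed by a (1 x kw) conv of rank `rank`. Illustrative only.
import torch
import torch.nn as nn


def spatial_svd(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    O, I, kh, kw = conv.weight.shape
    # Rearrange (O, I, kh, kw) -> (I*kh, O*kw) so the SVD separates the
    # vertical and horizontal spatial dimensions.
    w = conv.weight.detach().permute(1, 2, 0, 3).reshape(I * kh, O * kw)
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    u = u[:, :rank] * s[:rank].sqrt()                    # (I*kh, rank)
    v = vh[:rank, :] * s[:rank].sqrt().unsqueeze(1)      # (rank, O*kw)

    vertical = nn.Conv2d(I, rank, (kh, 1), stride=(conv.stride[0], 1),
                         padding=(conv.padding[0], 0), bias=False)
    horizontal = nn.Conv2d(rank, O, (1, kw), stride=(1, conv.stride[1]),
                           padding=(0, conv.padding[1]),
                           bias=conv.bias is not None)

    vertical.weight.data = u.reshape(I, kh, rank).permute(2, 0, 1).unsqueeze(-1)
    horizontal.weight.data = v.reshape(rank, O, kw).permute(1, 0, 2).unsqueeze(2)
    if conv.bias is not None:
        horizontal.bias.data = conv.bias.data.clone()
    return nn.Sequential(vertical, horizontal)
```

Truncating the SVD to a low rank trades a small accuracy drop for fewer multiply-accumulate operations, which is the trade-off visible in the metrics table below.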
### Metrics
| Model | Accuracy | Precision | Recall | meanIoU | Energy (Wh) | Emissions (gCO2eq) |
|----------|----------|----------|----------|----------|----------|----------|
| YOLOv10s | 0.87 | 0.88 | 0.98 | 0.84 | 6.77 | 0.94 |
| YOLOv10m | 0.88 | 0.87 | 0.99 | 0.88 | 8.39 | 1.16 |
| YOLOv10m + Spatial-SVD | 0.85 | 0.86 | 0.97 | 0.82 | 8.24 | 1.14 |
Environmental impact is tracked using [CodeCarbon](https://codecarbon.io/), measuring:
- Carbon emissions during inference (gCO2eq)
- Energy consumption during inference (Wh)
This tracking helps establish a baseline for the environmental impact of model deployment and inference.
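As a hedged example of this measurement setup, the sketch below wraps an inference run with CodeCarbon's `EmissionsTracker`; the `run_inference` call is a placeholder for the actual detection pipeline used in the challenge evaluation.

```python
# Hedged measurement sketch: wrap the inference loop with CodeCarbon.
# `run_inference` is a placeholder for the actual detection pipeline.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="frugal-ai-fire-detection")
tracker.start()
# run_inference(model, test_images)   # placeholder for the evaluation loop
emissions_kg = tracker.stop()         # total emissions in kg CO2eq

print(f"Emissions: {emissions_kg * 1000:.3f} gCO2eq")
# Energy consumption (kWh) is logged alongside emissions in CodeCarbon's
# output file (emissions.csv by default).
```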
## Limitations and Future Work
- It may fail to generalize to night scenes or foggy settings
- It is subject to false detections, especially at low confidence thresholds
- Clouds at ground level can be misinterpreted as smoke
- Temporal-aware models trained on video sequences would likely improve robustness and are a direction for future work