# Layer Freezing and Transformer-Based Data Curation for Enhanced Transfer Learning in YOLO Architectures

## Abstract

The You Only Look Once (YOLO) architecture has revolutionized real-time object detection by performing detection, localization, and classification in a single forward pass. However, balancing detection accuracy with computational efficiency remains a critical challenge, particularly for deployment in resource-constrained environments such as edge devices and UAV-based monitoring systems. This research presents a comprehensive analysis of layer freezing strategies for transfer learning in modern YOLO architectures, systematically investigating how selective parameter freezing affects both performance and computational requirements. We evaluate multiple freezing configurations across YOLOv8 and YOLOv10 variants (nano, small, medium, large) on four challenging datasets representing critical infrastructure monitoring applications: InsPLAD-det, Electric Substation, Common-VALID, and Bird's Nest. Our methodology incorporates gradient behavior analysis through L2 norm monitoring and visual explanations via Gradient-weighted Class Activation Mapping (GradCAM) to provide deeper insights into training dynamics under different freezing strategies. Results demonstrate that strategic layer freezing—particularly freezing the first 4 blocks or the complete backbone—achieves substantial computational savings while maintaining competitive detection accuracy. The optimal configurations reduce GPU memory consumption by up to 28% compared to full fine-tuning, while in several cases achieving superior mAP@50 scores (e.g., our YOLOv10-small with 4-block freezing achieved 0.84 vs 0.81 for fine-tuning on the InsPLAD-det dataset). Gradient analysis reveals distinct convergence patterns across freezing strategies, with backbone-frozen models exhibiting stable learning dynamics while preserving essential feature extraction capabilities. These findings provide actionable guidelines for deploying efficient YOLO models in resource-limited scenarios, demonstrating that selective layer freezing represents a viable alternative to full fine-tuning for transfer learning in object detection tasks.
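
As a quick orientation, the snippet below is a minimal sketch of how the freezing configurations described above can be reproduced with the Ultralytics training API. The dataset YAML name (`insplad.yaml`), the backbone layer indices, and the `grad_l2_norms` helper are illustrative assumptions, not this repository's released training code.

```python
# Minimal sketch of the freezing configurations from the abstract (assumed
# setup, not the authors' released pipeline).
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # pretrained checkpoint as the transfer-learning start

# Freeze the first 4 blocks; pass freeze=10 instead to freeze the whole
# YOLOv8 backbone (modules 0-9 in the standard model YAML).
model.train(data="insplad.yaml", epochs=100, imgsz=640, freeze=4)

# Gradient L2-norm monitoring in the spirit of the analysis above:
# per-parameter norms, collected after a backward pass.
def grad_l2_norms(module):
    return {
        name: p.grad.detach().norm(2).item()
        for name, p in module.named_parameters()
        if p.grad is not None
    }
```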

## Table of Contents

- [Installation](#installation)