|
--- |
|
license: cc-by-nc-sa-4.0 |
|
task_categories: |
|
- object-detection |
|
tags: |
|
- object_detection |
|
- object_tracking
|
- autonomous_driving |
|
--- |
|
|
|
|
# EMT Dataset |
|
This dataset was presented in [EMT: A Visual Multi-Task Benchmark Dataset for Autonomous Driving in the Arab Gulf Region](https://huggingface.co/papers/2502.19260). |
|
|
|
|
|
## Introduction |
|
EMT is a comprehensive dataset for autonomous driving research, containing **57 minutes** of diverse urban traffic footage from the **Gulf Region**. It includes rich semantic annotations across two agent categories: |
|
|
|
- **People**: Pedestrians and cyclists |
|
- **Vehicles**: Seven different classes |
|
|
|
Each video segment spans **2.5-3 minutes**, capturing challenging real-world scenarios: |
|
|
|
- **Dense Urban Traffic**: Multi-agent interactions in congested environments

- **Weather Variations**: Clear and rainy conditions

- **Visual Challenges**: High reflections and adverse weather combinations (e.g., rainy nights)
|
|
|
### Dataset Annotations |
|
This dataset provides annotations for: |
|
|
|
- **Detection & Tracking**: Multi-object tracking with consistent IDs
|
|
|
For **intention prediction** and **trajectory prediction** annotations, please refer to our [GitHub repository](https://github.com/AV-Lab/emt-dataset). |
|
|
|
--- |
|
|
|
## Quick Start |
|
```python
from datasets import load_dataset

# Load the training split
dataset = load_dataset("KuAvLab/EMT", split="train")
```
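
Loading a split this way downloads it in full before iteration. To inspect a few samples without the full download, the `datasets` library's streaming mode can be used; a minimal sketch:

```python
from datasets import load_dataset

# Stream samples on demand instead of downloading the whole split
stream = load_dataset("KuAvLab/EMT", split="train", streaming=True)

# Peek at the first sample's annotations
first = next(iter(stream))
print(first['objects']['class_name'])
```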
|
|
|
### Available Labels |
|
Each dataset sample contains two main components: |
|
|
|
1. **image**: The frame, decoded as a PIL image

2. **objects**: The annotations for objects detected in that frame
|
|
|
#### Object Labels |
|
- **bbox**: Bounding box coordinates (`x_min, y_min, x_max, y_max`) |
|
- **track_id**: Tracking ID of detected objects |
|
- **class_id**: Numeric class ID |
|
- **class_name**: Object type (e.g., `car`, `pedestrian`) |
|
|
|
#### Sample Usage |
|
```python
import numpy as np

for data in dataset:
    # Convert the PIL image to a NumPy array (RGB order);
    # reverse the channel order for OpenCV-style BGR
    img = np.array(data['image'])[:, :, ::-1]

    print("Classes:", data['objects']['class_name'])
    print("Bboxes:", data['objects']['bbox'])
    print("Track IDs:", data['objects']['track_id'])
    print("Class IDs:", data['objects']['class_id'])
```
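
To sanity-check annotations visually, the same fields can be drawn onto a frame. A minimal OpenCV sketch, assuming the parallel-list layout of `objects` shown above and pixel-coordinate bounding boxes:

```python
import cv2
import numpy as np
from datasets import load_dataset

dataset = load_dataset("KuAvLab/EMT", split="train")
sample = dataset[0]

# PIL RGB -> OpenCV BGR; copy() makes the array writable for drawing
img = np.array(sample['image'])[:, :, ::-1].copy()

objects = sample['objects']
for bbox, name, track_id in zip(objects['bbox'],
                                objects['class_name'],
                                objects['track_id']):
    x_min, y_min, x_max, y_max = map(int, bbox)
    cv2.rectangle(img, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
    cv2.putText(img, f"{name} #{track_id}", (x_min, max(y_min - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite("annotated_frame.jpg", img)
```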
|
|
|
--- |
|
|
|
## Data Collection |
|
| Aspect        | Description                                |
|---------------|--------------------------------------------|
| Duration      | 57 minutes total footage                   |
| Segments      | 2.5-3 minutes per recording                |
| Frame rate    | 10 fps for annotated frames                |
| Agent classes | 2 person categories, 7 vehicle categories  |
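
As a consistency check: at 10 fps, 57 minutes of footage corresponds to roughly 34,200 frames, in line with the 34,386 annotated frames reported in the statistics below.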
|
|
|
### Agent Categories |
|
#### **People** |
|
- Pedestrians |
|
- Cyclists |
|
|
|
#### **Vehicles** |
|
- Motorbike |
|
- Small motorized vehicle |
|
- Medium vehicle |
|
- Large vehicle |
|
- Car |
|
- Bus |
|
- Emergency vehicle |
|
|
|
--- |
|
|
|
## Dataset Statistics |
|
| Category             | Count   |
|----------------------|---------|
| Annotated Frames     | 34,386  |
| Bounding Boxes       | 626,634 |
| Unique Agents        | 9,094   |
| Vehicle Instances    | 7,857   |
| Pedestrian Instances | 568     |
|
|
|
### Class Breakdown |
|
| **Class**               | **Description**                      | **Bounding Boxes** | **Unique Agents** |
|-------------------------|--------------------------------------|--------------------|-------------------|
| Pedestrian              | Walking individuals                  | 24,574             | 568               |
| Cyclist                 | Bicycle/e-bike riders                | 594                | 14                |
| Motorbike               | Motorcycles, bikes, scooters         | 11,294             | 159               |
| Car                     | Standard automobiles                 | 429,705            | 6,559             |
| Small motorized vehicle | Mobility scooters, quad bikes        | 767                | 13                |
| Medium vehicle          | Vans, tractors                       | 51,257             | 741               |
| Large vehicle           | Lorries, trucks (6+ wheels)          | 37,757             | 579               |
| Bus                     | School buses, single/double-deckers  | 19,244             | 200               |
| Emergency vehicle       | Ambulances, police cars, fire trucks | 1,182              | 9                 |
| **Overall**             |                                      | **576,374**        | **8,842**         |
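
The per-class box counts can be recomputed from the loaded data by tallying `class_name` across samples; a small sketch (note this iterates the full split, and counts for a single split may differ from the overall totals above):

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("KuAvLab/EMT", split="train")

# Tally bounding boxes per class across all annotated frames
box_counts = Counter()
for sample in dataset:
    box_counts.update(sample['objects']['class_name'])

for name, count in box_counts.most_common():
    print(f"{name:25s} {count:>8,d}")
```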
|
|
|
--- |
|
|
|
For more details, visit our [GitHub repository](https://github.com/AV-Lab/emt-dataset).

Our paper is available [here](https://huggingface.co/papers/2502.19260).

For any inquiries, contact [[email protected]](mailto:[email protected]) or reach out via [https://huggingface.co/Murdism](https://huggingface.co/Murdism).