---
license: cc-by-4.0
task_categories:
- feature-extraction
- image-to-image
language:
- en
tags:
- remote-sensing
- aerial-imagery
- orthomosaic
- lighting-invariance
- representation-stability
- vision-encoder
- time-series
- dinov2
- dinov3
- embeddings
- multi-config
pretty_name: Light Stable Representations
size_categories:
- n<1K
dataset_info:
- config_name: default
  features:
  - name: idx
    dtype: string
  - name: image_t0
    dtype: image
  - name: image_t1
    dtype: image
  - name: image_t2
    dtype: image
  - name: canopy_height
    dtype:
      array2_d:
        shape:
        - 1024
        - 1024
        dtype: int32
  splits:
  - name: train
    num_bytes: 4905235380
    num_examples: 487
  - name: test
    num_bytes: 1221459061
    num_examples: 122
  download_size: 3688072446
  dataset_size: 6126694441
- config_name: dinov2_base
  features:
  - name: idx
    dtype: string
  - name: cls_t0
    list: float32
    length: 768
  - name: cls_t1
    list: float32
    length: 768
  - name: cls_t2
    list: float32
    length: 768
  - name: patch_t0
    dtype:
      array2_d:
        shape:
        - 256
        - 768
        dtype: float32
  - name: patch_t1
    dtype:
      array2_d:
        shape:
        - 256
        - 768
        dtype: float32
  - name: patch_t2
    dtype:
      array2_d:
        shape:
        - 256
        - 768
        dtype: float32
  splits:
  - name: train
    num_bytes: 1154971327
    num_examples: 487
  - name: test
    num_bytes: 289335733
    num_examples: 122
  download_size: 1487171455
  dataset_size: 1444307060
- config_name: dinov3_sat
  features:
  - name: idx
    dtype: string
  - name: cls_t0
    list: float32
    length: 1024
  - name: cls_t1
    list: float32
    length: 1024
  - name: cls_t2
    list: float32
    length: 1024
  - name: patch_t0
    dtype:
      array2_d:
        shape:
        - 196
        - 1024
        dtype: float32
  - name: patch_t1
    dtype:
      array2_d:
        shape:
        - 196
        - 1024
        dtype: float32
  - name: patch_t2
    dtype:
      array2_d:
        shape:
        - 196
        - 1024
        dtype: float32
  splits:
  - name: train
    num_bytes: 1180053775
    num_examples: 487
  - name: test
    num_bytes: 295619221
    num_examples: 122
  download_size: 1520934285
  dataset_size: 1475672996
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: dinov2_base
  data_files:
  - split: train
    path: dinov2_base/train-*
  - split: test
    path: dinov2_base/test-*
- config_name: dinov3_sat
  data_files:
  - split: train
    path: dinov3_sat/train-*
  - split: test
    path: dinov3_sat/test-*
---
# Light Stable Representations Dataset
## Dataset Description
This dataset contains aerial orthomosaic tiles captured at three different times of day (10:00, 12:00, and 15:00). The dataset is organized into three configurations: `default` (raw images + canopy height), `dinov2_base` (DINOv2 embeddings), and `dinov3_sat` (DINOv3 embeddings). All configurations share consistent train/test splits with matching tile identifiers for cross-referencing. The dataset is designed for training vision encoders that maintain consistent feature representations despite changes in illumination, with applications in remote sensing and environmental monitoring.
## Dataset Configurations
Each configuration serves a different research need:
### Configuration: `default`
Raw imagery and environmental data for direct analysis:
| Feature | Type | Shape | Description |
|---------|------|--------|-------------|
| `idx` | string | - | Tile identifier in format `{ROW}_{COL}` for geographic referencing |
| `image_t0` | Image | 1024×1024×3 | Morning capture at 10:00 AM (time=1000) |
| `image_t1` | Image | 1024×1024×3 | Noon capture at 12:00 PM (time=1200) |
| `image_t2` | Image | 1024×1024×3 | Afternoon capture at 3:00 PM (time=1500) |
| `canopy_height` | int32 | [1024, 1024] | Canopy height grid in centimeters from canopy height model |
### Configuration: `dinov2_base`
Pre-computed DINOv2 Base (ViT-B/14) embeddings:
| Feature | Type | Shape | Description |
|---------|------|--------|-------------|
| `idx` | string | - | Tile identifier matching other configurations |
| `cls_t0` | float32 | [768] | DINOv2 CLS token (global features) for morning image |
| `cls_t1` | float32 | [768] | DINOv2 CLS token (global features) for noon image |
| `cls_t2` | float32 | [768] | DINOv2 CLS token (global features) for afternoon image |
| `patch_t0` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for morning image |
| `patch_t1` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for noon image |
| `patch_t2` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for afternoon image |
### Configuration: `dinov3_sat`
Pre-computed DINOv3 Large (ViT-L/16) embeddings with satellite pretraining:
| Feature | Type | Shape | Description |
|---------|------|--------|-------------|
| `idx` | string | - | Tile identifier matching other configurations |
| `cls_t0` | float32 | [1024] | DINOv3 CLS token (global features) for morning image |
| `cls_t1` | float32 | [1024] | DINOv3 CLS token (global features) for noon image |
| `cls_t2` | float32 | [1024] | DINOv3 CLS token (global features) for afternoon image |
| `patch_t0` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for morning image |
| `patch_t1` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for noon image |
| `patch_t2` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for afternoon image |
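Patch tokens are stored as flat 2-D arrays; when spatial reasoning is needed they can be reshaped back into their grids. A minimal sketch using NumPy, assuming row-major patch ordering (the standard ViT convention):

```python
import numpy as np
from datasets import load_dataset

ds_v2 = load_dataset("mpg-ranch/drone-lsr", "dinov2_base", split="train")
ds_v3 = load_dataset("mpg-ranch/drone-lsr", "dinov3_sat", split="train")

# DINOv2: 256 patch tokens -> 16x16 spatial grid of 768-dim features
grid_v2 = np.asarray(ds_v2[0]["patch_t0"], dtype=np.float32).reshape(16, 16, 768)

# DINOv3: 196 patch tokens -> 14x14 spatial grid of 1024-dim features
grid_v3 = np.asarray(ds_v3[0]["patch_t0"], dtype=np.float32).reshape(14, 14, 1024)
```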
**Notes:**
- Canopy height values represent centimeters above ground; missing data is encoded as `-2147483648` (the int32 minimum; see the masking sketch after this list)
- All configurations use consistent 80%/20% train/test splits with matching `idx` values
- Patch tokens represent spatial features in different grid resolutions: 16×16 (DINOv2) vs 14×14 (DINOv3)
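For example, the canopy height grid can be masked against the nodata sentinel before computing statistics; a minimal sketch using NumPy (variable names are illustrative):

```python
import numpy as np
from datasets import load_dataset

NODATA = -2147483648  # int32 minimum, used as the missing-data sentinel

ds = load_dataset("mpg-ranch/drone-lsr", "default", split="train")
canopy_cm = np.asarray(ds[0]["canopy_height"], dtype=np.int32)  # (1024, 1024), cm

valid = canopy_cm != NODATA
mean_height_m = canopy_cm[valid].mean() / 100 if valid.any() else float("nan")
print(f"Mean canopy height: {mean_height_m:.2f} m")
```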
## Usage Example
```python
from datasets import load_dataset

# Load specific configurations
dataset_default = load_dataset("mpg-ranch/drone-lsr", "default")
dataset_dinov2 = load_dataset("mpg-ranch/drone-lsr", "dinov2_base")
dataset_dinov3 = load_dataset("mpg-ranch/drone-lsr", "dinov3_sat")

# Access raw imagery and canopy height
sample_default = dataset_default['train'][0]
morning_image = sample_default['image_t0']       # RGB image
noon_image = sample_default['image_t1']          # RGB image
afternoon_image = sample_default['image_t2']     # RGB image
canopy_height = sample_default['canopy_height']  # Height grid in cm
tile_id = sample_default['idx']                  # Geographic identifier

# Access DINOv2 embeddings (same tile via matching idx)
sample_dinov2 = dataset_dinov2['train'][0]
dinov2_cls_morning = sample_dinov2['cls_t0']        # Global features (768-dim)
dinov2_patches_morning = sample_dinov2['patch_t0']  # Spatial features (256×768)

# Access DINOv3 embeddings (same tile via matching idx)
sample_dinov3 = dataset_dinov3['train'][0]
dinov3_cls_morning = sample_dinov3['cls_t0']        # Global features (1024-dim)
dinov3_patches_morning = sample_dinov3['patch_t0']  # Spatial features (196×1024)

# Verify consistent tile identifiers across configurations
assert sample_default['idx'] == sample_dinov2['idx'] == sample_dinov3['idx']

# Access test sets for evaluation
test_default = dataset_default['test'][0]
test_dinov2 = dataset_dinov2['test'][0]
test_dinov3 = dataset_dinov3['test'][0]
```
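The final assertion above relies on the configurations sharing the same row order. If you prefer not to depend on ordering, you can join configurations explicitly on `idx`; a minimal sketch continuing from the example above (the lookup dictionary is illustrative, not part of the released loaders):

```python
# Build an explicit idx -> row lookup for the dinov2_base train split; reading
# only the `idx` column avoids decoding the large embedding arrays.
idx_to_row = {tile: i for i, tile in enumerate(dataset_dinov2['train']['idx'])}

# Fetch the DINOv2 row that corresponds to the same tile as `sample_default`.
matched_dinov2 = dataset_dinov2['train'][idx_to_row[sample_default['idx']]]
assert matched_dinov2['idx'] == sample_default['idx']
```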
## Pre-computed Embeddings
The dataset includes pre-computed embeddings from two state-of-the-art vision transformers:
### DINOv2 Base (`facebook/dinov2-base`)
- **Architecture**: Vision Transformer Base with 14×14 patch size
- **CLS Tokens**: 768-dimensional global feature vectors capturing scene-level representations
- **Patch Tokens**: 256×768 arrays (16×16 spatial grid) encoding local features
- **Training**: Self-supervised learning on natural images
### DINOv3 Large (`facebook/dinov3-vitl16-pretrain-sat493m`)
- **Architecture**: Vision Transformer Large with 16×16 patch size
- **CLS Tokens**: 1024-dimensional global feature vectors capturing scene-level representations
- **Patch Tokens**: 196×1024 arrays (14×14 spatial grid) encoding local features
- **Training**: Self-supervised learning with satellite imagery pretraining
**Purpose**: Enable efficient training and analysis without requiring on-the-fly feature extraction, while providing comparison between natural image and satellite-pretrained models.
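For instance, representation stability under changing light can be summarized directly from the stored CLS tokens by comparing embeddings of the same tile across capture times. A minimal sketch using the `dinov2_base` configuration (the cosine-similarity summary is illustrative, not a metric defined by the dataset):

```python
import numpy as np
from datasets import load_dataset

# Keep only the CLS columns to avoid decoding the large patch arrays.
ds = load_dataset("mpg-ranch/drone-lsr", "dinov2_base", split="test")
ds = ds.select_columns(["cls_t0", "cls_t1", "cls_t2"])

def cosine(a, b):
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Higher cross-time similarity suggests features that are more stable to lighting.
sims_01 = [cosine(ex["cls_t0"], ex["cls_t1"]) for ex in ds]
sims_02 = [cosine(ex["cls_t0"], ex["cls_t2"]) for ex in ds]
print(f"CLS cosine similarity 10:00 vs 12:00: {np.mean(sims_01):.3f}")
print(f"CLS cosine similarity 10:00 vs 15:00: {np.mean(sims_02):.3f}")
```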
## Dataset Information
- **Location**: Lower Partridge Alley, MPG Ranch, Montana, USA
- **Survey Date**: November 7, 2024
- **Coverage**: 620 complete tile sets (80% train / 20% test split via seeded random sampling)
- **Resolution**: 1024×1024 pixels at 1.2cm ground resolution
- **Total Size**: ~6.1 GB of raw imagery plus ~2.9 GB of pre-computed embeddings
- **Quality Control**: Tiles containing transient objects (e.g., vehicles) were excluded; the corresponding RGB imagery and canopy rasters were removed together to keep modalities aligned
## Use Cases
This dataset is intended for:
- Developing vision encoders robust to lighting variations (a training-objective sketch follows this list)
- Representation stability research in computer vision
- Time-invariant feature learning
- Remote sensing applications requiring lighting robustness
- Comparative analysis of illumination effects on vision model features
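As a concrete starting point for the first two items, a lighting-consistency objective can be built directly on the pre-computed CLS tokens. The sketch below is illustrative only; the projection head, loss form, and the need for an additional anti-collapse term are assumptions, not part of the dataset:

```python
import torch
import torch.nn.functional as F

# Small projection head trained so that the three captures of a tile
# (10:00, 12:00, 15:00) map to nearby points on the unit sphere.
proj = torch.nn.Linear(768, 256)  # 768 matches the dinov2_base CLS dimension
optimizer = torch.optim.Adam(proj.parameters(), lr=1e-4)

def lighting_consistency_loss(cls_t0, cls_t1, cls_t2):
    """cls_t*: (batch, 768) float tensors built from the dinov2_base configuration."""
    z0, z1, z2 = (F.normalize(proj(z), dim=-1) for z in (cls_t0, cls_t1, cls_t2))
    # Mean cosine distance over the three cross-time pairs. In practice this
    # should be combined with a contrastive or regularization term so the
    # projection does not collapse to a single point.
    return ((1 - (z0 * z1).sum(-1)) + (1 - (z0 * z2).sum(-1))
            + (1 - (z1 * z2).sum(-1))).mean() / 3
```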
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{mpg_ranch_light_stable_semantics_2024,
  title={Light Stable Representations Dataset},
  author={Kyle Doherty and Erik Samose and Max Gurinas and Brandon Trabucco and Ruslan Salakhutdinov},
  year={2024},
  month={November},
  url={https://huggingface.co/datasets/mpg-ranch/drone-lsr},
  publisher={Hugging Face},
  note={Aerial orthomosaic tiles with DINOv2 and DINOv3 embeddings for light-stable representation vision encoder training},
  location={MPG Ranch, Montana, USA},
  survey_date={2024-11-07},
  organization={MPG Ranch}
}
```
## License
This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
**Attribution Requirements:**
- You must give appropriate credit to MPG Ranch
- Provide a link to the license
- Indicate if changes were made to the dataset