---
license: other
language:
- en
pretty_name: dynpose-100k
size_categories:
- 100K<n<1M
task_categories:
- other
---
# DynPose-100K
**[Dynamic Camera Poses and Where to Find Them](https://research.nvidia.com/labs/dir/dynpose-100k)** \
[Chris Rockwell<sup>1,2</sup>](https://crockwell.github.io), [Joseph Tung<sup>3</sup>](https://jot-jt.github.io/), [Tsung-Yi Lin<sup>1</sup>](https://tsungyilin.info/),
[Ming-Yu Liu<sup>1</sup>](https://mingyuliu.net/), [David F. Fouhey<sup>3</sup>](https://cs.nyu.edu/~fouhey/), [Chen-Hsuan Lin<sup>1</sup>](https://chenhsuanlin.bitbucket.io/) \
<sup>1</sup>NVIDIA <sup>2</sup>University of Michigan <sup>3</sup>New York University
## Updates
- **[2025.05]** We have released the Lightspeed benchmark, a new dataset with ground-truth camera poses for validating DynPose-100K's pose annotation method. See [download instructions](#lightspeed-benchmark-download) below.
- **[2025.04]** We have made the initial release of DynPose-100K, a large-scale dataset of diverse, dynamic videos with camera annotations. See [download instructions](#dynpose-100k-download) below.
[Project Page](https://research.nvidia.com/labs/dir/dynpose-100k) | [Paper](https://arxiv.org/abs/2504.17788)

## Overview
DynPose-100K is a large-scale dataset of diverse, dynamic videos with camera annotations. We curate 100K videos containing dynamic content while ensuring cameras can be accurately estimated (including intrinsics and poses), addressing two key challenges:
1. Identifying videos suitable for camera estimation
2. Improving camera estimation algorithms for dynamic videos
| Characteristic | Value |
| --- | --- |
| **Size** | 100K videos |
| **Resolution** | 1280×720 (720p) |
| **Annotation type** | Camera poses (world-to-camera), intrinsics |
| **Format** | MP4 (videos), PKL (camera data), JPG (frames) |
| **Frame rate** | 12 fps (extracted frames) |
| **Storage** | ~200 GB (videos) + ~400 GB (frames) + 0.7 GB (annotations) |
| **License** | NVIDIA License (for DynPose-100K) |
## DynPose-100K Download
DynPose-100K contains diverse Internet videos annotated with state-of-the-art camera pose estimation. Videos were selected from 3.2M candidates through advanced filtering.
### 1. Camera annotation download (0.7 GB)
```bash
git clone https://huggingface.co/datasets/nvidia/dynpose-100k
cd dynpose-100k
unzip dynpose_100k.zip
export DYNPOSE_100K_ROOT=$(pwd)/dynpose_100k
```
### 2. Video download (~200 GB for all videos at 720p)
```bash
git clone https://github.com/snap-research/Panda-70M.git
pip install -e Panda-70M/dataset_dataloading/video2dataset
```
- For our experiments we use (1280, 720) video resolution rather than the default (640, 360). To download at this resolution (optional), set the [download size](https://github.com/snap-research/Panda-70M/blob/main/dataset_dataloading/video2dataset/video2dataset/configs/panda70m.yaml#L5) to 720
```bash
video2dataset --url_list="${DYNPOSE_100K_ROOT}/metadata.csv" --output_folder="${DYNPOSE_100K_ROOT}/video" \
--url_col="url" --caption_col="caption" --clip_col="timestamp" \
--save_additional_columns="[matching_score,desirable_filtering,shot_boundary_detection]" \
--config="video2dataset/video2dataset/configs/panda70m.yaml"
```
### 3. Video frame extraction (~400 GB for 12 fps over all videos at 720p)
```bash
python scripts/extract_frames.py --input_video_dir ${DYNPOSE_100K_ROOT}/video \
--output_frame_parent ${DYNPOSE_100K_ROOT}/frames-12fps \
--url_list ${DYNPOSE_100K_ROOT}/metadata.csv \
--uid_mapping ${DYNPOSE_100K_ROOT}/uid_mapping.csv
```
### 4. Camera pose visualization
Create a conda environment if you haven't done so:
```bash
conda env create -f environment.yml
conda activate dynpose-100k
```
Run the following under the `dynpose-100k` environment:
```bash
python scripts/visualize_pose.py --dset dynpose_100k --dset_parent ${DYNPOSE_100K_ROOT}
```
### Dataset structure
```
dynpose_100k
├── cameras
│   ├── 00011ee6-cbc1-4ec4-be6f-292bfa698fc6.pkl {uid}
│   │   ├── poses {camera poses (all frames) ([N',3,4])}
│   │   ├── intrinsics {camera intrinsic matrix ([3,3])}
│   │   ├── frame_idxs {corresponding frame indices ([N']), values within [0,N-1]}
│   │   ├── mean_reproj_error {average reprojection error from SfM ([N'])}
│   │   ├── num_points {number of reprojected points ([N'])}
│   │   └── num_frames {number of video frames N (scalar)}
│   │       # where N' is number of registered frames
│   ├── 00031466-5496-46fa-a992-77772a118b17.pkl
│   │   ├── poses # camera poses (all frames) ([N',3,4])
│   │   └── ...
│   └── ...
├── video
│   ├── 00011ee6-cbc1-4ec4-be6f-292bfa698fc6.mp4 {uid}
│   ├── 00031466-5496-46fa-a992-77772a118b17.mp4
│   └── ...
├── frames-12fps
│   ├── 00011ee6-cbc1-4ec4-be6f-292bfa698fc6 {uid}
│   │   ├── 00001.jpg {frame id}
│   │   ├── 00002.jpg
│   │   └── ...
│   ├── 00031466-5496-46fa-a992-77772a118b17
│   │   ├── 00001.jpg
│   │   └── ...
│   └── ...
├── metadata.csv {used to download video & extract frames}
│   ├── uid
│   ├── 00031466-5496-46fa-a992-77772a118b17
│   └── ...
├── uid_mapping.csv {used to download video & extract frames}
│   ├── videoID,url,timestamp,caption,matching_score,desirable_filtering,shot_boundary_detection
│   ├── --106WvnIhc,https://www.youtube.com/watch?v=--106WvnIhc,"[['0:13:34.029', '0:13:40.035']]",['A man is swimming in a pool with an inflatable mattress.'],[0.44287109375],['desirable'],"[[['0:00:00.000', '0:00:05.989']]]"
│   └── ...
└── viz_list.txt {used as index for pose visualization}
    ├── 004cd3b5-8af4-4613-97a0-c51363d80c31 {uid}
    ├── 0c3e06ae-0d0e-4c41-999a-058b4ea6a831
    └── ...
```
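The per-video camera annotations can be read with Python's standard `pickle` module. A minimal loading sketch, assuming each `.pkl` file stores a dict with the keys listed in the structure above (`load_cameras` and the printed summary are illustrative, not part of the released scripts):

```python
import pickle

import numpy as np

def load_cameras(pkl_path):
    """Load one DynPose-100K camera annotation and return its main arrays."""
    with open(pkl_path, "rb") as f:
        cam = pickle.load(f)
    poses = np.asarray(cam["poses"])            # world-to-camera, [N', 3, 4]
    intrinsics = np.asarray(cam["intrinsics"])  # [3, 3]
    frame_idxs = np.asarray(cam["frame_idxs"])  # [N'], values in [0, N-1]
    # One pose per registered frame
    assert poses.shape[0] == frame_idxs.shape[0]
    print(f"{poses.shape[0]} registered of {cam['num_frames']} total frames")
    return poses, intrinsics, frame_idxs
```

The `frame_idxs` array maps each pose row back to the corresponding extracted frame, so not every frame is guaranteed to have a pose.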
## Lightspeed Benchmark Download
Lightspeed is a challenging, photorealistic benchmark for dynamic pose estimation with **ground-truth** camera poses. It is used to validate DynPose-100K's pose annotation method.
Original video clips can be found here: https://www.youtube.com/watch?v=AsykNkUMoNU&t=1s
### 1. Downloading cameras, videos and frames (8.1 GB)
```bash
git clone https://huggingface.co/datasets/nvidia/dynpose-100k
cd dynpose-100k
unzip lightspeed.zip
export LIGHTSPEED_PARENT=$(pwd)/lightspeed
```
### 2. Dataset structure
```
lightspeed
├── poses.pkl
│   ├── 0120_LOFT {id_setting}
│   │   └── poses {camera poses (all frames) ([N,3,4])}
│   │       # where N is number of frames
│   ├── 0180_DUST
│   │   └── poses {camera poses (all frames) ([N,3,4])}
│   └── ...
├── video
│   ├── 0120_LOFT.mp4 {id_setting}
│   ├── 0180_DUST.mp4
│   └── ...
├── frames-24fps
│   ├── 0120_LOFT/images {id_setting}
│   │   ├── 00000.png {frame id}
│   │   ├── 00001.png
│   │   └── ...
│   ├── 0180_DUST/images
│   │   ├── 00000.png
│   │   └── ...
│   └── ...
└── viz_list.txt {used as index for pose visualization}
    ├── 0120_LOFT.mp4 {id_setting}
    ├── 0180_DUST.mp4
    └── ...
```
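Unlike DynPose-100K's per-video camera files, Lightspeed ships a single `poses.pkl` keyed by `id_setting`. A loading sketch under that assumption (the helper name is hypothetical):

```python
import pickle

import numpy as np

def load_lightspeed_poses(pkl_path):
    """Return a dict mapping id_setting -> ground-truth [N, 3, 4] pose array."""
    with open(pkl_path, "rb") as f:
        data = pickle.load(f)
    # e.g. data["0120_LOFT"]["poses"] holds the poses for that clip
    return {clip: np.asarray(entry["poses"]) for clip, entry in data.items()}
```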
### 3. Camera pose visualization
Create a conda environment if you haven't done so:
```bash
conda env create -f environment.yml
conda activate dynpose-100k
```
Run the following under the `dynpose-100k` environment:
```bash
python scripts/visualize_pose.py --dset lightspeed --dset_parent ${LIGHTSPEED_PARENT}
```
## FAQ
**Q: What coordinate system do the camera poses use?**
A: Camera poses are world-to-camera and follow the OpenCV "RDF" convention (same as COLMAP): the X axis points right, the Y axis points down, and the Z axis points forward, as seen from the image.
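Since each pose is a world-to-camera `[R | t]` matrix, standard pinhole-camera identities apply: the camera center in world coordinates is `C = -Rᵀt`, and inverting `[R | t]` gives the camera-to-world pose. A small sketch of these conversions (not part of the released scripts):

```python
import numpy as np

def camera_center(pose_w2c):
    """Camera center in world coordinates from a world-to-camera [3, 4] pose."""
    R, t = pose_w2c[:, :3], pose_w2c[:, 3]
    return -R.T @ t

def invert_pose(pose_w2c):
    """Convert a world-to-camera [3, 4] pose to camera-to-world (or back)."""
    R, t = pose_w2c[:, :3], pose_w2c[:, 3]
    return np.concatenate([R.T, (-R.T @ t)[:, None]], axis=1)
```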
**Q: How do I map between frame indices and camera poses?**
A: The `frame_idxs` field in each camera PKL file contains the corresponding frame indices for the poses.
**Q: How can I contribute to this dataset?**
A: Please contact the authors for collaboration opportunities.
## Citation
If you find this dataset useful in your research, please cite our paper:
```bibtex
@inproceedings{rockwell2025dynpose,
author = {Rockwell, Chris and Tung, Joseph and Lin, Tsung-Yi and Liu, Ming-Yu and Fouhey, David F. and Lin, Chen-Hsuan},
title = {Dynamic Camera Poses and Where to Find Them},
booktitle = {CVPR},
year = 2025
}
```
## Acknowledgements
We thank Gabriele Leone and the NVIDIA Lightspeed Content Tech team for sharing the original 3D assets and scene data for creating the Lightspeed benchmark. We thank Yunhao Ge, Zekun Hao, Yin Cui, Xiaohui Zeng, Zhaoshuo Li, Hanzi Mao, Jiahui Huang, Justin Johnson, JJ Park and Andrew Owens for invaluable inspirations, discussions and feedback on this project. |