Update README.md
README.md (CHANGED)

# Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models

[[Project Page]](https://vita-group.github.io/Diffusion4D/) | [[Arxiv]](https://arxiv.org/abs/2405.16645) | [[Code]](https://github.com/VITA-Group/Diffusion4D)

## News
- 2024.6.4: Released the rendered data from the curated [objaverse-1.0](https://huggingface.co/datasets/hw-liang/Diffusion4D/tree/main/objaverse1.0_curated) subset, including orbital videos of dynamic 3D objects, orbital videos of static 3D objects, and monocular videos from the front view.
- 2024.5.27: Released the metadata for the objects!

## Overview
We collect a large-scale, high-quality dynamic 3D (4D) dataset sourced from the vast 3D data corpus of [Objaverse-1.0](https://objaverse.allenai.org/objaverse-1.0/) and [Objaverse-XL](https://github.com/allenai/objaverse-xl). We apply a series of empirical rules to filter the dataset; you can find more details in our paper. In this part, we will release the selected 4D assets, including:
1. The IDs of the selected high-quality 4D objects.
2. A Blender render script with optional settings for rendering your own customized data.
3. 4D images rendered by our team to save your GPU time (see the download sketch below). With 8 GPUs and a total of 16 threads, rendering the curated Objaverse-1.0 subset took 5.5 days.
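
The rendered data for the curated Objaverse-1.0 subset sit under the `objaverse1.0_curated` folder of this dataset repository (linked in the news above). As a minimal sketch, assuming that folder layout, they could be pulled locally with the `huggingface_hub` client; check the file listing on the dataset page before relying on the pattern below.

```python
# Minimal sketch: download only the curated Objaverse-1.0 renders from this repo.
# Assumes the files live under "objaverse1.0_curated/" as linked above;
# verify the actual layout on the dataset page first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="hw-liang/Diffusion4D",
    repo_type="dataset",
    allow_patterns=["objaverse1.0_curated/*"],  # skip everything else
)
print("Downloaded to:", local_dir)
```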

## 4D Dataset ID/Metadata
We collect 365k dynamic 3D assets from Objaverse-1.0 (42k) and Objaverse-XL (323k), and then curate a high-quality subset to train our models.

Metadata of the animated objects (323k) from Objaverse-XL can be found in [meta_xl_animation_tot.csv](https://huggingface.co/datasets/hw-liang/Diffusion4D/blob/main/meta_xl_animation_tot.csv).
We also release the metadata of all successfully rendered objects from Objaverse-XL's GitHub subset in [meta_xl_tot.csv](https://huggingface.co/datasets/hw-liang/Diffusion4D/blob/main/meta_xl_tot.csv).
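
To inspect these metadata files quickly, the sketch below fetches one CSV with `huggingface_hub` and opens it with `pandas`. The column layout is not documented in this README, so the code only previews whatever fields the file contains rather than assuming specific names.

```python
# Minimal sketch: fetch and preview the Objaverse-XL animation metadata CSV.
# No column names are assumed; we just look at what the file provides.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="hw-liang/Diffusion4D",
    repo_type="dataset",
    filename="meta_xl_animation_tot.csv",
)
meta = pd.read_csv(csv_path)
print(meta.columns.tolist())  # available fields
print(meta.head())
print(f"{len(meta)} animated objects listed")
```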

For text-to-4D generation, the captions are obtained from [Cap3D](https://huggingface.co/datasets/tiange/Cap3D).
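
To pair objects with text prompts, the Cap3D captions can be loaded the same way. The filename and the header-less (uid, caption) two-column layout used below are assumptions about the `tiange/Cap3D` repository, not something this README specifies, so confirm them on the Cap3D dataset page.

```python
# Minimal sketch: load Cap3D captions and index them by object UID.
# The filename and the two-column, header-less (uid, caption) layout are
# assumptions about the tiange/Cap3D dataset; check its file listing first.
import pandas as pd
from huggingface_hub import hf_hub_download

cap_path = hf_hub_download(
    repo_id="tiange/Cap3D",
    repo_type="dataset",
    filename="Cap3D_automated_Objaverse_full.csv",  # assumed filename
)
captions = pd.read_csv(cap_path, header=None, names=["uid", "caption"])
caption_by_uid = dict(zip(captions["uid"], captions["caption"]))
print(len(caption_by_uid), "captions loaded")
```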


## Citation