Update README.md
README.md CHANGED
@@ -54,7 +54,7 @@ pretty_name: TIP-I2V
 ---
 
 # Summary
-This is the dataset proposed in our paper [**TIP-I2V: A Million-Scale Real Prompt
+This is the dataset proposed in our paper [**TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation**](https://arxiv.org/abs/2411.xxxxx).
 
 TIP-I2V is the first dataset comprising over 1.70 million unique user-provided text and image prompts. Besides the prompts, TIP-I2V also includes videos generated by five state-of-the-art image-to-video models (Pika, Stable Video Diffusion, Open-Sora, I2VGen-XL, and CogVideoX-5B). TIP-I2V contributes to the development of better and safer image-to-video models.
 
@@ -211,7 +211,7 @@ The prompts and videos in our TIP-I2V are licensed under the [CC BY-NC 4.0 licen
 # Citation
 ```
 @article{wang2024tipi2v,
-  title={TIP-I2V: A Million-Scale Real Prompt
+  title={TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation},
   author={Wang, Wenhao and Yang, Yi},
   journal={arXiv preprint arXiv:2411.xxxxx},
   year={2024}