---
license: mit
---
# Open-Sora-Dataset

[[Project Page]](https://pku-yuangroup.github.io/Open-Sora-Plan/) [[Chinese Homepage]](https://pku-yuangroup.github.io/Open-Sora-Plan/blog_cn.html)

:bulb: Welcome to the Open-Sora-Dataset project! As part of the [Open-Sora-Plan](https://pku-yuangroup.github.io/Open-Sora-Plan/) project, it documents how we collect and process our datasets. We started this project to build a high-quality video dataset for the open-source world. 💪

We warmly welcome you to join us! Let's contribute to the open-source world together! Thank you for your support and contribution. :heart:

## Data Construction for Open-Sora-Plan v1.0.0

### Data distribution

We crawled 40258 videos from open-source websites under the CC0 license. All videos are high quality and watermark-free, and about 60% of them are landscape. The total duration is about **274h 05m 13s**. The main sources of data are divided into three parts (a sketch for tallying such per-source statistics follows the list):
1. [mixkit](https://mixkit.co/): we collected **1234** videos, with a total duration of about **6h 19m 32s** and **570815** frames in total. The resolution and aspect-ratio distribution histograms are as follows (bins accounting for less than 1% are not listed):

<img src="assets/v1.0.0_mixkit_resolution_plot.png" width="400" /> <img src="assets/v1.0.0_mixkit_aspect_ratio_plot.png" width="400" />

2. [pexels](https://www.pexels.com/zh-cn/): we collected **7408** videos, with a total duration of about **48h 49m 24s** and **5038641** frames in total. The resolution and aspect-ratio distribution histograms are as follows (bins accounting for less than 1% are not listed):

<img src="assets/v1.0.0_pexels_resolution_plot.png" height="300" /> <img src="assets/v1.0.0_pexels_aspect_ratio_plot.png" height="300" />

3. [pixabay](https://pixabay.com/): we collected **31616** videos, with a total duration of about **218h 56m 17s** and **23508970** frames in total. The resolution and aspect-ratio distribution histograms are as follows (bins accounting for less than 1% are not listed):

<img src="assets/v1.0.0_pixabay_resolution_plot.png" height="300" /> <img src="assets/v1.0.0_pixabay_aspect_ratio_plot.png" height="300" />
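
For reference, here is a minimal sketch of how resolution and aspect-ratio counts like those above can be tallied with OpenCV; the `videos/` folder and `*.mp4` glob are illustrative assumptions, not the project's actual layout:

```python
# Hypothetical sketch: tally resolution and aspect-ratio counts for a folder
# of videos with OpenCV. The videos/ folder and *.mp4 glob are assumptions.
import collections
import pathlib

import cv2

res_counts = collections.Counter()
ar_counts = collections.Counter()

for path in pathlib.Path("videos").glob("*.mp4"):
    cap = cv2.VideoCapture(str(path))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    if width > 0 and height > 0:
        res_counts[f"{width}x{height}"] += 1
        ar_counts[f"{width / height:.2f}"] += 1  # aspect ratio to 2 decimals

# Report bins covering at least 1% of videos, mirroring the plots above.
total = sum(res_counts.values()) or 1
for resolution, count in res_counts.most_common():
    if count / total >= 0.01:
        print(f"{resolution}: {count} ({100 * count / total:.1f}%)")
```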

### Dense captions

It is challenging to directly crawl a large quantity of high-quality dense captions from the internet, so we use a mature image-captioning model to generate them. We ran ablation experiments on two multimodal large models: [ShareGPT4V-Captioner-7B](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/README.md) and [LLaVA-1.6-34B](https://github.com/haotian-liu/LLaVA). The former is specifically designed for caption generation, while the latter is a general-purpose multimodal large model. Our ablations found the two comparable in caption quality, but their inference speed on an A800 GPU differs significantly: 40 s/it at batch size 12 for ShareGPT4V-Captioner-7B versus 15 s/it at batch size 1 for LLaVA-1.6-34B. We open-source all annotations [here](https://huggingface.co/datasets/LanguageBind/Open-Sora-Plan-v1.0.0).
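
The exact ShareGPT4V / LLaVA inference code is not reproduced here; as an illustration of the general recipe only, the sketch below captions one representative frame per video with an off-the-shelf image-captioning model, using a small BLIP checkpoint as a stand-in for the much larger models above:

```python
# Minimal sketch of frame-based video captioning. BLIP stands in for the
# much larger ShareGPT4V-Captioner-7B / LLaVA-1.6-34B used in the project.
import cv2
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def caption_video(path: str) -> str:
    """Caption the middle frame as a cheap proxy for the whole clip."""
    cap = cv2.VideoCapture(path)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, n_frames // 2)  # seek to the middle
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not decode a frame from {path}")
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    return captioner(image)[0]["generated_text"]

print(caption_video("example.mp4"))  # hypothetical input file
```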

We show some caption-length statistics below; we set the maximum length of the model to 300, which covers almost 99% of the samples.

| Name | Avg length | Max | Std |
|---|---|---|---|
| ShareGPT4V-Captioner-7B | 170.08 | 467 | 53.69 |
| LLaVA-1.6-34B | 141.76 | 472 | 48.52 |
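
To recompute statistics like these from the released annotations, a sketch along the following lines should work; note that the file name, the `"cap"` field, and the word-level length metric are assumptions, so check the actual files on the dataset page:

```python
# Sketch: caption-length statistics from an annotation JSON file.
# The schema (a list of {"cap": ...} records) and word-based lengths
# are assumptions for illustration.
import json
import statistics

with open("annotations.json") as f:  # hypothetical file name
    records = json.load(f)

lengths = [len(record["cap"].split()) for record in records]
print(f"avg={statistics.mean(lengths):.2f}  "
      f"max={max(lengths)}  "
      f"std={statistics.stdev(lengths):.2f}")
```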

## Video split

### Video with transitions

Use [panda-70m](https://github.com/snap-research/Panda-70M/tree/main/splitting) to split videos that contain transitions.
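
If you only need a quick cut-based split rather than the full Panda-70M pipeline, scene-detection libraries implement the same idea; here is a minimal sketch with [PySceneDetect](https://www.scenedetect.com/) (an illustration, not the project's pipeline):

```python
# Sketch: detect hard cuts in a clip and split it at those cuts.
# input.mp4 is a placeholder; threshold=27.0 is PySceneDetect's common default.
from scenedetect import ContentDetector, detect, split_video_ffmpeg

scenes = detect("input.mp4", ContentDetector(threshold=27.0))  # list of (start, end)
split_video_ffmpeg("input.mp4", scenes)  # writes input-Scene-001.mp4, ...
```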

### Video without transitions

1. Clone this repository and navigate to the Open-Sora-Plan folder
```
git clone https://github.com/PKU-YuanGroup/Open-Sora-Plan
cd Open-Sora-Plan
```
2. Install the required packages
```
conda create -n opensora python=3.8 -y
conda activate opensora
pip install -e .
```
3. Run the split script
```
git clone https://github.com/shaodong233/open-sora-Dataset.git
python split/no_transition.py --video_json_file /path/to/your_video /path/to/save
```

If you want to know more, check out [Requirements and Installation](https://github.com/PKU-YuanGroup/Open-Sora-Plan?tab=readme-ov-file#%EF%B8%8F-requirements-and-installation).

## Acknowledgement 👍

Qingdao Weiyi Network Technology Co., Ltd.: thank you very much for providing us with valuable data.