viewer: false
---

<p align="center">
<h1 align="center">🐳 OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text</h1>
</p>

This is the repository of OmniCorpus-CC, which contains 988 million image-text interleaved documents collected from [Common Crawl](https://commoncrawl.org/).

- Repository: https://github.com/OpenGVLab/OmniCorpus
- Paper (ICLR 2025 Spotlight): https://arxiv.org/abs/2406.08418
OmniCorpus is a large-scale image-text interleaved dataset that pushes the boundaries of scale and diversity, encompassing **8.6 billion images** interleaved with **1,696 billion text tokens** from diverse sources, significantly surpassing previous datasets.
This dataset demonstrates several advantages over its counterparts:
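To illustrate how an interleaved document can be consumed downstream, here is a minimal sketch assuming a hypothetical record schema with parallel `texts` and `images` lists, where `None` marks the slot held by the other modality (the field names and the sample document are illustrative, not the dataset's confirmed format):

```python
def interleave(doc):
    """Merge parallel `texts`/`images` lists (hypothetical schema) into an
    ordered list of (kind, value) segments, preserving document order.
    `None` entries mark positions occupied by the other modality."""
    segments = []
    for text, image in zip(doc["texts"], doc["images"]):
        if image is not None:
            segments.append(("image", image))
        if text is not None:
            segments.append(("text", text))
    return segments

# Illustrative document: one image followed by two text passages.
doc = {
    "texts": [None, "A caption about a whale.", "More running text."],
    "images": ["https://example.com/whale.jpg", None, None],
}
print(interleave(doc))
```

A sequence like this can then be tokenized segment by segment, with image slots replaced by the placeholder tokens a given multimodal model expects.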
    extract_frames_with_hls("1xGiPUeevCM", [19.000000, 23.000000, 28.000000, 32.000000, 45.000000, 54.000000, 57.000000, 67.000000])
```
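The call above fetches frames at second-level timestamps over HLS. When a video has instead been decoded locally, the same timestamps can be mapped to frame indices before sampling. A minimal sketch, assuming a known constant frame rate (the `fps` value is illustrative):

```python
def timestamps_to_frame_indices(timestamps, fps=30.0):
    """Map second-level timestamps to the nearest frame index,
    assuming a constant frame rate of `fps` frames per second."""
    return [round(t * fps) for t in timestamps]

# Illustrative use with the first few timestamps from the snippet above.
print(timestamps_to_frame_indices([19.0, 23.0, 28.0], fps=30.0))
```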

# License and Terms of Use

The OmniCorpus dataset is distributed under [the CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/). The open-source code is released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

The Terms of Use (ToUs) have been developed based on widely accepted standards. By accessing or using this dataset, users acknowledge their responsibility to comply with all relevant legal, regulatory, and ethical standards.

- All users, whether from academia or industry, must comply with the ToUs outlined in the CC BY 4.0 License.
- Any derived datasets or models must acknowledge the use of the OmniCorpus dataset to maintain transparency.
- OmniCorpus must not be used in any project involving sensitive content or harmful outcomes, including but not limited to political manipulation, hate speech generation, misinformation propagation, or tasks that perpetuate harmful stereotypes or biases.
- Any use of this dataset that violates rights, such as copyright infringement, privacy breaches, or misuse of sensitive information, is strictly prohibited.
- While we do not enforce jurisdiction-specific terms, we strongly recommend that users ensure compliance with applicable local laws and regulations.
- The use of a specific subset must comply with the ToUs of its primary source. Specifically, the use of OmniCorpus-CC, OmniCorpus-CW, and OmniCorpus-YT must comply with [the Common Crawl ToUs](https://commoncrawl.org/terms-of-use), the [regulations](https://www.gov.cn/zhengce/content/202409/content_6977766.htm) on the security management of Internet data in China, and [YouTube's ToUs](https://www.youtube.com/terms), respectively.
- These ToUs do not supersede the ToUs of the original content sources. Users must ensure that any use of the dataset's content complies with the original ToUs and the rights of the data subjects.

# Citation

```
@inproceedings{li2024omnicorpus,
    title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
    author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025}
}
```