---
license: cc-by-4.0
task_categories:
  - video-text-to-text
  - visual-question-answering
  - image-to-text
language:
  - en
size_categories:
  - 10M<n<100M
viewer: false
---

# 🐳 OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text

This is the repository of OmniCorpus-YT, which contains 10 million image-text interleaved documents collected from YouTube videos.

OmniCorpus is a large-scale image-text interleaved dataset that pushes the boundaries of scale and diversity, encompassing 8.6 billion images interleaved with 1,696 billion text tokens from diverse sources, significantly surpassing previous datasets. It demonstrates several advantages over its counterparts:

  1. Larger data scale: Our dataset is 1.7 times larger in images and 12.5 times larger in texts compared to the previously largest multimodal dataset, LAION-5B, while maintaining excellent data quality.
  2. Richer data diversity: Drawing from a broader range of data sources, our dataset is more diverse than other image-text interleaved datasets. It includes bilingual multimodal data in both Chinese and English, and encompasses text-centric and vision-centric documents extracted from common websites and video platforms.
  3. More flexible format: The streaming data format of our dataset offers exceptional flexibility, allowing adaptation to various data structures, including pure text corpora, image-text pairs, and interleaved data formats.

The OmniCorpus contains three sections:

  - **OmniCorpus-CC**: processed from Common Crawl dumps from 2013 through Nov./Dec. 2023.
  - **OmniCorpus-CW**: sourced from Chinese internet resources; will be available on the OpenDataLab platform.
  - **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.

Code for pre-training, evaluation, main body extraction, and filtering has been released in the official repository. A pre-trained model is available here.

## Usages

The image-text interleaved documents are recommended for the following usages:

  - **Pre-training multimodal large language models (MLLMs)**: Recent MLLMs (such as the Flamingo series, EMU series, IDEFICS series, MM1, Cambrian-1, and xGen-MM) have shown that image-text interleaved data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning.
  - **Long text-image retrieval**: We provide image-text similarities calculated with CLIP, which can convert the documents into an image-text retrieval dataset with longer texts. A retrieval model pre-trained on such data can retrieve images based on longer text, which is useful for multimodal RAG, converting pure text into multimodal samples, etc. (see the sketch after this list).
  - **Source for further dataset research**: Our data is large-scale and can serve as a source for research on data curation strategies. We provide many useful attributes as metadata for each document, which can enrich filtering strategies and reduce their cost.
  - ...
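
As a concrete sketch of the retrieval conversion mentioned above, the snippet below pairs each key frame with its neighbouring paragraphs to build (image, long-text) pairs. The pairing heuristic and the helper name `to_retrieval_pairs` are our own illustration (field layout per the Data Format section below), not part of the dataset tooling:

```python
import json

def to_retrieval_pairs(doc_dict, context=1):
    """Pair each image with its neighbouring text paragraphs.

    `context` controls how many positions on each side of the image
    are searched for paragraphs to concatenate into the long text.
    This pairing heuristic is illustrative, not prescribed.
    """
    images = json.loads(doc_dict['images'])  # timestamp strings or None
    texts = json.loads(doc_dict['texts'])    # paragraph strings or None
    pairs = []
    for i, ts in enumerate(images):
        if ts is None:
            continue
        lo, hi = max(0, i - context), min(len(texts), i + context + 1)
        long_text = ' '.join(t for t in texts[lo:hi] if t is not None)
        if long_text:
            pairs.append((ts, long_text))
    return pairs
```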

## Data Format

Following common practice, the data is organized in the Parquet file format. You might encounter errors when using `pandas.read_parquet` because the data structure contains nested elements. We recommend using `fastparquet` to load the Parquet files:

```python
import fastparquet

df = fastparquet.ParquetFile(parquet_file_path).to_pandas()

# You can also read the file in batches with pyarrow
import pyarrow.parquet as pq

parquet_file = pq.ParquetFile(filepath)
for batch in parquet_file.iter_batches():
    df = batch.to_pandas()
```

You can convert the i-th document into a dictionary:

```python
doc_dict = df.iloc[i].to_dict()
```

The document format is as follows:

```
{
    'id': <str: youtube video id>,
    'images': <bytes: JSON-encoded list of key-frame timestamps>,
    'texts': <bytes: JSON-encoded list of texts>
}
```

The `images` and `texts` fields can be decoded with `json.loads`. After decoding, they are aligned lists of the form:

```
'images': [
    <str: key_frame_1_timestamp>,
    None,
    <str: key_frame_2_timestamp>,
    None,
],
'texts': [
    None,
    <str: text_paragraph_1_content>,
    None,
    <str: text_paragraph_2_content>,
]
```
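
For example, here is a minimal sketch (the helper names are ours) of decoding one document and walking the interleaved sequence, assuming the layout illustrated above:

```python
import json

def decode_document(doc_dict):
    # Both fields are JSON-encoded; json.loads accepts bytes directly.
    images = json.loads(doc_dict['images'])  # list of timestamp strings or None
    texts = json.loads(doc_dict['texts'])    # list of paragraph strings or None
    return images, texts

def iter_document(doc_dict):
    # Walk the document in order: per the illustration above, each
    # position holds either a key-frame timestamp or a text paragraph.
    images, texts = decode_document(doc_dict)
    for ts, paragraph in zip(images, texts):
        if ts is not None:
            yield ('image', ts)
        else:
            yield ('text', paragraph)

# Hypothetical usage with the DataFrame loaded earlier:
# for kind, value in iter_document(df.iloc[0].to_dict()):
#     print(kind, value)
```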

The frames can be sampled from the corresponding YouTube videos; we provide a Python sampling tool that extracts them directly from the video stream:

```python
import os
import traceback
from multiprocessing import Pool

import yt_dlp  # pip install yt-dlp
import ffmpeg  # brew install ffmpeg; pip install ffmpeg-python

def download_hls_url(youtube_id):
    """Resolve the direct stream URL for a YouTube video without downloading it."""
    video_url = f"https://www.youtube.com/watch?v={youtube_id}"
    ydl_opts = {
        'format': 'best',
        'noplaylist': True,
        'quiet': True,
    }
    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(video_url, download=False)
        return info['url']

def extract_frame(hls_url, timestamp, output_file):
    """Grab a single frame at `timestamp` (seconds) from the stream."""
    try:
        (
            ffmpeg
            .input(hls_url, ss=timestamp, protocol_whitelist='file,http,https,tcp,tls,httpproxy')
            .output(output_file, vframes=1)
            .run(quiet=True, capture_stdout=True, capture_stderr=True)
        )
    except ffmpeg.Error as e:
        print(f"Error extracting frame at timestamp {timestamp}: {e}")
        print("FFmpeg stderr output:\n", e.stderr.decode())
        traceback.print_exc()

def extract_frames_with_hls(youtube_id, timestamps, output_dir='frames'):
    """Extract one frame per timestamp, in parallel."""
    os.makedirs(output_dir, exist_ok=True)

    hls_url = download_hls_url(youtube_id)

    tasks = [(hls_url, timestamp, os.path.join(output_dir, f"{timestamp}.jpg")) for timestamp in timestamps]

    with Pool() as pool:
        pool.starmap(extract_frame, tasks)

if __name__ == "__main__":
    extract_frames_with_hls("1xGiPUeevCM", [19.000000, 23.000000, 28.000000, 32.000000, 45.000000, 54.000000, 57.000000, 67.000000])
```
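
Each extracted frame is saved as `<timestamp>.jpg` under the output directory, matching the timestamp strings in the `images` field. Note that stream URL resolution depends on an up-to-date `yt-dlp` and on the video still being publicly available.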

## License and Terms of Use

The OmniCorpus dataset is distributed under the CC BY 4.0 License. The open-source code is released under the Apache License 2.0.

The Terms of Use (ToUs) have been developed based on widely accepted standards. By accessing or using this dataset, users acknowledge their responsibility to comply with all relevant legal, regulatory, and ethical standards.

  - All users, whether from academia or industry, must comply with the ToUs outlined in the CC BY 4.0 License.
  - Any derived datasets or models must acknowledge the use of the OmniCorpus dataset to maintain transparency.
  - The OmniCorpus must not be used in any project involving sensitive content or harmful outcomes, including but not limited to political manipulation, hate speech generation, misinformation propagation, or tasks that perpetuate harmful stereotypes or biases.
  - The use of this dataset in any manner that violates rights, such as copyright infringement, privacy breaches, or misuse of sensitive information, is strictly prohibited.
  - While we do not enforce jurisdiction-specific terms, we strongly recommend that users ensure compliance with applicable local laws and regulations.
  - The use of each specific subset must comply with the ToUs of its primary source. Specifically, the use of OmniCorpus-CC, OmniCorpus-CW, and OmniCorpus-YT must comply with the Common Crawl ToUs, the regulations on the security management of Internet data in China, and YouTube's ToUs, respectively.
  - These ToUs do not supersede the ToUs of the original content sources. Users must ensure that any use of the dataset's content complies with the original ToUs and the rights of the data subjects.

## Citation

```bibtex
@inproceedings{li2024omnicorpus,
  title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
  author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```