dataset_info:
  features:
    - name: vclip_id
      dtype: string
    - name: question_id
      dtype: int64
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: frame_indexes
      sequence: int64
    - name: choices
      struct:
        - name: A
          dtype: string
        - name: B
          dtype: string
        - name: C
          dtype: string
        - name: D
          dtype: string
        - name: E
          dtype: string
    - name: video_metadata
      struct:
        - name: CLIP-reference-interval-clip
          sequence: float64
        - name: CLIP-reference-interval-video
          sequence: float64
        - name: bitrate
          dtype: int64
        - name: codec
          dtype: string
        - name: frame_dimensions
          sequence: int64
        - name: frame_dimensions_resized
          sequence: int64
        - name: frame_rate
          dtype: float64
        - name: resolution
          dtype: string
        - name: resolution_resized
          dtype: string
        - name: vclip_duration
          dtype: float64
        - name: vclip_frame_count
          dtype: int64
        - name: vclip_interval_in_video
          sequence: float64
        - name: video_duration
          dtype: float64
        - name: video_frame_count
          dtype: int64
        - name: video_id
          dtype: string
  splits:
    - name: train
      num_bytes: 5358616
      num_examples: 11218
    - name: test
      num_bytes: 1977870
      num_examples: 3874
  download_size: 2168577
  dataset_size: 7336486
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*

LV-Haystack: Temporal Search for Long-Form Video Understanding

Jinhui Ye¹, Zihan Wang², Haosen Sun², Keshigeyan Chandrasegaran¹, Zane Durante¹, Cristobal Eyzaguirre¹, Yonatan Bisk³, Juan Carlos Niebles¹, Ehsan Adeli¹, Li Fei-Fei¹, Jiajun Wu¹, Manling Li²
¹Stanford University, ²Northwestern University, ³Carnegie Mellon University
Dataset is part of the T* project
🌎Website | 🧑‍💻Code | 📄arXiv | 🏆 Leaderboard (Coming Soon)


Dataset Sample

{
    'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
    'question_id': 10,
    'question': 'What nail did I pull out?',
    'answer': 'E',
    'frame_indexes': [5036, 5232], # the keyframe indexes
    'choices': {
        'A': 'The nail from the front wheel fender',
        'B': 'The nail from the motorcycle battery compartment',
        'C': 'The nail from the left side of the motorcycle seat',
        'D': 'The nail from the rearview mirror mount',
        'E': 'The nail on the right side of the motorcycle exhaust pipe'
    },
    'video_metadata': {
        'CLIP-reference-interval-vclip': [180.0, 240.0],  # Time interval (sec) within the clip that CLIP considers important; computed as CLIP-reference-interval-video - vclip_interval_in_video[0]
        'CLIP-reference-interval-video': [180.0, 240.0],  # Time interval (sec) within the full video that CLIP considers important; taken from the Ego4D dataset and used by annotators to quickly locate the relevant part of the video
        'vclip_interval_in_video': [0.0, 480.06667277018227],  # Start and end time (sec) of the clip within the full video: for [a, b], the clip spans from second a to second b
        'frame_count': 14155,  # Total number of frames in the video
        'frame_rate': 30.0,  # Frame rate of the video
        'duration': 471.8333435058594,  # Duration of the video in seconds
        'resolution': '454x256',  # Original resolution of the video
        'frame_dimensions': None,  # Frame dimensions (if available)
        'codec': 'N/A',  # Codec used for the video (if available)
        'bitrate': 0,  # Bitrate of the video (if available)
        'frame_dimensions_resized': [340, 256],  # Resized frame dimensions
        'resolution_resized': '340x256',  # Resized resolution
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # Unique video identifier
    }
}
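
Note that frame_indexes are frame numbers rather than timestamps. Assuming they are counted at the clip's frame_rate, they can be converted to seconds with a small helper like the one below (the function name is ours, not part of the dataset):

from datasets import load_dataset

def keyframe_seconds(sample):
    """Convert annotated keyframe indexes to timestamps in seconds,
    assuming the indexes are counted at the clip's frame rate."""
    fps = sample["video_metadata"]["frame_rate"]
    return [idx / fps for idx in sample["frame_indexes"]]

# For the sample above: [5036 / 30.0, 5232 / 30.0] -> [167.9, 174.4] seconds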

Dataset exploration

TODO: add hyperlink to the demo.

Dataset Usage

from datasets import load_dataset
dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
>>> DatasetDict({
    train: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 11218
    })
    test: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 3874
    })
})
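
Each split behaves like a standard datasets.Dataset, so individual QA pairs can be indexed directly. For example:

sample = dataset["test"][0]
print(sample["question"])                       # question text
print(sample["choices"])                        # dict of options 'A'-'E'
print(sample["answer"])                         # letter of the correct option, e.g. 'E'
print(sample["frame_indexes"])                  # annotated keyframe indexes
print(sample["video_metadata"]["frame_rate"])   # clip/video metadata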

Video Source Download

TODO: We plan to provide a script for downloading the required subset from Ego4D. For now, you can refer to their official guide here. Your code would look like the following:

pip install ego4d

ego4d --output_directory=your_path/videos/ \
  --datasets full_scale annotations \
  --metadata \
  --video_uid_file video_uids.txt

python process_videos_to_clips.py

Please find video_uids.txt in our repo, or generate it yourself:

import datasets

metadata = datasets.load_dataset("LVHaystack/LongVideoHaystack-metadata")
with open("video_uids.txt", "w") as file:
    # load_dataset returns a DatasetDict, so read the column from each split
    for split in metadata.values():
        for video_id in split["video_id"]:
            file.write(video_id + " ")

Then, transform the downloaded full-length videos into the video clips used by this dataset (this is what process_videos_to_clips.py above does):
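
The contents of process_videos_to_clips.py are not reproduced here; the sketch below only illustrates one way the cutting could be done, assuming ffmpeg is installed and using the vclip_interval_in_video and video_id fields from the annotations (the output paths and stream-copy settings are placeholders of ours):

import subprocess
from datasets import load_dataset

# Rough sketch only (see process_videos_to_clips.py for the actual script):
# cut each unique clip out of its downloaded Ego4D video with ffmpeg, using
# vclip_interval_in_video from the annotations. Paths below are placeholders.
dataset = load_dataset("LVHaystack/LongVideoHaystack")
seen = set()
for split in dataset.values():
    for row in split:
        if row["vclip_id"] in seen:  # many QA pairs share the same clip
            continue
        seen.add(row["vclip_id"])
        meta = row["video_metadata"]
        start, end = meta["vclip_interval_in_video"]
        src = f"your_path/videos/full_scale/{meta['video_id']}.mp4"
        dst = f"your_path/clips/{row['vclip_id']}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-ss", str(start), "-to", str(end),  # clip boundaries in seconds
             "-c", "copy",  # stream copy; re-encode if exact cut points matter
             dst],
            check=True,
        )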


Dataset Statistics Summary

| Metric | Total | Train | Test |
|---|---|---|---|
| **Video Statistics** | | | |
| Total Videos | 988 | 744 | 244 |
| Total Video Duration (hr) | 423.3 | 322.2 | 101.0 |
| Avg. Video Duration (min) | 25.7 | 26.0 | 24.8 |
| **Clip Statistics** | | | |
| Total Video Clips | 1,324 | 996 | 328 |
| Total Video Clip Duration (hr) | 180.4 | 135.3 | 45.0 |
| Avg. Video Clip Duration (min) | 8.2 | 8.2 | 8.2 |
| **Frame Statistics** | | | |
| Total Frames (k) | 45,700 | 34,800 | 10,900 |
| Avg. Frames per Video (k) | 46.3 | 46.8 | 44.7 |
| Ratio of Keyframe / Frame (‰) | 0.62 | 0.59 | 0.71 |
| **QA Statistics** | | | |
| Total QA Pairs | 15,092 | 11,218 | 3,874 |
| Avg. QA Pairs per Video | 15.3 | 15.1 | 15.9 |
| Avg. QA Pairs per Clip | 11.4 | 11.3 | 11.8 |
| Avg. Keyframes per Question | 1.88 | 1.84 | 2.01 |

Evaluation scripts

Please refer to ./eval.py.
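
Purely as an illustration of how predicted keyframes could be scored against the frame_indexes annotations, here is a hypothetical recall-style check with a tolerance window (the function, the 5-second tolerance, and the metric definition are our assumptions, not the official protocol in eval.py):

def keyframe_recall(pred_frames, gt_frames, fps, tolerance_sec=5.0):
    """Hypothetical metric: fraction of annotated keyframes that have a
    predicted frame within `tolerance_sec` seconds (not the official eval)."""
    if not gt_frames:
        return 0.0
    tol = tolerance_sec * fps
    hits = sum(any(abs(p - g) <= tol for p in pred_frames) for g in gt_frames)
    return hits / len(gt_frames)

# With the sample above (frame_rate = 30.0):
# keyframe_recall([5040], [5036, 5232], fps=30.0) -> 0.5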

Contact

Citation

@misc{tstar,
      title={Re-thinking Temporal Search for Long-Form Video Understanding}, 
      author={Jinhui Ye and Zihan Wang and Haosen Sun and Keshigeyan Chandrasegaran and Zane Durante and Cristobal Eyzaguirre and Yonatan Bisk and Juan Carlos Niebles and Ehsan Adeli and Li Fei-Fei and Jiajun Wu and Manling Li},
      year={2025},
      eprint={2501.TODO},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Website template borrowed from HourVideo.