dataset_info:
  features:
    - name: vclip_id
      dtype: string
    - name: question_id
      dtype: int32
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: frame_indexes
      sequence: int32
    - name: choices
      struct:
        - name: A
          dtype: string
        - name: B
          dtype: string
        - name: C
          dtype: string
        - name: D
          dtype: string
        - name: E
          dtype: string
    - name: video_metadata
      struct:
        - name: CLIP-reference-interval
          sequence: float32
        - name: frame_count
          dtype: int32
        - name: frame_rate
          dtype: float32
        - name: duration
          dtype: float32
        - name: resolution
          dtype: string
        - name: frame_dimensions
          sequence: int32
        - name: codec
          dtype: string
        - name: bitrate
          dtype: int32
        - name: frame_dimensions_resized
          sequence: int32
        - name: resolution_resized
          dtype: string
        - name: video_id
          dtype: string
  splits:
    - name: train
      num_bytes: 4065923
      num_examples: 11218
    - name: test
      num_bytes: 1559334
      num_examples: 3874
  download_size: 1941543
  dataset_size: 5625257
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*

LV-Haystack: Temporal Search in Long-Form Video Understanding

Jinhui Ye¹, Zihan Wang², Haosen Sun², Keshigeyan Chandrasegaran¹, Zane Durante¹,
Cristobal Eyzaguirre¹, Yonatan Bisk³, Juan Carlos Niebles¹, Ehsan Adeli¹, Li Fei-Fei¹, Jiajun Wu¹, Manling Li²
¹Stanford University, ²Northwestern University, ³Carnegie Mellon University
Conference on AI Research, 2025
🌎Website | 🧑‍💻Code | 📄arXiv | 🏆 Leaderboard (Coming Soon)

Logo

This dataset is part of the T* project.


News

  • 1/1/2025: We are thrilled to announce T* and LV-Haystack!

Dataset Sample

{
    'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
    'question_id': 10,
    'question': 'What nail did I pull out?',
    'answer': 'E',
    'frame_indexes': [5036, 5232],
    'choices': {
        'A': 'The nail from the front wheel fender',
        'B': 'The nail from the motorcycle battery compartment',
        'C': 'The nail from the left side of the motorcycle seat',
        'D': 'The nail from the rearview mirror mount',
        'E': 'The nail on the right side of the motorcycle exhaust pipe'
    },
    'video_metadata': {
        'CLIP-reference-interval': [180.0, 240.0],  # Time interval of the video clip
        'frame_count': 14155,  # Total number of frames in the video
        'frame_rate': 30.0,  # Frame rate of the video
        'duration': 471.8333435058594,  # Duration of the video in seconds
        'resolution': '454x256',  # Original resolution of the video
        'frame_dimensions': None,  # Frame dimensions (if available)
        'codec': 'N/A',  # Codec used for the video (not available here)
        'bitrate': 0,  # Bitrate of the video
        'frame_dimensions_resized': [340, 256],  # Resized frame dimensions
        'resolution_resized': '340x256',  # Resized resolution
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # Unique video identifier
    }
}
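The `frame_indexes` field gives ground-truth keyframe indices into the full video; dividing by the clip's `frame_rate` converts them to timestamps in seconds. A minimal sketch using the values from the sample entry above:

```python
# Convert annotated frame indexes to timestamps in seconds; values are copied
# from the sample entry above (frame_rate comes from video_metadata).

frame_indexes = [5036, 5232]   # ground-truth keyframes for the question
frame_rate = 30.0              # frames per second for this clip

timestamps = [round(i / frame_rate, 2) for i in frame_indexes]
print(timestamps)  # [167.87, 174.4]
```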

Usage

from datasets import load_dataset
dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
>>> DatasetDict({
    train: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 11218
    })
    test: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 3874
    })
})
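Each example's `answer` is a letter that keys into the `choices` struct. A minimal sketch resolving the answer text, with the sample values copied inline so it runs without downloading the dataset (the helper name `resolve_answer` is ours, not part of the dataset API):

```python
# Resolve the multiple-choice answer letter to its option text; a minimal
# sketch using the sample entry shown earlier.

def resolve_answer(example):
    """Return the text of the option selected by example['answer']."""
    return example["choices"][example["answer"]]

sample = {
    "answer": "E",
    "choices": {
        "A": "The nail from the front wheel fender",
        "B": "The nail from the motorcycle battery compartment",
        "C": "The nail from the left side of the motorcycle seat",
        "D": "The nail from the rearview mirror mount",
        "E": "The nail on the right side of the motorcycle exhaust pipe",
    },
}
print(resolve_answer(sample))
# The nail on the right side of the motorcycle exhaust pipe
```

The same function works on rows returned by `load_dataset`, since struct features are materialized as Python dicts.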

Abstract

[[ABSTRACT]]

[[TITLE]] Statistics

image description

Dataset Organization

The dataset is organized for easy access to all resources. The structure is as follows:

[[DATASET_ORGANIZATION_STRUCTURE]]

Description of Key Components

[[KEY_COMPONENT_PATH]]: This directory contains resources in [[FORMAT]] format. Each file includes metadata and other details:

  • [[DATA_FILE_1]]: [[DESCRIPTION_1]].
  • [[DATA_FILE_2]]: [[DESCRIPTION_2]].
  • [[DATA_FILE_3]]: [[DESCRIPTION_3]].

Annotation Format

Each entry includes metadata in the following format:

{
    "[[FIELD_1]]": {
        "[[METADATA_FIELD_1]]": {
            "[[DETAIL_1]]": [[DETAIL_TYPE_1]],
            "[[DETAIL_2]]": [[DETAIL_TYPE_2]],
        },
        "[[BENCHMARK_FIELD]]": [
            {
                "[[QUESTION_FIELD]]": [[QUESTION_TYPE]],
                "[[TASK_FIELD]]": [[TASK_TYPE]],
                "[[LABEL_FIELD]]": [[LABEL_TYPE]],
                "[[TIMESTAMP_FIELD]]": [[TIMESTAMP_TYPE]],
                "[[MCQ_FIELD]]": "[[MCQ_OPTIONS]]",
                "[[ANSWER_FIELD_1]]": [[ANSWER_TYPE_1]],
                "[[ANSWER_FIELD_2]]": [[ANSWER_TYPE_2]],
                "[[ANSWER_FIELD_3]]": [[ANSWER_TYPE_3]],
                "[[ANSWER_FIELD_4]]": [[ANSWER_TYPE_4]],
                "[[ANSWER_FIELD_5]]": [[ANSWER_TYPE_5]]
            },
            // Next question
        ]
    },
    // Next entry
}

Limitations

[[LIMITATIONS]]

Contact

  • [[CONTACT_1]]
  • [[CONTACT_2]]
  • [[CONTACT_3]]

Citation

[[BIBTEX]]