---
license: apache-2.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - Video
  - Text
size_categories:
  - 1K<n<10K
---
Links: arXiv | Website | GitHub Code

Visual Spatial Intelligence Benchmark (VSI-Bench)

This repository contains the visual spatial intelligence benchmark (VSI-Bench), introduced in Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces.

Files

The test-00000-of-00001.parquet file contains the full dataset annotations and pre-loaded images for processing with HF Datasets. It can be loaded as follows:

from datasets import load_dataset
vsi_bench = load_dataset("nyu-visionx/VSI-Bench")

Additionally, we provide the compressed raw videos in *.zip.
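To work with the videos locally, the archives can be unpacked with Python's standard zipfile module. The following is a minimal sketch; it assumes the *.zip archives have been downloaded into the current directory, and the target directory name is only illustrative:

import glob
import zipfile

# Extract every downloaded VSI-Bench video archive in the current directory.
for archive in glob.glob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("videos")  # hypothetical output directory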

Dataset Description

VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. It comprises over 5,000 question-answer pairs derived from 288 real videos, sourced from the validation sets of the public indoor 3D scene reconstruction datasets ScanNet, ScanNet++, and ARKitScenes. The videos cover diverse environments, including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories), across multiple geographic regions. Repurposing these existing 3D reconstruction and understanding datasets provides accurate object-level annotations, which we use in question generation, and could enable future study of the connection between MLLMs and 3D reconstruction.

The dataset contains the following fields:

| Field Name | Description |
| --- | --- |
| idx | Global index of the entry in the dataset |
| dataset | Video source: scannet, arkitscenes, or scannetpp |
| question_type | The type of task for the question |
| question | Question asked about the video |
| options | Answer choices for the question (only for multiple-choice questions) |
| ground_truth | Correct answer to the question |
| video_suffix | Suffix of the video |
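Each entry exposes these fields directly once loaded. A minimal sketch of inspecting one entry (assuming the single split is named test, matching the parquet file above):

from datasets import load_dataset

# Load the test split and print the annotation fields of the first entry.
vsi_bench = load_dataset("nyu-visionx/VSI-Bench", split="test")
example = vsi_bench[0]
for field in ["idx", "dataset", "question_type", "question", "options", "ground_truth", "video_suffix"]:
    print(field, ":", example[field])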

Example Code
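The snippet below aggregates per-question evaluation results into overall and per-source accuracies. It assumes a results file (vsi_bench_results.csv is a hypothetical name) with one row per question, a dataset column naming the video source, and a result column that is 1 for a correct answer and 0 otherwise: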

import pandas as pd

# Load the per-question evaluation results into a DataFrame
# ('result' is assumed to be 1 for correct and 0 for incorrect)
df = pd.read_csv('vsi_bench_results.csv')

# Define a function to calculate accuracy for a given video source
def calculate_accuracy(df, source):
    source_df = df[df['dataset'] == source]
    accuracy = source_df['result'].mean()
    return accuracy

# Calculate accuracy for each video source
accuracy_scannet = calculate_accuracy(df, 'scannet')
accuracy_arkitscenes = calculate_accuracy(df, 'arkitscenes')
accuracy_scannetpp = calculate_accuracy(df, 'scannetpp')

# Compute the overall accuracy across all questions
overall_accuracy = df['result'].mean()

# Print the results
print(f"VSI-Bench Accuracy: {overall_accuracy:.4f}")
print()
print("Source Accuracies:")
print(f"ScanNet Accuracy: {accuracy_scannet:.4f}")
print(f"ARKitScenes Accuracy: {accuracy_arkitscenes:.4f}")
print(f"ScanNet++ Accuracy: {accuracy_scannetpp:.4f}")
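Results can also be broken down by task. A short follow-up sketch under the same assumptions, provided the results CSV additionally carries a question_type column copied from the benchmark annotations:

# Accuracy per question type (assumes a 'question_type' column in the results CSV)
per_type_accuracy = df.groupby('question_type')['result'].mean()
print(per_type_accuracy.round(4))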

Citation

@article{yang2024think,
    title={{Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces}},
    author={Yang, Jihan and Yang, Shusheng and Gupta, Anjali and Han, Rilyn and Fei-Fei, Li and Xie, Saining},
    year={2024},
    journal={arXiv preprint},
}