Srikant86 committed
Commit 6357cd5 · Parent(s): fc70812

readme updated

Files changed (1):
  1. README.md +117 -9

README.md CHANGED
@@ -62,14 +62,122 @@ language:
 size_categories:
 - 1K<n<10K
 ---
- # Proposed MVTamperBench, a novel benchmark that systematically evaluates the adversarial robustness of VLMs against video specific tampering techniques, with a focus on temporal reasoning and multimodal coherence.
-
- ## Dataset Description
- MVTamperBench applies five distinct tampering techniques to the original MVBench videos: Dropping, Masking, Substitution, Repetition, and Rotation. Each tampering effect introduces unique adversarial challenges to test VLM robustness under various conditions
-
- ### Tampering Techniques
- - **Dropping**: Removes a 1-second segment, creating temporal discontinuity.
- - **Masking**: Overlays a black rectangle on a 1-second segment, simulating visual data loss.
- - **Rotation**: Rotates a 1-second segment by 180 degrees, introducing spatial distortion.
- - **Substitution**: Replaces a 1-second segment with a random clip from another video, disrupting the temporal and contextual flow.
- - **Repetition**: Repeats a 1-second segment, introducing temporal redundancy.
+ # MVTamperBench Dataset
+
+ ## Overview
+
+ **MVTamperBench** is a benchmark for evaluating the robustness of Vision-Language Models (VLMs) against adversarial video tampering. It builds on the diverse, well-structured MVBench dataset, systematically augmenting its videos with five distinct tampering techniques:
+
+ 1. **Frame Dropping**: Removes a 1-second segment, creating temporal discontinuity.
+ 2. **Masking**: Overlays a black rectangle on a 1-second segment, simulating visual data loss.
+ 3. **Repetition**: Repeats a 1-second segment, introducing temporal redundancy.
+ 4. **Rotation**: Rotates a 1-second segment by 180 degrees, introducing spatial distortion.
+ 5. **Substitution**: Replaces a 1-second segment with a random clip from another video, disrupting temporal and contextual flow.
+
+ Each tampering effect is applied to the middle of the video to ensure consistent evaluation across models; a minimal sketch of one effect is shown below.
+
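+ Below is a minimal sketch of the masking effect on a decoded frame sequence. The function name, frame-list representation, and blacked-out region are illustrative assumptions, not the benchmark's released implementation:
+
+ ```python
+ import numpy as np
+
+ def mask_middle_segment(frames: list[np.ndarray], fps: float) -> list[np.ndarray]:
+     """Black out a rectangle on the 1-second segment at the video's center."""
+     n = len(frames)
+     seg = max(1, int(round(fps)))   # number of frames spanning 1 second
+     start = max(0, (n - seg) // 2)  # center the tampered window
+     out = [f.copy() for f in frames]
+     for i in range(start, min(start + seg, n)):
+         h, w = out[i].shape[:2]
+         # Illustrative region: black rectangle over the central quarter-area.
+         out[i][h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 0
+     return out
+ ```
+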
+ ---
+
+ ## Dataset Details
+
+ The MVTamperBench dataset is built upon the **MVBench dataset**, a widely recognized collection used in video-language evaluation. It features a broad spectrum of content to ensure robust model evaluation, including:
+
+ - **Content Diversity**: Spanning a variety of objects, activities, and settings.
+ - **Temporal Dynamics**: Videos with temporal dependencies for coherence testing.
+ - **Benchmark Utility**: Built from recognized datasets, enabling comparison with prior work.
+
+ ### Incorporated Datasets
+
+ The MVTamperBench dataset integrates videos from several sources, each contributing unique characteristics:
+
+ | Dataset Name             | Primary Scene Type and Unique Characteristics      |
+ |--------------------------|----------------------------------------------------|
+ | STAR                     | Indoor actions and object interactions             |
+ | PAXION                   | Real-world scenes with nuanced actions             |
+ | Moments in Time (MiT) V1 | Indoor/outdoor scenes across varied contexts       |
+ | FunQA                    | Humor-focused, creative, real-world events         |
+ | CLEVRER                  | Simulated scenes for object movement and reasoning |
+ | Perception Test          | First/third-person views for object tracking       |
+ | Charades-STA             | Indoor human actions and interactions              |
+ | MoVQA                    | Diverse scenes for scene transition comprehension  |
+ | VLN-CE                   | Indoor navigation from agent perspective           |
+ | TVQA                     | TV show scenes for episodic reasoning              |
+
+ ### Dataset Expansion
+
+ The original MVBench dataset contains 3,699 videos; applying each of the five tampering effects to every video yields a total of **18,495 tampered videos** (3,699 × 5). This ensures:
+
+ - **Diversity**: Varied adversarial challenges for robust evaluation.
+ - **Volume**: Sufficient data for training and testing.
+
+ Below is a visual representation of the tampered video length distribution:
+
+ ![Tampered Video Length Distribution](path/to/tampered_video_length_distribution.png "Distribution of tampered video lengths")
+
+ ---
+
+ ## Benchmark Construction
+
+ MVTamperBench is built with modularity, scalability, and reproducibility at its core:
+
+ - **Modularity**: Each tampering effect is implemented as a reusable class, allowing for easy adaptation.
+ - **Scalability**: Supports customizable tampering parameters, such as location and duration.
+ - **Integration**: Fully compatible with VLMEvalKit, enabling seamless evaluation of tampering robustness alongside general VLM capabilities.
+
+ By keeping the tampering duration (1 second) and location (center of the video) fixed, MVTamperBench ensures fair, comparable evaluations across models; the sketch below illustrates this design.
+
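+ A hypothetical sketch of the modular, class-based design described above. The class names, `location`/`duration` parameters, and frame-list representation are illustrative assumptions, not the benchmark's actual code:
+
+ ```python
+ import numpy as np
+
+ class TamperEffect:
+     """Base class: tamper a window of `duration` seconds centered at `location`."""
+
+     def __init__(self, location: float = 0.5, duration: float = 1.0):
+         self.location = location  # relative position (0-1) of the window center
+         self.duration = duration  # window length in seconds
+
+     def window(self, n_frames: int, fps: float) -> tuple[int, int]:
+         seg = max(1, int(round(self.duration * fps)))
+         start = max(0, int(self.location * n_frames) - seg // 2)
+         return start, min(start + seg, n_frames)
+
+     def apply(self, frames: list[np.ndarray], fps: float) -> list[np.ndarray]:
+         raise NotImplementedError
+
+ class DropEffect(TamperEffect):
+     """Frame Dropping: remove the window entirely."""
+     def apply(self, frames, fps):
+         start, end = self.window(len(frames), fps)
+         return frames[:start] + frames[end:]
+
+ class RotateEffect(TamperEffect):
+     """Rotation: rotate frames inside the window by 180 degrees."""
+     def apply(self, frames, fps):
+         start, end = self.window(len(frames), fps)
+         return [np.rot90(f, 2) if start <= i < end else f
+                 for i, f in enumerate(frames)]
+ ```
+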
+ ---
+
+ ## Download Dataset
+
+ You can access the MVTamperBench dataset directly from the Hugging Face repository:
+
+ [Download MVTamperBench Dataset](https://huggingface.co/datasets/Srikant86/MVTamperBench)
+
+ ---
+
+ ## How to Use
+
+ 1. Clone the Hugging Face repository:
+ ```bash
+ git clone https://huggingface.co/datasets/Srikant86/MVTamperBench
+ cd MVTamperBench
+ ```
+
+ 2. Load the dataset using the Hugging Face `datasets` library:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("Srikant86/MVTamperBench")
+ ```
+
+ 3. Explore the dataset structure and metadata:
+ ```python
+ print(dataset["train"])
+ ```
+
+ 4. Use the dataset for tampering detection tasks, model evaluation, and more; a sketch of a simple evaluation loop follows.
+
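+ For example, a minimal tamper-classification loop. The split name and the `"video"`/`"tamper_type"` fields are assumptions about the dataset schema, and the predictor is a placeholder for your VLM pipeline:
+
+ ```python
+ from datasets import load_dataset
+
+ def classify(video) -> str:
+     """Placeholder predictor; replace with your VLM inference."""
+     return "masking"
+
+ dataset = load_dataset("Srikant86/MVTamperBench", split="train")
+
+ # Score predicted tamper types against the ground-truth labels.
+ correct = sum(classify(ex["video"]) == ex["tamper_type"] for ex in dataset)
+ print(f"Tamper-type accuracy: {correct / len(dataset):.3f}")
+ ```
+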
+ ---
+
+ ## Citation
+
+ If you use MVTamperBench in your research, please cite:
+
+ ```bibtex
+ @misc{agarwal2024mvtamperbenchevaluatingrobustnessvisionlanguage,
+       title={MVTamperBench: Evaluating Robustness of Vision-Language Models},
+       author={Amit Agarwal and Srikant Panda and Angeline Charles and Bhargava Kumar and Hitesh Patel and Priyaranjan Pattnayak and Taki Hasan Rafi and Tejaswini Kumar and Dong-Kyu Chae},
+       year={2024},
+       eprint={2412.19794},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2412.19794},
+ }
+ ```
+
+ ---
+
+ ## License
+
+ MVTamperBench is released under the MIT License. See `LICENSE` for details.