Modalities: Text · Formats: parquet · Libraries: Datasets, pandas
ZihanWang314 committed
Commit e18de20 · Parent(s): 8f87cad

Files changed (2):
  1. README.md (+53 -71)
  2. eval.py (+197 -0)
README.md CHANGED
@@ -70,7 +70,7 @@ configs:
  ---
 
  <h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
- LV-Haystack: Temporal Search in Long-Form Video Understanding</h1>
 
 
  <p align='center' style="text-align:center;font-size:1.25em;">
@@ -87,7 +87,8 @@ configs:
  <a href="https://jiajunwu.com/" target="_blank">Jiajun Wu<sup>1</sup></a>,&nbsp;
  <a href="https://limanling.github.io/" target="_blank">Manling Li<sup>2</sup></a><br/>
  &nbsp;Stanford University<sup>1</sup>, Northwestern University<sup>2</sup>, Carnegie Mellon University<sup>3</sup><br/>
- <em>Conference on AI Research, 2025</em><br/>
  <a href="https://examplewebsite.com" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website</a> |
  <a href="https://examplecode.com" title="Dataset" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
  <a href="https://arxiv.org/examplepaper" title="arXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv</a> |
@@ -100,18 +101,14 @@ configs:
  Dataset is part of the <a href="">T* project</a></p>
 
  <p align=center>
- NOTE: Does Manling need Stanford Affiliation? <br>
- NOTE: Fill in website url etc
- </p>
 
 
- ## News
 
- - **1/1/2025: Thrilled to announce T\* and LV-Haystack!**
 
 
- ## Dataset Sample
 
  ```python
  {
@@ -119,7 +116,7 @@ NOTE: Fill in website url etc
  'question_id': 10,
  'question': 'What nail did I pull out?',
  'answer': 'E',
- 'frame_indexes': [5036, 5232],
  'choices': {
  'A': 'The nail from the front wheel fender',
  'B': 'The nail from the motorcycle battery compartment',
@@ -128,22 +125,25 @@ NOTE: Fill in website url etc
  'E': 'The nail on the right side of the motorcycle exhaust pipe'
  },
  'video_metadata': {
- 'CLIP-reference-interval': [180.0, 240.0], # Time interval of the video clip
  'frame_count': 14155, # Total number of frames in the video
  'frame_rate': 30.0, # Frame rate of the video
  'duration': 471.8333435058594, # Duration of the video in seconds
  'resolution': '454x256', # Original resolution of the video
  'frame_dimensions': None, # Frame dimensions (if available)
- 'codec': 'N/A', # Codec used for the video (not available here)
- 'bitrate': 0, # Bitrate of the video
  'frame_dimensions_resized': [340, 256], # Resized frame dimensions
  'resolution_resized': '340x256', # Resized resolution
  'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991' # Unique video identifier
  }
  }
  ```
 
- ## Usage
 
  ```python
  from datasets import load_dataset
@@ -163,79 +163,61 @@ print(dataset)
  })
  ```
 
 
- ## Abstract
-
- [[ABSTRACT]]
 
- ## [[TITLE]] Statistics
 
- <img src="[[STATISTICS_IMAGE_LINK]]" alt="image description" width="850" height="200">
 
 
- ## Dataset Organization
-
- The dataset is organized to facilitate easy access to all resources. Below is the structure:
- ```
- [[DATASET_ORGANIZATION_STRUCTURE]]
- ```
-
- ### Description of Key Components
- ```[[KEY_COMPONENT_PATH]]```: This directory contains resources in [[FORMAT]] format. Each file includes metadata and other details:
-
- - ```[[DATA_FILE_1]]```:
-   - [[DESCRIPTION_1]].
 
- - ```[[DATA_FILE_2]]```:
-   - [[DESCRIPTION_2]].
 
- - ```[[DATA_FILE_3]]```:
-   - [[DESCRIPTION_3]].
 
- ### Annotation Format
- Each entry includes metadata in the following format:
 
- ```
- {
-   "[[FIELD_1]]": {
-     "[[METADATA_FIELD_1]]": {
-       "[[DETAIL_1]]": [[DETAIL_TYPE_1]],
-       "[[DETAIL_2]]": [[DETAIL_TYPE_2]],
-     },
-     "[[BENCHMARK_FIELD]]": [
-       {
-         "[[QUESTION_FIELD]]": [[QUESTION_TYPE]],
-         "[[TASK_FIELD]]": [[TASK_TYPE]],
-         "[[LABEL_FIELD]]": [[LABEL_TYPE]],
-         "[[TIMESTAMP_FIELD]]": [[TIMESTAMP_TYPE]],
-         "[[MCQ_FIELD]]": "[[MCQ_OPTIONS]]",
-         "[[ANSWER_FIELD_1]]": [[ANSWER_TYPE_1]],
-         "[[ANSWER_FIELD_2]]": [[ANSWER_TYPE_2]],
-         "[[ANSWER_FIELD_3]]": [[ANSWER_TYPE_3]],
-         "[[ANSWER_FIELD_4]]": [[ANSWER_TYPE_4]],
-         "[[ANSWER_FIELD_5]]": [[ANSWER_TYPE_5]]
-       },
-       // Next question
-     ]
-   },
-   // Next entry
- }
- ```
 
- ## Limitations
- [[LIMITATIONS]]
 
- ## Contact
- - [[CONTACT_1]]
- - [[CONTACT_2]]
- - [[CONTACT_3]]
 
- ## Citation
 
  ```bibtex
- [[BIBTEX]]
  ```
-
  ---
 
  <h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
+ LV-Haystack: Temporal Search for Long-Form Video Understanding</h1>
 
 
  <p align='center' style="text-align:center;font-size:1.25em;">
 
  <a href="https://jiajunwu.com/" target="_blank">Jiajun Wu<sup>1</sup></a>,&nbsp;
  <a href="https://limanling.github.io/" target="_blank">Manling Li<sup>2</sup></a><br/>
  &nbsp;Stanford University<sup>1</sup>, Northwestern University<sup>2</sup>, Carnegie Mellon University<sup>3</sup><br/>
+ <!-- <em>Conference on AI Research, 2025</em> -->
+ <br/>
  <a href="https://examplewebsite.com" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website</a> |
  <a href="https://examplecode.com" title="Dataset" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
  <a href="https://arxiv.org/examplepaper" title="arXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv</a> |
 
  Dataset is part of the <a href="">T* project</a></p>
 
  <p align=center>
 
+ </p>
 
+ #### Dataset Sample
 
  ```python
  {
 
  'question_id': 10,
  'question': 'What nail did I pull out?',
  'answer': 'E',
+ 'frame_indexes': [5036, 5232], # indexes of the annotated keyframes
  'choices': {
  'A': 'The nail from the front wheel fender',
  'B': 'The nail from the motorcycle battery compartment',
 
  'E': 'The nail on the right side of the motorcycle exhaust pipe'
  },
  'video_metadata': {
+ 'CLIP-reference-interval': [180.0, 240.0], # Time interval (in seconds) of the video considered important in the source clip; carried over from the Ego4D annotations so annotators could quickly locate the relevant segment
  'frame_count': 14155, # Total number of frames in the video
  'frame_rate': 30.0, # Frame rate of the video
  'duration': 471.8333435058594, # Duration of the video in seconds
  'resolution': '454x256', # Original resolution of the video
  'frame_dimensions': None, # Frame dimensions (if available)
+ 'codec': 'N/A', # Codec used for the video (if available)
+ 'bitrate': 0, # Bitrate of the video (if available)
  'frame_dimensions_resized': [340, 256], # Resized frame dimensions
  'resolution_resized': '340x256', # Resized resolution
  'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991' # Unique video identifier
  }
  }
  ```
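
To make the sample concrete: the annotated `frame_indexes` are positions in the full video, so they convert to timestamps through `frame_rate`. A minimal illustration follows; the helper function is ours and not part of the dataset:

```python
def frame_index_to_seconds(frame_index: int, frame_rate: float) -> float:
    """Convert an annotated keyframe index into a timestamp in seconds."""
    return frame_index / frame_rate

# Keyframes 5036 and 5232 from the sample above, at 30 fps:
print([round(frame_index_to_seconds(i, 30.0), 1) for i in [5036, 5232]])
# -> [167.9, 174.4]
```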
+ #### Dataset Exploration
 
+ (Link to the interactive dataset exploration demo to be added.)
+ #### Dataset Usage
 
  ```python
  from datasets import load_dataset
 
  })
  ```
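
For a fuller picture of typical access patterns, here is a minimal sketch that loads the dataset and groups QA pairs by source video. The repository id `ORG/LV-Haystack` and the `"test"` split name are placeholders/assumptions; substitute the identifiers shown on this dataset page:

```python
from collections import defaultdict
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hub id of this dataset.
dataset = load_dataset("ORG/LV-Haystack")

# Group QA pairs by the video they come from.
qa_by_video = defaultdict(list)
for example in dataset["test"]:
    qa_by_video[example["video_metadata"]["video_id"]].append(example["question_id"])

print(len(qa_by_video), "videos in the test split")
```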
 
+ #### Dataset Statistics Summary
 
+ | **Metric**                     | **Total**    | **Train**   | **Test**    |
+ |--------------------------------|--------------|-------------|-------------|
+ | **Video Statistics**           |              |             |             |
+ | Total Videos                   | **988**      | **744**     | **244**     |
+ | Total Video Duration (hr)      | 423.3        | 322.2       | 101.0       |
+ | Avg. Video Duration (min)      | 25.7         | 26.0        | 24.8        |
+ | **Clip Statistics**            |              |             |             |
+ | Total Video Clips              | **1,324**    | **996**     | **328**     |
+ | Total Video Clip Duration (hr) | 180.4        | 135.3       | 45.0        |
+ | Avg. Video Clip Duration (min) | 8.2          | 8.2         | 8.2         |
+ | **Frame Statistics**           |              |             |             |
+ | Total Frames (k)               | **45,700**   | **34,800**  | **10,900**  |
+ | Avg. Frames per Video (k)      | 46.3         | 46.8        | 44.7        |
+ | Ratio of Keyframe / Frame (‰)  | 0.62         | 0.59        | 0.71        |
+ | **QA Statistics**              |              |             |             |
+ | Total QA Pairs                 | **15,092**   | **11,218**  | **3,874**   |
+ | Avg. QA Pair per Video         | 15.3         | 15.1        | 15.9        |
+ | Avg. QA Pair per Clip          | 11.4         | 11.3        | 11.8        |
+ | Avg. Keyframes per Question    | 1.88         | 1.84        | 2.01        |
 
+ #### Download Videos
 
+ Assume the source videos are stored under ./videos/ (the video metadata above indicates they originate from Ego4D).
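
As a starting point, here is a minimal sketch of reading the annotated keyframes of one example from a local video file with OpenCV. The `<video_id>.mp4` naming under ./videos/ is an assumption about your local layout, not something this dataset prescribes:

```python
import cv2

def extract_keyframes(example, video_root="./videos"):
    """Read the annotated keyframes of one QA example from a local video file.
    Assumes videos are stored as <video_root>/<video_id>.mp4 (naming is an assumption)."""
    meta = example["video_metadata"]
    cap = cv2.VideoCapture(f"{video_root}/{meta['video_id']}.mp4")
    frames = []
    for idx in example["frame_indexes"]:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the annotated frame index
        ok, frame = cap.read()
        if ok:
            frames.append(frame)  # BGR numpy array
    cap.release()
    return frames
```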
 
+ #### Evaluation Scripts
 
+ Please refer to eval.py in this repository.
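
For example, the timestamp-based matching utilities in eval.py can score predicted keyframe timestamps against the annotated ones (the prediction and ground-truth values below are purely illustrative):

```python
from eval import BatchEvaluator, SetSimilarity, TimestampSimilarity

# Ground-truth keyframe timestamps (seconds) and model predictions for two questions.
gt_sets = [[167.9, 174.4], [201.0]]           # illustrative values
pred_sets = [[168.2, 240.0], [199.5, 203.1]]  # illustrative values

# A prediction counts as a match if it lands within 5 seconds of a ground-truth keyframe.
evaluator = BatchEvaluator(SetSimilarity(TimestampSimilarity(threshold=5.0)))
precision, recall, f1 = evaluator(pred_sets, gt_sets)
print(f"P: {precision:.3f}, R: {recall:.3f}, F1: {f1:.3f}")
```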
+ #### Contact
+ - Jinhui Ye: [email protected]
+ - Zihan Wang: [email protected]
+ - Haosen Sun: [email protected]
+ - Keshigeyan Chandrasegaran: [email protected]
+ - Manling Li: [email protected]
 
+ #### Citation
 
  ```bibtex
+ @misc{tstar,
+       title={Re-thinking Temporal Search for Long-Form Video Understanding},
+       author={Jinhui Ye and Zihan Wang and Haosen Sun and Keshigeyan Chandrasegaran and Zane Durante and Cristobal Eyzaguirre and Yonatan Bisk and Juan Carlos Niebles and Ehsan Adeli and Li Fei-Fei and Jiajun Wu and Manling Li},
+       year={2025},
+       eprint={2501.TODO},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG}
+ }
  ```
+ Website template borrowed from [HourVideo](https://huggingface.co/datasets/HourVideo/HourVideo).
eval.py ADDED
@@ -0,0 +1,197 @@
+ import torch
+ import torch.nn.functional as F
+ import numpy as np
+ from typing import List, Tuple, Union, Protocol, Callable
+ from abc import ABC, abstractmethod
+
+
+ class ElementSimilarity(Protocol):
+     """Protocol for computing similarity between two elements"""
+     def __call__(self, x: any, y: any) -> float:
+         ...
+
+
+ class SetSimilarity:
+     """Calculate similarity metrics between two sets based on element-wise similarity"""
+
+     def __init__(self, element_similarity: ElementSimilarity):
+         self.element_similarity = element_similarity
+
+     def compute_similarity_matrix(self, pred_set: List, gt_set: List) -> np.ndarray:
+         """Compute pairwise similarity matrix between elements of two sets"""
+         return np.array([
+             [self.element_similarity(pred, gt) for gt in gt_set]
+             for pred in pred_set
+         ])
+
+     def __call__(self, pred_set: List, gt_set: List) -> Tuple[float, float, float]:
+         """Compute precision, recall, and F1 between two sets"""
+         if not pred_set or not gt_set:
+             return 0.0, 0.0, 0.0
+
+         # Compute similarity matrix
+         sim_matrix = self.compute_similarity_matrix(pred_set, gt_set)
+         # For each prediction, get its highest similarity with any ground truth
+         pred_max_sim = np.max(sim_matrix, axis=1)
+         precision = np.mean(pred_max_sim)
+
+         # Count how many predictions match with ground truths
+         match_threshold = 1  # Could be parameterized
+         total_matches = np.sum(pred_max_sim >= match_threshold)
+
+         # Apply penalty if there are more matches than ground truths
+         if total_matches > len(gt_set):
+             precision *= len(gt_set) / total_matches
+
+         # For each ground truth, get its highest similarity with any prediction
+         recall = np.mean(np.max(sim_matrix, axis=0))
+
+         # Compute F1
+         f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
+
+         return precision, recall, f1
+
+
+ class TimestampSimilarity:
+     """Compute similarity between two timestamps"""
+
+     def __init__(self, threshold: float = 5.0):
+         self.threshold = threshold
+
+     def __call__(self, t1: float, t2: float) -> float:
+         """Return 1 if timestamps are within threshold, 0 otherwise"""
+         return float(abs(t1 - t2) <= self.threshold)
+
+
+ class SSIMSimilarity:
+     """Compute SSIM similarity between two images.
+     Assumes input images are in range [0, 255]."""
+
+     def __init__(self, window_size: int = 11):
+         self.window_size = window_size
+         self._window_cache = {}
+         self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+         # Parameters for images in [0, 255] range
+         self.C1 = (0.01 * 255) ** 2
+         self.C2 = (0.03 * 255) ** 2
+
+     def _create_window(self, channel: int) -> torch.Tensor:
+         """Create a 2D Gaussian window"""
+         kernel_1d = self._gaussian_kernel()
+         window_2d = kernel_1d.unsqueeze(1) @ kernel_1d.unsqueeze(0)
+         return window_2d.expand(channel, 1, self.window_size, self.window_size)
+
+     def _gaussian_kernel(self, sigma: float = 1.5) -> torch.Tensor:
+         """Generate 1D Gaussian kernel"""
+         coords = torch.arange(self.window_size, dtype=torch.float32)
+         coords = coords - (self.window_size - 1) / 2
+         kernel = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
+         return kernel / kernel.sum()
+
+     def __call__(self, img1: torch.Tensor, img2: torch.Tensor) -> float:
+         """Compute SSIM between two images in range [0, 255]"""
+         if img1.shape != img2.shape:
+             raise ValueError("Images must have the same shape")
+
+         # Move images to device
+         img1 = img1.to(self.device)
+         img2 = img2.to(self.device)
+
+         if img1.dim() == 3:
+             img1 = img1.unsqueeze(0)
+             img2 = img2.unsqueeze(0)
+
+         channel = img1.size(1)
+         if channel not in self._window_cache:
+             self._window_cache[channel] = self._create_window(channel).to(self.device)
+         window = self._window_cache[channel]
+
+         # Compute means
+         mu1 = F.conv2d(img1, window, padding=self.window_size//2, groups=channel)
+         mu2 = F.conv2d(img2, window, padding=self.window_size//2, groups=channel)
+         mu1_sq, mu2_sq = mu1 ** 2, mu2 ** 2
+         mu1_mu2 = mu1 * mu2
+
+         # Compute variances and covariance
+         sigma1_sq = F.conv2d(img1 ** 2, window, padding=self.window_size//2, groups=channel) - mu1_sq
+         sigma2_sq = F.conv2d(img2 ** 2, window, padding=self.window_size//2, groups=channel) - mu2_sq
+         sigma12 = F.conv2d(img1 * img2, window, padding=self.window_size//2, groups=channel) - mu1_mu2
+
+         # Compute SSIM
+         ssim = ((2 * mu1_mu2 + self.C1) * (2 * sigma12 + self.C2)) / \
+                ((mu1_sq + mu2_sq + self.C1) * (sigma1_sq + sigma2_sq + self.C2))
+
+         # Return mean SSIM
+         return float(ssim.mean())
+
+
+ class BatchEvaluator:
+     """Evaluate similarity metrics for a batch of set pairs"""
+
+     def __init__(self, set_similarity: SetSimilarity):
+         self.set_similarity = set_similarity
+
+     def __call__(self, pred_sets: List[List], gt_sets: List[List]) -> Tuple[float, float, float]:
+         """Compute average precision, recall, and F1 across all set pairs"""
+         if len(pred_sets) != len(gt_sets):
+             raise ValueError("Number of predicted and ground truth sets must match")
+
+         metrics = [
+             self.set_similarity(pred_set, gt_set)
+             for pred_set, gt_set in zip(pred_sets, gt_sets)
+         ]
+
+         avg_precision = np.mean([p for p, _, _ in metrics])
+         avg_recall = np.mean([r for _, r, _ in metrics])
+         avg_f1 = np.mean([f for _, _, f in metrics])
+
+         return avg_precision, avg_recall, avg_f1
+
+
+ # Example usage
+ def main():
+     # Example 1: Timestamp similarity
+     timestamp_sim = TimestampSimilarity(threshold=5.0)
+     set_sim = SetSimilarity(timestamp_sim)
+
+     # Example where we have multiple predictions matching the same ground truth
+     gt_set = [10.0, 20.0]  # Two ground truth timestamps
+     pred_set = [9.0, 9.5, 10.2, 10.8, 19.8]  # Multiple predictions near first GT
+
+     p, r, f1 = set_sim(pred_set, gt_set)
+     print(f"Timestamp Metrics with penalty:")
+     print(f"P: {p:.3f}, R: {r:.3f}, F1: {f1:.3f}")
+
+     # Test batch evaluation
+     batch_eval = BatchEvaluator(set_sim)
+     pred_sets = [
+         [9.0, 9.5, 10.2, 19.8],  # Multiple predictions for first GT
+         [15.0, 25.0, 25.2]  # Multiple predictions for second GT
+     ]
+     gt_sets = [
+         [10.0, 20.0],
+         [15.0, 25.0]
+     ]
+     p, r, f1 = batch_eval(pred_sets, gt_sets)
+     print(f"\nBatch Metrics:")
+     print(f"P: {p:.3f}, R: {r:.3f}, F1: {f1:.3f}")
+
+
+     # Example 2: Image similarity
+     ssim_sim = SSIMSimilarity()
+     set_sim_images = SetSimilarity(ssim_sim)
+     batch_eval_images = BatchEvaluator(set_sim_images)
+
+     # Sample image data (assuming torch tensors of shape [C, H, W])
+     img1 = (torch.randn(3, 64, 64) * 255).to(torch.uint8).float()
+     img2 = (torch.randn(3, 64, 64) * 255).to(torch.uint8).float()
+     pred_sets = [[img1, img2]]
+     gt_sets = [[img2]]
+
+     p, r, f1 = batch_eval_images(pred_sets, gt_sets)
+     print(f"Image Metrics - P: {p:.3f}, R: {r:.3f}, F1: {f1:.3f}")
+
+
+ if __name__ == "__main__":
+     main()