Update README.md
README.md CHANGED
````diff
@@ -22,7 +22,7 @@ size_categories:
 ---
 
 # The StoryFrames Dataset
-StoryFrames is a human-annotated dataset created to enhance a model's capability of understanding and reasoning over sequences of images.
+[StoryFrames](https://arxiv.org/abs/2502.19409) is a human-annotated dataset created to enhance a model's capability of understanding and reasoning over sequences of images.
 It is specifically designed for tasks like generating a description for the next scene in a story based on previous visual and textual information.
 The dataset repurposes the [StoryBench dataset](https://arxiv.org/abs/2308.11606), a video dataset originally designed to predict future frames of a video.
 StoryFrames subsamples frames from those videos and pairs them with annotations for the task of _next-description prediction_.
@@ -121,4 +121,23 @@ Each story is composed of multiple scenes, where each scene is a part of the ove
 * `sentence_parts_nocontext`
   * Type: `List[str]`
   * A variant of the scene descriptions that excludes sequential context.
-  * This may be empty if no annotation was provided.
+  * This may be empty if no annotation was provided.
+
+## Citation
+The dataset was introduced as part of the following paper:
+
+[ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large Language Models](https://arxiv.org/abs/2502.19409)
+
+If you use the dataset in your research or applications, please cite it as follows:
+
+```
+@misc{villegas2025imagechainadvancingsequentialimagetotext,
+    title={ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large Language Models},
+    author={Danae Sánchez Villegas and Ingo Ziegler and Desmond Elliott},
+    year={2025},
+    eprint={2502.19409},
+    archivePrefix={arXiv},
+    primaryClass={cs.CV},
+    url={https://arxiv.org/abs/2502.19409},
+}
+```
````
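As context for the fields documented in this diff, here is a minimal sketch of how the annotated stories might be consumed with the Hugging Face `datasets` library, including a guard for the possibly-empty `sentence_parts_nocontext` field. The repository ID and split name are assumptions for illustration, not confirmed by this commit.

```python
# Minimal sketch: load StoryFrames and inspect the field documented above.
# NOTE: the dataset ID and split below are assumptions for illustration;
# check the dataset card for the actual repository path and splits.
from datasets import load_dataset

ds = load_dataset("ingoziegler/StoryFrames", split="train")  # hypothetical ID

story = ds[0]  # one annotated story

# Per the README, `sentence_parts_nocontext` is a List[str] and may be
# empty when no annotation was provided, so guard before using it.
descriptions = story.get("sentence_parts_nocontext") or []
if descriptions:
    for i, text in enumerate(descriptions, start=1):
        print(f"Scene {i}: {text}")
else:
    print("No context-free annotation for this story.")
```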