---
license: mit
---
<div align="center">

# StableV2V: Stablizing Shape Consistency in Video-to-Video Editing

Chang Liu, Rui Li, Kaidong Zhang, Yunwei Lan, Dong Liu

[[`Paper`]](https://arxiv.org/abs/2411.11045) / [[`Project`]](https://alonzoleeeooo.github.io/StableV2V/) / [[`GitHub`]](https://github.com/AlonzoLeeeooo/StableV2V) / [[`Models (Huggingface)`]](https://huggingface.co/AlonzoLeeeooo/StableV2V) / [[`Models (wisemodel)`]](https://wisemodel.cn/models/Alonzo/StableV2V) / [[`DAVIS-Edit (wisemodel)`]](https://wisemodel.cn/datasets/Alonzo/DAVIS-Edit) / [[`Models (ModelScope)`]](https://modelscope.cn/models/AlonzoLeeeoooo/StableV2V) / [[`DAVIS-Edit (ModelScope)`]](https://modelscope.cn/datasets/AlonzoLeeeoooo/DAVIS-Edit)
</div>

This is the HuggingFace repo of `DAVIS-Edit`, the testing benchmark proposed in the paper "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing".

# Data Structure
We follow the same data structure as that of [`DAVIS`](https://davischallenge.org/), as shown below:
```
DAVIS-Edit
├── Annotations                                 <----- Official annotated masks of DAVIS
│   ├── bear
│   ├── blackswan
│   ├── ...
│   └── train
├── JPEGImages                                  <----- Official video frames of DAVIS
│   ├── bear
│   ├── blackswan
│   ├── ...
│   └── train
├── ReferenceImages                             <----- Annotated reference images for image-based editing on DAVIS-Edit
│   ├── similar                                 <----- Reference images for DAVIS-Edit-S
│   │   ├── bear.png
│   │   ├── ...
│   │   └── train.png
│   └── changing                                <----- Reference images for DAVIS-Edit-C
│       ├── bear.png
│       ├── ...
│       └── train.png
├── .gitattributes
├── README.md
├── edited_video_caption_dict_image.json        <----- Annotated text descriptions for image-based editing on DAVIS-Edit
└── edited_video_caption_dict_text.json         <----- Annotated text descriptions for text-based editing on DAVIS-Edit
```
Specifically, `edited_video_caption_dict_image.json` and `edited_video_caption_dict_text.json` are structured as Python dictionaries, with keys corresponding to the names of the video folders in `JPEGImages`. For example, in `edited_video_caption_dict_text.json`:
```json
{
  "bear": {
    "original": "a bear walking on rocks in a zoo",
    "similar": "A panda walking on rocks in a zoo",
    "changing": "A rabbit walking on rocks in a zoo"
  },
  ...
}
```
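
Since the annotation files are plain JSON, the prompts for a given video can be fetched directly by key. A minimal sketch, assuming the working directory is the `DAVIS-Edit` root:
```python
import json

# Load the text-based editing annotations
with open('edited_video_caption_dict_text.json', 'r') as f:
    annotations = json.load(f)

# Keys are the video folder names in `JPEGImages`
prompts = annotations['bear']
print(prompts['original'])   # a bear walking on rocks in a zoo
print(prompts['similar'])    # A panda walking on rocks in a zoo
print(prompts['changing'])   # A rabbit walking on rocks in a zoo
```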
The reference-image annotations are organized in two sub-folders, i.e., `similar` and `changing`, corresponding to the annotations for `DAVIS-Edit-S` and `DAVIS-Edit-C`, respectively. Both sub-folders follow the same naming as the video folders in `JPEGImages`, with one reference image per video.
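
For example, the reference image of the `bear` video for `DAVIS-Edit-S` is expected at `ReferenceImages/similar/bear.png`. The sketch below checks that every annotated video has a reference image in both sub-folders (assuming the working directory is the `DAVIS-Edit` root):
```python
import os
import json

# Load the image-based editing annotations
with open('edited_video_caption_dict_image.json', 'r') as f:
    annotations = json.load(f)

# Sanity-check that each annotated video has a reference image in both
# the `similar` (DAVIS-Edit-S) and `changing` (DAVIS-Edit-C) variants
for subset in ['similar', 'changing']:
    for video_name in annotations.keys():
        path = os.path.join('ReferenceImages', subset, video_name + '.png')
        assert os.path.exists(path), f'Missing reference image: {path}'
```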

# How to use DAVIS-Edit?
We highly recommend indexing the different elements in `DAVIS-Edit` through the *annotation files*. In particular, you may refer to the script below:
```python
import os
import json
from tqdm import tqdm
from PIL import Image

# TODO: Modify the configurations here to your local paths
frame_root = 'JPEGImages'
mask_root = 'Annotations'
reference_image_root = 'ReferenceImages/similar'            # Or 'ReferenceImages/changing'
annotation_file_path = 'edited_video_caption_dict_text.json'

# Load the annotation file
with open(annotation_file_path, 'r') as f:
    annotations = json.load(f)

# Iterate over all data samples in DAVIS-Edit
for video_name in tqdm(annotations.keys()):

    # Load text prompts
    original_prompt = annotations[video_name]['original']
    similar_prompt = annotations[video_name]['similar']
    changing_prompt = annotations[video_name]['changing']

    # Load the reference image
    reference_image = Image.open(os.path.join(reference_image_root, video_name + '.png'))

    # Load video frames, skipping OS metadata files
    video_frames = []
    for path in sorted(os.listdir(os.path.join(frame_root, video_name))):
        if path not in ('Thumbs.db', '.DS_Store'):
            video_frames.append(Image.open(os.path.join(frame_root, video_name, path)))

    # Load masks, skipping OS metadata files
    masks = []
    for path in sorted(os.listdir(os.path.join(mask_root, video_name))):
        if path not in ('Thumbs.db', '.DS_Store'):
            masks.append(Image.open(os.path.join(mask_root, video_name, path)))

    # (add further operations that you expect in the lines below)
```
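
Inside the per-video loop, the loaded frames and masks can then be converted into arrays for downstream editing or evaluation. A minimal sketch, assuming `numpy` is installed and all frames of a video share the same resolution:
```python
import numpy as np

# Stack the PIL images loaded above into arrays of shape
# (T, H, W, 3) for frames and (T, H, W) for masks
frame_array = np.stack([np.array(frame.convert('RGB')) for frame in video_frames])
mask_array = np.stack([np.array(mask.convert('L')) for mask in masks])
print(video_name, frame_array.shape, mask_array.shape)
```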