XiangpengYang committed
Commit 09e7c75 · 1 Parent(s): b585bc1
Files changed (2)
  1. README.md +45 -0
  2. environment.yaml +1 -1
README.md ADDED
@@ -0,0 +1,45 @@
## 🛡 Setup Environment
Our method is tested with CUDA 12.1, fp16 (via accelerate), and xformers on a single L40 GPU.

```bash
conda create -n st-modulator python=3.10
conda activate st-modulator
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
conda env update -n st-modulator -f environment.yaml
pip install diffusers==0.19.0
```

`xformers` is recommended on A100 GPUs to save memory and running time.
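
To confirm the environment is set up correctly, a quick check like the one below can be run inside the `st-modulator` env (a minimal sketch; the expected versions follow the install commands above, and xformers is optional):

```python
# Quick sanity check for the st-modulator environment.
import torch
import diffusers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("diffusers:", diffusers.__version__)  # expected 0.19.0 per the install step above

try:
    import xformers
    print("xformers:", xformers.__version__)
except ImportError:
    print("xformers not installed (optional, recommended to save memory and time)")
```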

You may download all data and checkpoints using the following bash command:

```bash
bash download_all.sh
```

## ⚔️ ST-Modulator Editing

You can reproduce the multi-grained editing results in our teaser by running:

```bash
sh test.sh
# or: accelerate launch test.py --config config/run_two_man.yaml
```
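
If you want to run several configurations in a row, a small driver along these lines can invoke the same entry point once per config (a sketch under the assumption that `test.py` accepts `--config` exactly as in the command above and that your configs live under `config/`):

```python
# Hypothetical batch driver: launches test.py once per YAML config in ./config.
import subprocess
from pathlib import Path

for cfg in sorted(Path("config").glob("*.yaml")):
    print(f"Running {cfg} ...")
    subprocess.run(
        ["accelerate", "launch", "test.py", "--config", str(cfg)],
        check=True,  # stop on the first failing run
    )
```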

<details><summary>The result is saved at `./result`. (Click for directory structure)</summary>

```
result
├── run_two_man
│   ├── infer_samples
│   ├── sample
│   ├── step_0            # result image folder
│   ├── step_0.mp4        # result video
│   ├── source_video.mp4  # the input video
```

</details>
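
As a convenience, the snippet below (stdlib only, assuming the layout shown above) lists every result video written under `./result`:

```python
# Enumerate the edited videos produced under ./result.
from pathlib import Path

for video in sorted(Path("result").rglob("*.mp4")):
    size_mb = video.stat().st_size / 1e6
    print(f"{video}  ({size_mb:.1f} MB)")
```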
environment.yaml CHANGED
@@ -182,4 +182,4 @@ dependencies:
  - wcwidth==0.2.13
  - werkzeug==3.0.3
  - zipp==3.19.2
- prefix: /Data/env/anaconda3/envs/flatten
+ prefix: /Data/env/anaconda3/envs/st-modulator