Yanrui95 committed · verified
Commit e34f325 · Parent(s): 80d54b1

Upload folder using huggingface_hub

Files changed (2):
  1. README.md +3 -3
  2. app.py +5 -5
README.md CHANGED
@@ -2,17 +2,17 @@
  title: NormalCrafter
  app_file: app.py
  sdk: gradio
- sdk_version: 5.23.2
+ sdk_version: 5.23.3
  ---
  ## ___***NormalCrafter: Learning Temporally Consistent Video Normal from Video Diffusion Priors***___

  _**[Yanrui Bin<sup>1</sup>](https://scholar.google.com/citations?user=_9fN3mEAAAAJ&hl=zh-CN), [Wenbo Hu<sup>2*</sup>](https://wbhu.github.io),
  [Haoyuan Wang<sup>3</sup>](https://www.whyy.site/),
- [Xinya Chen<sup>3</sup>](https://xinyachen21.github.io/),
+ [Xinya Chen<sup>4</sup>](https://xinyachen21.github.io/),
  [Bing Wang<sup>2&dagger;</sup>](https://bingcs.github.io/)**_
  <br><br>
  <sup>1</sup>Spatial Intelligence Group, The Hong Kong Polytechnic University
- <sup>2</sup>Tencent AI Lab
+ <sup>2</sup>ARC Lab, Tencent PCG
  <sup>3</sup>City University of Hong Kong
  <sup>4</sup>Huazhong University of Science and Technology
  </div>
app.py CHANGED
@@ -120,7 +120,7 @@ def construct_demo():
          with gr.Column(scale=2):
              with gr.Row(equal_height=True):
                  output_video_1 = gr.Video(
-                     label="Preprocessed video",
+                     label="Preprocessed Video",
                      interactive=False,
                      autoplay=True,
                      loop=True,
@@ -128,7 +128,7 @@ def construct_demo():
                      scale=5,
                  )
                  output_video_2 = gr.Video(
-                     label="Generated normal Video",
+                     label="Generated Normal Video",
                      interactive=False,
                      autoplay=True,
                      loop=True,
@@ -141,21 +141,21 @@ def construct_demo():
          with gr.Row(equal_height=False):
              with gr.Accordion("Advanced Settings", open=False):
                  max_res = gr.Slider(
-                     label="max resolution",
+                     label="Max Resolution",
                      minimum=512,
                      maximum=1024,
                      value=1024,
                      step=64,
                  )
                  process_length = gr.Slider(
-                     label="process length",
+                     label="Process Length",
                      minimum=-1,
                      maximum=280,
                      value=60,
                      step=1,
                  )
                  process_target_fps = gr.Slider(
-                     label="target FPS",
+                     label="Target FPS",
                      minimum=-1,
                      maximum=30,
                      value=15,
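Note that both the `process_length` and `process_target_fps` sliders allow a minimum of -1. In demos of this kind, -1 is typically a sentinel meaning "no limit" (process every frame) or "keep the source FPS"; the diff itself does not show how app.py consumes these values, so the helpers below are a hypothetical sketch of that convention, not code from the repository:

```python
def resolve_process_length(process_length: int, total_frames: int) -> int:
    # -1 from the "Process Length" slider is assumed to mean "no limit":
    # process the whole clip; otherwise clamp to the available frames.
    if process_length == -1:
        return total_frames
    return min(process_length, total_frames)


def resolve_target_fps(target_fps: int, input_fps: int) -> int:
    # -1 from the "Target FPS" slider is assumed to mean
    # "keep the input video's original frame rate".
    return input_fps if target_fps == -1 else target_fps
```

With the slider defaults above (`value=60`, `value=15`), a 300-frame 24 fps input would be trimmed to 60 frames and resampled to 15 fps, while setting both sliders to -1 would leave it untouched.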