|
--- |
|
pipeline_tag: image-to-video |
|
--- |
|
|
|
<p align="center"> |
|
<img src="./demos/demo-01.gif" width="70%" /> |
|
<img src="./demos/demo-02.gif" width="70%" /> |
|
<img src="./demos/demo-03.gif" width="70%" /> |
|
|
|
</p> |
|
<p align="center">Samples generated by AnimateLCM-SVD-xt</p> |
|
|
|
|
|
## Introduction |
|
AnimateLCM-SVD-xt is a consistency-distilled version of [Stable Video Diffusion Image2Video-XT (SVD-xt)](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt), trained following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769).

AnimateLCM-SVD-xt generates good-quality, image-conditioned videos of 25 frames at 576x1024 resolution in 2 to 8 sampling steps.
|
|
|
## Computation comparison
|
AnimateLCM-SVD-xt generally produces good-quality results in 4 steps without classifier-free guidance, whereas a standard SVD model typically samples in 25 steps with classifier-free guidance (two UNet evaluations per step). This reduces computation by roughly 25 x 2 / 4 = 12.5 times compared with normal SVD models.
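
As a rough usage sketch: assuming the distilled weights are published as a drop-in UNet for the standard `StableVideoDiffusionPipeline` in `diffusers` (the repo id `wangfuyun/AnimateLCM-SVD-xt` and the `unet` subfolder below are assumptions based on this card; adjust them to your setup), few-step, guidance-free sampling could look like this:

```python
import torch
from diffusers import StableVideoDiffusionPipeline, UNetSpatioTemporalConditionModel
from diffusers.utils import load_image, export_to_video

# Assumption: the AnimateLCM-SVD-xt checkpoint is a UNet compatible with the
# standard SVD-xt pipeline; repo id and subfolder are placeholders for this card.
unet = UNetSpatioTemporalConditionModel.from_pretrained(
    "wangfuyun/AnimateLCM-SVD-xt",
    subfolder="unet",
    torch_dtype=torch.float16,
)
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Conditioning image; replace with your own 1024x576 (width x height) RGB image.
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/svd/rocket.png"
)
image = image.resize((1024, 576))

# Few-step, CFG-free sampling: 4 steps, guidance fixed at 1.0.
generator = torch.manual_seed(42)
frames = pipe(
    image,
    num_inference_steps=4,
    min_guidance_scale=1.0,
    max_guidance_scale=1.0,
    decode_chunk_size=8,
    generator=generator,
).frames[0]

export_to_video(frames, "animatelcm_svd_sample.mp4", fps=7)
```

Increasing `num_inference_steps` toward 8 generally trades a little extra compute for finer detail, while keeping both guidance scales at 1.0 preserves the single-pass (no classifier-free guidance) cost discussed above.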
|
|
|
|
|
## Demos |
|
|
|
| | | | |
|
| :---: | :---: | :---: | |
|
|  |  |  | |
|
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 | |
|
|  |  |  | |
|
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 | |
|
|  |  |  | |
|
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 | |
|
|  |  |  | |
|
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 | |
|
|  |  |  | |
|
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 | |
|
|
|
|
|
|
|
I have launched a Gradio demo at the [AnimateLCM SVD space](https://huggingface.co/spaces/wangfuyun/AnimateLCM-SVD). If you have any questions, please contact Fu-Yun Wang ([email protected]); I might respond with some delay. Thank you!