Add model card with pipeline tag, library name and usage example
This PR adds a model card, linking the model to the paper page and GitHub repository. It also sets the relevant `pipeline_tag` and `library_name`.
I've added a brief description of the model and an example of how to use it.
README.md
CHANGED
````diff
@@ -1,3 +1,20 @@
----
--license: mit
----
+---
+license: mit
+pipeline_tag: image-to-video
+library_name: diffusers
+---
+
+This model performs image-to-video generation based on the paper [FlexWorld: Progressively Expanding 3D Scenes for Flexible-View Synthesis](https://arxiv.org/abs/2503.13265).
+
+Project page: https://ml-gsai.github.io/FlexWorld
+
+Code: https://github.com/ml-gsai/FlexWorld
+
+## Usage Example
+
+A basic example of generating a static scene video given an image and a camera trajectory:
+
+```bash
+# You can use the CamPlanner class to freely construct the desired camera trajectory at line 13 of `video_generate.py`.
+python video_generate.py --input_image_path ./assets/room.png --output_dir ./results-single-traj
+```
````
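Since the card sets `library_name: diffusers`, here is a minimal, hypothetical sketch of how such a checkpoint is commonly loaded with the generic `DiffusionPipeline` loader; the repo id, pipeline class, and call signature below are assumptions rather than details taken from the FlexWorld repository, and `video_generate.py` above remains the documented entry point.

```python
# Hypothetical sketch only: loading a diffusers-format image-to-video checkpoint.
# The repo id is a placeholder and the call signature is an assumption; the exact
# pipeline class depends on how the FlexWorld weights are packaged.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "<org>/FlexWorld",              # placeholder repo id
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("./assets/room.png")  # conditioning image, as in the example above
result = pipe(image=image)               # keyword arguments vary by pipeline class
frames = result.frames                   # video pipelines typically expose generated frames here
```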