Add metadata and sample usage

#1
by nielsr HF staff - opened
Files changed (1)
  1. README.md +114 -3
README.md CHANGED

---
license: mit
library_name: diffusers
pipeline_tag: any-to-any
---

<div align="center">
<br>
<img src="docs/title.png" width="166"> <!-- Replace with your logo -->
<h3>Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation</h3>

[Anonymous CVPR submission]

[![ArXiv](https://img.shields.io/badge/ArXiv-PaperID12251-<COLOR>.svg)](https://arxiv.org/abs/your_paper_id) [![Demo](https://img.shields.io/badge/Demo-ComingSoon-<COLOR>.svg)](https://your_demo_link) [![Discord](https://img.shields.io/badge/Discord-join-blueviolet?logo=discord)](https://your_discord_link)

</div>

## News
* **[2024-11-29]** We release a [256-resolution version of the weights](https://huggingface.co/SJTU-Deng-Lab/Show-o-Turbo-256) for Show-o Turbo on Hugging Face.

## What's New about Show-o Turbo?

Show-o Turbo builds upon Show-o to address its inefficiency in both image and text generation. While Show-o relies on progressive denoising for images and autoregressive decoding for text, Show-o Turbo introduces a unified denoising perspective on both modalities, leading to significantly faster generation. Show-o Turbo achieves this through several key innovations:

<p align="center">
<img src="docs/trajectory.png" style="max-width: 100%;"> <!-- Charts and graphs showcasing results -->
</p>

* **Unified Denoising:** Show-o Turbo uses parallel text decoding (Jacobi decoding) to reframe text generation as a denoising process, analogous to image generation. This enables a unified view of both modalities as denoising trajectories (see the toy sketch after this list).
* **Consistency Distillation:** Show-o Turbo employs consistency distillation, a technique inspired by diffusion model acceleration, to shorten these multimodal denoising trajectories, so the model reaches meaningful content in fewer steps.
* **Trajectory Segmentation and Curriculum Learning:** To improve convergence, training proceeds in stages with progressively fewer trajectory segments, following a curriculum.
* **Top-k Sampling:** Show-o Turbo applies top-k sampling during inference to improve sample quality, especially with few sampling steps.
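
To make the unified-denoising idea concrete, here is a minimal, self-contained toy sketch of Jacobi decoding. It is not the Show-o Turbo implementation: `toy_next_token` below is a made-up stand-in for a real language model's greedy next-token prediction. A draft of `n` tokens is re-predicted in parallel until it stops changing, and the fixed point matches greedy autoregressive decoding:

```python
# Toy illustration of Jacobi decoding: text generation as iterative
# denoising of a draft sequence. `toy_next_token` is a hypothetical
# stand-in for an LM's greedy next-token prediction.

def toy_next_token(prefix):
    # Deterministic pseudo-LM: maps any prefix to a "next token" id.
    return (sum(prefix) * 31 + len(prefix) * 7 + 13) % 100

def autoregressive_decode(prompt, n):
    # Baseline: n strictly sequential model calls.
    seq = list(prompt)
    for _ in range(n):
        seq.append(toy_next_token(seq))
    return seq[len(prompt):]

def jacobi_decode(prompt, n, max_iters=100):
    # Start from an arbitrary "noisy" draft and denoise it in parallel.
    draft = [0] * n
    for it in range(1, max_iters + 1):
        # One Jacobi step: every position is re-predicted simultaneously,
        # each conditioned on the current draft of the positions before it.
        new = [toy_next_token(list(prompt) + draft[:i]) for i in range(n)]
        if new == draft:  # fixed point: the draft is fully "denoised"
            return draft, it
        draft = new
    return draft, max_iters

prompt = [1, 2, 3]
ar = autoregressive_decode(prompt, 8)
jc, iters = jacobi_decode(prompt, 8)
assert jc == ar  # the fixed point equals the autoregressive output
print(f"fixed point reached after {iters} parallel iterations")
```

In this toy the predictions are pseudo-random, so roughly one token stabilizes per iteration; on real text, several consecutive tokens often stabilize in a single step, which is where the parallel speedup comes from, and consistency distillation trains the model to make these trajectories shorter still.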

## Results

Show-o Turbo delivers significant speedups in both text-to-image and image-to-text generation while maintaining performance comparable to Show-o.

* In text-to-image generation, Show-o Turbo at 4 sampling steps performs close to Show-o at 8 steps, and at 2 steps it surpasses Show-o at 4 steps.
<p align="center">
<img src="docs/t2i_result.png" width="777"> <!-- Charts and graphs showcasing results -->
</p>

* In multimodal understanding tasks, it is about 1.5x faster with little loss in performance.
<p align="center">
<img src="docs/mmu_result.png" width="777"> <!-- Charts and graphs showcasing results -->
</p>

## Getting Started

First, set up the environment:

```bash
pip3 install -r requirements.txt
```

### Inference

**Multimodal Understanding:**

```bash
python3 inference_mmu.py config=configs/showo_turbo_mmu.yaml
```

This runs multimodal understanding inference with the default settings from `configs/showo_turbo_mmu.yaml`. You can edit this config file to customize the input and other parameters.

<p align="center">
<img src="docs/mmu.png" style="max-width: 100%;"> <!-- Example output of MMU inference -->
</p>
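
If you prefer not to edit the YAML, codebases in the Show-o family typically merge `key=value` command-line overrides into the loaded config via OmegaConf. Assuming Show-o Turbo follows the same pattern (an assumption; check the entry script), the merge logic looks roughly like this minimal sketch:

```python
# Minimal sketch of OmegaConf-style config handling, as commonly used by
# Show-o-like entry scripts; an assumption about this repo, not verified
# code. Invoke as:
#   python3 this_script.py config=configs/showo_turbo_mmu.yaml some.key=value
from omegaconf import OmegaConf

cli = OmegaConf.from_cli()               # collects key=value args from argv
file_cfg = OmegaConf.load(cli.config)    # the YAML passed as config=...
config = OmegaConf.merge(file_cfg, cli)  # command-line values take priority
print(OmegaConf.to_yaml(config))         # inspect the effective settings
```

Under this pattern, any key present in the YAML (for example, an input image path or question prompt) can be overridden per run without touching the file.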

**Text-to-Image Generation:**

```bash
python3 inference_t2i.py config=configs/showo_turbo_t2i.yaml
```

This runs text-to-image generation with the default settings. As with MMU, you can adjust parameters in the config file.

<p align="center">
<img src="docs/t2i.png" style="max-width: 100%;"> <!-- Example output of T2I inference -->
</p>
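
The top-k sampling mentioned above is what preserves sample quality at 2-4 sampling steps. For reference, here is what that operation does; this is a generic PyTorch implementation, and the function and parameter names are illustrative rather than Show-o Turbo's actual API:

```python
# Generic top-k sampling over token logits; names are illustrative,
# not Show-o Turbo's API.
import torch

def top_k_sample(logits: torch.Tensor, k: int = 50) -> torch.Tensor:
    """Sample one token id per position from the k highest-scoring logits."""
    vals, idx = torch.topk(logits, k, dim=-1)  # keep only top-k candidates
    probs = torch.softmax(vals, dim=-1)        # renormalize over them
    choice = torch.multinomial(probs.reshape(-1, k), num_samples=1)
    return idx.reshape(-1, k).gather(-1, choice).reshape(logits.shape[:-1])

# Example: sample 16 image tokens at once from an 8192-entry codebook.
tokens = top_k_sample(torch.randn(16, 8192), k=50)
print(tokens.shape)  # torch.Size([16])
```

Restricting each draw to the k most likely candidates prunes low-probability tokens, which matters most when few denoising steps leave less opportunity to correct a bad sample.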

## Training Pipeline

**(Coming Soon)** Details about the training process, including data preparation, scripts, and configuration options, will be provided upon release. Example command:

```bash
accelerate launch --config_file path/to/your/accelerate_config --main_process_port=8888 training/train_showo_turbo.py config=configs/showo_turbo_training.yaml
```

## TODO

- [X] Release the inference and training code.
- [X] Release the model weights.
- [ ] Conduct further experiments with larger model sizes and datasets.

## Contributing

We welcome contributions to Show-o Turbo! If you have ideas for new features or improvements, please open an issue or submit a pull request.

## Citation

**(Coming Soon)** Citation information will be provided upon publication.

## Acknowledgments

We would like to thank the authors of Show-o and the developers of the libraries and frameworks on which Show-o Turbo is built, including open-muse, Phi-1.5, maskgit, taming-transformers, transformers, accelerate, and diffusers. Thanks to all of them for their great work.