Commit ec64732
1 Parent(s): 0c6b59b
Update parquet files (step 84 of 121)
This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.
- spaces/101-5/gpt4free/g4f/.v1/README.md +0 -255
- spaces/123aa/pastel-mix/README.md +0 -13
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Abaqus 6.11 Torrent.md +0 -133
- spaces/1gistliPinn/ChatGPT4/Examples/Duplicate Photo Finder Professional 5.22 Crack Portable License Key High Quality.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020.md +0 -5
- spaces/1line/AutoGPT/autogpt/agent/__init__.py +0 -4
- spaces/1phancelerku/anime-remove-background/Download Gold Digger FRVR Mod APK and Get Unlimited Gems Coins and Stars.md +0 -72
- spaces/1phancelerku/anime-remove-background/Download Subway Surfers Mod APK v2 31.0 Terbaru 2022 and Unlock All Characters and Boards.md +0 -105
- spaces/1phancelerku/anime-remove-background/Download UNO! and Join the Fun of the Mobile Community Cup..md +0 -132
- spaces/1phancelerku/anime-remove-background/Enjoy the New and Exciting Mobile Game from Azerbaijan Create 017 APK.md +0 -117
- spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py +0 -2223
- spaces/AIConsultant/MusicGen/audiocraft/metrics/visqol.py +0 -216
- spaces/AIFILMS/generate_human_motion/VQ-Trans/train_t2m_trans.py +0 -191
- spaces/AIFILMS/generate_human_motion/pyrender/pyrender/font.py +0 -272
- spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/lr_scheduler.py +0 -128
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/custom_ds.py +0 -55
- spaces/Abdllh/topic2poem/README.md +0 -14
- spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/privacy/$types.d.ts +0 -15
- spaces/AchyuthGamer/OpenGPT-Chat-UI/src/styles/main.css +0 -17
- spaces/AkitoP/umamusume_bert_vits2/transforms.py +0 -209
- spaces/Alashazam/StoryGenerator/app.py +0 -15
- spaces/Altinas/vits-uma-genshin-honkais/text/symbols.py +0 -39
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim.md +0 -88
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py +0 -557
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/repaint/__init__.py +0 -1
- spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py +0 -2
- spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py +0 -30
- spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py +0 -9
- spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py +0 -9
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/activation.py +0 -92
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/box_iou_rotated.py +0 -45
- spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/gmflow.py +0 -170
- spaces/Artrajz/vits-simple-api/vits/text/mandarin.py +0 -365
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/__init__.py +0 -82
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/INSTALL.md +0 -261
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py +0 -299
- spaces/BAAI/vid2vid-zero/vid2vid_zero/data/dataset.py +0 -44
- spaces/BABASA/README/README.md +0 -10
- spaces/Beasto/Photo2Monet_Cyclegan/app.py +0 -48
- spaces/Benson/text-generation/Examples/Como Hacer Un rbol De Navidad.md +0 -81
- spaces/Benson/text-generation/Examples/Creacin Y Construccin Apk Hack.md +0 -67
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/metadata_editable.py +0 -41
- spaces/CVH-vn1210/make_hair/minigpt4/common/dist_utils.py +0 -137
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/build.py +0 -397
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/solver/build.py +0 -163
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/model_loader.py +0 -27
- spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/pose_model_identifier.py +0 -103
- spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/parallel/data_parallel.py +0 -112
- spaces/CVPR/transfiner/configs/common/models/mask_rcnn_fpn.py +0 -93
- spaces/CarlDennis/Lovelive-VITS-JPZH/modules.py +0 -387
spaces/101-5/gpt4free/g4f/.v1/README.md
DELETED
@@ -1,255 +0,0 @@
**A major update is to come this week (statement written 14 Jun)**
**You may check these out in the meanwhile**:

- v2 prototype of gpt4free someone made: https://gitler.moe/g4f/gpt4free
- Discord bot with gpt-4 using poe.com: https://github.com/xtekky/gpt4free-discord

______
What can I do to contribute ?
you reverse a site from this list: [sites-to-reverse](https://github.com/xtekky/gpt4free/issues/40), and add it to [`./testing`](https://github.com/xtekky/gpt4free/tree/main/testing) or refractor it and add it to [`./gpt4free`](https://github.com/xtekky/gpt4free/tree/main/gpt4free)

<p>You may join our discord: <a href="https://discord.com/invite/gpt4free">discord.gg/gpt4free<a> for further updates. <a href="https://discord.gg/gpt4free"><img align="center" alt="gpt4free Discord" width="22px" src="https://raw.githubusercontent.com/peterthehan/peterthehan/master/assets/discord.svg" /></a></p>

<img alt="gpt4free logo" src="https://user-images.githubusercontent.com/98614666/233799515-1a7cb6a3-b17f-42c4-956d-8d2a0664466f.png">

## Legal Notice <a name="legal-notice"></a>

This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security or request the removal of their site from this repository.

Please note the following:

1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them nor is it affiliated with or endorsed by any of the providers mentioned.

2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely responsible for their actions and any repercussions that may follow. We strongly recommend the users to follow the TOS of the each Website.

3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.

4. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.

5. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or features in this repository at any time without prior notice. Users are responsible for regularly reviewing the content and any changes made to this repository.

By using this repository or any code related to it, you agree to these terms. The author is not responsible for any copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, you may comply with the GNU GPL license this Repository uses.

<br>

<img src="https://media.giphy.com/media/LnQjpWaON8nhr21vNW/giphy.gif" width="100" align="left">
Just API's from some language model sites.

# Related gpt4free projects

<table>
<thead align="center">
<tr border: none;>
<td><b>🎁 Projects</b></td>
<td><b>⭐ Stars</b></td>
<td><b>📚 Forks</b></td>
<td><b>🛎 Issues</b></td>
<td><b>📬 Pull requests</b></td>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/xtekky/gpt4free"><b>gpt4free</b></a></td>
<td><a href="https://github.com/xtekky/gpt4free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/gpt4free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/gpt4free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/xiangsx/gpt4free-ts"><b>gpt4free-ts</b></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xiangsx/gpt4free-ts/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xiangsx/gpt4free-ts?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/xtekky/chatgpt-clone"><b>ChatGPT-Clone</b></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/issues"><img alt="Issues" src="https://img.shields.io/github/issues/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/xtekky/chatgpt-clone/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/xtekky/chatgpt-clone?style=flat-square&labelColor=343b41"/></a></td>
</tr>
<tr>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free"><b>ChatGpt Discord Bot</b></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/network/members"><img alt="Forks" src="https://img.shields.io/github/forks/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Discord-Chatbot-Gpt4Free/issues"><img alt="Issues" src="https://img.shields.io/github/issues/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
<td><a href="https://github.com/mishalhossin/Coding-Chatbot-Gpt4Free/pulls"><img alt="Pull Requests" src="https://img.shields.io/github/issues-pr/mishalhossin/Discord-Chatbot-Gpt4Free?style=flat-square&labelColor=343b41"/></a></td>
</tr>
</tbody>
</table>

## Table of Contents
| Section | Description | Link | Status |
| ------- | ----------- | ---- | ------ |
| **To do list** | List of tasks to be done | [](#todo) | - |
| **Current Sites** | Current websites or platforms that can be used as APIs | [](#current-sites) | - |
| **Best Sites for gpt4** | Recommended websites or platforms for gpt4 | [](#best-sites) | - |
| **Streamlit GPT4Free GUI** | Web-based graphical user interface for interacting with gpt4free | [](#streamlit-gpt4free-gui) | - |
| **Docker** | Instructions on how to run gpt4free in a Docker container | [](#docker-instructions) | - |
| **ChatGPT clone** | A ChatGPT clone with new features and scalability | [](https://chat.chatbot.sex/chat) | - |
| **How to install** | Instructions on how to install gpt4free | [](#install) | - |
| **Usage Examples** | | | |
| `theb` | Example usage for theb (gpt-3.5) | [](gpt4free/theb/README.md) |  |
| `forefront` | Example usage for forefront (gpt-4) | [](gpt4free/forefront/README.md) |  | ||
| `quora (poe)` | Example usage for quora | [](gpt4free/quora/README.md) |  |
| `you` | Example usage for you | [](gpt4free/you/README.md) |  |
| `deepai` | Example usage for DeepAI (gpt-3.5, with chat) | [](gpt4free/deepai/README.md) |  |
| **Try it Out** | | | |
| Google Colab Jupyter Notebook | Example usage for gpt4free | [](https://colab.research.google.com/github/DanielShemesh/gpt4free-colab/blob/main/gpt4free.ipynb) | - |
| replit Example (feel free to fork this repl) | Example usage for gpt4free | [](https://replit.com/@gpt4free/gpt4free-webui) | - |
| **Legal Notice** | Legal notice or disclaimer | [](#legal-notice) | - |
| **Copyright** | Copyright information | [](#copyright) | - |
| **Star History** | Star History | [](#star-history) | - |

## To do list <a name="todo"></a>

- [x] Add a GUI for the repo
- [ ] Make a general package named `gpt4free`, instead of different folders
- [ ] Live api status to know which are down and which can be used
- [ ] Integrate more API's in `./unfinished` as well as other ones in the lists
- [ ] Make an API to use as proxy for other projects
- [ ] Make a pypi package

## Current Sites <a name="current-sites"></a>

| Website s | Model(s) |
| ------------------------------------------------ | -------------------------------- |
| [forefront.ai](https://chat.forefront.ai) | GPT-4/3.5 |
| [poe.com](https://poe.com) | GPT-4/3.5 |
| [writesonic.com](https://writesonic.com) | GPT-3.5 / Internet |
| [t3nsor.com](https://t3nsor.com) | GPT-3.5 |
| [you.com](https://you.com) | GPT-3.5 / Internet / good search |
| [sqlchat.ai](https://sqlchat.ai) | GPT-3.5 |
| [bard.google.com](https://bard.google.com) | custom / search |
| [bing.com/chat](https://bing.com/chat) | GPT-4/3.5 |
| [italygpt.it](https://italygpt.it) | GPT-3.5 |
| [deepai.org](https://deepai.org/chat) | GPT-3.5 / chat support |

## Best sites <a name="best-sites"></a>

#### gpt-4

- [`/forefront`](gpt4free/forefront/README.md)

#### gpt-3.5

- [`/you`](gpt4free/you/README.md)

## Install <a name="install"></a>

Download or clone this GitHub repo
install requirements with:

```sh
python3 -m venv venv
. venv/bin/activate
pip3 install -r requirements.txt
```

## Install ffmpeg
```sh
sudo apt-get install ffmpeg
```

## Connect VPN if needed and get proxy (Optional)
```sh
echo "$http_proxy" # http://127.0.0.1:8889/
```

## Set proxy in gpt4free/you/__init__.py (Optional)
```
diff --git a/gpt4free/you/__init__.py b/gpt4free/you/__init__.py
index 11847fb..59d1162 100644
--- a/gpt4free/you/__init__.py
+++ b/gpt4free/you/__init__.py
@@ -38,6 +38,7 @@ class Completion:
if chat is None:
chat = []

+ proxy = '127.0.0.1:8889'
proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else {}

client = Session(client_identifier='chrome_108')
```

## To start gpt4free GUI <a name="streamlit-gpt4free-gui"></a>

##### Note: streamlit app collects heavy analytics even when running locally. This includes events for every page load, form submission including metadata on queries (like length), browser and client information including host ips. These are all transmitted to a 3rd party analytics group, Segment.com.

Move `streamlit_app.py` from `./gui` to the base folder then run:
`streamlit run streamlit_app.py` or `python3 -m streamlit run streamlit_app.py`

```sh
cp gui/streamlit_app.py .
streamlit run streamlit_app.py
```

## Docker <a name="docker-instructions"></a>

Build

```
docker build -t gpt4free:latest .
```

Run

```
docker run -p 8501:8501 gpt4free:latest
```

## Deploy using docker-compose

Run the following:

```
docker-compose up --build -d
```

## ChatGPT clone

> Currently implementing new features and trying to scale it, please be patient it may be unstable
> https://chat.g4f.ai/chat
> This site was developed by me and includes **gpt-4/3.5**, **internet access** and **gpt-jailbreak's** like DAN
> Run locally here: https://github.com/xtekky/chatgpt-clone

## Copyright:

This program is licensed under the [GNU GPL v3](https://www.gnu.org/licenses/gpl-3.0.txt)

Most code, with the exception of `quora/api.py` and `deepai/__init__.py` (by [ading2210](https://github.com/ading2210)), has been written by me, [xtekky](https://github.com/xtekky).

### Copyright Notice: <a name="copyright"></a>

```
xtekky/gpt4free: multiple reverse engineered language-model api's to decentralise the ai industry.
Copyright (C) 2023 xtekky

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```

## Star History <a name="star-history"></a>

<a href="https://github.com/xtekky/gpt4free/stargazers">
<img width="500" alt="Star History Chart" src="https://api.star-history.com/svg?repos=xtekky/gpt4free&type=Date">
</a>
spaces/123aa/pastel-mix/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Pastel Mix
emoji: 🏢
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 3.16.2
app_file: app.py
pinned: false
duplicated_from: akhaliq/pastel-mix
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Abaqus 6.11 Torrent.md
DELETED
@@ -1,133 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<br> - Benefits: highlight the features and advantages of Abaqus 6.11 <br> - Risks: warn about the potential dangers and legal issues of downloading torrents | | H2: How to download Abaqus 6.11 torrent safely and legally | - Requirements: list the software and hardware needed to run Abaqus 6.11 <br> - Sources: recommend some reliable and trustworthy websites that offer Abaqus 6.11 torrent <br> - Steps: provide a step-by-step guide on how to download and install Abaqus 6.11 torrent | | H2: How to use Abaqus 6.11 for your simulation needs | - Overview: give a brief overview of the user interface and the main tools of Abaqus 6.11 <br> - Examples: show some practical examples of how to use Abaqus 6.11 for different types of simulations <br> - Tips: share some tips and tricks on how to optimize your simulation results and performance | | H2: Conclusion and FAQs | - Summary: summarize the main points of the article <br> - Call to action: encourage the reader to try Abaqus 6.11 for themselves <br> - FAQs: answer some common questions that the reader might have about Abaqus 6.11 | **Table 2: Article with HTML formatting** ```html <h1>What is Abaqus 6.11 and why you might want to download it</h1>
|
3 |
-
<p>Abaqus is a powerful software suite that allows you to perform realistic simulations of physical phenomena, such as structural mechanics, fluid dynamics, heat transfer, acoustics, and more. It is widely used by engineers, researchers, and students in various fields and industries, such as aerospace, automotive, biomedical, civil, energy, and manufacturing.</p>
|
4 |
-
<h2>abaqus 6.11 torrent</h2><br /><p><b><b>DOWNLOAD</b> ⏩ <a href="https://byltly.com/2uKzag">https://byltly.com/2uKzag</a></b></p><br /><br />
|
5 |
-
<p>Abaqus 6.11 is one of the latest versions of the software that was released in 2011. It offers many improvements and enhancements over the previous versions, such as:</p>
|
6 |
-
<ul>
|
7 |
-
<li>New features for modeling complex geometries, materials, and interactions</li>
|
8 |
-
<li>Improved performance and scalability for large-scale simulations</li>
|
9 |
-
<li>Enhanced integration with other software tools, such as CATIA, SolidWorks, MATLAB, etc.</li>
|
10 |
-
<li>More options for post-processing and visualization of simulation results</li>
|
11 |
-
</ul>
|
12 |
-
<p>If you are interested in using Abaqus 6.11 for your simulation needs, you might be tempted to download it from a torrent website. However, before you do that, you should be aware of the risks involved.</p>
|
13 |
-
<p>Downloading torrents is not only illegal but also risky. You might end up downloading a corrupted or infected file that could harm your computer or compromise your data. You might also face legal consequences if you are caught violating the intellectual property rights of the software developer.</p>
|
14 |
-
<p>DS SIMULIA Suite 2023 Free Download[^1^]<br />
|
15 |
-
Dassault Systemes DS SIMULIA Suite (Abaqus/Isight/Fe-safe/Tosca) x64 for Windows & Linux[^1^]<br />
|
16 |
-
SIMULIA delivers realistic simulation applications[^1^]<br />
|
17 |
-
SIMULIA Suite applications accelerate evaluating materials and products' performance, reliability, and safety[^1^]<br />
|
18 |
-
Aerospace & Defense manufacturers and suppliers use SIMULIA solutions[^1^]<br />
|
19 |
-
SIMULIA Suite delivers robust simulation structures and fluids technology[^1^]<br />
|
20 |
-
Modeling, simulation, and visualization technology are fully integrated into the 3DEXPERIENCE Platform[^1^]<br />
|
21 |
-
SIMULIA offers Abaqus Unified FEA solutions for predicting structure strength and deformations in linear and nonlinear regimes[^1^]<br />
|
22 |
-
Dassault Systemes SIMULIA applications, including Abaqus, fe-safe, Insight, Tosca, Simple, Simpack, and Simulation Lifecycle Management[^1^]<br />
|
23 |
-
SIMULIA applications accelerate the process of evaluating the performance, reliability, and safety of materials and products before committing to physical prototypes[^1^]<br />
|
24 |
-
System Requirements and Technical Details for DS SIMULIA Suite[^1^]<br />
|
25 |
-
simulia abaqus 6.12.1 | SolidTorrents[^2^]<br />
|
26 |
-
simulia abaqus 6.12.1 data.dat - 17.7 MB[^2^]<br />
|
27 |
-
simulia abaqus 6.12.1.zip - 2.81 MB[^2^]<br />
|
28 |
-
Tracker Seeder Leecher for simulia abaqus 6.12.1[^2^]<br />
|
29 |
-
Similar Torrents for simulia abaqus 6.12.1[^2^]<br />
|
30 |
-
DS.SIMULIA.Suite.2022.Win64-SSQ Other/DiskImage[^2^]<br />
|
31 |
-
Simulia Abaqus 6.14.1 Portable.zip Other/Archive[^2^]<br />
|
32 |
-
Dassault.Systemes.SIMULIA.Suite.2021.HF9.x64 Other/DiskImage[^2^]<br />
|
33 |
-
DS SIMULIA CST STUDIO SUITE 2022.04 SP4 (x64)[^2^]<br />
|
34 |
-
DS SIMULIA Suite Abaqus 2023 x64 Other[^2^]<br />
|
35 |
-
Solving Complex Problems for Structures and Bridges using ABAQUS Finite Element Package Other/Document[^2^]<br />
|
36 |
-
ABAQUS_6.14-1_x64_Win_SSQ Other[^2^]<br />
|
37 |
-
DS.SIMULIA.Suite.2021.HF5.Update.Only.Win.Linux-SSQ Other/DiskImage[^2^]<br />
|
38 |
-
Abaqus 6.11 for Catia V5-6R2012 x86 Other/DiskImage[^2^]<br />
|
39 |
-
DS SIMULIA CST STUDIO SUITE 2023.01 SP1 (x64) Other[^2^]<br />
|
40 |
-
Stream Abaqus 6.11 Torrent from Sumpchiscirdzu - SoundCloud[^3^]<br />
|
41 |
-
Play Abaqus 6.11 Torrent from Sumpchiscirdzu on SoundCloud desktop and mobile[^3^]<br />
|
42 |
-
abaqus 6.11 torrent download free full version<br />
|
43 |
-
abaqus 6.11 torrent crack serial keygen<br />
|
44 |
-
abaqus 6.11 torrent installation guide<br />
|
45 |
-
abaqus 6.11 torrent license file<br />
|
46 |
-
abaqus 6.11 torrent user manual<br />
|
47 |
-
abaqus 6.11 torrent tutorial pdf<br />
|
48 |
-
abaqus 6.11 torrent video training<br />
|
49 |
-
abaqus 6.11 torrent online course<br />
|
50 |
-
abaqus 6.11 torrent review ratings<br />
|
51 |
-
abaqus 6.11 torrent comparison with other software<br />
|
52 |
-
abaqus 6.11 torrent features and benefits<br />
|
53 |
-
abaqus 6.11 torrent system requirements<br />
|
54 |
-
abaqus 6.11 torrent technical support<br />
|
55 |
-
abaqus 6.11 torrent latest updates<br />
|
56 |
-
abaqus 6.11 torrent best practices<br />
|
57 |
-
abaqus 6.11 torrent tips and tricks<br />
|
58 |
-
abaqus 6.11 torrent case studies<br />
|
59 |
-
abaqus 6.11 torrent testimonials<br />
|
60 |
-
abaqus 6.11 torrent alternatives<br />
|
61 |
-
abaqus 6.11 torrent discounts and coupons<br />
|
62 |
-
abaqus 6.11 torrent free trial</p>
|
63 |
-
<p>Therefore, if you want to download Abaqus 6.11 torrent safely and legally, you should follow the instructions below.</p>
|
64 |
-
<h2>How to download Abaqus 6.11 torrent safely and legally</h2>
|
65 |
-
<p>To download Abaqus 6.11 torrent safely and legally, you will need the following:</p>
|
66 |
-
<ul>
|
67 |
-
<li>A valid license for Abaqus 6.11 from Dassault Systemes SIMULIA Corp., the developer of the software</li>
|
68 |
-
<li>A VPN service that can protect your online privacy and security</li>
|
69 |
-
<li>A torrent client that can handle magnet links and peer-to-peer file sharing</li>
|
70 |
-
<li>A reliable and trustworthy website that offers Abaqus 6.11 torrent</li>
|
71 |
-
</ul>
|
72 |
-
<p>Once you have these requirements ready, you can proceed with the following steps:</p>
|
73 |
-
<ol>
|
74 |
-
<li>Connect to a VPN server that matches your location or preference</li>
|
75 |
-
<li>Open your torrent client and copy the magnet link of Abaqus 6.11 torrent from one of these websites:<br>
|
76 |
-
- FileCR<br>
|
77 |
-
- SolidTorrents<br>
|
78 |
-
- Wixsite</li>
|
79 |
-
<li>Paste the magnet link into your torrent client and start downloading Abaqus 6.11 torrent</li>
|
80 |
-
<li>Wait until the download is complete and verify the integrity of the file</li>
|
81 |
-
<li>Run the setup file and follow the instructions to install Abaqus 6.11 on your computer</li>
|
82 |
-
<li>Activate your license using your credentials from Dassault Systemes SIMULIA Corp.</li>
|
83 |
-
<li>Enjoy using Abaqus 6.11 for your simulation needs</li>
|
84 |
-
</ol>
|
85 |
-
<h2>How to use Abaqus 6.11 for your simulation needs</h2>
|
86 |
-
<p>Abaqus 6.11 is a comprehensive software suite that consists of several applications, such as:</p>
|
87 |
-
<ul>
|
88 |
-
<li>Abaqus/CAE: a graphical user interface that allows you to create models, define simulations, analyze results, and generate reports</li>
|
89 |
-
<li>Abaqus/Standard: a solver that performs linear and nonlinear static and dynamic simulations</li>
|
90 |
-
<li>Abaqus/Explicit: a solver that performs highly nonlinear transient simulations involving large deformations or contact</li>
|
91 |
-
<li>Abaqus/CFD: a solver that performs computational fluid dynamics simulations involving fluid flow and heat transfer</li>
|
92 |
-
<li>Abaqus/Viewer: a post-processor that allows you to visualize simulation results in various formats</li>
|
93 |
-
</ul>
|
94 |
-
<p>To use Abaqus 6.11 for your simulation needs, you will need to follow these general steps:</p>
|
95 |
-
<ol>
|
96 |
-
<li>Launch Abaqus/CAE from your desktop or start menu</li>
|
97 |
-
<li>Create a new model or open an existing one from a file or database</li>
|
98 |
-
<li>Define the geometry, material properties, boundary conditions, loads, interactions, etc. of your model using the tools available in Abaqus/CAE</li>
|
99 |
-
<li>Select the appropriate solver (Abaqus/Standard or Abaqus/Explicit) and submit your simulation job to run on your computer or on a remote server</li>
|
100 |
-
<li>Monitor the progress and status of your simulation job using Abaqus/CAE or Abaqus/Viewer</li>
|
101 |
-
<li>Analyze the simulation results using Abaqus/CAE or Abaqus/Viewer</li>
|
102 |
-
<li>Create reports or export data using Abaqus/CAE or Abaqus/Viewer</li>
|
103 |
-
</ol>
|
104 |
-
<h3>Examples of how to use Abaqus 6.11 for different types of simulations</h3>
|
105 |
-
<h4>Example 1: Structural analysis of a beam under bending load</h4>
|
106 |
-
<p>In this example, we will use Abaqus/CAE to create a simple model of a beam under bending load and perform a linear static analysis using Abaqus/Standard.</p>
|
107 |
-
<ol type="a">
|
108 |
-
<li>Create a new model in Abaqus/CAE by clicking on File > New Model Database...</li>
|
109 |
-
 <li>Create a part representing the beam by clicking on Part > Create... Select "3D deformable" as type and "Solid" as base feature.</li>
|
110 |
-
 <li>In the Sketcher window, draw a rectangle with dimensions 10 m x 0.2 m x 0.1 m using the Create Lines tool.</li>
|
111 |
-
 <li>In the Part module toolbar, click on Done to exit Sketcher mode.</li>
|
112 |
-
 <li>Create a material representing steel by clicking on Property > Material > Create... Enter "Steel" as name and assign density (7850 kg/m3), elastic modulus (200 GPa), Poisson's ratio (0.3), etc.</li>
|
113 |
-
 <li>Create a section representing the beam cross-section by clicking on Property > Section > Create... Enter "Beam" as name and select "Solid" as category.</li>
|
114 |
-
 <li>In the Edit Section dialog box, select "Steel" as material assignment.</li>
|
115 |
-
 <li>Assign the section to the beam part by clicking on Property > Section Assignment... Select "Beam" as section name.</li>
|
116 |
-
 <li>Create an assembly containing only one instance of the beam part by clicking on Assembly > Instance... Select "Dependent" as type and "Beam" as part name.</li>
|
117 |
-
 <li>Create a datum plane at the mid-span of the beam by clicking on Assembly > Datum > Plane... Select "Offset from plane" as type and enter 5 m as distance.</li>
|
118 |
-
 <li>Create a reference point at the center of the datum plane by clicking on Assembly > Reference Point... Select "Datum plane" as type and select the datum plane.</li>
|
119 |
-
 <li>Create a step representing the bending load by clicking on Step > Create... Enter "Bending" as name and select "Static, General" as procedure type.</li>
|
120 |
-
 <li>Create a load representing the bending load by clicking on Load > Create... Enter "Bending" as name and select "Concentrated force" as category. Select the reference point as region and enter -1000 N as magnitude in the CF2 direction.</li>
|
121 |
-
 <li>Create boundary conditions representing the fixed supports at the ends of the beam by clicking on Boundary Condition > Create... Enter "Fixed" as name and select "Encastre" as type. Select the two end faces of the beam as region.</li>
|
122 |
-
 <li>Create a mesh for the beam part by clicking on Mesh > Part... Select "Beam" as part name and "Linear open section beam" as element type. Enter 20 as approximate size.</li>
|
123 |
-
 <li>Create a job for the analysis by clicking on Job > Manager... Enter "Beam_bending" as name and select "Model-1" as model.</li>
|
124 |
-
 <li>Submit the job for execution by clicking on Job > Submit...</li>
|
125 |
-
 <li>Monitor the progress and status of the job by clicking on Job > Monitor...</li>
|
126 |
-
 <li>Analyze the simulation results by clicking on Visualization > ODB Display...</li>
|
127 |
-
 </ol>
|
128 |
-
<h4>Example 2: Thermal analysis of a beam under heat flux</h4>
|
129 |
-
<p>In this example, we will use Abaqus/CAE to create a simple model of a beam under heat flux and perform a steady-state thermal analysis using Abaqus/Standard.</p>
|
130 |
-
<ol type="a">
|
131 |
-
<li>Create a new model in Abaqus/CAE by clicking on File > New Model Database...</li></p> 0a6ba089eb<br />
|
132 |
-
<br />
|
133 |
-
<br />
|
spaces/1gistliPinn/ChatGPT4/Examples/Duplicate Photo Finder Professional 5.22 Crack Portable License Key High Quality.md
DELETED
@@ -1,6 +0,0 @@
<h2>Duplicate Photo Finder Professional 5.22 Crack Portable License key</h2><br /><p><b><b>Download Zip</b> ⚹⚹⚹ <a href="https://imgfil.com/2uxXFq">https://imgfil.com/2uxXFq</a></b></p><br /><br />
<br />
Crack Download. Duplicate Photo Finder License Key is an influential accessible duplicate image taking away usefulness. The application lies ... 4d29de3e1b<br />
<br />
<br />
<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020.md
DELETED
@@ -1,5 +0,0 @@
<br />
<h1>FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020: A Review</h1>". Do you want me to change or improve it in any way?</p>
<h2>FileLocator Pro 8.5 Build 2944 Crack [Latest] 2020</h2><br /><p><b><b>DOWNLOAD</b> ———>>> <a href="https://imgfil.com/2uxZDI">https://imgfil.com/2uxZDI</a></b></p><br /><br /> 3cee63e6c2<br />
<br />
<br />
spaces/1line/AutoGPT/autogpt/agent/__init__.py
DELETED
@@ -1,4 +0,0 @@
from autogpt.agent.agent import Agent
from autogpt.agent.agent_manager import AgentManager

__all__ = ["Agent", "AgentManager"]
spaces/1phancelerku/anime-remove-background/Download Gold Digger FRVR Mod APK and Get Unlimited Gems Coins and Stars.md
DELETED
@@ -1,72 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Gold Digger FRVR Mod APK Unlimited Stars: A Guide for Mining Fans</h1>
|
3 |
-
<p>If you are a fan of mining games, you might have heard of <strong>Gold Digger FRVR</strong>, a 2D mining game from the FRVR developers. In this game, you must dig underground and attempt to find hidden gems and precious metals. You can also buy upgrades for your miner and tools, build your own house, and explore an infinite mine full of treasures, dangers, and puzzles. But what if you want to enjoy the game without any limitations or interruptions? That's where <strong>Gold Digger FRVR mod apk</strong> comes in handy. In this article, we will tell you everything you need to know about this modded version of the game, including its features, benefits, download and installation steps, and tips and tricks. So, let's get started!</p>
|
4 |
-
<h2>gold digger frvr mod apk unlimited stars</h2><br /><p><b><b>Download</b> → <a href="https://jinyurl.com/2uNQ9i">https://jinyurl.com/2uNQ9i</a></b></p><br /><br />
|
5 |
-
<h2>What is Gold Digger FRVR?</h2>
|
6 |
-
<p>Gold Digger FRVR is a casual arcade mining game that can be played on various platforms, such as web browser, Facebook, Google Play Store, App Store, and Samsung Instant. The game was released in March 2019 by Chris Benjaminsen, the founder of FRVR, a company that specializes in creating HTML5 games that work across devices. The game has received positive reviews from players and critics alike, with an average rating of 4.8/5 on the FRVR website and 9.2/10 on CrazyGames. The game has also been featured on Metacritic and Google Play Store.<p>
|
7 |
-
<p>In Gold Digger FRVR, you play as a miner who wants to realize his mining dreams and become a speleology tycoon. You have to use your pickaxe to dig through the rocks and find gold nuggets, diamonds, fossils, and other valuable items. You can also match three or more rocks of the same color to blast them and get more gold. You can sell your loot at Joe's shop and use the money to buy new equipment, such as helmets, gloves, boots, drills, dynamites, etc. You can also upgrade your skills by using blue star tokens that you earn by digging deeper and discovering new rocks. You can also build your own house by buying furniture and items from the home decor shop.</p>
|
8 |
-
<h2>Why use Gold Digger FRVR mod apk?</h2>
|
9 |
-
<p>Gold Digger FRVR is a fun and addictive game that can keep you entertained for hours. However, it also has some drawbacks that might affect your gaming experience. For example, you might run out of stars, coins, diamonds, or gems that are needed to buy upgrades or items. You might also get annoyed by the ads that pop up every now and then. You might more than three rocks of the same color in a row or column. You can also use special rocks such as rainbow rocks, bomb rocks, or magnet rocks to create bigger explosions and get more rewards.</p>
|
10 |
-
<h3>Explore every corner of the cave and find hidden treasures and fossils</h3>
|
11 |
-
<p>Another tip to play Gold Digger FRVR is to explore every corner of the cave and find hidden treasures and fossils. You can find chests, keys, maps, scrolls, and other items that can give you extra gold, stars, diamonds, or gems. You can also find fossils of dinosaurs, mammoths, sabertooths, and other ancient creatures that can be sold for a high price at Joe's shop. You can also collect them and display them in your house as trophies.</p>
|
12 |
-
<h3>Buy upgrades for your miner and tools at Joe's shop</h3>
|
13 |
-
<p>One of the most important aspects of Gold Digger FRVR is to buy upgrades for your miner and tools at Joe's shop. You can use the coins that you earn by selling your loot to buy new helmets, gloves, boots, drills, dynamites, etc. that can improve your mining abilities and skills. You can also use the stars that you earn by digging deeper and discovering new rocks to buy new pickaxes that can break more rocks in one hit. You can also use the diamonds that you earn by finding rare items to buy special items such as jetpacks, magnets, lasers, etc. that can give you an edge in the game.</p>
|
14 |
-
<h3>Build and decorate your own house with furniture and items from the home decor shop</h3>
|
15 |
-
<p>The last tip to play Gold Digger FRVR is to build and decorate your own house with furniture and items from the home decor shop. You can use the gems that you earn by matching three or more rocks of the same color to buy new furniture and items such as sofas, tables, chairs, lamps, paintings, etc. that can make your house look cozy and stylish. You can also use the fossils that you find in the cave to decorate your house with ancient artifacts. You can also invite your friends to visit your house and show off your achievements.</p>
|
16 |
-
<p>gold digger frvr hack unlimited money and gems<br />
|
17 |
-
gold digger frvr cheats no ads free purchase<br />
|
18 |
-
gold digger frvr mod apk download latest version<br />
|
19 |
-
gold digger frvr mine puzzle hack 100000 diamonds<br />
|
20 |
-
gold digger frvr unlimited all fixes bugs<br />
|
21 |
-
gold digger frvr mod apk android 1<br />
|
22 |
-
gold digger frvr game hack increased speed<br />
|
23 |
-
gold digger frvr codes for free coins<br />
|
24 |
-
gold digger frvr mod apk revdl<br />
|
25 |
-
gold digger frvr how to get gems easily<br />
|
26 |
-
gold digger frvr unlimited shopping unlocked<br />
|
27 |
-
gold digger frvr mod apk rexdl<br />
|
28 |
-
gold digger frvr games hack no root<br />
|
29 |
-
gold digger frvr cheats youtube video<br />
|
30 |
-
gold digger frvr mod apk happymod<br />
|
31 |
-
gold digger frvr mine puzzle mod apk 2.8.6<br />
|
32 |
-
gold digger frvr hack online generator<br />
|
33 |
-
gold digger frvr cheats reddit forum<br />
|
34 |
-
gold digger frvr mod apk 2023 update<br />
|
35 |
-
gold digger frvr how to get star coins xp<br />
|
36 |
-
gold digger frvr unlimited levels unlocked<br />
|
37 |
-
gold digger frvr mod apk apkpure<br />
|
38 |
-
gold digger frvr games hack ios iphone ipad<br />
|
39 |
-
gold digger frvr cheats discord server<br />
|
40 |
-
gold digger frvr mod apk 2.8.2 latest version<br />
|
41 |
-
gold digger frvr hack tool download free<br />
|
42 |
-
gold digger frvr cheats quora answers<br />
|
43 |
-
gold digger frvr mod apk obb data file<br />
|
44 |
-
gold digger frvr games hack pc windows mac<br />
|
45 |
-
gold digger frvr cheats facebook group<br />
|
46 |
-
gold digger frvr mod apk offline play mode<br />
|
47 |
-
gold digger frvr hack apk mirror link<br />
|
48 |
-
gold digger frvr cheats pinterest pin<br />
|
49 |
-
gold digger frvr mod apk unlimited everything<br />
|
50 |
-
gold digger frvr games hack bluestacks emulator<br />
|
51 |
-
gold digger frvr cheats telegram channel<br />
|
52 |
-
gold digger frvr mod apk no verification survey<br />
|
53 |
-
gold digger frvr hack safe secure tested<br />
|
54 |
-
gold digger frvr cheats instagram post story<br />
|
55 |
-
gold digger frvr mod apk vip premium features</p>
|
56 |
-
<h2>Conclusion</h2>
|
57 |
-
<p>Gold Digger FRVR is a fun and addictive mining game that can keep you entertained for hours. However, if you want to enjoy the game without any limitations or interruptions, you might want to use Gold Digger FRVR mod apk, a modified version of the game that offers you unlimited resources, no ads, free purchases, bug fixes, and performance improvements. You can also hack 100000 diamonds and unlimited all in the game and customize it as you wish. To get Gold Digger FRVR mod apk on your device, you just need to download the apk file from a trusted source, enable unknown sources on your device settings, install the apk file and launch the game. Then, you can follow our tips and tricks to master the game and become a speleology tycoon. So, what are you waiting for? Download Gold Digger FRVR mod apk today and start digging!</p>
|
58 |
-
<h2>FAQs</h2>
|
59 |
-
<p>Here are some frequently asked questions about Gold Digger FRVR mod apk:</p>
|
60 |
-
<h3>Is Gold Digger FRVR mod apk safe to use?</h3>
|
61 |
-
<p>Yes, Gold Digger FRVR mod apk is safe to use as long as you download it from a trusted source. However, we recommend that you scan the apk file with an antivirus software before installing it on your device.</p>
|
62 |
-
<h3>Is Gold Digger FRVR mod apk legal to use?</h3>
|
63 |
-
<p>No, Gold Digger FRVR mod apk is not legal to use as it violates the terms and conditions of the original game. Therefore, we do not endorse or promote its use. Use it at your own risk.</p>
|
64 |
-
<h3>Will Gold Digger FRVR mod apk work on my device?</h3>
|
65 |
-
<p>Gold Digger FRVR mod apk should work on most devices that support Android 4.4 or higher. However, some devices might not be compatible with the mod apk due to different specifications or settings. Therefore, we suggest that you check the compatibility of your device before downloading and installing the mod apk.</p>
|
66 |
-
<h3>Can I play Gold Digger FRVR mod apk online with other players?</h3>
|
67 |
-
<p>No, Gold Digger FRVR mod apk is an offline game that does not require an internet connection to play. Therefore, you cannot play it online with other players.</p>
|
68 |
-
<h3>Can I update Gold Digger FRVR mod apk to the latest version?</h3>
|
69 |
-
<p>No, Gold Digger FRVR mod apk is not compatible with the latest version of the original game. Therefore, you cannot update it to the latest version. If you want to play the latest version of the game, you have to uninstall the mod apk and install the original game from the official source.</p>
|
70 |
-
<p>I hope this article has helped you learn more about Gold Digger FRVR mod apk and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy digging!</p> 197e85843d<br />
|
71 |
-
<br />
|
72 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Download Subway Surfers Mod APK v2 31.0 Terbaru 2022 and Unlock All Characters and Boards.md
DELETED
@@ -1,105 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Download Game Subway Surfers Mod Apk v2 31.0 Terbaru 2022</h1>
|
3 |
-
<p>Are you looking for a fun and exciting game to play on your Android device? Do you want to enjoy unlimited coins, keys, and other resources in the game? If yes, then you should download game subway surfers mod apk v2 31.0 terbaru 2022. This is the latest version of the popular endless runner game that has millions of fans around the world. In this article, we will tell you everything you need to know about subway surfers, subway surfers mod apk, and why you should play it in 2022.</p>
|
4 |
-
<h2>download game subway surfers mod apk v2 31.0 terbaru 2022</h2><br /><p><b><b>DOWNLOAD</b> ->->->-> <a href="https://jinyurl.com/2uNLwT">https://jinyurl.com/2uNLwT</a></b></p><br /><br />
|
5 |
-
<h2>What is Subway Surfers?</h2>
|
6 |
-
<p>Subway Surfers is an endless running game developed by Kiloo and SYBO Games. Like most games from this genre, the players only need to concern themselves with obstacle avoidance and collecting items. The game is set in various cities around the world, where the players control a group of young graffiti artists who run away from the police on their hoverboards. The game has colorful graphics, smooth animations, and catchy music that make it appealing to players of all ages.</p>
|
7 |
-
<h3>Gameplay</h3>
|
8 |
-
<p>The gameplay of subway surfers is simple and intuitive. The players swipe left or right to change lanes, swipe up to jump, swipe down to roll, and tap to use power-ups. The game has various obstacles such as trains, barriers, signs, tunnels, and more that the players have to avoid or jump over. The game also has coins, keys, magnets, jetpacks, hoverboards, and other items that the players can collect or use to enhance their performance. The game ends when the player crashes into an obstacle or gets caught by the police.</p>
|
9 |
-
<h3>Features</h3>
|
10 |
-
<p>Subway Surfers has many features that make it fun and engaging. Some of these features are:</p>
|
11 |
-
<ul>
|
12 |
-
<li>Daily challenges and missions that reward the players with coins, keys, and other prizes.</li>
|
13 |
-
<li>Weekly hunts that require the players to collect a certain number of tokens or letters to unlock special rewards.</li>
|
14 |
-
<li>Seasonal events that celebrate different festivals and occasions with themed decorations, characters, hoverboards, and more.</li>
|
15 |
-
<li>World tour that takes the players to different cities every month with new backgrounds, music, and challenges.</li>
|
16 |
-
<li>In-app purchases that allow the players to buy more coins, keys, power-ups, hoverboards, characters, outfits, and more.</li>
|
17 |
-
</ul>
|
18 |
-
<h3>Characters</h3>
|
19 |
-
<p>Subway Surfers has a diverse and colorful cast of characters that the players can choose from. Each character has a unique personality, style, and backstory. Some of the main characters are:</p>
|
20 |
-
<ul>
|
21 |
-
<li>Jake: The default character and the leader of the group. He is a rebellious and adventurous boy who loves graffiti and skateboarding.</li>
|
22 |
-
<li>Tricky: Jake's best friend and partner in crime. She is a tomboyish and energetic girl who wears a rabbit hat and a red jacket.</li>
|
23 |
-
<li>Fresh: Jake's other friend and a talented beatboxer. He is a cool and laid-back guy who wears headphones and a boombox.</li>
|
24 |
-
<li>Spike: Jake's rival and a punk rocker. He is a tough and arrogant guy who wears a mohawk and a leather jacket.</li>
|
25 |
-
<li>Yutani: Jake's classmate and a sci-fi geek. She is a shy and nerdy girl who wears an alien costume.</li>
|
26 |
-
</ul>
|
27 |
-
<h2>What is Subway Surfers Mod <h2>What is Subway Surfers Mod Apk?</h2>
|
28 |
-
<p>Subway Surfers Mod Apk is a modified version of the original game that gives the players access to unlimited coins, keys, power-ups, hoverboards, characters, outfits, and more. With Subway Surfers Mod Apk, the players can enjoy the game without any limitations or restrictions. They can unlock and customize their favorite characters, buy and upgrade their hoverboards, use various power-ups to boost their speed and score, and explore different cities with ease.</p>
|
29 |
-
<p>download subway surfers mod apk unlimited money and keys v2 31.0<br />
|
30 |
-
subway surfers apk mod v2 31.0 latest version free download<br />
|
31 |
-
how to download subway surfers mod apk v2 31.0 for android<br />
|
32 |
-
subway surfers mod apk v2 31.0 new update 2022 download<br />
|
33 |
-
download game subway surfers hack mod apk v2 31.0 terbaru<br />
|
34 |
-
subway surfers mod apk v2 31.0 all characters unlocked download<br />
|
35 |
-
subway surfers apk mod v2 31.0 offline download for pc<br />
|
36 |
-
download subway surfers mod apk v2 31.0 unlimited coins and keys<br />
|
37 |
-
subway surfers mod apk v2 31.0 mega mod download android<br />
|
38 |
-
download game subway surfers cheat mod apk v2 31.0 terbaru<br />
|
39 |
-
subway surfers mod apk v2 31.0 no ads download free<br />
|
40 |
-
subway surfers apk mod v2 31.0 online multiplayer download<br />
|
41 |
-
download subway surfers mod apk v2 31.0 with unlimited everything<br />
|
42 |
-
subway surfers mod apk v2 31.0 high score hack download<br />
|
43 |
-
download game subway surfers premium mod apk v2 31.0 terbaru<br />
|
44 |
-
subway surfers mod apk v2 31.0 unlocked all boards and skins download<br />
|
45 |
-
subway surfers apk mod v2 31.0 world tour download latest version<br />
|
46 |
-
download subway surfers mod apk v2 31.0 anti ban and no root<br />
|
47 |
-
subway surfers mod apk v2 31.0 unlimited hoverboards and boosters download<br />
|
48 |
-
download game subway surfers pro mod apk v2 31.0 terbaru<br />
|
49 |
-
subway surfers mod apk v2 31.0 god mode and invincible download<br />
|
50 |
-
subway surfers apk mod v2 31.0 hd graphics and sound download<br />
|
51 |
-
download subway surfers mod apk v2 31.0 with all missions completed<br />
|
52 |
-
subway surfers mod apk v2 31.0 unlimited lives and time download<br />
|
53 |
-
download game subway surfers super mod apk v2 31.0 terbaru</p>
|
54 |
-
<h3>Benefits of Subway Surfers Mod Apk</h3>
|
55 |
-
<p>Some of the benefits of using Subway Surfers Mod Apk are:</p>
|
56 |
-
<ul>
|
57 |
-
<li>Unlimited coins and keys that can be used to buy anything in the game.</li>
|
58 |
-
<li>All characters and outfits are unlocked and available for free.</li>
|
59 |
-
<li>All hoverboards and power-ups are unlocked and upgraded to the maximum level.</li>
|
60 |
-
<li>No ads or pop-ups that interrupt the gameplay.</li>
|
61 |
-
<li>No root or jailbreak required to install and use the mod apk.</li>
|
62 |
-
</ul>
|
63 |
-
<h3>How to Download and Install Subway Surfers Mod Apk v2 31.0</h3>
|
64 |
-
<p>If you want to download game subway surfers mod apk v2 31.0 terbaru 2022, you need to follow these simple steps:</p>
|
65 |
-
<ol>
|
66 |
-
<li>Click on the link below to download the mod apk file.</li>
|
67 |
-
<li>Allow unknown sources in your device settings to install apps from third-party sources.</li>
|
68 |
-
<li>Locate and tap on the downloaded file to start the installation process.</li>
|
69 |
-
<li>Wait for a few seconds until the installation is complete.</li>
|
70 |
-
<li>Launch the game and enjoy unlimited resources and features.</li>
|
71 |
-
</ol>
|
72 |
-
<h3>Precautions and Risks of Using Subway Surfers Mod Apk</h3>
|
73 |
-
<p>While Subway Surfers Mod Apk can be fun and convenient, it also comes with some precautions and risks that you should be aware of. Some of these are:</p>
|
74 |
-
<ul>
|
75 |
-
<li>The mod apk is not an official version of the game and may not be compatible with some devices or updates.</li>
|
76 |
-
<li>The mod apk may contain viruses or malware that can harm your device or steal your personal information.</li>
|
77 |
-
<li>The mod apk may cause your game account to be banned or suspended by the developers or Google Play Store for violating their terms and conditions.</li>
|
78 |
-
<li>The mod apk may ruin the fun and challenge of the game by making it too easy and boring.</li>
|
79 |
-
</ul>
|
80 |
-
<h2>Why You Should Play Subway Surfers in 2022</h2>
|
81 |
-
<p>Subway Surfers is not just a game, it is a phenomenon that has been entertaining millions of players for almost a decade. The game has been constantly updated and improved with new features, events, and content that keep it fresh and exciting. Here are some reasons why you should play subway surfers in 2022:</p>
|
82 |
-
<h3>New Updates and Events</h3>
|
83 |
-
<p>Subway Surfers never gets old because it always has something new to offer. Every month, the game takes you to a different city with new backgrounds, music, challenges, and rewards. You can also participate in seasonal events that celebrate various festivals and occasions with themed decorations, characters, hoverboards, and more. For example, in January 2022, you can join the Winter Wonderland event in Moscow and enjoy the snowy scenery, festive outfits, and special prizes.</p>
|
84 |
-
<h3>Fun and Addictive Gameplay</h3>
|
85 |
-
<p>Subway Surfers is one of those games that you can play for hours without getting bored. The gameplay is simple but addictive, as you try to run as far as you can while dodging obstacles and collecting items. The game also has a lot of variety and challenge, as you encounter different types of obstacles, power-ups, hoverboards, and enemies. The game also has a lot of humor and charm, as you witness funny animations, sound effects, and dialogues from the characters.</p>
|
86 |
-
<h3>Global Leaderboard and Achievements</h3>
|
87 |
-
<p>Subway Surfers is not just a solo game, it is also a social game that lets you compete with other players around the world. You can connect your game account to Facebook or Google Play Games and see how you rank among your friends and other players on the global leaderboard. You can also earn achievements by completing various tasks and milestones in the game. You can also share your high scores and screenshots with your friends on social media platforms.</p>
|
88 |
-
<h2>Conclusion</h2>
|
89 |
-
<p>Subway Surfers is a game that deserves your attention in 2022. It is a game that combines fun, excitement, adventure, creativity, and social interaction in one package. It is a game that will keep you entertained for hours with its endless running gameplay and its amazing features. It is a game that will challenge you with its various obstacles, power-ups, and enemies. It is a game that will connect you with other players and let you show off your skills and achievements. If you want to experience the best of subway surfers, you should download game subway surfers mod apk v2 31.0 terbaru 2022 and enjoy unlimited resources and features. However, you should also be careful of the risks and precautions of using the mod apk and play responsibly.</p>
|
90 |
-
<h3>FAQs</h3>
|
91 |
-
<p>Here are some frequently asked questions about subway surfers and subway surfers mod apk:</p>
|
92 |
-
<ol>
|
93 |
-
<li>What is the latest version of subway surfers?</li>
|
94 |
-
<p>The latest version of Subway Surfers covered in this article is v2 31.0, which takes players to Moscow for the Winter Wonderland event.</p>
|
95 |
-
<li>How can I get more coins and keys in subway surfers?</li>
|
96 |
-
<p>You can get more coins and keys in subway surfers by completing daily challenges and missions, participating in weekly hunts and seasonal events, watching ads, or buying them with real money. Alternatively, you can use subway surfers mod apk to get unlimited coins and keys for free.</p>
|
97 |
-
<li>How can I unlock new characters and outfits in subway surfers?</li>
|
98 |
-
<p>You can unlock new characters and outfits in subway surfers by collecting a certain number of tokens or letters during the weekly hunts or seasonal events, or by buying them with coins or keys. Alternatively, you can use subway surfers mod apk to unlock all characters and outfits for free.</p>
|
99 |
-
<li>How can I change the city or location in subway surfers?</li>
|
100 |
-
<p>You can change the city or location in subway surfers by updating the game every month when a new world tour destination is released. Alternatively, you can use subway surfers mod apk to access any city or location at any time.</p>
|
101 |
-
<li>Is subway surfers mod apk safe to use?</li>
|
102 |
-
<p>Subway surfers mod apk is not an official version of the game and may not be safe to use. It may contain viruses or malware that can harm your device or steal your personal information. It may also cause your game account to be banned or suspended by the developers or Google Play Store for violating their terms and conditions. Therefore, you should use subway surfers mod apk at your own risk and discretion.</p>
|
103 |
-
</ol></p>
|
104 |
-
<br />
|
105 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Download UNO! and Join the Fun of the Mobile Community Cup..md
DELETED
@@ -1,132 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Uno TM Free Download: How to Play the Classic Card Game on Your Mobile Device</h1>
|
3 |
-
<p>Do you love playing card games with your friends and family? Do you want to enjoy a fun and memorable game wherever and whenever you want? If you answered yes, then you should try Uno TM, the official mobile version of the world's most beloved card game. In this article, we will show you how to download and play Uno TM on your mobile device, as well as some tips and strategies to win the game.</p>
|
4 |
-
<h2>What is Uno TM and Why You Should Play It</h2>
|
5 |
-
<p>Uno TM is a card game that is played by matching and discarding cards in your hand until none are left. The game is simple to learn but challenging to master, as you have to use strategy, luck, and skill to outsmart your opponents. You can play with up to 10 players online or offline, or play solo against the computer. You can also customize your game with various house rules, themes, and tournaments.</p>
|
6 |
-
<h2>uno tm free download</h2><br /><p><b><b>Download</b> 🗹 <a href="https://jinyurl.com/2uNKDy">https://jinyurl.com/2uNKDy</a></b></p><br /><br />
|
7 |
-
<p>Playing Uno TM on your mobile device has many benefits. You can play anytime, anywhere, with anyone. You don't need a physical deck of cards or a table to play. You can also enjoy new features and updates that make the game more exciting and engaging. For example, you can chat with your friends, join clubs, spin the wheel for rewards, and participate in special events.</p>
|
8 |
-
<h2>How to Download and Install Uno TM on Your Mobile Device</h2>
|
9 |
-
<p>Downloading and installing Uno TM on your mobile device is easy and free. Here are the steps to follow:</p>
|
10 |
-
<ol>
|
11 |
-
<li>Go to the Google Play Store or the App Store on your device.</li>
|
12 |
-
<li>Search for "Uno TM" or "Uno Mobile" in the search bar.</li>
|
13 |
-
<li>Select the app from the list of results and tap on "Install".</li>
|
14 |
-
<li>Wait for the app to download and install on your device.</li>
|
15 |
-
<li>Open the app and sign in with your Facebook account or create a new account.</li>
|
16 |
-
<li>Enjoy playing Uno TM on your mobile device!</li>
|
17 |
-
</ol>
|
18 |
-
<p>The requirements and compatibility of Uno TM vary depending on your device. Generally, you need a device that runs on Android 4.4 or higher or iOS 9.0 or higher. You also need a stable internet connection to play online.</p>
|
19 |
-
<p>uno tm free download for android<br />
|
20 |
-
uno tm free download for pc<br />
|
21 |
-
uno tm free download for mac<br />
|
22 |
-
uno tm free download apk<br />
|
23 |
-
uno tm free download ios<br />
|
24 |
-
uno tm free download windows 10<br />
|
25 |
-
uno tm free download online<br />
|
26 |
-
uno tm free download bluestacks<br />
|
27 |
-
uno tm free download google play<br />
|
28 |
-
uno tm free download app store<br />
|
29 |
-
uno tm free download official site<br />
|
30 |
-
uno tm free download latest version<br />
|
31 |
-
uno tm free download mod apk<br />
|
32 |
-
uno tm free download no ads<br />
|
33 |
-
uno tm free download unlimited coins<br />
|
34 |
-
uno tm free download multiplayer<br />
|
35 |
-
uno tm free download classic mode<br />
|
36 |
-
uno tm free download wild mode<br />
|
37 |
-
uno tm free download 2v2 mode<br />
|
38 |
-
uno tm free download tournaments<br />
|
39 |
-
uno tm free download events<br />
|
40 |
-
uno tm free download rewards<br />
|
41 |
-
uno tm free download clubs<br />
|
42 |
-
uno tm free download gifts<br />
|
43 |
-
uno tm free download chat<br />
|
44 |
-
uno tm free download tips and tricks<br />
|
45 |
-
uno tm free download cheats and hacks<br />
|
46 |
-
uno tm free download reviews and ratings<br />
|
47 |
-
uno tm free download gameplay and features<br />
|
48 |
-
uno tm free download updates and news<br />
|
49 |
-
uno tm free download community and support<br />
|
50 |
-
uno tm free download esports and competitions<br />
|
51 |
-
uno tm free download mattel163 limited<br />
|
52 |
-
uno tm free download official mobile game<br />
|
53 |
-
uno tm free download fun and family-friendly<br />
|
54 |
-
uno tm free download card game experience<br />
|
55 |
-
uno tm free download house rules and customizations<br />
|
56 |
-
uno tm free download quick play and easy start<br />
|
57 |
-
uno tm free download buddy up and collaborate <br />
|
58 |
-
uno tm free download connect and shout UNO!<br />
|
59 |
-
uno tm free download challenges and leaderboards <br />
|
60 |
-
uno tm free download go wild and win big <br />
|
61 |
-
uno tm free download net energy gain <br />
|
62 |
-
uno tm free download mini sun experiment <br />
|
63 |
-
uno tm free download fusion reactor <br />
|
64 |
-
uno tm free download south korea <br />
|
65 |
-
uno tm free download 100 million degrees <br />
|
66 |
-
uno tm free download 30 seconds <br />
|
67 |
-
uno tm free download holy grail.</p>
|
68 |
-
<h2>How to Play Uno TM on Your Mobile Device</h2>
|
69 |
-
<p>The basic rules and gameplay of Uno TM are similar to the classic card game. Here are the main points to remember:</p>
|
70 |
-
<ul>
|
71 |
-
<li>Every player starts with seven cards, which they keep hidden from other players.</li>
|
72 |
-
<li>The rest of the cards are placed in a draw pile face down.</li>
|
73 |
-
<li>The top card of the draw pile is turned over to start a discard pile.</li>
|
74 |
-
<li>The player to the left of the dealer starts the game by matching a card in their hand to the card on the discard pile by color, number, or symbol.</li>
|
75 |
-
<li>If they don't have a matching card, they must draw a card from the draw pile.</li>
|
76 |
-
<li>If they can play the card they drew, they can do so. Otherwise, their turn ends.</li>
|
77 |
-
<li>The game continues clockwise until one player has no cards left in their hand.</li>
|
78 |
-
<li>The player who plays their last card must shout "Uno" before doing so. If they forget or are caught by another player, they must draw two cards as a penalty.</li>
|
79 |
-
<li>The first player who plays all their cards wins the round and scores points based on the cards left in their opponents' hands.</li>
|
80 |
-
</ul>
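<p>The matching rule above is simple enough to express directly in code. The sketch below uses a made-up (color, value) card representation purely for illustration; it is not how the app itself stores cards.</p>

```python
# Illustrative sketch of the Uno matching rule: a card is playable if it is a
# wild card, or if it matches the top of the discard pile by color or by value.
def can_play(card: tuple[str, str], top: tuple[str, str]) -> bool:
    color, value = card
    top_color, top_value = top
    return color == "wild" or color == top_color or value == top_value

print(can_play(("red", "7"), ("red", "2")))       # True  - colors match
print(can_play(("blue", "skip"), ("red", "2")))   # False - nothing matches
print(can_play(("wild", "draw4"), ("red", "2")))  # True  - wild always plays
```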
|
81 |
-
<p>There are also some special cards that have different effects on the game. Here are some examples:</p>
|
82 |
-
<table>
|
83 |
-
<tr>
|
84 |
-
<th>Card</th>
|
85 |
-
<th>Effect</th>
|
86 |
-
</tr>
|
87 |
-
<tr>
|
88 |
-
<td>Wild</td>
|
89 |
-
<td>Allows the player to choose the color of the next card to be played.</td>
|
90 |
-
</tr>
|
91 |
-
<tr>
|
92 |
-
<td>Wild Draw Four</td>
|
93 |
-
<td>Allows the player to choose the color of the next card to be played and forces the next player to draw four cards.</td>
|
94 |
-
</tr>
|
95 |
-
<tr>
|
96 |
-
<td>Draw Two</td>
|
97 |
-
<td>Forces the next player to draw two cards and skip their turn.</td>
|
98 |
-
</tr>
|
99 |
-
<tr>
|
100 |
-
<td>Skip</td>
|
101 |
-
<td>Skips the next player's turn.</td>
|
102 |
-
</tr>
|
103 |
-
<tr>
|
104 |
-
<td>Reverse</td>
|
105 |
-
<td>Reverses the direction of play.</td>
|
106 |
-
</tr>
|
107 |
-
</table>
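<p>Read as data, the table is a small lookup from a special card to its effect on the turn order and on how many cards the next player must draw. The rough sketch below ignores the color choice that wild cards also involve; the state fields are assumptions made for illustration.</p>

```python
# Rough sketch of the special-card effects listed in the table above.
def apply_special(card: str, direction: int, skip_next: bool, pending_draw: int):
    if card == "Reverse":
        direction = -direction                 # play order flips
    elif card == "Skip":
        skip_next = True                       # next player loses a turn
    elif card == "Draw Two":
        skip_next, pending_draw = True, pending_draw + 2
    elif card == "Wild Draw Four":
        pending_draw += 4                      # plus a color choice in the real game
    return direction, skip_next, pending_draw

print(apply_special("Draw Two", direction=1, skip_next=False, pending_draw=0))
```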
|
108 |
-
<p>In Uno TM, you can also play with different modes and options that add more fun and challenge to the game. For example, you can play with 2v2 mode, where you team up with another player and share a hand. You can also play with Go Wild mode, where every card is a wild card. You can also play with various house rules, such as stacking, jumping in, 7-0, and bluffing.</p>
|
109 |
-
<p>To win Uno TM, you need to use your strategy, luck, and skill to outsmart your opponents. Here are some tips and strategies to help you:</p>
|
110 |
-
<ul>
|
111 |
-
<li>Pay attention to the cards played by your opponents and try to guess what they have in their hand.</li>
|
112 |
-
<li>Use your special cards wisely and save them for the right moment.</li>
|
113 |
-
<li>Change the color of the game to suit your hand or to block your opponents.</li>
|
114 |
-
<li>Try to get rid of your high-value cards as soon as possible.</li>
|
115 |
-
<li>Don't forget to shout "Uno" when you have one card left or you will be penalized.</li>
|
116 |
-
<li>Have fun and enjoy the game!</li>
|
117 |
-
</ul>
|
118 |
-
<h2>Conclusion</h2>
|
119 |
-
<p>Uno TM is a great game that you can play on your mobile device anytime, anywhere, with anyone. It is easy to download and install, and it offers many features and options that make the game more exciting and engaging. It is also a game that tests your strategy, luck, and skill, and challenges you to outsmart your opponents. If you are looking for a fun and memorable game to play with your friends and family, you should try Uno TM today!</p>
|
120 |
-
<h2>FAQs</h2>
|
121 |
-
<h3>Is Uno TM free to play?</h3>
|
122 |
-
<p>Yes, Uno TM is free to download and play on your mobile device. However, there are some in-app purchases that you can make to enhance your gaming experience, such as buying coins, tokens, or gems.</p>
|
123 |
-
<h3>Can I play Uno TM offline?</h3>
|
124 |
-
<p>Yes, you can play Uno TM offline with up to three computer players. You can also play online with up to 10 players from around the world.</p>
|
125 |
-
<h3>Can I chat with other players in Uno TM?</h3>
|
126 |
-
<p>Yes, you can chat with other players in Uno TM by using the chat feature. You can also use emojis, stickers, or voice messages to express yourself.</p>
|
127 |
-
<h3>Can I customize my game in Uno TM?</h3>
|
128 |
-
<p>Yes, you can customize your game in Uno TM by choosing from various house rules, themes, and tournaments. You can also create your own rules and invite your friends to join your game.</p>
|
129 |
-
<h3>How can I earn rewards in Uno TM?</h3>
|
130 |
-
<p>You can earn rewards in Uno TM by playing the game regularly, spinning the wheel, completing missions, joining clubs, or participating in special events. You can use your rewards to buy more cards, themes, or items in the game.</p>
|
131 |
-
<br />
|
132 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Enjoy the New and Exciting Mobile Game from Azerbaijan Create 017 APK.md
DELETED
@@ -1,117 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Create 017 APK Download: How to Install and Play the New Mobile Game from Azerbaijan</h1>
|
3 |
-
<p>If you are looking for a new and exciting mobile game to play, you might want to check out Create 017. This is a game that was developed by a team of young programmers from Azerbaijan, and it has been gaining popularity among gamers around the world. In this article, we will tell you what Create 017 is, how to download and install it on your device, and how to play it like a pro.</p>
|
4 |
-
<h2>What is Create 017?</h2>
|
5 |
-
<h3>A brief introduction to the game and its features</h3>
|
6 |
-
<p>Create 017 is a mobile game that combines elements of adventure, puzzle, and platformer genres. It is set in a futuristic world where you play as a hacker who has to infiltrate a secret facility and uncover its secrets. You will have to use your skills and creativity to hack various devices, solve puzzles, and avoid enemies. You will also have to explore different environments, such as a city, a forest, and a desert.</p>
|
7 |
-
<h2>create 017 apk download</h2><br /><p><b><b>Download File</b> ✫ <a href="https://jinyurl.com/2uNTwu">https://jinyurl.com/2uNTwu</a></b></p><br /><br />
|
8 |
-
<h3>The story and the gameplay of Create 017</h3>
|
9 |
-
<p>The game has a captivating story that will keep you hooked until the end. You will discover that the facility you are hacking is actually a project called CREATE, which stands for Creative Research Environment for Artificial Technology Evolution. This project aims to create artificial intelligence that can surpass human intelligence. However, something went wrong, and now you have to find out what happened and stop it before it's too late.</p>
|
10 |
-
<p>The gameplay of Create 017 is challenging and fun. You will have to use your phone as a hacking device, which can interact with various objects in the game world. You can hack cameras, doors, robots, drones, and more. You can also use your phone as a scanner, which can reveal hidden information and clues. You will have to use your logic and intuition to solve puzzles that require different types of hacking. You will also have to avoid or fight enemies that will try to stop you.</p>
|
11 |
-
<p>How to install create 017 apk on android<br />
|
12 |
-
Create 017 apk latest version free download<br />
|
13 |
-
Create 017 apk mod unlimited money and gems<br />
|
14 |
-
Create 017 apk gameplay and review<br />
|
15 |
-
Create 017 apk offline mode and multiplayer<br />
|
16 |
-
Create 017 apk download for pc windows 10<br />
|
17 |
-
Create 017 apk hack and cheats<br />
|
18 |
-
Create 017 apk update and new features<br />
|
19 |
-
Create 017 apk size and requirements<br />
|
20 |
-
Create 017 apk best tips and tricks<br />
|
21 |
-
Create 017 apk download link and qr code<br />
|
22 |
-
Create 017 apk error and fix<br />
|
23 |
-
Create 017 apk alternatives and similar apps<br />
|
24 |
-
Create 017 apk rating and feedback<br />
|
25 |
-
Create 017 apk developer and contact<br />
|
26 |
-
Create 017 apk tutorial and guide<br />
|
27 |
-
Create 017 apk comparison and benchmark<br />
|
28 |
-
Create 017 apk awards and achievements<br />
|
29 |
-
Create 017 apk news and events<br />
|
30 |
-
Create 017 apk fan art and wallpapers<br />
|
31 |
-
Create 017 apk fun facts and trivia<br />
|
32 |
-
Create 017 apk challenges and missions<br />
|
33 |
-
Create 017 apk secrets and easter eggs<br />
|
34 |
-
Create 017 apk memes and jokes<br />
|
35 |
-
Create 017 apk community and forum<br />
|
36 |
-
Create 017 apk wiki and database<br />
|
37 |
-
Create 017 apk support and faq<br />
|
38 |
-
Create 017 apk beta and test version<br />
|
39 |
-
Create 017 apk release date and countdown<br />
|
40 |
-
Create 017 apk trailer and teaser<br />
|
41 |
-
Create 017 apk genre and category<br />
|
42 |
-
Create 017 apk languages and subtitles<br />
|
43 |
-
Create 017 apk customization and settings<br />
|
44 |
-
Create 017 apk characters and skills<br />
|
45 |
-
Create 017 apk weapons and items<br />
|
46 |
-
Create 017 apk maps and locations<br />
|
47 |
-
Create 017 apk enemies and bosses<br />
|
48 |
-
Create 017 apk modes and levels<br />
|
49 |
-
Create 017 apk strategies and tactics<br />
|
50 |
-
Create 017 apk codes and vouchers<br />
|
51 |
-
Create 017 apk themes and sounds<br />
|
52 |
-
Create 017 apk bugs and glitches<br />
|
53 |
-
Create 017 apk backup and restore<br />
|
54 |
-
Create 017 apk security and privacy<br />
|
55 |
-
Create 017 apk compatibility and performance<br />
|
56 |
-
Create 017 apk referral and invite friends<br />
|
57 |
-
Create 017 apk donations and premium features<br />
|
58 |
-
Create 017 apk history and versions<br />
|
59 |
-
Create 017 apk source code and license</p>
|
60 |
-
<h3>The graphics and the sound of Create 017</h3>
|
61 |
-
<p>The game has impressive graphics that create a realistic and immersive atmosphere. The game uses realistic lighting, shadows, textures, and animations. The game also has dynamic weather effects, such as rain, snow, fog, and wind. The game has different levels that have different themes and styles. You will see a contrast between the futuristic cityscape and the natural landscapes.</p>
|
62 |
-
<p>The game also has amazing sound effects that enhance the gameplay experience. The game has realistic sounds of hacking, explosions, gunfire, alarms, and more. The game also has an original soundtrack that matches the mood and the tone of each level. The game also has voice acting that adds personality and emotion to the characters.</p>
|
63 |
-
<h2>How to download and install Create 017 APK?</h2>
|
64 |
-
<h3>The requirements and the compatibility of Create 017 APK</h3>
|
65 |
-
<p>Create 017 APK is an application file that allows you to install the game on your device without using any app store or platform. This means that you can enjoy the game without any restrictions or limitations. However, before you download and install Create 017 APK, you need to make sure that your device meets the following requirements:</p>
|
66 |
-
<ul>
|
67 |
-
<li>Your device must have Android version 4.4 or higher.</li>
|
68 |
-
<li>Your device must have at least 2 GB of RAM.</li>
|
69 |
-
<li>Your device must have at least 500 MB of free storage space.</li>
</ul>
<h3>The steps to download and install Create 017 APK</h3>
<p>Once your device meets these requirements, you need a third-party installer to get the game. The usual choice is TutuApp, which comes in a paid version and TutuApp Lite, which is a free version that offers fewer features. For this tutorial, we will use TutuApp Lite. Then follow these steps:</p>
<ol>
<li>Download the TutuApp Lite installer file from a trusted source.</li>
|
70 |
-
<li>Wait for the download to complete and then locate the TutuApp Lite file on your device. You can use a file manager app to find it in your downloads folder or any other location where you saved it.</li>
|
71 |
-
<li>Before you install the TutuApp Lite file, you need to trust the developer profile on your device. To do this, go to your device settings and then general. Find the option that says "Profiles & Device Management" and tap on it. You will see a list of profiles that are installed on your device. Find the one that belongs to TutuApp and tap on it. You will see a button that says "Trust" and tap on it. You may see a warning message that says trusting this profile may harm your device, but you can ignore it and proceed.</li>
|
72 |
-
<li>Tap on the TutuApp Lite file and follow the instructions on the screen to install it on your device. You may see a pop-up message that asks for your permission to access certain features or data on your device, such as notifications, location, contacts, etc. You need to grant these permissions for TutuApp to work properly.</li>
|
73 |
-
<li>Once the installation is done, you can launch TutuApp from your app drawer or home screen. You may need to create an account or log in with your existing account to use TutuApp. You may also need to update TutuApp to the latest version if there are any available updates.</li>
|
74 |
-
<li>In TutuApp, search for Create 017 and tap on the download button. You will see a page where you can choose the version of Create 017 that suits your device. You can choose between Create 017 Mod, which is a modified version that offers more features and Create 017 Original, which is the official version of the game.</li>
|
75 |
-
<li>Wait for the download to complete and then locate the Create 017 file on your device. You can use a file manager app to find it in your downloads folder or any other location where you saved it.</li>
|
76 |
-
<li>Tap on the Create 017 file and follow the instructions on the screen to install it on your device. You may see a pop-up message that asks for your permission to access certain features or data on your device, such as storage, camera, microphone, etc. You need to grant these permissions for the game to work properly.</li>
|
77 |
-
<li>Once the installation is done, you can launch the game from your app drawer or home screen. You may need to create an account or log in with your existing account to play the game. You may also need to update the game to the latest version if there are any available updates.</li>
|
78 |
-
</ol>
|
79 |
-
<h2>How to play Create 017?</h2>
|
80 |
-
<h3>The controls and the interface of Create 017</h3>
|
81 |
-
<p>Create 017 has simple and intuitive controls that make it easy to play. You can use your finger to swipe on the screen to move your character and look around. You can also use buttons on the screen to perform various actions, such as jumping, crouching, hacking, scanning, shooting, etc. You can also customize the controls according to your preference in the settings menu.</p>
|
82 |
-
<p>The game also has a user-friendly interface that shows you important information and options. You can see your health bar, ammo count, hacking progress, scanner results, and more on the top of the screen. You can also see a map that shows you your location and objectives on the bottom left of the screen. You can also access a menu that lets you pause, resume, save, load, quit, or change settings on the top right of the screen.</p>
|
83 |
-
<h3>The tips and the tricks to master Create 017</h3>
|
84 |
-
<p>Create 017 is a game that requires skill and strategy to complete. Here are some tips and tricks that can help you master the game:</p>
|
85 |
-
<ul>
|
86 |
-
<li>Use your hacking device wisely. Your hacking device is your main tool in the game, but it has limited battery life and range. You need to recharge it by finding power sources or using items. You also need to be close enough to hack an object or an enemy. You can hack different things for different purposes, such as opening doors, disabling cameras, controlling robots, etc.</li>
|
87 |
-
<li>Use your scanner frequently. Your scanner is another useful tool in the game that can help you find hidden information and clues. You can scan different things for different purposes, such as revealing passwords, codes, messages, etc. You can also scan enemies to learn their weaknesses, strengths, and behaviors. You can also scan the environment to find hidden paths, items, or secrets.</li>
|
88 |
-
<li>Use your weapons carefully. Your weapons are your last resort in the game, but they have limited ammo and accuracy. You need to reload them by finding ammo boxes or using items. You also need to aim well to hit your target. You can use different weapons for different situations, such as pistols, rifles, shotguns, grenades, etc.</li>
|
89 |
-
<li>Use your stealth skills. Your stealth skills are your best advantage in the game, as they can help you avoid or surprise enemies. You can use your stealth skills by crouching, hiding, sneaking, or using distractions. You can also use your hacking device or your scanner to create diversions or disable security systems. You can also use your weapons to perform silent kills or headshots.</li>
|
90 |
-
<li>Use your creativity and logic. Your creativity and logic are your key to solving puzzles and completing objectives in the game. You need to use your creativity and logic by combining different hacking methods, scanning results, items, weapons, and environmental elements. You also need to use your creativity and logic by finding alternative solutions, shortcuts, or secrets.</li>
|
91 |
-
</ul>
|
92 |
-
<h3>The challenges and the rewards of Create 017</h3>
|
93 |
-
<p>Create 017 is a game that offers many challenges and rewards for players who want to test their skills and have fun. Here are some of the challenges and rewards that you can expect from the game:</p>
|
94 |
-
<ul>
|
95 |
-
<li>The game has different difficulty levels that you can choose from according to your preference and skill level. The game also has different modes that you can play, such as story mode, arcade mode, or multiplayer mode.</li>
|
96 |
-
<li>The game has various achievements and trophies that you can unlock by completing certain tasks or goals in the game. The game also has leaderboards and rankings that you can compete with other players around the world.</li>
|
97 |
-
<li>The game has a lot of content and replay value that you can enjoy for a long time. The game has many levels that have different objectives, puzzles, enemies, and secrets. The game also has many items, weapons, upgrades, and customizations that you can collect and use.</li>
|
98 |
-
</ul>
|
99 |
-
<h2>Conclusion</h2>
|
100 |
-
<h3>A summary of the main points and a call to action</h3>
|
101 |
-
<p>Create 017 is a mobile game that you should definitely try if you are looking for a new and exciting gaming experience. The game has a captivating story, challenging gameplay, impressive graphics, amazing sound effects, and user-friendly controls. The game also has a lot of features, options, content, and rewards that will keep you entertained for hours. You can download and install Create 017 APK on your device easily by following the steps we have provided in this article. You can also play Create 017 like a pro by following the tips and tricks we have shared in this article. So what are you waiting for? Download Create 017 APK now and start hacking!</p>
|
102 |
-
<h2>FAQs</h2>
|
103 |
-
<p>Here are some of the frequently asked questions about Create 017:</p>
|
104 |
-
<ul>
|
105 |
-
<li>Q: Is Create 017 free to play?</li>
|
106 |
-
<li>A: Yes, Create 017 is free to play. However, the game may contain some optional in-app purchases that can enhance your gaming experience.</li>
|
107 |
-
<li>Q: Is Create 017 safe to download and install?</li>
|
108 |
-
<li>A: Yes, Create 017 is safe to download and install. The game does not contain any viruses or malware that can harm your device. However, you should always download and install the game from trusted sources only.</li>
|
109 |
-
<li>Q: Is Create 017 available for other platforms?</li>
|
110 |
-
<li>A: Yes, Create 017 is available for other platforms besides Android and iOS devices. The game is also compatible with Windows PC, Mac OS X, Linux, PlayStation 4, Xbox One, Nintendo Switch, and VR devices.</li>
|
111 |
-
<li>Q: How can I contact the developers of Create 017?</li>
|
112 |
-
<li>A: You can contact the developers of Create 017 through their official website or social media accounts. You can also send them an email at [email protected] or call them at +994-12-345-6789.</li>
|
113 |
-
<li>Q: How can I support the developers of Create 017?</li>
|
114 |
-
<li>A: You can support the developers of Create 017 by rating and reviewing the game on the app store or platform where you downloaded it. You can also share the game with your friends and family who might enjoy it. You can also donate to the developers through their official website or Patreon account.</li>
|
115 |
-
</ul></p>
|
116 |
-
<br />
|
117 |
-
<br />
|
spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py
DELETED
@@ -1,2223 +0,0 @@
|
|
1 |
-
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
2 |
-
# Copyright 2022 The HuggingFace Team. All rights reserved.
|
3 |
-
#
|
4 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
-
# you may not use this file except in compliance with the License.
|
6 |
-
# You may obtain a copy of the License at
|
7 |
-
#
|
8 |
-
# http://www.apache.org/licenses/LICENSE-2.0
|
9 |
-
#
|
10 |
-
# Unless required by applicable law or agreed to in writing, software
|
11 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
-
# See the License for the specific language governing permissions and
|
14 |
-
# limitations under the License.
|
15 |
-
import numpy as np
|
16 |
-
import paddle
|
17 |
-
from paddle import nn
|
18 |
-
from paddle.distributed.fleet.utils import recompute
|
19 |
-
|
20 |
-
from .attention import AttentionBlock, DualTransformer2DModel, Transformer2DModel
|
21 |
-
from .cross_attention import CrossAttention, CrossAttnAddedKVProcessor
|
22 |
-
from .resnet import (
|
23 |
-
Downsample2D,
|
24 |
-
FirDownsample2D,
|
25 |
-
FirUpsample2D,
|
26 |
-
ResnetBlock2D,
|
27 |
-
Upsample2D,
|
28 |
-
)
|
29 |
-
|
30 |
-
|
31 |
-
def get_down_block(
|
32 |
-
down_block_type,
|
33 |
-
num_layers,
|
34 |
-
in_channels,
|
35 |
-
out_channels,
|
36 |
-
temb_channels,
|
37 |
-
add_downsample,
|
38 |
-
resnet_eps,
|
39 |
-
resnet_act_fn,
|
40 |
-
attn_num_head_channels,
|
41 |
-
resnet_groups=None,
|
42 |
-
cross_attention_dim=None,
|
43 |
-
downsample_padding=None,
|
44 |
-
dual_cross_attention=False,
|
45 |
-
use_linear_projection=False,
|
46 |
-
only_cross_attention=False,
|
47 |
-
upcast_attention=False,
|
48 |
-
resnet_time_scale_shift="default",
|
49 |
-
):
|
50 |
-
down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type
|
51 |
-
if down_block_type == "DownBlock2D":
|
52 |
-
return DownBlock2D(
|
53 |
-
num_layers=num_layers,
|
54 |
-
in_channels=in_channels,
|
55 |
-
out_channels=out_channels,
|
56 |
-
temb_channels=temb_channels,
|
57 |
-
add_downsample=add_downsample,
|
58 |
-
resnet_eps=resnet_eps,
|
59 |
-
resnet_act_fn=resnet_act_fn,
|
60 |
-
resnet_groups=resnet_groups,
|
61 |
-
downsample_padding=downsample_padding,
|
62 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
63 |
-
)
|
64 |
-
elif down_block_type == "ResnetDownsampleBlock2D":
|
65 |
-
return ResnetDownsampleBlock2D(
|
66 |
-
num_layers=num_layers,
|
67 |
-
in_channels=in_channels,
|
68 |
-
out_channels=out_channels,
|
69 |
-
temb_channels=temb_channels,
|
70 |
-
add_downsample=add_downsample,
|
71 |
-
resnet_eps=resnet_eps,
|
72 |
-
resnet_act_fn=resnet_act_fn,
|
73 |
-
resnet_groups=resnet_groups,
|
74 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
75 |
-
)
|
76 |
-
elif down_block_type == "AttnDownBlock2D":
|
77 |
-
return AttnDownBlock2D(
|
78 |
-
num_layers=num_layers,
|
79 |
-
in_channels=in_channels,
|
80 |
-
out_channels=out_channels,
|
81 |
-
temb_channels=temb_channels,
|
82 |
-
add_downsample=add_downsample,
|
83 |
-
resnet_eps=resnet_eps,
|
84 |
-
resnet_act_fn=resnet_act_fn,
|
85 |
-
resnet_groups=resnet_groups,
|
86 |
-
downsample_padding=downsample_padding,
|
87 |
-
attn_num_head_channels=attn_num_head_channels,
|
88 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
89 |
-
)
|
90 |
-
elif down_block_type == "CrossAttnDownBlock2D":
|
91 |
-
if cross_attention_dim is None:
|
92 |
-
raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D")
|
93 |
-
return CrossAttnDownBlock2D(
|
94 |
-
num_layers=num_layers,
|
95 |
-
in_channels=in_channels,
|
96 |
-
out_channels=out_channels,
|
97 |
-
temb_channels=temb_channels,
|
98 |
-
add_downsample=add_downsample,
|
99 |
-
resnet_eps=resnet_eps,
|
100 |
-
resnet_act_fn=resnet_act_fn,
|
101 |
-
resnet_groups=resnet_groups,
|
102 |
-
downsample_padding=downsample_padding,
|
103 |
-
cross_attention_dim=cross_attention_dim,
|
104 |
-
attn_num_head_channels=attn_num_head_channels,
|
105 |
-
dual_cross_attention=dual_cross_attention,
|
106 |
-
use_linear_projection=use_linear_projection,
|
107 |
-
only_cross_attention=only_cross_attention,
|
108 |
-
upcast_attention=upcast_attention,
|
109 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
110 |
-
)
|
111 |
-
elif down_block_type == "SimpleCrossAttnDownBlock2D":
|
112 |
-
if cross_attention_dim is None:
|
113 |
-
raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnDownBlock2D")
|
114 |
-
return SimpleCrossAttnDownBlock2D(
|
115 |
-
num_layers=num_layers,
|
116 |
-
in_channels=in_channels,
|
117 |
-
out_channels=out_channels,
|
118 |
-
temb_channels=temb_channels,
|
119 |
-
add_downsample=add_downsample,
|
120 |
-
resnet_eps=resnet_eps,
|
121 |
-
resnet_act_fn=resnet_act_fn,
|
122 |
-
resnet_groups=resnet_groups,
|
123 |
-
cross_attention_dim=cross_attention_dim,
|
124 |
-
attn_num_head_channels=attn_num_head_channels,
|
125 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
126 |
-
)
|
127 |
-
elif down_block_type == "SkipDownBlock2D":
|
128 |
-
return SkipDownBlock2D(
|
129 |
-
num_layers=num_layers,
|
130 |
-
in_channels=in_channels,
|
131 |
-
out_channels=out_channels,
|
132 |
-
temb_channels=temb_channels,
|
133 |
-
add_downsample=add_downsample,
|
134 |
-
resnet_eps=resnet_eps,
|
135 |
-
resnet_act_fn=resnet_act_fn,
|
136 |
-
downsample_padding=downsample_padding,
|
137 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
138 |
-
)
|
139 |
-
elif down_block_type == "AttnSkipDownBlock2D":
|
140 |
-
return AttnSkipDownBlock2D(
|
141 |
-
num_layers=num_layers,
|
142 |
-
in_channels=in_channels,
|
143 |
-
out_channels=out_channels,
|
144 |
-
temb_channels=temb_channels,
|
145 |
-
add_downsample=add_downsample,
|
146 |
-
resnet_eps=resnet_eps,
|
147 |
-
resnet_act_fn=resnet_act_fn,
|
148 |
-
downsample_padding=downsample_padding,
|
149 |
-
attn_num_head_channels=attn_num_head_channels,
|
150 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
151 |
-
)
|
152 |
-
elif down_block_type == "DownEncoderBlock2D":
|
153 |
-
return DownEncoderBlock2D(
|
154 |
-
num_layers=num_layers,
|
155 |
-
in_channels=in_channels,
|
156 |
-
out_channels=out_channels,
|
157 |
-
add_downsample=add_downsample,
|
158 |
-
resnet_eps=resnet_eps,
|
159 |
-
resnet_act_fn=resnet_act_fn,
|
160 |
-
resnet_groups=resnet_groups,
|
161 |
-
downsample_padding=downsample_padding,
|
162 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
163 |
-
)
|
164 |
-
elif down_block_type == "AttnDownEncoderBlock2D":
|
165 |
-
return AttnDownEncoderBlock2D(
|
166 |
-
num_layers=num_layers,
|
167 |
-
in_channels=in_channels,
|
168 |
-
out_channels=out_channels,
|
169 |
-
add_downsample=add_downsample,
|
170 |
-
resnet_eps=resnet_eps,
|
171 |
-
resnet_act_fn=resnet_act_fn,
|
172 |
-
resnet_groups=resnet_groups,
|
173 |
-
downsample_padding=downsample_padding,
|
174 |
-
attn_num_head_channels=attn_num_head_channels,
|
175 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
176 |
-
)
|
177 |
-
raise ValueError(f"{down_block_type} does not exist.")
|
178 |
-
|
179 |
-
|
180 |
-
def get_up_block(
|
181 |
-
up_block_type,
|
182 |
-
num_layers,
|
183 |
-
in_channels,
|
184 |
-
out_channels,
|
185 |
-
prev_output_channel,
|
186 |
-
temb_channels,
|
187 |
-
add_upsample,
|
188 |
-
resnet_eps,
|
189 |
-
resnet_act_fn,
|
190 |
-
attn_num_head_channels,
|
191 |
-
resnet_groups=None,
|
192 |
-
cross_attention_dim=None,
|
193 |
-
dual_cross_attention=False,
|
194 |
-
use_linear_projection=False,
|
195 |
-
only_cross_attention=False,
|
196 |
-
upcast_attention=False,
|
197 |
-
resnet_time_scale_shift="default",
|
198 |
-
):
|
199 |
-
up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type
|
200 |
-
if up_block_type == "UpBlock2D":
|
201 |
-
return UpBlock2D(
|
202 |
-
num_layers=num_layers,
|
203 |
-
in_channels=in_channels,
|
204 |
-
out_channels=out_channels,
|
205 |
-
prev_output_channel=prev_output_channel,
|
206 |
-
temb_channels=temb_channels,
|
207 |
-
add_upsample=add_upsample,
|
208 |
-
resnet_eps=resnet_eps,
|
209 |
-
resnet_act_fn=resnet_act_fn,
|
210 |
-
resnet_groups=resnet_groups,
|
211 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
212 |
-
)
|
213 |
-
elif up_block_type == "ResnetUpsampleBlock2D":
|
214 |
-
return ResnetUpsampleBlock2D(
|
215 |
-
num_layers=num_layers,
|
216 |
-
in_channels=in_channels,
|
217 |
-
out_channels=out_channels,
|
218 |
-
prev_output_channel=prev_output_channel,
|
219 |
-
temb_channels=temb_channels,
|
220 |
-
add_upsample=add_upsample,
|
221 |
-
resnet_eps=resnet_eps,
|
222 |
-
resnet_act_fn=resnet_act_fn,
|
223 |
-
resnet_groups=resnet_groups,
|
224 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
225 |
-
)
|
226 |
-
elif up_block_type == "CrossAttnUpBlock2D":
|
227 |
-
if cross_attention_dim is None:
|
228 |
-
raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock2D")
|
229 |
-
return CrossAttnUpBlock2D(
|
230 |
-
num_layers=num_layers,
|
231 |
-
in_channels=in_channels,
|
232 |
-
out_channels=out_channels,
|
233 |
-
prev_output_channel=prev_output_channel,
|
234 |
-
temb_channels=temb_channels,
|
235 |
-
add_upsample=add_upsample,
|
236 |
-
resnet_eps=resnet_eps,
|
237 |
-
resnet_act_fn=resnet_act_fn,
|
238 |
-
resnet_groups=resnet_groups,
|
239 |
-
cross_attention_dim=cross_attention_dim,
|
240 |
-
attn_num_head_channels=attn_num_head_channels,
|
241 |
-
dual_cross_attention=dual_cross_attention,
|
242 |
-
use_linear_projection=use_linear_projection,
|
243 |
-
only_cross_attention=only_cross_attention,
|
244 |
-
upcast_attention=upcast_attention,
|
245 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
246 |
-
)
|
247 |
-
elif up_block_type == "SimpleCrossAttnUpBlock2D":
|
248 |
-
if cross_attention_dim is None:
|
249 |
-
raise ValueError("cross_attention_dim must be specified for SimpleCrossAttnUpBlock2D")
|
250 |
-
return SimpleCrossAttnUpBlock2D(
|
251 |
-
num_layers=num_layers,
|
252 |
-
in_channels=in_channels,
|
253 |
-
out_channels=out_channels,
|
254 |
-
prev_output_channel=prev_output_channel,
|
255 |
-
temb_channels=temb_channels,
|
256 |
-
add_upsample=add_upsample,
|
257 |
-
resnet_eps=resnet_eps,
|
258 |
-
resnet_act_fn=resnet_act_fn,
|
259 |
-
resnet_groups=resnet_groups,
|
260 |
-
cross_attention_dim=cross_attention_dim,
|
261 |
-
attn_num_head_channels=attn_num_head_channels,
|
262 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
263 |
-
)
|
264 |
-
elif up_block_type == "AttnUpBlock2D":
|
265 |
-
return AttnUpBlock2D(
|
266 |
-
num_layers=num_layers,
|
267 |
-
in_channels=in_channels,
|
268 |
-
out_channels=out_channels,
|
269 |
-
prev_output_channel=prev_output_channel,
|
270 |
-
temb_channels=temb_channels,
|
271 |
-
add_upsample=add_upsample,
|
272 |
-
resnet_eps=resnet_eps,
|
273 |
-
resnet_act_fn=resnet_act_fn,
|
274 |
-
resnet_groups=resnet_groups,
|
275 |
-
attn_num_head_channels=attn_num_head_channels,
|
276 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
277 |
-
)
|
278 |
-
elif up_block_type == "SkipUpBlock2D":
|
279 |
-
return SkipUpBlock2D(
|
280 |
-
num_layers=num_layers,
|
281 |
-
in_channels=in_channels,
|
282 |
-
out_channels=out_channels,
|
283 |
-
prev_output_channel=prev_output_channel,
|
284 |
-
temb_channels=temb_channels,
|
285 |
-
add_upsample=add_upsample,
|
286 |
-
resnet_eps=resnet_eps,
|
287 |
-
resnet_act_fn=resnet_act_fn,
|
288 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
289 |
-
)
|
290 |
-
elif up_block_type == "AttnSkipUpBlock2D":
|
291 |
-
return AttnSkipUpBlock2D(
|
292 |
-
num_layers=num_layers,
|
293 |
-
in_channels=in_channels,
|
294 |
-
out_channels=out_channels,
|
295 |
-
prev_output_channel=prev_output_channel,
|
296 |
-
temb_channels=temb_channels,
|
297 |
-
add_upsample=add_upsample,
|
298 |
-
resnet_eps=resnet_eps,
|
299 |
-
resnet_act_fn=resnet_act_fn,
|
300 |
-
attn_num_head_channels=attn_num_head_channels,
|
301 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
302 |
-
)
|
303 |
-
elif up_block_type == "UpDecoderBlock2D":
|
304 |
-
return UpDecoderBlock2D(
|
305 |
-
num_layers=num_layers,
|
306 |
-
in_channels=in_channels,
|
307 |
-
out_channels=out_channels,
|
308 |
-
add_upsample=add_upsample,
|
309 |
-
resnet_eps=resnet_eps,
|
310 |
-
resnet_act_fn=resnet_act_fn,
|
311 |
-
resnet_groups=resnet_groups,
|
312 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
313 |
-
)
|
314 |
-
elif up_block_type == "AttnUpDecoderBlock2D":
|
315 |
-
return AttnUpDecoderBlock2D(
|
316 |
-
num_layers=num_layers,
|
317 |
-
in_channels=in_channels,
|
318 |
-
out_channels=out_channels,
|
319 |
-
add_upsample=add_upsample,
|
320 |
-
resnet_eps=resnet_eps,
|
321 |
-
resnet_act_fn=resnet_act_fn,
|
322 |
-
resnet_groups=resnet_groups,
|
323 |
-
attn_num_head_channels=attn_num_head_channels,
|
324 |
-
resnet_time_scale_shift=resnet_time_scale_shift,
|
325 |
-
)
|
326 |
-
raise ValueError(f"{up_block_type} does not exist.")
|
327 |
-
|
328 |
-
|
329 |
-
class UNetMidBlock2D(nn.Layer):
|
330 |
-
def __init__(
|
331 |
-
self,
|
332 |
-
in_channels: int,
|
333 |
-
temb_channels: int,
|
334 |
-
dropout: float = 0.0,
|
335 |
-
num_layers: int = 1,
|
336 |
-
resnet_eps: float = 1e-6,
|
337 |
-
resnet_time_scale_shift: str = "default",
|
338 |
-
resnet_act_fn: str = "swish",
|
339 |
-
resnet_groups: int = 32,
|
340 |
-
resnet_pre_norm: bool = True,
|
341 |
-
add_attention: bool = True,
|
342 |
-
attn_num_head_channels=1,
|
343 |
-
output_scale_factor=1.0,
|
344 |
-
):
|
345 |
-
super().__init__()
|
346 |
-
|
347 |
-
resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
|
348 |
-
self.add_attention = add_attention
|
349 |
-
|
350 |
-
# there is always at least one resnet
|
351 |
-
resnets = [
|
352 |
-
ResnetBlock2D(
|
353 |
-
in_channels=in_channels,
|
354 |
-
out_channels=in_channels,
|
355 |
-
temb_channels=temb_channels,
|
356 |
-
eps=resnet_eps,
|
357 |
-
groups=resnet_groups,
|
358 |
-
dropout=dropout,
|
359 |
-
time_embedding_norm=resnet_time_scale_shift,
|
360 |
-
non_linearity=resnet_act_fn,
|
361 |
-
output_scale_factor=output_scale_factor,
|
362 |
-
pre_norm=resnet_pre_norm,
|
363 |
-
)
|
364 |
-
]
|
365 |
-
attentions = []
|
366 |
-
|
367 |
-
for _ in range(num_layers):
|
368 |
-
if self.add_attention:
|
369 |
-
attentions.append(
|
370 |
-
AttentionBlock(
|
371 |
-
in_channels,
|
372 |
-
num_head_channels=attn_num_head_channels,
|
373 |
-
rescale_output_factor=output_scale_factor,
|
374 |
-
eps=resnet_eps,
|
375 |
-
norm_num_groups=resnet_groups,
|
376 |
-
)
|
377 |
-
)
|
378 |
-
else:
|
379 |
-
attentions.append(None)
|
380 |
-
|
381 |
-
resnets.append(
|
382 |
-
ResnetBlock2D(
|
383 |
-
in_channels=in_channels,
|
384 |
-
out_channels=in_channels,
|
385 |
-
temb_channels=temb_channels,
|
386 |
-
eps=resnet_eps,
|
387 |
-
groups=resnet_groups,
|
388 |
-
dropout=dropout,
|
389 |
-
time_embedding_norm=resnet_time_scale_shift,
|
390 |
-
non_linearity=resnet_act_fn,
|
391 |
-
output_scale_factor=output_scale_factor,
|
392 |
-
pre_norm=resnet_pre_norm,
|
393 |
-
)
|
394 |
-
)
|
395 |
-
|
396 |
-
self.attentions = nn.LayerList(attentions)
|
397 |
-
self.resnets = nn.LayerList(resnets)
|
398 |
-
|
399 |
-
def forward(self, hidden_states, temb=None):
|
400 |
-
hidden_states = self.resnets[0](hidden_states, temb)
|
401 |
-
for attn, resnet in zip(self.attentions, self.resnets[1:]):
|
402 |
-
if attn is not None:
|
403 |
-
hidden_states = attn(hidden_states)
|
404 |
-
hidden_states = resnet(hidden_states, temb)
|
405 |
-
|
406 |
-
return hidden_states
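# --- Added note (not part of the original file) ----------------------------
# The mid block keeps both the channel count and the spatial size unchanged:
# it is resnet -> (attention -> resnet) repeated num_layers times on an
# (N, C, H, W) feature map. A small smoke test with assumed sizes:
#
#     import paddle
#     block = UNetMidBlock2D(in_channels=64, temb_channels=128)
#     x = paddle.randn([1, 64, 32, 32])
#     temb = paddle.randn([1, 128])
#     y = block(x, temb)    # y has the same shape as x: [1, 64, 32, 32]
# ----------------------------------------------------------------------------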
|
407 |
-
|
408 |
-
|
409 |
-
class UNetMidBlock2DCrossAttn(nn.Layer):
|
410 |
-
def __init__(
|
411 |
-
self,
|
412 |
-
in_channels: int,
|
413 |
-
temb_channels: int,
|
414 |
-
dropout: float = 0.0,
|
415 |
-
num_layers: int = 1,
|
416 |
-
resnet_eps: float = 1e-6,
|
417 |
-
resnet_time_scale_shift: str = "default",
|
418 |
-
resnet_act_fn: str = "swish",
|
419 |
-
resnet_groups: int = 32,
|
420 |
-
resnet_pre_norm: bool = True,
|
421 |
-
attn_num_head_channels=1,
|
422 |
-
output_scale_factor=1.0,
|
423 |
-
cross_attention_dim=1280,
|
424 |
-
dual_cross_attention=False,
|
425 |
-
use_linear_projection=False,
|
426 |
-
only_cross_attention=False,
|
427 |
-
upcast_attention=False,
|
428 |
-
):
|
429 |
-
super().__init__()
|
430 |
-
|
431 |
-
self.has_cross_attention = True
|
432 |
-
self.attn_num_head_channels = attn_num_head_channels
|
433 |
-
resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
|
434 |
-
|
435 |
-
# there is always at least one resnet
|
436 |
-
resnets = [
|
437 |
-
ResnetBlock2D(
|
438 |
-
in_channels=in_channels,
|
439 |
-
out_channels=in_channels,
|
440 |
-
temb_channels=temb_channels,
|
441 |
-
eps=resnet_eps,
|
442 |
-
groups=resnet_groups,
|
443 |
-
dropout=dropout,
|
444 |
-
time_embedding_norm=resnet_time_scale_shift,
|
445 |
-
non_linearity=resnet_act_fn,
|
446 |
-
output_scale_factor=output_scale_factor,
|
447 |
-
pre_norm=resnet_pre_norm,
|
448 |
-
)
|
449 |
-
]
|
450 |
-
attentions = []
|
451 |
-
|
452 |
-
for _ in range(num_layers):
|
453 |
-
if not dual_cross_attention:
|
454 |
-
attentions.append(
|
455 |
-
Transformer2DModel(
|
456 |
-
attn_num_head_channels,
|
457 |
-
in_channels // attn_num_head_channels,
|
458 |
-
in_channels=in_channels,
|
459 |
-
num_layers=1,
|
460 |
-
cross_attention_dim=cross_attention_dim,
|
461 |
-
norm_num_groups=resnet_groups,
|
462 |
-
use_linear_projection=use_linear_projection,
|
463 |
-
only_cross_attention=only_cross_attention,
|
464 |
-
upcast_attention=upcast_attention,
|
465 |
-
)
|
466 |
-
)
|
467 |
-
else:
|
468 |
-
attentions.append(
|
469 |
-
DualTransformer2DModel(
|
470 |
-
attn_num_head_channels,
|
471 |
-
in_channels // attn_num_head_channels,
|
472 |
-
in_channels=in_channels,
|
473 |
-
num_layers=1,
|
474 |
-
cross_attention_dim=cross_attention_dim,
|
475 |
-
norm_num_groups=resnet_groups,
|
476 |
-
)
|
477 |
-
)
|
478 |
-
resnets.append(
|
479 |
-
ResnetBlock2D(
|
480 |
-
in_channels=in_channels,
|
481 |
-
out_channels=in_channels,
|
482 |
-
temb_channels=temb_channels,
|
483 |
-
eps=resnet_eps,
|
484 |
-
groups=resnet_groups,
|
485 |
-
dropout=dropout,
|
486 |
-
time_embedding_norm=resnet_time_scale_shift,
|
487 |
-
non_linearity=resnet_act_fn,
|
488 |
-
output_scale_factor=output_scale_factor,
|
489 |
-
pre_norm=resnet_pre_norm,
|
490 |
-
)
|
491 |
-
)
|
492 |
-
|
493 |
-
self.attentions = nn.LayerList(attentions)
|
494 |
-
self.resnets = nn.LayerList(resnets)
|
495 |
-
|
496 |
-
def forward(
|
497 |
-
self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None
|
498 |
-
):
|
499 |
-
# TODO(Patrick, William) - attention_mask is currently not used. Implement once used
|
500 |
-
hidden_states = self.resnets[0](hidden_states, temb)
|
501 |
-
for attn, resnet in zip(self.attentions, self.resnets[1:]):
|
502 |
-
hidden_states = attn(
|
503 |
-
hidden_states,
|
504 |
-
encoder_hidden_states=encoder_hidden_states,
|
505 |
-
cross_attention_kwargs=cross_attention_kwargs,
|
506 |
-
).sample
|
507 |
-
hidden_states = resnet(hidden_states, temb)
|
508 |
-
|
509 |
-
return hidden_states
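# --- Added note (not part of the original file) ----------------------------
# encoder_hidden_states is the conditioning sequence, typically text-encoder
# features of shape [N, seq_len, cross_attention_dim]; each Transformer2DModel
# above cross-attends from the image features to it before the next resnet.
# ----------------------------------------------------------------------------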
|
510 |
-
|
511 |
-
|
512 |
-
class UNetMidBlock2DSimpleCrossAttn(nn.Layer):
|
513 |
-
def __init__(
|
514 |
-
self,
|
515 |
-
in_channels: int,
|
516 |
-
temb_channels: int,
|
517 |
-
dropout: float = 0.0,
|
518 |
-
num_layers: int = 1,
|
519 |
-
resnet_eps: float = 1e-6,
|
520 |
-
resnet_time_scale_shift: str = "default",
|
521 |
-
resnet_act_fn: str = "swish",
|
522 |
-
resnet_groups: int = 32,
|
523 |
-
resnet_pre_norm: bool = True,
|
524 |
-
attn_num_head_channels=1,
|
525 |
-
output_scale_factor=1.0,
|
526 |
-
cross_attention_dim=1280,
|
527 |
-
):
|
528 |
-
super().__init__()
|
529 |
-
|
530 |
-
self.has_cross_attention = True
|
531 |
-
|
532 |
-
self.attn_num_head_channels = attn_num_head_channels
|
533 |
-
resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32)
|
534 |
-
|
535 |
-
self.num_heads = in_channels // self.attn_num_head_channels
|
536 |
-
|
537 |
-
# there is always at least one resnet
|
538 |
-
resnets = [
|
539 |
-
ResnetBlock2D(
|
540 |
-
in_channels=in_channels,
|
541 |
-
out_channels=in_channels,
|
542 |
-
temb_channels=temb_channels,
|
543 |
-
eps=resnet_eps,
|
544 |
-
groups=resnet_groups,
|
545 |
-
dropout=dropout,
|
546 |
-
time_embedding_norm=resnet_time_scale_shift,
|
547 |
-
non_linearity=resnet_act_fn,
|
548 |
-
output_scale_factor=output_scale_factor,
|
549 |
-
pre_norm=resnet_pre_norm,
|
550 |
-
)
|
551 |
-
]
|
552 |
-
attentions = []
|
553 |
-
|
554 |
-
for _ in range(num_layers):
|
555 |
-
attentions.append(
|
556 |
-
CrossAttention(
|
557 |
-
query_dim=in_channels,
|
558 |
-
cross_attention_dim=in_channels,
|
559 |
-
heads=self.num_heads,
|
560 |
-
dim_head=attn_num_head_channels,
|
561 |
-
added_kv_proj_dim=cross_attention_dim,
|
562 |
-
norm_num_groups=resnet_groups,
|
563 |
-
bias=True,
|
564 |
-
upcast_softmax=True,
|
565 |
-
processor=CrossAttnAddedKVProcessor(),
|
566 |
-
)
|
567 |
-
)
|
568 |
-
resnets.append(
|
569 |
-
ResnetBlock2D(
|
570 |
-
in_channels=in_channels,
|
571 |
-
out_channels=in_channels,
|
572 |
-
temb_channels=temb_channels,
|
573 |
-
eps=resnet_eps,
|
574 |
-
groups=resnet_groups,
|
575 |
-
dropout=dropout,
|
576 |
-
time_embedding_norm=resnet_time_scale_shift,
|
577 |
-
non_linearity=resnet_act_fn,
|
578 |
-
output_scale_factor=output_scale_factor,
|
579 |
-
pre_norm=resnet_pre_norm,
|
580 |
-
)
|
581 |
-
)
|
582 |
-
|
583 |
-
self.attentions = nn.LayerList(attentions)
|
584 |
-
self.resnets = nn.LayerList(resnets)
|
585 |
-
|
586 |
-
def set_attention_slice(self, slice_size):
|
587 |
-
head_dims = self.attn_num_head_channels
|
588 |
-
head_dims = [head_dims] if isinstance(head_dims, int) else head_dims
|
589 |
-
if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims):
|
590 |
-
raise ValueError(
|
591 |
-
spaces/1toTree/lora_test/ppdiffusers/models/unet_2d_blocks.py (deleted; raw diff of old lines 592-2223)

The removed span is the remainder of the ppdiffusers UNet block module, written against Paddle (nn.Layer, nn.LayerList, paddle.concat, recompute). The recoverable content of the deleted code:

- Tail of the preceding mid-block class: set_attention_slice validates that slice_size is a common divisor of, and no larger than, the head counts in head_dims (raising ValueError otherwise), then forwards the value to every attention layer via attn._set_attention_slice(slice_size). Its forward runs resnets[0] once, then alternates each attention layer (called with encoder_hidden_states, attention_mask and **cross_attention_kwargs) with the remaining resnets.
- Down blocks: AttnDownBlock2D, CrossAttnDownBlock2D, DownBlock2D, DownEncoderBlock2D, AttnDownEncoderBlock2D, AttnSkipDownBlock2D, SkipDownBlock2D, ResnetDownsampleBlock2D and SimpleCrossAttnDownBlock2D. Each stacks num_layers ResnetBlock2D layers, optionally paired per layer with AttentionBlock, Transformer2DModel / DualTransformer2DModel, or added-KV CrossAttention, and ends with an optional downsampler: a Downsample2D named "op", a stride-down ResnetBlock2D (down=True), or, for the skip variants, a FirDownsample2D plus a 3-channel skip_conv. forward collects every intermediate activation into output_states and returns (hidden_states, output_states); the skip variants also return the updated skip_sample. The encoder variants (DownEncoderBlock2D, AttnDownEncoderBlock2D) take no time embedding.
- Up blocks: AttnUpBlock2D, CrossAttnUpBlock2D, UpBlock2D, UpDecoderBlock2D, AttnUpDecoderBlock2D, AttnSkipUpBlock2D, SkipUpBlock2D, ResnetUpsampleBlock2D and SimpleCrossAttnUpBlock2D. Each iteration pops the newest residual from res_hidden_states_tuple, concatenates it to hidden_states with paddle.concat(..., axis=1), runs the resnet (plus attention in the attn/cross-attn variants), and finishes with an optional Upsample2D, FirUpsample2D, or up-sampling ResnetBlock2D (up=True); the skip variants additionally build a skip_sample through skip_norm, Silu and skip_conv.
- Gradient checkpointing: the trainable blocks expose a gradient_checkpointing flag; when self.training and self.gradient_checkpointing is set, each sub-layer is wrapped in create_custom_forward and executed through recompute(...) instead of being called directly.

Short illustrative sketches of these shared patterns follow.
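Below, a minimal, self-contained sketch of the down-block control flow those classes share (resnet, then attention, collect each intermediate as a skip state, then downsample). TinyResnetBlock, TinySelfAttention and TinyAttnDownBlock are illustrative stand-ins written against plain paddle.nn; they are not the removed ppdiffusers classes, which carry many more options (time-embedding norms, FIR kernels, sliced attention, gradient checkpointing).

import paddle
import paddle.nn as nn


class TinyResnetBlock(nn.Layer):
    # Stand-in for ResnetBlock2D: norm -> act -> conv, plus a 1x1 shortcut.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.norm = nn.GroupNorm(num_groups=min(in_channels // 4, 32), num_channels=in_channels)
        self.act = nn.Silu()
        self.conv = nn.Conv2D(in_channels, out_channels, kernel_size=3, padding=1)
        self.shortcut = nn.Conv2D(in_channels, out_channels, kernel_size=1)

    def forward(self, x, temb=None):  # temb is ignored in this sketch
        return self.shortcut(x) + self.conv(self.act(self.norm(x)))


class TinySelfAttention(nn.Layer):
    # Stand-in for AttentionBlock: spatial self-attention with a residual connection.
    def __init__(self, channels, num_heads=1):
        super().__init__()
        self.norm = nn.GroupNorm(num_groups=min(channels // 4, 32), num_channels=channels)
        self.attn = nn.MultiHeadAttention(embed_dim=channels, num_heads=num_heads)

    def forward(self, x):
        b, c, h, w = x.shape
        flat = self.norm(x).reshape([b, c, h * w]).transpose([0, 2, 1])  # (b, h*w, c)
        out = self.attn(flat, flat, flat)
        return x + out.transpose([0, 2, 1]).reshape([b, c, h, w])


class TinyAttnDownBlock(nn.Layer):
    # Mirrors AttnDownBlock2D's forward: resnet -> attention per layer,
    # every intermediate is kept as a skip state, then an optional downsample.
    def __init__(self, in_channels, out_channels, num_layers=2, add_downsample=True):
        super().__init__()
        self.resnets = nn.LayerList(
            [TinyResnetBlock(in_channels if i == 0 else out_channels, out_channels) for i in range(num_layers)]
        )
        self.attentions = nn.LayerList([TinySelfAttention(out_channels) for _ in range(num_layers)])
        self.downsampler = (
            nn.Conv2D(out_channels, out_channels, kernel_size=3, stride=2, padding=1) if add_downsample else None
        )

    def forward(self, hidden_states, temb=None):
        output_states = ()
        for resnet, attn in zip(self.resnets, self.attentions):
            hidden_states = resnet(hidden_states, temb)
            hidden_states = attn(hidden_states)
            output_states += (hidden_states,)
        if self.downsampler is not None:
            hidden_states = self.downsampler(hidden_states)
            output_states += (hidden_states,)
        return hidden_states, output_states


sample = paddle.randn([1, 32, 64, 64])
out, skips = TinyAttnDownBlock(32, 64)(sample)
print(out.shape, [tuple(s.shape) for s in skips])  # downsampled output plus three skip states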
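The up blocks invert that flow by consuming the stored skip states. The sketch below shows the skip-concatenation pattern (pop the newest residual, concatenate on the channel axis, process the widened tensor, upsample at the end); TinyUpBlock and its plain Conv2D layers are illustrative stand-ins, not the removed UpBlock2D.

import paddle
import paddle.nn as nn


class TinyUpBlock(nn.Layer):
    # Mirrors UpBlock2D's forward: each layer pops the matching residual,
    # concatenates it along the channel axis, then processes the widened tensor.
    def __init__(self, in_channels, skip_channels, out_channels, num_layers=2):
        super().__init__()
        self.resnets = nn.LayerList(
            [
                nn.Conv2D(
                    (in_channels if i == 0 else out_channels) + skip_channels,
                    out_channels,
                    kernel_size=3,
                    padding=1,
                )
                for i in range(num_layers)
            ]
        )
        self.upsampler = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, hidden_states, res_hidden_states_tuple):
        for resnet in self.resnets:
            res_hidden_states = res_hidden_states_tuple[-1]      # pop the newest skip state
            res_hidden_states_tuple = res_hidden_states_tuple[:-1]
            hidden_states = paddle.concat([hidden_states, res_hidden_states], axis=1)
            hidden_states = resnet(hidden_states)
        return self.upsampler(hidden_states)


skips = (paddle.randn([1, 32, 16, 16]), paddle.randn([1, 32, 16, 16]))
out = TinyUpBlock(in_channels=64, skip_channels=32, out_channels=64)(paddle.randn([1, 64, 16, 16]), skips)
print(out.shape)  # [1, 64, 32, 32]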
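Finally, the gradient-checkpointing branch seen throughout the removed blocks. This sketch assumes recompute is paddle.distributed.fleet.utils.recompute, the utility ppdiffusers imports for this purpose; during training, the wrapped sub-layer's activations are recomputed in the backward pass instead of being stored.

import paddle
import paddle.nn as nn
from paddle.distributed.fleet.utils import recompute  # assumed import, matching ppdiffusers' usage


def create_custom_forward(module):
    # Same wrapper shape as in the removed blocks: recompute() wants a plain callable.
    def custom_forward(*inputs):
        return module(*inputs)

    return custom_forward


layer = nn.Sequential(nn.Conv2D(8, 8, kernel_size=3, padding=1), nn.Silu())

x = paddle.randn([1, 8, 16, 16])
x.stop_gradient = False
y = recompute(create_custom_forward(layer), x)  # activations are rebuilt during backward
y.sum().backward()
print(x.grad.shape)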
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/AIConsultant/MusicGen/audiocraft/metrics/visqol.py
DELETED
@@ -1,216 +0,0 @@
|
|
1 |
-
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
2 |
-
# All rights reserved.
|
3 |
-
#
|
4 |
-
# This source code is licensed under the license found in the
|
5 |
-
# LICENSE file in the root directory of this source tree.
|
6 |
-
|
7 |
-
import csv
|
8 |
-
import json
|
9 |
-
import logging
|
10 |
-
from pathlib import Path
|
11 |
-
import tempfile
|
12 |
-
import typing as tp
|
13 |
-
import subprocess
|
14 |
-
import shutil
|
15 |
-
|
16 |
-
import torch
|
17 |
-
import torchaudio
|
18 |
-
|
19 |
-
logger = logging.getLogger(__name__)
|
20 |
-
|
21 |
-
|
22 |
-
class ViSQOL:
|
23 |
-
"""ViSQOL wrapper to run ViSQOL from Python using a pre-installed binary.
|
24 |
-
|
25 |
-
To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the
|
26 |
-
instructions available in the open source repository: https://github.com/google/visqol
|
27 |
-
|
28 |
-
ViSQOL is capable of running in two modes:
|
29 |
-
|
30 |
-
Audio Mode:
|
31 |
-
When running in audio mode, input signals must have a 48kHz sample rate. Input should be resampled to 48kHz.
|
32 |
-
Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison.
|
33 |
-
Audio mode uses support vector regression, with the maximum range at ~4.75.
|
34 |
-
|
35 |
-
Speech Mode:
|
36 |
-
When running in speech mode, ViSQOL uses a wideband model. It therefore expects input sample rates of 16kHz.
|
37 |
-
Input should be resampled to 16kHz.
|
38 |
-
As part of the speech mode processing, a root mean square implementation for voice activity detection
|
39 |
-
is performed on the reference signal to determine what parts of the signal have voice activity and
|
40 |
-
should therefore be included in the comparison. The signal is normalized before performing the voice
|
41 |
-
activity detection.
|
42 |
-
Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison.
|
43 |
-
Speech mode is scaled to have a maximum MOS of 5.0 to match previous version behavior.
|
44 |
-
|
45 |
-
For more details, check the guidelines: https://github.com/google/visqol#general-guidelines-for-input
|
46 |
-
|
47 |
-
Args:
|
48 |
-
visqol_bin (str): Path to the ViSQOL binary.
|
49 |
-
mode (str): ViSQOL computation mode, expecting "audio" or "speech".
|
50 |
-
model (str): Name of the model to use for similarity to quality model.
|
51 |
-
debug (bool): Whether to also get debug metrics from ViSQOL or not.
|
52 |
-
"""
|
53 |
-
SAMPLE_RATES_MODES = {"audio": 48_000, "speech": 16_000}
|
54 |
-
ALLOWED_SAMPLE_RATES = frozenset(SAMPLE_RATES_MODES.values())
|
55 |
-
|
56 |
-
def __init__(self, bin: tp.Union[Path, str], mode: str = "audio",
|
57 |
-
model: str = "libsvm_nu_svr_model.txt", debug: bool = False):
|
58 |
-
assert bin is not None and Path(bin).exists(), f"Could not find ViSQOL binary in specified path: {bin}"
|
59 |
-
self.visqol_bin = str(bin)
|
60 |
-
self.visqol_mode = mode
|
61 |
-
self.target_sr = self._get_target_sr(self.visqol_mode)
|
62 |
-
self.model = model
|
63 |
-
self.debug = debug
|
64 |
-
assert Path(self.visqol_model).exists(), \
|
65 |
-
f"Could not find the specified model in ViSQOL install: {self.visqol_model}"
|
66 |
-
|
67 |
-
def _get_target_sr(self, mode: str) -> int:
|
68 |
-
# returns target sampling rate for the corresponding ViSQOL mode.
|
69 |
-
if mode not in ViSQOL.SAMPLE_RATES_MODES:
|
70 |
-
raise ValueError(
|
71 |
-
f"Unsupported mode! Allowed are: {', '.join(ViSQOL.SAMPLE_RATES_MODES.keys())}"
|
72 |
-
)
|
73 |
-
return ViSQOL.SAMPLE_RATES_MODES[mode]
|
74 |
-
|
75 |
-
def _prepare_files(
|
76 |
-
self, ref_sig: torch.Tensor, deg_sig: torch.Tensor, sr: int, target_sr: int, pad_with_silence: bool = False
|
77 |
-
):
|
78 |
-
# prepare files for ViSQOL evaluation.
|
79 |
-
assert target_sr in ViSQOL.ALLOWED_SAMPLE_RATES
|
80 |
-
assert len(ref_sig) == len(deg_sig), (
|
81 |
-
"Expects same number of ref and degraded inputs",
|
82 |
-
f" but ref len {len(ref_sig)} != deg len {len(deg_sig)}"
|
83 |
-
)
|
84 |
-
# resample audio if needed
|
85 |
-
if sr != target_sr:
|
86 |
-
transform = torchaudio.transforms.Resample(sr, target_sr)
|
87 |
-
pad = int(0.5 * target_sr)
|
88 |
-
rs_ref = []
|
89 |
-
rs_deg = []
|
90 |
-
for i in range(len(ref_sig)):
|
91 |
-
rs_ref_i = transform(ref_sig[i])
|
92 |
-
rs_deg_i = transform(deg_sig[i])
|
93 |
-
if pad_with_silence:
|
94 |
-
rs_ref_i = torch.nn.functional.pad(rs_ref_i, (pad, pad), mode='constant', value=0)
|
95 |
-
rs_deg_i = torch.nn.functional.pad(rs_deg_i, (pad, pad), mode='constant', value=0)
|
96 |
-
rs_ref.append(rs_ref_i)
|
97 |
-
rs_deg.append(rs_deg_i)
|
98 |
-
ref_sig = torch.stack(rs_ref)
|
99 |
-
deg_sig = torch.stack(rs_deg)
|
100 |
-
# save audio chunks to tmp dir and create csv
|
101 |
-
tmp_dir = Path(tempfile.mkdtemp())
|
102 |
-
try:
|
103 |
-
tmp_input_csv_path = tmp_dir / "input.csv"
|
104 |
-
tmp_results_csv_path = tmp_dir / "results.csv"
|
105 |
-
tmp_debug_json_path = tmp_dir / "debug.json"
|
106 |
-
with open(tmp_input_csv_path, "w") as csv_file:
|
107 |
-
csv_writer = csv.writer(csv_file)
|
108 |
-
csv_writer.writerow(["reference", "degraded"])
|
109 |
-
for i in range(len(ref_sig)):
|
110 |
-
tmp_ref_filename = tmp_dir / f"ref_{i}.wav"
|
111 |
-
tmp_deg_filename = tmp_dir / f"deg_{i}.wav"
|
112 |
-
torchaudio.save(
|
113 |
-
tmp_ref_filename,
|
114 |
-
torch.clamp(ref_sig[i], min=-0.99, max=0.99),
|
115 |
-
sample_rate=target_sr,
|
116 |
-
bits_per_sample=16,
|
117 |
-
encoding="PCM_S"
|
118 |
-
)
|
119 |
-
torchaudio.save(
|
120 |
-
tmp_deg_filename,
|
121 |
-
torch.clamp(deg_sig[i], min=-0.99, max=0.99),
|
122 |
-
sample_rate=target_sr,
|
123 |
-
bits_per_sample=16,
|
124 |
-
encoding="PCM_S"
|
125 |
-
)
|
126 |
-
csv_writer.writerow([str(tmp_ref_filename), str(tmp_deg_filename)])
|
127 |
-
return tmp_dir, tmp_input_csv_path, tmp_results_csv_path, tmp_debug_json_path
|
128 |
-
except Exception as e:
|
129 |
-
logger.error("Exception occurred when preparing files for ViSQOL: %s", e)
|
130 |
-
return tmp_dir, None, None, None
|
131 |
-
|
132 |
-
def _flush_files(self, tmp_dir: tp.Union[Path, str]):
|
133 |
-
# flush tmp files used to compute ViSQOL.
|
134 |
-
shutil.rmtree(str(tmp_dir))
|
135 |
-
|
136 |
-
def _collect_moslqo_score(self, results_csv_path: tp.Union[Path, str]) -> float:
|
137 |
-
# collect results for each evaluated pair and return averaged moslqo score.
|
138 |
-
with open(results_csv_path, "r") as csv_file:
|
139 |
-
reader = csv.DictReader(csv_file)
|
140 |
-
moslqo_scores = [float(row["moslqo"]) for row in reader]
|
141 |
-
if len(moslqo_scores) > 0:
|
142 |
-
return sum(moslqo_scores) / len(moslqo_scores)
|
143 |
-
else:
|
144 |
-
return 0.0
|
145 |
-
|
146 |
-
def _collect_debug_data(self, debug_json_path: tp.Union[Path, str]) -> dict:
|
147 |
-
# collect debug data for the visqol inference.
|
148 |
-
with open(debug_json_path, "r") as f:
|
149 |
-
data = json.load(f)
|
150 |
-
return data
|
151 |
-
|
152 |
-
@property
|
153 |
-
def visqol_model(self):
|
154 |
-
return f'{self.visqol_bin}/model/{self.model}'
|
155 |
-
|
156 |
-
def _run_visqol(
|
157 |
-
self,
|
158 |
-
input_csv_path: tp.Union[Path, str],
|
159 |
-
results_csv_path: tp.Union[Path, str],
|
160 |
-
debug_csv_path: tp.Optional[tp.Union[Path, str]],
|
161 |
-
):
|
162 |
-
input_csv_path = str(input_csv_path)
|
163 |
-
results_csv_path = str(results_csv_path)
|
164 |
-
debug_csv_path = str(debug_csv_path)
|
165 |
-
cmd = [
|
166 |
-
f'{self.visqol_bin}/bazel-bin/visqol',
|
167 |
-
'--batch_input_csv', f'{input_csv_path}',
|
168 |
-
'--results_csv', f'{results_csv_path}'
|
169 |
-
]
|
170 |
-
if debug_csv_path is not None:
|
171 |
-
cmd += ['--output_debug', f'{debug_csv_path}']
|
172 |
-
if self.visqol_mode == "speech":
|
173 |
-
cmd += ['--use_speech_mode']
|
174 |
-
cmd += ['--similarity_to_quality_model', f'{self.visqol_model}']
|
175 |
-
result = subprocess.run(cmd, capture_output=True)
|
176 |
-
if result.returncode:
|
177 |
-
logger.error("Error with visqol: \n %s \n %s", result.stdout.decode(), result.stderr.decode())
|
178 |
-
raise RuntimeError("Error while executing visqol")
|
179 |
-
result.check_returncode()
|
180 |
-
|
181 |
-
def __call__(
|
182 |
-
self,
|
183 |
-
ref_sig: torch.Tensor,
|
184 |
-
deg_sig: torch.Tensor,
|
185 |
-
sr: int,
|
186 |
-
pad_with_silence: bool = False,
|
187 |
-
):
|
188 |
-
"""Calculate the ViSQOL metric for a pair of audio signals at a given sample rate.
|
189 |
-
Args:
|
190 |
-
ref_sig (torch.Tensor): Reference signals as [B, C, T].
|
191 |
-
deg_sig (torch.Tensor): Degraded signals as [B, C, T].
|
192 |
-
sr (int): Sample rate of the two audio signals.
|
193 |
-
pad_with_silence (bool): Whether to pad the file with silences as recommended
|
194 |
-
in visqol guidelines (see: https://github.com/google/visqol#general-guidelines-for-input).
|
195 |
-
Returns:
|
196 |
-
float: The ViSQOL score or mean score for the batch.
|
197 |
-
"""
|
198 |
-
logger.debug(f"Calculating visqol with mode={self.visqol_mode} on {len(ref_sig)} samples")
|
199 |
-
tmp_dir, input_csv, results_csv, debug_json = self._prepare_files(
|
200 |
-
ref_sig, deg_sig, sr, self.target_sr, pad_with_silence
|
201 |
-
)
|
202 |
-
try:
|
203 |
-
if input_csv and results_csv:
|
204 |
-
self._run_visqol(
|
205 |
-
input_csv,
|
206 |
-
results_csv,
|
207 |
-
debug_json if self.debug else None,
|
208 |
-
)
|
209 |
-
mosqol = self._collect_moslqo_score(results_csv)
|
210 |
-
return mosqol
|
211 |
-
else:
|
212 |
-
raise RuntimeError("Something unexpected happened when running VISQOL!")
|
213 |
-
except Exception as e:
|
214 |
-
logger.error("Exception occurred when running ViSQOL: %s", e)
|
215 |
-
finally:
|
216 |
-
self._flush_files(tmp_dir)
|
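For context, here is a minimal usage sketch of the ViSQOL wrapper removed above. Only the class interface comes from the deleted file; the binary location, batch contents and printed message are illustrative assumptions, and running it requires a ViSQOL checkout built with bazel (see google/visqol), since the wrapper shells out to `<bin>/bazel-bin/visqol`.

```python
import torch
from audiocraft.metrics.visqol import ViSQOL  # module path as it existed before this deletion

# Hypothetical root of a pre-built ViSQOL checkout (must contain bazel-bin/visqol and model/).
visqol = ViSQOL(bin="/opt/visqol", mode="audio")

ref = torch.randn(2, 1, 48_000)                  # [B, C, T] reference signals at 48 kHz
deg = 0.9 * ref + 0.01 * torch.randn_like(ref)   # lightly degraded copies
score = visqol(ref, deg, sr=48_000, pad_with_silence=True)
print(f"mean MOS-LQO over the batch: {score:.3f}")
```

Speech material would instead use `mode="speech"`, which resamples inputs to 16 kHz as described in the docstring above.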
spaces/AIFILMS/generate_human_motion/VQ-Trans/train_t2m_trans.py
DELETED
@@ -1,191 +0,0 @@
|
|
1 |
-
import os
|
2 |
-
import torch
|
3 |
-
import numpy as np
|
4 |
-
|
5 |
-
from torch.utils.tensorboard import SummaryWriter
|
6 |
-
from os.path import join as pjoin
|
7 |
-
from torch.distributions import Categorical
|
8 |
-
import json
|
9 |
-
import clip
|
10 |
-
|
11 |
-
import options.option_transformer as option_trans
|
12 |
-
import models.vqvae as vqvae
|
13 |
-
import utils.utils_model as utils_model
|
14 |
-
import utils.eval_trans as eval_trans
|
15 |
-
from dataset import dataset_TM_train
|
16 |
-
from dataset import dataset_TM_eval
|
17 |
-
from dataset import dataset_tokenize
|
18 |
-
import models.t2m_trans as trans
|
19 |
-
from options.get_eval_option import get_opt
|
20 |
-
from models.evaluator_wrapper import EvaluatorModelWrapper
|
21 |
-
import warnings
|
22 |
-
warnings.filterwarnings('ignore')
|
23 |
-
|
24 |
-
##### ---- Exp dirs ---- #####
|
25 |
-
args = option_trans.get_args_parser()
|
26 |
-
torch.manual_seed(args.seed)
|
27 |
-
|
28 |
-
args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
|
29 |
-
args.vq_dir= os.path.join("./dataset/KIT-ML" if args.dataname == 'kit' else "./dataset/HumanML3D", f'{args.vq_name}')
|
30 |
-
os.makedirs(args.out_dir, exist_ok = True)
|
31 |
-
os.makedirs(args.vq_dir, exist_ok = True)
|
32 |
-
|
33 |
-
##### ---- Logger ---- #####
|
34 |
-
logger = utils_model.get_logger(args.out_dir)
|
35 |
-
writer = SummaryWriter(args.out_dir)
|
36 |
-
logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
|
37 |
-
|
38 |
-
##### ---- Dataloader ---- #####
|
39 |
-
train_loader_token = dataset_tokenize.DATALoader(args.dataname, 1, unit_length=2**args.down_t)
|
40 |
-
|
41 |
-
from utils.word_vectorizer import WordVectorizer
|
42 |
-
w_vectorizer = WordVectorizer('./glove', 'our_vab')
|
43 |
-
val_loader = dataset_TM_eval.DATALoader(args.dataname, False, 32, w_vectorizer)
|
44 |
-
|
45 |
-
dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
|
46 |
-
|
47 |
-
wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
|
48 |
-
eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
|
49 |
-
|
50 |
-
##### ---- Network ---- #####
|
51 |
-
clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training
|
52 |
-
clip.model.convert_weights(clip_model) # Actually this line is unnecessary since clip by default already on float16
|
53 |
-
clip_model.eval()
|
54 |
-
for p in clip_model.parameters():
|
55 |
-
p.requires_grad = False
|
56 |
-
|
57 |
-
net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
|
58 |
-
args.nb_code,
|
59 |
-
args.code_dim,
|
60 |
-
args.output_emb_width,
|
61 |
-
args.down_t,
|
62 |
-
args.stride_t,
|
63 |
-
args.width,
|
64 |
-
args.depth,
|
65 |
-
args.dilation_growth_rate)
|
66 |
-
|
67 |
-
|
68 |
-
trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
|
69 |
-
embed_dim=args.embed_dim_gpt,
|
70 |
-
clip_dim=args.clip_dim,
|
71 |
-
block_size=args.block_size,
|
72 |
-
num_layers=args.num_layers,
|
73 |
-
n_head=args.n_head_gpt,
|
74 |
-
drop_out_rate=args.drop_out_rate,
|
75 |
-
fc_rate=args.ff_rate)
|
76 |
-
|
77 |
-
|
78 |
-
print ('loading checkpoint from {}'.format(args.resume_pth))
|
79 |
-
ckpt = torch.load(args.resume_pth, map_location='cpu')
|
80 |
-
net.load_state_dict(ckpt['net'], strict=True)
|
81 |
-
net.eval()
|
82 |
-
net.cuda()
|
83 |
-
|
84 |
-
if args.resume_trans is not None:
|
85 |
-
print ('loading transformer checkpoint from {}'.format(args.resume_trans))
|
86 |
-
ckpt = torch.load(args.resume_trans, map_location='cpu')
|
87 |
-
trans_encoder.load_state_dict(ckpt['trans'], strict=True)
|
88 |
-
trans_encoder.train()
|
89 |
-
trans_encoder.cuda()
|
90 |
-
|
91 |
-
##### ---- Optimizer & Scheduler ---- #####
|
92 |
-
optimizer = utils_model.initial_optim(args.decay_option, args.lr, args.weight_decay, trans_encoder, args.optimizer)
|
93 |
-
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=args.lr_scheduler, gamma=args.gamma)
|
94 |
-
|
95 |
-
##### ---- Optimization goals ---- #####
|
96 |
-
loss_ce = torch.nn.CrossEntropyLoss()
|
97 |
-
|
98 |
-
nb_iter, avg_loss_cls, avg_acc = 0, 0., 0.
|
99 |
-
right_num = 0
|
100 |
-
nb_sample_train = 0
|
101 |
-
|
102 |
-
##### ---- get code ---- #####
|
103 |
-
for batch in train_loader_token:
|
104 |
-
pose, name = batch
|
105 |
-
bs, seq = pose.shape[0], pose.shape[1]
|
106 |
-
|
107 |
-
pose = pose.cuda().float() # bs, nb_joints, joints_dim, seq_len
|
108 |
-
target = net.encode(pose)
|
109 |
-
target = target.cpu().numpy()
|
110 |
-
np.save(pjoin(args.vq_dir, name[0] +'.npy'), target)
|
111 |
-
|
112 |
-
|
113 |
-
train_loader = dataset_TM_train.DATALoader(args.dataname, args.batch_size, args.nb_code, args.vq_name, unit_length=2**args.down_t)
|
114 |
-
train_loader_iter = dataset_TM_train.cycle(train_loader)
|
115 |
-
|
116 |
-
|
117 |
-
##### ---- Training ---- #####
|
118 |
-
best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, clip_model=clip_model, eval_wrapper=eval_wrapper)
|
119 |
-
while nb_iter <= args.total_iter:
|
120 |
-
|
121 |
-
batch = next(train_loader_iter)
|
122 |
-
clip_text, m_tokens, m_tokens_len = batch
|
123 |
-
m_tokens, m_tokens_len = m_tokens.cuda(), m_tokens_len.cuda()
|
124 |
-
bs = m_tokens.shape[0]
|
125 |
-
target = m_tokens # (bs, 26)
|
126 |
-
target = target.cuda()
|
127 |
-
|
128 |
-
text = clip.tokenize(clip_text, truncate=True).cuda()
|
129 |
-
|
130 |
-
feat_clip_text = clip_model.encode_text(text).float()
|
131 |
-
|
132 |
-
input_index = target[:,:-1]
|
133 |
-
|
134 |
-
if args.pkeep == -1:
|
135 |
-
proba = np.random.rand(1)[0]
|
136 |
-
mask = torch.bernoulli(proba * torch.ones(input_index.shape,
|
137 |
-
device=input_index.device))
|
138 |
-
else:
|
139 |
-
mask = torch.bernoulli(args.pkeep * torch.ones(input_index.shape,
|
140 |
-
device=input_index.device))
|
141 |
-
mask = mask.round().to(dtype=torch.int64)
|
142 |
-
r_indices = torch.randint_like(input_index, args.nb_code)
|
143 |
-
a_indices = mask*input_index+(1-mask)*r_indices
|
144 |
-
|
145 |
-
cls_pred = trans_encoder(a_indices, feat_clip_text)
|
146 |
-
cls_pred = cls_pred.contiguous()
|
147 |
-
|
148 |
-
loss_cls = 0.0
|
149 |
-
for i in range(bs):
|
150 |
-
# loss function (26), (26, 513)
|
151 |
-
loss_cls += loss_ce(cls_pred[i][:m_tokens_len[i] + 1], target[i][:m_tokens_len[i] + 1]) / bs
|
152 |
-
|
153 |
-
# Accuracy
|
154 |
-
probs = torch.softmax(cls_pred[i][:m_tokens_len[i] + 1], dim=-1)
|
155 |
-
|
156 |
-
if args.if_maxtest:
|
157 |
-
_, cls_pred_index = torch.max(probs, dim=-1)
|
158 |
-
|
159 |
-
else:
|
160 |
-
dist = Categorical(probs)
|
161 |
-
cls_pred_index = dist.sample()
|
162 |
-
right_num += (cls_pred_index.flatten(0) == target[i][:m_tokens_len[i] + 1].flatten(0)).sum().item()
|
163 |
-
|
164 |
-
## global loss
|
165 |
-
optimizer.zero_grad()
|
166 |
-
loss_cls.backward()
|
167 |
-
optimizer.step()
|
168 |
-
scheduler.step()
|
169 |
-
|
170 |
-
avg_loss_cls = avg_loss_cls + loss_cls.item()
|
171 |
-
nb_sample_train = nb_sample_train + (m_tokens_len + 1).sum().item()
|
172 |
-
|
173 |
-
nb_iter += 1
|
174 |
-
if nb_iter % args.print_iter == 0 :
|
175 |
-
avg_loss_cls = avg_loss_cls / args.print_iter
|
176 |
-
avg_acc = right_num * 100 / nb_sample_train
|
177 |
-
writer.add_scalar('./Loss/train', avg_loss_cls, nb_iter)
|
178 |
-
writer.add_scalar('./ACC/train', avg_acc, nb_iter)
|
179 |
-
msg = f"Train. Iter {nb_iter} : Loss. {avg_loss_cls:.5f}, ACC. {avg_acc:.4f}"
|
180 |
-
logger.info(msg)
|
181 |
-
avg_loss_cls = 0.
|
182 |
-
right_num = 0
|
183 |
-
nb_sample_train = 0
|
184 |
-
|
185 |
-
if nb_iter % args.eval_iter == 0:
|
186 |
-
best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_transformer(args.out_dir, val_loader, net, trans_encoder, logger, writer, nb_iter, best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, clip_model=clip_model, eval_wrapper=eval_wrapper)
|
187 |
-
|
188 |
-
if nb_iter == args.total_iter:
|
189 |
-
msg_final = f"Train. Iter {best_iter} : FID. {best_fid:.5f}, Diversity. {best_div:.4f}, TOP1. {best_top1:.4f}, TOP2. {best_top2:.4f}, TOP3. {best_top3:.4f}"
|
190 |
-
logger.info(msg_final)
|
191 |
-
break
|
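One detail worth calling out in the training loop above is the `pkeep` corruption step: each ground-truth motion token is kept with probability `pkeep` and otherwise replaced by a random codebook index before being fed to the transformer. A standalone sketch of just that step (the batch size, sequence length and `pkeep` value below are illustrative, not taken from the repository's options):

```python
import torch

nb_code, pkeep = 512, 0.9                          # illustrative values
input_index = torch.randint(0, nb_code, (4, 25))   # (bs, seq_len - 1) ground-truth tokens

# keep each token with probability pkeep, otherwise swap in a random code index
mask = torch.bernoulli(pkeep * torch.ones(input_index.shape, device=input_index.device))
mask = mask.round().to(dtype=torch.int64)
r_indices = torch.randint_like(input_index, nb_code)
a_indices = mask * input_index + (1 - mask) * r_indices   # corrupted input sequence
```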
spaces/AIFILMS/generate_human_motion/pyrender/pyrender/font.py
DELETED
@@ -1,272 +0,0 @@
|
|
1 |
-
"""Font texture loader and processor.
|
2 |
-
|
3 |
-
Author: Matthew Matl
|
4 |
-
"""
|
5 |
-
import freetype
|
6 |
-
import numpy as np
|
7 |
-
import os
|
8 |
-
|
9 |
-
import OpenGL
|
10 |
-
from OpenGL.GL import *
|
11 |
-
|
12 |
-
from .constants import TextAlign, FLOAT_SZ
|
13 |
-
from .texture import Texture
|
14 |
-
from .sampler import Sampler
|
15 |
-
|
16 |
-
|
17 |
-
class FontCache(object):
|
18 |
-
"""A cache for fonts.
|
19 |
-
"""
|
20 |
-
|
21 |
-
def __init__(self, font_dir=None):
|
22 |
-
self._font_cache = {}
|
23 |
-
self.font_dir = font_dir
|
24 |
-
if self.font_dir is None:
|
25 |
-
base_dir, _ = os.path.split(os.path.realpath(__file__))
|
26 |
-
self.font_dir = os.path.join(base_dir, 'fonts')
|
27 |
-
|
28 |
-
def get_font(self, font_name, font_pt):
|
29 |
-
# If it's a file, load it directly, else, try to load from font dir.
|
30 |
-
if os.path.isfile(font_name):
|
31 |
-
font_filename = font_name
|
32 |
-
_, font_name = os.path.split(font_name)
|
33 |
-
font_name, _ = os.path.split(font_name)
|
34 |
-
else:
|
35 |
-
font_filename = os.path.join(self.font_dir, font_name) + '.ttf'
|
36 |
-
|
37 |
-
cid = OpenGL.contextdata.getContext()
|
38 |
-
key = (cid, font_name, int(font_pt))
|
39 |
-
|
40 |
-
if key not in self._font_cache:
|
41 |
-
self._font_cache[key] = Font(font_filename, font_pt)
|
42 |
-
return self._font_cache[key]
|
43 |
-
|
44 |
-
def clear(self):
|
45 |
-
for key in self._font_cache:
|
46 |
-
self._font_cache[key].delete()
|
47 |
-
self._font_cache = {}
|
48 |
-
|
49 |
-
|
50 |
-
class Character(object):
|
51 |
-
"""A single character, with its texture and attributes.
|
52 |
-
"""
|
53 |
-
|
54 |
-
def __init__(self, texture, size, bearing, advance):
|
55 |
-
self.texture = texture
|
56 |
-
self.size = size
|
57 |
-
self.bearing = bearing
|
58 |
-
self.advance = advance
|
59 |
-
|
60 |
-
|
61 |
-
class Font(object):
|
62 |
-
"""A font object.
|
63 |
-
|
64 |
-
Parameters
|
65 |
-
----------
|
66 |
-
font_file : str
|
67 |
-
The file to load the font from.
|
68 |
-
font_pt : int
|
69 |
-
The height of the font in pixels.
|
70 |
-
"""
|
71 |
-
|
72 |
-
def __init__(self, font_file, font_pt=40):
|
73 |
-
self.font_file = font_file
|
74 |
-
self.font_pt = int(font_pt)
|
75 |
-
self._face = freetype.Face(font_file)
|
76 |
-
self._face.set_pixel_sizes(0, font_pt)
|
77 |
-
self._character_map = {}
|
78 |
-
|
79 |
-
for i in range(0, 128):
|
80 |
-
|
81 |
-
# Generate texture
|
82 |
-
face = self._face
|
83 |
-
face.load_char(chr(i))
|
84 |
-
buf = face.glyph.bitmap.buffer
|
85 |
-
src = (np.array(buf) / 255.0).astype(np.float32)
|
86 |
-
src = src.reshape((face.glyph.bitmap.rows,
|
87 |
-
face.glyph.bitmap.width))
|
88 |
-
tex = Texture(
|
89 |
-
sampler=Sampler(
|
90 |
-
magFilter=GL_LINEAR,
|
91 |
-
minFilter=GL_LINEAR,
|
92 |
-
wrapS=GL_CLAMP_TO_EDGE,
|
93 |
-
wrapT=GL_CLAMP_TO_EDGE
|
94 |
-
),
|
95 |
-
source=src,
|
96 |
-
source_channels='R',
|
97 |
-
)
|
98 |
-
character = Character(
|
99 |
-
texture=tex,
|
100 |
-
size=np.array([face.glyph.bitmap.width,
|
101 |
-
face.glyph.bitmap.rows]),
|
102 |
-
bearing=np.array([face.glyph.bitmap_left,
|
103 |
-
face.glyph.bitmap_top]),
|
104 |
-
advance=face.glyph.advance.x
|
105 |
-
)
|
106 |
-
self._character_map[chr(i)] = character
|
107 |
-
|
108 |
-
self._vbo = None
|
109 |
-
self._vao = None
|
110 |
-
|
111 |
-
@property
|
112 |
-
def font_file(self):
|
113 |
-
"""str : The file the font was loaded from.
|
114 |
-
"""
|
115 |
-
return self._font_file
|
116 |
-
|
117 |
-
@font_file.setter
|
118 |
-
def font_file(self, value):
|
119 |
-
self._font_file = value
|
120 |
-
|
121 |
-
@property
|
122 |
-
def font_pt(self):
|
123 |
-
"""int : The height of the font in pixels.
|
124 |
-
"""
|
125 |
-
return self._font_pt
|
126 |
-
|
127 |
-
@font_pt.setter
|
128 |
-
def font_pt(self, value):
|
129 |
-
self._font_pt = int(value)
|
130 |
-
|
131 |
-
def _add_to_context(self):
|
132 |
-
|
133 |
-
self._vao = glGenVertexArrays(1)
|
134 |
-
glBindVertexArray(self._vao)
|
135 |
-
self._vbo = glGenBuffers(1)
|
136 |
-
glBindBuffer(GL_ARRAY_BUFFER, self._vbo)
|
137 |
-
glBufferData(GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, None, GL_DYNAMIC_DRAW)
|
138 |
-
glEnableVertexAttribArray(0)
|
139 |
-
glVertexAttribPointer(
|
140 |
-
0, 4, GL_FLOAT, GL_FALSE, 4 * FLOAT_SZ, ctypes.c_void_p(0)
|
141 |
-
)
|
142 |
-
glBindVertexArray(0)
|
143 |
-
|
144 |
-
glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
|
145 |
-
for c in self._character_map:
|
146 |
-
ch = self._character_map[c]
|
147 |
-
if not ch.texture._in_context():
|
148 |
-
ch.texture._add_to_context()
|
149 |
-
|
150 |
-
def _remove_from_context(self):
|
151 |
-
for c in self._character_map:
|
152 |
-
ch = self._character_map[c]
|
153 |
-
ch.texture.delete()
|
154 |
-
if self._vao is not None:
|
155 |
-
glDeleteVertexArrays(1, [self._vao])
|
156 |
-
glDeleteBuffers(1, [self._vbo])
|
157 |
-
self._vao = None
|
158 |
-
self._vbo = None
|
159 |
-
|
160 |
-
def _in_context(self):
|
161 |
-
return self._vao is not None
|
162 |
-
|
163 |
-
def _bind(self):
|
164 |
-
glBindVertexArray(self._vao)
|
165 |
-
|
166 |
-
def _unbind(self):
|
167 |
-
glBindVertexArray(0)
|
168 |
-
|
169 |
-
def delete(self):
|
170 |
-
self._unbind()
|
171 |
-
self._remove_from_context()
|
172 |
-
|
173 |
-
def render_string(self, text, x, y, scale=1.0,
|
174 |
-
align=TextAlign.BOTTOM_LEFT):
|
175 |
-
"""Render a string to the current view buffer.
|
176 |
-
|
177 |
-
Note
|
178 |
-
----
|
179 |
-
Assumes correct shader program already bound w/ uniforms set.
|
180 |
-
|
181 |
-
Parameters
|
182 |
-
----------
|
183 |
-
text : str
|
184 |
-
The text to render.
|
185 |
-
x : int
|
186 |
-
Horizontal pixel location of text.
|
187 |
-
y : int
|
188 |
-
Vertical pixel location of text.
|
189 |
-
scale : int
|
190 |
-
Scaling factor for text.
|
191 |
-
align : int
|
192 |
-
One of the TextAlign options which specifies where the ``x``
|
193 |
-
and ``y`` parameters lie on the text. For example,
|
194 |
-
:attr:`.TextAlign.BOTTOM_LEFT` means that ``x`` and ``y`` indicate
|
195 |
-
the position of the bottom-left corner of the textbox.
|
196 |
-
"""
|
197 |
-
glActiveTexture(GL_TEXTURE0)
|
198 |
-
glEnable(GL_BLEND)
|
199 |
-
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
|
200 |
-
glDisable(GL_DEPTH_TEST)
|
201 |
-
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
|
202 |
-
self._bind()
|
203 |
-
|
204 |
-
# Determine width and height of text relative to x, y
|
205 |
-
width = 0.0
|
206 |
-
height = 0.0
|
207 |
-
for c in text:
|
208 |
-
ch = self._character_map[c]
|
209 |
-
height = max(height, ch.bearing[1] * scale)
|
210 |
-
width += (ch.advance >> 6) * scale
|
211 |
-
|
212 |
-
# Determine offsets based on alignments
|
213 |
-
xoff = 0
|
214 |
-
yoff = 0
|
215 |
-
if align == TextAlign.BOTTOM_RIGHT:
|
216 |
-
xoff = -width
|
217 |
-
elif align == TextAlign.BOTTOM_CENTER:
|
218 |
-
xoff = -width / 2.0
|
219 |
-
elif align == TextAlign.TOP_LEFT:
|
220 |
-
yoff = -height
|
221 |
-
elif align == TextAlign.TOP_RIGHT:
|
222 |
-
yoff = -height
|
223 |
-
xoff = -width
|
224 |
-
elif align == TextAlign.TOP_CENTER:
|
225 |
-
yoff = -height
|
226 |
-
xoff = -width / 2.0
|
227 |
-
elif align == TextAlign.CENTER:
|
228 |
-
xoff = -width / 2.0
|
229 |
-
yoff = -height / 2.0
|
230 |
-
elif align == TextAlign.CENTER_LEFT:
|
231 |
-
yoff = -height / 2.0
|
232 |
-
elif align == TextAlign.CENTER_RIGHT:
|
233 |
-
xoff = -width
|
234 |
-
yoff = -height / 2.0
|
235 |
-
|
236 |
-
x += xoff
|
237 |
-
y += yoff
|
238 |
-
|
239 |
-
ch = None
|
240 |
-
for c in text:
|
241 |
-
ch = self._character_map[c]
|
242 |
-
xpos = x + ch.bearing[0] * scale
|
243 |
-
ypos = y - (ch.size[1] - ch.bearing[1]) * scale
|
244 |
-
w = ch.size[0] * scale
|
245 |
-
h = ch.size[1] * scale
|
246 |
-
|
247 |
-
vertices = np.array([
|
248 |
-
[xpos, ypos, 0.0, 0.0],
|
249 |
-
[xpos + w, ypos, 1.0, 0.0],
|
250 |
-
[xpos + w, ypos + h, 1.0, 1.0],
|
251 |
-
[xpos + w, ypos + h, 1.0, 1.0],
|
252 |
-
[xpos, ypos + h, 0.0, 1.0],
|
253 |
-
[xpos, ypos, 0.0, 0.0],
|
254 |
-
], dtype=np.float32)
|
255 |
-
|
256 |
-
ch.texture._bind()
|
257 |
-
|
258 |
-
glBindBuffer(GL_ARRAY_BUFFER, self._vbo)
|
259 |
-
glBufferData(
|
260 |
-
GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, vertices, GL_DYNAMIC_DRAW
|
261 |
-
)
|
262 |
-
# TODO MAKE THIS MORE EFFICIENT, glBufferSubData is broken
|
263 |
-
# glBufferSubData(
|
264 |
-
# GL_ARRAY_BUFFER, 0, 6 * 4 * FLOAT_SZ,
|
265 |
-
# np.ascontiguousarray(vertices.flatten)
|
266 |
-
# )
|
267 |
-
glDrawArrays(GL_TRIANGLES, 0, 6)
|
268 |
-
x += (ch.advance >> 6) * scale
|
269 |
-
|
270 |
-
self._unbind()
|
271 |
-
if ch:
|
272 |
-
ch.texture._unbind()
|
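A small aside on the `ch.advance >> 6` expressions above: FreeType reports glyph advances in 26.6 fixed-point units (64ths of a pixel), so the renderer shifts right by 6 to get whole pixels before applying the scale factor. A tiny illustrative helper:

```python
def advance_to_pixels(advance_26_6: int, scale: float = 1.0) -> float:
    """Convert a FreeType 26.6 fixed-point advance to (scaled) pixels."""
    return (advance_26_6 >> 6) * scale

assert advance_to_pixels(640) == 10.0       # 640 / 64 = 10 px
assert advance_to_pixels(640, 2.0) == 20.0
```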
spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/lr_scheduler.py
DELETED
@@ -1,128 +0,0 @@
|
|
1 |
-
import math
|
2 |
-
import torch
|
3 |
-
|
4 |
-
|
5 |
-
class ExponentialDecayScheduler(torch.optim.lr_scheduler._LRScheduler):
|
6 |
-
|
7 |
-
def __init__(self, optimizer, total_iters, final_lrs,
|
8 |
-
warmup_iters=3000, last_epoch=-1, verbose=False):
|
9 |
-
self.total_iters = total_iters
|
10 |
-
self.final_lrs = final_lrs
|
11 |
-
if not isinstance(self.final_lrs, list) and not isinstance(
|
12 |
-
self.final_lrs, tuple):
|
13 |
-
self.final_lrs = [self.final_lrs] * len(optimizer.param_groups)
|
14 |
-
self.warmup_iters = warmup_iters
|
15 |
-
self.bases = [0.0,] * len(optimizer.param_groups)
|
16 |
-
super().__init__(optimizer, last_epoch, verbose)
|
17 |
-
for i, (base_lr, final_lr) in enumerate(zip(self.base_lrs, self.final_lrs)):
|
18 |
-
base = (final_lr / base_lr) ** (1 / (
|
19 |
-
self.total_iters - self.warmup_iters))
|
20 |
-
self.bases[i] = base
|
21 |
-
|
22 |
-
def _get_closed_form_lr(self):
|
23 |
-
warmup_coeff = 1.0
|
24 |
-
current_iter = self._step_count
|
25 |
-
if current_iter < self.warmup_iters:
|
26 |
-
warmup_coeff = current_iter / self.warmup_iters
|
27 |
-
current_lrs = []
|
28 |
-
# if not self.linear_warmup:
|
29 |
-
# for base_lr, final_lr, base in zip(self.base_lrs, self.final_lrs, self.bases):
|
30 |
-
# # current_lr = warmup_coeff * base_lr * math.exp(((current_iter - self.warmup_iters) / self.total_iters) * math.log(final_lr / base_lr))
|
31 |
-
# current_lr = warmup_coeff * base_lr * (base ** (current_iter - self.warmup_iters))
|
32 |
-
# current_lrs.append(current_lr)
|
33 |
-
# else:
|
34 |
-
for base_lr, final_lr, base in zip(self.base_lrs, self.final_lrs,
|
35 |
-
self.bases):
|
36 |
-
if current_iter <= self.warmup_iters:
|
37 |
-
current_lr = warmup_coeff * base_lr
|
38 |
-
else:
|
39 |
-
# current_lr = warmup_coeff * base_lr * math.exp(((current_iter - self.warmup_iters) / self.total_iters) * math.log(final_lr / base_lr))
|
40 |
-
current_lr = base_lr * (base ** (current_iter - self.warmup_iters))
|
41 |
-
current_lrs.append(current_lr)
|
42 |
-
return current_lrs
|
43 |
-
|
44 |
-
def get_lr(self):
|
45 |
-
return self._get_closed_form_lr()
|
46 |
-
|
47 |
-
|
48 |
-
class NoamScheduler(torch.optim.lr_scheduler._LRScheduler):
|
49 |
-
|
50 |
-
def __init__(self, optimizer, model_size=512, factor=1, warmup_iters=3000,
|
51 |
-
last_epoch=-1, verbose=False):
|
52 |
-
self.model_size = model_size
|
53 |
-
self.warmup_iters = warmup_iters
|
54 |
-
# self.factors = [group["lr"] / (self.model_size ** (-0.5) * self.warmup_iters ** (-0.5)) for group in optimizer.param_groups]
|
55 |
-
self.factor = factor
|
56 |
-
super().__init__(optimizer, last_epoch, verbose)
|
57 |
-
|
58 |
-
def _get_closed_form_lr(self):
|
59 |
-
current_iter = self._step_count
|
60 |
-
current_lrs = []
|
61 |
-
for _ in self.base_lrs:
|
62 |
-
current_lr = self.factor * \
|
63 |
-
(self.model_size ** (-0.5) * min(current_iter ** (-0.5),
|
64 |
-
current_iter * self.warmup_iters ** (-1.5)))
|
65 |
-
current_lrs.append(current_lr)
|
66 |
-
return current_lrs
|
67 |
-
|
68 |
-
def get_lr(self):
|
69 |
-
return self._get_closed_form_lr()
|
70 |
-
|
71 |
-
|
72 |
-
class CosineWithWarmup(torch.optim.lr_scheduler._LRScheduler):
|
73 |
-
|
74 |
-
def __init__(self, optimizer, total_iters, warmup_iters,
|
75 |
-
num_cycles=0.5, last_epoch=-1, verbose=False):
|
76 |
-
self.total_iters = total_iters
|
77 |
-
self.warmup_iters = warmup_iters
|
78 |
-
self.num_cycles = num_cycles
|
79 |
-
super().__init__(optimizer, last_epoch, verbose)
|
80 |
-
|
81 |
-
def lr_lambda(self, iteration):
|
82 |
-
if iteration < self.warmup_iters:
|
83 |
-
return float(iteration) / float(max(1, self.warmup_iters))
|
84 |
-
progress = float(iteration - self.warmup_iters) / float(max(1,
|
85 |
-
self.total_iters - self.warmup_iters))
|
86 |
-
return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(
|
87 |
-
self.num_cycles) * 2.0 * progress)))
|
88 |
-
|
89 |
-
def _get_closed_form_lr(self):
|
90 |
-
current_iter = self._step_count
|
91 |
-
current_lrs = []
|
92 |
-
for base_lr in self.base_lrs:
|
93 |
-
current_lr = base_lr * self.lr_lambda(current_iter)
|
94 |
-
current_lrs.append(current_lr)
|
95 |
-
return current_lrs
|
96 |
-
|
97 |
-
def get_lr(self):
|
98 |
-
return self._get_closed_form_lr()
|
99 |
-
|
100 |
-
|
101 |
-
if __name__ == "__main__":
|
102 |
-
model = torch.nn.Linear(10, 5)
|
103 |
-
optimizer = torch.optim.Adam(model.parameters(), 5e-4)
|
104 |
-
epochs = 25
|
105 |
-
iters = 600
|
106 |
-
scheduler = CosineWithWarmup(optimizer, 600 * 25, 600 * 5,)
|
107 |
-
# scheduler = ExponentialDecayScheduler(optimizer, 600 * 25, 5e-7, 600 * 5)
|
108 |
-
criterion = torch.nn.MSELoss()
|
109 |
-
lrs = []
|
110 |
-
for epoch in range(1, epochs + 1):
|
111 |
-
for iteration in range(1, iters + 1):
|
112 |
-
optimizer.zero_grad()
|
113 |
-
x = torch.randn(4, 10)
|
114 |
-
y = torch.randn(4, 5)
|
115 |
-
loss = criterion(model(x), y)
|
116 |
-
loss.backward()
|
117 |
-
optimizer.step()
|
118 |
-
scheduler.step()
|
119 |
-
# print(f"lr: {scheduler.get_last_lr()}")
|
120 |
-
# lrs.append(scheduler.get_last_lr())
|
121 |
-
lrs.append(optimizer.param_groups[0]["lr"])
|
122 |
-
import matplotlib.pyplot as plt
|
123 |
-
plt.plot(list(range(1, len(lrs) + 1)), lrs, '-o', markersize=1)
|
124 |
-
# plt.legend(loc="best")
|
125 |
-
plt.xlabel("Iteration")
|
126 |
-
plt.ylabel("LR")
|
127 |
-
|
128 |
-
plt.savefig("lr_curve.png", dpi=100)
|
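For reference, `NoamScheduler` above implements the warmup-then-decay rule popularized by the original Transformer paper: lr = factor * d_model^-0.5 * min(step^-0.5, step * warmup^-1.5), which rises linearly during warmup and then decays as step^-0.5. A quick standalone check of the closed form (the values fed in are illustrative):

```python
def noam_lr(step, model_size=512, factor=1.0, warmup_iters=3000):
    # mirrors NoamScheduler._get_closed_form_lr above
    return factor * (model_size ** -0.5) * min(step ** -0.5, step * warmup_iters ** -1.5)

# the LR peaks at step == warmup_iters and decays afterwards
print(noam_lr(300), noam_lr(3000), noam_lr(30000))
```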
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/custom_ds.py
DELETED
@@ -1,55 +0,0 @@
|
|
1 |
-
dataset_type = 'CustomDataset'
|
2 |
-
|
3 |
-
# config of data prepare
|
4 |
-
# None
|
5 |
-
|
6 |
-
# config of pipline
|
7 |
-
train_pipeline = [
|
8 |
-
dict(type='LoadImageFromFile'), # load the image
|
9 |
-
dict(type='RandomResizedCrop', scale=224), # random resized crop
|
10 |
-
dict(type='RandomFlip', prob=0.5, direction='horizontal'), # random horizontal flip
|
11 |
-
dict(type='PackInputs'), # pack the image and label
|
12 |
-
]
|
13 |
-
|
14 |
-
test_pipeline = [
|
15 |
-
dict(type='LoadImageFromFile'), # load the image
|
16 |
-
dict(type='ResizeEdge', scale=256, edge='short'), # resize the short edge to 256px
|
17 |
-
dict(type='CenterCrop', crop_size=224), # center crop
|
18 |
-
dict(type='PackInputs'), # pack the image and label
|
19 |
-
]
|
20 |
-
|
21 |
-
# config of dataloader
|
22 |
-
train_dataloader = dict(
|
23 |
-
batch_size=8, # batch size per GPU
|
24 |
-
num_workers=4, # number of worker threads per GPU
|
25 |
-
dataset=dict( # training dataset
|
26 |
-
type=dataset_type,
|
27 |
-
data_root='../2_preprocess_data_3000',
|
28 |
-
with_label=True,
|
29 |
-
ann_file='',
|
30 |
-
data_prefix='train',
|
31 |
-
pipeline=train_pipeline),
|
32 |
-
sampler=dict(type='DefaultSampler', shuffle=True), # default sampler
|
33 |
-
persistent_workers=True, # keep worker processes alive to shorten per-epoch startup time
|
34 |
-
)
|
35 |
-
|
36 |
-
# build the validation dataloader
|
37 |
-
val_dataloader = dict(
|
38 |
-
batch_size=8,
|
39 |
-
num_workers=4,
|
40 |
-
dataset=dict(
|
41 |
-
type=dataset_type,
|
42 |
-
data_root='../2_preprocess_data_3000',
|
43 |
-
with_label=True,
|
44 |
-
ann_file='',
|
45 |
-
data_prefix='val',
|
46 |
-
pipeline=test_pipeline),
|
47 |
-
sampler=dict(type='DefaultSampler', shuffle=False),
|
48 |
-
persistent_workers=True,
|
49 |
-
)
|
50 |
-
|
51 |
-
# set the evaluator for the validation dataset; here top-1 and top-3 accuracy are used
|
52 |
-
val_evaluator = dict(type='Accuracy', topk=(1, 3))
|
53 |
-
|
54 |
-
test_dataloader = val_dataloader
|
55 |
-
test_evaluator = val_evaluator
|
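A dataset config like this is normally consumed through the MMEngine config loader rather than imported directly; a minimal sketch, assuming MMEngine is installed and using a hypothetical file path:

```python
from mmengine.config import Config

cfg = Config.fromfile('configs/_base_/datasets/custom_ds.py')  # hypothetical location
print(cfg.train_dataloader.batch_size)   # -> 8
print(cfg.val_evaluator)                 # -> {'type': 'Accuracy', 'topk': (1, 3)}
```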
spaces/Abdllh/topic2poem/README.md
DELETED
@@ -1,14 +0,0 @@
|
|
1 |
-
---
|
2 |
-
title: Topic2poem
|
3 |
-
emoji: 💻
|
4 |
-
colorFrom: pink
|
5 |
-
colorTo: purple
|
6 |
-
sdk: gradio
|
7 |
-
sdk_version: 3.2
|
8 |
-
app_file: app.py
|
9 |
-
pinned: false
|
10 |
-
license: afl-3.0
|
11 |
-
duplicated_from: aaaaaabbbbbbbdddddddduuuuulllll/topic2poem
|
12 |
-
---
|
13 |
-
|
14 |
-
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/privacy/$types.d.ts
DELETED
@@ -1,15 +0,0 @@
|
|
1 |
-
import type * as Kit from '@sveltejs/kit';
|
2 |
-
|
3 |
-
type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
|
4 |
-
type RouteParams = { }
|
5 |
-
type RouteId = '/privacy';
|
6 |
-
type MaybeWithVoid<T> = {} extends T ? T | void : T;
|
7 |
-
export type RequiredKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T];
|
8 |
-
type OutputDataShape<T> = MaybeWithVoid<Omit<App.PageData, RequiredKeys<T>> & Partial<Pick<App.PageData, keyof T & keyof App.PageData>> & Record<string, any>>
|
9 |
-
type EnsureDefined<T> = T extends null | undefined ? {} : T;
|
10 |
-
type OptionalUnion<U extends Record<string, any>, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude<A, keyof U>]?: never } & U : never;
|
11 |
-
export type Snapshot<T = any> = Kit.Snapshot<T>;
|
12 |
-
type PageParentData = EnsureDefined<import('../$types.js').LayoutData>;
|
13 |
-
|
14 |
-
export type PageServerData = null;
|
15 |
-
export type PageData = Expand<PageParentData>;
|
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/styles/main.css
DELETED
@@ -1,17 +0,0 @@
|
|
1 |
-
@import "./highlight-js.css";
|
2 |
-
|
3 |
-
@tailwind base;
|
4 |
-
@tailwind components;
|
5 |
-
@tailwind utilities;
|
6 |
-
|
7 |
-
@layer components {
|
8 |
-
.btn {
|
9 |
-
@apply inline-flex flex-shrink-0 cursor-pointer select-none items-center justify-center whitespace-nowrap outline-none transition-all focus:ring disabled:cursor-default;
|
10 |
-
}
|
11 |
-
}
|
12 |
-
|
13 |
-
@layer utilities {
|
14 |
-
.scrollbar-custom {
|
15 |
-
@apply scrollbar-thin scrollbar-track-transparent scrollbar-thumb-black/10 scrollbar-thumb-rounded-full scrollbar-w-1 hover:scrollbar-thumb-black/20 dark:scrollbar-thumb-white/10 dark:hover:scrollbar-thumb-white/20;
|
16 |
-
}
|
17 |
-
}
|
spaces/AkitoP/umamusume_bert_vits2/transforms.py
DELETED
@@ -1,209 +0,0 @@
|
|
1 |
-
import torch
|
2 |
-
from torch.nn import functional as F
|
3 |
-
|
4 |
-
import numpy as np
|
5 |
-
|
6 |
-
|
7 |
-
DEFAULT_MIN_BIN_WIDTH = 1e-3
|
8 |
-
DEFAULT_MIN_BIN_HEIGHT = 1e-3
|
9 |
-
DEFAULT_MIN_DERIVATIVE = 1e-3
|
10 |
-
|
11 |
-
|
12 |
-
def piecewise_rational_quadratic_transform(
|
13 |
-
inputs,
|
14 |
-
unnormalized_widths,
|
15 |
-
unnormalized_heights,
|
16 |
-
unnormalized_derivatives,
|
17 |
-
inverse=False,
|
18 |
-
tails=None,
|
19 |
-
tail_bound=1.0,
|
20 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
21 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
22 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
23 |
-
):
|
24 |
-
if tails is None:
|
25 |
-
spline_fn = rational_quadratic_spline
|
26 |
-
spline_kwargs = {}
|
27 |
-
else:
|
28 |
-
spline_fn = unconstrained_rational_quadratic_spline
|
29 |
-
spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
|
30 |
-
|
31 |
-
outputs, logabsdet = spline_fn(
|
32 |
-
inputs=inputs,
|
33 |
-
unnormalized_widths=unnormalized_widths,
|
34 |
-
unnormalized_heights=unnormalized_heights,
|
35 |
-
unnormalized_derivatives=unnormalized_derivatives,
|
36 |
-
inverse=inverse,
|
37 |
-
min_bin_width=min_bin_width,
|
38 |
-
min_bin_height=min_bin_height,
|
39 |
-
min_derivative=min_derivative,
|
40 |
-
**spline_kwargs
|
41 |
-
)
|
42 |
-
return outputs, logabsdet
|
43 |
-
|
44 |
-
|
45 |
-
def searchsorted(bin_locations, inputs, eps=1e-6):
|
46 |
-
bin_locations[..., -1] += eps
|
47 |
-
return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
|
48 |
-
|
49 |
-
|
50 |
-
def unconstrained_rational_quadratic_spline(
|
51 |
-
inputs,
|
52 |
-
unnormalized_widths,
|
53 |
-
unnormalized_heights,
|
54 |
-
unnormalized_derivatives,
|
55 |
-
inverse=False,
|
56 |
-
tails="linear",
|
57 |
-
tail_bound=1.0,
|
58 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
59 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
60 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
61 |
-
):
|
62 |
-
inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
|
63 |
-
outside_interval_mask = ~inside_interval_mask
|
64 |
-
|
65 |
-
outputs = torch.zeros_like(inputs)
|
66 |
-
logabsdet = torch.zeros_like(inputs)
|
67 |
-
|
68 |
-
if tails == "linear":
|
69 |
-
unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
|
70 |
-
constant = np.log(np.exp(1 - min_derivative) - 1)
|
71 |
-
unnormalized_derivatives[..., 0] = constant
|
72 |
-
unnormalized_derivatives[..., -1] = constant
|
73 |
-
|
74 |
-
outputs[outside_interval_mask] = inputs[outside_interval_mask]
|
75 |
-
logabsdet[outside_interval_mask] = 0
|
76 |
-
else:
|
77 |
-
raise RuntimeError("{} tails are not implemented.".format(tails))
|
78 |
-
|
79 |
-
(
|
80 |
-
outputs[inside_interval_mask],
|
81 |
-
logabsdet[inside_interval_mask],
|
82 |
-
) = rational_quadratic_spline(
|
83 |
-
inputs=inputs[inside_interval_mask],
|
84 |
-
unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
|
85 |
-
unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
|
86 |
-
unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
|
87 |
-
inverse=inverse,
|
88 |
-
left=-tail_bound,
|
89 |
-
right=tail_bound,
|
90 |
-
bottom=-tail_bound,
|
91 |
-
top=tail_bound,
|
92 |
-
min_bin_width=min_bin_width,
|
93 |
-
min_bin_height=min_bin_height,
|
94 |
-
min_derivative=min_derivative,
|
95 |
-
)
|
96 |
-
|
97 |
-
return outputs, logabsdet
|
98 |
-
|
99 |
-
|
100 |
-
def rational_quadratic_spline(
|
101 |
-
inputs,
|
102 |
-
unnormalized_widths,
|
103 |
-
unnormalized_heights,
|
104 |
-
unnormalized_derivatives,
|
105 |
-
inverse=False,
|
106 |
-
left=0.0,
|
107 |
-
right=1.0,
|
108 |
-
bottom=0.0,
|
109 |
-
top=1.0,
|
110 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
111 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
112 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
113 |
-
):
|
114 |
-
if torch.min(inputs) < left or torch.max(inputs) > right:
|
115 |
-
raise ValueError("Input to a transform is not within its domain")
|
116 |
-
|
117 |
-
num_bins = unnormalized_widths.shape[-1]
|
118 |
-
|
119 |
-
if min_bin_width * num_bins > 1.0:
|
120 |
-
raise ValueError("Minimal bin width too large for the number of bins")
|
121 |
-
if min_bin_height * num_bins > 1.0:
|
122 |
-
raise ValueError("Minimal bin height too large for the number of bins")
|
123 |
-
|
124 |
-
widths = F.softmax(unnormalized_widths, dim=-1)
|
125 |
-
widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
|
126 |
-
cumwidths = torch.cumsum(widths, dim=-1)
|
127 |
-
cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
|
128 |
-
cumwidths = (right - left) * cumwidths + left
|
129 |
-
cumwidths[..., 0] = left
|
130 |
-
cumwidths[..., -1] = right
|
131 |
-
widths = cumwidths[..., 1:] - cumwidths[..., :-1]
|
132 |
-
|
133 |
-
derivatives = min_derivative + F.softplus(unnormalized_derivatives)
|
134 |
-
|
135 |
-
heights = F.softmax(unnormalized_heights, dim=-1)
|
136 |
-
heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
|
137 |
-
cumheights = torch.cumsum(heights, dim=-1)
|
138 |
-
cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
|
139 |
-
cumheights = (top - bottom) * cumheights + bottom
|
140 |
-
cumheights[..., 0] = bottom
|
141 |
-
cumheights[..., -1] = top
|
142 |
-
heights = cumheights[..., 1:] - cumheights[..., :-1]
|
143 |
-
|
144 |
-
if inverse:
|
145 |
-
bin_idx = searchsorted(cumheights, inputs)[..., None]
|
146 |
-
else:
|
147 |
-
bin_idx = searchsorted(cumwidths, inputs)[..., None]
|
148 |
-
|
149 |
-
input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
|
150 |
-
input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
|
151 |
-
|
152 |
-
input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
|
153 |
-
delta = heights / widths
|
154 |
-
input_delta = delta.gather(-1, bin_idx)[..., 0]
|
155 |
-
|
156 |
-
input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
|
157 |
-
input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
|
158 |
-
|
159 |
-
input_heights = heights.gather(-1, bin_idx)[..., 0]
|
160 |
-
|
161 |
-
if inverse:
|
162 |
-
a = (inputs - input_cumheights) * (
|
163 |
-
input_derivatives + input_derivatives_plus_one - 2 * input_delta
|
164 |
-
) + input_heights * (input_delta - input_derivatives)
|
165 |
-
b = input_heights * input_derivatives - (inputs - input_cumheights) * (
|
166 |
-
input_derivatives + input_derivatives_plus_one - 2 * input_delta
|
167 |
-
)
|
168 |
-
c = -input_delta * (inputs - input_cumheights)
|
169 |
-
|
170 |
-
discriminant = b.pow(2) - 4 * a * c
|
171 |
-
assert (discriminant >= 0).all()
|
172 |
-
|
173 |
-
root = (2 * c) / (-b - torch.sqrt(discriminant))
|
174 |
-
outputs = root * input_bin_widths + input_cumwidths
|
175 |
-
|
176 |
-
theta_one_minus_theta = root * (1 - root)
|
177 |
-
denominator = input_delta + (
|
178 |
-
(input_derivatives + input_derivatives_plus_one - 2 * input_delta)
|
179 |
-
* theta_one_minus_theta
|
180 |
-
)
|
181 |
-
derivative_numerator = input_delta.pow(2) * (
|
182 |
-
input_derivatives_plus_one * root.pow(2)
|
183 |
-
+ 2 * input_delta * theta_one_minus_theta
|
184 |
-
+ input_derivatives * (1 - root).pow(2)
|
185 |
-
)
|
186 |
-
logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
|
187 |
-
|
188 |
-
return outputs, -logabsdet
|
189 |
-
else:
|
190 |
-
theta = (inputs - input_cumwidths) / input_bin_widths
|
191 |
-
theta_one_minus_theta = theta * (1 - theta)
|
192 |
-
|
193 |
-
numerator = input_heights * (
|
194 |
-
input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
|
195 |
-
)
|
196 |
-
denominator = input_delta + (
|
197 |
-
(input_derivatives + input_derivatives_plus_one - 2 * input_delta)
|
198 |
-
* theta_one_minus_theta
|
199 |
-
)
|
200 |
-
outputs = input_cumheights + numerator / denominator
|
201 |
-
|
202 |
-
derivative_numerator = input_delta.pow(2) * (
|
203 |
-
input_derivatives_plus_one * theta.pow(2)
|
204 |
-
+ 2 * input_delta * theta_one_minus_theta
|
205 |
-
+ input_derivatives * (1 - theta).pow(2)
|
206 |
-
)
|
207 |
-
logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
|
208 |
-
|
209 |
-
return outputs, logabsdet
|
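As a sanity check on the spline code above, a forward pass through `piecewise_rational_quadratic_transform` followed by the inverse pass should recover the inputs, with the two log-determinants cancelling. A minimal sketch (shapes follow the function's expectations: K bins need K widths, K heights and K-1 interior derivatives when `tails="linear"`; all parameter values here are random and illustrative):

```python
import torch
from transforms import piecewise_rational_quadratic_transform  # transforms.py as it existed before this deletion

torch.manual_seed(0)
N, K = 8, 10
x = torch.rand(N) * 2 - 1        # inputs inside [-tail_bound, tail_bound]
w = torch.randn(N, K)            # unnormalized bin widths
h = torch.randn(N, K)            # unnormalized bin heights
d = torch.randn(N, K - 1)        # unnormalized interior derivatives

y, logdet = piecewise_rational_quadratic_transform(x, w, h, d, tails="linear", tail_bound=1.0)
x_rec, inv_logdet = piecewise_rational_quadratic_transform(y, w, h, d, inverse=True, tails="linear", tail_bound=1.0)

assert torch.allclose(x, x_rec, atol=1e-4)          # round trip recovers the inputs
assert torch.allclose(logdet, -inv_logdet, atol=1e-4)  # log-determinants cancel
```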
spaces/Alashazam/StoryGenerator/app.py
DELETED
@@ -1,15 +0,0 @@
|
|
1 |
-
import gradio as gr
|
2 |
-
from gradio import inputs
|
3 |
-
description = "Story generation with GPT-2"
|
4 |
-
interface = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator",
|
5 |
-
title = "Story Generation with GPT-2",
|
6 |
-
inputs = [
|
7 |
-
gr.inputs.Textbox(lines=7, label="Story"),
|
8 |
-
],
|
9 |
-
description=description,
|
10 |
-
examples=[["Adventurer is approached by a mysterious stranger in the tavern for a new quest"],
|
11 |
-
["A skilled pilot drives a spaceship ino a new quest"],
|
12 |
-
["A wizard learn spells for a quest"]
|
13 |
-
]
|
14 |
-
)
|
15 |
-
interface.launch()
|
spaces/Altinas/vits-uma-genshin-honkais/text/symbols.py
DELETED
@@ -1,39 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-'''# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-'''
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
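
VITS-style front ends consume this `symbols` list by mapping each cleaned character to its index. A hedged sketch of that companion step (the helper below is illustrative, not part of the deleted file):

```py
# Illustrative companion helper: turn cleaned text into integer ids by
# indexing into the `symbols` list defined above.
_symbol_to_id = {s: i for i, s in enumerate(symbols)}

def text_to_ids(cleaned_text):
    # Characters outside the symbol set are skipped, a common convention.
    return [_symbol_to_id[ch] for ch in cleaned_text if ch in _symbol_to_id]
```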
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/ddim.md
DELETED
@@ -1,88 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-# Denoising Diffusion Implicit Models (DDIM)
-
-## Overview
-
-[Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
-
-The abstract of the paper is the following:
-
-*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training,
-yet they require simulating a Markov chain for many steps to produce a sample.
-To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models
-with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process.
-We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from.
-We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off
-computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
-
-The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim).
-For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
-
-### Experimental: "Common Diffusion Noise Schedules and Sample Steps are Flawed":
-
-The paper **[Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/abs/2305.08891)**
-claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion.
-
-The abstract reads as follows:
-
-*We discover that common diffusion noise schedules do not enforce the last timestep to have zero signal-to-noise ratio (SNR),
-and some implementations of diffusion samplers do not start from the last timestep.
-Such designs are flawed and do not reflect the fact that the model is given pure Gaussian noise at inference, creating a discrepancy between training and inference.
-We show that the flawed design causes real problems in existing implementations.
-In Stable Diffusion, it severely limits the model to only generate images with medium brightness and
-prevents it from generating very bright and dark samples. We propose a few simple fixes:
-- (1) rescale the noise schedule to enforce zero terminal SNR;
-- (2) train the model with v prediction;
-- (3) change the sampler to always start from the last timestep;
-- (4) rescale classifier-free guidance to prevent over-exposure.
-These simple changes ensure the diffusion process is congruent between training and inference and
-allow the model to generate samples more faithful to the original data distribution.*
-
-You can apply all of these changes in `diffusers` when using [`DDIMScheduler`]:
-- (1) rescale the noise schedule to enforce zero terminal SNR;
-```py
-pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)
-```
-- (2) train the model with v prediction;
-Continue fine-tuning a checkpoint with [`train_text_to_image.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [`train_text_to_image_lora.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)
-and `--prediction_type="v_prediction"`.
-- (3) change the sampler to always start from the last timestep;
-```py
-pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
-```
-- (4) rescale classifier-free guidance to prevent over-exposure.
-```py
-pipe(..., guidance_rescale=0.7)
-```
-
-An example is to use [this checkpoint](https://huggingface.co/ptx0/pseudo-journey-v2)
-which has been fine-tuned using the `"v_prediction"`.
-
-The checkpoint can then be run in inference as follows:
-
-```py
-from diffusers import DiffusionPipeline, DDIMScheduler
-
-pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
-pipe.scheduler = DDIMScheduler.from_config(
-    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
-)
-pipe.to("cuda")
-
-prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
-image = pipeline(prompt, guidance_rescale=0.7).images[0]
-```
-
-## DDIMScheduler
-[[autodoc]] DDIMScheduler
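
The last snippet in the deleted page builds `pipe` but calls `pipeline(...)` and uses `torch.float16` without importing `torch`. A self-consistent version of the same recipe, assuming the `ptx0/pseudo-journey-v2` checkpoint is still available, would be:

```py
# Hedged restatement of the deleted doc's final example with the missing
# import added and a consistent variable name.
import torch
from diffusers import DiffusionPipeline, DDIMScheduler

pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipe.to("cuda")

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipe(prompt, guidance_rescale=0.7).images[0]
```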
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/pipeline_flax_utils.py
DELETED
@@ -1,557 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team.
-# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import importlib
-import inspect
-import os
-from typing import Any, Dict, List, Optional, Union
-
-import flax
-import numpy as np
-import PIL
-from flax.core.frozen_dict import FrozenDict
-from huggingface_hub import snapshot_download
-from PIL import Image
-from tqdm.auto import tqdm
-
-from ..configuration_utils import ConfigMixin
-from ..models.modeling_flax_utils import FLAX_WEIGHTS_NAME, FlaxModelMixin
-from ..schedulers.scheduling_utils_flax import SCHEDULER_CONFIG_NAME, FlaxSchedulerMixin
-from ..utils import CONFIG_NAME, DIFFUSERS_CACHE, BaseOutput, http_user_agent, is_transformers_available, logging
-
-
-if is_transformers_available():
-    from transformers import FlaxPreTrainedModel
-
-INDEX_FILE = "diffusion_flax_model.bin"
-
-
-logger = logging.get_logger(__name__)
-
-
-LOADABLE_CLASSES = {
-    "diffusers": {
-        "FlaxModelMixin": ["save_pretrained", "from_pretrained"],
-        "FlaxSchedulerMixin": ["save_pretrained", "from_pretrained"],
-        "FlaxDiffusionPipeline": ["save_pretrained", "from_pretrained"],
-    },
-    "transformers": {
-        "PreTrainedTokenizer": ["save_pretrained", "from_pretrained"],
-        "PreTrainedTokenizerFast": ["save_pretrained", "from_pretrained"],
-        "FlaxPreTrainedModel": ["save_pretrained", "from_pretrained"],
-        "FeatureExtractionMixin": ["save_pretrained", "from_pretrained"],
-        "ProcessorMixin": ["save_pretrained", "from_pretrained"],
-        "ImageProcessingMixin": ["save_pretrained", "from_pretrained"],
-    },
-}
-
-ALL_IMPORTABLE_CLASSES = {}
-for library in LOADABLE_CLASSES:
-    ALL_IMPORTABLE_CLASSES.update(LOADABLE_CLASSES[library])
-
-
-def import_flax_or_no_model(module, class_name):
-    try:
-        # 1. First make sure that if a Flax object is present, import this one
-        class_obj = getattr(module, "Flax" + class_name)
-    except AttributeError:
-        # 2. If this doesn't work, it's not a model and we don't append "Flax"
-        class_obj = getattr(module, class_name)
-    except AttributeError:
-        raise ValueError(f"Neither Flax{class_name} nor {class_name} exist in {module}")
-
-    return class_obj
-
-
-@flax.struct.dataclass
-class FlaxImagePipelineOutput(BaseOutput):
-    """
-    Output class for image pipelines.
-
-    Args:
-        images (`List[PIL.Image.Image]` or `np.ndarray`)
-            List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
-            num_channels)`.
-    """
-
-    images: Union[List[PIL.Image.Image], np.ndarray]
-
-
-class FlaxDiffusionPipeline(ConfigMixin):
-    r"""
-    Base class for Flax-based pipelines.
-
-    [`FlaxDiffusionPipeline`] stores all components (models, schedulers, and processors) for diffusion pipelines and
-    provides methods for loading, downloading and saving models. It also includes methods to:
-
-    - enable/disable the progress bar for the denoising iteration
-
-    Class attributes:
-
-    - **config_name** ([`str`]) -- The configuration filename that stores the class and module names of all the
-      diffusion pipeline's components.
-    """
-    config_name = "model_index.json"
-
-    def register_modules(self, **kwargs):
-        # import it here to avoid circular import
-        from diffusers import pipelines
-
-        for name, module in kwargs.items():
-            if module is None:
-                register_dict = {name: (None, None)}
-            else:
-                # retrieve library
-                library = module.__module__.split(".")[0]
-
-                # check if the module is a pipeline module
-                pipeline_dir = module.__module__.split(".")[-2]
-                path = module.__module__.split(".")
-                is_pipeline_module = pipeline_dir in path and hasattr(pipelines, pipeline_dir)
-
-                # if library is not in LOADABLE_CLASSES, then it is a custom module.
-                # Or if it's a pipeline module, then the module is inside the pipeline
-                # folder so we set the library to module name.
-                if library not in LOADABLE_CLASSES or is_pipeline_module:
-                    library = pipeline_dir
-
-                # retrieve class_name
-                class_name = module.__class__.__name__
-
-                register_dict = {name: (library, class_name)}
-
-            # save model index config
-            self.register_to_config(**register_dict)
-
-            # set models
-            setattr(self, name, module)
-
-    def save_pretrained(self, save_directory: Union[str, os.PathLike], params: Union[Dict, FrozenDict]):
-        # TODO: handle inference_state
-        """
-        Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its
-        class implements both a save and loading method. The pipeline is easily reloaded using the
-        [`~FlaxDiffusionPipeline.from_pretrained`] class method.
-
-        Arguments:
-            save_directory (`str` or `os.PathLike`):
-                Directory to which to save. Will be created if it doesn't exist.
-        """
-        self.save_config(save_directory)
-
-        model_index_dict = dict(self.config)
-        model_index_dict.pop("_class_name")
-        model_index_dict.pop("_diffusers_version")
-        model_index_dict.pop("_module", None)
-
-        for pipeline_component_name in model_index_dict.keys():
-            sub_model = getattr(self, pipeline_component_name)
-            if sub_model is None:
-                # edge case for saving a pipeline with safety_checker=None
-                continue
-
-            model_cls = sub_model.__class__
-
-            save_method_name = None
-            # search for the model's base class in LOADABLE_CLASSES
-            for library_name, library_classes in LOADABLE_CLASSES.items():
-                library = importlib.import_module(library_name)
-                for base_class, save_load_methods in library_classes.items():
-                    class_candidate = getattr(library, base_class, None)
-                    if class_candidate is not None and issubclass(model_cls, class_candidate):
-                        # if we found a suitable base class in LOADABLE_CLASSES then grab its save method
-                        save_method_name = save_load_methods[0]
-                        break
-                if save_method_name is not None:
-                    break
-
-            save_method = getattr(sub_model, save_method_name)
-            expects_params = "params" in set(inspect.signature(save_method).parameters.keys())
-
-            if expects_params:
-                save_method(
-                    os.path.join(save_directory, pipeline_component_name), params=params[pipeline_component_name]
-                )
-            else:
-                save_method(os.path.join(save_directory, pipeline_component_name))
-
-    @classmethod
-    def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
-        r"""
-        Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights.
-
-        The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated.
-
-        If you get the error message below, you need to finetune the weights for your downstream task:
-
-        ```
-        Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
-        ```
-
-        Parameters:
-            pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*):
-                Can be either:
-
-                    - A string, the *repo id* (for example `runwayml/stable-diffusion-v1-5`) of a pretrained pipeline
-                      hosted on the Hub.
-                    - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
-                      using [`~FlaxDiffusionPipeline.save_pretrained`].
-            dtype (`str` or `jnp.dtype`, *optional*):
-                Override the default `jnp.dtype` and load the model under this dtype. If `"auto"`, the dtype is
-                automatically derived from the model's weights.
-            force_download (`bool`, *optional*, defaults to `False`):
-                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
-                cached versions if they exist.
-            resume_download (`bool`, *optional*, defaults to `False`):
-                Whether or not to resume downloading the model weights and configuration files. If set to `False`, any
-                incompletely downloaded files are deleted.
-            proxies (`Dict[str, str]`, *optional*):
-                A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
-                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
-            output_loading_info(`bool`, *optional*, defaults to `False`):
-                Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-            local_files_only (`bool`, *optional*, defaults to `False`):
-                Whether to only load local model weights and configuration files or not. If set to `True`, the model
-                won't be downloaded from the Hub.
-            use_auth_token (`str` or *bool*, *optional*):
-                The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
-                `diffusers-cli login` (stored in `~/.huggingface`) is used.
-            revision (`str`, *optional*, defaults to `"main"`):
-                The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
-                allowed by Git.
-            mirror (`str`, *optional*):
-                Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
-                guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
-                information.
-            kwargs (remaining dictionary of keyword arguments, *optional*):
-                Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline
-                class. The overwritten components are passed directly to the pipelines `__init__` method.
-
-        <Tip>
-
-        To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with
-        `huggingface-cli login`. You can also activate the special
-        [“offline-mode”](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
-        firewalled environment.
-
-        </Tip>
-
-        Examples:
-
-        ```py
-        >>> from diffusers import FlaxDiffusionPipeline
-
-        >>> # Download pipeline from huggingface.co and cache.
-        >>> # Requires to be logged in to Hugging Face hub,
-        >>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens)
-        >>> pipeline, params = FlaxDiffusionPipeline.from_pretrained(
-        ...     "runwayml/stable-diffusion-v1-5",
-        ...     revision="bf16",
-        ...     dtype=jnp.bfloat16,
-        ... )
-
-        >>> # Download pipeline, but use a different scheduler
-        >>> from diffusers import FlaxDPMSolverMultistepScheduler
-
-        >>> model_id = "runwayml/stable-diffusion-v1-5"
-        >>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
-        ...     model_id,
-        ...     subfolder="scheduler",
-        ... )
-
-        >>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained(
-        ...     model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp
-        ... )
-        >>> dpm_params["scheduler"] = dpmpp_state
-        ```
-        """
-        cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
-        resume_download = kwargs.pop("resume_download", False)
-        proxies = kwargs.pop("proxies", None)
-        local_files_only = kwargs.pop("local_files_only", False)
-        use_auth_token = kwargs.pop("use_auth_token", None)
-        revision = kwargs.pop("revision", None)
-        from_pt = kwargs.pop("from_pt", False)
-        use_memory_efficient_attention = kwargs.pop("use_memory_efficient_attention", False)
-        dtype = kwargs.pop("dtype", None)
-
-        # 1. Download the checkpoints and configs
-        # use snapshot download here to get it working from from_pretrained
-        if not os.path.isdir(pretrained_model_name_or_path):
-            config_dict = cls.load_config(
-                pretrained_model_name_or_path,
-                cache_dir=cache_dir,
-                resume_download=resume_download,
-                proxies=proxies,
-                local_files_only=local_files_only,
-                use_auth_token=use_auth_token,
-                revision=revision,
-            )
-            # make sure we only download sub-folders and `diffusers` filenames
-            folder_names = [k for k in config_dict.keys() if not k.startswith("_")]
-            allow_patterns = [os.path.join(k, "*") for k in folder_names]
-            allow_patterns += [FLAX_WEIGHTS_NAME, SCHEDULER_CONFIG_NAME, CONFIG_NAME, cls.config_name]
-
-            # make sure we don't download PyTorch weights, unless when using from_pt
-            ignore_patterns = "*.bin" if not from_pt else []
-
-            if cls != FlaxDiffusionPipeline:
-                requested_pipeline_class = cls.__name__
-            else:
-                requested_pipeline_class = config_dict.get("_class_name", cls.__name__)
-                requested_pipeline_class = (
-                    requested_pipeline_class
-                    if requested_pipeline_class.startswith("Flax")
-                    else "Flax" + requested_pipeline_class
-                )
-
-            user_agent = {"pipeline_class": requested_pipeline_class}
-            user_agent = http_user_agent(user_agent)
-
-            # download all allow_patterns
-            cached_folder = snapshot_download(
-                pretrained_model_name_or_path,
-                cache_dir=cache_dir,
-                resume_download=resume_download,
-                proxies=proxies,
-                local_files_only=local_files_only,
-                use_auth_token=use_auth_token,
-                revision=revision,
-                allow_patterns=allow_patterns,
-                ignore_patterns=ignore_patterns,
-                user_agent=user_agent,
-            )
-        else:
-            cached_folder = pretrained_model_name_or_path
-
-        config_dict = cls.load_config(cached_folder)
-
-        # 2. Load the pipeline class, if using custom module then load it from the hub
-        # if we load from explicit class, let's use it
-        if cls != FlaxDiffusionPipeline:
-            pipeline_class = cls
-        else:
-            diffusers_module = importlib.import_module(cls.__module__.split(".")[0])
-            class_name = (
-                config_dict["_class_name"]
-                if config_dict["_class_name"].startswith("Flax")
-                else "Flax" + config_dict["_class_name"]
-            )
-            pipeline_class = getattr(diffusers_module, class_name)
-
-        # some modules can be passed directly to the init
-        # in this case they are already instantiated in `kwargs`
-        # extract them here
-        expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class)
-        passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs}
-
-        init_dict, _, _ = pipeline_class.extract_init_dict(config_dict, **kwargs)
-
-        init_kwargs = {}
-
-        # inference_params
-        params = {}
-
-        # import it here to avoid circular import
-        from diffusers import pipelines
-
-        # 3. Load each module in the pipeline
-        for name, (library_name, class_name) in init_dict.items():
-            if class_name is None:
-                # edge case for when the pipeline was saved with safety_checker=None
-                init_kwargs[name] = None
-                continue
-
-            is_pipeline_module = hasattr(pipelines, library_name)
-            loaded_sub_model = None
-            sub_model_should_be_defined = True
-
-            # if the model is in a pipeline module, then we load it from the pipeline
-            if name in passed_class_obj:
-                # 1. check that passed_class_obj has correct parent class
-                if not is_pipeline_module:
-                    library = importlib.import_module(library_name)
-                    class_obj = getattr(library, class_name)
-                    importable_classes = LOADABLE_CLASSES[library_name]
-                    class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()}
-
-                    expected_class_obj = None
-                    for class_name, class_candidate in class_candidates.items():
-                        if class_candidate is not None and issubclass(class_obj, class_candidate):
-                            expected_class_obj = class_candidate
-
-                    if not issubclass(passed_class_obj[name].__class__, expected_class_obj):
-                        raise ValueError(
-                            f"{passed_class_obj[name]} is of type: {type(passed_class_obj[name])}, but should be"
-                            f" {expected_class_obj}"
-                        )
-                elif passed_class_obj[name] is None:
-                    logger.warning(
-                        f"You have passed `None` for {name} to disable its functionality in {pipeline_class}. Note"
-                        f" that this might lead to problems when using {pipeline_class} and is not recommended."
-                    )
-                    sub_model_should_be_defined = False
-                else:
-                    logger.warning(
-                        f"You have passed a non-standard module {passed_class_obj[name]}. We cannot verify whether it"
-                        " has the correct type"
-                    )
-
-                # set passed class object
-                loaded_sub_model = passed_class_obj[name]
-            elif is_pipeline_module:
-                pipeline_module = getattr(pipelines, library_name)
-                class_obj = import_flax_or_no_model(pipeline_module, class_name)
-
-                importable_classes = ALL_IMPORTABLE_CLASSES
-                class_candidates = {c: class_obj for c in importable_classes.keys()}
-            else:
-                # else we just import it from the library.
-                library = importlib.import_module(library_name)
-                class_obj = import_flax_or_no_model(library, class_name)
-
-                importable_classes = LOADABLE_CLASSES[library_name]
-                class_candidates = {c: getattr(library, c, None) for c in importable_classes.keys()}
-
-            if loaded_sub_model is None and sub_model_should_be_defined:
-                load_method_name = None
-                for class_name, class_candidate in class_candidates.items():
-                    if class_candidate is not None and issubclass(class_obj, class_candidate):
-                        load_method_name = importable_classes[class_name][1]
-
-                load_method = getattr(class_obj, load_method_name)
-
-                # check if the module is in a subdirectory
-                if os.path.isdir(os.path.join(cached_folder, name)):
-                    loadable_folder = os.path.join(cached_folder, name)
-                else:
-                    loaded_sub_model = cached_folder
-
-                if issubclass(class_obj, FlaxModelMixin):
-                    loaded_sub_model, loaded_params = load_method(
-                        loadable_folder,
-                        from_pt=from_pt,
-                        use_memory_efficient_attention=use_memory_efficient_attention,
-                        dtype=dtype,
-                    )
-                    params[name] = loaded_params
-                elif is_transformers_available() and issubclass(class_obj, FlaxPreTrainedModel):
-                    if from_pt:
-                        # TODO(Suraj): Fix this in Transformers. We should be able to use `_do_init=False` here
-                        loaded_sub_model = load_method(loadable_folder, from_pt=from_pt)
-                        loaded_params = loaded_sub_model.params
-                        del loaded_sub_model._params
-                    else:
-                        loaded_sub_model, loaded_params = load_method(loadable_folder, _do_init=False)
-                    params[name] = loaded_params
-                elif issubclass(class_obj, FlaxSchedulerMixin):
-                    loaded_sub_model, scheduler_state = load_method(loadable_folder)
-                    params[name] = scheduler_state
-                else:
-                    loaded_sub_model = load_method(loadable_folder)
-
-            init_kwargs[name] = loaded_sub_model  # UNet(...), # DiffusionSchedule(...)
-
-        # 4. Potentially add passed objects if expected
-        missing_modules = set(expected_modules) - set(init_kwargs.keys())
-        passed_modules = list(passed_class_obj.keys())
-
-        if len(missing_modules) > 0 and missing_modules <= set(passed_modules):
-            for module in missing_modules:
-                init_kwargs[module] = passed_class_obj.get(module, None)
-        elif len(missing_modules) > 0:
-            passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - optional_kwargs
-            raise ValueError(
-                f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed."
-            )
-
-        model = pipeline_class(**init_kwargs, dtype=dtype)
-        return model, params
-
-    @staticmethod
-    def _get_signature_keys(obj):
-        parameters = inspect.signature(obj.__init__).parameters
-        required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty}
-        optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty})
-        expected_modules = set(required_parameters.keys()) - {"self"}
-        return expected_modules, optional_parameters
-
-    @property
-    def components(self) -> Dict[str, Any]:
-        r"""
-
-        The `self.components` property can be useful to run different pipelines with the same weights and
-        configurations to not have to re-allocate memory.
-
-        Examples:
-
-        ```py
-        >>> from diffusers import (
-        ...     FlaxStableDiffusionPipeline,
-        ...     FlaxStableDiffusionImg2ImgPipeline,
-        ... )
-
-        >>> text2img = FlaxStableDiffusionPipeline.from_pretrained(
-        ...     "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
-        ... )
-        >>> img2img = FlaxStableDiffusionImg2ImgPipeline(**text2img.components)
-        ```
-
-        Returns:
-            A dictionary containing all the modules needed to initialize the pipeline.
-        """
-        expected_modules, optional_parameters = self._get_signature_keys(self)
-        components = {
-            k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters
-        }
-
-        if set(components.keys()) != expected_modules:
-            raise ValueError(
-                f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected"
-                f" {expected_modules} to be defined, but {components} are defined."
-            )
-
-        return components
-
-    @staticmethod
-    def numpy_to_pil(images):
-        """
-        Convert a NumPy image or a batch of images to a PIL image.
-        """
-        if images.ndim == 3:
-            images = images[None, ...]
-        images = (images * 255).round().astype("uint8")
-        if images.shape[-1] == 1:
-            # special case for grayscale (single channel) images
-            pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
-        else:
-            pil_images = [Image.fromarray(image) for image in images]
-
-        return pil_images
-
-    # TODO: make it compatible with jax.lax
-    def progress_bar(self, iterable):
-        if not hasattr(self, "_progress_bar_config"):
-            self._progress_bar_config = {}
-        elif not isinstance(self._progress_bar_config, dict):
-            raise ValueError(
-                f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}."
-            )
-
-        return tqdm(iterable, **self._progress_bar_config)
-
-    def set_progress_bar_config(self, **kwargs):
-        self._progress_bar_config = kwargs
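
`from_pretrained` here returns the pipeline plus a separate `params` pytree. A short hedged sketch of how that pair is usually consumed on multiple devices (the model id and the `replicate` step are standard Flax practice, not code from this file):

```py
# Sketch only: replicate the returned params across local devices for pmap.
import jax.numpy as jnp
from flax.jax_utils import replicate
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jnp.bfloat16
)
p_params = replicate(params)  # one weight copy per device; pass to the pmapped call
```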
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/repaint/__init__.py
DELETED
@@ -1 +0,0 @@
-from .pipeline_repaint import RePaintPipeline
spaces/Andy1621/uniformer_image_detection/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py
DELETED
@@ -1,2 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_instaboost_4x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py
DELETED
@@ -1,30 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-
-model = dict(
-    roi_head=dict(
-        type='PISARoIHead',
-        bbox_head=dict(
-            loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
-    train_cfg=dict(
-        rpn_proposal=dict(
-            nms_pre=2000,
-            max_per_img=2000,
-            nms=dict(type='nms', iou_threshold=0.7),
-            min_bbox_size=0),
-        rcnn=dict(
-            sampler=dict(
-                type='ScoreHLRSampler',
-                num=512,
-                pos_fraction=0.25,
-                neg_pos_ub=-1,
-                add_gt_as_proposals=True,
-                k=0.5,
-                bias=0.),
-            isr=dict(k=2, bias=0),
-            carl=dict(k=1, bias=0.2))),
-    test_cfg=dict(
-        rpn=dict(
-            nms_pre=2000,
-            max_per_img=2000,
-            nms=dict(type='nms', iou_threshold=0.7),
-            min_bbox_size=0)))
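
Configs like this override only parts of their `_base_` file and are consumed through mmcv's Config loader. A hedged sketch of the usual mmdetection 2.x workflow (the path assumes an mmdetection checkout):

```py
# Illustrative mmdet 2.x usage; the config path is an assumption.
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py')
model = build_detector(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
```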
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context.py
DELETED
@@ -1,9 +0,0 @@
-_base_ = './fcn_hr18_480x480_80k_pascal_context.py'
-model = dict(
-    pretrained='open-mmlab://msra/hrnetv2_w18_small',
-    backbone=dict(
-        extra=dict(
-            stage1=dict(num_blocks=(2, )),
-            stage2=dict(num_blocks=(2, 2)),
-            stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
-            stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py
DELETED
@@ -1,9 +0,0 @@
-_base_ = './fcn_hr18_512x1024_80k_cityscapes.py'
-model = dict(
-    pretrained='open-mmlab://msra/hrnetv2_w18_small',
-    backbone=dict(
-        extra=dict(
-            stage1=dict(num_blocks=(2, )),
-            stage2=dict(num_blocks=(2, 2)),
-            stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
-            stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
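
Both HRNet-w18-small configs above change only the backbone stage definitions; everything else is deep-merged in from the `_base_` file. A hedged illustration of that merge behaviour (the path is an assumption about the repo layout):

```py
# Illustrative only: mmcv Config deep-merges the override into the base file,
# so heads, schedules and data settings come from fcn_hr18_512x1024_80k_cityscapes.py.
from mmcv import Config

cfg = Config.fromfile('configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py')
print(cfg.model.backbone.extra.stage4.num_blocks)  # (2, 2, 2, 2), from the override
print(cfg.model.pretrained)                        # 'open-mmlab://msra/hrnetv2_w18_small'
```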
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/activation.py
DELETED
@@ -1,92 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version
-from .registry import ACTIVATION_LAYERS
-
-for module in [
-        nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU,
-        nn.Sigmoid, nn.Tanh
-]:
-    ACTIVATION_LAYERS.register_module(module=module)
-
-
-@ACTIVATION_LAYERS.register_module(name='Clip')
-@ACTIVATION_LAYERS.register_module()
-class Clamp(nn.Module):
-    """Clamp activation layer.
-
-    This activation function is to clamp the feature map value within
-    :math:`[min, max]`. More details can be found in ``torch.clamp()``.
-
-    Args:
-        min (Number | optional): Lower-bound of the range to be clamped to.
-            Default to -1.
-        max (Number | optional): Upper-bound of the range to be clamped to.
-            Default to 1.
-    """
-
-    def __init__(self, min=-1., max=1.):
-        super(Clamp, self).__init__()
-        self.min = min
-        self.max = max
-
-    def forward(self, x):
-        """Forward function.
-
-        Args:
-            x (torch.Tensor): The input tensor.
-
-        Returns:
-            torch.Tensor: Clamped tensor.
-        """
-        return torch.clamp(x, min=self.min, max=self.max)
-
-
-class GELU(nn.Module):
-    r"""Applies the Gaussian Error Linear Units function:
-
-    .. math::
-        \text{GELU}(x) = x * \Phi(x)
-    where :math:`\Phi(x)` is the Cumulative Distribution Function for
-    Gaussian Distribution.
-
-    Shape:
-        - Input: :math:`(N, *)` where `*` means, any number of additional
-          dimensions
-        - Output: :math:`(N, *)`, same shape as the input
-
-    .. image:: scripts/activation_images/GELU.png
-
-    Examples::
-
-        >>> m = nn.GELU()
-        >>> input = torch.randn(2)
-        >>> output = m(input)
-    """
-
-    def forward(self, input):
-        return F.gelu(input)
-
-
-if (TORCH_VERSION == 'parrots'
-        or digit_version(TORCH_VERSION) < digit_version('1.4')):
-    ACTIVATION_LAYERS.register_module(module=GELU)
-else:
-    ACTIVATION_LAYERS.register_module(module=nn.GELU)
-
-
-def build_activation_layer(cfg):
-    """Build activation layer.
-
-    Args:
-        cfg (dict): The activation layer config, which should contain:
-            - type (str): Layer type.
-            - layer args: Args needed to instantiate an activation layer.
-
-    Returns:
-        nn.Module: Created activation layer.
-    """
-    return build_from_cfg(cfg, ACTIVATION_LAYERS)
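
Downstream mmcv code builds activations from config dicts via the registry populated above. A brief usage example (the argument values are arbitrary illustrations):

```py
# Illustrative calls against the registry-based builder defined above.
relu = build_activation_layer(dict(type='ReLU', inplace=True))
clamp = build_activation_layer(dict(type='Clamp', min=0.0, max=6.0))  # also registered as 'Clip'
```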
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/box_iou_rotated.py
DELETED
@@ -1,45 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['box_iou_rotated'])
-
-
-def box_iou_rotated(bboxes1, bboxes2, mode='iou', aligned=False):
-    """Return intersection-over-union (Jaccard index) of boxes.
-
-    Both sets of boxes are expected to be in
-    (x_center, y_center, width, height, angle) format.
-
-    If ``aligned`` is ``False``, then calculate the ious between each bbox
-    of bboxes1 and bboxes2, otherwise the ious between each aligned pair of
-    bboxes1 and bboxes2.
-
-    Arguments:
-        boxes1 (Tensor): rotated bboxes 1. \
-            It has shape (N, 5), indicating (x, y, w, h, theta) for each row.
-            Note that theta is in radian.
-        boxes2 (Tensor): rotated bboxes 2. \
-            It has shape (M, 5), indicating (x, y, w, h, theta) for each row.
-            Note that theta is in radian.
-        mode (str): "iou" (intersection over union) or iof (intersection over
-            foreground).
-
-    Returns:
-        ious(Tensor): shape (N, M) if aligned == False else shape (N,)
-    """
-    assert mode in ['iou', 'iof']
-    mode_dict = {'iou': 0, 'iof': 1}
-    mode_flag = mode_dict[mode]
-    rows = bboxes1.size(0)
-    cols = bboxes2.size(0)
-    if aligned:
-        ious = bboxes1.new_zeros(rows)
-    else:
-        ious = bboxes1.new_zeros((rows * cols))
-    bboxes1 = bboxes1.contiguous()
-    bboxes2 = bboxes2.contiguous()
-    ext_module.box_iou_rotated(
-        bboxes1, bboxes2, ious, mode_flag=mode_flag, aligned=aligned)
-    if not aligned:
-        ious = ious.view(rows, cols)
-    return ious
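
A hedged example call following the docstring's (x_center, y_center, w, h, theta-in-radians) convention; it assumes the compiled `_ext` op from mmcv-full is importable:

```py
# Example boxes in (x, y, w, h, theta[rad]) format; the IoU matrix has shape (N, M).
import torch

boxes_a = torch.tensor([[50.0, 50.0, 20.0, 10.0, 0.0]])
boxes_b = torch.tensor([[50.0, 50.0, 20.0, 10.0, 0.3],
                        [10.0, 10.0, 5.0, 5.0, 0.0]])
ious = box_iou_rotated(boxes_a, boxes_b)  # aligned=False -> shape (1, 2)
```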
spaces/Anonymous-sub/Rerender/gmflow_module/gmflow/gmflow.py
DELETED
@@ -1,170 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .backbone import CNNEncoder
-from .transformer import FeatureTransformer, FeatureFlowAttention
-from .matching import global_correlation_softmax, local_correlation_softmax
-from .geometry import flow_warp
-from .utils import normalize_img, feature_add_position
-
-
-class GMFlow(nn.Module):
-    def __init__(self,
-                 num_scales=1,
-                 upsample_factor=8,
-                 feature_channels=128,
-                 attention_type='swin',
-                 num_transformer_layers=6,
-                 ffn_dim_expansion=4,
-                 num_head=1,
-                 **kwargs,
-                 ):
-        super(GMFlow, self).__init__()
-
-        self.num_scales = num_scales
-        self.feature_channels = feature_channels
-        self.upsample_factor = upsample_factor
-        self.attention_type = attention_type
-        self.num_transformer_layers = num_transformer_layers
-
-        # CNN backbone
-        self.backbone = CNNEncoder(output_dim=feature_channels, num_output_scales=num_scales)
-
-        # Transformer
-        self.transformer = FeatureTransformer(num_layers=num_transformer_layers,
-                                              d_model=feature_channels,
-                                              nhead=num_head,
-                                              attention_type=attention_type,
-                                              ffn_dim_expansion=ffn_dim_expansion,
-                                              )
-
-        # flow propagation with self-attn
-        self.feature_flow_attn = FeatureFlowAttention(in_channels=feature_channels)
-
-        # convex upsampling: concat feature0 and flow as input
-        self.upsampler = nn.Sequential(nn.Conv2d(2 + feature_channels, 256, 3, 1, 1),
-                                       nn.ReLU(inplace=True),
-                                       nn.Conv2d(256, upsample_factor ** 2 * 9, 1, 1, 0))
-
-    def extract_feature(self, img0, img1):
-        concat = torch.cat((img0, img1), dim=0)  # [2B, C, H, W]
-        features = self.backbone(concat)  # list of [2B, C, H, W], resolution from high to low
-
-        # reverse: resolution from low to high
-        features = features[::-1]
-
-        feature0, feature1 = [], []
-
-        for i in range(len(features)):
-            feature = features[i]
-            chunks = torch.chunk(feature, 2, 0)  # tuple
-            feature0.append(chunks[0])
-            feature1.append(chunks[1])
-
-        return feature0, feature1
-
-    def upsample_flow(self, flow, feature, bilinear=False, upsample_factor=8,
-                      ):
-        if bilinear:
-            up_flow = F.interpolate(flow, scale_factor=upsample_factor,
-                                    mode='bilinear', align_corners=True) * upsample_factor
-
-        else:
-            # convex upsampling
-            concat = torch.cat((flow, feature), dim=1)
-
-            mask = self.upsampler(concat)
-            b, flow_channel, h, w = flow.shape
-            mask = mask.view(b, 1, 9, self.upsample_factor, self.upsample_factor, h, w)  # [B, 1, 9, K, K, H, W]
-            mask = torch.softmax(mask, dim=2)
-
-            up_flow = F.unfold(self.upsample_factor * flow, [3, 3], padding=1)
-            up_flow = up_flow.view(b, flow_channel, 9, 1, 1, h, w)  # [B, 2, 9, 1, 1, H, W]
-
-            up_flow = torch.sum(mask * up_flow, dim=2)  # [B, 2, K, K, H, W]
-            up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)  # [B, 2, K, H, K, W]
-            up_flow = up_flow.reshape(b, flow_channel, self.upsample_factor * h,
-                                      self.upsample_factor * w)  # [B, 2, K*H, K*W]
-
-        return up_flow
-
-    def forward(self, img0, img1,
-                attn_splits_list=None,
-                corr_radius_list=None,
-                prop_radius_list=None,
-                pred_bidir_flow=False,
-                **kwargs,
-                ):
-
-        results_dict = {}
-        flow_preds = []
-
-        img0, img1 = normalize_img(img0, img1)  # [B, 3, H, W]
-
-        # resolution low to high
-        feature0_list, feature1_list = self.extract_feature(img0, img1)  # list of features
-
-        flow = None
-
-        assert len(attn_splits_list) == len(corr_radius_list) == len(prop_radius_list) == self.num_scales
-
-        for scale_idx in range(self.num_scales):
-            feature0, feature1 = feature0_list[scale_idx], feature1_list[scale_idx]
-
-            if pred_bidir_flow and scale_idx > 0:
-                # predicting bidirectional flow with refinement
-                feature0, feature1 = torch.cat((feature0, feature1), dim=0), torch.cat((feature1, feature0), dim=0)
-
-            upsample_factor = self.upsample_factor * (2 ** (self.num_scales - 1 - scale_idx))
-
-            if scale_idx > 0:
-                flow = F.interpolate(flow, scale_factor=2, mode='bilinear', align_corners=True) * 2
-
-            if flow is not None:
-                flow = flow.detach()
-                feature1 = flow_warp(feature1, flow)  # [B, C, H, W]
-
-            attn_splits = attn_splits_list[scale_idx]
-            corr_radius = corr_radius_list[scale_idx]
-            prop_radius = prop_radius_list[scale_idx]
-
-            # add position to features
-            feature0, feature1 = feature_add_position(feature0, feature1, attn_splits, self.feature_channels)
-
-            # Transformer
-            feature0, feature1 = self.transformer(feature0, feature1, attn_num_splits=attn_splits)
-
-            # correlation and softmax
-            if corr_radius == -1:  # global matching
-                flow_pred = global_correlation_softmax(feature0, feature1, pred_bidir_flow)[0]
-            else:  # local matching
-                flow_pred = local_correlation_softmax(feature0, feature1, corr_radius)[0]
-
-            # flow or residual flow
-            flow = flow + flow_pred if flow is not None else flow_pred
-
-            # upsample to the original resolution for supervison
-            if self.training:  # only need to upsample intermediate flow predictions at training time
-                flow_bilinear = self.upsample_flow(flow, None, bilinear=True, upsample_factor=upsample_factor)
-                flow_preds.append(flow_bilinear)
-
-            # flow propagation with self-attn
-            if pred_bidir_flow and scale_idx == 0:
-                feature0 = torch.cat((feature0, feature1), dim=0)  # [2*B, C, H, W] for propagation
-            flow = self.feature_flow_attn(feature0, flow.detach(),
-                                          local_window_attn=prop_radius > 0,
-                                          local_window_radius=prop_radius)
-
-            # bilinear upsampling at training time except the last one
-            if self.training and scale_idx < self.num_scales - 1:
-                flow_up = self.upsample_flow(flow, feature0, bilinear=True, upsample_factor=upsample_factor)
-                flow_preds.append(flow_up)
-
-            if scale_idx == self.num_scales - 1:
-                flow_up = self.upsample_flow(flow, feature0)
-                flow_preds.append(flow_up)
-
-        results_dict.update({'flow_preds': flow_preds})
-
-        return results_dict
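
`forward` expects per-scale lists for the attention splits and the correlation/propagation radii. A hedged single-scale invocation; the list values below are common GMFlow demo settings, not something defined in this file:

```py
# Sketch of a single-scale forward pass; a radius of -1 selects global matching.
import torch

model = GMFlow(num_scales=1, upsample_factor=8).eval()
img0 = torch.rand(1, 3, 256, 448)
img1 = torch.rand(1, 3, 256, 448)
with torch.no_grad():
    out = model(img0, img1,
                attn_splits_list=[2],
                corr_radius_list=[-1],
                prop_radius_list=[-1])
flow = out['flow_preds'][-1]  # [1, 2, 256, 448] upsampled flow
```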
spaces/Artrajz/vits-simple-api/vits/text/mandarin.py
DELETED
@@ -1,365 +0,0 @@
-import config
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-import logging
-
-logging.getLogger('jieba').setLevel(logging.WARNING)
-jieba.set_dictionary(config.ABS_PATH + '/vits/text/jieba/dict.txt')
-jieba.initialize()
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-    ('a', 'ㄟˉ'),
-    ('b', 'ㄅㄧˋ'),
-    ('c', 'ㄙㄧˉ'),
-    ('d', 'ㄉㄧˋ'),
-    ('e', 'ㄧˋ'),
-    ('f', 'ㄝˊㄈㄨˋ'),
-    ('g', 'ㄐㄧˋ'),
-    ('h', 'ㄝˇㄑㄩˋ'),
-    ('i', 'ㄞˋ'),
-    ('j', 'ㄐㄟˋ'),
-    ('k', 'ㄎㄟˋ'),
-    ('l', 'ㄝˊㄛˋ'),
-    ('m', 'ㄝˊㄇㄨˋ'),
-    ('n', 'ㄣˉ'),
-    ('o', 'ㄡˉ'),
-    ('p', 'ㄆㄧˉ'),
-    ('q', 'ㄎㄧㄡˉ'),
-    ('r', 'ㄚˋ'),
-    ('s', 'ㄝˊㄙˋ'),
-    ('t', 'ㄊㄧˋ'),
-    ('u', 'ㄧㄡˉ'),
-    ('v', 'ㄨㄧˉ'),
-    ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
-    ('x', 'ㄝˉㄎㄨˋㄙˋ'),
-    ('y', 'ㄨㄞˋ'),
-    ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
-    ('ㄅㄛ', 'p⁼wo'),
-    ('ㄆㄛ', 'pʰwo'),
-    ('ㄇㄛ', 'mwo'),
-    ('ㄈㄛ', 'fwo'),
-    ('ㄅ', 'p⁼'),
-    ('ㄆ', 'pʰ'),
-    ('ㄇ', 'm'),
-    ('ㄈ', 'f'),
-    ('ㄉ', 't⁼'),
-    ('ㄊ', 'tʰ'),
-    ('ㄋ', 'n'),
-    ('ㄌ', 'l'),
-    ('ㄍ', 'k⁼'),
-    ('ㄎ', 'kʰ'),
-    ('ㄏ', 'h'),
-    ('ㄐ', 'ʧ⁼'),
-    ('ㄑ', 'ʧʰ'),
-    ('ㄒ', 'ʃ'),
-    ('ㄓ', 'ʦ`⁼'),
-    ('ㄔ', 'ʦ`ʰ'),
-    ('ㄕ', 's`'),
-    ('ㄖ', 'ɹ`'),
-    ('ㄗ', 'ʦ⁼'),
-    ('ㄘ', 'ʦʰ'),
-    ('ㄙ', 's'),
-    ('ㄚ', 'a'),
-    ('ㄛ', 'o'),
-    ('ㄜ', 'ə'),
-    ('ㄝ', 'e'),
-    ('ㄞ', 'ai'),
-    ('ㄟ', 'ei'),
-    ('ㄠ', 'au'),
-    ('ㄡ', 'ou'),
-    ('ㄧㄢ', 'yeNN'),
-    ('ㄢ', 'aNN'),
-    ('ㄧㄣ', 'iNN'),
-    ('ㄣ', 'əNN'),
-    ('ㄤ', 'aNg'),
-    ('ㄧㄥ', 'iNg'),
-    ('ㄨㄥ', 'uNg'),
-    ('ㄩㄥ', 'yuNg'),
-    ('ㄥ', 'əNg'),
-    ('ㄦ', 'əɻ'),
-    ('ㄧ', 'i'),
-    ('ㄨ', 'u'),
-    ('ㄩ', 'ɥ'),
-    ('ˉ', '→'),
-    ('ˊ', '↑'),
-    ('ˇ', '↓↑'),
-    ('ˋ', '↓'),
-    ('˙', ''),
-    (',', ','),
-    ('。', '.'),
-    ('!', '!'),
-    ('?', '?'),
-    ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-    ('ʃy', 'ʃ'),
-    ('ʧʰy', 'ʧʰ'),
-    ('ʧ⁼y', 'ʧ⁼'),
-    ('NN', 'n'),
-    ('Ng', 'ŋ'),
-    ('y', 'j'),
-    ('h', 'x')
-]]
-
-# List of (bopomofo, ipa) pairs:
-_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
-    ('ㄅㄛ', 'p⁼wo'),
-    ('ㄆㄛ', 'pʰwo'),
-    ('ㄇㄛ', 'mwo'),
-    ('ㄈㄛ', 'fwo'),
-    ('ㄅ', 'p⁼'),
-    ('ㄆ', 'pʰ'),
-    ('ㄇ', 'm'),
-    ('ㄈ', 'f'),
-    ('ㄉ', 't⁼'),
-    ('ㄊ', 'tʰ'),
-    ('ㄋ', 'n'),
-    ('ㄌ', 'l'),
-    ('ㄍ', 'k⁼'),
-    ('ㄎ', 'kʰ'),
-    ('ㄏ', 'x'),
-    ('ㄐ', 'tʃ⁼'),
-    ('ㄑ', 'tʃʰ'),
-    ('ㄒ', 'ʃ'),
-    ('ㄓ', 'ts`⁼'),
-    ('ㄔ', 'ts`ʰ'),
-    ('ㄕ', 's`'),
-    ('ㄖ', 'ɹ`'),
-    ('ㄗ', 'ts⁼'),
-    ('ㄘ', 'tsʰ'),
-    ('ㄙ', 's'),
-    ('ㄚ', 'a'),
-    ('ㄛ', 'o'),
-    ('ㄜ', 'ə'),
-    ('ㄝ', 'ɛ'),
-    ('ㄞ', 'aɪ'),
-    ('ㄟ', 'eɪ'),
-    ('ㄠ', 'ɑʊ'),
-    ('ㄡ', 'oʊ'),
-    ('ㄧㄢ', 'jɛn'),
-    ('ㄩㄢ', 'ɥæn'),
-    ('ㄢ', 'an'),
-    ('ㄧㄣ', 'in'),
-    ('ㄩㄣ', 'ɥn'),
-    ('ㄣ', 'ən'),
-    ('ㄤ', 'ɑŋ'),
-    ('ㄧㄥ', 'iŋ'),
-    ('ㄨㄥ', 'ʊŋ'),
-    ('ㄩㄥ', 'jʊŋ'),
-    ('ㄥ', 'əŋ'),
-    ('ㄦ', 'əɻ'),
-    ('ㄧ', 'i'),
-    ('ㄨ', 'u'),
-    ('ㄩ', 'ɥ'),
-    ('ˉ', '→'),
-    ('ˊ', '↑'),
-    ('ˇ', '↓↑'),
-    ('ˋ', '↓'),
-    ('˙', ''),
-    (',', ','),
-    ('。', '.'),
-    ('!', '!'),
-    ('?', '?'),
-    ('—', '-')
-]]
-
-# List of (bopomofo, ipa2) pairs:
-_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
-    ('ㄅㄛ', 'pwo'),
-    ('ㄆㄛ', 'pʰwo'),
-    ('ㄇㄛ', 'mwo'),
-    ('ㄈㄛ', 'fwo'),
-    ('ㄅ', 'p'),
-    ('ㄆ', 'pʰ'),
-    ('ㄇ', 'm'),
-    ('ㄈ', 'f'),
-    ('ㄉ', 't'),
-    ('ㄊ', 'tʰ'),
-    ('ㄋ', 'n'),
-    ('ㄌ', 'l'),
-    ('ㄍ', 'k'),
-    ('ㄎ', 'kʰ'),
-    ('ㄏ', 'h'),
-    ('ㄐ', 'tɕ'),
-    ('ㄑ', 'tɕʰ'),
-    ('ㄒ', 'ɕ'),
-    ('ㄓ', 'tʂ'),
-    ('ㄔ', 'tʂʰ'),
-    ('ㄕ', 'ʂ'),
-    ('ㄖ', 'ɻ'),
-    ('ㄗ', 'ts'),
-    ('ㄘ', 'tsʰ'),
-    ('ㄙ', 's'),
-    ('ㄚ', 'a'),
-    ('ㄛ', 'o'),
-    ('ㄜ', 'ɤ'),
-    ('ㄝ', 'ɛ'),
-    ('ㄞ', 'aɪ'),
-    ('ㄟ', 'eɪ'),
-    ('ㄠ', 'ɑʊ'),
-    ('ㄡ', 'oʊ'),
-    ('ㄧㄢ', 'jɛn'),
-    ('ㄩㄢ', 'yæn'),
-    ('ㄢ', 'an'),
-    ('ㄧㄣ', 'in'),
-    ('ㄩㄣ', 'yn'),
-    ('ㄣ', 'ən'),
-    ('ㄤ', 'ɑŋ'),
-    ('ㄧㄥ', 'iŋ'),
-    ('ㄨㄥ', 'ʊŋ'),
-    ('ㄩㄥ', 'jʊŋ'),
-    ('ㄥ', 'ɤŋ'),
-    ('ㄦ', 'əɻ'),
-    ('ㄧ', 'i'),
-    ('ㄨ', 'u'),
-    ('ㄩ', 'y'),
-    ('ˉ', '˥'),
-    ('ˊ', '˧˥'),
-    ('ˇ', '˨˩˦'),
-    ('ˋ', '˥˩'),
-    ('˙', ''),
-    (',', ','),
-    ('。', '.'),
-    ('!', '!'),
-    ('?', '?'),
-    ('—', '-')
-]]
-
-_symbols_to_chinese = [(re.compile(f'{x[0]}'), x[1]) for x in [
-    ('([0-9]+(?:\.?[0-9]+)?)%', r'百分之\1'),
-    ('([0-9]+)/([0-9]+)', r'\2分之\1'),
-    ('\+', r'加'),
-    ('([0-9]+)-([0-9]+)', r'\1减\2'),
-    ('×', r'乘以'),
-    ('([0-9]+)x([0-9]+)', r'\1乘以\2'),
-    ('([0-9]+)\*([0-9]+)', r'\1乘以\2'),
-    ('÷', r'除以'),
-    ('=', r'等于'),
-    ('≠', r'不等于'),
-]]
-
-
-def symbols_to_chinese(text):
-    for regex, replacement in _symbols_to_chinese:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def number_to_chinese(text):
-    numbers = re.findall(r'[0-9]+(?:\.?[0-9]+)?', text)
-    for number in numbers:
-        text = text.replace(number, cn2an.an2cn(number), 1)
-    return text
-
-
-def number_transform_to_chinese(text):
-    text = cn2an.transform(text, "an2cn")
-    return text
-
-
-def chinese_to_bopomofo(text):
-    text = text.replace('、', ',').replace(';', ',').replace(':', ',')
-    words = jieba.lcut(text, cut_all=False)
-    text = ''
-    for word in words:
-        bopomofos = lazy_pinyin(word, BOPOMOFO)
-        if not re.search('[\u4e00-\u9fff]', word):
-            text += word
-            continue
-        for i in range(len(bopomofos)):
-            bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i])
-        if text != '':
-            text += ' '
-        text += ''.join(bopomofos)
-    return text
-
-
-def latin_to_bopomofo(text):
-    for regex, replacement in _latin_to_bopomofo:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def bopomofo_to_romaji(text):
-    for regex, replacement in _bopomofo_to_romaji:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def bopomofo_to_ipa(text):
-    for regex, replacement in _bopomofo_to_ipa:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def bopomofo_to_ipa2(text):
-    for regex, replacement in _bopomofo_to_ipa2:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def chinese_to_romaji(text):
-    text = symbols_to_chinese(text)
-    text = number_transform_to_chinese(text)
-    text = chinese_to_bopomofo(text)
-    text = latin_to_bopomofo(text)
-    text = bopomofo_to_romaji(text)
-    text = re.sub('i([aoe])', r'y\1', text)
-    text = re.sub('u([aoəe])', r'w\1', text)
-    text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
-                  r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
-    text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
-    return text
-
-
-def chinese_to_lazy_ipa(text):
-    text = chinese_to_romaji(text)
-    for regex, replacement in _romaji_to_ipa:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def chinese_to_ipa(text):
-    text = symbols_to_chinese(text)
-    text = number_transform_to_chinese(text)
-    text = chinese_to_bopomofo(text)
-    text = latin_to_bopomofo(text)
-    text = bopomofo_to_ipa(text)
-    text = re.sub('i([aoe])', r'j\1', text)
-    text = re.sub('u([aoəe])', r'w\1', text)
-    text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)',
-                  r'\1ɹ`\2', text).replace('ɻ', 'ɹ`')
-    text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text)
-    return text
-
-
-def chinese_to_ipa2(text):
-    text = symbols_to_chinese(text)
-    text = number_transform_to_chinese(text)
-    text = chinese_to_bopomofo(text)
-    text = latin_to_bopomofo(text)
-    text = bopomofo_to_ipa2(text)
|
351 |
-
text = re.sub(r'i([aoe])', r'j\1', text)
|
352 |
-
text = re.sub(r'u([aoəe])', r'w\1', text)
|
353 |
-
text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text)
|
354 |
-
text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text)
|
355 |
-
return text
|
356 |
-
|
357 |
-
|
358 |
-
def VITS_PinYin_model():
|
359 |
-
import torch
|
360 |
-
import config
|
361 |
-
from vits.text.vits_pinyin import VITS_PinYin
|
362 |
-
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
363 |
-
# pinyin
|
364 |
-
tts_front = VITS_PinYin(f"{config.ABS_PATH}/vits/bert", device)
|
365 |
-
return tts_front
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/__init__.py
DELETED
@@ -1,82 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
Pygments
|
3 |
-
~~~~~~~~
|
4 |
-
|
5 |
-
Pygments is a syntax highlighting package written in Python.
|
6 |
-
|
7 |
-
It is a generic syntax highlighter for general use in all kinds of software
|
8 |
-
such as forum systems, wikis or other applications that need to prettify
|
9 |
-
source code. Highlights are:
|
10 |
-
|
11 |
-
* a wide range of common languages and markup formats is supported
|
12 |
-
* special attention is paid to details, increasing quality by a fair amount
|
13 |
-
* support for new languages and formats are added easily
|
14 |
-
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
|
15 |
-
formats that PIL supports, and ANSI sequences
|
16 |
-
* it is usable as a command-line tool and as a library
|
17 |
-
* ... and it highlights even Brainfuck!
|
18 |
-
|
19 |
-
The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``.
|
20 |
-
|
21 |
-
.. _Pygments master branch:
|
22 |
-
https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev
|
23 |
-
|
24 |
-
:copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
|
25 |
-
:license: BSD, see LICENSE for details.
|
26 |
-
"""
|
27 |
-
from io import StringIO, BytesIO
|
28 |
-
|
29 |
-
__version__ = '2.14.0'
|
30 |
-
__docformat__ = 'restructuredtext'
|
31 |
-
|
32 |
-
__all__ = ['lex', 'format', 'highlight']
|
33 |
-
|
34 |
-
|
35 |
-
def lex(code, lexer):
|
36 |
-
"""
|
37 |
-
Lex ``code`` with ``lexer`` and return an iterable of tokens.
|
38 |
-
"""
|
39 |
-
try:
|
40 |
-
return lexer.get_tokens(code)
|
41 |
-
except TypeError:
|
42 |
-
# Heuristic to catch a common mistake.
|
43 |
-
from pip._vendor.pygments.lexer import RegexLexer
|
44 |
-
if isinstance(lexer, type) and issubclass(lexer, RegexLexer):
|
45 |
-
raise TypeError('lex() argument must be a lexer instance, '
|
46 |
-
'not a class')
|
47 |
-
raise
|
48 |
-
|
49 |
-
|
50 |
-
def format(tokens, formatter, outfile=None): # pylint: disable=redefined-builtin
|
51 |
-
"""
|
52 |
-
Format a tokenlist ``tokens`` with the formatter ``formatter``.
|
53 |
-
|
54 |
-
If ``outfile`` is given and a valid file object (an object
|
55 |
-
with a ``write`` method), the result will be written to it, otherwise
|
56 |
-
it is returned as a string.
|
57 |
-
"""
|
58 |
-
try:
|
59 |
-
if not outfile:
|
60 |
-
realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO()
|
61 |
-
formatter.format(tokens, realoutfile)
|
62 |
-
return realoutfile.getvalue()
|
63 |
-
else:
|
64 |
-
formatter.format(tokens, outfile)
|
65 |
-
except TypeError:
|
66 |
-
# Heuristic to catch a common mistake.
|
67 |
-
from pip._vendor.pygments.formatter import Formatter
|
68 |
-
if isinstance(formatter, type) and issubclass(formatter, Formatter):
|
69 |
-
raise TypeError('format() argument must be a formatter instance, '
|
70 |
-
'not a class')
|
71 |
-
raise
|
72 |
-
|
73 |
-
|
74 |
-
def highlight(code, lexer, formatter, outfile=None):
|
75 |
-
"""
|
76 |
-
Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``.
|
77 |
-
|
78 |
-
If ``outfile`` is given and a valid file object (an object
|
79 |
-
with a ``write`` method), the result will be written to it, otherwise
|
80 |
-
it is returned as a string.
|
81 |
-
"""
|
82 |
-
return format(lex(code, lexer), formatter, outfile)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/INSTALL.md
DELETED
@@ -1,261 +0,0 @@
|
|
1 |
-
## Installation
|
2 |
-
|
3 |
-
### Requirements
|
4 |
-
- Linux or macOS with Python ≥ 3.6
|
5 |
-
- PyTorch ≥ 1.8 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation.
|
6 |
-
Install them together at [pytorch.org](https://pytorch.org) to make sure of this
|
7 |
-
- OpenCV is optional but needed by demo and visualization
|
8 |
-
|
9 |
-
|
10 |
-
### Build Detectron2 from Source
|
11 |
-
|
12 |
-
gcc & g++ ≥ 5.4 are required. [ninja](https://ninja-build.org/) is optional but recommended for faster build.
|
13 |
-
After having them, run:
|
14 |
-
```
|
15 |
-
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
|
16 |
-
# (add --user if you don't have permission)
|
17 |
-
|
18 |
-
# Or, to install it from a local clone:
|
19 |
-
git clone https://github.com/facebookresearch/detectron2.git
|
20 |
-
python -m pip install -e detectron2
|
21 |
-
|
22 |
-
# On macOS, you may need to prepend the above commands with a few environment variables:
|
23 |
-
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install ...
|
24 |
-
```
|
25 |
-
|
26 |
-
To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the
|
27 |
-
old build first. You often need to rebuild detectron2 after reinstalling PyTorch.
|
28 |
-
|
29 |
-
### Install Pre-Built Detectron2 (Linux only)
|
30 |
-
|
31 |
-
Choose from this table to install [v0.6 (Oct 2021)](https://github.com/facebookresearch/detectron2/releases):
|
32 |
-
|
33 |
-
<table class="docutils"><tbody><th width="80"> CUDA </th><th valign="bottom" align="left" width="100">torch 1.10</th><th valign="bottom" align="left" width="100">torch 1.9</th><th valign="bottom" align="left" width="100">torch 1.8</th> <tr><td align="left">11.3</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
34 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
|
35 |
-
</code></pre> </details> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr><td align="left">11.1</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
36 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
|
37 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
38 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
|
39 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
40 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
|
41 |
-
</code></pre> </details> </td> </tr> <tr><td align="left">10.2</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
42 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.10/index.html
|
43 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
44 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
|
45 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
46 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.8/index.html
|
47 |
-
</code></pre> </details> </td> </tr> <tr><td align="left">10.1</td><td align="left"> </td> <td align="left"> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
48 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
|
49 |
-
</code></pre> </details> </td> </tr> <tr><td align="left">cpu</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
50 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.10/index.html
|
51 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
52 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.9/index.html
|
53 |
-
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
|
54 |
-
https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.8/index.html
|
55 |
-
</code></pre> </details> </td> </tr></tbody></table>
|
56 |
-
|
57 |
-
Note that:
|
58 |
-
1. The pre-built packages have to be used with corresponding version of CUDA and the official package of PyTorch.
|
59 |
-
Otherwise, please build detectron2 from source.
|
60 |
-
2. New packages are released every few months. Therefore, packages may not contain latest features in the main
|
61 |
-
branch and may not be compatible with the main branch of a research project that uses detectron2
|
62 |
-
(e.g. those in [projects](projects)).
|
63 |
-
|
64 |
-
### Common Installation Issues
|
65 |
-
|
66 |
-
Click each issue for its solutions:
|
67 |
-
|
68 |
-
<details>
|
69 |
-
<summary>
|
70 |
-
Undefined symbols that looks like "TH..","at::Tensor...","torch..."
|
71 |
-
</summary>
|
72 |
-
<br/>
|
73 |
-
|
74 |
-
This usually happens when detectron2 or torchvision is not
|
75 |
-
compiled with the version of PyTorch you're running.
|
76 |
-
|
77 |
-
If the error comes from a pre-built torchvision, uninstall torchvision and pytorch and reinstall them
|
78 |
-
following [pytorch.org](http://pytorch.org). So the versions will match.
|
79 |
-
|
80 |
-
If the error comes from a pre-built detectron2, check [release notes](https://github.com/facebookresearch/detectron2/releases),
|
81 |
-
uninstall and reinstall the correct pre-built detectron2 that matches pytorch version.
|
82 |
-
|
83 |
-
If the error comes from detectron2 or torchvision that you built manually from source,
|
84 |
-
remove files you built (`build/`, `**/*.so`) and rebuild it so it can pick up the version of pytorch currently in your environment.
|
85 |
-
|
86 |
-
If the above instructions do not resolve this problem, please provide an environment (e.g. a dockerfile) that can reproduce the issue.
|
87 |
-
</details>
|
88 |
-
|
89 |
-
<details>
|
90 |
-
<summary>
|
91 |
-
Missing torch dynamic libraries, OR segmentation fault immediately when using detectron2.
|
92 |
-
</summary>
|
93 |
-
This usually happens when detectron2 or torchvision is not
|
94 |
-
compiled with the version of PyTorch you're running. See the previous common issue for the solution.
|
95 |
-
</details>
|
96 |
-
|
97 |
-
<details>
|
98 |
-
<summary>
|
99 |
-
Undefined C++ symbols (e.g. "GLIBCXX..") or C++ symbols not found.
|
100 |
-
</summary>
|
101 |
-
<br/>
|
102 |
-
Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime.
|
103 |
-
|
104 |
-
This often happens with old anaconda.
|
105 |
-
It may help to run `conda update libgcc` to upgrade its runtime.
|
106 |
-
|
107 |
-
The fundamental solution is to avoid the mismatch, either by compiling using older version of C++
|
108 |
-
compiler, or run the code with proper C++ runtime.
|
109 |
-
To run the code with a specific C++ runtime, you can use environment variable `LD_PRELOAD=/path/to/libstdc++.so`.
|
110 |
-
|
111 |
-
</details>
|
112 |
-
|
113 |
-
<details>
|
114 |
-
<summary>
|
115 |
-
"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available".
|
116 |
-
</summary>
|
117 |
-
<br/>
|
118 |
-
CUDA is not found when building detectron2.
|
119 |
-
You should make sure
|
120 |
-
|
121 |
-
```
|
122 |
-
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
|
123 |
-
```
|
124 |
-
|
125 |
-
print `(True, a directory with cuda)` at the time you build detectron2.
|
126 |
-
|
127 |
-
Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config.
|
128 |
-
</details>
|
129 |
-
|
130 |
-
<details>
|
131 |
-
<summary>
|
132 |
-
"invalid device function" or "no kernel image is available for execution".
|
133 |
-
</summary>
|
134 |
-
<br/>
|
135 |
-
Two possibilities:
|
136 |
-
|
137 |
-
* You build detectron2 with one version of CUDA but run it with a different version.
|
138 |
-
|
139 |
-
To check whether it is the case,
|
140 |
-
use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
|
141 |
-
In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
|
142 |
-
to contain cuda libraries of the same version.
|
143 |
-
|
144 |
-
When they are inconsistent,
|
145 |
-
you need to either install a different build of PyTorch (or build by yourself)
|
146 |
-
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
|
147 |
-
|
148 |
-
* PyTorch/torchvision/Detectron2 is not built for the correct GPU SM architecture (aka. compute capability).
|
149 |
-
|
150 |
-
The architecture included by PyTorch/detectron2/torchvision is available in the "architecture flags" in
|
151 |
-
`python -m detectron2.utils.collect_env`. It must include
|
152 |
-
the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
|
153 |
-
|
154 |
-
If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already.
|
155 |
-
If not supported, you need to build them from source.
|
156 |
-
|
157 |
-
When building detectron2/torchvision from source, they detect the GPU device and build for only the device.
|
158 |
-
This means the compiled code may not work on a different GPU device.
|
159 |
-
To recompile them for the correct architecture, remove all installed/compiled files,
|
160 |
-
and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly.
|
161 |
-
For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s.
|
162 |
-
</details>
|
163 |
-
|
164 |
-
<details>
|
165 |
-
<summary>
|
166 |
-
Undefined CUDA symbols; Cannot open libcudart.so
|
167 |
-
</summary>
|
168 |
-
<br/>
|
169 |
-
The version of NVCC you use to build detectron2 or torchvision does
|
170 |
-
not match the version of CUDA you are running with.
|
171 |
-
This often happens when using anaconda's CUDA runtime.
|
172 |
-
|
173 |
-
Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
|
174 |
-
In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
|
175 |
-
to contain cuda libraries of the same version.
|
176 |
-
|
177 |
-
When they are inconsistent,
|
178 |
-
you need to either install a different build of PyTorch (or build by yourself)
|
179 |
-
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
|
180 |
-
</details>
|
181 |
-
|
182 |
-
|
183 |
-
<details>
|
184 |
-
<summary>
|
185 |
-
C++ compilation errors from NVCC / NVRTC, or "Unsupported gpu architecture"
|
186 |
-
</summary>
|
187 |
-
<br/>
|
188 |
-
A few possibilities:
|
189 |
-
|
190 |
-
1. Local CUDA/NVCC version has to match the CUDA version of your PyTorch. Both can be found in `python collect_env.py`.
|
191 |
-
When they are inconsistent, you need to either install a different build of PyTorch (or build by yourself)
|
192 |
-
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
|
193 |
-
|
194 |
-
2. Local CUDA/NVCC version shall support the SM architecture (a.k.a. compute capability) of your GPU.
|
195 |
-
The capability of your GPU can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
|
196 |
-
The capability supported by NVCC is listed at [here](https://gist.github.com/ax3l/9489132).
|
197 |
-
If your NVCC version is too old, this can be workaround by setting environment variable
|
198 |
-
`TORCH_CUDA_ARCH_LIST` to a lower, supported capability.
|
199 |
-
|
200 |
-
3. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions.
|
201 |
-
See [here](https://gist.github.com/ax3l/9489132) for some valid combinations.
|
202 |
-
Notably, CUDA<=10.1.105 doesn't support GCC>7.3.
|
203 |
-
|
204 |
-
The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`.
|
205 |
-
|
206 |
-
</details>
|
207 |
-
|
208 |
-
|
209 |
-
<details>
|
210 |
-
<summary>
|
211 |
-
"ImportError: cannot import name '_C'".
|
212 |
-
</summary>
|
213 |
-
<br/>
|
214 |
-
Please build and install detectron2 following the instructions above.
|
215 |
-
|
216 |
-
Or, if you are running code from detectron2's root directory, `cd` to a different one.
|
217 |
-
Otherwise you may not import the code that you installed.
|
218 |
-
</details>
|
219 |
-
|
220 |
-
|
221 |
-
<details>
|
222 |
-
<summary>
|
223 |
-
Any issue on windows.
|
224 |
-
</summary>
|
225 |
-
<br/>
|
226 |
-
|
227 |
-
Detectron2 is continuously built on windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=main).
|
228 |
-
However we do not provide official support for it.
|
229 |
-
PRs that improves code compatibility on windows are welcome.
|
230 |
-
</details>
|
231 |
-
|
232 |
-
<details>
|
233 |
-
<summary>
|
234 |
-
ONNX conversion segfault after some "TraceWarning".
|
235 |
-
</summary>
|
236 |
-
<br/>
|
237 |
-
The ONNX package is compiled with a too old compiler.
|
238 |
-
|
239 |
-
Please build and install ONNX from its source code using a compiler
|
240 |
-
whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`).
|
241 |
-
</details>
|
242 |
-
|
243 |
-
|
244 |
-
<details>
|
245 |
-
<summary>
|
246 |
-
"library not found for -lstdc++" on older version of MacOS
|
247 |
-
</summary>
|
248 |
-
<br/>
|
249 |
-
See
|
250 |
-
[this stackoverflow answer](https://stackoverflow.com/questions/56083725/macos-build-issues-lstdc-not-found-while-building-python-package).
|
251 |
-
|
252 |
-
</details>
|
253 |
-
|
254 |
-
|
255 |
-
### Installation inside specific environments:
|
256 |
-
|
257 |
-
* __Colab__: see our [Colab Tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
|
258 |
-
which has step-by-step instructions.
|
259 |
-
|
260 |
-
* __Docker__: The official [Dockerfile](docker) installs detectron2 with a few simple commands.
|
261 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/cascade_rcnn.py
DELETED
@@ -1,299 +0,0 @@
|
|
1 |
-
# Copyright (c) Facebook, Inc. and its affiliates.
|
2 |
-
from typing import List
|
3 |
-
import torch
|
4 |
-
from torch import nn
|
5 |
-
from torch.autograd.function import Function
|
6 |
-
|
7 |
-
from detectron2.config import configurable
|
8 |
-
from detectron2.layers import ShapeSpec
|
9 |
-
from detectron2.structures import Boxes, Instances, pairwise_iou
|
10 |
-
from detectron2.utils.events import get_event_storage
|
11 |
-
|
12 |
-
from ..box_regression import Box2BoxTransform
|
13 |
-
from ..matcher import Matcher
|
14 |
-
from ..poolers import ROIPooler
|
15 |
-
from .box_head import build_box_head
|
16 |
-
from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference
|
17 |
-
from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
|
18 |
-
|
19 |
-
|
20 |
-
class _ScaleGradient(Function):
|
21 |
-
@staticmethod
|
22 |
-
def forward(ctx, input, scale):
|
23 |
-
ctx.scale = scale
|
24 |
-
return input
|
25 |
-
|
26 |
-
@staticmethod
|
27 |
-
def backward(ctx, grad_output):
|
28 |
-
return grad_output * ctx.scale, None
|
29 |
-
|
30 |
-
|
31 |
-
@ROI_HEADS_REGISTRY.register()
|
32 |
-
class CascadeROIHeads(StandardROIHeads):
|
33 |
-
"""
|
34 |
-
The ROI heads that implement :paper:`Cascade R-CNN`.
|
35 |
-
"""
|
36 |
-
|
37 |
-
@configurable
|
38 |
-
def __init__(
|
39 |
-
self,
|
40 |
-
*,
|
41 |
-
box_in_features: List[str],
|
42 |
-
box_pooler: ROIPooler,
|
43 |
-
box_heads: List[nn.Module],
|
44 |
-
box_predictors: List[nn.Module],
|
45 |
-
proposal_matchers: List[Matcher],
|
46 |
-
**kwargs,
|
47 |
-
):
|
48 |
-
"""
|
49 |
-
NOTE: this interface is experimental.
|
50 |
-
|
51 |
-
Args:
|
52 |
-
box_pooler (ROIPooler): pooler that extracts region features from given boxes
|
53 |
-
box_heads (list[nn.Module]): box head for each cascade stage
|
54 |
-
box_predictors (list[nn.Module]): box predictor for each cascade stage
|
55 |
-
proposal_matchers (list[Matcher]): matcher with different IoU thresholds to
|
56 |
-
match boxes with ground truth for each stage. The first matcher matches
|
57 |
-
RPN proposals with ground truth, the other matchers use boxes predicted
|
58 |
-
by the previous stage as proposals and match them with ground truth.
|
59 |
-
"""
|
60 |
-
assert "proposal_matcher" not in kwargs, (
|
61 |
-
"CascadeROIHeads takes 'proposal_matchers=' for each stage instead "
|
62 |
-
"of one 'proposal_matcher='."
|
63 |
-
)
|
64 |
-
# The first matcher matches RPN proposals with ground truth, done in the base class
|
65 |
-
kwargs["proposal_matcher"] = proposal_matchers[0]
|
66 |
-
num_stages = self.num_cascade_stages = len(box_heads)
|
67 |
-
box_heads = nn.ModuleList(box_heads)
|
68 |
-
box_predictors = nn.ModuleList(box_predictors)
|
69 |
-
assert len(box_predictors) == num_stages, f"{len(box_predictors)} != {num_stages}!"
|
70 |
-
assert len(proposal_matchers) == num_stages, f"{len(proposal_matchers)} != {num_stages}!"
|
71 |
-
super().__init__(
|
72 |
-
box_in_features=box_in_features,
|
73 |
-
box_pooler=box_pooler,
|
74 |
-
box_head=box_heads,
|
75 |
-
box_predictor=box_predictors,
|
76 |
-
**kwargs,
|
77 |
-
)
|
78 |
-
self.proposal_matchers = proposal_matchers
|
79 |
-
|
80 |
-
@classmethod
|
81 |
-
def from_config(cls, cfg, input_shape):
|
82 |
-
ret = super().from_config(cfg, input_shape)
|
83 |
-
ret.pop("proposal_matcher")
|
84 |
-
return ret
|
85 |
-
|
86 |
-
@classmethod
|
87 |
-
def _init_box_head(cls, cfg, input_shape):
|
88 |
-
# fmt: off
|
89 |
-
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
|
90 |
-
pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
|
91 |
-
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
|
92 |
-
sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
|
93 |
-
pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
|
94 |
-
cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
|
95 |
-
cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS
|
96 |
-
assert len(cascade_bbox_reg_weights) == len(cascade_ious)
|
97 |
-
assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \
|
98 |
-
"CascadeROIHeads only support class-agnostic regression now!"
|
99 |
-
assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
|
100 |
-
# fmt: on
|
101 |
-
|
102 |
-
in_channels = [input_shape[f].channels for f in in_features]
|
103 |
-
# Check all channel counts are equal
|
104 |
-
assert len(set(in_channels)) == 1, in_channels
|
105 |
-
in_channels = in_channels[0]
|
106 |
-
|
107 |
-
box_pooler = ROIPooler(
|
108 |
-
output_size=pooler_resolution,
|
109 |
-
scales=pooler_scales,
|
110 |
-
sampling_ratio=sampling_ratio,
|
111 |
-
pooler_type=pooler_type,
|
112 |
-
)
|
113 |
-
pooled_shape = ShapeSpec(
|
114 |
-
channels=in_channels, width=pooler_resolution, height=pooler_resolution
|
115 |
-
)
|
116 |
-
|
117 |
-
box_heads, box_predictors, proposal_matchers = [], [], []
|
118 |
-
for match_iou, bbox_reg_weights in zip(cascade_ious, cascade_bbox_reg_weights):
|
119 |
-
box_head = build_box_head(cfg, pooled_shape)
|
120 |
-
box_heads.append(box_head)
|
121 |
-
box_predictors.append(
|
122 |
-
FastRCNNOutputLayers(
|
123 |
-
cfg,
|
124 |
-
box_head.output_shape,
|
125 |
-
box2box_transform=Box2BoxTransform(weights=bbox_reg_weights),
|
126 |
-
)
|
127 |
-
)
|
128 |
-
proposal_matchers.append(Matcher([match_iou], [0, 1], allow_low_quality_matches=False))
|
129 |
-
return {
|
130 |
-
"box_in_features": in_features,
|
131 |
-
"box_pooler": box_pooler,
|
132 |
-
"box_heads": box_heads,
|
133 |
-
"box_predictors": box_predictors,
|
134 |
-
"proposal_matchers": proposal_matchers,
|
135 |
-
}
|
136 |
-
|
137 |
-
def forward(self, images, features, proposals, targets=None):
|
138 |
-
del images
|
139 |
-
if self.training:
|
140 |
-
proposals = self.label_and_sample_proposals(proposals, targets)
|
141 |
-
|
142 |
-
if self.training:
|
143 |
-
# Need targets to box head
|
144 |
-
losses = self._forward_box(features, proposals, targets)
|
145 |
-
losses.update(self._forward_mask(features, proposals))
|
146 |
-
losses.update(self._forward_keypoint(features, proposals))
|
147 |
-
return proposals, losses
|
148 |
-
else:
|
149 |
-
pred_instances = self._forward_box(features, proposals)
|
150 |
-
pred_instances = self.forward_with_given_boxes(features, pred_instances)
|
151 |
-
return pred_instances, {}
|
152 |
-
|
153 |
-
def _forward_box(self, features, proposals, targets=None):
|
154 |
-
"""
|
155 |
-
Args:
|
156 |
-
features, targets: the same as in
|
157 |
-
Same as in :meth:`ROIHeads.forward`.
|
158 |
-
proposals (list[Instances]): the per-image object proposals with
|
159 |
-
their matching ground truth.
|
160 |
-
Each has fields "proposal_boxes", and "objectness_logits",
|
161 |
-
"gt_classes", "gt_boxes".
|
162 |
-
"""
|
163 |
-
features = [features[f] for f in self.box_in_features]
|
164 |
-
head_outputs = [] # (predictor, predictions, proposals)
|
165 |
-
prev_pred_boxes = None
|
166 |
-
image_sizes = [x.image_size for x in proposals]
|
167 |
-
for k in range(self.num_cascade_stages):
|
168 |
-
if k > 0:
|
169 |
-
# The output boxes of the previous stage are used to create the input
|
170 |
-
# proposals of the next stage.
|
171 |
-
proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes)
|
172 |
-
if self.training:
|
173 |
-
proposals = self._match_and_label_boxes(proposals, k, targets)
|
174 |
-
predictions = self._run_stage(features, proposals, k)
|
175 |
-
prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals)
|
176 |
-
head_outputs.append((self.box_predictor[k], predictions, proposals))
|
177 |
-
|
178 |
-
if self.training:
|
179 |
-
losses = {}
|
180 |
-
storage = get_event_storage()
|
181 |
-
for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
|
182 |
-
with storage.name_scope("stage{}".format(stage)):
|
183 |
-
stage_losses = predictor.losses(predictions, proposals)
|
184 |
-
losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()})
|
185 |
-
return losses
|
186 |
-
else:
|
187 |
-
# Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1)
|
188 |
-
scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs]
|
189 |
-
|
190 |
-
# Average the scores across heads
|
191 |
-
scores = [
|
192 |
-
sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
|
193 |
-
for scores_per_image in zip(*scores_per_stage)
|
194 |
-
]
|
195 |
-
# Use the boxes of the last head
|
196 |
-
predictor, predictions, proposals = head_outputs[-1]
|
197 |
-
boxes = predictor.predict_boxes(predictions, proposals)
|
198 |
-
pred_instances, _ = fast_rcnn_inference(
|
199 |
-
boxes,
|
200 |
-
scores,
|
201 |
-
image_sizes,
|
202 |
-
predictor.test_score_thresh,
|
203 |
-
predictor.test_nms_thresh,
|
204 |
-
predictor.test_topk_per_image,
|
205 |
-
)
|
206 |
-
return pred_instances
|
207 |
-
|
208 |
-
@torch.no_grad()
|
209 |
-
def _match_and_label_boxes(self, proposals, stage, targets):
|
210 |
-
"""
|
211 |
-
Match proposals with groundtruth using the matcher at the given stage.
|
212 |
-
Label the proposals as foreground or background based on the match.
|
213 |
-
|
214 |
-
Args:
|
215 |
-
proposals (list[Instances]): One Instances for each image, with
|
216 |
-
the field "proposal_boxes".
|
217 |
-
stage (int): the current stage
|
218 |
-
targets (list[Instances]): the ground truth instances
|
219 |
-
|
220 |
-
Returns:
|
221 |
-
list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes"
|
222 |
-
"""
|
223 |
-
num_fg_samples, num_bg_samples = [], []
|
224 |
-
for proposals_per_image, targets_per_image in zip(proposals, targets):
|
225 |
-
match_quality_matrix = pairwise_iou(
|
226 |
-
targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
|
227 |
-
)
|
228 |
-
# proposal_labels are 0 or 1
|
229 |
-
matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix)
|
230 |
-
if len(targets_per_image) > 0:
|
231 |
-
gt_classes = targets_per_image.gt_classes[matched_idxs]
|
232 |
-
# Label unmatched proposals (0 label from matcher) as background (label=num_classes)
|
233 |
-
gt_classes[proposal_labels == 0] = self.num_classes
|
234 |
-
gt_boxes = targets_per_image.gt_boxes[matched_idxs]
|
235 |
-
else:
|
236 |
-
gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
|
237 |
-
gt_boxes = Boxes(
|
238 |
-
targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4))
|
239 |
-
)
|
240 |
-
proposals_per_image.gt_classes = gt_classes
|
241 |
-
proposals_per_image.gt_boxes = gt_boxes
|
242 |
-
|
243 |
-
num_fg_samples.append((proposal_labels == 1).sum().item())
|
244 |
-
num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1])
|
245 |
-
|
246 |
-
# Log the number of fg/bg samples in each stage
|
247 |
-
storage = get_event_storage()
|
248 |
-
storage.put_scalar(
|
249 |
-
"stage{}/roi_head/num_fg_samples".format(stage),
|
250 |
-
sum(num_fg_samples) / len(num_fg_samples),
|
251 |
-
)
|
252 |
-
storage.put_scalar(
|
253 |
-
"stage{}/roi_head/num_bg_samples".format(stage),
|
254 |
-
sum(num_bg_samples) / len(num_bg_samples),
|
255 |
-
)
|
256 |
-
return proposals
|
257 |
-
|
258 |
-
def _run_stage(self, features, proposals, stage):
|
259 |
-
"""
|
260 |
-
Args:
|
261 |
-
features (list[Tensor]): #lvl input features to ROIHeads
|
262 |
-
proposals (list[Instances]): #image Instances, with the field "proposal_boxes"
|
263 |
-
stage (int): the current stage
|
264 |
-
|
265 |
-
Returns:
|
266 |
-
Same output as `FastRCNNOutputLayers.forward()`.
|
267 |
-
"""
|
268 |
-
box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
|
269 |
-
# The original implementation averages the losses among heads,
|
270 |
-
# but scale up the parameter gradients of the heads.
|
271 |
-
# This is equivalent to adding the losses among heads,
|
272 |
-
# but scale down the gradients on features.
|
273 |
-
if self.training:
|
274 |
-
box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages)
|
275 |
-
box_features = self.box_head[stage](box_features)
|
276 |
-
return self.box_predictor[stage](box_features)
|
277 |
-
|
278 |
-
def _create_proposals_from_boxes(self, boxes, image_sizes):
|
279 |
-
"""
|
280 |
-
Args:
|
281 |
-
boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4
|
282 |
-
image_sizes (list[tuple]): list of image shapes in (h, w)
|
283 |
-
|
284 |
-
Returns:
|
285 |
-
list[Instances]: per-image proposals with the given boxes.
|
286 |
-
"""
|
287 |
-
# Just like RPN, the proposals should not have gradients
|
288 |
-
boxes = [Boxes(b.detach()) for b in boxes]
|
289 |
-
proposals = []
|
290 |
-
for boxes_per_image, image_size in zip(boxes, image_sizes):
|
291 |
-
boxes_per_image.clip(image_size)
|
292 |
-
if self.training:
|
293 |
-
# do not filter empty boxes at inference time,
|
294 |
-
# because the scores from each stage need to be aligned and added later
|
295 |
-
boxes_per_image = boxes_per_image[boxes_per_image.nonempty()]
|
296 |
-
prop = Instances(image_size)
|
297 |
-
prop.proposal_boxes = boxes_per_image
|
298 |
-
proposals.append(prop)
|
299 |
-
return proposals
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/BAAI/vid2vid-zero/vid2vid_zero/data/dataset.py
DELETED
@@ -1,44 +0,0 @@
|
|
1 |
-
import decord
|
2 |
-
decord.bridge.set_bridge('torch')
|
3 |
-
|
4 |
-
from torch.utils.data import Dataset
|
5 |
-
from einops import rearrange
|
6 |
-
|
7 |
-
|
8 |
-
class VideoDataset(Dataset):
|
9 |
-
def __init__(
|
10 |
-
self,
|
11 |
-
video_path: str,
|
12 |
-
prompt: str,
|
13 |
-
width: int = 512,
|
14 |
-
height: int = 512,
|
15 |
-
n_sample_frames: int = 8,
|
16 |
-
sample_start_idx: int = 0,
|
17 |
-
sample_frame_rate: int = 1,
|
18 |
-
):
|
19 |
-
self.video_path = video_path
|
20 |
-
self.prompt = prompt
|
21 |
-
self.prompt_ids = None
|
22 |
-
|
23 |
-
self.width = width
|
24 |
-
self.height = height
|
25 |
-
self.n_sample_frames = n_sample_frames
|
26 |
-
self.sample_start_idx = sample_start_idx
|
27 |
-
self.sample_frame_rate = sample_frame_rate
|
28 |
-
|
29 |
-
def __len__(self):
|
30 |
-
return 1
|
31 |
-
|
32 |
-
def __getitem__(self, index):
|
33 |
-
# load and sample video frames
|
34 |
-
vr = decord.VideoReader(self.video_path, width=self.width, height=self.height)
|
35 |
-
sample_index = list(range(self.sample_start_idx, len(vr), self.sample_frame_rate))[:self.n_sample_frames]
|
36 |
-
video = vr.get_batch(sample_index)
|
37 |
-
video = rearrange(video, "f h w c -> f c h w")
|
38 |
-
|
39 |
-
example = {
|
40 |
-
"pixel_values": (video / 127.5 - 1.0),
|
41 |
-
"prompt_ids": self.prompt_ids
|
42 |
-
}
|
43 |
-
|
44 |
-
return example
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/BABASA/README/README.md
DELETED
@@ -1,10 +0,0 @@
|
|
1 |
-
---
|
2 |
-
title: README
|
3 |
-
emoji: 📈
|
4 |
-
colorFrom: pink
|
5 |
-
colorTo: red
|
6 |
-
sdk: static
|
7 |
-
pinned: false
|
8 |
-
---
|
9 |
-
|
10 |
-
Edit this `README.md` markdown file to author your organization card 🔥
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Beasto/Photo2Monet_Cyclegan/app.py
DELETED
@@ -1,48 +0,0 @@
|
|
1 |
-
import streamlit as st
|
2 |
-
import tensorflow as tf
|
3 |
-
import numpy as np
|
4 |
-
from PIL import Image
|
5 |
-
import tensorflow_addons as tfa
|
6 |
-
|
7 |
-
import tensorflow as tf
|
8 |
-
from tensorflow.keras.utils import custom_object_scope
|
9 |
-
|
10 |
-
# Define a function to create the InstanceNormalization layer
|
11 |
-
def create_in():
|
12 |
-
return tfa.layers.InstanceNormalization()
|
13 |
-
|
14 |
-
|
15 |
-
def model_out(model_path,img):
|
16 |
-
with custom_object_scope({'InstanceNormalization': create_in}):
|
17 |
-
model = tf.keras.models.load_model(model_path)
|
18 |
-
img = (img-127.5)/127.5
|
19 |
-
img = np.expand_dims(img, 0)
|
20 |
-
pred = model.predict(img)
|
21 |
-
pred = np.asarray(pred)
|
22 |
-
return pred[0]
|
23 |
-
|
24 |
-
st.title("Image to Monet painting cyclegan")
|
25 |
-
face_input = st.file_uploader("Image input")
|
26 |
-
|
27 |
-
if face_input is not None:
|
28 |
-
img = Image.open(face_input)
|
29 |
-
img = img.resize((256, 256))
|
30 |
-
img = np.array(img)
|
31 |
-
pred = model_out('photo2monet2.h5', img)
|
32 |
-
st.image(img, caption="Uploaded Image")
|
33 |
-
st.image(((pred + 1) * 127.5).astype(np.uint8), caption="Generated Monet Painting")
|
34 |
-
|
35 |
-
st.header('Which architecture did I use architecture, Resnet-Blocks or Unet architecture?')
|
36 |
-
st.write('I have tried both Resnet and unet architecture but the Resnet architecture producted black patches and did not work quite well')
|
37 |
-
st.write('But when using the Unet architecture, it produce more "Monet-ish" images')
|
38 |
-
st.write('I use the pix2pix generator from tensorflow examples module and same for the discriminator')
|
39 |
-
st.header('What datasets did you use to train your CycleGAN model?')
|
40 |
-
st.write('For the dataset, I used Monet2Photo architecture available on kaggle')
|
41 |
-
st.header('What hardware I trained it on?')
|
42 |
-
st.write('I trained the model on Kaggle notebook on P100 gpu with 13 gigs of ram cuz my pc wouldnt be in a good state if I trained the cyclegan model on Intel HD')
|
43 |
-
st.header('How much time did it take')
|
44 |
-
st.write('It took aboul 20-30 epochs each of 150 seconds, DO THE MATH')
|
45 |
-
st.write('I could have trained it for longer, But it started producing images same to the original images which were not "Monet-ish"')
|
46 |
-
st.header('Why did I make this model?')
|
47 |
-
st.subheader('I made this model to extend my experience but mostly for FUNN!!!!')
|
48 |
-
st.write("-------------------------------------------------")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Benson/text-generation/Examples/Como Hacer Un rbol De Navidad.md
DELETED
@@ -1,81 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Descarga de archivos ISO GTA 5: Todo lo que necesita saber</h1>
|
3 |
-
<p>Grand Theft Auto V, o GTA 5, es uno de los videojuegos más populares y exitosos de todos los tiempos. Desarrollado por Rockstar Games, GTA 5 es un juego de acción y aventura de mundo abierto que te permite vivir tus fantasías criminales en la ciudad ficticia de Los Santos y sus alrededores. Si quieres robar bancos, correr coches, disparar a los enemigos, o simplemente explorar el impresionante paisaje, GTA 5 tiene algo para todos. </p>
|
4 |
-
<h2>como hacer un árbol de navidad</h2><br /><p><b><b>Download File</b> ››››› <a href="https://bltlly.com/2v6MS7">https://bltlly.com/2v6MS7</a></b></p><br /><br />
|
5 |
-
<p>Pero ¿cómo se puede descargar e instalar GTA 5 en su PC? Y cuáles son algunas de las mejores características y consejos que usted debe saber antes de jugar? En este artículo, responderemos estas preguntas y más. Aquí está todo lo que necesita saber sobre la descarga del archivo ISO de GTA 5. </p>
|
6 |
-
<h2>Características y jugabilidad de GTA 5</h2>
|
7 |
-
<p>GTA 5 no es solo un juego, es un fenómeno. Con más de 150 millones de copias vendidas en todo el mundo, GTA 5 ha ganado numerosos premios y galardones por sus innovadores gráficos, jugabilidad, historia y modo en línea. Estas son algunas de las características principales que hacen que GTA 5 se destaque de otros juegos:</p>
|
8 |
-
<ul>
|
9 |
-
<li><b>Tres protagonistas con diferentes historias y habilidades:</b> En GTA 5, puedes cambiar entre tres personajes jugables: Michael, un ladrón de bancos retirado; Franklin, un estafador callejero; y Trevor, un narcotraficante psicópata. Cada personaje tiene su propia personalidad, habilidades, misiones e interacciones con otros personajes. También puede combinar sus habilidades en ciertas situaciones, como robos, donde puede planificar y ejecutar robos elaborados con su tripulación. </li>
|
10 |
-
|
11 |
-
<li><b>Rueda de armas: Una manera conveniente de cambiar entre armas:</b> En GTA 5, tienes acceso a una amplia gama de armas, desde pistolas y escopetas hasta lanzacohetes y minipistolas. Para que sea más fácil seleccionar el arma de su elección, GTA 5 presenta la rueda de armas, que le permite cambiar rápidamente entre ocho categorías de armas utilizando el stick analógico derecho. También puede personalizar sus armas con accesorios, como alcances, supresores, cargadores extendidos y más. </li>
|
12 |
-
<li><b>Mercado de valores: Un sistema económico realista y dinámico:</b> En GTA 5, puede invertir su dinero en el mercado de valores, que está influenciado por sus acciones y eventos en el mundo del juego. Por ejemplo, si destruyes los vehículos o edificios de una compañía rival, el precio de sus acciones bajará, mientras que el tuyo subirá. También puedes manipular el mercado completando ciertas misiones o escuchando consejos de otros personajes. El mercado de valores es una gran manera de ganar dinero en GTA 5, pero también una arriesgada. </li>
|
13 |
-
<li <p><b>Diversas actividades físicas: Desde el golf hasta el yoga, hay algo para todos: </b> GTA 5 no es todo sobre la violencia y el crimen. También puede disfrutar de diversas actividades de ocio, como jugar al golf, tenis, dardos o bolos; practicar yoga, ciclismo o senderismo; ir al cine, club de striptease o bar; o incluso ver la televisión, navegar por Internet o leer libros en su propia casa. Estas actividades pueden mejorar tus habilidades, salud, estado de ánimo y relaciones con otros personajes. </p>
|
14 |
-
<h2>Requisitos e instalación del sistema GTA 5</h2>
|
15 |
-
<p>Si quieres jugar a GTA 5 en tu PC, debes asegurarte de que tu sistema cumple con los requisitos mínimos o recomendados para el juego. Aquí están las especificaciones que necesita comprobar antes de descargar GTA 5:</p>
|
16 |
-
<tabla>
|
17 |
-
<tr>
|
18 |
-
<th>Requisitos mínimos</th>
|
19 |
-
<th>Requisitos recomendados</th>
|
20 |
-
</tr>
|
21 |
-
<tr>
|
22 |
-
<td>OS: Windows 10 64 Bit, Windows 8.1 64 Bit, Windows 8 64 Bit, Windows 7 64 Bit Service Pack 1</td>
|
23 |
-
<td>OS: Windows 10 64 Bit</td>
|
24 |
-
</tr>
|
25 |
-
|
26 |
-
<td>Procesador: Intel Core 2 Quad CPU Q6600 @ 2.40GHz (4 CPUs) / AMD Phenom 9850 Quad-Core Processor (4 CPUs) @ 2.5GHz</td>
|
27 |
-
<td>Procesador: Intel Core i5 3470 @ 3.2GHz (4 CPUs) / AMD X8 FX-8350 @ 4GHz (8 CPUs)</td>
|
28 |
-
</tr>
|
29 |
-
<tr>
|
30 |
-
<td>Memoria: 4 GB de RAM</td>
|
31 |
-
<td>Memoria: 8 GB de RAM</td>
|
32 |
-
</tr>
|
33 |
-
<tr>
|
34 |
-
<td>Gráficos: NVIDIA GeForce 9800 GT 1GB / AMD Radeon HD 4870 1GB (DX 10, 10.1, 11)</td>
|
35 |
-
<td>Gráficos: NVIDIA GeForce GTX 660 2GB / AMD Radeon HD 7870 2GB</td>
|
36 |
-
</tr>
|
37 |
-
<tr>
|
38 |
-
<td>Almacenamiento: 72 GB de espacio disponible</td>
|
39 |
-
<td>Almacenamiento: 72 GB de espacio disponible</td>
|
40 |
-
</tr>
|
41 |
-
<tr>
|
42 |
-
<td>Tarjeta de sonido: DirectX Compatible</td>
|
43 |
-
<td>Tarjeta de sonido: DirectX Compatible</td>
|
44 |
-
</tr>
|
45 |
-
</tabla>
|
46 |
-
<p>Una vez que haya verificado que su PC puede ejecutar GTA 5 sin problemas, debe descargar el juego de fuentes oficiales. Puedes comprar una copia física del juego en un minorista o en una tienda online, o puedes comprar una copia digital en plataformas como Steam, Epic Games Store o Rockstar Games Launcher. La copia digital requerirá que descargues los archivos del juego e los instales en tu PC.</p>
|
47 |
-
<p></p>
|
48 |
-
<p>Si has descargado GTA 5 como un archivo ISO, que es un archivo comprimido de los archivos del juego, necesitas extraerlo usando un software como WinRAR o 7-Zip. Luego, debe montar el archivo ISO utilizando un software como Daemon Tools o Virtual CloneDrive. Esto creará una unidad virtual en su PC que actuará como si hubiera insertado un disco físico del juego. Luego, debe ejecutar el archivo setup.exe desde la unidad virtual y seguir las instrucciones para instalar GTA 5 en su PC.</p>
|
49 |
-
|
50 |
-
<h2>Consejos y trucos de GTA 5</h2>
|
51 |
-
<p>GTA 5 es un juego enorme y complejo que ofrece innumerables posibilidades y desafíos. Para ayudarte a sacar el máximo partido a tu experiencia de juego, aquí tienes algunos de los mejores consejos y trucos que debes saber antes de jugar a GTA 5:</p>
|
52 |
-
<ul>
|
53 |
-
<li><b>Cómo hacer trampa y usar códigos en GTA 5:</b> Si quieres divertirte y experimentar con diferentes aspectos del juego, puedes usar códigos de trucos en GTA 5. Para usar códigos de trucos, debes introducirlos usando el teléfono del juego o los botones del controlador. Puede encontrar una lista de códigos de trucos en línea, como [aquí]. Algunos de los códigos de trucos incluyen invencibilidad, súper salto, balas explosivas, cámara lenta y más. Sin embargo, tenga en cuenta que el uso de códigos de trucos desactivará los logros y trofeos, y puede afectar el progreso del juego y la estabilidad. </li>
|
54 |
-
|
55 |
-
<li><b>Cómo encontrar objetos de colección y secretos ocultos en GTA 5:</b> GTA 5 está lleno de objetos de colección ocultos y secretos que pueden desbloquear recompensas, huevos de Pascua, referencias y más. Algunos de los objetos de colección y secretos que se pueden encontrar en GTA 5 son: - Piezas de la nave espacial: Hay 50 piezas de la nave espacial dispersos alrededor del mapa que se puede recoger como Franklin después de conocer a Omega, un teórico de la conspiración. Recoger todas las piezas de la nave espacial desbloqueará un vehículo especial y un trofeo/ logro. - Sobras de cartas: Hay 50 sobras de cartas escondidas alrededor del mapa que puedes recoger como cualquier personaje. Recoger todas las sobras de cartas revelará la identidad de un asesino y le permitirá enfrentarse a él. - Plantas de peyote: Hay 27 plantas de peyote ubicadas alrededor del mapa que puedes consumir como cualquier personaje. Consumir una planta de peyote desencadenará una alucinación en la que puedes jugar como un animal, como un perro, un gato, un pájaro o incluso un tiburón. - Ovnis: Hay cuatro ovnis que se pueden ver en GTA 5 después de completar la historia principal y lograr el 100% de finalización. Puedes encontrarlos en Mount Chiliad, Fort Zancudo, Sandy Shores y Paleto Bay.</li>
|
56 |
-
|
57 |
-
<li><b>How to have fun and explore the vast world of GTA 5:</b> GTA 5 is not just a game, it is a sandbox where you can do whatever you want. There is so much to do and see in GTA 5 that you will never get bored. Here are some of the ways to have fun and explore its vast world: - Use Director Mode: Director Mode is a feature that lets you create your own scenes and scenarios using GTA 5's characters, vehicles, weapons, locations, and weather. You can open Director Mode from the Rockstar Editor menu or by calling a contact on your phone. You can then customize and control every aspect of your scene and record it for later editing or sharing. - Try the random events: random events are spontaneous, unpredictable situations that occur across the GTA 5 map. They can involve crimes, accidents, chases, rescues, encounters, and more. You can choose to intervene, ignore them, or simply watch them unfold. Some will reward you with money, items, or reputation, while others may have consequences for your actions. - Discover the Easter eggs: Easter eggs are hidden references, jokes, secrets, or surprises scattered across the GTA 5 map. They can refer to other games, movies, TV shows, celebrities, myths, legends, or real-life events. Some are obvious and easy to find, while others are obscure and hard to spot. You can find a list of Easter eggs online, such as [here]. </li>
</ul>
<h2>Conclusion</h2>
<p>GTA 5 is one of the best games ever made and a must-play for any gamer. It offers an immersive, realistic world where you can experience an epic story, thrilling gameplay, and a nearly unlimited online mode. Whether you want to follow the main missions, explore the side activities, or create your own content, GTA 5 has something for everyone. </p>

<p>If you want to get the most out of your playtime, you need to know some of the best features and tips GTA 5 has to offer. You can use cheat codes, earn money fast, find collectibles and hidden secrets, improve your skills and stats, and enjoy exploring the vast world of GTA 5. You can also use Director Mode, try the random events, and discover the Easter eggs to create your own scenes and scenarios. </p>
<p>GTA 5 is a game you will never forget and will always come back to. It is a game that will challenge you, entertain you, and surprise you. It is a game you will love. </p>
<p>So what are you waiting for? Download GTA 5 today and enjoy the best gaming experience! </p>
<h2>Frequently asked questions</h2>
<p>Here are some of the most frequently asked questions about the GTA 5 ISO file:</p>
<ul>
<li><b>Q: Is GTA 5 free to download? </b>
<p>A: No, GTA 5 is not free to download. You need to buy the game from official sources such as Steam, the Epic Games Store, or the Rockstar Games Launcher. However, the game is occasionally offered for free or at a reduced price on certain platforms or occasions. You can check the current price and availability of GTA 5 on the official website [here]. </p></li>
<li><b>Q: Is GTA 5 safe to download? </b>
<p>A: Yes, GTA 5 is safe to download if you get it from official sources such as Steam, the Epic Games Store, or the Rockstar Games Launcher. These platforms have security measures and verification systems that ensure the game files are authentic and virus-free. However, if you download GTA 5 from unofficial or illegal sources, such as torrent sites or file-sharing platforms, you risk downloading corrupted, infected, or pirated files that can damage your PC or compromise your account. </p></li>
<li><b>Q: How long does it take to download GTA 5?</b></li>

<li><b>Q: How can I play GTA 5 online? </b>
<p>A: To play GTA 5 online, you need a valid copy of GTA 5 installed on your PC and an active internet connection. You also need a Rockstar Games Social Club account and a subscription to the platform-specific online service, such as Steam, the Epic Games Store, or the Rockstar Games Launcher. Once you meet these requirements, launch GTA 5 from your platform and select the GTA Online option in the main menu. You can then create or join an online session with other players and enjoy GTA 5's multiplayer mode.</p></li>
<li><b>Q: How can I mod GTA 5?</b>
<p>A: Modding is the process of modifying a game or adding new content to it using third-party tools or software. Modding can improve a game's gameplay, graphics, features, or performance. However, modding is not officially supported or endorsed by Rockstar Games, and it can cause problems or conflicts with the game files or the online mode. Modding can also violate the game's or the platform's terms of service or user agreement and may result in bans or penalties. Modding is therefore done at your own risk and discretion. </p>
<p>If you still want to mod GTA 5, you need a backup of the original game files and a mod manager that can install and uninstall mods easily. You also need to find and download mods from reliable, trustworthy sources, such as [here]. You can then follow the instructions provided by the mod manager or the mod creator to install and activate the mods on your PC.</p></li>
</ul></p> 64aa2da5cf<br />
<br />
<br />

spaces/Benson/text-generation/Examples/Creacin Y Construccin Apk Hack.md
DELETED
@@ -1,67 +0,0 @@
<br />
<h1>Crafting and Building APK Hack: Everything You Need to Know</h1>
<p>If you are a fan of sandbox games, you may have heard of Crafting and Building, a free game that lets you create your own world out of blocks. You can explore, build, craft, and play with your friends in a game full of features and possibilities. But what if you want more fun and freedom in your game? What if you want unlimited resources, every item unlocked, and the game customized to your liking? That is where the Crafting and Building APK Hack comes in. </p>
<p>Crafting and Building APK Hack is a modified version of the original game that gives you access to many cheats and hacks that can enhance your gameplay. With this hack you can get unlimited coins, gems, diamonds, wood, stone, dirt, and any other resource you need to build whatever you want. You can also unlock every item in the game, such as weapons, armor, tools, furniture, animals, vehicles, and more. You can even change the game settings, such as the time of day, the weather, the difficulty level, and the game mode. You can do all of this without spending real money or watching any ads. </p>
<h2>crafting and building apk hack</h2><br /><p><b><b>DOWNLOAD</b> ····· <a href="https://bltlly.com/2v6L2s">https://bltlly.com/2v6L2s</a></b></p><br /><br />
<h2>How to download and install Crafting and Building APK Hack on your device</h2>
<p>If you are interested in trying Crafting and Building APK Hack, you will need to download and install it on your device. These are the steps to follow:</p>
<ol>
<li>Go to a trusted website that offers Crafting and Building APK Hack download links. You can search for them on Google or use one of these links: . Make sure the website is safe before downloading anything. </li>
<li>Download the Crafting and Building APK Hack file to your device. It should be an .apk file of around 100 MB.</li>

<li>Locate the Crafting and Building APK Hack file on your device. You can use a file manager app or go to your downloads folder. Tap the file and follow the instructions to install it. </li>
<li>Once the installation is complete, you can launch the game from the app drawer or the home screen. You should see a new icon labeled Crafting and Building Hack or something similar. </li>
</ol>
<h2>How to use Crafting and Building APK Hack to get unlimited resources, unlock every item, and customize your game</h2>
<p>Now that you have installed Crafting and Building APK Hack on your device, you can start using it to enjoy the game with more features and options. Here are some of the things you can do with this hack:</p>
<ul>
<li>To get unlimited resources such as coins, gems, diamonds, wood, stone, dirt, and so on, just tap the plus sign (+) next to each resource in the top right corner of the screen. This instantly adds 999999 units of that resource to your inventory. You can do this as many times as you want. </li>
<li>To unlock every item in the game, such as weapons, armor, tools, furniture, animals, vehicles, and so on, open the crafting menu by tapping the hammer icon in the bottom right corner of the screen. There you will see every item available in the game. You can craft any item with no resources or prerequisites required: simply tap the item you want and it will be added to your inventory. </li>

</ul>
<h2>The benefits and drawbacks of using Crafting and Building APK Hack</h2>
<p>Crafting and Building APK Hack can be a fun and exciting way to play the game with more freedom and possibilities. However, it has both benefits and drawbacks that you should be aware of before using it. Here are some of them:</p>
<table>
<tr>
<th>Benefits</th>
<th>Drawbacks</th>
</tr>
<tr>
<td>- You can get unlimited resources and items without spending money or watching any ads. </td>
<td>- You may lose the challenge and excitement of the game by having everything at your disposal. </td>
</tr>
<tr>
<td>- You can customize the game settings to suit your mood and style. </td>
<td>- You may run into bugs or glitches that affect the game's performance or stability. </td>
</tr>
<tr>
<td>- You can play with your friends online or offline in multiplayer mode. </td>
<td>- You may not be able to join some servers or games that do not allow hacked versions of the game. </td>
</tr>
</table>
<h2>The best alternatives to Crafting and Building APK Hack</h2>
<p>If you are looking for alternatives to Crafting and Building APK Hack, you may want to check out these other games that are similar in genre and gameplay:</p>
<ul>
<li><strong>Minecraft</strong>: This is the most popular and best-known sandbox game, and the one that inspired many others, including Crafting and Building. You can create your own world out of blocks, explore, craft, fight, and play with your friends in various modes and servers. You can also download mods and maps to enhance your gameplay. Minecraft is available on many platforms, including Windows, Mac, Linux, Android, iOS, Xbox, PlayStation, Nintendo Switch, and more. </li>

<li><strong>Terraria</strong>: This is a sandbox game that combines elements of action, adventure, exploration, crafting, building, and survival. You can dig, fight, build, and explore in a randomly generated 2D pixel world. You will also encounter various enemies, bosses, biomes, events, items, and NPCs. Terraria is available for Windows, Mac, Linux, Android, iOS, Xbox, PlayStation, Nintendo Switch, and more. </li>
</ul>
<h2>Conclusion: A summary of the main points and a call to action</h2>
<p>Crafting and Building APK Hack is a modified version of the original game that gives you access to many cheats and hacks that can enhance your gameplay. You can get unlimited resources, unlock every item, and customize the game settings with this hack. However, you should also weigh its benefits and drawbacks, as well as the best alternatives to it. If you want to try Crafting and Building APK Hack, you can follow the steps above to download and install it on your device. Have fun and enjoy the game! </p>
<p>If you liked this article, please share it with your friends and leave a comment below. You can also subscribe to our newsletter for more tips and tricks on games and technology. Thanks for reading! </p>
<h2>Frequently asked questions: Five common questions and answers about Crafting and Building APK Hack</h2>
<ol>
<li><strong>Is Crafting and Building APK Hack safe to use? </strong></li>
<p>Crafting and Building APK Hack is generally safe to use as long as you download it from a trusted website and scan it for viruses or malware before installing it. However, you should always be careful when downloading and installing any app or file from unknown sources, as it could contain harmful or unwanted content. You should also back up your data before using any hack, as it could cause problems or errors in your game or on your device. </p>
<li><strong>Is Crafting and Building APK Hack legal to use? </strong></li>

<li><strong>Does Crafting and Building APK Hack work on all devices? </strong></li>
<p>Crafting and Building APK Hack works on most Android devices that support the original game. However, it may not work on some devices with different specifications or compatibility issues. It may also fail on devices that have been updated to the latest version of the game or the operating system. You should therefore check your device's compatibility before downloading and installing this hack. </p>
<li><strong>Can I update Crafting and Building APK Hack? </strong></li>
<p>Crafting and Building APK Hack can be updated by downloading and installing the latest version of the hack from the same website you got it from. However, keep in mind that updating this hack may cause problems or errors in your game or on your device, and may make the hack incompatible with the original game or with other hacks. You should therefore back up your data before updating it. </p>
<li><strong>Can I uninstall Crafting and Building APK Hack? </strong></li>
<p>Crafting and Building APK Hack can be uninstalled by removing the app from your device. Go to your device settings, then Apps or Applications, then Crafting and Building Hack or something similar, tap the app, and select Uninstall or Remove. You can also delete the .apk file from your device if you still have it. However, keep in mind that uninstalling this hack may not restore your game data or settings to their original state, so back up your data before uninstalling it. </p>
</ol></p> 64aa2da5cf<br />
<br />
<br />

spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/build/metadata_editable.py
DELETED
@@ -1,41 +0,0 @@
"""Metadata generation logic for source distributions.
"""

import os

from pip._vendor.pyproject_hooks import BuildBackendHookCaller

from pip._internal.build_env import BuildEnvironment
from pip._internal.exceptions import (
    InstallationSubprocessError,
    MetadataGenerationFailed,
)
from pip._internal.utils.subprocess import runner_with_spinner_message
from pip._internal.utils.temp_dir import TempDirectory


def generate_editable_metadata(
    build_env: BuildEnvironment, backend: BuildBackendHookCaller, details: str
) -> str:
    """Generate metadata using mechanisms described in PEP 660.

    Returns the generated metadata directory.
    """
    metadata_tmpdir = TempDirectory(kind="modern-metadata", globally_managed=True)

    metadata_dir = metadata_tmpdir.path

    with build_env:
        # Note that BuildBackendHookCaller implements a fallback for
        # prepare_metadata_for_build_wheel/editable, so we don't have to
        # consider the possibility that this hook doesn't exist.
        runner = runner_with_spinner_message(
            "Preparing editable metadata (pyproject.toml)"
        )
        with backend.subprocess_runner(runner):
            try:
                distinfo_dir = backend.prepare_metadata_for_build_editable(metadata_dir)
            except InstallationSubprocessError as error:
                raise MetadataGenerationFailed(package_details=details) from error

    return os.path.join(metadata_dir, distinfo_dir)

spaces/CVH-vn1210/make_hair/minigpt4/common/dist_utils.py
DELETED
@@ -1,137 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
Copyright (c) 2022, salesforce.com, inc.
|
3 |
-
All rights reserved.
|
4 |
-
SPDX-License-Identifier: BSD-3-Clause
|
5 |
-
For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
|
6 |
-
"""
|
7 |
-
|
8 |
-
import datetime
|
9 |
-
import functools
|
10 |
-
import os
|
11 |
-
|
12 |
-
import torch
|
13 |
-
import torch.distributed as dist
|
14 |
-
import timm.models.hub as timm_hub
|
15 |
-
|
16 |
-
|
17 |
-
def setup_for_distributed(is_master):
|
18 |
-
"""
|
19 |
-
This function disables printing when not in master process
|
20 |
-
"""
|
21 |
-
import builtins as __builtin__
|
22 |
-
|
23 |
-
builtin_print = __builtin__.print
|
24 |
-
|
25 |
-
def print(*args, **kwargs):
|
26 |
-
force = kwargs.pop("force", False)
|
27 |
-
if is_master or force:
|
28 |
-
builtin_print(*args, **kwargs)
|
29 |
-
|
30 |
-
__builtin__.print = print
|
31 |
-
|
32 |
-
|
33 |
-
def is_dist_avail_and_initialized():
|
34 |
-
if not dist.is_available():
|
35 |
-
return False
|
36 |
-
if not dist.is_initialized():
|
37 |
-
return False
|
38 |
-
return True
|
39 |
-
|
40 |
-
|
41 |
-
def get_world_size():
|
42 |
-
if not is_dist_avail_and_initialized():
|
43 |
-
return 1
|
44 |
-
return dist.get_world_size()
|
45 |
-
|
46 |
-
|
47 |
-
def get_rank():
|
48 |
-
if not is_dist_avail_and_initialized():
|
49 |
-
return 0
|
50 |
-
return dist.get_rank()
|
51 |
-
|
52 |
-
|
53 |
-
def is_main_process():
|
54 |
-
return get_rank() == 0
|
55 |
-
|
56 |
-
|
57 |
-
def init_distributed_mode(args):
|
58 |
-
if "RANK" in os.environ and "WORLD_SIZE" in os.environ:
|
59 |
-
args.rank = int(os.environ["RANK"])
|
60 |
-
args.world_size = int(os.environ["WORLD_SIZE"])
|
61 |
-
args.gpu = int(os.environ["LOCAL_RANK"])
|
62 |
-
elif "SLURM_PROCID" in os.environ:
|
63 |
-
args.rank = int(os.environ["SLURM_PROCID"])
|
64 |
-
args.gpu = args.rank % torch.cuda.device_count()
|
65 |
-
else:
|
66 |
-
print("Not using distributed mode")
|
67 |
-
args.distributed = False
|
68 |
-
return
|
69 |
-
|
70 |
-
args.distributed = True
|
71 |
-
|
72 |
-
torch.cuda.set_device(args.gpu)
|
73 |
-
args.dist_backend = "nccl"
|
74 |
-
print(
|
75 |
-
"| distributed init (rank {}, world {}): {}".format(
|
76 |
-
args.rank, args.world_size, args.dist_url
|
77 |
-
),
|
78 |
-
flush=True,
|
79 |
-
)
|
80 |
-
torch.distributed.init_process_group(
|
81 |
-
backend=args.dist_backend,
|
82 |
-
init_method=args.dist_url,
|
83 |
-
world_size=args.world_size,
|
84 |
-
rank=args.rank,
|
85 |
-
timeout=datetime.timedelta(
|
86 |
-
days=365
|
87 |
-
), # allow auto-downloading and de-compressing
|
88 |
-
)
|
89 |
-
torch.distributed.barrier()
|
90 |
-
setup_for_distributed(args.rank == 0)
|
91 |
-
|
92 |
-
|
93 |
-
def get_dist_info():
|
94 |
-
if torch.__version__ < "1.0":
|
95 |
-
initialized = dist._initialized
|
96 |
-
else:
|
97 |
-
initialized = dist.is_initialized()
|
98 |
-
if initialized:
|
99 |
-
rank = dist.get_rank()
|
100 |
-
world_size = dist.get_world_size()
|
101 |
-
else: # non-distributed training
|
102 |
-
rank = 0
|
103 |
-
world_size = 1
|
104 |
-
return rank, world_size
|
105 |
-
|
106 |
-
|
107 |
-
def main_process(func):
|
108 |
-
@functools.wraps(func)
|
109 |
-
def wrapper(*args, **kwargs):
|
110 |
-
rank, _ = get_dist_info()
|
111 |
-
if rank == 0:
|
112 |
-
return func(*args, **kwargs)
|
113 |
-
|
114 |
-
return wrapper
|
115 |
-
|
116 |
-
|
117 |
-
def download_cached_file(url, check_hash=True, progress=False):
|
118 |
-
"""
|
119 |
-
Download a file from a URL and cache it locally. If the file already exists, it is not downloaded again.
|
120 |
-
If distributed, only the main process downloads the file, and the other processes wait for the file to be downloaded.
|
121 |
-
"""
|
122 |
-
|
123 |
-
def get_cached_file_path():
|
124 |
-
# a hack to sync the file path across processes
|
125 |
-
parts = torch.hub.urlparse(url)
|
126 |
-
filename = os.path.basename(parts.path)
|
127 |
-
cached_file = os.path.join(timm_hub.get_cache_dir(), filename)
|
128 |
-
|
129 |
-
return cached_file
|
130 |
-
|
131 |
-
if is_main_process():
|
132 |
-
timm_hub.download_cached_file(url, check_hash, progress)
|
133 |
-
|
134 |
-
if is_dist_avail_and_initialized():
|
135 |
-
dist.barrier()
|
136 |
-
|
137 |
-
return get_cached_file_path()
|
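For context, a minimal usage sketch of the rank helpers and the @main_process decorator defined above. This snippet is illustrative and not part of the commit: the import path is assumed from the file location (minigpt4/common/dist_utils.py), and save_checkpoint is a hypothetical function name.

from minigpt4.common.dist_utils import get_rank, get_world_size, main_process  # assumed import path

@main_process
def save_checkpoint(path):
    # Because of the decorator, this body runs only on rank 0; other ranks receive None.
    print(f"saving checkpoint to {path}")

def train_step():
    print(f"rank {get_rank()} of {get_world_size()} finished a step")
    save_checkpoint("/tmp/model.pth")  # hypothetical path; a no-op except on the main process
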
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/build.py
DELETED
@@ -1,397 +0,0 @@
|
|
1 |
-
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
|
2 |
-
import bisect
|
3 |
-
import copy
|
4 |
-
import itertools
|
5 |
-
import logging
|
6 |
-
import numpy as np
|
7 |
-
import operator
|
8 |
-
import pickle
|
9 |
-
import torch.utils.data
|
10 |
-
from fvcore.common.file_io import PathManager
|
11 |
-
from tabulate import tabulate
|
12 |
-
from termcolor import colored
|
13 |
-
|
14 |
-
from detectron2.structures import BoxMode
|
15 |
-
from detectron2.utils.comm import get_world_size
|
16 |
-
from detectron2.utils.env import seed_all_rng
|
17 |
-
from detectron2.utils.logger import log_first_n
|
18 |
-
|
19 |
-
from . import samplers
|
20 |
-
from .catalog import DatasetCatalog, MetadataCatalog
|
21 |
-
from .common import AspectRatioGroupedDataset, DatasetFromList, MapDataset
|
22 |
-
from .dataset_mapper import DatasetMapper
|
23 |
-
from .detection_utils import check_metadata_consistency
|
24 |
-
|
25 |
-
"""
|
26 |
-
This file contains the default logic to build a dataloader for training or testing.
|
27 |
-
"""
|
28 |
-
|
29 |
-
__all__ = [
|
30 |
-
"build_detection_train_loader",
|
31 |
-
"build_detection_test_loader",
|
32 |
-
"get_detection_dataset_dicts",
|
33 |
-
"load_proposals_into_dataset",
|
34 |
-
"print_instances_class_histogram",
|
35 |
-
]
|
36 |
-
|
37 |
-
|
38 |
-
def filter_images_with_only_crowd_annotations(dataset_dicts):
|
39 |
-
"""
|
40 |
-
Filter out images with none annotations or only crowd annotations
|
41 |
-
(i.e., images without non-crowd annotations).
|
42 |
-
A common training-time preprocessing on COCO dataset.
|
43 |
-
|
44 |
-
Args:
|
45 |
-
dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
|
46 |
-
|
47 |
-
Returns:
|
48 |
-
list[dict]: the same format, but filtered.
|
49 |
-
"""
|
50 |
-
num_before = len(dataset_dicts)
|
51 |
-
|
52 |
-
def valid(anns):
|
53 |
-
for ann in anns:
|
54 |
-
if ann.get("iscrowd", 0) == 0:
|
55 |
-
return True
|
56 |
-
return False
|
57 |
-
|
58 |
-
dataset_dicts = [x for x in dataset_dicts if valid(x["annotations"])]
|
59 |
-
num_after = len(dataset_dicts)
|
60 |
-
logger = logging.getLogger(__name__)
|
61 |
-
logger.info(
|
62 |
-
"Removed {} images with no usable annotations. {} images left.".format(
|
63 |
-
num_before - num_after, num_after
|
64 |
-
)
|
65 |
-
)
|
66 |
-
return dataset_dicts
|
67 |
-
|
68 |
-
|
69 |
-
def filter_images_with_few_keypoints(dataset_dicts, min_keypoints_per_image):
|
70 |
-
"""
|
71 |
-
Filter out images with too few number of keypoints.
|
72 |
-
|
73 |
-
Args:
|
74 |
-
dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
|
75 |
-
|
76 |
-
Returns:
|
77 |
-
list[dict]: the same format as dataset_dicts, but filtered.
|
78 |
-
"""
|
79 |
-
num_before = len(dataset_dicts)
|
80 |
-
|
81 |
-
def visible_keypoints_in_image(dic):
|
82 |
-
# Each keypoints field has the format [x1, y1, v1, ...], where v is visibility
|
83 |
-
annotations = dic["annotations"]
|
84 |
-
return sum(
|
85 |
-
(np.array(ann["keypoints"][2::3]) > 0).sum()
|
86 |
-
for ann in annotations
|
87 |
-
if "keypoints" in ann
|
88 |
-
)
|
89 |
-
|
90 |
-
dataset_dicts = [
|
91 |
-
x for x in dataset_dicts if visible_keypoints_in_image(x) >= min_keypoints_per_image
|
92 |
-
]
|
93 |
-
num_after = len(dataset_dicts)
|
94 |
-
logger = logging.getLogger(__name__)
|
95 |
-
logger.info(
|
96 |
-
"Removed {} images with fewer than {} keypoints.".format(
|
97 |
-
num_before - num_after, min_keypoints_per_image
|
98 |
-
)
|
99 |
-
)
|
100 |
-
return dataset_dicts
|
101 |
-
|
102 |
-
|
103 |
-
def load_proposals_into_dataset(dataset_dicts, proposal_file):
|
104 |
-
"""
|
105 |
-
Load precomputed object proposals into the dataset.
|
106 |
-
|
107 |
-
The proposal file should be a pickled dict with the following keys:
|
108 |
-
|
109 |
-
- "ids": list[int] or list[str], the image ids
|
110 |
-
- "boxes": list[np.ndarray], each is an Nx4 array of boxes corresponding to the image id
|
111 |
-
- "objectness_logits": list[np.ndarray], each is an N sized array of objectness scores
|
112 |
-
corresponding to the boxes.
|
113 |
-
- "bbox_mode": the BoxMode of the boxes array. Defaults to ``BoxMode.XYXY_ABS``.
|
114 |
-
|
115 |
-
Args:
|
116 |
-
dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
|
117 |
-
proposal_file (str): file path of pre-computed proposals, in pkl format.
|
118 |
-
|
119 |
-
Returns:
|
120 |
-
list[dict]: the same format as dataset_dicts, but added proposal field.
|
121 |
-
"""
|
122 |
-
logger = logging.getLogger(__name__)
|
123 |
-
logger.info("Loading proposals from: {}".format(proposal_file))
|
124 |
-
|
125 |
-
with PathManager.open(proposal_file, "rb") as f:
|
126 |
-
proposals = pickle.load(f, encoding="latin1")
|
127 |
-
|
128 |
-
# Rename the key names in D1 proposal files
|
129 |
-
rename_keys = {"indexes": "ids", "scores": "objectness_logits"}
|
130 |
-
for key in rename_keys:
|
131 |
-
if key in proposals:
|
132 |
-
proposals[rename_keys[key]] = proposals.pop(key)
|
133 |
-
|
134 |
-
# Fetch the indexes of all proposals that are in the dataset
|
135 |
-
# Convert image_id to str since they could be int.
|
136 |
-
img_ids = set({str(record["image_id"]) for record in dataset_dicts})
|
137 |
-
id_to_index = {str(id): i for i, id in enumerate(proposals["ids"]) if str(id) in img_ids}
|
138 |
-
|
139 |
-
# Assuming default bbox_mode of precomputed proposals are 'XYXY_ABS'
|
140 |
-
bbox_mode = BoxMode(proposals["bbox_mode"]) if "bbox_mode" in proposals else BoxMode.XYXY_ABS
|
141 |
-
|
142 |
-
for record in dataset_dicts:
|
143 |
-
# Get the index of the proposal
|
144 |
-
i = id_to_index[str(record["image_id"])]
|
145 |
-
|
146 |
-
boxes = proposals["boxes"][i]
|
147 |
-
objectness_logits = proposals["objectness_logits"][i]
|
148 |
-
# Sort the proposals in descending order of the scores
|
149 |
-
inds = objectness_logits.argsort()[::-1]
|
150 |
-
record["proposal_boxes"] = boxes[inds]
|
151 |
-
record["proposal_objectness_logits"] = objectness_logits[inds]
|
152 |
-
record["proposal_bbox_mode"] = bbox_mode
|
153 |
-
|
154 |
-
return dataset_dicts
|
155 |
-
|
156 |
-
|
157 |
-
def _quantize(x, bin_edges):
|
158 |
-
bin_edges = copy.copy(bin_edges)
|
159 |
-
bin_edges = sorted(bin_edges)
|
160 |
-
quantized = list(map(lambda y: bisect.bisect_right(bin_edges, y), x))
|
161 |
-
return quantized
|
162 |
-
|
163 |
-
|
164 |
-
def print_instances_class_histogram(dataset_dicts, class_names):
|
165 |
-
"""
|
166 |
-
Args:
|
167 |
-
dataset_dicts (list[dict]): list of dataset dicts.
|
168 |
-
class_names (list[str]): list of class names (zero-indexed).
|
169 |
-
"""
|
170 |
-
num_classes = len(class_names)
|
171 |
-
hist_bins = np.arange(num_classes + 1)
|
172 |
-
histogram = np.zeros((num_classes,), dtype=np.int)
|
173 |
-
for entry in dataset_dicts:
|
174 |
-
annos = entry["annotations"]
|
175 |
-
classes = [x["category_id"] for x in annos if not x.get("iscrowd", 0)]
|
176 |
-
histogram += np.histogram(classes, bins=hist_bins)[0]
|
177 |
-
|
178 |
-
N_COLS = min(6, len(class_names) * 2)
|
179 |
-
|
180 |
-
def short_name(x):
|
181 |
-
# make long class names shorter. useful for lvis
|
182 |
-
if len(x) > 13:
|
183 |
-
return x[:11] + ".."
|
184 |
-
return x
|
185 |
-
|
186 |
-
data = list(
|
187 |
-
itertools.chain(*[[short_name(class_names[i]), int(v)] for i, v in enumerate(histogram)])
|
188 |
-
)
|
189 |
-
total_num_instances = sum(data[1::2])
|
190 |
-
data.extend([None] * (N_COLS - (len(data) % N_COLS)))
|
191 |
-
if num_classes > 1:
|
192 |
-
data.extend(["total", total_num_instances])
|
193 |
-
data = itertools.zip_longest(*[data[i::N_COLS] for i in range(N_COLS)])
|
194 |
-
table = tabulate(
|
195 |
-
data,
|
196 |
-
headers=["category", "#instances"] * (N_COLS // 2),
|
197 |
-
tablefmt="pipe",
|
198 |
-
numalign="left",
|
199 |
-
stralign="center",
|
200 |
-
)
|
201 |
-
log_first_n(
|
202 |
-
logging.INFO,
|
203 |
-
"Distribution of instances among all {} categories:\n".format(num_classes)
|
204 |
-
+ colored(table, "cyan"),
|
205 |
-
key="message",
|
206 |
-
)
|
207 |
-
|
208 |
-
|
209 |
-
def get_detection_dataset_dicts(
|
210 |
-
dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None
|
211 |
-
):
|
212 |
-
"""
|
213 |
-
Load and prepare dataset dicts for instance detection/segmentation and semantic segmentation.
|
214 |
-
|
215 |
-
Args:
|
216 |
-
dataset_names (list[str]): a list of dataset names
|
217 |
-
filter_empty (bool): whether to filter out images without instance annotations
|
218 |
-
min_keypoints (int): filter out images with fewer keypoints than
|
219 |
-
`min_keypoints`. Set to 0 to do nothing.
|
220 |
-
proposal_files (list[str]): if given, a list of object proposal files
|
221 |
-
that match each dataset in `dataset_names`.
|
222 |
-
"""
|
223 |
-
assert len(dataset_names)
|
224 |
-
dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
|
225 |
-
for dataset_name, dicts in zip(dataset_names, dataset_dicts):
|
226 |
-
assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
|
227 |
-
|
228 |
-
if proposal_files is not None:
|
229 |
-
assert len(dataset_names) == len(proposal_files)
|
230 |
-
# load precomputed proposals from proposal files
|
231 |
-
dataset_dicts = [
|
232 |
-
load_proposals_into_dataset(dataset_i_dicts, proposal_file)
|
233 |
-
for dataset_i_dicts, proposal_file in zip(dataset_dicts, proposal_files)
|
234 |
-
]
|
235 |
-
|
236 |
-
dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts))
|
237 |
-
|
238 |
-
has_instances = "annotations" in dataset_dicts[0]
|
239 |
-
# Keep images without instance-level GT if the dataset has semantic labels.
|
240 |
-
if filter_empty and has_instances and "sem_seg_file_name" not in dataset_dicts[0]:
|
241 |
-
dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts)
|
242 |
-
|
243 |
-
if min_keypoints > 0 and has_instances:
|
244 |
-
dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints)
|
245 |
-
|
246 |
-
if has_instances:
|
247 |
-
try:
|
248 |
-
class_names = MetadataCatalog.get(dataset_names[0]).thing_classes
|
249 |
-
check_metadata_consistency("thing_classes", dataset_names)
|
250 |
-
print_instances_class_histogram(dataset_dicts, class_names)
|
251 |
-
except AttributeError: # class names are not available for this dataset
|
252 |
-
pass
|
253 |
-
return dataset_dicts
|
254 |
-
|
255 |
-
|
256 |
-
def build_detection_train_loader(cfg, mapper=None):
|
257 |
-
"""
|
258 |
-
A data loader is created by the following steps:
|
259 |
-
|
260 |
-
1. Use the dataset names in config to query :class:`DatasetCatalog`, and obtain a list of dicts.
|
261 |
-
2. Start workers to work on the dicts. Each worker will:
|
262 |
-
|
263 |
-
* Map each metadata dict into another format to be consumed by the model.
|
264 |
-
* Batch them by simply putting dicts into a list.
|
265 |
-
|
266 |
-
The batched ``list[mapped_dict]`` is what this dataloader will return.
|
267 |
-
|
268 |
-
Args:
|
269 |
-
cfg (CfgNode): the config
|
270 |
-
mapper (callable): a callable which takes a sample (dict) from dataset and
|
271 |
-
returns the format to be consumed by the model.
|
272 |
-
By default it will be `DatasetMapper(cfg, True)`.
|
273 |
-
|
274 |
-
Returns:
|
275 |
-
an infinite iterator of training data
|
276 |
-
"""
|
277 |
-
num_workers = get_world_size()
|
278 |
-
images_per_batch = cfg.SOLVER.IMS_PER_BATCH
|
279 |
-
assert (
|
280 |
-
images_per_batch % num_workers == 0
|
281 |
-
), "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number of workers ({}).".format(
|
282 |
-
images_per_batch, num_workers
|
283 |
-
)
|
284 |
-
assert (
|
285 |
-
images_per_batch >= num_workers
|
286 |
-
), "SOLVER.IMS_PER_BATCH ({}) must be larger than the number of workers ({}).".format(
|
287 |
-
images_per_batch, num_workers
|
288 |
-
)
|
289 |
-
images_per_worker = images_per_batch // num_workers
|
290 |
-
|
291 |
-
dataset_dicts = get_detection_dataset_dicts(
|
292 |
-
cfg.DATASETS.TRAIN,
|
293 |
-
filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
|
294 |
-
min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
|
295 |
-
if cfg.MODEL.KEYPOINT_ON
|
296 |
-
else 0,
|
297 |
-
proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
|
298 |
-
)
|
299 |
-
dataset = DatasetFromList(dataset_dicts, copy=False)
|
300 |
-
|
301 |
-
if mapper is None:
|
302 |
-
mapper = DatasetMapper(cfg, True)
|
303 |
-
dataset = MapDataset(dataset, mapper)
|
304 |
-
|
305 |
-
sampler_name = cfg.DATALOADER.SAMPLER_TRAIN
|
306 |
-
logger = logging.getLogger(__name__)
|
307 |
-
logger.info("Using training sampler {}".format(sampler_name))
|
308 |
-
if sampler_name == "TrainingSampler":
|
309 |
-
sampler = samplers.TrainingSampler(len(dataset))
|
310 |
-
elif sampler_name == "RepeatFactorTrainingSampler":
|
311 |
-
sampler = samplers.RepeatFactorTrainingSampler(
|
312 |
-
dataset_dicts, cfg.DATALOADER.REPEAT_THRESHOLD
|
313 |
-
)
|
314 |
-
else:
|
315 |
-
raise ValueError("Unknown training sampler: {}".format(sampler_name))
|
316 |
-
|
317 |
-
if cfg.DATALOADER.ASPECT_RATIO_GROUPING:
|
318 |
-
data_loader = torch.utils.data.DataLoader(
|
319 |
-
dataset,
|
320 |
-
sampler=sampler,
|
321 |
-
num_workers=cfg.DATALOADER.NUM_WORKERS,
|
322 |
-
batch_sampler=None,
|
323 |
-
collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements
|
324 |
-
worker_init_fn=worker_init_reset_seed,
|
325 |
-
) # yield individual mapped dict
|
326 |
-
data_loader = AspectRatioGroupedDataset(data_loader, images_per_worker)
|
327 |
-
else:
|
328 |
-
batch_sampler = torch.utils.data.sampler.BatchSampler(
|
329 |
-
sampler, images_per_worker, drop_last=True
|
330 |
-
)
|
331 |
-
# drop_last so the batch always have the same size
|
332 |
-
data_loader = torch.utils.data.DataLoader(
|
333 |
-
dataset,
|
334 |
-
num_workers=cfg.DATALOADER.NUM_WORKERS,
|
335 |
-
batch_sampler=batch_sampler,
|
336 |
-
collate_fn=trivial_batch_collator,
|
337 |
-
worker_init_fn=worker_init_reset_seed,
|
338 |
-
)
|
339 |
-
|
340 |
-
return data_loader
|
341 |
-
|
342 |
-
|
343 |
-
def build_detection_test_loader(cfg, dataset_name, mapper=None):
|
344 |
-
"""
|
345 |
-
Similar to `build_detection_train_loader`.
|
346 |
-
But this function uses the given `dataset_name` argument (instead of the names in cfg),
|
347 |
-
and uses batch size 1.
|
348 |
-
|
349 |
-
Args:
|
350 |
-
cfg: a detectron2 CfgNode
|
351 |
-
dataset_name (str): a name of the dataset that's available in the DatasetCatalog
|
352 |
-
mapper (callable): a callable which takes a sample (dict) from dataset
|
353 |
-
and returns the format to be consumed by the model.
|
354 |
-
By default it will be `DatasetMapper(cfg, False)`.
|
355 |
-
|
356 |
-
Returns:
|
357 |
-
DataLoader: a torch DataLoader, that loads the given detection
|
358 |
-
dataset, with test-time transformation and batching.
|
359 |
-
"""
|
360 |
-
dataset_dicts = get_detection_dataset_dicts(
|
361 |
-
[dataset_name],
|
362 |
-
filter_empty=False,
|
363 |
-
proposal_files=[
|
364 |
-
cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(dataset_name)]
|
365 |
-
]
|
366 |
-
if cfg.MODEL.LOAD_PROPOSALS
|
367 |
-
else None,
|
368 |
-
)
|
369 |
-
|
370 |
-
dataset = DatasetFromList(dataset_dicts)
|
371 |
-
if mapper is None:
|
372 |
-
mapper = DatasetMapper(cfg, False)
|
373 |
-
dataset = MapDataset(dataset, mapper)
|
374 |
-
|
375 |
-
sampler = samplers.InferenceSampler(len(dataset))
|
376 |
-
# Always use 1 image per worker during inference since this is the
|
377 |
-
# standard when reporting inference time in papers.
|
378 |
-
batch_sampler = torch.utils.data.sampler.BatchSampler(sampler, 1, drop_last=False)
|
379 |
-
|
380 |
-
data_loader = torch.utils.data.DataLoader(
|
381 |
-
dataset,
|
382 |
-
num_workers=cfg.DATALOADER.NUM_WORKERS,
|
383 |
-
batch_sampler=batch_sampler,
|
384 |
-
collate_fn=trivial_batch_collator,
|
385 |
-
)
|
386 |
-
return data_loader
|
387 |
-
|
388 |
-
|
389 |
-
def trivial_batch_collator(batch):
|
390 |
-
"""
|
391 |
-
A batch collator that does nothing.
|
392 |
-
"""
|
393 |
-
return batch
|
394 |
-
|
395 |
-
|
396 |
-
def worker_init_reset_seed(worker_id):
|
397 |
-
seed_all_rng(np.random.randint(2 ** 31) + worker_id)
|
|
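A hedged sketch of how the builders above are usually driven from a config. It assumes a detectron2 installation and a registered, downloaded dataset; none of it is taken from this repository.

from detectron2.config import get_cfg
from detectron2.data import build_detection_train_loader, build_detection_test_loader

cfg = get_cfg()                                    # default config (SOLVER.IMS_PER_BATCH, DATALOADER.*, ...)
cfg.DATASETS.TRAIN = ("coco_2017_train",)          # assumes COCO is registered and present on disk
cfg.DATASETS.TEST = ("coco_2017_val",)

train_loader = build_detection_train_loader(cfg)                 # infinite iterator of mapped, batched dicts
test_loader = build_detection_test_loader(cfg, "coco_2017_val")  # one image per batch, in dataset order

for batch in train_loader:
    # each element of `batch` is a dict produced by DatasetMapper
    break
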
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/solver/build.py
DELETED
@@ -1,163 +0,0 @@
|
|
1 |
-
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
|
2 |
-
from enum import Enum
|
3 |
-
from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union
|
4 |
-
import torch
|
5 |
-
|
6 |
-
from detectron2.config import CfgNode
|
7 |
-
|
8 |
-
from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR
|
9 |
-
|
10 |
-
_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]]
|
11 |
-
_GradientClipper = Callable[[_GradientClipperInput], None]
|
12 |
-
|
13 |
-
|
14 |
-
class GradientClipType(Enum):
|
15 |
-
VALUE = "value"
|
16 |
-
NORM = "norm"
|
17 |
-
|
18 |
-
|
19 |
-
def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper:
|
20 |
-
"""
|
21 |
-
Creates gradient clipping closure to clip by value or by norm,
|
22 |
-
according to the provided config.
|
23 |
-
"""
|
24 |
-
cfg = cfg.clone()
|
25 |
-
|
26 |
-
def clip_grad_norm(p: _GradientClipperInput):
|
27 |
-
torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE)
|
28 |
-
|
29 |
-
def clip_grad_value(p: _GradientClipperInput):
|
30 |
-
torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE)
|
31 |
-
|
32 |
-
_GRADIENT_CLIP_TYPE_TO_CLIPPER = {
|
33 |
-
GradientClipType.VALUE: clip_grad_value,
|
34 |
-
GradientClipType.NORM: clip_grad_norm,
|
35 |
-
}
|
36 |
-
return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)]
|
37 |
-
|
38 |
-
|
39 |
-
def _generate_optimizer_class_with_gradient_clipping(
|
40 |
-
optimizer_type: Type[torch.optim.Optimizer], gradient_clipper: _GradientClipper
|
41 |
-
) -> Type[torch.optim.Optimizer]:
|
42 |
-
"""
|
43 |
-
Dynamically creates a new type that inherits the type of a given instance
|
44 |
-
and overrides the `step` method to add gradient clipping
|
45 |
-
"""
|
46 |
-
|
47 |
-
def optimizer_wgc_step(self, closure=None):
|
48 |
-
for group in self.param_groups:
|
49 |
-
for p in group["params"]:
|
50 |
-
gradient_clipper(p)
|
51 |
-
super(type(self), self).step(closure)
|
52 |
-
|
53 |
-
OptimizerWithGradientClip = type(
|
54 |
-
optimizer_type.__name__ + "WithGradientClip",
|
55 |
-
(optimizer_type,),
|
56 |
-
{"step": optimizer_wgc_step},
|
57 |
-
)
|
58 |
-
return OptimizerWithGradientClip
|
59 |
-
|
60 |
-
|
61 |
-
def maybe_add_gradient_clipping(
|
62 |
-
cfg: CfgNode, optimizer: torch.optim.Optimizer
|
63 |
-
) -> torch.optim.Optimizer:
|
64 |
-
"""
|
65 |
-
If gradient clipping is enabled through config options, wraps the existing
|
66 |
-
optimizer instance of some type OptimizerType to become an instance
|
67 |
-
of the new dynamically created class OptimizerTypeWithGradientClip
|
68 |
-
that inherits OptimizerType and overrides the `step` method to
|
69 |
-
include gradient clipping.
|
70 |
-
|
71 |
-
Args:
|
72 |
-
cfg: CfgNode
|
73 |
-
configuration options
|
74 |
-
optimizer: torch.optim.Optimizer
|
75 |
-
existing optimizer instance
|
76 |
-
|
77 |
-
Return:
|
78 |
-
optimizer: torch.optim.Optimizer
|
79 |
-
either the unmodified optimizer instance (if gradient clipping is
|
80 |
-
disabled), or the same instance with adjusted __class__ to override
|
81 |
-
the `step` method and include gradient clipping
|
82 |
-
"""
|
83 |
-
if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED:
|
84 |
-
return optimizer
|
85 |
-
grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS)
|
86 |
-
OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping(
|
87 |
-
type(optimizer), grad_clipper
|
88 |
-
)
|
89 |
-
optimizer.__class__ = OptimizerWithGradientClip
|
90 |
-
return optimizer
|
91 |
-
|
92 |
-
|
93 |
-
def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer:
|
94 |
-
"""
|
95 |
-
Build an optimizer from config.
|
96 |
-
"""
|
97 |
-
norm_module_types = (
|
98 |
-
torch.nn.BatchNorm1d,
|
99 |
-
torch.nn.BatchNorm2d,
|
100 |
-
torch.nn.BatchNorm3d,
|
101 |
-
torch.nn.SyncBatchNorm,
|
102 |
-
# NaiveSyncBatchNorm inherits from BatchNorm2d
|
103 |
-
torch.nn.GroupNorm,
|
104 |
-
torch.nn.InstanceNorm1d,
|
105 |
-
torch.nn.InstanceNorm2d,
|
106 |
-
torch.nn.InstanceNorm3d,
|
107 |
-
torch.nn.LayerNorm,
|
108 |
-
torch.nn.LocalResponseNorm,
|
109 |
-
)
|
110 |
-
params: List[Dict[str, Any]] = []
|
111 |
-
memo: Set[torch.nn.parameter.Parameter] = set()
|
112 |
-
for module in model.modules():
|
113 |
-
for key, value in module.named_parameters(recurse=False):
|
114 |
-
if not value.requires_grad:
|
115 |
-
continue
|
116 |
-
# Avoid duplicating parameters
|
117 |
-
if value in memo:
|
118 |
-
continue
|
119 |
-
memo.add(value)
|
120 |
-
lr = cfg.SOLVER.BASE_LR
|
121 |
-
weight_decay = cfg.SOLVER.WEIGHT_DECAY
|
122 |
-
if isinstance(module, norm_module_types):
|
123 |
-
weight_decay = cfg.SOLVER.WEIGHT_DECAY_NORM
|
124 |
-
elif key == "bias":
|
125 |
-
# NOTE: unlike Detectron v1, we now default BIAS_LR_FACTOR to 1.0
|
126 |
-
# and WEIGHT_DECAY_BIAS to WEIGHT_DECAY so that bias optimizer
|
127 |
-
# hyperparameters are by default exactly the same as for regular
|
128 |
-
# weights.
|
129 |
-
lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BIAS_LR_FACTOR
|
130 |
-
weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS
|
131 |
-
params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}]
|
132 |
-
|
133 |
-
optimizer = torch.optim.SGD(params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM)
|
134 |
-
optimizer = maybe_add_gradient_clipping(cfg, optimizer)
|
135 |
-
return optimizer
|
136 |
-
|
137 |
-
|
138 |
-
def build_lr_scheduler(
|
139 |
-
cfg: CfgNode, optimizer: torch.optim.Optimizer
|
140 |
-
) -> torch.optim.lr_scheduler._LRScheduler:
|
141 |
-
"""
|
142 |
-
Build a LR scheduler from config.
|
143 |
-
"""
|
144 |
-
name = cfg.SOLVER.LR_SCHEDULER_NAME
|
145 |
-
if name == "WarmupMultiStepLR":
|
146 |
-
return WarmupMultiStepLR(
|
147 |
-
optimizer,
|
148 |
-
cfg.SOLVER.STEPS,
|
149 |
-
cfg.SOLVER.GAMMA,
|
150 |
-
warmup_factor=cfg.SOLVER.WARMUP_FACTOR,
|
151 |
-
warmup_iters=cfg.SOLVER.WARMUP_ITERS,
|
152 |
-
warmup_method=cfg.SOLVER.WARMUP_METHOD,
|
153 |
-
)
|
154 |
-
elif name == "WarmupCosineLR":
|
155 |
-
return WarmupCosineLR(
|
156 |
-
optimizer,
|
157 |
-
cfg.SOLVER.MAX_ITER,
|
158 |
-
warmup_factor=cfg.SOLVER.WARMUP_FACTOR,
|
159 |
-
warmup_iters=cfg.SOLVER.WARMUP_ITERS,
|
160 |
-
warmup_method=cfg.SOLVER.WARMUP_METHOD,
|
161 |
-
)
|
162 |
-
else:
|
163 |
-
raise ValueError("Unknown LR scheduler: {}".format(name))
|
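A hedged sketch of how build_optimizer and build_lr_scheduler plug into a training loop; the model is a stand-in torchvision network used only for illustration, not something defined in this commit.

import torch
import torchvision
from detectron2.config import get_cfg
from detectron2.solver import build_optimizer, build_lr_scheduler

cfg = get_cfg()                                  # defaults supply SOLVER.BASE_LR, STEPS, WARMUP_*, etc.
model = torchvision.models.resnet18()            # placeholder model

optimizer = build_optimizer(cfg, model)          # SGD with the per-parameter lr / weight-decay rules above
scheduler = build_lr_scheduler(cfg, optimizer)   # WarmupMultiStepLR under the default config

for _ in range(5):
    optimizer.zero_grad()
    loss = model(torch.randn(2, 3, 224, 224)).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()                             # one scheduler step per iteration, as detectron2 expects
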
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/model_loader.py
DELETED
@@ -1,27 +0,0 @@
# --------------------------------------------------------
# OpenVQA
# Written by Yuhao Cui https://github.com/cuiyuhao1996
# --------------------------------------------------------

from importlib import import_module


class ModelLoader:
    def __init__(self, __C):

        self.model_use = __C.MODEL_USE
        model_moudle_path = 'openvqa.models.' + self.model_use + '.net'
        self.model_moudle = import_module(model_moudle_path)

    def Net(self, __arg1, __arg2, __arg3, __arg4):
        return self.model_moudle.Net(__arg1, __arg2, __arg3, __arg4)


class CfgLoader:
    def __init__(self, model_use):

        cfg_moudle_path = 'openvqa.models.' + model_use + '.model_cfgs'
        self.cfg_moudle = import_module(cfg_moudle_path)

    def load(self):
        return self.cfg_moudle.Cfgs()
|
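Both loaders rely on importlib.import_module to resolve a module path built from a string at runtime. A self-contained illustration of that pattern follows; it uses a stdlib module so it runs anywhere, whereas the real project would substitute a path such as 'openvqa.models.<MODEL_USE>.net'.

from importlib import import_module

mod = import_module("json")                   # module chosen from a string at runtime
print(mod.dumps({"loaded": mod.__name__}))    # -> {"loaded": "json"}

# ModelLoader does the same with 'openvqa.models.' + MODEL_USE + '.net' and then
# expects the resolved module to expose a Net class.
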
spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/pose_model_identifier.py
DELETED
@@ -1,103 +0,0 @@
|
|
1 |
-
import pandas as pd
|
2 |
-
|
3 |
-
BODY_IDENTIFIERS = {
|
4 |
-
"nose": 0,
|
5 |
-
"neck": -1,
|
6 |
-
"rightEye": 5,
|
7 |
-
"leftEye": 2,
|
8 |
-
"rightEar": 8,
|
9 |
-
"leftEar": 7,
|
10 |
-
"rightShoulder": 12,
|
11 |
-
"leftShoulder": 11,
|
12 |
-
"rightElbow": 14,
|
13 |
-
"leftElbow": 13,
|
14 |
-
"rightWrist": 16,
|
15 |
-
"leftWrist": 15
|
16 |
-
}
|
17 |
-
HAND_IDENTIFIERS = {
|
18 |
-
"wrist": 0,
|
19 |
-
"indexTip": 8,
|
20 |
-
"indexDIP": 7,
|
21 |
-
"indexPIP": 6,
|
22 |
-
"indexMCP": 5,
|
23 |
-
"middleTip": 12,
|
24 |
-
"middleDIP": 11,
|
25 |
-
"middlePIP": 10,
|
26 |
-
"middleMCP": 9,
|
27 |
-
"ringTip": 16,
|
28 |
-
"ringDIP": 15,
|
29 |
-
"ringPIP": 14,
|
30 |
-
"ringMCP": 13,
|
31 |
-
"littleTip": 20,
|
32 |
-
"littleDIP": 19,
|
33 |
-
"littlePIP": 18,
|
34 |
-
"littleMCP": 17,
|
35 |
-
"thumbTip": 4,
|
36 |
-
"thumbIP": 3,
|
37 |
-
"thumbMP": 2,
|
38 |
-
"thumbCMC": 1
|
39 |
-
}
|
40 |
-
|
41 |
-
|
42 |
-
class mp_holistic_data:
|
43 |
-
def __init__(self, column_names):
|
44 |
-
self.data_hub = {}
|
45 |
-
for n in column_names[1:-1]:
|
46 |
-
self.data_hub[n] = []
|
47 |
-
|
48 |
-
def hand_append_zero(self, handedness):
|
49 |
-
for k in self.data_hub.keys():
|
50 |
-
if "_" + handedness + "_" in k:
|
51 |
-
self.data_hub[k].append(0)
|
52 |
-
|
53 |
-
def hand_append_value(self, handedness, hand_landmarks):
|
54 |
-
for name, lm_idx in HAND_IDENTIFIERS.items():
|
55 |
-
lm = hand_landmarks.landmark[lm_idx]
|
56 |
-
for xy, xy_value in zip(['_X', '_Y'], [lm.x, lm.y]):
|
57 |
-
k = name + '_' + handedness + xy
|
58 |
-
self.data_hub[k].append(xy_value)
|
59 |
-
|
60 |
-
def get_series(self):
|
61 |
-
return pd.Series(self.data_hub)
|
62 |
-
|
63 |
-
def extract_data(self, holistic_results):
|
64 |
-
def neck(pose_results):
|
65 |
-
ls = pose_results.pose_landmarks.landmark[11]
|
66 |
-
rs = pose_results.pose_landmarks.landmark[12]
|
67 |
-
no = pose_results.pose_landmarks.landmark[0]
|
68 |
-
if (ls.visibility > 0.5) & (rs.visibility > 0.5) & (no.visibility > 0.5):
|
69 |
-
# This indicates the neck better. But it does not affect the result.
|
70 |
-
cx = (ls.x + rs.x) / 2
|
71 |
-
cy = (ls.y + rs.y) / 2
|
72 |
-
dx = no.x - cx
|
73 |
-
dy = no.y - cy
|
74 |
-
x = cx + 0.3 * dx
|
75 |
-
y = cy + 0.3 * dy
|
76 |
-
# x = (ls.x+rs.x)/2
|
77 |
-
# y = (ls.y+rs.y)/2
|
78 |
-
else:
|
79 |
-
x = 0
|
80 |
-
y = 0
|
81 |
-
return [x, y]
|
82 |
-
|
83 |
-
# for the frame that can not extract skeleton from
|
84 |
-
if not holistic_results.pose_landmarks:
|
85 |
-
return
|
86 |
-
for name, lm_idx in BODY_IDENTIFIERS.items():
|
87 |
-
if name == "neck":
|
88 |
-
xy_value = neck(holistic_results)
|
89 |
-
else:
|
90 |
-
lm = holistic_results.pose_landmarks.landmark[lm_idx]
|
91 |
-
visible = float(lm.visibility >= 0.5)
|
92 |
-
xy_value = [lm.x * visible, lm.y * visible]
|
93 |
-
for xy_id, xy in zip(['_X', '_Y'], xy_value):
|
94 |
-
s_name = name + xy_id
|
95 |
-
self.data_hub[s_name].append(xy)
|
96 |
-
|
97 |
-
for handedness, lm in zip(['Right', 'Left'],
|
98 |
-
[holistic_results.right_hand_landmarks, holistic_results.left_hand_landmarks]):
|
99 |
-
if lm:
|
100 |
-
self.hand_append_value(handedness, lm)
|
101 |
-
else:
|
102 |
-
self.hand_append_zero(handedness)
|
103 |
-
return
|
spaces/CVPR/lama-example/models/ade20k/segm_lib/nn/parallel/data_parallel.py
DELETED
@@ -1,112 +0,0 @@
|
|
1 |
-
# -*- coding: utf8 -*-
|
2 |
-
|
3 |
-
import torch.cuda as cuda
|
4 |
-
import torch.nn as nn
|
5 |
-
import torch
|
6 |
-
import collections
|
7 |
-
from torch.nn.parallel._functions import Gather
|
8 |
-
|
9 |
-
|
10 |
-
__all__ = ['UserScatteredDataParallel', 'user_scattered_collate', 'async_copy_to']
|
11 |
-
|
12 |
-
|
13 |
-
def async_copy_to(obj, dev, main_stream=None):
|
14 |
-
if torch.is_tensor(obj):
|
15 |
-
v = obj.cuda(dev, non_blocking=True)
|
16 |
-
if main_stream is not None:
|
17 |
-
v.data.record_stream(main_stream)
|
18 |
-
return v
|
19 |
-
elif isinstance(obj, collections.Mapping):
|
20 |
-
return {k: async_copy_to(o, dev, main_stream) for k, o in obj.items()}
|
21 |
-
elif isinstance(obj, collections.Sequence):
|
22 |
-
return [async_copy_to(o, dev, main_stream) for o in obj]
|
23 |
-
else:
|
24 |
-
return obj
|
25 |
-
|
26 |
-
|
27 |
-
def dict_gather(outputs, target_device, dim=0):
|
28 |
-
"""
|
29 |
-
Gathers variables from different GPUs on a specified device
|
30 |
-
(-1 means the CPU), with dictionary support.
|
31 |
-
"""
|
32 |
-
def gather_map(outputs):
|
33 |
-
out = outputs[0]
|
34 |
-
if torch.is_tensor(out):
|
35 |
-
# MJY(20180330) HACK:: force nr_dims > 0
|
36 |
-
if out.dim() == 0:
|
37 |
-
outputs = [o.unsqueeze(0) for o in outputs]
|
38 |
-
return Gather.apply(target_device, dim, *outputs)
|
39 |
-
elif out is None:
|
40 |
-
return None
|
41 |
-
elif isinstance(out, collections.Mapping):
|
42 |
-
return {k: gather_map([o[k] for o in outputs]) for k in out}
|
43 |
-
elif isinstance(out, collections.Sequence):
|
44 |
-
return type(out)(map(gather_map, zip(*outputs)))
|
45 |
-
return gather_map(outputs)
|
46 |
-
|
47 |
-
|
48 |
-
class DictGatherDataParallel(nn.DataParallel):
|
49 |
-
def gather(self, outputs, output_device):
|
50 |
-
return dict_gather(outputs, output_device, dim=self.dim)
|
51 |
-
|
52 |
-
|
53 |
-
class UserScatteredDataParallel(DictGatherDataParallel):
|
54 |
-
def scatter(self, inputs, kwargs, device_ids):
|
55 |
-
assert len(inputs) == 1
|
56 |
-
inputs = inputs[0]
|
57 |
-
inputs = _async_copy_stream(inputs, device_ids)
|
58 |
-
inputs = [[i] for i in inputs]
|
59 |
-
assert len(kwargs) == 0
|
60 |
-
kwargs = [{} for _ in range(len(inputs))]
|
61 |
-
|
62 |
-
return inputs, kwargs
|
63 |
-
|
64 |
-
|
65 |
-
def user_scattered_collate(batch):
|
66 |
-
return batch
|
67 |
-
|
68 |
-
|
69 |
-
def _async_copy(inputs, device_ids):
|
70 |
-
nr_devs = len(device_ids)
|
71 |
-
assert type(inputs) in (tuple, list)
|
72 |
-
assert len(inputs) == nr_devs
|
73 |
-
|
74 |
-
outputs = []
|
75 |
-
for i, dev in zip(inputs, device_ids):
|
76 |
-
with cuda.device(dev):
|
77 |
-
outputs.append(async_copy_to(i, dev))
|
78 |
-
|
79 |
-
return tuple(outputs)
|
80 |
-
|
81 |
-
|
82 |
-
def _async_copy_stream(inputs, device_ids):
|
83 |
-
nr_devs = len(device_ids)
|
84 |
-
assert type(inputs) in (tuple, list)
|
85 |
-
assert len(inputs) == nr_devs
|
86 |
-
|
87 |
-
outputs = []
|
88 |
-
streams = [_get_stream(d) for d in device_ids]
|
89 |
-
for i, dev, stream in zip(inputs, device_ids, streams):
|
90 |
-
with cuda.device(dev):
|
91 |
-
main_stream = cuda.current_stream()
|
92 |
-
with cuda.stream(stream):
|
93 |
-
outputs.append(async_copy_to(i, dev, main_stream=main_stream))
|
94 |
-
main_stream.wait_stream(stream)
|
95 |
-
|
96 |
-
return outputs
|
97 |
-
|
98 |
-
|
99 |
-
"""Adapted from: torch/nn/parallel/_functions.py"""
|
100 |
-
# background streams used for copying
|
101 |
-
_streams = None
|
102 |
-
|
103 |
-
|
104 |
-
def _get_stream(device):
|
105 |
-
"""Gets a background stream for copying between CPU and GPU"""
|
106 |
-
global _streams
|
107 |
-
if device == -1:
|
108 |
-
return None
|
109 |
-
if _streams is None:
|
110 |
-
_streams = [None] * cuda.device_count()
|
111 |
-
if _streams[device] is None: _streams[device] = cuda.Stream(device)
|
112 |
-
return _streams[device]
|
spaces/CVPR/transfiner/configs/common/models/mask_rcnn_fpn.py
DELETED
@@ -1,93 +0,0 @@
-from detectron2.config import LazyCall as L
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.meta_arch import GeneralizedRCNN
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator
-from detectron2.modeling.backbone.fpn import LastLevelMaxPool
-from detectron2.modeling.backbone import BasicStem, FPN, ResNet
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.matcher import Matcher
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.proposal_generator import RPN, StandardRPNHead
-from detectron2.modeling.roi_heads import (
-    StandardROIHeads,
-    FastRCNNOutputLayers,
-    MaskRCNNConvUpsampleHead,
-    FastRCNNConvFCHead,
-)
-
-model = L(GeneralizedRCNN)(
-    backbone=L(FPN)(
-        bottom_up=L(ResNet)(
-            stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"),
-            stages=L(ResNet.make_default_stages)(
-                depth=50,
-                stride_in_1x1=True,
-                norm="FrozenBN",
-            ),
-            out_features=["res2", "res3", "res4", "res5"],
-        ),
-        in_features="${.bottom_up.out_features}",
-        out_channels=256,
-        top_block=L(LastLevelMaxPool)(),
-    ),
-    proposal_generator=L(RPN)(
-        in_features=["p2", "p3", "p4", "p5", "p6"],
-        head=L(StandardRPNHead)(in_channels=256, num_anchors=3),
-        anchor_generator=L(DefaultAnchorGenerator)(
-            sizes=[[32], [64], [128], [256], [512]],
-            aspect_ratios=[0.5, 1.0, 2.0],
-            strides=[4, 8, 16, 32, 64],
-            offset=0.0,
-        ),
-        anchor_matcher=L(Matcher)(
-            thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True
-        ),
-        box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]),
-        batch_size_per_image=256,
-        positive_fraction=0.5,
-        pre_nms_topk=(2000, 1000),
-        post_nms_topk=(1000, 1000),
-        nms_thresh=0.7,
-    ),
-    roi_heads=L(StandardROIHeads)(
-        num_classes=80,
-        batch_size_per_image=512,
-        positive_fraction=0.25,
-        proposal_matcher=L(Matcher)(
-            thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False
-        ),
-        box_in_features=["p2", "p3", "p4", "p5"],
-        box_pooler=L(ROIPooler)(
-            output_size=7,
-            scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
-            sampling_ratio=0,
-            pooler_type="ROIAlignV2",
-        ),
-        box_head=L(FastRCNNConvFCHead)(
-            input_shape=ShapeSpec(channels=256, height=7, width=7),
-            conv_dims=[],
-            fc_dims=[1024, 1024],
-        ),
-        box_predictor=L(FastRCNNOutputLayers)(
-            input_shape=ShapeSpec(channels=1024),
-            test_score_thresh=0.05,
-            box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)),
-            num_classes="${..num_classes}",
-        ),
-        mask_in_features=["p2", "p3", "p4", "p5"],
-        mask_pooler=L(ROIPooler)(
-            output_size=14,  # ori is 14
-            scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32),
-            sampling_ratio=0,
-            pooler_type="ROIAlignV2",
-        ),
-        mask_head=L(MaskRCNNConvUpsampleHead)(
-            input_shape=ShapeSpec(channels=256, width=14, height=14),
-            num_classes="${..num_classes}",
-            conv_dims=[256, 256, 256, 256, 256],
-        ),
-    ),
-    pixel_mean=[103.530, 116.280, 123.675],
-    pixel_std=[1.0, 1.0, 1.0],
-    input_format="BGR",
-)
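
The deleted file above declares a standard Mask R-CNN + FPN model as a detectron2 LazyConfig: every L(...) call records a class and its arguments without constructing anything. As a minimal sketch of how such a config is turned into a real module, assuming detectron2 is installed and the file is available on disk (the path below is illustrative, not taken from this commit):

from detectron2.config import LazyConfig, instantiate

# Load the lazily-declared config file and build the actual nn.Module.
cfg = LazyConfig.load("configs/common/models/mask_rcnn_fpn.py")  # illustrative path
model = instantiate(cfg.model)  # resolves every L(...) node recursively
model.eval()
print(type(model).__name__)     # expected: "GeneralizedRCNN"

instantiate also expands interpolations such as "${..num_classes}", so the box predictor and mask head pick up the num_classes set on StandardROIHeads.
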
spaces/CarlDennis/Lovelive-VITS-JPZH/modules.py
DELETED
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
-  def __init__(self, channels, eps=1e-5):
-    super().__init__()
-    self.channels = channels
-    self.eps = eps
-
-    self.gamma = nn.Parameter(torch.ones(channels))
-    self.beta = nn.Parameter(torch.zeros(channels))
-
-  def forward(self, x):
-    x = x.transpose(1, -1)
-    x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
-    return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
-  def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
-    super().__init__()
-    self.in_channels = in_channels
-    self.hidden_channels = hidden_channels
-    self.out_channels = out_channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.p_dropout = p_dropout
-    assert n_layers > 1, "Number of layers should be larger than 0."
-
-    self.conv_layers = nn.ModuleList()
-    self.norm_layers = nn.ModuleList()
-    self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-    self.norm_layers.append(LayerNorm(hidden_channels))
-    self.relu_drop = nn.Sequential(
-        nn.ReLU(),
-        nn.Dropout(p_dropout))
-    for _ in range(n_layers-1):
-      self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
-      self.norm_layers.append(LayerNorm(hidden_channels))
-    self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-    self.proj.weight.data.zero_()
-    self.proj.bias.data.zero_()
-
-  def forward(self, x, x_mask):
-    x_org = x
-    for i in range(self.n_layers):
-      x = self.conv_layers[i](x * x_mask)
-      x = self.norm_layers[i](x)
-      x = self.relu_drop(x)
-    x = x_org + self.proj(x)
-    return x * x_mask
-
-
-class DDSConv(nn.Module):
-  """
-  Dialted and Depth-Separable Convolution
-  """
-  def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
-    super().__init__()
-    self.channels = channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.p_dropout = p_dropout
-
-    self.drop = nn.Dropout(p_dropout)
-    self.convs_sep = nn.ModuleList()
-    self.convs_1x1 = nn.ModuleList()
-    self.norms_1 = nn.ModuleList()
-    self.norms_2 = nn.ModuleList()
-    for i in range(n_layers):
-      dilation = kernel_size ** i
-      padding = (kernel_size * dilation - dilation) // 2
-      self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
-          groups=channels, dilation=dilation, padding=padding
-      ))
-      self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-      self.norms_1.append(LayerNorm(channels))
-      self.norms_2.append(LayerNorm(channels))
-
-  def forward(self, x, x_mask, g=None):
-    if g is not None:
-      x = x + g
-    for i in range(self.n_layers):
-      y = self.convs_sep[i](x * x_mask)
-      y = self.norms_1[i](y)
-      y = F.gelu(y)
-      y = self.convs_1x1[i](y)
-      y = self.norms_2[i](y)
-      y = F.gelu(y)
-      y = self.drop(y)
-      x = x + y
-    return x * x_mask
-
-
-class WN(torch.nn.Module):
-  def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
-    super(WN, self).__init__()
-    assert(kernel_size % 2 == 1)
-    self.hidden_channels =hidden_channels
-    self.kernel_size = kernel_size,
-    self.dilation_rate = dilation_rate
-    self.n_layers = n_layers
-    self.gin_channels = gin_channels
-    self.p_dropout = p_dropout
-
-    self.in_layers = torch.nn.ModuleList()
-    self.res_skip_layers = torch.nn.ModuleList()
-    self.drop = nn.Dropout(p_dropout)
-
-    if gin_channels != 0:
-      cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
-      self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
-    for i in range(n_layers):
-      dilation = dilation_rate ** i
-      padding = int((kernel_size * dilation - dilation) / 2)
-      in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
-                                 dilation=dilation, padding=padding)
-      in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
-      self.in_layers.append(in_layer)
-
-      # last one is not necessary
-      if i < n_layers - 1:
-        res_skip_channels = 2 * hidden_channels
-      else:
-        res_skip_channels = hidden_channels
-
-      res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-      res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
-      self.res_skip_layers.append(res_skip_layer)
-
-  def forward(self, x, x_mask, g=None, **kwargs):
-    output = torch.zeros_like(x)
-    n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
-    if g is not None:
-      g = self.cond_layer(g)
-
-    for i in range(self.n_layers):
-      x_in = self.in_layers[i](x)
-      if g is not None:
-        cond_offset = i * 2 * self.hidden_channels
-        g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
-      else:
-        g_l = torch.zeros_like(x_in)
-
-      acts = commons.fused_add_tanh_sigmoid_multiply(
-          x_in,
-          g_l,
-          n_channels_tensor)
-      acts = self.drop(acts)
-
-      res_skip_acts = self.res_skip_layers[i](acts)
-      if i < self.n_layers - 1:
-        res_acts = res_skip_acts[:,:self.hidden_channels,:]
-        x = (x + res_acts) * x_mask
-        output = output + res_skip_acts[:,self.hidden_channels:,:]
-      else:
-        output = output + res_skip_acts
-    return output * x_mask
-
-  def remove_weight_norm(self):
-    if self.gin_channels != 0:
-      torch.nn.utils.remove_weight_norm(self.cond_layer)
-    for l in self.in_layers:
-      torch.nn.utils.remove_weight_norm(l)
-    for l in self.res_skip_layers:
-      torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
-  def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
-    super(ResBlock1, self).__init__()
-    self.convs1 = nn.ModuleList([
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                         padding=get_padding(kernel_size, dilation[0]))),
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                         padding=get_padding(kernel_size, dilation[1]))),
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
-                         padding=get_padding(kernel_size, dilation[2])))
-    ])
-    self.convs1.apply(init_weights)
-
-    self.convs2 = nn.ModuleList([
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                         padding=get_padding(kernel_size, 1))),
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                         padding=get_padding(kernel_size, 1))),
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
-                         padding=get_padding(kernel_size, 1)))
-    ])
-    self.convs2.apply(init_weights)
-
-  def forward(self, x, x_mask=None):
-    for c1, c2 in zip(self.convs1, self.convs2):
-      xt = F.leaky_relu(x, LRELU_SLOPE)
-      if x_mask is not None:
-        xt = xt * x_mask
-      xt = c1(xt)
-      xt = F.leaky_relu(xt, LRELU_SLOPE)
-      if x_mask is not None:
-        xt = xt * x_mask
-      xt = c2(xt)
-      x = xt + x
-    if x_mask is not None:
-      x = x * x_mask
-    return x
-
-  def remove_weight_norm(self):
-    for l in self.convs1:
-      remove_weight_norm(l)
-    for l in self.convs2:
-      remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
-  def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
-    super(ResBlock2, self).__init__()
-    self.convs = nn.ModuleList([
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
-                         padding=get_padding(kernel_size, dilation[0]))),
-      weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
-                         padding=get_padding(kernel_size, dilation[1])))
-    ])
-    self.convs.apply(init_weights)
-
-  def forward(self, x, x_mask=None):
-    for c in self.convs:
-      xt = F.leaky_relu(x, LRELU_SLOPE)
-      if x_mask is not None:
-        xt = xt * x_mask
-      xt = c(xt)
-      x = xt + x
-    if x_mask is not None:
-      x = x * x_mask
-    return x
-
-  def remove_weight_norm(self):
-    for l in self.convs:
-      remove_weight_norm(l)
-
-
-class Log(nn.Module):
-  def forward(self, x, x_mask, reverse=False, **kwargs):
-    if not reverse:
-      y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
-      logdet = torch.sum(-y, [1, 2])
-      return y, logdet
-    else:
-      x = torch.exp(x) * x_mask
-      return x
-
-
-class Flip(nn.Module):
-  def forward(self, x, *args, reverse=False, **kwargs):
-    x = torch.flip(x, [1])
-    if not reverse:
-      logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
-      return x, logdet
-    else:
-      return x
-
-
-class ElementwiseAffine(nn.Module):
-  def __init__(self, channels):
-    super().__init__()
-    self.channels = channels
-    self.m = nn.Parameter(torch.zeros(channels,1))
-    self.logs = nn.Parameter(torch.zeros(channels,1))
-
-  def forward(self, x, x_mask, reverse=False, **kwargs):
-    if not reverse:
-      y = self.m + torch.exp(self.logs) * x
-      y = y * x_mask
-      logdet = torch.sum(self.logs * x_mask, [1,2])
-      return y, logdet
-    else:
-      x = (x - self.m) * torch.exp(-self.logs) * x_mask
-      return x
-
-
-class ResidualCouplingLayer(nn.Module):
-  def __init__(self,
-      channels,
-      hidden_channels,
-      kernel_size,
-      dilation_rate,
-      n_layers,
-      p_dropout=0,
-      gin_channels=0,
-      mean_only=False):
-    assert channels % 2 == 0, "channels should be divisible by 2"
-    super().__init__()
-    self.channels = channels
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
-    self.dilation_rate = dilation_rate
-    self.n_layers = n_layers
-    self.half_channels = channels // 2
-    self.mean_only = mean_only
-
-    self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-    self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
-    self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-    self.post.weight.data.zero_()
-    self.post.bias.data.zero_()
-
-  def forward(self, x, x_mask, g=None, reverse=False):
-    x0, x1 = torch.split(x, [self.half_channels]*2, 1)
-    h = self.pre(x0) * x_mask
-    h = self.enc(h, x_mask, g=g)
-    stats = self.post(h) * x_mask
-    if not self.mean_only:
-      m, logs = torch.split(stats, [self.half_channels]*2, 1)
-    else:
-      m = stats
-      logs = torch.zeros_like(m)
-
-    if not reverse:
-      x1 = m + x1 * torch.exp(logs) * x_mask
-      x = torch.cat([x0, x1], 1)
-      logdet = torch.sum(logs, [1,2])
-      return x, logdet
-    else:
-      x1 = (x1 - m) * torch.exp(-logs) * x_mask
-      x = torch.cat([x0, x1], 1)
-      return x
-
-
-class ConvFlow(nn.Module):
-  def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
-    super().__init__()
-    self.in_channels = in_channels
-    self.filter_channels = filter_channels
-    self.kernel_size = kernel_size
-    self.n_layers = n_layers
-    self.num_bins = num_bins
-    self.tail_bound = tail_bound
-    self.half_channels = in_channels // 2
-
-    self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
-    self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
-    self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
-    self.proj.weight.data.zero_()
-    self.proj.bias.data.zero_()
-
-  def forward(self, x, x_mask, g=None, reverse=False):
-    x0, x1 = torch.split(x, [self.half_channels]*2, 1)
-    h = self.pre(x0)
-    h = self.convs(h, x_mask, g=g)
-    h = self.proj(h) * x_mask
-
-    b, c, t = x0.shape
-    h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
-    unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
-    unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
-    unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
-    x1, logabsdet = piecewise_rational_quadratic_transform(x1,
-        unnormalized_widths,
-        unnormalized_heights,
-        unnormalized_derivatives,
-        inverse=reverse,
-        tails='linear',
-        tail_bound=self.tail_bound
-    )
-
-    x = torch.cat([x0, x1], 1) * x_mask
-    logdet = torch.sum(logabsdet * x_mask, [1,2])
-    if not reverse:
-      return x, logdet
-    else:
-      return x
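
The deleted file above is the standard VITS modules.py: normalizing-flow building blocks (Log, Flip, ElementwiseAffine, ResidualCouplingLayer, ConvFlow) plus the WaveNet-style WN encoder and HiFi-GAN-style residual blocks. Each flow layer returns (y, logdet) in the forward direction and inverts itself when called with reverse=True. A minimal sketch of that contract, assuming the file is importable as modules together with its commons and transforms dependencies (not guaranteed after this deletion):

import torch
from modules import ElementwiseAffine  # assumes modules.py and its deps are on the path

torch.manual_seed(0)
x = torch.randn(2, 4, 10)          # [batch, channels, time]
x_mask = torch.ones(2, 1, 10)      # no padding in this toy example

flow = ElementwiseAffine(4)
y, logdet = flow(x, x_mask)            # forward: returns output and log-determinant
x_rec = flow(y, x_mask, reverse=True)  # reverse: inverts the transform exactly
print(torch.allclose(x * x_mask, x_rec, atol=1e-6))  # expected: True
print(logdet.shape)                                  # expected: torch.Size([2])

The same forward/reverse convention holds for ResidualCouplingLayer and ConvFlow, which additionally condition on a speaker embedding g when gin_channels > 0.
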