Commit 8319926
Parent(s): 602d10d
Update parquet files (step 90 of 397)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/(2011) Descargar Gratis Preoc 2012 Las ventajas de contar con este software en tu ordenador.md +0 -267
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Free Download for Windows PC.md +0 -195
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download 25 To Life Pc Game Full Version _BEST_.md +0 -31
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dota Map 6.83 AI and Challenge Your Friends in Custom Games.md +0 -135
- spaces/7hao/bingo/src/components/ui/select.tsx +0 -123
- spaces/AIFILMS/StyleGANEX/datasets/augmentations.py +0 -110
- spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/__init__.py +0 -161
- spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/htsat.py +0 -1308
- spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/losses/__init__.py +0 -1
- spaces/AIGC-Audio/Make_An_Audio/ldm/modules/diffusionmodules/model.py +0 -835
- spaces/ASJMO/freegpt/g4f/Provider/Providers/Mishalsgpt.py +0 -23
- spaces/ASJMO/freegpt/g4f/Provider/Providers/Theb.py +0 -28
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_d-p6_syncbn_fast_8x16b-300e_coco.py +0 -21
- spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/index.html +0 -40
- spaces/Adapter/T2I-Adapter/ldm/models/diffusion/plms.py +0 -243
- spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/sde_team_2players/build_config.py +0 -21
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/expressionparser.js +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Orbit.d.ts +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ResolveChildrenWidth.js +0 -14
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/methods/SetStateMethods.js +0 -81
- spaces/AkitoP/umamusume_bert_vits2/text/chinese.py +0 -198
- spaces/AlanMars/QYL-AI-Space/modules/models/inspurai.py +0 -345
- spaces/AlexWang/lama/saicinpainting/training/modules/base.py +0 -80
- spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/test_accuracy.py +0 -0
- spaces/Alpaca233/SadTalker/src/face3d/util/util.py +0 -208
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/lpw_stable_diffusion_onnx.py +0 -1146
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/custom_diffusion/train_custom_diffusion.py +0 -1306
- spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py +0 -13
- spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_mixins.py +0 -104
- spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_20k_voc12aug.py +0 -9
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/text_generation.py +0 -397
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/__init__.py +0 -0
- spaces/Arthur678/vits-uma-genshin-honkai/transforms.py +0 -193
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py +0 -721
- spaces/Banbri/zcvzcv/src/lib/getImageDimension.ts +0 -16
- spaces/Bart92/RVC_HF/lib/infer_pack/modules.py +0 -522
- spaces/Bart92/RVC_HF/tools/infer/infer-pm-index256.py +0 -202
- spaces/Benson/text-generation/Examples/Baixar Mortal Kombat Trilogy Apk.md +0 -53
- spaces/Benson/text-generation/Examples/Coche Deriva Carreras Mod Apk 5play.md +0 -48
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/compat.py +0 -63
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/discovery.py +0 -600
- spaces/CVH-vn1210/make_hair/minigpt4/processors/blip_processors.py +0 -141
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/compat.py +0 -229
- spaces/CVPR/LIVE/color.h +0 -63
- spaces/CVPR/LIVE/thrust/thrust/swap.h +0 -191
- spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/malloc_and_free.h +0 -54
- spaces/CVPR/WALT/mmdet/core/evaluation/class_names.py +0 -116
- spaces/CVPR/regionclip-demo/detectron2/utils/logger.py +0 -237
- spaces/CVPR/transfiner/configs/common/data/coco_panoptic_separated.py +0 -26
- spaces/Cloudy1225/stackoverflow-sentiment-analysis/README.md +0 -15
spaces/1acneusushi/gradio-2dmoleculeeditor/data/(2011) Descargar Gratis Preoc 2012 Las ventajas de contar con este software en tu ordenador.md
DELETED
@@ -1,267 +0,0 @@
-
-<h1>(2011) Descargar Gratis Preoc 2012</h1>
-<p>If you are a professional or a student in the construction industry, you probably know how important it is to have a reliable and accurate software for estimating the costs of your projects. One of the most popular and widely used software for this purpose is Preoc 2012, a powerful tool that allows you to create, edit, and manage your construction cost estimates with ease and efficiency. In this article, we will tell you everything you need to know about Preoc 2012, including what it is, what it can do, how to download it for free in 2011, how to install and use it, and how to troubleshoot and update it. So, if you are interested in learning more about this amazing software, keep reading!</p>
-<h2>What is Preoc 2012?</h2>
-<p>Preoc 2012 is a software developed by CYPE Ingenieros, a Spanish company specialized in developing software for architecture, engineering, and construction. Preoc 2012 is part of the CYPECAD suite, which includes other software for structural analysis, design, and calculation. Preoc 2012 is specifically designed for creating and managing construction cost estimates, based on a comprehensive database of items, prices, materials, labor, equipment, and other factors that affect the cost of a project. Preoc 2012 can be used for any type of construction project, from residential to industrial, from new buildings to renovations, from civil works to installations.</p>
-<h2>(2011) Descargar Gratis Preoc 2012</h2><br /><p><b><b>Download File</b> ✔ <a href="https://byltly.com/2uKwnj">https://byltly.com/2uKwnj</a></b></p><br /><br />
-<h3>A software for construction cost estimation</h3>
-<p>Preoc 2012 is a software that helps you to estimate the cost of your construction projects in a fast and accurate way. With Preoc 2012, you can create your own cost estimates from scratch or use one of the many templates available in the software. You can also import data from other sources, such as Excel files or BIM models. Preoc 2012 allows you to organize your cost estimates into chapters, subchapters, items, subitems, measurements, quantities, units, prices, discounts, taxes, overheads, profits, contingencies, etc. You can also add notes, comments, images, attachments, links, etc. to your cost estimates.</p>
-<h3>The main features and benefits of Preoc 2012</h3>
-<p>Preoc 2012 has many features and benefits that make it one of the best software for construction cost estimation. Some of them are:</p>
-<ul>
-<li>It has a large and updated database of items and prices for different countries and regions.</li>
-<li>It allows you to customize your items and prices according to your needs and preferences.</li>
-<li>It has a user-friendly interface that makes it easy to navigate and use.</li>
-<li>It has a powerful calculation engine that automatically updates the total cost of your project as you make changes.</li>
-<li>It has a variety of tools and options for editing and formatting your cost estimates.</li>
-<li>It has a built-in report generator that lets you create professional-looking reports with graphs, tables, charts, etc.</li>
-<li>It has an export function that lets you export your cost estimates to different formats, such as PDF, Excel, Word, HTML, XML, etc.</li>
-<li>It has an online service that lets you share your cost estimates with other users or clients via email or web.</li>
-<li>It has a backup function that lets you save your cost estimates in a secure cloud storage.</li>
-<li>It has an update function that lets you download the latest version of the software and the database.</li>
-</ul>
-<h3>How to download Preoc 2012 for free in 2011</h3>
-<p>If you want to download Preoc 2012 for free in 2011, you have two options:</p>
-<ol>
-<li>You can download the trial version of Preoc 2012 from the official website of CYPE Ingenieros. The trial version is valid for one month and has all the features and functions of the full version. However, you cannot save or print your cost estimates with the trial version. To download the trial version of Preoc 2012 in 2011:</li>
-<ul>
-<li>Go to <a href="http://www.cype.com">www.cype.com</a>.</li>
-<li>Click on "Download" on the top menu.</li>
-<li>Select "CYPECAD Suite" on the left sidebar.</li>
-<li>Select "Preoc" on the right sidebar.</li>
-<li>Select your country and language.</li>
-<li>Fill in the form with your personal information.</li>
-<li>Click on "Download" at the bottom.</li>
-<li>Follow the instructions on the screen to install the software on your computer.</li>
-</ul>
-<li>You can download the full version of Preoc 2012 from a third-party website that offers free downloads of software. However, this option is not recommended because it may be illegal or unsafe. You may encounter viruses or malware that can harm your computer or compromise your personal data. You may also face legal consequences if you use pirated software without a license. Therefore,<strong> we do not endorse or encourage this option</strong>. If you still want to download the full version of Preoc 2012 from a third-party website in 2011:</li>
-<ul>
-<li>Go to <a href="http://www.google.com">www.google.com</a>.</li>
-<li>Type "(2011) Descargar Gratis Preoc 2012" in the search box.</li>
-<li>Browse through the results until you find a website that offers free downloads of Preoc 2012.</li>
-<li>Click on the link to access the website.</li>
-<li>Follow the instructions on the website to download the software on your computer.</li>
-</ul>
-</ol>
-<h2>How to install and use Preoc 2012</h2>
-<h3>The system requirements and compatibility of Preoc 2012</h3>
-<p>To install and use Preoc 2012 on your computer,<strong> you need to meet the following system requirements</strong>:</p>
-<table border="1">
-<tr><td><strong>Operating system</strong></td><td><strong>Minimum requirements</strong></td></tr>
-<tr><td>Windows XP/Vista/7/8/10 (32-bit or 64-bit)</td><td>Pentium IV processor or higher<br/>512 MB RAM or higher<br/>500 MB free disk space or higher<br/>1024 x 768 screen resolution or higher<br/>Internet connection (for activation and updates)</td></tr>
-<tr><td>Mac OS X (10.6 or higher)</td><td>Intel processor or higher<br/>512 MB RAM or higher<br/>500 MB free disk space or higher<br/>1024 x 768 screen resolution or higher<br/>Internet connection (for activation and updates)</td></tr>
-<tr><td>Linux (Ubuntu/Debian/Fedora/Suse)</td><td>Pentium IV processor or higher<br/>512 MB RAM or higher<br/>500 MB free disk space or higher<br/>1024 x 768 screen resolution or higher<br/>Internet connection (for activation and updates)</td></tr>
-</table>
-<p><strong>Note:</strong> Preoc 2012 is compatible with other CYPECAD software such as Arquimedes (for budget management), Metal (for metal structures), Instalaciones (for installations), etc. You can install them together on your computer if you have a license for them.</p>
-<h3>The installation process and activation of Preoc 2012</h3>
-<p>To install and activate Preoc 2012 on your computer,<strong> you need to follow these steps</strong>:</p>
-<p>(2011) Preoc 2012 gratis para descargar<br />
-Descargar Preoc 2012 (2011) sin costo<br />
-Cómo descargar Preoc 2012 versión 2011 gratis<br />
-Preoc 2012 edición 2011 descarga gratuita<br />
-Descarga directa de Preoc 2012 (2011) gratis<br />
-(2011) Preoc 2012 free download<br />
-Download Preoc 2012 (2011) for free<br />
-How to download Preoc 2012 version 2011 for free<br />
-Preoc 2012 edition 2011 free download<br />
-Direct download of Preoc 2012 (2011) for free<br />
-(2011) Télécharger gratuitement Preoc 2012<br />
-Télécharger Preoc 2012 (2011) sans frais<br />
-Comment télécharger Preoc 2012 version 2011 gratuitement<br />
-Preoc 2012 édition 2011 téléchargement gratuit<br />
-Téléchargement direct de Preoc 2012 (2011) gratuitement<br />
-(2011) Scaricare gratuitamente Preoc 2012<br />
-Scaricare Preoc 2012 (2011) senza costi<br />
-Come scaricare Preoc 2012 versione 2011 gratuitamente<br />
-Preoc 2012 edizione 2011 download gratuito<br />
-Download diretto di Preoc 2012 (2011) gratuitamente<br />
-(2011) Baixar grátis Preoc 2012<br />
-Baixar Preoc 2012 (2011) sem custo<br />
-Como baixar Preoc 2012 versão 2011 grátis<br />
-Preoc 2012 edição 2011 download grátis<br />
-Download direto de Preoc 2012 (2011) grátis<br />
-(2011) Kostenlos herunterladen Preoc 2012<br />
-Herunterladen Preoc 2012 (2011) ohne Kosten<br />
-Wie man Preoc 2012 Version 2011 kostenlos herunterlädt<br />
-Preoc 2012 Ausgabe 2011 kostenloser Download<br />
-Direkter Download von Preoc 2012 (2011) kostenlos<br />
-(2020) Descargar Gratis Preoc actualizado <br />
-Descargar gratis el último Preoc <br />
-Cómo descargar gratis el nuevo Preoc <br />
-Descarga gratuita de la última versión de Preoc <br />
-Descarga directa y gratuita del nuevo Preoc <br />
-(2020) Free Download Updated Preoc <br />
-Download the latest Preoc for free <br />
-How to download the new Preoc for free <br />
-Free download of the latest version of Preoc <br />
-Direct and free download of the new Preoc <br />
-(2020) Télécharger gratuitement le nouveau Preoc <br />
-Télécharger le dernier Preoc gratuitement <br />
-Comment télécharger le nouveau Preoc gratuitement <br />
-Téléchargement gratuit de la dernière version de Preoc <br />
-Téléchargement direct et gratuit du nouveau Preoc <br />
-(2020) Scaricare gratuitamente il nuovo Preoc <br />
-Scaricare l'ultimo Preoc gratuitamente <br />
-Come scaricare il nuovo Preoc gratuitamente <br />
-Download gratuito dell'ultima versione di Preoc <br />
-Download diretto e gratuito del nuovo Preoc</p>
-<ol>
-<li>If you downloaded the trial version from CYPE Ingenieros website,<strong> run the installer file</strong>. If you downloaded the full version from a third-party website,<strong> unzip the compressed file</strong>.</li>
-<li><strong>Select your language</strong>.</li>
-license agreement</strong> and the terms and conditions.</li>
-<li><strong>Choose the installation folder</strong> or use the default one.</li>
-<li><strong>Select the components</strong> you want to install. You can choose between Preoc 2012 and other CYPECAD software.</li>
-<li><strong>Wait for the installation to finish</strong>.</li>
-<li><strong>Launch Preoc 2012</strong> from your desktop or start menu.</li>
-<li><strong>Enter your license code</strong> if you have one. If you don't have one, you can use the trial version for one month.</li>
-<li><strong>Activate Preoc 2012</strong> online or offline. You need an internet connection for online activation. For offline activation, you need to generate a request file and send it to CYPE Ingenieros by email or fax. They will send you back an activation file that you need to load in Preoc 2012.</li>
-<li><strong>Enjoy Preoc 2012</strong>!</li>
-</ol>
-<h3>The user interface and functions of Preoc 2012</h3>
-<p>The user interface of Preoc 2012 is divided into four main areas:</p>
-<ul>
-<li>The <strong>menu bar</strong>, which contains the main commands and options of Preoc 2012.</li>
-<li>The <strong>toolbar</strong>, which contains the most frequently used commands and options of Preoc 2012.</li>
-<li>The <strong>tree view</strong>, which shows the structure and organization of your cost estimate.</li>
-<li>The <strong>table view</strong>, which shows the details and information of your cost estimate.</li>
-</ul>
-<p>You can resize, move, hide, or show any of these areas according to your preference. You can also customize the appearance and behavior of Preoc 2012 by changing the settings and preferences in the menu bar.</p>
-<p>The functions of Preoc 2012 are grouped into four main categories:</p>
-<ul>
-<li>The <strong>file functions</strong>, which allow you to create, open, save, print, export, import, backup, restore, share, update, and close your cost estimates.</li>
-<li>The <strong>edit functions</strong>, which allow you to add, delete, copy, paste, move, rename, sort, group, filter, search, replace, undo, redo, format, comment, attach, link, etc. your cost estimates.</li>
-<li>The <strong>view functions</strong>, which allow you to zoom in, zoom out, fit to screen, show gridlines, show headers, show totals, show formulas, show notes, show images, show attachments, show links, etc. your cost estimates.</li>
-apply currency conversion, apply inflation adjustment, apply price update, check errors, check consistency, check coherence, check completeness, etc. your cost estimates.</li>
-</ul>
-<h4>How to create a new project and add items</h4>
-<p>To create a new project and add items in Preoc 2012,<strong> you need to follow these steps</strong>:</p>
-<ol>
-<li><strong>Click on "File" and then "New"</strong> in the menu bar or press Ctrl+N on your keyboard.</li>
-<li><strong>Enter a name for your project</strong> and click on "OK".</li>
-<li><strong>Select a template for your project</strong> from the list or click on "Blank" to start from scratch.</li>
-<li><strong>Select a database for your project</strong> from the list or click on "Browse" to choose a different one.</li>
-<li><strong>Add items to your project</strong> by dragging and dropping them from the database to the tree view or by clicking on "Edit" and then "Add" in the menu bar or pressing Ctrl+A on your keyboard.</li>
-<li><strong>Edit the items as you wish</strong> by double-clicking on them or by clicking on "Edit" and then "Edit" in the menu bar or pressing Ctrl+E on your keyboard.</li>
-<li><strong>Save your project</strong> by clicking on "File" and then "Save" in the menu bar or pressing Ctrl+S on your keyboard.</li>
-</ol>
-<h4>How to edit and customize the items and prices</h4>
-<p>To edit and customize the items and prices in Preoc 2012,<strong> you need to follow these steps</strong>:</p>
-<ol>
-<li><strong>Select the item or price you want to edit</strong> by clicking on it in the tree view or the table view.</li>
-<li><strong>Edit the item or price as you wish</strong> by changing its name, code, description, unit, quantity, price, discount, tax, overhead, profit, contingency, formula, note, comment, image, attachment, link, etc. in the table view or by clicking on "Edit" and then "Edit" in the menu bar or pressing Ctrl+E on your keyboard.</li>
-<li><strong>Save your changes</strong> by clicking on "File" and then "Save" in the menu bar or pressing Ctrl+S on your keyboard.</li>
-</ol>
-<h4>How to generate reports and export data</h4>
-<p>To generate reports and export data in Preoc 2012,<strong> you need to follow these steps</strong>:</p>
-<ol>
-<li><strong>Select the data you want to generate a report or export</strong> by clicking on it in the tree view or the table view.</li>
-<li><strong>Click on "Tools" and then "Report"</strong> in the menu bar or press Ctrl+R on your keyboard.</li>
-<li><strong>Select a report type</strong> from the list or click on "Customize" to create your own report.</li>
-<li><strong>Select a report format</strong> from the list or click on "Customize" to change the appearance of your report.</li>
-<li><strong>Select a report destination</strong> from the list or click on "Browse" to choose a different one. You can choose between printing your report, saving it as a PDF file, sending it by email, uploading it to the web service, etc.</li>
-<li><strong>Click on "Generate"</strong> to create your report or export your data.</li>
-</ol>
-<h2>How to troubleshoot and update Preoc 2012</h2>
-<h3>The common issues and errors of Preoc 2012</h3>
-<p>Sometimes, you may encounter some issues or errors when using Preoc 2012. Some of them are:</p>
-<ul>
-<li>The software does not start or crashes frequently.</li>
-<li>The software does not recognize your license code or activation file.</li>
-<li>The software does not connect to the internet or update properly.</li>
-<li>The software does not import or export data correctly.</li>
-<li>The software does not calculate or display data correctly.</li>
-<li>The software does not print or save reports correctly.</li>
-<li>The software does not work well with other CYPECAD software.</li>
-</ul>
-<h3>The solutions and tips for fixing Preoc 2012 problems</h3>
-<p>To solve or prevent these issues or errors,<strong> you can try these solutions and tips</strong>:</p>
-<ul>
-<li><strong>Check your system requirements and compatibility</strong>. Make sure that your computer meets the minimum requirements for running Preoc 2012 and that your operating system is compatible with it. You can also try to update your drivers and software to improve their performance and stability.</li>
-<li><strong>Check your license code and activation file</strong>. Make sure that you have entered your license code correctly and that you have activated Preoc 2012 online or offline. You can also try to deactivate and reactivate Preoc 2012 if you have changed your computer or hardware. You can also contact CYPE Ingenieros if you have lost your license code or activation file.</li>
-<li><strong>Check your internet connection and firewall settings</strong>. Make sure that you have a stable and secure internet connection and that your firewall settings allow Preoc 2012 to access the internet. You can also try to disable any antivirus or anti-malware software that may interfere with Preoc 2012. You can also contact your internet service provider if you have any problems with your connection.</li>
-<li><strong>Check your data format and compatibility</strong>. Make sure that you have imported or exported data in a compatible format and that you have not corrupted or modified them. You can also try to use different formats or methods for importing or exporting data. You can also contact CYPE Ingenieros if you have any questions about data format and compatibility.</li>
-your data correctly and that you have not made any mistakes or errors. You can also try to use the tools and options in Preoc 2012 to check and correct your data. You can also contact CYPE Ingenieros if you have any doubts or queries about data accuracy and consistency.</li>
-<li><strong>Check your report settings and preferences</strong>. Make sure that you have selected the right report type, format, and destination for your data. You can also try to customize your report settings and preferences to suit your needs and preferences. You can also contact CYPE Ingenieros if you have any problems or suggestions about report settings and preferences.</li>
-<li><strong>Check your software compatibility and integration</strong>. Make sure that you have installed and updated Preoc 2012 and other CYPECAD software correctly and that they work well together. You can also try to uninstall and reinstall Preoc 2012 and other CYPECAD software if you have any conflicts or issues. You can also contact CYPE Ingenieros if you have any questions or requests about software compatibility and integration.</li>
-</ul>
-<h3>The sources and methods for updating Preoc 2012</h3>
-<p>To update Preoc 2012,<strong> you have two sources and methods</strong>:</p>
-<ol>
-<li>You can update Preoc 2012 online from the official website of CYPE Ingenieros. This is the easiest and fastest way to update Preoc 2012. To update Preoc 2012 online:</li>
-<ul>
-<li>Go to <a href="http://www.cype.com">www.cype.com</a>.</li>
-<li>Click on "Download" on the top menu.</li>
-<li>Select "CYPECAD Suite" on the left sidebar.</li>
-<li>Select "Preoc" on the right sidebar.</li>
-<li>Select your country and language.</li>
-<li>Click on "Update" at the bottom.</li>
-<li>Follow the instructions on the screen to download and install the latest version of Preoc 2012 on your computer.</li>
-</ul>
-<li>You can update Preoc 2012 offline from a CD-ROM or a USB drive. This is a useful way to update Preoc 2012 if you don't have an internet connection or if you want to update multiple computers at once. To update Preoc 2012 offline:</li>
-<ul>
-<li>Contact CYPE Ingenieros by phone, email, or fax and request an update CD-ROM or USB drive for Preoc 2012.</li>
-<li>Wait for the delivery of the update CD-ROM or USB drive.</li>
-<li>Insert the update CD-ROM or USB drive into your computer.</li>
-<li>Run the update file from the CD-ROM or USB drive.</li>
-<li>Follow the instructions on the screen to install the latest version of Preoc 2012 on your computer.</li>
-</ul>
-</ol>
-<h1>Conclusion</h1>
-<p>Preoc 2012 is a software that helps you to create, edit, and manage your construction cost estimates with ease and efficiency. It has many features and benefits that make it one of the best software for construction cost estimation. It is compatible with other CYPECAD software and with different operating systems. It is easy to download, install, use, troubleshoot, and update. It is a powerful tool that can help you to save time, money, and resources in your construction projects. If you want to try Preoc 2012 for free in 2011, you can download the trial version from CYPE Ingenieros website or the full version from a third-party website. However, we do not endorse or encourage the latter option because it may be illegal or unsafe. We hope that this article has been informative and helpful for you. If you have any questions or comments about Preoc 2012, please feel free to contact us or leave a comment below. Thank you for reading!</p>
-<h1>FAQs</h1>
-<h4>What is the difference between Preoc 2012 and Arquimedes?</h4>
-<p>Preoc 2012 and Arquimedes are both software for construction cost estimation developed by CYPE Ingenieros. However, they have some differences:</p>
-<ul>
-<li>Preoc 2012 is focused on creating and managing cost estimates based on a database of items and prices.</li>
-<li>Arquimedes is focused on managing budgets based on cost estimates imported from Preoc 2012 or other sources.</li>
-<li>Preoc 2012 has a larger and more updated database of items and prices than Arquimedes.</li>
-<li>Arquimedes has more tools and options for budget management than Preoc 2012.</li>
-</ul>
-<h4>How much does Preoc 2012 cost?</h4>
-<p>The price of Preoc 2012 depends on several factors, such as:</p>
-<ul>
-<li>The country and region where you buy it.</li>
-<li>The number of licenses you need.</li>
-<li>The type of license you choose (perpetual or annual).</li>
-<li>The type of support you require (basic or premium).</li>
-</ul>
-<p>To get an exact quote for Preoc 2012, you can contact CYPE Ingenieros by phone, email, or fax. You can also visit their website and use their online calculator to get an estimate.</p>
-<h4>How can I learn how to use Preoc 2012?</h4>
-<p>If you want to learn how to use Preoc 2012, you have several options:</p>
-<ul>
-<li>You can read the user manual that comes with the software or download it from CYPE Ingenieros website.</li>
-<li>You can watch the video tutorials that are available on CYPE Ingenieros website or YouTube channel.</li>
-<li>You can attend one of the online courses that are offered by CYPE Ingenieros periodically.</li>
-<li>You can consult one of the experts that are available on CYPE Ingenieros website or forum.</li>
-</ul>
-<h4>Can I use Preoc 2012 on my mobile device?</h4>
-<p>No, you cannot use Preoc 2012 on your mobile device. Preoc 2012 is only compatible with Windows, Mac OS X, and Linux operating systems. However, you can access your cost estimates online from any device with an internet connection by using the web service of Preoc 2012. You can also export your cost estimates to different formats that are compatible with mobile devices, such as PDF, Excel, Word, HTML, XML, etc.</p>
-<h4>Can I integrate Preoc 2012 with other software?</h4>
-2012. You can also export data from Preoc 2012 to different formats such as PDF, Excel, Word, HTML, XML, etc. that can be used by other software.</p>
-<h4>How can I contact CYPE Ingenieros?</h4>
-<p>If you want to contact CYPE Ingenieros, you have several options:</p>
-<ul>
-<li>You can call them by phone at +34 965 92 25 50.</li>
-<li>You can send them an email at [email protected].</li>
-<li>You can send them a fax at +34 965 12 49 50.</li>
-<li>You can visit their website at www.cype.com.</li>
-<li>You can follow them on social media such as Facebook, Twitter, LinkedIn, YouTube, etc.</li>
-</ul>
-<h4>What are the advantages and disadvantages of Preoc 2012?</h4>
-<p>Preoc 2012 has many advantages and disadvantages that you should consider before buying or using it. Some of them are:</p>
-<table border="1">
-<tr><td><strong>Advantages</strong></td><td><strong>Disadvantages</strong></td></tr>
-<tr><td>It has a large and updated database of items and prices for different countries and regions.</td><td>It may not have all the items and prices that you need for your specific project or location.</td></tr>
-<tr><td>It allows you to customize your items and prices according to your needs and preferences.</td><td>It may take some time and effort to edit and customize your items and prices.</td></tr>
-<tr><td>It has a user-friendly interface that makes it easy to navigate and use.</td><td>It may have some bugs or glitches that affect its performance and stability.</td></tr>
-<tr><td>It has a powerful calculation engine that automatically updates the total cost of your project as you make changes.</td><td>It may not calculate or display some data correctly due to errors or inconsistencies in your data or settings.</td></tr>
-<tr><td>It has a variety of tools and options for editing and formatting your cost estimates.</td><td>It may not have all the tools and options that you want or need for your cost estimates.</td></tr>
-<tr><td>It has a built-in report generator that lets you create professional-looking reports with graphs, tables, charts, etc.</td><td>It may not print or save your reports correctly due to format or compatibility issues.</td></tr>
-<tr><td>It has an export function that lets you export your cost estimates to different formats, such as PDF, Excel, Word, HTML, XML, etc.</td><td>It may not import or export your data correctly due to format or compatibility issues.</td></tr>
-<tr><td>It has an online service that lets you share your cost estimates with other users or clients via email or web.</td><td>It may not connect to the internet or update properly due to connection or firewall issues.</td></tr>
-<tr><td>It has a backup function that lets you save your cost estimates in a secure cloud storage.</td><td>It may not backup or restore your data correctly due to connection or storage issues.</td></tr>
-<tr><td>It has an update function that lets you download the latest version of the software and the database.</td><td>It may not update properly due to connection or installation issues.</td></tr>
-</table>
-<h1></h1></p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Free Download for Windows PC.md
DELETED
@@ -1,195 +0,0 @@
-<br />
-<code>
-<h1>Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc</h1>
-<h2>Introduction</h2>
-<p>Are you looking for a professional drawing and illustration tool for designers? Do you want to create stunning graphics, logos, icons, typography and illustrations for print, web, video and mobile? If yes, then you should try Adobe Illustrator CC 2018, the latest version of the industry-standard vector graphics software.</p>
-<h2>Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc</h2><br /><p><b><b>Download Zip</b> > <a href="https://byltly.com/2uKvaG">https://byltly.com/2uKvaG</a></b></p><br /><br />
-<p>In this article, I will show you what Adobe Illustrator CC 2018 is, what are its new features, why you need it, how to download it with crack, and how to use it effectively. By the end of this article, you will be able to create amazing artwork with Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc.</p>
-<h2>What is Adobe Illustrator CC 2018?</h2>
-<h3>What is Adobe Illustrator CC 2018?</h3>
-<p>Adobe Illustrator CC 2018 is a vector graphics software that allows you to create and edit scalable graphics that can be resized without losing quality. Unlike raster graphics, which are made of pixels, vector graphics are made of paths and shapes that can be manipulated with various tools and effects.</p>
-<p>Adobe Illustrator CC 2018 is part of the Adobe Creative Cloud suite, which means you can access all your assets, including Adobe Stock images and fonts, from within the app. You can also sync your settings and preferences across your devices and collaborate with other designers using cloud services.</p>
-<p>How to get Adobe Illustrator CC 2018 V21.0.2.242 with crack for free<br />
-Adobe Illustrator CC 2018 V21.0.2.242 full version download link<br />
-Adobe Illustrator CC 2018 V21.0.2.242 cracked software for graphic design<br />
-Download Adobe Illustrator CC 2018 V21.0.2.242 incl patch and keygen<br />
-Adobe Illustrator CC 2018 V21.0.2.242 torrent download with crack<br />
-Adobe Illustrator CC 2018 V21.0.2.242 activation code generator<br />
-Adobe Illustrator CC 2018 V21.0.2.242 serial number and license key<br />
-Adobe Illustrator CC 2018 V21.0.2.242 portable edition download<br />
-Adobe Illustrator CC 2018 V21.0.2.242 offline installer setup file<br />
-Adobe Illustrator CC 2018 V21.0.2.242 system requirements and compatibility<br />
-Adobe Illustrator CC 2018 V21.0.2.242 features and benefits<br />
-Adobe Illustrator CC 2018 V21.0.2.242 review and rating<br />
-Adobe Illustrator CC 2018 V21.0.2.242 tutorial and guide<br />
-Adobe Illustrator CC 2018 V21.0.2.242 tips and tricks<br />
-Adobe Illustrator CC 2018 V21.0.2.242 alternatives and competitors<br />
-Adobe Illustrator CC 2018 V21.0.2.242 vs other versions of Adobe Illustrator<br />
-Adobe Illustrator CC 2018 V21.0.2.242 update and upgrade<br />
-Adobe Illustrator CC 2018 V21.0.2.242 bugs and issues<br />
-Adobe Illustrator CC 2018 V21.0.2.242 support and help<br />
-Adobe Illustrator CC 2018 V21.0.2.242 discount and coupon code<br />
-Adobe Illustrator CC 2018 V21.0.2.242 free trial and demo<br />
-Adobe Illustrator CC 2018 V21.0.2.242 price and cost<br />
-Adobe Illustrator CC 2018 V21.0.2.242 refund and cancellation policy<br />
-Adobe Illustrator CC 2018 V21.0.2.242 pros and cons<br />
-Adobe Illustrator CC 2018 V21.0.2.242 testimonials and feedback<br />
-How to uninstall Adobe Illustrator CC 2018 V21.0 .2 .242 from your pc<br />
-How to fix Adobe Illustrator CC 2018 V21 .0 .2 .242 errors and crashes<br />
-How to speed up Adobe Illustrator CC 2018 V21 .0 .2 .242 performance and efficiency<br />
-How to customize Adobe Illustrator CC 2018 V21 .0 .2 .242 settings and preferences<br />
-How to use Adobe Illustrator CC 2018 V21 .0 .2 .242 tools and functions<br />
-How to create vector graphics with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to edit images with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to draw logos with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to make icons with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to design flyers with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to make posters with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to create infographics with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to make animations with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to export files from Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to import files into Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to convert files with Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to print files from Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to share files from Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to collaborate with others using Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to sync files with cloud using Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to access online resources using Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to learn more about Adobe Illustrator CC 2018 V21 .0 .2 .242 <br />
-How to contact customer service for Adobe Illustrator CC 2018 V21 .0 .2 .</p>
-<h3>What are the new features of Adobe Illustrator CC 2018?</h3>
-<p>Adobe Illustrator CC 2018 brings several exciting enhancements, including:</p>
-<ul>
-<li><strong>Variable Fonts:</strong> These allow you to change aspects of a selected font, such as width and weight, using simple sliders. Six variable fonts are included with this release, and they have different characteristics.</li>
-<li><strong>Puppet Warp:</strong> This function lets you twist and distort parts of your artwork as if it were made of clay. You can add pins to anchor points and drag them to transform your artwork.</li>
-<li><strong>New Properties Panel:</strong> This panel gives you quick access to the most relevant controls for your selected object. You can adjust fill, stroke, opacity, alignment, and more without switching between panels.</li>
-<li><strong>Data Merge:</strong> This feature allows you to import data from a CSV file and use it to populate text fields in your artwork. You can use this for creating personalized invitations, labels, certificates, etc.</li>
-<li><strong>Import Multi-Page PDF Files:</strong> You can now open PDF files with multiple pages and choose which page to import into your document.</li>
-<li><strong>Many Bug Fixes:</strong> Adobe has fixed many issues and improved the performance and stability of the software.</li>
-</ul>
-<h3>Why do you need Adobe Illustrator CC 2018?</h3>
-<p>You need Adobe Illustrator CC 2018 if you want to:</p>
-<ul>
-<li>Create pixel-perfect artwork for screen designs by drawing paths and shapes that seamlessly align to the pixel grid.</li>
-<li>Modify the text in After Effects compositions without leaving Premiere Pro.</li>
-<li>Easily access Adobe Stock assets — including new design templates, images, graphics, and our new Premium collection — right from the Illustrator search field.</li>
-<li>Select an entire artboard or choose individual assets from one or more artboards, and export them to multiple sizes, resolutions, and formats in a single click.</li>
-<li>Design faster with presets and templates for brochures, business cards, and more that you access from the file menu.</li>
-</ul>
-<h2>How to download Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack?</h2>
-<h3>Step 1: Choose a reliable source</h3>
-<p>The first step is to choose a reliable source from where you can download the setup file and the patch file for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc. There are many websites that offer cracked versions of software, but not all of them are safe and trustworthy.</p>
-<p>You should avoid downloading from unknown or suspicious sources that may contain viruses or malware that can harm your computer or steal your personal information. You should also check the reviews and ratings of the website before downloading anything from it.</p>
-<p>One of the reliable sources that I recommend is Ask4pc.net, which provides offline installer setup files and patches for various Adobe products. You can also find other sources by searching on Google or other search engines.</p>
-<h3>Step 2: Download the setup file and the patch file</h3>
-<p>The next step is to download the setup file and the patch file for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc from your chosen source. The setup file is about 2.2 GB in size and the patch file is about 435 KB in size.</p>
-<p>You can use any download manager or browser extension to speed up the download process and resume it if it gets interrupted due to network issues or power failures.</p>
-<p>You should also scan the downloaded files with an antivirus program before opening them to make sure they are free of any malicious code or infection.</p>
-<h3>Step 3: Install the setup file</h3>
-<p>The third step is to install the setup file for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc on your computer. To do this, follow these steps:</p>
-<ol>
-<li>Extract the downloaded setup file using WinRAR or any other extraction tool.</li>
-<li>Run the extracted setup file as administrator by right-clicking on it and choosing "Run as administrator".</li>
-<li>Select your language and click "OK".</li>
-<li>Accept the license agreement and click "Next".</li>
-<li>Select your installation location and click "Next".</li>
-<li>Select your installation options and click "Next".</li>
-<li>Click "Install" and wait for the installation process to complete.</li>
-<li>Click "Finish" when done.</li>
-</ol>
-<h3>Step 4: Apply the patch file</h3>
-<p>The final step is to apply the patch file for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc on your installed software. To do this, follow these steps:</p>
-<ol>
-<li>Extract the downloaded patch file using WinRAR or any other extraction tool.</li>
-<li>Run the extracted patch file as administrator by right-clicking on it and choosing "Run as administrator".</li>
-<li>Select "Adobe illustrator cc" from the drop-down menu and click "Install".</li>
-<li>Browse to your installation folder (usually C:\Program Files\Adobe\Adobe illustrator cc) and select "amtlib.dll" file.</li>
-<li>Click "Open" and wait for the patching process to complete.</li>
-<li>Click "OK" when done.</li>
-</ol>
-<p>Congratulations! You have successfully installed Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc on your computer.</p>
-<h2>How to use Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack?</h2>
-<h3>How to create pixel-per fect artwork for screen designs?</h3>
-<p>One of the new features of Adobe Illustrator CC 2018 is the ability to create pixel-perfect artwork for screen designs by drawing paths and shapes that seamlessly align to the pixel grid. This means you can avoid blurry or distorted edges and corners when you export your artwork to different screen resolutions and devices.</p>
-<p>To create pixel-perfect artwork, you need to enable the Pixel Preview mode and the Snap to Pixel option. You can do this by going to View > Pixel Preview and View > Snap to Pixel. You can also use the Align to Pixel Grid option in the Transform panel or the Properties panel to align selected objects to the pixel grid.</p>
-<p>When you draw paths and shapes, you can use the Pixel Grid tool or the Rectangular Grid tool to create grids that match the pixel dimensions of your artboard. You can also use the Pixel tool or the Shaper tool to draw pixel-based shapes and patterns. You can adjust the pixel density and color mode of your document by going to File > Document Setup.</p>
-<h3>How to modify text in After Effects compositions without leaving Premiere Pro?</h3>
-<p>Another new feature of Adobe Illustrator CC 2018 is the ability to modify text in After Effects compositions without leaving Premiere Pro. This means you can edit text layers in your motion graphics templates without switching between applications.</p>
-<p>To modify text in After Effects compositions, you need to have both Adobe Illustrator CC 2018 and Adobe Premiere Pro CC 2018 installed on your computer. You also need to have an After Effects composition that contains text layers that are marked as editable in Premiere Pro.</p>
-<p>To edit text layers, you need to import the After Effects composition into Premiere Pro as a motion graphics template. You can do this by going to File > Import or by dragging and dropping the file into the Project panel. You can then drag and drop the template onto a sequence in the Timeline panel.</p>
-<p>To modify text layers, you need to select the template clip in the Timeline panel and open the Essential Graphics panel. You can do this by going to Window > Essential Graphics or by clicking on the Graphics workspace. In the Essential Graphics panel, you can see a list of editable text layers under Edit. You can click on each layer and change its font, size, color, alignment, and other properties.</p>
-<h3>How to access Adobe Stock assets from the Illustrator search field?</h3>
-<p>A third new feature of Adobe Illustrator CC 2018 is the ability to access Adobe Stock assets from the Illustrator search field. This means you can find and use high-quality images, graphics, templates, and fonts from Adobe Stock without leaving Illustrator.</p>
-<p>To access Adobe Stock assets from the Illustrator search field, you need to have an Adobe Creative Cloud account and an Adobe Stock subscription or credits. You also need to be signed in to your account in Illustrator.</p>
-<p>To find Adobe Stock assets, you need to click on the search icon in the upper-right corner of Illustrator. You can then type a keyword or phrase in the search field and press Enter. You can see a list of relevant assets from Adobe Stock in a pop-up window. You can filter the results by type, category, license, color, and more.</p>
-<p>To use Adobe Stock assets, you need to hover over an asset and click on one of the options: License & Save (to purchase and download the asset), Save Preview (to download a watermarked version of the asset), or View on Web (to open the asset page on Adobe Stock website). You can then find the downloaded asset in your Libraries panel or your Downloads folder.</p>
-<h3>How to export multiple artboards to different sizes, resolutions and formats in one click?</h3>
-<p>A fourth new feature of Adobe Illustrator CC 2018 is the ability to export multiple artboards to different sizes, resolutions and formats in one click. This means you can save time and hassle when you need to export your artwork for various purposes and platforms.</p>
-<p>To export multiple artboards, you need to have a document that contains more than one artboard. You can create multiple artboards by going to File > New or by using the Artboard tool or the Artboards panel. You can rename, reorder, resize, and align your artboards as you wish.</p>
-<p>To export multiple artboards, you need to go to File > Export > Export for Screens. You can then choose which artboards to export and how to name them. You can also select the format, size, resolution, and location for each artboard or for all artboards at once. You can choose from various presets or create your own custom settings.</p>
-<p>To export multiple artboards, you need to click on the Export Artboard or Export All Artboards button at the bottom of the dialog box. You can then see a progress bar and a confirmation message when the export is done.</p>
-<h3>How to use the new Properties panel and the Puppet Warp function?</h3>
-<p>A fifth new feature of Adobe Illustrator CC 2018 is the ability to use the new Properties panel and the Puppet Warp function. The Properties panel gives you quick access to the most relevant controls for your selected object. The Puppet Warp function lets you twist and distort parts of your artwork as if it were made of clay.</p>
-<p>To use the new Properties panel, you need to select an object or a group of objects on your artboard. You can then see the Properties panel on the right side of your workspace. You can also open it by going to Window > Properties or by clicking on the Properties workspace.</p>
-<p>In the Properties panel, you can see different sections depending on the type of object you have selected. For example, if you have selected a text object, you can see sections for Character, Paragraph, Appearance, Quick Actions, and Transform. You can expand or collapse each section by clicking on its title. You can also customize the panel by adding or removing sections using the More Options menu at the bottom of the panel.</p>
-<p>To use the Puppet Warp function, you need to select an object or a group of objects on your artboard. You can then select the Puppet Warp tool from the toolbar or from the Quick Actions section in the Properties panel. By default, Illustrator will automatically add some pins in the areas it considers to be the most appropriate.</p>
-<p>You can also add more pins by clicking on the areas you want to transform or anchor. Three or more pins are required for good results. To delete a pin, press the Delete key. To select multiple pins, Shift-click them or choose Select All Pins from the Control panel or the Properties panel.</p>
-<p>To transform your artwork using Puppet Warp, you need to click and drag a pin to move it around. The adjoining pins will hold the nearby areas intact. To constrain the transformation around a pin, press Alt while dragging it. To twist your artwork around a pin, position your cursor near but not over a pin until a dotted circle appears. Then drag to rotate your artwork.</p>
-<p>While using Puppet Warp, you can adjust some settings in the Control panel or the Properties panel. You can choose to show or hide the mesh that outlines your artwork by clicking on Show Mesh button. You can also adjust the density and the expansion of the mesh by using the Density and Expansion sliders. You can also choose to show or hide the pins by clicking on Show Pins button.</p>
-<p>To use Puppet Warp, you can be creative and experiment with different shapes and forms. For example, you can use Puppet Warp to bend a straight line into a curve, to make a flower bloom, to animate a character, to distort a logo, or to create abstract art. The possibilities are endless.</p>
-<h2>Conclusion</h2>
-<p>In conclusion, Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc is a powerful and versatile vector graphics software that can help you create stunning artwork for various purposes and platforms. It has many new features that enhance your productivity and creativity, such as variable fonts, puppet warp, new properties panel, data merge, import multi-page PDF files, and more.</p>
-<p>If you want to download Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc for free, you can follow the steps in this article to choose a reliable source, download the setup file and the patch file, install the setup file, and apply the patch file. You can then enjoy using Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc without any limitations or restrictions.</p>
-<p>However, if you want to support the developers and get access to more features and updates, you should consider buying Adobe Illustrator CC 2018 from the official website or from an authorized reseller. You can also get Adobe Illustrator CC 2018 as part of the Adobe Creative Cloud subscription plan that gives you access to all Adobe apps and services.</p>
-<p>Whatever option you choose, I hope this article has helped you learn more about Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc and how to use it effectively. Thank you for reading and happy designing!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc:</p>
-<ol>
-<li><strong>Is Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc safe to use?</strong></li>
-<p>It depends on where you download it from and how you scan it before opening it. Some sources may provide corrupted or infected files that can harm your computer or steal your personal information. You should always download from trusted sources and scan the files with an antivirus program before opening them.</p>
-<li><strong>Is Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc legal to use?</strong></li>
-<p>No, it is not legal to use Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc without paying for it or getting permission from the developers. It is a violation of the license agreement and the intellectual property rights of Adobe Systems Incorporated. You may face legal consequences if you are caught using it illegally.</p>
-<li><strong>What are the system requirements for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc?</strong></li>
-<p>The minimum system requirements for Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc are:</p>
-<ul>
-<li>Operating system: Windows 7 SP1 (64-bit) or later</li>
-<li>Processor: Intel Pentium 4 or AMD Athlon 64 processor</li>
-<li>RAM: 2 GB (4 GB recommended)</li>
-<li>Hard disk space: 3 GB of available space</li>
-<li>Display: 1024 x 768 resolution (1280 x 800 recommended)</li>
-<li>Graphics card: OpenGL 4.x</li>
-<li>Internet connection: Required for activation and updates</li>
-</ul>
-<li><strong>How can I update Adobe Illustrator CC 2018 V21 .0.2.242 Incl Crack Download Pc?</strong></li>
-<p>You can update Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc by using the Creative Cloud desktop app. You can do this by following these steps:</p>
-<ol>
-<li>Open the Creative Cloud desktop app and go to Apps > Update. You can find the available app updates in the New updates section.</li>
-<li>Hover over the Update option for your desired app, and then select Later. You get a confirmation message that your app will be updated when it's no longer in use.</li>
-<li>Close Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc if it's running.</li>
-<li>Go back to the Creative Cloud desktop app and click on the Update option for your desired app again. You can see a progress bar and a confirmation message when the update is done.</li>
-</ol>
-<p>Note: If you don't see any updates available for your app, you may need to check for updates manually by clicking on the Check for updates option in the More actions menu.</p>
-<li><strong>What are the benefits of updating Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc?</strong></li>
-<p>Updating Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc can bring you many benefits, such as:</p>
-<ul>
-<li>Getting access to new features and improvements that enhance your productivity and creativity.</li>
-<li>Fixing bugs and issues that may affect the performance and stability of the software.</li>
-<li>Improving the compatibility and security of the software with the latest operating systems and browsers.</li>
-<li>Getting support and assistance from Adobe and other users in case of any problems or questions.</li>
-</ul>
-<li><strong>How can I uninstall Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc?</strong></li>
-<p>You can uninstall Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc by using the Creative Cloud desktop app or the Windows Control Panel. You can do this by following these steps:</p>
-<ol>
-<li>Open the Creative Cloud desktop app and go to Apps > Installed.</li>
-<li>Hover over the More actions option for your desired app, and then select Uninstall.</li>
-<li>Follow the on-screen instructions to complete the uninstallation process.</li>
-</ol>
-<p>Alternatively, you can uninstall Adobe Illustrator CC 2018 V21.0.2.242 Incl Crack Download Pc by using the Windows Control Panel. You can do this by following these steps:</p>
-<ol>
-<li>Open the Windows Control Panel and go to Programs > Programs and Features.</li>
190 |
-
<li>Select Adobe Illustrator CC 2018 from the list of installed programs and click on Uninstall/Change.</li>
|
191 |
-
<li>Follow the on-screen instructions to complete the uninstallation process.</li>
|
192 |
-
</ol>
|
193 |
-
</p> 0a6ba089eb<br />
|
194 |
-
<br />
|
195 |
-
<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download 25 To Life Pc Game Full Version _BEST_.md
DELETED
@@ -1,31 +0,0 @@
<h1>Download 25 to Life PC Game Full Version for Free</h1>
<p>If you are looking for a thrilling third-person shooter that pits cops against criminals in a dark corruption scheme, you might want to check out 25 to Life. The game was released in 2006 for Windows, PlayStation 2, and Xbox, and it lets you choose which side to join. You can play as Police Officer Williams, who is trying to stop organized crime and felony across the city, or as Andre Freeze, a criminal who is betrayed by his friend Shaun Calderon and ends up in prison. You can also play online with up to 16 players in different modes and maps.</p>
<p>In this article, we will show you how to download the full version of 25 to Life for PC for free. You can either download a preinstalled version of the game or an ISO file that you install yourself. Both options are easy to follow, and we provide the necessary instructions below.</p>
<h2>How to Download 25 to Life PC Game Full Version (Preinstalled)</h2>
<p>If you want to download a preinstalled version of 25 to Life, follow these steps:</p>
<ol>
<li>Follow the download link to the Old Games Download website.</li>
<li>Scroll down until you see a section called "Download 25 to Life".</li>
<li>Click the button that says "Download 25_to_Life_Win_Preinstalled_EN.zip (591MB)".</li>
<li>Wait for the download to finish, then extract the ZIP file using WinRAR or similar software.</li>
<li>Open the folder called "Game Files" and run the file called "TTL.exe".</li>
<li>Enjoy playing 25 to Life on your PC!</li>
</ol>
<h2>How to Download 25 to Life PC Game Full Version (ISO File)</h2>
<p>If you prefer to download 25 to Life as an ISO file that you install yourself, follow these steps:</p>
<ol>
<li>Follow the download link to the Old Games Download website.</li>
<li>Scroll down until you see a section called "Download 25 to Life".</li>
<li>Click the button that says "Download 25_to_Life_Win_ISO_EN.zip (1.04GB)".</li>
<li>Wait for the download to finish, then extract the ZIP file using WinRAR or similar software.</li>
<li>Open the folder called "Game Files" and mount the file called "OGD_25_to_Life.iso" using Daemon Tools or similar software.</li>
<li>Run the file called "setup.exe" and follow the on-screen instructions to install the game.</li>
<li>Once the installation is complete, go into the folder called "NOCD" and copy the file called "TTL.exe" into the game installation directory.</li>
<li>Enjoy playing 25 to Life on your PC!</li>
</ol>
<h2>Conclusion</h2>
<p>25 to Life offers plenty of action and excitement as you play as either a cop or a criminal in a corrupt city. You can experience a gripping story mode with three playable protagonists or join online matches with other players. To download the full PC version for free, use either of the methods shown in this article. We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below.</p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dota Map 6.83 AI and Challenge Your Friends in Custom Games.md
DELETED
@@ -1,135 +0,0 @@
<h1>Download Dota Map 6.83 AI: A Guide for Warcraft III Fans</h1>
<p>If you are a fan of Warcraft III and the popular mod Defense of the Ancients (DotA), you might be interested in downloading and playing Dota Map 6.83 AI, a custom map that lets you play with computer-controlled bots. In this article, we explain what Dota Map 6.83 AI is, how to download and install it, and how to play and enjoy it.</p>
<h2>What is Dota Map 6.83 AI?</h2>
<p>Dota Map 6.83 AI is a custom map for Warcraft III: The Frozen Throne based on the official DotA version 6.83d by IceFrog, the creator of DotA and DotA 2. It adds artificial intelligence (AI) to the game, so you can play with or against computer-controlled heroes, also known as bots.</p>
<h3>The features of Dota Map 6.83 AI</h3>
<p>Dota Map 6.83 AI has the following features:</p>
<ul>
<li>It includes all 112 heroes from DotA version 6.83d, each with their own skills, items, and strategies.</li>
<li>It has four difficulty levels for the bots: Easy, Normal, Hard, and Insane.</li>
<li>It offers various game modes and options, such as All Pick, Random Draft, All Random, Captains Mode, Reverse Captains Mode, Death Match, and more.</li>
<li>It has a built-in cheat system that you can use to test different scenarios and outcomes.</li>
<li>It has a map information system that you can access by pressing F9 in-game.</li>
<li>It has a website where you can find more information, updates, and feedback: www.PlayDotA.com.</li>
</ul>
<h3>The advantages and disadvantages of Dota Map 6.83 AI</h3>
<p>Dota Map 6.83 AI has some advantages and disadvantages that you should be aware of before playing:</p>
<table>
<tr><th>Advantages</th><th>Disadvantages</th></tr>
<tr><td>It lets you play DotA offline without an internet connection or a human opponent.</td><td>It requires Warcraft III: The Frozen Throne with patch 1.26 or higher to run properly.</td></tr>
<tr><td>It helps you practice your skills, learn new heroes, and experiment with different strategies.</td><td>It may not be compatible with some other custom maps or mods you have installed.</td></tr>
<tr><td>It offers a challenging and fun experience with different difficulty levels and game modes.</td><td>It may have bugs or errors that affect gameplay or performance.</td></tr>
<tr><td>It is free to download and play.</td><td>It may not be updated as frequently as the official DotA version or DotA 2.</td></tr>
</table>
<h2>How to download and install Dota Map 6.83 AI?</h2>
<p>To download and install Dota Map 6.83 AI, follow these steps:</p>
<h3>The download link for Dota Map 6.83 AI</h3>
<p>The download link for Dota Map 6.83 AI is [here]. It is a file named DotA v6.83dAI PMV 1.42 EN.w3x with a size of 7.92 MB. You can also find other versions of the Dota AI map on the same website, such as Dota Map 6.88 AI or Dota Map 6.85 AI.</p>
<h3>The installation guide for Dota Map 6.83 AI</h3>
<p>The installation steps are as follows:</p>
<ol>
<li>Download the file DotA v6.83dAI PMV 1.42 EN.w3x from the link above and save it to your computer.</li>
<li>Locate the folder where Warcraft III: The Frozen Throne is installed, usually C:\Program Files\Warcraft III or C:\Program Files (x86)\Warcraft III.</li>
<li>Open the folder named Maps, then open the folder named Download.</li>
<li>Copy the file DotA v6.83dAI PMV 1.42 EN.w3x into the Download folder.</li>
<li>Launch Warcraft III: The Frozen Throne and select Single Player, then Custom Game.</li>
<li>Find and select the map DotA v6.83dAI PMV 1.42 EN.w3x from the list and click Start Game.</li>
<li>Choose your game mode, options, and difficulty level, then pick your hero and start playing.</li>
</ol>
<h2>How to play and enjoy Dota Map 6.83 AI?</h2>
<p>To get the most out of Dota Map 6.83 AI, keep the following tips, review, and screenshots in mind:</p>
<h3>The tips and tricks for Dota Map 6.83 AI</h3>
<p>Here are some tips and tricks that can help you improve your gameplay and have more fun:</p>
<ul>
<li>Learn the basics of DotA, such as the map layout, the objectives, the items, the heroes, and the skills. You can find many guides and tutorials online, such as [this one].</li>
<li>Practice with different heroes and try to master their skills, strengths, and weaknesses. You can also use the cheat system to test different combinations of items and skills.</li>
<li>Adjust the difficulty level of the bots to match your skill level and preference. You can also change the game mode and options to make the game more challenging or interesting.</li>
<li>Play with or against your friends over LAN or online multiplayer. You can also join online communities and forums where you can find other players, share your experiences, and get feedback.</li>
<li>Have fun and enjoy the game. Don't be discouraged by losing or frustrated by bugs. Remember that Dota Map 6.83 AI is a custom map meant to provide entertainment.</li>
</ul>
<h3>The review and screenshots of Dota Map 6.83 AI</h3>
<p>Dota Map 6.83 AI is a great custom map that offers plenty of features, options, and challenges for Warcraft III and DotA fans. It is well made, with high-quality graphics, sound, and gameplay, and it is a faithful adaptation of the official DotA version 6.83d by IceFrog, with some minor changes and improvements.</p>
<p>It is not perfect, however: it may have bugs or errors that affect gameplay or performance, it may not be compatible with some other custom maps or mods you have installed, and it may not be updated as frequently as the official DotA version or DotA 2.</p>
<p>Overall, Dota Map 6.83 AI is a fun and exciting custom map that deserves a try by any Warcraft III and DotA fan. It can provide hours of entertainment, whether you play alone or with friends.</p>
<p>Here are some screenshots of Dota Map 6.83 AI:</p>
<p>[Screenshots 1–3 of Dota Map 6.83 AI]</p>
<h2>Conclusion</h2>
<h3>Summary of the main points</h3>
<p>In this article, we discussed what Dota Map 6.83 AI is, how to download and install it, and how to play and enjoy it. We learned that:</p>
<ul>
<li>Dota Map 6.83 AI is a custom map for Warcraft III: The Frozen Throne based on the official DotA version 6.83d by IceFrog, with AI added for the bots.</li>
<li>It has many features, options, and challenges that make it a fun and exciting custom map to play.</li>
<li>It can be downloaded and installed easily by following the steps in this article.</li>
<li>It can be played and enjoyed by anyone who loves Warcraft III and DotA, offline or online, alone or with friends.</li>
</ul>
<h3>Call to action and invitation for feedback</h3>
<p>We hope this article has helped you learn more about Dota Map 6.83 AI and how to download, install, and play it. If you have any questions, comments, or suggestions, please leave them below. We would love to hear from you and improve our content.</p>
<p>Thank you for reading this article, and have a great day!</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about Dota Map 6.83 AI:</p>
<ol>
<li><b>What is the difference between Dota Map 6.83 AI and DotA 2?</b></li>
<p>Dota Map 6.83 AI is a custom map for Warcraft III: The Frozen Throne, while DotA 2 is a standalone game developed by Valve Corporation. Dota Map 6.83 AI is based on the official DotA version 6.83d by IceFrog, who is also the lead developer of DotA 2. The two share many similarities, such as the heroes, items, skills, and gameplay mechanics, but they differ in graphics, sound, interface, and updates.</p>
<li><b>How can I update Dota Map 6.83 AI to the latest version?</b></li>
<p>Visit the website www.PlayDotA.com and download the newest file from there. You can also check the website for news, updates, or feedback about the Dota AI map.</p>
<li><b>How can I play Dota Map 6.83 AI online with other players?</b></li>
<p>Use the LAN or online multiplayer options in Warcraft III: The Frozen Throne. You can also use third-party platforms or services that let you host or join custom games online, such as Garena, Battle.net, or RGC.</p>
<li><b>How can I use cheats in Dota Map 6.83 AI?</b></li>
<p>Type them in the chat box during the game. You can find a list of cheats and their effects [here]. Some of the cheats are: -gold x (gives you x amount of gold), -lvlup x (gives you x levels), -wtf (removes cooldowns and mana costs), -unwtf (disables wtf mode), -createhero name (creates a hero of your choice), -kill (kills your hero), -respawn (respawns your hero), and more.</p>
<li><b>Where can I find more information about Dota Map 6.83 AI?</b></li>
<p>Visit the website www.PlayDotA.com for more details, updates, and feedback about the map. You can also visit other websites or forums related to Warcraft III or DotA, such as www.Dota-Utilities.com, www.Dota-Allstars.com, www.DotaFire.com, www.Reddit.com/r/DotA/, and more.</p>
</ol>
spaces/7hao/bingo/src/components/ui/select.tsx
DELETED
@@ -1,123 +0,0 @@
'use client'

import * as React from 'react'
import * as SelectPrimitive from '@radix-ui/react-select'

import { cn } from '@/lib/utils'
import {
  IconArrowDown,
  IconCheck,
  IconChevronUpDown
} from '@/components/ui/icons'

const Select = SelectPrimitive.Root

const SelectGroup = SelectPrimitive.Group

const SelectValue = SelectPrimitive.Value

const SelectTrigger = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Trigger>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
>(({ className, children, ...props }, ref) => (
  <SelectPrimitive.Trigger
    ref={ref}
    className={cn(
      'flex h-9 w-full items-center justify-between rounded-md border border-input bg-transparent px-3 py-2 text-sm shadow ring-offset-background placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50',
      className
    )}
    {...props}
  >
    {children}
    <SelectPrimitive.Icon asChild>
      <IconChevronUpDown className="opacity-50" />
    </SelectPrimitive.Icon>
  </SelectPrimitive.Trigger>
))
SelectTrigger.displayName = SelectPrimitive.Trigger.displayName

const SelectContent = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Content>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
>(({ className, children, position = 'popper', ...props }, ref) => (
  <SelectPrimitive.Portal>
    <SelectPrimitive.Content
      ref={ref}
      className={cn(
        'relative z-50 min-w-[8rem] overflow-hidden rounded-md border bg-popover text-popover-foreground shadow-md animate-in fade-in-80',
        position === 'popper' && 'translate-y-1',
        className
      )}
      position={position}
      {...props}
    >
      <SelectPrimitive.Viewport
        className={cn(
          'p-1',
          position === 'popper' &&
            'h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)]'
        )}
      >
        {children}
      </SelectPrimitive.Viewport>
    </SelectPrimitive.Content>
  </SelectPrimitive.Portal>
))
SelectContent.displayName = SelectPrimitive.Content.displayName

const SelectLabel = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Label>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
>(({ className, ...props }, ref) => (
  <SelectPrimitive.Label
    ref={ref}
    className={cn('py-1.5 pl-8 pr-2 text-sm font-semibold', className)}
    {...props}
  />
))
SelectLabel.displayName = SelectPrimitive.Label.displayName

const SelectItem = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Item>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
>(({ className, children, ...props }, ref) => (
  <SelectPrimitive.Item
    ref={ref}
    className={cn(
      'relative flex w-full cursor-default select-none items-center rounded-sm py-1.5 pl-8 pr-2 text-sm outline-none focus:bg-accent focus:text-accent-foreground data-[disabled]:pointer-events-none data-[disabled]:opacity-50',
      className
    )}
    {...props}
  >
    <span className="absolute left-2 flex h-3.5 w-3.5 items-center justify-center">
      <SelectPrimitive.ItemIndicator>
        <IconCheck className="h-4 w-4" />
      </SelectPrimitive.ItemIndicator>
    </span>
    <SelectPrimitive.ItemText>{children}</SelectPrimitive.ItemText>
  </SelectPrimitive.Item>
))
SelectItem.displayName = SelectPrimitive.Item.displayName

const SelectSeparator = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Separator>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
>(({ className, ...props }, ref) => (
  <SelectPrimitive.Separator
    ref={ref}
    className={cn('-mx-1 my-1 h-px bg-muted', className)}
    {...props}
  />
))
SelectSeparator.displayName = SelectPrimitive.Separator.displayName

export {
  Select,
  SelectGroup,
  SelectValue,
  SelectTrigger,
  SelectContent,
  SelectLabel,
  SelectItem,
  SelectSeparator
}
spaces/AIFILMS/StyleGANEX/datasets/augmentations.py
DELETED
@@ -1,110 +0,0 @@
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
from torchvision import transforms


class ToOneHot(object):
    """ Convert the input PIL image to a one-hot torch tensor """
    def __init__(self, n_classes=None):
        self.n_classes = n_classes

    def onehot_initialization(self, a):
        if self.n_classes is None:
            self.n_classes = len(np.unique(a))
        out = np.zeros(a.shape + (self.n_classes, ), dtype=int)
        out[self.__all_idx(a, axis=2)] = 1
        return out

    def __all_idx(self, idx, axis):
        grid = np.ogrid[tuple(map(slice, idx.shape))]
        grid.insert(axis, idx)
        return tuple(grid)

    def __call__(self, img):
        img = np.array(img)
        one_hot = self.onehot_initialization(img)
        return one_hot


class BilinearResize(object):
    def __init__(self, factors=[1, 2, 4, 8, 16, 32]):
        self.factors = factors

    def __call__(self, image):
        # pick a random downsampling factor and run the bicubic filter on CPU
        factor = np.random.choice(self.factors, size=1)[0]
        D = BicubicDownSample(factor=factor, cuda=False)
        img_tensor = transforms.ToTensor()(image).unsqueeze(0)
        img_tensor_lr = D(img_tensor)[0].clamp(0, 1)
        img_low_res = transforms.ToPILImage()(img_tensor_lr)
        return img_low_res


class BicubicDownSample(nn.Module):
    def bicubic_kernel(self, x, a=-0.50):
        """
        This equation is exactly copied from the website below:
        https://clouard.users.greyc.fr/Pantheon/experiments/rescaling/index-en.html#bicubic
        """
        abs_x = torch.abs(x)
        if abs_x <= 1.:
            return (a + 2.) * torch.pow(abs_x, 3.) - (a + 3.) * torch.pow(abs_x, 2.) + 1
        elif 1. < abs_x < 2.:
            return a * torch.pow(abs_x, 3) - 5. * a * torch.pow(abs_x, 2.) + 8. * a * abs_x - 4. * a
        else:
            return 0.0

    def __init__(self, factor=4, cuda=True, padding='reflect'):
        super().__init__()
        self.factor = factor
        size = factor * 4
        k = torch.tensor([self.bicubic_kernel((i - torch.floor(torch.tensor(size / 2)) + 0.5) / factor)
                          for i in range(size)], dtype=torch.float32)
        k = k / torch.sum(k)
        # separable 1-D kernels: k1 filters along height, k2 along width
        k1 = torch.reshape(k, shape=(1, 1, size, 1))
        self.k1 = torch.cat([k1, k1, k1], dim=0)
        k2 = torch.reshape(k, shape=(1, 1, 1, size))
        self.k2 = torch.cat([k2, k2, k2], dim=0)
        self.cuda = '.cuda' if cuda else ''
        self.padding = padding
        for param in self.parameters():
            param.requires_grad = False

    def forward(self, x, nhwc=False, clip_round=False, byte_output=False):
        filter_height = self.factor * 4
        filter_width = self.factor * 4
        stride = self.factor

        pad_along_height = max(filter_height - stride, 0)
        pad_along_width = max(filter_width - stride, 0)
        filters1 = self.k1.type('torch{}.FloatTensor'.format(self.cuda))
        filters2 = self.k2.type('torch{}.FloatTensor'.format(self.cuda))

        # compute actual padding values for each side
        pad_top = pad_along_height // 2
        pad_bottom = pad_along_height - pad_top
        pad_left = pad_along_width // 2
        pad_right = pad_along_width - pad_left

        # apply mirror padding
        if nhwc:
            x = torch.transpose(torch.transpose(x, 2, 3), 1, 2)  # NHWC to NCHW

        # downscaling performed by 1-d convolution
        x = F.pad(x, (0, 0, pad_top, pad_bottom), self.padding)
        x = F.conv2d(input=x, weight=filters1, stride=(stride, 1), groups=3)
        if clip_round:
            x = torch.clamp(torch.round(x), 0.0, 255.)

        x = F.pad(x, (pad_left, pad_right, 0, 0), self.padding)
        x = F.conv2d(input=x, weight=filters2, stride=(1, stride), groups=3)
        if clip_round:
            x = torch.clamp(torch.round(x), 0.0, 255.)

        if nhwc:
            x = torch.transpose(torch.transpose(x, 1, 3), 1, 2)
        if byte_output:
            return x.type('torch{}.ByteTensor'.format(self.cuda))
        else:
            return x
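

# --- Usage sketch (not part of the original file) ---
# A minimal example of how BicubicDownSample above might be driven directly;
# the factor, image size, and random input are illustrative assumptions.
def _demo_bicubic_downsample():
    down = BicubicDownSample(factor=4, cuda=False)
    img = torch.rand(1, 3, 256, 256)   # NCHW float image in [0, 1]
    small = down(img)
    print(small.shape)                 # torch.Size([1, 3, 64, 64])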
spaces/AIFILMS/StyleGANEX/models/stylegan2/lpips/__init__.py
DELETED
@@ -1,161 +0,0 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
# from skimage.measure import compare_ssim
from skimage.metrics import structural_similarity as compare_ssim
import torch
from torch.autograd import Variable

from models.stylegan2.lpips import dist_model


class PerceptualLoss(torch.nn.Module):
    def __init__(self, model='net-lin', net='alex', colorspace='rgb', spatial=False, use_gpu=True, gpu_ids=[0]):  # VGG using our perceptually-learned weights (LPIPS metric)
    # def __init__(self, model='net', net='vgg', use_gpu=True):  # "default" way of using VGG as a perceptual loss
        super(PerceptualLoss, self).__init__()
        print('Setting up Perceptual loss...')
        self.use_gpu = use_gpu
        self.spatial = spatial
        self.gpu_ids = gpu_ids
        self.model = dist_model.DistModel()
        self.model.initialize(model=model, net=net, use_gpu=use_gpu, colorspace=colorspace, spatial=self.spatial, gpu_ids=gpu_ids)
        print('...[%s] initialized' % self.model.name())
        print('...Done')

    def forward(self, pred, target, normalize=False):
        """
        Pred and target are Variables.
        If normalize is True, assumes the images are between [0,1] and then scales them between [-1,+1]
        If normalize is False, assumes the images are already between [-1,+1]

        Inputs pred and target are Nx3xHxW
        Output pytorch Variable N long
        """
        if normalize:
            target = 2 * target - 1
            pred = 2 * pred - 1

        return self.model.forward(target, pred)


def normalize_tensor(in_feat, eps=1e-10):
    norm_factor = torch.sqrt(torch.sum(in_feat**2, dim=1, keepdim=True))
    return in_feat / (norm_factor + eps)


def l2(p0, p1, range=255.):
    return .5 * np.mean((p0 / range - p1 / range)**2)


def psnr(p0, p1, peak=255.):
    return 10 * np.log10(peak**2 / np.mean((1. * p0 - 1. * p1)**2))


def dssim(p0, p1, range=255.):
    return (1 - compare_ssim(p0, p1, data_range=range, multichannel=True)) / 2.


def rgb2lab(in_img, mean_cent=False):
    from skimage import color
    img_lab = color.rgb2lab(in_img)
    if mean_cent:
        img_lab[:, :, 0] = img_lab[:, :, 0] - 50
    return img_lab


def tensor2np(tensor_obj):
    # change dimension of a tensor object into a numpy array
    return tensor_obj[0].cpu().float().numpy().transpose((1, 2, 0))


def np2tensor(np_obj):
    # change dimension of np array into tensor array
    return torch.Tensor(np_obj[:, :, :, np.newaxis].transpose((3, 2, 0, 1)))


def tensor2tensorlab(image_tensor, to_norm=True, mc_only=False):
    # image tensor to lab tensor
    from skimage import color

    img = tensor2im(image_tensor)
    img_lab = color.rgb2lab(img)
    if mc_only:
        img_lab[:, :, 0] = img_lab[:, :, 0] - 50
    if to_norm and not mc_only:
        img_lab[:, :, 0] = img_lab[:, :, 0] - 50
        img_lab = img_lab / 100.

    return np2tensor(img_lab)


def tensorlab2tensor(lab_tensor, return_inbnd=False):
    from skimage import color
    import warnings
    warnings.filterwarnings("ignore")

    lab = tensor2np(lab_tensor) * 100.
    lab[:, :, 0] = lab[:, :, 0] + 50

    rgb_back = 255. * np.clip(color.lab2rgb(lab.astype('float')), 0, 1)
    if return_inbnd:
        # convert back to lab, see if we match
        lab_back = color.rgb2lab(rgb_back.astype('uint8'))
        mask = 1. * np.isclose(lab_back, lab, atol=2.)
        mask = np2tensor(np.prod(mask, axis=2)[:, :, np.newaxis])
        return (im2tensor(rgb_back), mask)
    else:
        return im2tensor(rgb_back)


def rgb2lab(input):  # NOTE: redefines rgb2lab above, here with a [0, 255] input range
    from skimage import color
    return color.rgb2lab(input / 255.)


def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255. / 2.):
    image_numpy = image_tensor[0].cpu().float().numpy()
    image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor
    return image_numpy.astype(imtype)


def im2tensor(image, imtype=np.uint8, cent=1., factor=255. / 2.):
    return torch.Tensor((image / factor - cent)
                        [:, :, :, np.newaxis].transpose((3, 2, 0, 1)))


def tensor2vec(vector_tensor):
    return vector_tensor.data.cpu().numpy()[:, :, 0, 0]


def voc_ap(rec, prec, use_07_metric=False):
    """ ap = voc_ap(rec, prec, [use_07_metric])
    Compute VOC AP given precision and recall.
    If use_07_metric is true, uses the
    VOC 07 11 point method (default:False).
    """
    if use_07_metric:
        # 11 point metric
        ap = 0.
        for t in np.arange(0., 1.1, 0.1):
            if np.sum(rec >= t) == 0:
                p = 0
            else:
                p = np.max(prec[rec >= t])
            ap = ap + p / 11.
    else:
        # correct AP calculation
        # first append sentinel values at the end
        mrec = np.concatenate(([0.], rec, [1.]))
        mpre = np.concatenate(([0.], prec, [0.]))

        # compute the precision envelope
        for i in range(mpre.size - 1, 0, -1):
            mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])

        # to calculate area under PR curve, look for points
        # where X axis (recall) changes value
        i = np.where(mrec[1:] != mrec[:-1])[0]

        # and sum (\Delta recall) * prec
        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
    return ap
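

# --- Usage sketch (not part of the original file) ---
# A minimal example of how PerceptualLoss above might be driven; it assumes
# the bundled dist_model weights are available and runs on CPU. The batch
# size, image size, and random inputs are illustrative assumptions only.
def _demo_perceptual_loss():
    loss_fn = PerceptualLoss(model='net-lin', net='alex', use_gpu=False, gpu_ids=[])
    pred = torch.rand(4, 3, 256, 256)      # images in [0, 1]
    target = torch.rand(4, 3, 256, 256)
    # normalize=True rescales [0, 1] inputs to the [-1, +1] range the model expects
    d = loss_fn(pred, target, normalize=True)
    print(d.shape)                          # one distance per image in the batch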
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/htsat.py
DELETED
@@ -1,1308 +0,0 @@
# Ke Chen
# HTS-AT: A HIERARCHICAL TOKEN-SEMANTIC AUDIO TRANSFORMER FOR SOUND CLASSIFICATION AND DETECTION
# Some layers designed on the model
# below codes are based and referred from https://github.com/microsoft/Swin-Transformer
# Swin Transformer for Computer Vision: https://arxiv.org/pdf/2103.14030.pdf

import torch
import torch.nn as nn
import torch.nn.functional as F
from itertools import repeat
import collections.abc
import math
import warnings

from torch.nn.init import _calculate_fan_in_and_fan_out
import torch.utils.checkpoint as checkpoint

import random

from torchlibrosa.stft import Spectrogram, LogmelFilterBank
from torchlibrosa.augmentation import SpecAugmentation

from .utils import do_mixup, interpolate

from .feature_fusion import iAFF, AFF, DAF


# from PyTorch internals
def _ntuple(n):
    def parse(x):
        if isinstance(x, collections.abc.Iterable):
            return x
        return tuple(repeat(x, n))

    return parse


to_1tuple = _ntuple(1)
to_2tuple = _ntuple(2)
to_3tuple = _ntuple(3)
to_4tuple = _ntuple(4)
to_ntuple = _ntuple


def drop_path(x, drop_prob: float = 0.0, training: bool = False):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (
        x.ndim - 1
    )  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""

    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)
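

# --- Usage sketch (not part of the original file) ---
# Illustrates the stochastic-depth behaviour of DropPath above: in training
# mode roughly drop_prob of the per-sample paths are zeroed and the survivors
# are rescaled by 1/keep_prob; at eval time the module is the identity.
# The batch shape and probability are illustrative assumptions.
def _demo_drop_path():
    torch.manual_seed(0)
    dp = DropPath(drop_prob=0.5)
    dp.train()
    x = torch.ones(8, 4)                       # 8 samples, 4 features each
    y = dp(x)
    print((y.sum(dim=1) == 0).sum().item())    # how many of the 8 paths were dropped
    dp.eval()
    print(torch.equal(dp(x), x))               # True: identity at inference time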
class PatchEmbed(nn.Module):
    """2D Image to Patch Embedding"""

    def __init__(
        self,
        img_size=224,
        patch_size=16,
        in_chans=3,
        embed_dim=768,
        norm_layer=None,
        flatten=True,
        patch_stride=16,
        enable_fusion=False,
        fusion_type="None",
    ):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patch_stride = to_2tuple(patch_stride)
        self.img_size = img_size
        self.patch_size = patch_size
        self.patch_stride = patch_stride
        self.grid_size = (
            img_size[0] // patch_stride[0],
            img_size[1] // patch_stride[1],
        )
        self.num_patches = self.grid_size[0] * self.grid_size[1]
        self.flatten = flatten
        self.in_chans = in_chans
        self.embed_dim = embed_dim

        self.enable_fusion = enable_fusion
        self.fusion_type = fusion_type

        padding = (
            (patch_size[0] - patch_stride[0]) // 2,
            (patch_size[1] - patch_stride[1]) // 2,
        )

        if (self.enable_fusion) and (self.fusion_type == "channel_map"):
            self.proj = nn.Conv2d(
                in_chans * 4,
                embed_dim,
                kernel_size=patch_size,
                stride=patch_stride,
                padding=padding,
            )
        else:
            self.proj = nn.Conv2d(
                in_chans,
                embed_dim,
                kernel_size=patch_size,
                stride=patch_stride,
                padding=padding,
            )
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

        if (self.enable_fusion) and (
            self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"]
        ):
            self.mel_conv2d = nn.Conv2d(
                in_chans,
                embed_dim,
                kernel_size=(patch_size[0], patch_size[1] * 3),
                stride=(patch_stride[0], patch_stride[1] * 3),
                padding=padding,
            )
            if self.fusion_type == "daf_2d":
                self.fusion_model = DAF()
            elif self.fusion_type == "aff_2d":
                self.fusion_model = AFF(channels=embed_dim, type="2D")
            elif self.fusion_type == "iaff_2d":
                self.fusion_model = iAFF(channels=embed_dim, type="2D")

    def forward(self, x, longer_idx=None):
        if (self.enable_fusion) and (
            self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d"]
        ):
            global_x = x[:, 0:1, :, :]

            # global processing
            B, C, H, W = global_x.shape
            assert (
                H == self.img_size[0] and W == self.img_size[1]
            ), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
            global_x = self.proj(global_x)
            TW = global_x.size(-1)
            if len(longer_idx) > 0:
                # local processing
                local_x = x[longer_idx, 1:, :, :].contiguous()
                B, C, H, W = local_x.shape
                local_x = local_x.view(B * C, 1, H, W)
                local_x = self.mel_conv2d(local_x)
                local_x = local_x.view(
                    B, C, local_x.size(1), local_x.size(2), local_x.size(3)
                )
                local_x = local_x.permute((0, 2, 3, 1, 4)).contiguous().flatten(3)
                TB, TC, TH, _ = local_x.size()
                if local_x.size(-1) < TW:
                    local_x = torch.cat(
                        [
                            local_x,
                            torch.zeros(
                                (TB, TC, TH, TW - local_x.size(-1)),
                                device=global_x.device,
                            ),
                        ],
                        dim=-1,
                    )
                else:
                    local_x = local_x[:, :, :, :TW]

                global_x[longer_idx] = self.fusion_model(global_x[longer_idx], local_x)
            x = global_x
        else:
            B, C, H, W = x.shape
            assert (
                H == self.img_size[0] and W == self.img_size[1]
            ), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
            x = self.proj(x)

        if self.flatten:
            x = x.flatten(2).transpose(1, 2)  # BCHW -> BNC
        x = self.norm(x)
        return x


class Mlp(nn.Module):
    """MLP as used in Vision Transformer, MLP-Mixer and related networks"""

    def __init__(
        self,
        in_features,
        hidden_features=None,
        out_features=None,
        act_layer=nn.GELU,
        drop=0.0,
    ):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def _no_grad_trunc_normal_(tensor, mean, std, a, b):
    # Cut & paste from PyTorch official master until it's in a few official releases - RW
    # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
    def norm_cdf(x):
        # Computes standard normal cumulative distribution function
        return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0

    if (mean < a - 2 * std) or (mean > b + 2 * std):
        warnings.warn(
            "mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
            "The distribution of values may be incorrect.",
            stacklevel=2,
        )

    with torch.no_grad():
        # Values are generated by using a truncated uniform distribution and
        # then using the inverse CDF for the normal distribution.
        # Get upper and lower cdf values
        l = norm_cdf((a - mean) / std)
        u = norm_cdf((b - mean) / std)

        # Uniformly fill tensor with values from [l, u], then translate to
        # [2l-1, 2u-1].
        tensor.uniform_(2 * l - 1, 2 * u - 1)

        # Use inverse cdf transform for normal distribution to get truncated
        # standard normal
        tensor.erfinv_()

        # Transform to proper mean, std
        tensor.mul_(std * math.sqrt(2.0))
        tensor.add_(mean)

        # Clamp to ensure it's in the proper range
        tensor.clamp_(min=a, max=b)
        return tensor


def trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0):
    # type: (Tensor, float, float, float, float) -> Tensor
    r"""Fills the input Tensor with values drawn from a truncated
    normal distribution. The values are effectively drawn from the
    normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
    with values outside :math:`[a, b]` redrawn until they are within
    the bounds. The method used for generating the random values works
    best when :math:`a \leq \text{mean} \leq b`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        mean: the mean of the normal distribution
        std: the standard deviation of the normal distribution
        a: the minimum cutoff value
        b: the maximum cutoff value
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.trunc_normal_(w)
    """
    return _no_grad_trunc_normal_(tensor, mean, std, a, b)


def variance_scaling_(tensor, scale=1.0, mode="fan_in", distribution="normal"):
    fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
    if mode == "fan_in":
        denom = fan_in
    elif mode == "fan_out":
        denom = fan_out
    elif mode == "fan_avg":
        denom = (fan_in + fan_out) / 2

    variance = scale / denom

    if distribution == "truncated_normal":
        # constant is stddev of standard normal truncated to (-2, 2)
        trunc_normal_(tensor, std=math.sqrt(variance) / 0.87962566103423978)
    elif distribution == "normal":
        tensor.normal_(std=math.sqrt(variance))
    elif distribution == "uniform":
        bound = math.sqrt(3 * variance)
        tensor.uniform_(-bound, bound)
    else:
        raise ValueError(f"invalid distribution {distribution}")


def lecun_normal_(tensor):
    variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal")


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size
    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = (
        x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    )
    return windows


def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image
    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(
        B, H // window_size, W // window_size, window_size, window_size, -1
    )
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x
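

# --- Usage sketch (not part of the original file) ---
# Shows that window_partition and window_reverse above are exact inverses
# when H and W are divisible by the window size; the shapes are illustrative.
def _demo_window_roundtrip():
    x = torch.rand(2, 8, 8, 96)                       # (B, H, W, C)
    windows = window_partition(x, window_size=4)
    print(windows.shape)                              # (8, 4, 4, 96): num_windows * B
    restored = window_reverse(windows, window_size=4, H=8, W=8)
    print(torch.equal(x, restored))                   # True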
class WindowAttention(nn.Module):
|
353 |
-
r"""Window based multi-head self attention (W-MSA) module with relative position bias.
|
354 |
-
It supports both of shifted and non-shifted window.
|
355 |
-
Args:
|
356 |
-
dim (int): Number of input channels.
|
357 |
-
window_size (tuple[int]): The height and width of the window.
|
358 |
-
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(
        self,
        dim,
        window_size,
        num_heads,
        qkv_bias=True,
        qk_scale=None,
        attn_drop=0.0,
        proj_drop=0.0,
    ):
        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim**-0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
        )  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = (
            coords_flatten[:, :, None] - coords_flatten[:, None, :]
        )  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(
            1, 2, 0
        ).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=0.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = (
            self.qkv(x)
            .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
            .permute(2, 0, 3, 1, 4)
        )
        q, k, v = (
            qkv[0],
            qkv[1],
            qkv[2],
        )  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = q @ k.transpose(-2, -1)

        relative_position_bias = self.relative_position_bias_table[
            self.relative_position_index.view(-1)
        ].view(
            self.window_size[0] * self.window_size[1],
            self.window_size[0] * self.window_size[1],
            -1,
        )  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(
            2, 0, 1
        ).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(
                1
            ).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x, attn

    def extra_repr(self):
        return f"dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}"

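# --- Editor's illustration, not part of the original file: a minimal,
# self-contained sketch of how the relative-position-index buffer above is
# built. For a 2x2 window there are (2*Wh-1)*(2*Ww-1) = 9 distinct offsets,
# and the index matrix selects rows of the learned bias table.
import torch

Wh, Ww = 2, 2
coords = torch.stack(torch.meshgrid([torch.arange(Wh), torch.arange(Ww)]))  # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1)                                   # 2, Wh*Ww
rel = coords_flatten[:, :, None] - coords_flatten[:, None, :]               # 2, Wh*Ww, Wh*Ww
rel = rel.permute(1, 2, 0).contiguous()
rel[:, :, 0] += Wh - 1          # shift row offsets to start from 0
rel[:, :, 1] += Ww - 1
rel[:, :, 0] *= 2 * Ww - 1      # row-major flattening of the (dy, dx) pair
index = rel.sum(-1)
print(index)                    # 4x4 matrix of values in [0, 8]
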
# The model is built from Swin Transformer blocks, so Swin Transformer pretrained weights can be reused
class SwinTransformerBlock(nn.Module):
    r"""Swin Transformer Block.
    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(
        self,
        dim,
        input_resolution,
        num_heads,
        window_size=7,
        shift_size=0,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop=0.0,
        attn_drop=0.0,
        drop_path=0.0,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
        norm_before_mlp="ln",
    ):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        self.norm_before_mlp = norm_before_mlp
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert (
            0 <= self.shift_size < self.window_size
        ), "shift_size must be in [0, window_size)"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim,
            window_size=to_2tuple(self.window_size),
            num_heads=num_heads,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            attn_drop=attn_drop,
            proj_drop=drop,
        )

        self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
        if self.norm_before_mlp == "ln":
            self.norm2 = nn.LayerNorm(dim)
        elif self.norm_before_mlp == "bn":
            self.norm2 = lambda x: nn.BatchNorm1d(dim)(x.transpose(1, 2)).transpose(
                1, 2
            )
        else:
            raise NotImplementedError
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(
            in_features=dim,
            hidden_features=mlp_hidden_dim,
            act_layer=act_layer,
            drop=drop,
        )

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (
                slice(0, -self.window_size),
                slice(-self.window_size, -self.shift_size),
                slice(-self.shift_size, None),
            )
            w_slices = (
                slice(0, -self.window_size),
                slice(-self.window_size, -self.shift_size),
                slice(-self.shift_size, None),
            )
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(
                img_mask, self.window_size
            )  # nW, window_size, window_size, 1
            mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask = attn_mask.masked_fill(
                attn_mask != 0, float(-100.0)
            ).masked_fill(attn_mask == 0, float(0.0))
        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)

    def forward(self, x):
        # pdb.set_trace()
        H, W = self.input_resolution
        # print("H: ", H)
        # print("W: ", W)
        # pdb.set_trace()
        B, L, C = x.shape
        # assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(
                x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)
            )
        else:
            shifted_x = x

        # partition windows
        x_windows = window_partition(
            shifted_x, self.window_size
        )  # nW*B, window_size, window_size, C
        x_windows = x_windows.view(
            -1, self.window_size * self.window_size, C
        )  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows, attn = self.attn(
            x_windows, mask=self.attn_mask
        )  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
        shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C

        # reverse cyclic shift
        if self.shift_size > 0:
            x = torch.roll(
                shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)
            )
        else:
            x = shifted_x
        x = x.view(B, H * W, C)

        # FFN
        x = shortcut + self.drop_path(x)
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x, attn

    def extra_repr(self):
        return (
            f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, "
            f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
        )

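# --- Editor's illustration, not part of the original file: a self-contained
# sketch of the SW-MSA attention mask built above, for H = W = 4, window_size
# = 2, shift_size = 1. `partition` is a local stand-in for window_partition.
import torch

H = W = 4; window_size = 2; shift_size = 1

def partition(x, ws):  # (1, H, W, 1) -> (nW, ws*ws)
    x = x.view(1, H // ws, ws, W // ws, ws, 1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws)

img_mask = torch.zeros((1, H, W, 1))
slices = (slice(0, -window_size), slice(-window_size, -shift_size), slice(-shift_size, None))
cnt = 0
for h in slices:
    for w in slices:
        img_mask[:, h, w, :] = cnt  # label each shifted region with its own id
        cnt += 1
mask_windows = partition(img_mask, window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, -100.0).masked_fill(attn_mask == 0, 0.0)
print(attn_mask.shape)  # (4, 4, 4): one additive mask per window; -100 blocks cross-region attention
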
class PatchMerging(nn.Module):
    r"""Patch Merging Layer.
    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."

        x = x.view(B, H, W, C)

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C

        x = self.norm(x)
        x = self.reduction(x)

        return x

    def extra_repr(self):
        return f"input_resolution={self.input_resolution}, dim={self.dim}"

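# --- Editor's illustration, not part of the original file: PatchMerging
# halves each spatial side and doubles the channel count (4C -> 2C after the
# linear reduction). Assumes the class definition above is in scope.
import torch

merge = PatchMerging(input_resolution=(8, 8), dim=96)
tokens = torch.randn(2, 8 * 8, 96)  # B, H*W, C
print(merge(tokens).shape)          # torch.Size([2, 16, 192]) = B, (H/2)*(W/2), 2C
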
class BasicLayer(nn.Module):
    """A basic Swin Transformer layer for one stage.
    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
    """

    def __init__(
        self,
        dim,
        input_resolution,
        depth,
        num_heads,
        window_size,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop=0.0,
        attn_drop=0.0,
        drop_path=0.0,
        norm_layer=nn.LayerNorm,
        downsample=None,
        use_checkpoint=False,
        norm_before_mlp="ln",
    ):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList(
            [
                SwinTransformerBlock(
                    dim=dim,
                    input_resolution=input_resolution,
                    num_heads=num_heads,
                    window_size=window_size,
                    shift_size=0 if (i % 2 == 0) else window_size // 2,
                    mlp_ratio=mlp_ratio,
                    qkv_bias=qkv_bias,
                    qk_scale=qk_scale,
                    drop=drop,
                    attn_drop=attn_drop,
                    drop_path=drop_path[i]
                    if isinstance(drop_path, list)
                    else drop_path,
                    norm_layer=norm_layer,
                    norm_before_mlp=norm_before_mlp,
                )
                for i in range(depth)
            ]
        )

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(
                input_resolution, dim=dim, norm_layer=norm_layer
            )
        else:
            self.downsample = None

    def forward(self, x):
        attns = []
        for blk in self.blocks:
            if self.use_checkpoint:
                x = checkpoint.checkpoint(blk, x)
            else:
                x, attn = blk(x)
                if not self.training:
                    attns.append(attn.unsqueeze(0))
        if self.downsample is not None:
            x = self.downsample(x)
        if not self.training:
            attn = torch.cat(attns, dim=0)
            attn = torch.mean(attn, dim=0)
        return x, attn

    def extra_repr(self):
        return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"

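# --- Editor's illustration, not part of the original file: within a
# BasicLayer the blocks alternate between regular windows (shift 0) and
# shifted windows (window_size // 2), exactly as the `shift_size=` line above.
depth, window_size = 4, 8
shifts = [0 if (i % 2 == 0) else window_size // 2 for i in range(depth)]
print(shifts)  # [0, 4, 0, 4]
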
# The Core of HTSAT
class HTSAT_Swin_Transformer(nn.Module):
    r"""HTSAT based on the Swin Transformer
    Args:
        spec_size (int | tuple(int)): Input Spectrogram size. Default 256
        patch_size (int | tuple(int)): Patch size. Default: 4
        patch_stride (int | tuple(int)): Patch stride for the frequency and time axes. Default: 4
        in_chans (int): Number of input image channels. Default: 1 (mono)
        num_classes (int): Number of classes for classification head. Default: 527
        embed_dim (int): Patch embedding dimension. Default: 96
        depths (tuple(int)): Depth of each HTSAT-Swin Transformer layer.
        num_heads (tuple(int)): Number of attention heads in different layers.
        window_size (int): Window size. Default: 8
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
        drop_rate (float): Dropout rate. Default: 0
        attn_drop_rate (float): Attention dropout rate. Default: 0
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
        patch_norm (bool): If True, add normalization after patch embedding. Default: True
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
        config (module): The configuration Module from config.py
    """

    def __init__(
        self,
        spec_size=256,
        patch_size=4,
        patch_stride=(4, 4),
        in_chans=1,
        num_classes=527,
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[4, 8, 16, 32],
        window_size=8,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.0,
        attn_drop_rate=0.0,
        drop_path_rate=0.1,
        norm_layer=nn.LayerNorm,
        ape=False,
        patch_norm=True,
        use_checkpoint=False,
        norm_before_mlp="ln",
        config=None,
        enable_fusion=False,
        fusion_type="None",
        **kwargs,
    ):
        super(HTSAT_Swin_Transformer, self).__init__()

        self.config = config
        self.spec_size = spec_size
        self.patch_stride = patch_stride
        self.patch_size = patch_size
        self.window_size = window_size
        self.embed_dim = embed_dim
        self.depths = depths
        self.ape = ape
        self.in_chans = in_chans
        self.num_classes = num_classes
        self.num_heads = num_heads
        self.num_layers = len(self.depths)
        self.num_features = int(self.embed_dim * 2 ** (self.num_layers - 1))

        self.drop_rate = drop_rate
        self.attn_drop_rate = attn_drop_rate
        self.drop_path_rate = drop_path_rate

        self.qkv_bias = qkv_bias
        self.qk_scale = qk_scale  # was hard-coded to None, which silently dropped the argument

        self.patch_norm = patch_norm
        self.norm_layer = norm_layer if self.patch_norm else None
        self.norm_before_mlp = norm_before_mlp
        self.mlp_ratio = mlp_ratio

        self.use_checkpoint = use_checkpoint

        self.enable_fusion = enable_fusion
        self.fusion_type = fusion_type

        # process mel-spec ; used only once
        self.freq_ratio = self.spec_size // self.config.mel_bins
        window = "hann"
        center = True
        pad_mode = "reflect"
        ref = 1.0
        amin = 1e-10
        top_db = None
        self.interpolate_ratio = 32  # Downsampled ratio
        # Spectrogram extractor
        self.spectrogram_extractor = Spectrogram(
            n_fft=config.window_size,
            hop_length=config.hop_size,
            win_length=config.window_size,
            window=window,
            center=center,
            pad_mode=pad_mode,
            freeze_parameters=True,
        )
        # Logmel feature extractor
        self.logmel_extractor = LogmelFilterBank(
            sr=config.sample_rate,
            n_fft=config.window_size,
            n_mels=config.mel_bins,
            fmin=config.fmin,
            fmax=config.fmax,
            ref=ref,
            amin=amin,
            top_db=top_db,
            freeze_parameters=True,
        )
        # Spec augmenter
        self.spec_augmenter = SpecAugmentation(
            time_drop_width=64,
            time_stripes_num=2,
            freq_drop_width=8,
            freq_stripes_num=2,
        )  # 2 2
        self.bn0 = nn.BatchNorm2d(self.config.mel_bins)

        # split the spectrogram into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=self.spec_size,
            patch_size=self.patch_size,
            in_chans=self.in_chans,
            embed_dim=self.embed_dim,
            norm_layer=self.norm_layer,
            patch_stride=patch_stride,
            enable_fusion=self.enable_fusion,
            fusion_type=self.fusion_type,
        )

        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.grid_size
        self.patches_resolution = patches_resolution

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = nn.Parameter(
                torch.zeros(1, num_patches, self.embed_dim)
            )
            trunc_normal_(self.absolute_pos_embed, std=0.02)

        self.pos_drop = nn.Dropout(p=self.drop_rate)

        # stochastic depth
        dpr = [
            x.item() for x in torch.linspace(0, self.drop_path_rate, sum(self.depths))
        ]  # stochastic depth decay rule

        # build layers
        self.layers = nn.ModuleList()
        for i_layer in range(self.num_layers):
            layer = BasicLayer(
                dim=int(self.embed_dim * 2**i_layer),
                input_resolution=(
                    patches_resolution[0] // (2**i_layer),
                    patches_resolution[1] // (2**i_layer),
                ),
                depth=self.depths[i_layer],
                num_heads=self.num_heads[i_layer],
                window_size=self.window_size,
                mlp_ratio=self.mlp_ratio,
                qkv_bias=self.qkv_bias,
                qk_scale=self.qk_scale,
                drop=self.drop_rate,
                attn_drop=self.attn_drop_rate,
                drop_path=dpr[
                    sum(self.depths[:i_layer]) : sum(self.depths[: i_layer + 1])
                ],
                norm_layer=self.norm_layer,
                downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
                use_checkpoint=use_checkpoint,
                norm_before_mlp=self.norm_before_mlp,
            )
            self.layers.append(layer)

        self.norm = self.norm_layer(self.num_features)
        self.avgpool = nn.AdaptiveAvgPool1d(1)
        self.maxpool = nn.AdaptiveMaxPool1d(1)

        SF = (
            self.spec_size
            // (2 ** (len(self.depths) - 1))
            // self.patch_stride[0]
            // self.freq_ratio
        )
        self.tscam_conv = nn.Conv2d(
            in_channels=self.num_features,
            out_channels=self.num_classes,
            kernel_size=(SF, 3),
            padding=(0, 1),
        )
        self.head = nn.Linear(num_classes, num_classes)

        if (self.enable_fusion) and (
            self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"]
        ):
            self.mel_conv1d = nn.Sequential(
                nn.Conv1d(64, 64, kernel_size=5, stride=3, padding=2),
                nn.BatchNorm1d(64),
            )
            if self.fusion_type == "daf_1d":
                self.fusion_model = DAF()
            elif self.fusion_type == "aff_1d":
                self.fusion_model = AFF(channels=64, type="1D")
            elif self.fusion_type == "iaff_1d":
                self.fusion_model = iAFF(channels=64, type="1D")

        self.apply(self._init_weights)

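# --- Editor's illustration, not part of the original file: the SF arithmetic
# above, worked for the usual HTS-AT setup (assumption: mel_bins = 64).
spec_size, depths, patch_stride0, mel_bins = 256, [2, 2, 6, 2], 4, 64
freq_ratio = spec_size // mel_bins                                   # 4
SF = spec_size // (2 ** (len(depths) - 1)) // patch_stride0 // freq_ratio
print(freq_ratio, SF)  # 4 2 -> tscam_conv's (SF, 3) kernel spans the whole frequency axis
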
    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=0.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {"absolute_pos_embed"}

    @torch.jit.ignore
    def no_weight_decay_keywords(self):
        return {"relative_position_bias_table"}

    def forward_features(self, x, longer_idx=None):
        # A deprecated optimization for using a hierarchical output from different blocks

        frames_num = x.shape[2]
        x = self.patch_embed(x, longer_idx=longer_idx)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)
        for i, layer in enumerate(self.layers):
            x, attn = layer(x)
        # for x
        x = self.norm(x)
        B, N, C = x.shape
        SF = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[0]
        ST = frames_num // (2 ** (len(self.depths) - 1)) // self.patch_stride[1]
        x = x.permute(0, 2, 1).contiguous().reshape(B, C, SF, ST)
        B, C, F, T = x.shape
        # group 2D CNN
        c_freq_bin = F // self.freq_ratio
        x = x.reshape(B, C, F // c_freq_bin, c_freq_bin, T)
        x = x.permute(0, 1, 3, 2, 4).contiguous().reshape(B, C, c_freq_bin, -1)
        # get latent_output
        fine_grained_latent_output = torch.mean(x, dim=2)
        fine_grained_latent_output = interpolate(
            fine_grained_latent_output.permute(0, 2, 1).contiguous(),
            8 * self.patch_stride[1],
        )

        latent_output = self.avgpool(torch.flatten(x, 2))
        latent_output = torch.flatten(latent_output, 1)

        # display the attention map, if needed

        x = self.tscam_conv(x)
        x = torch.flatten(x, 2)  # B, C, T

        fpx = interpolate(
            torch.sigmoid(x).permute(0, 2, 1).contiguous(), 8 * self.patch_stride[1]
        )

        x = self.avgpool(x)
        x = torch.flatten(x, 1)

        output_dict = {
            "framewise_output": fpx,  # already sigmoided
            "clipwise_output": torch.sigmoid(x),
            "fine_grained_embedding": fine_grained_latent_output,
            "embedding": latent_output,
        }

        return output_dict

    def crop_wav(self, x, crop_size, spe_pos=None):
        time_steps = x.shape[2]
        tx = torch.zeros(x.shape[0], x.shape[1], crop_size, x.shape[3]).to(x.device)
        for i in range(len(x)):
            if spe_pos is None:
                crop_pos = random.randint(0, time_steps - crop_size - 1)
            else:
                crop_pos = spe_pos
            tx[i][0] = x[i, 0, crop_pos : crop_pos + crop_size, :]
        return tx

    # Reshape the waveform spectrogram to an image-like size so the pretrained Swin Transformer weights can be used
    def reshape_wav2img(self, x):
        B, C, T, F = x.shape
        target_T = int(self.spec_size * self.freq_ratio)
        target_F = self.spec_size // self.freq_ratio
        assert (
            T <= target_T and F <= target_F
        ), "the wav size should be less than or equal to the swin input size"
        # to avoid bicubic zero error
        if T < target_T:
            x = nn.functional.interpolate(
                x, (target_T, x.shape[3]), mode="bicubic", align_corners=True
            )
        if F < target_F:
            x = nn.functional.interpolate(
                x, (x.shape[2], target_F), mode="bicubic", align_corners=True
            )
        x = x.permute(0, 1, 3, 2).contiguous()
        x = x.reshape(
            x.shape[0],
            x.shape[1],
            x.shape[2],
            self.freq_ratio,
            x.shape[3] // self.freq_ratio,
        )
        # print(x.shape)
        x = x.permute(0, 1, 3, 2, 4).contiguous()
        x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3], x.shape[4])
        return x

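# --- Editor's illustration, not part of the original file: a shape
# walkthrough of reshape_wav2img (assumes spec_size = 256, freq_ratio = 4).
# A (B, 1, 1024, 64) log-mel batch is folded so that four 256-frame chunks
# stack along the frequency axis, giving the 256x256 "image" Swin expects.
import torch

B, C, T, F = 2, 1, 1024, 64            # already at target_T = 1024, target_F = 64
freq_ratio = 4
x = torch.randn(B, C, T, F)
x = x.permute(0, 1, 3, 2).contiguous()                 # B, C, F, T
x = x.reshape(B, C, F, freq_ratio, T // freq_ratio)    # split T into 4 chunks
x = x.permute(0, 1, 3, 2, 4).contiguous()              # B, C, 4, F, T/4
x = x.reshape(B, C, freq_ratio * F, T // freq_ratio)
print(x.shape)  # torch.Size([2, 1, 256, 256])
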
    # Repeat the waveform spectrogram to an image-like size so the pretrained Swin Transformer weights can be used
    def repeat_wat2img(self, x, cur_pos):
        B, C, T, F = x.shape
        target_T = int(self.spec_size * self.freq_ratio)
        target_F = self.spec_size // self.freq_ratio
        assert (
            T <= target_T and F <= target_F
        ), "the wav size should be less than or equal to the swin input size"
        # to avoid bicubic zero error
        if T < target_T:
            x = nn.functional.interpolate(
                x, (target_T, x.shape[3]), mode="bicubic", align_corners=True
            )
        if F < target_F:
            x = nn.functional.interpolate(
                x, (x.shape[2], target_F), mode="bicubic", align_corners=True
            )
        x = x.permute(0, 1, 3, 2).contiguous()  # B C F T
        x = x[:, :, :, cur_pos : cur_pos + self.spec_size]
        x = x.repeat(repeats=(1, 1, 4, 1))
        return x

    def forward(
        self, x: torch.Tensor, mixup_lambda=None, infer_mode=False, device=None
    ):  # out_feat_keys: List[str] = None):

        if self.enable_fusion and x["longer"].sum() == 0:
            # if no audio is longer than 10s, then randomly select one audio to be longer
            x["longer"][torch.randint(0, x["longer"].shape[0], (1,))] = True

        if not self.enable_fusion:
            x = x["waveform"].to(device=device, non_blocking=True)
            x = self.spectrogram_extractor(x)  # (batch_size, 1, time_steps, freq_bins)
            x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)
            x = x.transpose(1, 3)
            x = self.bn0(x)
            x = x.transpose(1, 3)
            if self.training:
                x = self.spec_augmenter(x)

            if self.training and mixup_lambda is not None:
                x = do_mixup(x, mixup_lambda)

            x = self.reshape_wav2img(x)
            output_dict = self.forward_features(x)
        else:
            longer_list = x["longer"].to(device=device, non_blocking=True)
            x = x["mel_fusion"].to(device=device, non_blocking=True)
            x = x.transpose(1, 3)
            x = self.bn0(x)
            x = x.transpose(1, 3)
            longer_list_idx = torch.where(longer_list)[0]
            if self.fusion_type in ["daf_1d", "aff_1d", "iaff_1d"]:
                new_x = x[:, 0:1, :, :].clone().contiguous()
                if len(longer_list_idx) > 0:
                    # local processing
                    fusion_x_local = x[longer_list_idx, 1:, :, :].clone().contiguous()
                    FB, FC, FT, FF = fusion_x_local.size()
                    fusion_x_local = fusion_x_local.view(FB * FC, FT, FF)
                    fusion_x_local = torch.permute(
                        fusion_x_local, (0, 2, 1)
                    ).contiguous()
                    fusion_x_local = self.mel_conv1d(fusion_x_local)
                    fusion_x_local = fusion_x_local.view(
                        FB, FC, FF, fusion_x_local.size(-1)
                    )
                    fusion_x_local = (
                        torch.permute(fusion_x_local, (0, 2, 1, 3))
                        .contiguous()
                        .flatten(2)
                    )
                    if fusion_x_local.size(-1) < FT:
                        fusion_x_local = torch.cat(
                            [
                                fusion_x_local,
                                torch.zeros(
                                    (FB, FF, FT - fusion_x_local.size(-1)),
                                    device=device,
                                ),
                            ],
                            dim=-1,
                        )
                    else:
                        fusion_x_local = fusion_x_local[:, :, :FT]
                    # 1D fusion
                    new_x = new_x.squeeze(1).permute((0, 2, 1)).contiguous()
                    new_x[longer_list_idx] = self.fusion_model(
                        new_x[longer_list_idx], fusion_x_local
                    )
                    x = new_x.permute((0, 2, 1)).contiguous()[:, None, :, :]
                else:
                    x = new_x

            elif self.fusion_type in ["daf_2d", "aff_2d", "iaff_2d", "channel_map"]:
                x = x  # no change

            if self.training:
                x = self.spec_augmenter(x)
            if self.training and mixup_lambda is not None:
                x = do_mixup(x, mixup_lambda)

            x = self.reshape_wav2img(x)
            output_dict = self.forward_features(x, longer_idx=longer_list_idx)

        # if infer_mode:
        #     # in infer mode, we need to handle audio inputs of different lengths
        #     frame_num = x.shape[2]
        #     target_T = int(self.spec_size * self.freq_ratio)
        #     repeat_ratio = math.floor(target_T / frame_num)
        #     x = x.repeat(repeats=(1, 1, repeat_ratio, 1))
        #     x = self.reshape_wav2img(x)
        #     output_dict = self.forward_features(x)
        # else:
        #     if x.shape[2] > self.freq_ratio * self.spec_size:
        #         if self.training:
        #             x = self.crop_wav(x, crop_size=self.freq_ratio * self.spec_size)
        #             x = self.reshape_wav2img(x)
        #             output_dict = self.forward_features(x)
        #         else:
        #             # Change: Hard code here
        #             overlap_size = (x.shape[2] - 1) // 4
        #             output_dicts = []
        #             crop_size = (x.shape[2] - 1) // 2
        #             for cur_pos in range(0, x.shape[2] - crop_size - 1, overlap_size):
        #                 tx = self.crop_wav(x, crop_size=crop_size, spe_pos=cur_pos)
        #                 tx = self.reshape_wav2img(tx)
        #                 output_dicts.append(self.forward_features(tx))
        #             clipwise_output = torch.zeros_like(output_dicts[0]["clipwise_output"]).float().to(x.device)
        #             framewise_output = torch.zeros_like(output_dicts[0]["framewise_output"]).float().to(x.device)
        #             for d in output_dicts:
        #                 clipwise_output += d["clipwise_output"]
        #                 framewise_output += d["framewise_output"]
        #             clipwise_output = clipwise_output / len(output_dicts)
        #             framewise_output = framewise_output / len(output_dicts)
        #             output_dict = {
        #                 'framewise_output': framewise_output,
        #                 'clipwise_output': clipwise_output
        #             }
        #     else:  # this branch is typically used, and is the easiest one
        #         x = self.reshape_wav2img(x)
        #         output_dict = self.forward_features(x)
        # x = self.head(x)

        # We process the data in the dataloader, so here we only consider input_T < fixed_T

        return output_dict

def create_htsat_model(audio_cfg, enable_fusion=False, fusion_type="None"):
    try:
        assert audio_cfg.model_name in [
            "tiny",
            "base",
            "large",
        ], "model name for HTS-AT is wrong!"
        if audio_cfg.model_name == "tiny":
            model = HTSAT_Swin_Transformer(
                spec_size=256,
                patch_size=4,
                patch_stride=(4, 4),
                num_classes=audio_cfg.class_num,
                embed_dim=96,
                depths=[2, 2, 6, 2],
                num_heads=[4, 8, 16, 32],
                window_size=8,
                config=audio_cfg,
                enable_fusion=enable_fusion,
                fusion_type=fusion_type,
            )
        elif audio_cfg.model_name == "base":
            model = HTSAT_Swin_Transformer(
                spec_size=256,
                patch_size=4,
                patch_stride=(4, 4),
                num_classes=audio_cfg.class_num,
                embed_dim=128,
                depths=[2, 2, 12, 2],
                num_heads=[4, 8, 16, 32],
                window_size=8,
                config=audio_cfg,
                enable_fusion=enable_fusion,
                fusion_type=fusion_type,
            )
        elif audio_cfg.model_name == "large":
            model = HTSAT_Swin_Transformer(
                spec_size=256,
                patch_size=4,
                patch_stride=(4, 4),
                num_classes=audio_cfg.class_num,
                embed_dim=256,
                depths=[2, 2, 12, 2],
                num_heads=[4, 8, 16, 32],
                window_size=8,
                config=audio_cfg,
                enable_fusion=enable_fusion,
                fusion_type=fusion_type,
            )

        return model
    except Exception as e:
        # was a bare `except:`; chaining the exception keeps the original traceback
        raise RuntimeError(
            f"Import Model for {audio_cfg.model_name} not found, or the audio cfg parameters are not enough."
        ) from e
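# --- Editor's illustration, not part of the original file: a hypothetical
# usage sketch. Every audio_cfg field below mirrors an attribute the code
# above actually reads (not a documented schema), and it assumes the module's
# dependencies (torchlibrosa extractors, PatchEmbed, etc.) are importable.
from types import SimpleNamespace

audio_cfg = SimpleNamespace(
    model_name="tiny", class_num=527, mel_bins=64,
    window_size=1024, hop_size=480, sample_rate=48000, fmin=50, fmax=14000,
)
model = create_htsat_model(audio_cfg, enable_fusion=False)
print(sum(p.numel() for p in model.parameters()))  # rough parameter-count check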
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/losses/__init__.py
DELETED
@@ -1 +0,0 @@
from .stft_loss import *  # NOQA
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/diffusionmodules/model.py
DELETED
@@ -1,835 +0,0 @@
# pytorch_diffusion + derived encoder decoder
import math
import torch
import torch.nn as nn
import numpy as np
from einops import rearrange

from ldm.util import instantiate_from_config
from ldm.modules.attention import LinearAttention


def get_timestep_embedding(timesteps, embedding_dim):
    """
    This matches the implementation in Denoising Diffusion Probabilistic Models:
    From Fairseq.
    Build sinusoidal embeddings.
    This matches the implementation in tensor2tensor, but differs slightly
    from the description in Section 3.5 of "Attention Is All You Need".
    """
    assert len(timesteps.shape) == 1

    half_dim = embedding_dim // 2
    emb = math.log(10000) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)
    emb = emb.to(device=timesteps.device)
    emb = timesteps.float()[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # zero pad
        emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))
    return emb

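# --- Editor's illustration, not part of the original file: each timestep maps
# to a sin/cos vector; the math follows the function above exactly.
import torch

emb = get_timestep_embedding(torch.tensor([0, 1, 10]), embedding_dim=8)
print(emb.shape)  # torch.Size([3, 8]) -- first 4 dims sin, last 4 cos
print(emb[0])     # t = 0 -> sin terms are 0, cos terms are 1
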
def nonlinearity(x):
    # swish
    return x * torch.sigmoid(x)


def Normalize(in_channels, num_groups=32):
    return torch.nn.GroupNorm(
        num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True
    )


class Upsample(nn.Module):
    def __init__(self, in_channels, with_conv):
        super().__init__()
        self.with_conv = with_conv
        if self.with_conv:
            self.conv = torch.nn.Conv2d(
                in_channels, in_channels, kernel_size=3, stride=1, padding=1
            )

    def forward(self, x):
        x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
        if self.with_conv:
            x = self.conv(x)
        return x


class Downsample(nn.Module):
    def __init__(self, in_channels, with_conv):
        super().__init__()
        self.with_conv = with_conv
        if self.with_conv:
            # no asymmetric padding in torch conv, must do it ourselves
            self.conv = torch.nn.Conv2d(
                in_channels, in_channels, kernel_size=3, stride=2, padding=0
            )

    def forward(self, x):
        if self.with_conv:
            pad = (0, 1, 0, 1)
            x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
            x = self.conv(x)
        else:
            x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
        return x

class ResnetBlock(nn.Module):
    def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
                 dropout, temb_channels=512):
        super().__init__()
        self.in_channels = in_channels
        out_channels = in_channels if out_channels is None else out_channels
        self.out_channels = out_channels
        self.use_conv_shortcut = conv_shortcut

        self.norm1 = Normalize(in_channels)
        self.conv1 = torch.nn.Conv2d(in_channels, out_channels,
                                     kernel_size=3, stride=1, padding=1)
        if temb_channels > 0:
            self.temb_proj = torch.nn.Linear(temb_channels, out_channels)
        self.norm2 = Normalize(out_channels)
        self.dropout = torch.nn.Dropout(dropout)
        self.conv2 = torch.nn.Conv2d(out_channels, out_channels,
                                     kernel_size=3, stride=1, padding=1)
        if self.in_channels != self.out_channels:
            if self.use_conv_shortcut:
                self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels,
                                                     kernel_size=3, stride=1, padding=1)
            else:
                self.nin_shortcut = torch.nn.Conv2d(in_channels, out_channels,
                                                    kernel_size=1, stride=1, padding=0)

    def forward(self, x, temb):
        h = x
        h = self.norm1(h)
        h = nonlinearity(h)
        h = self.conv1(h)

        if temb is not None:
            h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None]

        h = self.norm2(h)
        h = nonlinearity(h)
        h = self.dropout(h)
        h = self.conv2(h)

        if self.in_channels != self.out_channels:
            if self.use_conv_shortcut:
                x = self.conv_shortcut(x)
            else:
                x = self.nin_shortcut(x)

        return x + h


class LinAttnBlock(LinearAttention):
    """to match AttnBlock usage"""
    def __init__(self, in_channels):
        super().__init__(dim=in_channels, heads=1, dim_head=in_channels)


class AttnBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.in_channels = in_channels

        self.norm = Normalize(in_channels)
        self.q = torch.nn.Conv2d(in_channels, in_channels,
                                 kernel_size=1, stride=1, padding=0)
        self.k = torch.nn.Conv2d(in_channels, in_channels,
                                 kernel_size=1, stride=1, padding=0)
        self.v = torch.nn.Conv2d(in_channels, in_channels,
                                 kernel_size=1, stride=1, padding=0)
        self.proj_out = torch.nn.Conv2d(in_channels, in_channels,
                                        kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        h_ = x
        h_ = self.norm(h_)
        q = self.q(h_)
        k = self.k(h_)
        v = self.v(h_)

        # compute attention
        b, c, h, w = q.shape
        q = q.reshape(b, c, h * w)
        q = q.permute(0, 2, 1)  # b,hw,c
        k = k.reshape(b, c, h * w)  # b,c,hw
        w_ = torch.bmm(q, k)  # b,hw,hw    w[b,i,j] = sum_c q[b,i,c] k[b,c,j]
        w_ = w_ * (int(c) ** (-0.5))
        w_ = torch.nn.functional.softmax(w_, dim=2)

        # attend to values
        v = v.reshape(b, c, h * w)
        w_ = w_.permute(0, 2, 1)  # b,hw,hw (first hw of k, second of q)
        h_ = torch.bmm(v, w_)  # b,c,hw    h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
        h_ = h_.reshape(b, c, h, w)

        h_ = self.proj_out(h_)

        return x + h_


def make_attn(in_channels, attn_type="vanilla"):
    assert attn_type in ["vanilla", "linear", "none"], f"attn_type {attn_type} unknown"
    print(f"making attention of type '{attn_type}' with {in_channels} in_channels")
    if attn_type == "vanilla":
        return AttnBlock(in_channels)
    elif attn_type == "none":
        return nn.Identity(in_channels)
    else:
        return LinAttnBlock(in_channels)

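# --- Editor's illustration, not part of the original file: the vanilla
# attention block above is residual spatial self-attention, so the output
# shape equals the input shape. Assumes AttnBlock (defined above) is in scope.
import torch

attn = AttnBlock(in_channels=64)
feat = torch.randn(1, 64, 16, 16)
print(attn(feat).shape)  # torch.Size([1, 64, 16, 16])
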
class Model(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"):
        super().__init__()
        if use_linear_attn:
            attn_type = "linear"
        self.ch = ch
        self.temb_ch = self.ch * 4
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels

        self.use_timestep = use_timestep
        if self.use_timestep:
            # timestep embedding
            self.temb = nn.Module()
            self.temb.dense = nn.ModuleList([
                torch.nn.Linear(self.ch, self.temb_ch),
                torch.nn.Linear(self.temb_ch, self.temb_ch),
            ])

        # downsampling
        self.conv_in = torch.nn.Conv2d(in_channels, self.ch,
                                       kernel_size=3, stride=1, padding=1)

        curr_res = resolution
        in_ch_mult = (1,) + tuple(ch_mult)
        self.down = nn.ModuleList()
        for i_level in range(self.num_resolutions):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_in = ch * in_ch_mult[i_level]
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks):
                block.append(ResnetBlock(in_channels=block_in,
                                         out_channels=block_out,
                                         temb_channels=self.temb_ch,
                                         dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            down = nn.Module()
            down.block = block
            down.attn = attn
            if i_level != self.num_resolutions - 1:
                down.downsample = Downsample(block_in, resamp_with_conv)
                curr_res = curr_res // 2
            self.down.append(down)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)

        # upsampling
        self.up = nn.ModuleList()
        for i_level in reversed(range(self.num_resolutions)):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_out = ch * ch_mult[i_level]
            skip_in = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks + 1):
                if i_block == self.num_res_blocks:
                    skip_in = ch * in_ch_mult[i_level]
                block.append(ResnetBlock(in_channels=block_in + skip_in,
                                         out_channels=block_out,
                                         temb_channels=self.temb_ch,
                                         dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            up = nn.Module()
            up.block = block
            up.attn = attn
            if i_level != 0:
                up.upsample = Upsample(block_in, resamp_with_conv)
                curr_res = curr_res * 2
            self.up.insert(0, up)  # prepend to get consistent order

        # end
        self.norm_out = Normalize(block_in)
        self.conv_out = torch.nn.Conv2d(block_in, out_ch,
                                        kernel_size=3, stride=1, padding=1)

    def forward(self, x, t=None, context=None):
        # assert x.shape[2] == x.shape[3] == self.resolution
        if context is not None:
            # assume aligned context, cat along channel axis
            x = torch.cat((x, context), dim=1)
        if self.use_timestep:
            # timestep embedding
            assert t is not None
            temb = get_timestep_embedding(t, self.ch)
            temb = self.temb.dense[0](temb)
            temb = nonlinearity(temb)
            temb = self.temb.dense[1](temb)
        else:
            temb = None

        # downsampling
        hs = [self.conv_in(x)]
        for i_level in range(self.num_resolutions):
            for i_block in range(self.num_res_blocks):
                h = self.down[i_level].block[i_block](hs[-1], temb)
                if len(self.down[i_level].attn) > 0:
                    h = self.down[i_level].attn[i_block](h)
                hs.append(h)
            if i_level != self.num_resolutions - 1:
                hs.append(self.down[i_level].downsample(hs[-1]))

        # middle
        h = hs[-1]
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # upsampling
        for i_level in reversed(range(self.num_resolutions)):
            for i_block in range(self.num_res_blocks + 1):
                h = self.up[i_level].block[i_block](
                    torch.cat([h, hs.pop()], dim=1), temb)
                if len(self.up[i_level].attn) > 0:
                    h = self.up[i_level].attn[i_block](h)
            if i_level != 0:
                h = self.up[i_level].upsample(h)

        # end
        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        return h

    def get_last_layer(self):
        return self.conv_out.weight

class Encoder(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, z_channels, double_z=True, use_linear_attn=False,
                 attn_type="vanilla", **ignore_kwargs):
        super().__init__()
        if use_linear_attn:
            attn_type = "linear"
        self.ch = ch
        self.temb_ch = 0
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels

        # downsampling
        self.conv_in = torch.nn.Conv2d(in_channels, self.ch,
                                       kernel_size=3, stride=1, padding=1)

        curr_res = resolution
        in_ch_mult = (1,) + tuple(ch_mult)
        self.in_ch_mult = in_ch_mult
        self.down = nn.ModuleList()
        for i_level in range(self.num_resolutions):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_in = ch * in_ch_mult[i_level]
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks):
                block.append(ResnetBlock(in_channels=block_in,
                                         out_channels=block_out,
                                         temb_channels=self.temb_ch,
                                         dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))  # vanilla attention
            down = nn.Module()
            down.block = block
            down.attn = attn
            if i_level != self.num_resolutions - 1:
                down.downsample = Downsample(block_in, resamp_with_conv)
                curr_res = curr_res // 2
            self.down.append(down)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)

        # end
        self.norm_out = Normalize(block_in)  # GroupNorm
        self.conv_out = torch.nn.Conv2d(block_in,
                                        2 * z_channels if double_z else z_channels,
                                        kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # timestep embedding
        temb = None

        # downsampling
        hs = [self.conv_in(x)]
        for i_level in range(self.num_resolutions):
            for i_block in range(self.num_res_blocks):
                h = self.down[i_level].block[i_block](hs[-1], temb)
                if len(self.down[i_level].attn) > 0:
                    h = self.down[i_level].attn[i_block](h)
                hs.append(h)
            if i_level != self.num_resolutions - 1:
                hs.append(self.down[i_level].downsample(hs[-1]))

        # middle
        h = hs[-1]
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # end
        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        return h

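# --- Editor's illustration, not part of the original file: with a ch_mult of
# length 4 the encoder downsamples by 2**(4-1) = 8, and double_z doubles
# z_channels for the posterior mean/logvar. Assumes the module's imports
# (ldm.*) resolve so the Encoder class above can be instantiated.
import torch

enc = Encoder(ch=32, out_ch=3, ch_mult=(1, 2, 4, 4), num_res_blocks=1,
              attn_resolutions=[], in_channels=3, resolution=64,
              z_channels=4, double_z=True)
print(enc(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 8, 8, 8])
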
class Decoder(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, z_channels, give_pre_end=False, tanh_out=False,
                 use_linear_attn=False, attn_type="vanilla", **ignorekwargs):
        super().__init__()
        if use_linear_attn:
            attn_type = "linear"
        self.ch = ch
        self.temb_ch = 0
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels
        self.give_pre_end = give_pre_end
        self.tanh_out = tanh_out

        # compute in_ch_mult, block_in and curr_res at lowest res
        in_ch_mult = (1,) + tuple(ch_mult)
        block_in = ch * ch_mult[self.num_resolutions - 1]
        curr_res = resolution // 2 ** (self.num_resolutions - 1)
        self.z_shape = (1, z_channels, curr_res, curr_res)
        print("Working with z of shape {} = {} dimensions.".format(
            self.z_shape, np.prod(self.z_shape)))

        # z to block_in
        self.conv_in = torch.nn.Conv2d(z_channels, block_in,
                                       kernel_size=3, stride=1, padding=1)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in,
                                       out_channels=block_in,
                                       temb_channels=self.temb_ch,
                                       dropout=dropout)

        # upsampling
        self.up = nn.ModuleList()
        for i_level in reversed(range(self.num_resolutions)):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks + 1):
                block.append(ResnetBlock(in_channels=block_in,
                                         out_channels=block_out,
                                         temb_channels=self.temb_ch,
                                         dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            up = nn.Module()
            up.block = block
            up.attn = attn
            if i_level != 0:
                up.upsample = Upsample(block_in, resamp_with_conv)
                curr_res = curr_res * 2
            self.up.insert(0, up)  # prepend to get consistent order

        # end
        self.norm_out = Normalize(block_in)
        self.conv_out = torch.nn.Conv2d(block_in, out_ch,
                                        kernel_size=3, stride=1, padding=1)

    def forward(self, z):
        # assert z.shape[1:] == self.z_shape[1:]
        self.last_z_shape = z.shape

        # timestep embedding
        temb = None

        # z to block_in
        h = self.conv_in(z)

        # middle
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # upsampling
        for i_level in reversed(range(self.num_resolutions)):
            for i_block in range(self.num_res_blocks + 1):
                h = self.up[i_level].block[i_block](h, temb)
                if len(self.up[i_level].attn) > 0:
                    h = self.up[i_level].attn[i_block](h)
            if i_level != 0:
                h = self.up[i_level].upsample(h)

        # end
        if self.give_pre_end:
            return h

        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        if self.tanh_out:
            h = torch.tanh(h)
        return h

class SimpleDecoder(nn.Module):
|
572 |
-
def __init__(self, in_channels, out_channels, *args, **kwargs):
|
573 |
-
super().__init__()
|
574 |
-
self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),
|
575 |
-
ResnetBlock(in_channels=in_channels,
|
576 |
-
out_channels=2 * in_channels,
|
577 |
-
temb_channels=0, dropout=0.0),
|
578 |
-
ResnetBlock(in_channels=2 * in_channels,
|
579 |
-
out_channels=4 * in_channels,
|
580 |
-
temb_channels=0, dropout=0.0),
|
581 |
-
ResnetBlock(in_channels=4 * in_channels,
|
582 |
-
out_channels=2 * in_channels,
|
583 |
-
temb_channels=0, dropout=0.0),
|
584 |
-
nn.Conv2d(2*in_channels, in_channels, 1),
|
585 |
-
Upsample(in_channels, with_conv=True)])
|
586 |
-
# end
|
587 |
-
self.norm_out = Normalize(in_channels)
|
588 |
-
self.conv_out = torch.nn.Conv2d(in_channels,
|
589 |
-
out_channels,
|
590 |
-
kernel_size=3,
|
591 |
-
stride=1,
|
592 |
-
padding=1)
|
593 |
-
|
594 |
-
def forward(self, x):
|
595 |
-
for i, layer in enumerate(self.model):
|
596 |
-
if i in [1,2,3]:
|
597 |
-
x = layer(x, None)
|
598 |
-
else:
|
599 |
-
x = layer(x)
|
600 |
-
|
601 |
-
h = self.norm_out(x)
|
602 |
-
h = nonlinearity(h)
|
603 |
-
x = self.conv_out(h)
|
604 |
-
return x
|
605 |
-
|
606 |
-
|
607 |
-
class UpsampleDecoder(nn.Module):
|
608 |
-
def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,
|
609 |
-
ch_mult=(2,2), dropout=0.0):
|
610 |
-
super().__init__()
|
611 |
-
# upsampling
|
612 |
-
self.temb_ch = 0
|
613 |
-
self.num_resolutions = len(ch_mult)
|
614 |
-
self.num_res_blocks = num_res_blocks
|
615 |
-
block_in = in_channels
|
616 |
-
curr_res = resolution // 2 ** (self.num_resolutions - 1)
|
617 |
-
self.res_blocks = nn.ModuleList()
|
618 |
-
self.upsample_blocks = nn.ModuleList()
|
619 |
-
for i_level in range(self.num_resolutions):
|
620 |
-
res_block = []
|
621 |
-
block_out = ch * ch_mult[i_level]
|
622 |
-
for i_block in range(self.num_res_blocks + 1):
|
623 |
-
res_block.append(ResnetBlock(in_channels=block_in,
|
624 |
-
out_channels=block_out,
|
625 |
-
temb_channels=self.temb_ch,
|
626 |
-
dropout=dropout))
|
627 |
-
block_in = block_out
|
628 |
-
self.res_blocks.append(nn.ModuleList(res_block))
|
629 |
-
if i_level != self.num_resolutions - 1:
|
630 |
-
self.upsample_blocks.append(Upsample(block_in, True))
|
631 |
-
curr_res = curr_res * 2
|
632 |
-
|
633 |
-
# end
|
634 |
-
self.norm_out = Normalize(block_in)
|
635 |
-
self.conv_out = torch.nn.Conv2d(block_in,
|
636 |
-
out_channels,
|
637 |
-
kernel_size=3,
|
638 |
-
stride=1,
|
639 |
-
padding=1)
|
640 |
-
|
641 |
-
def forward(self, x):
|
642 |
-
# upsampling
|
643 |
-
h = x
|
644 |
-
for k, i_level in enumerate(range(self.num_resolutions)):
|
645 |
-
for i_block in range(self.num_res_blocks + 1):
|
646 |
-
h = self.res_blocks[i_level][i_block](h, None)
|
647 |
-
if i_level != self.num_resolutions - 1:
|
648 |
-
h = self.upsample_blocks[k](h)
|
649 |
-
h = self.norm_out(h)
|
650 |
-
h = nonlinearity(h)
|
651 |
-
h = self.conv_out(h)
|
652 |
-
return h
|
653 |
-
|
654 |
-
|
655 |
-
class LatentRescaler(nn.Module):
|
656 |
-
def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):
|
657 |
-
super().__init__()
|
658 |
-
# residual block, interpolate, residual block
|
659 |
-
self.factor = factor
|
660 |
-
self.conv_in = nn.Conv2d(in_channels,
|
661 |
-
mid_channels,
|
662 |
-
kernel_size=3,
|
663 |
-
stride=1,
|
664 |
-
padding=1)
|
665 |
-
self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
|
666 |
-
out_channels=mid_channels,
|
667 |
-
temb_channels=0,
|
668 |
-
dropout=0.0) for _ in range(depth)])
|
669 |
-
self.attn = AttnBlock(mid_channels)
|
670 |
-
self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
|
671 |
-
out_channels=mid_channels,
|
672 |
-
temb_channels=0,
|
673 |
-
dropout=0.0) for _ in range(depth)])
|
674 |
-
|
675 |
-
self.conv_out = nn.Conv2d(mid_channels,
|
676 |
-
out_channels,
|
677 |
-
kernel_size=1,
|
678 |
-
)
|
679 |
-
|
680 |
-
def forward(self, x):
|
681 |
-
x = self.conv_in(x)
|
682 |
-
for block in self.res_block1:
|
683 |
-
x = block(x, None)
|
684 |
-
x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor))))
|
685 |
-
x = self.attn(x)
|
686 |
-
for block in self.res_block2:
|
687 |
-
x = block(x, None)
|
688 |
-
x = self.conv_out(x)
|
689 |
-
return x
|
690 |
-
|
691 |
-
|
692 |
-
class MergedRescaleEncoder(nn.Module):
|
693 |
-
def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,
|
694 |
-
attn_resolutions, dropout=0.0, resamp_with_conv=True,
|
695 |
-
ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1):
|
696 |
-
super().__init__()
|
697 |
-
intermediate_chn = ch * ch_mult[-1]
|
698 |
-
self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,
|
699 |
-
z_channels=intermediate_chn, double_z=False, resolution=resolution,
|
700 |
-
attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,
|
701 |
-
out_ch=None)
|
702 |
-
self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,
|
703 |
-
mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)
|
704 |
-
|
705 |
-
def forward(self, x):
|
706 |
-
x = self.encoder(x)
|
707 |
-
x = self.rescaler(x)
|
708 |
-
return x
|
709 |
-
|
710 |
-
|
711 |
-
class MergedRescaleDecoder(nn.Module):
|
712 |
-
def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8),
|
713 |
-
dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):
|
714 |
-
super().__init__()
|
715 |
-
tmp_chn = z_channels*ch_mult[-1]
|
716 |
-
self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,
|
717 |
-
resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,
|
718 |
-
ch_mult=ch_mult, resolution=resolution, ch=ch)
|
719 |
-
self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,
|
720 |
-
out_channels=tmp_chn, depth=rescale_module_depth)
|
721 |
-
|
722 |
-
def forward(self, x):
|
723 |
-
x = self.rescaler(x)
|
724 |
-
x = self.decoder(x)
|
725 |
-
return x
|
726 |
-
|
727 |
-
|
728 |
-
class Upsampler(nn.Module):
|
729 |
-
def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
|
730 |
-
super().__init__()
|
731 |
-
assert out_size >= in_size
|
732 |
-
num_blocks = int(np.log2(out_size//in_size))+1
|
733 |
-
factor_up = 1.+ (out_size % in_size)
|
734 |
-
print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
|
735 |
-
self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels,
|
736 |
-
out_channels=in_channels)
|
737 |
-
self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
|
738 |
-
attn_resolutions=[], in_channels=None, ch=in_channels,
|
739 |
-
ch_mult=[ch_mult for _ in range(num_blocks)])
|
740 |
-
|
741 |
-
def forward(self, x):
|
742 |
-
x = self.rescaler(x)
|
743 |
-
x = self.decoder(x)
|
744 |
-
return x
|
745 |
-
|
746 |
-
|
747 |
-
class Resize(nn.Module):
|
748 |
-
def __init__(self, in_channels=None, learned=False, mode="bilinear"):
|
749 |
-
super().__init__()
|
750 |
-
self.with_conv = learned
|
751 |
-
self.mode = mode
|
752 |
-
if self.with_conv:
|
753 |
-
print(f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode")
|
754 |
-
raise NotImplementedError()
|
755 |
-
assert in_channels is not None
|
756 |
-
# no asymmetric padding in torch conv, must do it ourselves
|
757 |
-
self.conv = torch.nn.Conv2d(in_channels,
|
758 |
-
in_channels,
|
759 |
-
kernel_size=4,
|
760 |
-
stride=2,
|
761 |
-
padding=1)
|
762 |
-
|
763 |
-
def forward(self, x, scale_factor=1.0):
|
764 |
-
if scale_factor==1.0:
|
765 |
-
return x
|
766 |
-
else:
|
767 |
-
x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
|
768 |
-
return x
|
769 |
-
|
770 |
-
class FirstStagePostProcessor(nn.Module):
|
771 |
-
|
772 |
-
def __init__(self, ch_mult:list, in_channels,
|
773 |
-
pretrained_model:nn.Module=None,
|
774 |
-
reshape=False,
|
775 |
-
n_channels=None,
|
776 |
-
dropout=0.,
|
777 |
-
pretrained_config=None):
|
778 |
-
super().__init__()
|
779 |
-
if pretrained_config is None:
|
780 |
-
assert pretrained_model is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
|
781 |
-
self.pretrained_model = pretrained_model
|
782 |
-
else:
|
783 |
-
assert pretrained_config is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
|
784 |
-
self.instantiate_pretrained(pretrained_config)
|
785 |
-
|
786 |
-
self.do_reshape = reshape
|
787 |
-
|
788 |
-
if n_channels is None:
|
789 |
-
n_channels = self.pretrained_model.encoder.ch
|
790 |
-
|
791 |
-
self.proj_norm = Normalize(in_channels,num_groups=in_channels//2)
|
792 |
-
self.proj = nn.Conv2d(in_channels,n_channels,kernel_size=3,
|
793 |
-
stride=1,padding=1)
|
794 |
-
|
795 |
-
blocks = []
|
796 |
-
downs = []
|
797 |
-
ch_in = n_channels
|
798 |
-
for m in ch_mult:
|
799 |
-
blocks.append(ResnetBlock(in_channels=ch_in,out_channels=m*n_channels,dropout=dropout))
|
800 |
-
ch_in = m * n_channels
|
801 |
-
downs.append(Downsample(ch_in, with_conv=False))
|
802 |
-
|
803 |
-
self.model = nn.ModuleList(blocks)
|
804 |
-
self.downsampler = nn.ModuleList(downs)
|
805 |
-
|
806 |
-
|
807 |
-
def instantiate_pretrained(self, config):
|
808 |
-
model = instantiate_from_config(config)
|
809 |
-
self.pretrained_model = model.eval()
|
810 |
-
# self.pretrained_model.train = False
|
811 |
-
for param in self.pretrained_model.parameters():
|
812 |
-
param.requires_grad = False
|
813 |
-
|
814 |
-
|
815 |
-
@torch.no_grad()
|
816 |
-
def encode_with_pretrained(self,x):
|
817 |
-
c = self.pretrained_model.encode(x)
|
818 |
-
if isinstance(c, DiagonalGaussianDistribution):
|
819 |
-
c = c.mode()
|
820 |
-
return c
|
821 |
-
|
822 |
-
def forward(self,x):
|
823 |
-
z_fs = self.encode_with_pretrained(x)
|
824 |
-
z = self.proj_norm(z_fs)
|
825 |
-
z = self.proj(z)
|
826 |
-
z = nonlinearity(z)
|
827 |
-
|
828 |
-
for submodel, downmodel in zip(self.model,self.downsampler):
|
829 |
-
z = submodel(z,temb=None)
|
830 |
-
z = downmodel(z)
|
831 |
-
|
832 |
-
if self.do_reshape:
|
833 |
-
z = rearrange(z,'b c h w -> b (h w) c')
|
834 |
-
return z
|
835 |
-
|
|
|
|
|
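For orientation, a minimal sketch of how a Decoder like the one deleted above is typically instantiated; the channel counts, multipliers, and latent shape here are illustrative assumptions, not values taken from this commit:

import torch

# Hypothetical 256x256 decoder over a 4-channel latent downsampled 4x (illustrative values).
dec = Decoder(ch=128, out_ch=3, ch_mult=(1, 2, 4), num_res_blocks=2,
              attn_resolutions=[], in_channels=3, resolution=256, z_channels=4)
z = torch.randn(1, 4, 64, 64)   # spatial size = resolution // 2**(len(ch_mult)-1)
img = dec(z)                    # decoded back to (1, 3, 256, 256)
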
spaces/ASJMO/freegpt/g4f/Provider/Providers/Mishalsgpt.py
DELETED
@@ -1,23 +0,0 @@
-import os, requests, uuid
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://mishalsgpt.vercel.app'
-model = ['gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-    headers = {
-        'Content-Type': 'application/json',
-    }
-    data = {
-        'model': model,
-        'temperature': 0.7,
-        'messages': messages
-    }
-    response = requests.post(url + '/api/openai/v1/chat/completions',
-                             headers=headers, json=data, stream=True)
-    yield response.json()['choices'][0]['message']['content']
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
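Note that despite supports_stream = True, the generator above parses the full JSON body once and yields a single chunk. A hedged sketch of how g4f-style providers are driven (the upstream endpoint may no longer respond):

messages = [{"role": "user", "content": "Hello"}]
for chunk in _create_completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    print(chunk)  # one chunk: the whole completion text
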
spaces/ASJMO/freegpt/g4f/Provider/Providers/Theb.py
DELETED
@@ -1,28 +0,0 @@
-import os
-import json
-import time
-import subprocess
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://theb.ai'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
-    path = os.path.dirname(os.path.realpath(__file__))
-    config = json.dumps({
-        'messages': messages,
-        'model': model}, separators=(',', ':'))
-
-    cmd = ['python3', f'{path}/helpers/theb.py', config]
-
-    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
-    for line in iter(p.stdout.readline, b''):
-        yield line.decode('utf-8')
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
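This provider hands the chat payload to a helper script as a compact JSON argv string, then streams the helper's stdout line by line. A small sketch of that serialization step (helpers/theb.py itself is not part of this diff):

import json

config = json.dumps({"messages": [{"role": "user", "content": "Hi"}],
                     "model": "gpt-3.5-turbo"}, separators=(",", ":"))
# The provider then runs: python3 <module_dir>/helpers/theb.py '<config>'
# and yields each decoded stdout line as a streamed chunk.
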
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_d-p6_syncbn_fast_8x16b-300e_coco.py
DELETED
@@ -1,21 +0,0 @@
-_base_ = './yolov7_w-p6_syncbn_fast_8x16b-300e_coco.py'
-
-model = dict(
-    backbone=dict(arch='D'),
-    neck=dict(
-        use_maxpool_in_downsample=True,
-        use_in_channels_in_downsample=True,
-        block_cfg=dict(
-            type='ELANBlock',
-            middle_ratio=0.4,
-            block_ratio=0.2,
-            num_blocks=6,
-            num_convs_in_block=1),
-        in_channels=[384, 768, 1152, 1536],
-        out_channels=[192, 384, 576, 768]),
-    bbox_head=dict(
-        head_module=dict(
-            in_channels=[192, 384, 576, 768],
-            main_out_channels=[384, 768, 1152, 1536],
-            aux_out_channels=[384, 768, 1152, 1536],
-        )))
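For context, mmyolo configs like this are loaded through MMEngine, which reads the _base_ file first and then applies the overrides above. A sketch, assuming an mmyolo environment and assuming the base yolov7_w-p6 config sets arch='W':

from mmengine.config import Config

cfg = Config.fromfile('yolov7_d-p6_syncbn_fast_8x16b-300e_coco.py')
print(cfg.model.backbone.arch)     # 'D', overriding the base config's arch
print(cfg.model.neck.in_channels)  # [384, 768, 1152, 1536]
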
spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/index.html
DELETED
@@ -1,40 +0,0 @@
-<!DOCTYPE HTML>
-<html>
-<title>Free Minecraft Account Generator</title>
-<link rel="icon" type="image/png" href="https://huggingface.co/spaces/AchyuthGamer/Free-Accounts-Generator/resolve/main/img/steam-chrome-logo.png">
-
-<!-- Mirrored from altsforyou.org/fortnite/ by HTTrack Website Copier/3.x [XR&CO'2014], Tue, 23 Jun 2020 17:59:11 GMT -->
-<meta name="description" content="fortnite, alt generator, fortnite free premium">
-<meta name="keywords" content="nordvpn, alt generator, nordvpn free premium">
-<meta http-equiv="cache-control" content="no-cache" />
-<meta http-equiv="Pragma" content="no-cache" />
-<meta http-equiv="Expires" content="-1" />
-<head>
-<meta name="viewport" content="width=device-width, initial-scale=1.0">
-<link rel="stylesheet" href="css/style.css" />
-<link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel='stylesheet' type='text/css'>
-<script type='text/javascript' src='js/d140ouchebag.js'></script>
-<script type="text/javascript">
-document.oncontextmenu =new Function("return false;")
-document.onselectstart =new Function("return false;")
-</script>
-<header>
-<span style="cursor: pointer;">Free Accounts Paradise</span>
-</header>
-<style>
-header { margin-top: 40px; position: absolute; float: left; font-size: 24px; font-weight: bold; }
-nav { margin-top: 40px; float: right; color: #FFF; font-size: 16px; letter-spacing: 1px; } nav ul { list-style: none; margin: 0; padding: 0; } nav li { display: inline; float:left; } nav li a { text-decoration: none; margin: 0px 10px 0px 10px; color: #FFF; } nav li a:hover { color: #191919; transition: 0.5s; }
-</style>
-<nav>
-<ul>
-<li><a href="../index.html">Steam</a></li>
-<li><a href="../fortnite/index.html">Fortnite</a></li>
-<li><a href="https://discord.gg/gZwP9gRWZN">Discord</a></li>
-</ul>
-</nav>
-<section>
-<h1>Minecraft Account Generator</h1>
-<FORM NAME="WordForm">
-<INPUT TYPE=TEXT NAME="WordBox" id="wordbox"><BR>
-<INPUT TYPE=BUTTON VALUE="Generate" onClick="PickRandomWord(document.WordForm);" id="button">
-</FORM>
spaces/Adapter/T2I-Adapter/ldm/models/diffusion/plms.py
DELETED
@@ -1,243 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like
-
-
-class PLMSSampler(object):
-    def __init__(self, model, schedule="linear", **kwargs):
-        super().__init__()
-        self.model = model
-        self.ddpm_num_timesteps = model.num_timesteps
-        self.schedule = schedule
-
-    def register_buffer(self, name, attr):
-        if type(attr) == torch.Tensor:
-            if attr.device != torch.device("cuda"):
-                attr = attr.to(torch.device("cuda"))
-        setattr(self, name, attr)
-
-    def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
-        if ddim_eta != 0:
-            raise ValueError('ddim_eta must be 0 for PLMS')
-        self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
-                                                  num_ddpm_timesteps=self.ddpm_num_timesteps, verbose=verbose)
-        alphas_cumprod = self.model.alphas_cumprod
-        assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
-        to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
-        self.register_buffer('betas', to_torch(self.model.betas))
-        self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
-        self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
-        # calculations for diffusion q(x_t | x_{t-1}) and others
-        self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
-        self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
-        self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
-        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
-        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
-        # ddim sampling parameters
-        ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
-                                                                                   ddim_timesteps=self.ddim_timesteps,
-                                                                                   eta=ddim_eta, verbose=verbose)
-        self.register_buffer('ddim_sigmas', ddim_sigmas)
-        self.register_buffer('ddim_alphas', ddim_alphas)
-        self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
-        self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
-        sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
-            (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
-                    1 - self.alphas_cumprod / self.alphas_cumprod_prev))
-        self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
-    @torch.no_grad()
-    def sample(self,
-               S,
-               batch_size,
-               shape,
-               conditioning=None,
-               callback=None,
-               normals_sequence=None,
-               img_callback=None,
-               quantize_x0=False,
-               eta=0.,
-               mask=None,
-               x0=None,
-               temperature=1.,
-               noise_dropout=0.,
-               score_corrector=None,
-               corrector_kwargs=None,
-               verbose=True,
-               x_T=None,
-               log_every_t=100,
-               unconditional_guidance_scale=1.,
-               unconditional_conditioning=None,
-               features_adapter=None,
-               cond_tau=0.4,
-               # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
-               **kwargs
-               ):
-        # print('*'*20,x_T)
-        # exit(0)
-        if conditioning is not None:
-            if isinstance(conditioning, dict):
-                cbs = conditioning[list(conditioning.keys())[0]].shape[0]
-                if cbs != batch_size:
-                    print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
-            else:
-                if conditioning.shape[0] != batch_size:
-                    print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
-        self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
-        C, H, W = shape
-        size = (batch_size, C, H, W)
-        print(f'Data shape for PLMS sampling is {size}')
-
-        samples, intermediates = self.plms_sampling(conditioning, size,
-                                                    callback=callback,
-                                                    img_callback=img_callback,
-                                                    quantize_denoised=quantize_x0,
-                                                    mask=mask, x0=x0,
-                                                    ddim_use_original_steps=False,
-                                                    noise_dropout=noise_dropout,
-                                                    temperature=temperature,
-                                                    score_corrector=score_corrector,
-                                                    corrector_kwargs=corrector_kwargs,
-                                                    x_T=x_T,
-                                                    log_every_t=log_every_t,
-                                                    unconditional_guidance_scale=unconditional_guidance_scale,
-                                                    unconditional_conditioning=unconditional_conditioning,
-                                                    features_adapter=features_adapter,
-                                                    cond_tau=cond_tau
-                                                    )
-        return samples, intermediates
-
-    @torch.no_grad()
-    def plms_sampling(self, cond, shape,
-                      x_T=None, ddim_use_original_steps=False,
-                      callback=None, timesteps=None, quantize_denoised=False,
-                      mask=None, x0=None, img_callback=None, log_every_t=100,
-                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
-                      unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None,
-                      cond_tau=0.4):
-        device = self.model.betas.device
-        b = shape[0]
-        if x_T is None:
-            img = torch.randn(shape, device=device)
-        else:
-            img = x_T
-        if timesteps is None:
-            timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
-        elif timesteps is not None and not ddim_use_original_steps:
-            subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
-            timesteps = self.ddim_timesteps[:subset_end]
-
-        intermediates = {'x_inter': [img], 'pred_x0': [img]}
-        time_range = list(reversed(range(0, timesteps))) if ddim_use_original_steps else np.flip(timesteps)
-        total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
-        print(f"Running PLMS Sampling with {total_steps} timesteps")
-
-        iterator = tqdm(time_range, desc='PLMS Sampler', total=total_steps)
-        old_eps = []
-
-        for i, step in enumerate(iterator):
-            index = total_steps - i - 1
-            ts = torch.full((b,), step, device=device, dtype=torch.long)
-            ts_next = torch.full((b,), time_range[min(i + 1, len(time_range) - 1)], device=device, dtype=torch.long)
-
-            if mask is not None:  # and index>=10:
-                assert x0 is not None
-                img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?
-                img = img_orig * mask + (1. - mask) * img
-
-            outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
-                                      quantize_denoised=quantize_denoised, temperature=temperature,
-                                      noise_dropout=noise_dropout, score_corrector=score_corrector,
-                                      corrector_kwargs=corrector_kwargs,
-                                      unconditional_guidance_scale=unconditional_guidance_scale,
-                                      unconditional_conditioning=unconditional_conditioning,
-                                      old_eps=old_eps, t_next=ts_next,
-                                      features_adapter=None if index < int(
-                                          (1 - cond_tau) * total_steps) else features_adapter)
-
-            img, pred_x0, e_t = outs
-            old_eps.append(e_t)
-            if len(old_eps) >= 4:
-                old_eps.pop(0)
-            if callback: callback(i)
-            if img_callback: img_callback(pred_x0, i)
-
-            if index % log_every_t == 0 or index == total_steps - 1:
-                intermediates['x_inter'].append(img)
-                intermediates['pred_x0'].append(pred_x0)
-
-        return img, intermediates
-
-    @torch.no_grad()
-    def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
-                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
-                      unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None,
-                      features_adapter=None):
-        b, *_, device = *x.shape, x.device
-
-        def get_model_output(x, t):
-            if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
-                e_t = self.model.apply_model(x, t, c, features_adapter=features_adapter)
-            else:
-                x_in = torch.cat([x] * 2)
-                t_in = torch.cat([t] * 2)
-                c_in = torch.cat([unconditional_conditioning, c])
-                e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in, features_adapter=features_adapter).chunk(2)
-                e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
-            if score_corrector is not None:
-                assert self.model.parameterization == "eps"
-                e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
-            return e_t
-
-        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
-        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
-        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
-        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
-
-        def get_x_prev_and_pred_x0(e_t, index):
-            # select parameters corresponding to the currently considered timestep
-            a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
-            a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
-            sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
-            sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device)
-
-            # current prediction for x_0
-            pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
-            if quantize_denoised:
-                pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
-            # direction pointing to x_t
-            dir_xt = (1. - a_prev - sigma_t ** 2).sqrt() * e_t
-            noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
-            if noise_dropout > 0.:
-                noise = torch.nn.functional.dropout(noise, p=noise_dropout)
-            x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
-            return x_prev, pred_x0
-
-        e_t = get_model_output(x, t)
-        if len(old_eps) == 0:
-            # Pseudo Improved Euler (2nd order)
-            x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index)
-            e_t_next = get_model_output(x_prev, t_next)
-            e_t_prime = (e_t + e_t_next) / 2
-        elif len(old_eps) == 1:
-            # 2nd order Pseudo Linear Multistep (Adams-Bashforth)
-            e_t_prime = (3 * e_t - old_eps[-1]) / 2
-        elif len(old_eps) == 2:
-            # 3rd order Pseudo Linear Multistep (Adams-Bashforth)
-            e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
-        elif len(old_eps) >= 3:
-            # 4th order Pseudo Linear Multistep (Adams-Bashforth)
-            e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
-
-        x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index)
-
-        return x_prev, pred_x0, e_t
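A minimal sketch of how this sampler is typically invoked; the model object, conditioning tensors, and latent shape are assumptions, not part of this commit:

# Illustrative: 50 PLMS steps over a 4x64x64 latent with classifier-free guidance.
sampler = PLMSSampler(ldm_model)          # any LatentDiffusion-style model
samples, intermediates = sampler.sample(
    S=50, batch_size=1, shape=(4, 64, 64),
    conditioning=cond,                    # e.g. a text-encoder embedding
    unconditional_guidance_scale=7.5,
    unconditional_conditioning=uncond)
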
spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/sde_team_2players/build_config.py
DELETED
@@ -1,21 +0,0 @@
-import yaml
-import json
-
-config_path = "partial_config.yaml"
-
-code_problem = json.load(open("code_problem.json", "r"))
-problem_string = "\n\n<problem>:\n" + code_problem["problem"]
-unit_tests = str(code_problem["unit_tests"])
-
-print(problem_string)
-print(unit_tests)
-
-task_config = yaml.safe_load(open(config_path))
-
-for agent_configs in task_config["agents"]:
-    if agent_configs["name"] != "code_tester":
-        agent_configs["role_description"] += problem_string
-task_config["environment"]["unit_tests"] = unit_tests
-
-with open("config.yaml", "w") as f:
-    yaml.safe_dump(task_config, f)
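The script expects a code_problem.json with "problem" and "unit_tests" fields; a hypothetical example of that input (field names taken from the script above, contents invented):

import json

problem = {"problem": "Write a function add(a, b) that returns a + b.",
           "unit_tests": ["assert add(1, 2) == 3", "assert add(-1, 1) == 0"]}
with open("code_problem.json", "w") as f:
    json.dump(problem, f)
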
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/expressionparser.js
DELETED
@@ -1,2 +0,0 @@
-import ExpressionParser from './math/expressionparser/ExpressionParser.js';
-export default ExpressionParser;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/orbit/Orbit.d.ts
DELETED
@@ -1,2 +0,0 @@
-import Base from '../base/Base';
-export default class Orbit extends Base { }
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ResolveChildrenWidth.js
DELETED
@@ -1,14 +0,0 @@
-var ResolveChildrenWidth = function (parentWidth) {
-    // Resolve width of sizer children
-    var child, childWidth;
-    for (var i in this.sizerChildren) {
-        child = this.sizerChildren[i];
-        if (child && child.isRexSizer && !child.ignoreLayout) {
-            childWidth = this.getExpandedChildWidth(child, parentWidth);
-            childWidth = child.resolveWidth(childWidth);
-            child.resolveChildrenWidth(childWidth);
-        }
-    }
-}
-
-export default ResolveChildrenWidth;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/methods/SetStateMethods.js
DELETED
@@ -1,81 +0,0 @@
-import GetPartialData from '../../../../plugins/utils/object/GetPartialData.js';
-import IsKeyValueEqual from '../../../../plugins/utils/object/IsKeyValueEqual.js';
-
-var ApplyStyle = function (gameObject, newStyle) {
-    if (!newStyle) {
-        return undefined;
-    }
-
-    var currentStyle = GetPartialData(gameObject, newStyle);
-    if (!IsKeyValueEqual(currentStyle, newStyle)) {
-        gameObject.modifyStyle(newStyle);
-        return currentStyle;
-    } else {
-        return undefined;
-    }
-}
-
-export default {
-    setActiveState(enable) {
-        if (enable === undefined) {
-            enable = true;
-        }
-
-        if (this.activeState === enable) {
-            return this;
-        }
-
-        this.activeState = enable;
-
-        if (enable) {
-            this.activeStyleSave = ApplyStyle(this, this.activeStyle);
-        } else {
-            ApplyStyle(this, this.activeStyleSave);
-            this.activeStyleSave = undefined;
-        }
-
-        return this;
-    },
-
-    setHoverState(enable) {
-        if (enable === undefined) {
-            enable = true;
-        }
-
-        if (this.hoverState === enable) {
-            return this;
-        }
-
-        this.hoverState = enable;
-
-        if (enable) {
-            this.hoverStyleSave = ApplyStyle(this, this.hoverStyle);
-        } else {
-            ApplyStyle(this, this.hoverStyleSave);
-            this.hoverStyleSave = undefined;
-        }
-
-        return this;
-    },
-
-    setDisableState(enable) {
-        if (enable === undefined) {
-            enable = true;
-        }
-
-        if (this.disableState === enable) {
-            return this;
-        }
-
-        this.disableState = enable;
-
-        if (enable) {
-            this.disableStyleSave = ApplyStyle(this, this.disableStyle);
-        } else {
-            ApplyStyle(this, this.disableStyleSave);
-            this.disableStyleSave = undefined;
-        }
-
-        return this;
-    }
-}
spaces/AkitoP/umamusume_bert_vits2/text/chinese.py
DELETED
@@ -1,198 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {
-    line.split("\t")[0]: line.strip().split("\t")[1]
-    for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines()
-}
-
-import jieba.posseg as psg
-
-
-rep_map = {
-    ":": ",",
-    ";": ",",
-    ",": ",",
-    "。": ".",
-    "!": "!",
-    "?": "?",
-    "\n": ".",
-    "·": ",",
-    "、": ",",
-    "...": "…",
-    "$": ".",
-    "“": "'",
-    "”": "'",
-    "‘": "'",
-    "’": "'",
-    "(": "'",
-    ")": "'",
-    "(": "'",
-    ")": "'",
-    "《": "'",
-    "》": "'",
-    "【": "'",
-    "】": "'",
-    "[": "'",
-    "]": "'",
-    "—": "-",
-    "~": "-",
-    "~": "-",
-    "「": "'",
-    "」": "'",
-}
-
-tone_modifier = ToneSandhi()
-
-
-def replace_punctuation(text):
-    text = text.replace("嗯", "恩").replace("呣", "母")
-    pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys()))
-
-    replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
-    replaced_text = re.sub(
-        r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text
-    )
-
-    return replaced_text
-
-
-def g2p(text):
-    pattern = r"(?<=[{0}])\s*".format("".join(punctuation))
-    sentences = [i for i in re.split(pattern, text) if i.strip() != ""]
-    phones, tones, word2ph = _g2p(sentences)
-    assert sum(word2ph) == len(phones)
-    assert len(word2ph) == len(text)  # Sometimes it will crash; you can add a try-catch.
-    phones = ["_"] + phones + ["_"]
-    tones = [0] + tones + [0]
-    word2ph = [1] + word2ph + [1]
-    return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
-    initials = []
-    finals = []
-    orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS)
-    orig_finals = lazy_pinyin(
-        word, neutral_tone_with_five=True, style=Style.FINALS_TONE3
-    )
-    for c, v in zip(orig_initials, orig_finals):
-        initials.append(c)
-        finals.append(v)
-    return initials, finals
-
-
-def _g2p(segments):
-    phones_list = []
-    tones_list = []
-    word2ph = []
-    for seg in segments:
-        # Replace all English words in the sentence
-        seg = re.sub("[a-zA-Z]+", "", seg)
-        seg_cut = psg.lcut(seg)
-        initials = []
-        finals = []
-        seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
-        for word, pos in seg_cut:
-            if pos == "eng":
-                continue
-            sub_initials, sub_finals = _get_initials_finals(word)
-            sub_finals = tone_modifier.modified_tone(word, pos, sub_finals)
-            initials.append(sub_initials)
-            finals.append(sub_finals)
-
-            # assert len(sub_initials) == len(sub_finals) == len(word)
-        initials = sum(initials, [])
-        finals = sum(finals, [])
-        #
-        for c, v in zip(initials, finals):
-            raw_pinyin = c + v
-            # NOTE: post process for pypinyin outputs
-            # we discriminate i, ii and iii
-            if c == v:
-                assert c in punctuation
-                phone = [c]
-                tone = "0"
-                word2ph.append(1)
-            else:
-                v_without_tone = v[:-1]
-                tone = v[-1]
-
-                pinyin = c + v_without_tone
-                assert tone in "12345"
-
-                if c:
-                    # syllable with an initial (multi-character pinyin)
-                    v_rep_map = {
-                        "uei": "ui",
-                        "iou": "iu",
-                        "uen": "un",
-                    }
-                    if v_without_tone in v_rep_map.keys():
-                        pinyin = c + v_rep_map[v_without_tone]
-                else:
-                    # syllable without an initial (single-character pinyin)
-                    pinyin_rep_map = {
-                        "ing": "ying",
-                        "i": "yi",
-                        "in": "yin",
-                        "u": "wu",
-                    }
-                    if pinyin in pinyin_rep_map.keys():
-                        pinyin = pinyin_rep_map[pinyin]
-                    else:
-                        single_rep_map = {
-                            "v": "yu",
-                            "e": "e",
-                            "i": "y",
-                            "u": "w",
-                        }
-                        if pinyin[0] in single_rep_map.keys():
-                            pinyin = single_rep_map[pinyin[0]] + pinyin[1:]
-
-                assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
-                phone = pinyin_to_symbol_map[pinyin].split(" ")
-                word2ph.append(len(phone))
-
-            phones_list += phone
-            tones_list += [int(tone)] * len(phone)
-    return phones_list, tones_list, word2ph
-
-
-def text_normalize(text):
-    numbers = re.findall(r"\d+(?:\.?\d+)?", text)
-    for number in numbers:
-        text = text.replace(number, cn2an.an2cn(number), 1)
-    text = replace_punctuation(text)
-    return text
-
-
-def get_bert_feature(text, word2ph):
-    from text import chinese_bert
-
-    return chinese_bert.get_bert_feature(text, word2ph)
-
-
-if __name__ == "__main__":
-    from text.chinese_bert import get_bert_feature
-
-    text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
-    text = text_normalize(text)
-    print(text)
-    phones, tones, word2ph = g2p(text)
-    bert = get_bert_feature(text, word2ph)
-
-    print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text))  # output: 这是一个示例文本你好这是一个测试
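A hedged usage sketch of the deleted front-end; it needs pypinyin, jieba, cn2an, and the opencpop-strict.txt mapping next to the module, and g2p asserts one phone group per input character, so it can raise on unexpected input:

text = text_normalize("今天是2023年")  # digits are converted to Chinese numerals first
phones, tones, word2ph = g2p(text)
print(phones, tones, word2ph)
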
spaces/AlanMars/QYL-AI-Space/modules/models/inspurai.py
DELETED
@@ -1,345 +0,0 @@
|
|
1 |
-
# 代码主要来源于 https://github.com/Shawn-Inspur/Yuan-1.0/blob/main/yuan_api/inspurai.py
|
2 |
-
|
3 |
-
import hashlib
|
4 |
-
import json
|
5 |
-
import os
|
6 |
-
import time
|
7 |
-
import uuid
|
8 |
-
from datetime import datetime
|
9 |
-
|
10 |
-
import pytz
|
11 |
-
import requests
|
12 |
-
|
13 |
-
from modules.presets import NO_APIKEY_MSG
|
14 |
-
from modules.models.base_model import BaseLLMModel
|
15 |
-
|
16 |
-
|
17 |
-
class Example:
|
18 |
-
""" store some examples(input, output pairs and formats) for few-shots to prime the model."""
|
19 |
-
|
20 |
-
def __init__(self, inp, out):
|
21 |
-
self.input = inp
|
22 |
-
self.output = out
|
23 |
-
self.id = uuid.uuid4().hex
|
24 |
-
|
25 |
-
def get_input(self):
|
26 |
-
"""return the input of the example."""
|
27 |
-
return self.input
|
28 |
-
|
29 |
-
def get_output(self):
|
30 |
-
"""Return the output of the example."""
|
31 |
-
return self.output
|
32 |
-
|
33 |
-
def get_id(self):
|
34 |
-
"""Returns the unique ID of the example."""
|
35 |
-
return self.id
|
36 |
-
|
37 |
-
def as_dict(self):
|
38 |
-
return {
|
39 |
-
"input": self.get_input(),
|
40 |
-
"output": self.get_output(),
|
41 |
-
"id": self.get_id(),
|
42 |
-
}
|
43 |
-
|
44 |
-
|
45 |
-
class Yuan:
|
46 |
-
"""The main class for a user to interface with the Inspur Yuan API.
|
47 |
-
A user can set account info and add examples of the API request.
|
48 |
-
"""
|
49 |
-
|
50 |
-
def __init__(self,
|
51 |
-
engine='base_10B',
|
52 |
-
temperature=0.9,
|
53 |
-
max_tokens=100,
|
54 |
-
input_prefix='',
|
55 |
-
input_suffix='\n',
|
56 |
-
output_prefix='答:',
|
57 |
-
output_suffix='\n\n',
|
58 |
-
append_output_prefix_to_query=False,
|
59 |
-
topK=1,
|
60 |
-
topP=0.9,
|
61 |
-
frequencyPenalty=1.2,
|
62 |
-
responsePenalty=1.2,
|
63 |
-
noRepeatNgramSize=2):
|
64 |
-
|
65 |
-
self.examples = {}
|
66 |
-
self.engine = engine
|
67 |
-
self.temperature = temperature
|
68 |
-
self.max_tokens = max_tokens
|
69 |
-
self.topK = topK
|
70 |
-
self.topP = topP
|
71 |
-
self.frequencyPenalty = frequencyPenalty
|
72 |
-
self.responsePenalty = responsePenalty
|
73 |
-
self.noRepeatNgramSize = noRepeatNgramSize
|
74 |
-
self.input_prefix = input_prefix
|
75 |
-
self.input_suffix = input_suffix
|
76 |
-
self.output_prefix = output_prefix
|
77 |
-
self.output_suffix = output_suffix
|
78 |
-
self.append_output_prefix_to_query = append_output_prefix_to_query
|
79 |
-
self.stop = (output_suffix + input_prefix).strip()
|
80 |
-
self.api = None
|
81 |
-
|
82 |
-
# if self.engine not in ['base_10B','translate','dialog']:
|
83 |
-
# raise Exception('engine must be one of [\'base_10B\',\'translate\',\'dialog\'] ')
|
84 |
-
def set_account(self, api_key):
|
85 |
-
account = api_key.split('||')
|
86 |
-
self.api = YuanAPI(user=account[0], phone=account[1])
|
87 |
-
|
88 |
-
def add_example(self, ex):
|
89 |
-
"""Add an example to the object.
|
90 |
-
Example must be an instance of the Example class."""
|
91 |
-
assert isinstance(ex, Example), "Please create an Example object."
|
92 |
-
self.examples[ex.get_id()] = ex
|
93 |
-
|
94 |
-
def delete_example(self, id):
|
95 |
-
"""Delete example with the specific id."""
|
96 |
-
if id in self.examples:
|
97 |
-
del self.examples[id]
|
98 |
-
|
99 |
-
def get_example(self, id):
|
100 |
-
"""Get a single example."""
|
101 |
-
return self.examples.get(id, None)
|
102 |
-
|
103 |
-
def get_all_examples(self):
|
104 |
-
"""Returns all examples as a list of dicts."""
|
105 |
-
return {k: v.as_dict() for k, v in self.examples.items()}
|
106 |
-
|
107 |
-
def get_prime_text(self):
|
108 |
-
"""Formats all examples to prime the model."""
|
109 |
-
return "".join(
|
110 |
-
[self.format_example(ex) for ex in self.examples.values()])
|
111 |
-
|
112 |
-
def get_engine(self):
|
113 |
-
"""Returns the engine specified for the API."""
|
114 |
-
return self.engine
|
115 |
-
|
116 |
-
def get_temperature(self):
|
117 |
-
"""Returns the temperature specified for the API."""
|
118 |
-
return self.temperature
|
119 |
-
|
120 |
-
def get_max_tokens(self):
|
121 |
-
"""Returns the max tokens specified for the API."""
|
122 |
-
return self.max_tokens
|
123 |
-
|
124 |
-
def craft_query(self, prompt):
|
125 |
-
"""Creates the query for the API request."""
|
126 |
-
q = self.get_prime_text(
|
127 |
-
) + self.input_prefix + prompt + self.input_suffix
|
128 |
-
if self.append_output_prefix_to_query:
|
129 |
-
q = q + self.output_prefix
|
130 |
-
|
131 |
-
return q
|
132 |
-
|
133 |
-
def format_example(self, ex):
|
134 |
-
"""Formats the input, output pair."""
|
135 |
-
return self.input_prefix + ex.get_input(
|
136 |
-
) + self.input_suffix + self.output_prefix + ex.get_output(
|
137 |
-
) + self.output_suffix
|
138 |
-
|
139 |
-
def response(self,
|
140 |
-
query,
|
141 |
-
engine='base_10B',
|
142 |
-
max_tokens=20,
|
143 |
-
temperature=0.9,
|
144 |
-
topP=0.1,
|
145 |
-
topK=1,
|
146 |
-
frequencyPenalty=1.0,
|
147 |
-
responsePenalty=1.0,
|
148 |
-
noRepeatNgramSize=0):
|
149 |
-
"""Obtains the original result returned by the API."""
|
150 |
-
|
151 |
-
if self.api is None:
|
152 |
-
return NO_APIKEY_MSG
|
153 |
-
try:
|
154 |
-
# requestId = submit_request(query,temperature,topP,topK,max_tokens, engine)
|
155 |
-
requestId = self.api.submit_request(query, temperature, topP, topK, max_tokens, engine, frequencyPenalty,
|
156 |
-
responsePenalty, noRepeatNgramSize)
|
157 |
-
response_text = self.api.reply_request(requestId)
|
158 |
-
except Exception as e:
|
159 |
-
raise e
|
160 |
-
|
161 |
-
return response_text
|
162 |
-
|
163 |
-
def del_special_chars(self, msg):
|
164 |
-
special_chars = ['<unk>', '<eod>', '#', '▃', '▁', '▂', ' ']
|
165 |
-
for char in special_chars:
|
166 |
-
msg = msg.replace(char, '')
|
167 |
-
return msg
|
168 |
-
|
169 |
-
def submit_API(self, prompt, trun=[]):
|
170 |
-
"""Submit prompt to yuan API interface and obtain an pure text reply.
|
171 |
-
:prompt: Question or any content a user may input.
|
172 |
-
:return: pure text response."""
|
173 |
-
query = self.craft_query(prompt)
|
174 |
-
res = self.response(query, engine=self.engine,
|
175 |
-
max_tokens=self.max_tokens,
|
176 |
-
temperature=self.temperature,
|
177 |
-
topP=self.topP,
|
178 |
-
topK=self.topK,
|
179 |
-
frequencyPenalty=self.frequencyPenalty,
|
180 |
-
responsePenalty=self.responsePenalty,
|
181 |
-
noRepeatNgramSize=self.noRepeatNgramSize)
|
182 |
-
if 'resData' in res and res['resData'] != None:
|
183 |
-
txt = res['resData']
|
184 |
-
else:
|
185 |
-
txt = '模型返回为空,请尝试修改输入'
|
186 |
-
# 单独针对翻译模型的后处理
|
187 |
-
if self.engine == 'translate':
|
188 |
-
txt = txt.replace(' ##', '').replace(' "', '"').replace(": ", ":").replace(" ,", ",") \
|
189 |
-
.replace('英文:', '').replace('文:', '').replace("( ", "(").replace(" )", ")")
|
190 |
-
else:
|
191 |
-
txt = txt.replace(' ', '')
|
192 |
-
txt = self.del_special_chars(txt)
|
193 |
-
|
194 |
-
# trun多结束符截断模型输出
|
195 |
-
if isinstance(trun, str):
|
196 |
-
trun = [trun]
|
197 |
-
try:
|
198 |
-
if trun != None and isinstance(trun, list) and trun != []:
|
199 |
-
for tr in trun:
|
200 |
-
if tr in txt and tr != "":
|
201 |
-
txt = txt[:txt.index(tr)]
|
202 |
-
else:
|
203 |
-
continue
|
204 |
-
except:
|
205 |
-
return txt
|
206 |
-
return txt
|
207 |
-
|
208 |
-
|
209 |
-
class YuanAPI:
|
210 |
-
ACCOUNT = ''
|
211 |
-
PHONE = ''
|
212 |
-
|
213 |
-
SUBMIT_URL = "http://api.airyuan.cn:32102/v1/interface/api/infer/getRequestId?"
|
214 |
-
REPLY_URL = "http://api.airyuan.cn:32102/v1/interface/api/result?"
|
215 |
-
-    def __init__(self, user, phone):
-        self.ACCOUNT = user
-        self.PHONE = phone
-
-    @staticmethod
-    def code_md5(string):
-        """Return the hex MD5 digest of a UTF-8 string."""
-        code = string.encode("utf-8")
-        m = hashlib.md5()
-        m.update(code)
-        return m.hexdigest()
-
-    @staticmethod
-    def rest_get(url, header, timeout, show_error=False):
-        """Call a REST GET endpoint; return None on failure."""
-        try:
-            # note: verify=False disables TLS certificate verification
-            response = requests.get(url, headers=header, timeout=timeout, verify=False)
-            return response
-        except Exception as exception:
-            if show_error:
-                print(exception)
-            return None
-
-    def header_generation(self):
-        """Generate the authentication header for an API request."""
-        t = datetime.now(pytz.timezone("Asia/Shanghai")).strftime("%Y-%m-%d")
-        token = self.code_md5(self.ACCOUNT + self.PHONE + t)
-        headers = {'token': token}
-        return headers
-
-    def submit_request(self, query, temperature, topP, topK, max_tokens, engine, frequencyPenalty, responsePenalty,
-                       noRepeatNgramSize):
-        """Submit the query to the backend server and return the requestId."""
-        headers = self.header_generation()
-        # url=SUBMIT_URL + "account={0}&data={1}&temperature={2}&topP={3}&topK={4}&tokensToGenerate={5}&type={6}".format(ACCOUNT,query,temperature,topP,topK,max_tokens,"api")
-        # url=SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \
-        #                  "&type={7}".format(engine,ACCOUNT,query,temperature,topP,topK, max_tokens,"api")
-        url = self.SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \
-                                "&type={7}&frequencyPenalty={8}&responsePenalty={9}&noRepeatNgramSize={10}". \
-            format(engine, self.ACCOUNT, query, temperature, topP, topK, max_tokens, "api", frequencyPenalty,
-                   responsePenalty, noRepeatNgramSize)
-        response = self.rest_get(url, headers, 30)
-        response_text = json.loads(response.text)
-        if response_text["flag"]:
-            requestId = response_text["resData"]
-            return requestId
-        else:
-            raise RuntimeWarning(response_text)
-
-    def reply_request(self, requestId, cycle_count=5):
-        """Poll the reply API until the inference response is available."""
-        url = self.REPLY_URL + "account={0}&requestId={1}".format(self.ACCOUNT, requestId)
-        headers = self.header_generation()
-        response_text = {"flag": True, "resData": None}
-        for i in range(cycle_count):
-            response = self.rest_get(url, headers, 30, show_error=True)
-            response_text = json.loads(response.text)
-            if response_text["resData"] is not None:
-                return response_text
-            if response_text["flag"] is False and i == cycle_count - 1:
-                raise RuntimeWarning(response_text)
-            time.sleep(3)
-        return response_text
-
-
-class Yuan_Client(BaseLLMModel):
-    def __init__(self, model_name, api_key, user_name="", system_prompt=None):
-        super().__init__(model_name=model_name, user=user_name)
-        self.history = []
-        self.api_key = api_key
-        self.system_prompt = system_prompt
-
-        self.input_prefix = ""
-        self.output_prefix = ""
-
-    def set_text_prefix(self, option, value):
-        if option == 'input_prefix':
-            self.input_prefix = value
-        elif option == 'output_prefix':
-            self.output_prefix = value
-
-    def get_answer_at_once(self):
-        # Yuan's temperature range is (0, 1] while the base model's is [0, 2];
-        # Yuan 0.9 corresponds to base 1, so values above 1 must be converted.
-        temperature = self.temperature if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
-        topP = self.top_p
-        topK = self.n_choices
-        # max_tokens should be in [1, 200]
-        max_tokens = self.max_generation_token if self.max_generation_token is not None else 50
-        if max_tokens > 200:
-            max_tokens = 200
-        stop = self.stop_sequence if self.stop_sequence is not None else []
-        examples = []
-        system_prompt = self.system_prompt
-        if system_prompt is not None:
-            lines = system_prompt.splitlines()
-            # TODO: support prefixes in system prompt or settings
-            """
-            if lines[0].startswith('-'):
-                prefixes = lines.pop()[1:].split('|')
-                self.input_prefix = prefixes[0]
-                if len(prefixes) > 1:
-                    self.output_prefix = prefixes[1]
-                if len(prefixes) > 2:
-                    stop = prefixes[2].split(',')
-            """
-            # treat the system prompt as alternating input/output example lines
-            for i in range(0, len(lines), 2):
-                in_line = lines[i]
-                out_line = lines[i + 1] if i + 1 < len(lines) else ""
-                examples.append((in_line, out_line))
-        yuan = Yuan(engine=self.model_name.replace('yuanai-1.0-', ''),
-                    temperature=temperature,
-                    max_tokens=max_tokens,
-                    topK=topK,
-                    topP=topP,
-                    input_prefix=self.input_prefix,
-                    input_suffix="",
-                    output_prefix=self.output_prefix,
-                    output_suffix="".join(stop),
-                    )
-        if not self.api_key:
-            return NO_APIKEY_MSG, 0
-        yuan.set_account(self.api_key)
-
-        for in_line, out_line in examples:
-            yuan.add_example(Example(inp=in_line, out=out_line))
-
-        prompt = self.history[-1]["content"]
-        answer = yuan.submit_API(prompt, trun=stop)
-        return answer, len(answer)
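
The deleted client above authenticates with a date-keyed MD5 token rather than a static bearer key: header_generation hashes account + phone + today's date (Asia/Shanghai) and sends the digest as a `token` header. Below is a minimal, self-contained sketch of that scheme; the "demo-account"/"demo-phone" values are hypothetical, and timezone(timedelta(hours=8)) stands in for pytz.timezone("Asia/Shanghai") to avoid the pytz dependency.

import hashlib
from datetime import datetime, timezone, timedelta

def yuan_token(account: str, phone: str) -> str:
    # The deleted client signs each request with MD5(account + phone + date),
    # where the date is today's date in the Asia/Shanghai timezone (UTC+8).
    today = datetime.now(timezone(timedelta(hours=8))).strftime("%Y-%m-%d")
    return hashlib.md5((account + phone + today).encode("utf-8")).hexdigest()

headers = {"token": yuan_token("demo-account", "demo-phone")}  # hypothetical credentials

Because the digest changes at midnight Shanghai time, a token is only valid for the calendar day on which it was generated; the server presumably recomputes the same digest to verify it.
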
spaces/AlexWang/lama/saicinpainting/training/modules/base.py
DELETED
@@ -1,80 +0,0 @@
-import abc
-from typing import Tuple, List
-
-import torch
-import torch.nn as nn
-
-from saicinpainting.training.modules.depthwise_sep_conv import DepthWiseSeperableConv
-from saicinpainting.training.modules.multidilated_conv import MultidilatedConv
-
-
-class BaseDiscriminator(nn.Module):
-    @abc.abstractmethod
-    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, List[torch.Tensor]]:
-        """
-        Predict scores and get intermediate activations. Useful for feature matching loss.
-        :return: tuple (scores, list of intermediate activations)
-        """
-        raise NotImplementedError()
-
-
-def get_conv_block_ctor(kind='default'):
-    if not isinstance(kind, str):
-        return kind
-    if kind == 'default':
-        return nn.Conv2d
-    if kind == 'depthwise':
-        return DepthWiseSeperableConv
-    if kind == 'multidilated':
-        return MultidilatedConv
-    raise ValueError(f'Unknown convolutional block kind {kind}')
-
-
-def get_norm_layer(kind='bn'):
-    if not isinstance(kind, str):
-        return kind
-    if kind == 'bn':
-        return nn.BatchNorm2d
-    if kind == 'in':
-        return nn.InstanceNorm2d
-    raise ValueError(f'Unknown norm block kind {kind}')
-
-
-def get_activation(kind='tanh'):
-    if kind == 'tanh':
-        return nn.Tanh()
-    if kind == 'sigmoid':
-        return nn.Sigmoid()
-    if kind is False:
-        return nn.Identity()
-    raise ValueError(f'Unknown activation kind {kind}')
-
-
-class SimpleMultiStepGenerator(nn.Module):
-    def __init__(self, steps: List[nn.Module]):
-        super().__init__()
-        self.steps = nn.ModuleList(steps)
-
-    def forward(self, x):
-        cur_in = x
-        outs = []
-        for step in self.steps:
-            cur_out = step(cur_in)
-            outs.append(cur_out)
-            cur_in = torch.cat((cur_in, cur_out), dim=1)
-        return torch.cat(outs[::-1], dim=1)
-
-
-def deconv_factory(kind, ngf, mult, norm_layer, activation, max_features):
-    if kind == 'convtranspose':
-        return [nn.ConvTranspose2d(min(max_features, ngf * mult),
-                                   min(max_features, int(ngf * mult / 2)),
-                                   kernel_size=3, stride=2, padding=1, output_padding=1),
-                norm_layer(min(max_features, int(ngf * mult / 2))), activation]
-    elif kind == 'bilinear':
-        return [nn.Upsample(scale_factor=2, mode='bilinear'),
-                DepthWiseSeperableConv(min(max_features, ngf * mult),
-                                       min(max_features, int(ngf * mult / 2)),
-                                       kernel_size=3, stride=1, padding=1),
-                norm_layer(min(max_features, int(ngf * mult / 2))), activation]
-    else:
-        raise ValueError(f"Invalid deconv kind: {kind}")
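
The three factory helpers in this deleted module let training configs pick layers by plain strings (or pass a constructor straight through). A minimal sketch of how they compose into a conv-norm-activation block, assuming the functions above are in scope; the channel sizes and input shape here are arbitrary illustration values.

import torch
import torch.nn as nn

# String keys map to layer constructors (or instances, for activations).
conv_ctor = get_conv_block_ctor('default')  # -> nn.Conv2d
norm_ctor = get_norm_layer('bn')            # -> nn.BatchNorm2d
act = get_activation('tanh')                # -> nn.Tanh() instance

block = nn.Sequential(
    conv_ctor(3, 64, kernel_size=3, padding=1),  # 3 -> 64 channels
    norm_ctor(64),
    act,
)
out = block(torch.randn(1, 3, 256, 256))  # torch.Size([1, 64, 256, 256])

Note that the 'depthwise' and 'multidilated' kinds return constructors from other modules in this repo, whose signatures may differ from nn.Conv2d, so a config-driven builder has to pass arguments accordingly.
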
spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/test_accuracy.py
DELETED
File without changes
spaces/Alpaca233/SadTalker/src/face3d/util/util.py
DELETED
@@ -1,208 +0,0 @@
-"""This script contains basic utilities for Deep3DFaceRecon_pytorch
-"""
-from __future__ import print_function
-import numpy as np
-import torch
-from PIL import Image
-import os
-import importlib
-import argparse
-from argparse import Namespace
-import torchvision
-
-
-def str2bool(v):
-    if isinstance(v, bool):
-        return v
-    if v.lower() in ('yes', 'true', 't', 'y', '1'):
-        return True
-    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
-        return False
-    else:
-        raise argparse.ArgumentTypeError('Boolean value expected.')
-
-
-def copyconf(default_opt, **kwargs):
-    conf = Namespace(**vars(default_opt))
-    for key in kwargs:
-        setattr(conf, key, kwargs[key])
-    return conf
-
-
-def genvalconf(train_opt, **kwargs):
-    conf = Namespace(**vars(train_opt))
-    attr_dict = train_opt.__dict__
-    for key, value in attr_dict.items():
-        if 'val' in key and key.split('_')[0] in attr_dict:
-            setattr(conf, key.split('_')[0], value)
-
-    for key in kwargs:
-        setattr(conf, key, kwargs[key])
-
-    return conf
-
-
-def find_class_in_module(target_cls_name, module):
-    target_cls_name = target_cls_name.replace('_', '').lower()
-    clslib = importlib.import_module(module)
-    cls = None
-    for name, clsobj in clslib.__dict__.items():
-        if name.lower() == target_cls_name:
-            cls = clsobj
-
-    assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, target_cls_name)
-
-    return cls
-
-
-def tensor2im(input_image, imtype=np.uint8):
-    """Convert a Tensor array into a numpy image array.
-
-    Parameters:
-        input_image (tensor) -- the input image tensor array, range (0, 1)
-        imtype (type)        -- the desired type of the converted numpy array
-    """
-    if not isinstance(input_image, np.ndarray):
-        if isinstance(input_image, torch.Tensor):  # get the data from a variable
-            image_tensor = input_image.data
-        else:
-            return input_image
-        image_numpy = image_tensor.clamp(0.0, 1.0).cpu().float().numpy()  # convert it into a numpy array
-        if image_numpy.shape[0] == 1:  # grayscale to RGB
-            image_numpy = np.tile(image_numpy, (3, 1, 1))
-        image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0  # post-processing: transpose and scaling
-    else:  # if it is a numpy array, do nothing
-        image_numpy = input_image
-    return image_numpy.astype(imtype)
-
-
-def diagnose_network(net, name='network'):
-    """Calculate and print the mean of the average absolute gradients.
-
-    Parameters:
-        net (torch network) -- Torch network
-        name (str)          -- the name of the network
-    """
-    mean = 0.0
-    count = 0
-    for param in net.parameters():
-        if param.grad is not None:
-            mean += torch.mean(torch.abs(param.grad.data))
-            count += 1
-    if count > 0:
-        mean = mean / count
-    print(name)
-    print(mean)
-
-
-def save_image(image_numpy, image_path, aspect_ratio=1.0):
-    """Save a numpy image to the disk.
-
-    Parameters:
-        image_numpy (numpy array) -- input numpy array
-        image_path (str)          -- the path of the image
-    """
-
-    image_pil = Image.fromarray(image_numpy)
-    h, w, _ = image_numpy.shape
-
-    if aspect_ratio is None:
-        pass
-    elif aspect_ratio > 1.0:
-        image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)
-    elif aspect_ratio < 1.0:
-        image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)
-    image_pil.save(image_path)
-
-
-def print_numpy(x, val=True, shp=False):
-    """Print the mean, min, max, median, std, and size of a numpy array.
-
-    Parameters:
-        val (bool) -- if True, print the values of the numpy array
-        shp (bool) -- if True, print the shape of the numpy array
-    """
-    x = x.astype(np.float64)
-    if shp:
-        print('shape,', x.shape)
-    if val:
-        x = x.flatten()
-        print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (
-            np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))
-
-
-def mkdirs(paths):
-    """Create empty directories if they don't exist.
-
-    Parameters:
-        paths (str list) -- a list of directory paths
-    """
-    if isinstance(paths, list) and not isinstance(paths, str):
-        for path in paths:
-            mkdir(path)
-    else:
-        mkdir(paths)
-
-
-def mkdir(path):
-    """Create a single empty directory if it doesn't exist.
-
-    Parameters:
-        path (str) -- a single directory path
-    """
-    if not os.path.exists(path):
-        os.makedirs(path)
-
-
-def correct_resize_label(t, size):
-    device = t.device
-    t = t.detach().cpu()
-    resized = []
-    for i in range(t.size(0)):
-        one_t = t[i, :1]
-        one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0))
-        one_np = one_np[:, :, 0]
-        one_image = Image.fromarray(one_np).resize(size, Image.NEAREST)
-        resized_t = torch.from_numpy(np.array(one_image)).long()
-        resized.append(resized_t)
-    return torch.stack(resized, dim=0).to(device)
-
-
-def correct_resize(t, size, mode=Image.BICUBIC):
-    device = t.device
-    t = t.detach().cpu()
-    resized = []
-    for i in range(t.size(0)):
-        one_t = t[i:i + 1]
-        one_image = Image.fromarray(tensor2im(one_t)).resize(size, mode)
-        resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0
-        resized.append(resized_t)
-    return torch.stack(resized, dim=0).to(device)
-
-
-def draw_landmarks(img, landmark, color='r', step=2):
-    """
-    Return:
-        img -- numpy.array, (B, H, W, 3) img with landmark, RGB order, range (0, 255)
-
-    Parameters:
-        img      -- numpy.array, (B, H, W, 3), RGB order, range (0, 255)
-        landmark -- numpy.array, (B, 68, 2), y direction is opposite to v direction
-        color    -- str, 'r' or 'b' (red or blue)
-    """
-    if color == 'r':
-        c = np.array([255., 0, 0])
-    else:
-        c = np.array([0, 0, 255.])
-
-    _, H, W, _ = img.shape
-    img, landmark = img.copy(), landmark.copy()
-    landmark[..., 1] = H - 1 - landmark[..., 1]
-    landmark = np.round(landmark).astype(np.int32)
-    for i in range(landmark.shape[1]):
-        x, y = landmark[:, i, 0], landmark[:, i, 1]
-        for j in range(-step, step):
-            for k in range(-step, step):
-                u = np.clip(x + j, 0, W - 1)
-                v = np.clip(y + k, 0, H - 1)
-                for m in range(landmark.shape[0]):
-                    img[m, v[m], u[m]] = c
-    return img
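
A small round-trip sketch for the two most-used helpers in this deleted file, assuming they are importable from this module: tensor2im maps a CHW float tensor in [0, 1] to an HWC uint8 array, which save_image then writes via PIL. The tensor contents and output file name are hypothetical.

import numpy as np
import torch

# tensor2im: CHW float tensor in [0, 1] -> HWC uint8 numpy array
fake = torch.rand(3, 64, 64)   # e.g. a network output in [0, 1]
arr = tensor2im(fake)          # -> shape (64, 64, 3), dtype uint8
assert arr.shape == (64, 64, 3) and arr.dtype == np.uint8

# save_image: with aspect_ratio=1.0 the array is written unresized
save_image(arr, "preview.png", aspect_ratio=1.0)
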
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/lpw_stable_diffusion_onnx.py
DELETED
@@ -1,1146 +0,0 @@
|
|
1 |
-
import inspect
|
2 |
-
import re
|
3 |
-
from typing import Callable, List, Optional, Union
|
4 |
-
|
5 |
-
import numpy as np
|
6 |
-
import PIL
|
7 |
-
import torch
|
8 |
-
from packaging import version
|
9 |
-
from transformers import CLIPImageProcessor, CLIPTokenizer
|
10 |
-
|
11 |
-
import diffusers
|
12 |
-
from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, SchedulerMixin
|
13 |
-
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
|
14 |
-
from diffusers.utils import logging
|
15 |
-
|
16 |
-
|
17 |
-
try:
|
18 |
-
from diffusers.pipelines.onnx_utils import ORT_TO_NP_TYPE
|
19 |
-
except ImportError:
|
20 |
-
ORT_TO_NP_TYPE = {
|
21 |
-
"tensor(bool)": np.bool_,
|
22 |
-
"tensor(int8)": np.int8,
|
23 |
-
"tensor(uint8)": np.uint8,
|
24 |
-
"tensor(int16)": np.int16,
|
25 |
-
"tensor(uint16)": np.uint16,
|
26 |
-
"tensor(int32)": np.int32,
|
27 |
-
"tensor(uint32)": np.uint32,
|
28 |
-
"tensor(int64)": np.int64,
|
29 |
-
"tensor(uint64)": np.uint64,
|
30 |
-
"tensor(float16)": np.float16,
|
31 |
-
"tensor(float)": np.float32,
|
32 |
-
"tensor(double)": np.float64,
|
33 |
-
}
|
34 |
-
|
35 |
-
try:
|
36 |
-
from diffusers.utils import PIL_INTERPOLATION
|
37 |
-
except ImportError:
|
38 |
-
if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
|
39 |
-
PIL_INTERPOLATION = {
|
40 |
-
"linear": PIL.Image.Resampling.BILINEAR,
|
41 |
-
"bilinear": PIL.Image.Resampling.BILINEAR,
|
42 |
-
"bicubic": PIL.Image.Resampling.BICUBIC,
|
43 |
-
"lanczos": PIL.Image.Resampling.LANCZOS,
|
44 |
-
"nearest": PIL.Image.Resampling.NEAREST,
|
45 |
-
}
|
46 |
-
else:
|
47 |
-
PIL_INTERPOLATION = {
|
48 |
-
"linear": PIL.Image.LINEAR,
|
49 |
-
"bilinear": PIL.Image.BILINEAR,
|
50 |
-
"bicubic": PIL.Image.BICUBIC,
|
51 |
-
"lanczos": PIL.Image.LANCZOS,
|
52 |
-
"nearest": PIL.Image.NEAREST,
|
53 |
-
}
|
54 |
-
# ------------------------------------------------------------------------------
|
55 |
-
|
56 |
-
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
|
57 |
-
|
58 |
-
re_attention = re.compile(
|
59 |
-
r"""
|
60 |
-
\\\(|
|
61 |
-
\\\)|
|
62 |
-
\\\[|
|
63 |
-
\\]|
|
64 |
-
\\\\|
|
65 |
-
\\|
|
66 |
-
\(|
|
67 |
-
\[|
|
68 |
-
:([+-]?[.\d]+)\)|
|
69 |
-
\)|
|
70 |
-
]|
|
71 |
-
[^\\()\[\]:]+|
|
72 |
-
:
|
73 |
-
""",
|
74 |
-
re.X,
|
75 |
-
)
|
76 |
-
|
77 |
-
|
78 |
-
def parse_prompt_attention(text):
|
79 |
-
"""
|
80 |
-
Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
|
81 |
-
Accepted tokens are:
|
82 |
-
(abc) - increases attention to abc by a multiplier of 1.1
|
83 |
-
(abc:3.12) - increases attention to abc by a multiplier of 3.12
|
84 |
-
[abc] - decreases attention to abc by a multiplier of 1.1
|
85 |
-
\( - literal character '('
|
86 |
-
\[ - literal character '['
|
87 |
-
\) - literal character ')'
|
88 |
-
\] - literal character ']'
|
89 |
-
\\ - literal character '\'
|
90 |
-
anything else - just text
|
91 |
-
>>> parse_prompt_attention('normal text')
|
92 |
-
[['normal text', 1.0]]
|
93 |
-
>>> parse_prompt_attention('an (important) word')
|
94 |
-
[['an ', 1.0], ['important', 1.1], [' word', 1.0]]
|
95 |
-
>>> parse_prompt_attention('(unbalanced')
|
96 |
-
[['unbalanced', 1.1]]
|
97 |
-
>>> parse_prompt_attention('\(literal\]')
|
98 |
-
[['(literal]', 1.0]]
|
99 |
-
>>> parse_prompt_attention('(unnecessary)(parens)')
|
100 |
-
[['unnecessaryparens', 1.1]]
|
101 |
-
>>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
|
102 |
-
[['a ', 1.0],
|
103 |
-
['house', 1.5730000000000004],
|
104 |
-
[' ', 1.1],
|
105 |
-
['on', 1.0],
|
106 |
-
[' a ', 1.1],
|
107 |
-
['hill', 0.55],
|
108 |
-
[', sun, ', 1.1],
|
109 |
-
['sky', 1.4641000000000006],
|
110 |
-
['.', 1.1]]
|
111 |
-
"""
|
112 |
-
|
113 |
-
res = []
|
114 |
-
round_brackets = []
|
115 |
-
square_brackets = []
|
116 |
-
|
117 |
-
round_bracket_multiplier = 1.1
|
118 |
-
square_bracket_multiplier = 1 / 1.1
|
119 |
-
|
120 |
-
def multiply_range(start_position, multiplier):
|
121 |
-
for p in range(start_position, len(res)):
|
122 |
-
res[p][1] *= multiplier
|
123 |
-
|
124 |
-
for m in re_attention.finditer(text):
|
125 |
-
text = m.group(0)
|
126 |
-
weight = m.group(1)
|
127 |
-
|
128 |
-
if text.startswith("\\"):
|
129 |
-
res.append([text[1:], 1.0])
|
130 |
-
elif text == "(":
|
131 |
-
round_brackets.append(len(res))
|
132 |
-
elif text == "[":
|
133 |
-
square_brackets.append(len(res))
|
134 |
-
elif weight is not None and len(round_brackets) > 0:
|
135 |
-
multiply_range(round_brackets.pop(), float(weight))
|
136 |
-
elif text == ")" and len(round_brackets) > 0:
|
137 |
-
multiply_range(round_brackets.pop(), round_bracket_multiplier)
|
138 |
-
elif text == "]" and len(square_brackets) > 0:
|
139 |
-
multiply_range(square_brackets.pop(), square_bracket_multiplier)
|
140 |
-
else:
|
141 |
-
res.append([text, 1.0])
|
142 |
-
|
143 |
-
for pos in round_brackets:
|
144 |
-
multiply_range(pos, round_bracket_multiplier)
|
145 |
-
|
146 |
-
for pos in square_brackets:
|
147 |
-
multiply_range(pos, square_bracket_multiplier)
|
148 |
-
|
149 |
-
if len(res) == 0:
|
150 |
-
res = [["", 1.0]]
|
151 |
-
|
152 |
-
# merge runs of identical weights
|
153 |
-
i = 0
|
154 |
-
while i + 1 < len(res):
|
155 |
-
if res[i][1] == res[i + 1][1]:
|
156 |
-
res[i][0] += res[i + 1][0]
|
157 |
-
res.pop(i + 1)
|
158 |
-
else:
|
159 |
-
i += 1
|
160 |
-
|
161 |
-
return res
|
162 |
-
|
163 |
-
|
164 |
-
def get_prompts_with_weights(pipe, prompt: List[str], max_length: int):
|
165 |
-
r"""
|
166 |
-
Tokenize a list of prompts and return its tokens with weights of each token.
|
167 |
-
|
168 |
-
No padding, starting or ending token is included.
|
169 |
-
"""
|
170 |
-
tokens = []
|
171 |
-
weights = []
|
172 |
-
truncated = False
|
173 |
-
for text in prompt:
|
174 |
-
texts_and_weights = parse_prompt_attention(text)
|
175 |
-
text_token = []
|
176 |
-
text_weight = []
|
177 |
-
for word, weight in texts_and_weights:
|
178 |
-
# tokenize and discard the starting and the ending token
|
179 |
-
token = pipe.tokenizer(word, return_tensors="np").input_ids[0, 1:-1]
|
180 |
-
text_token += list(token)
|
181 |
-
# copy the weight by length of token
|
182 |
-
text_weight += [weight] * len(token)
|
183 |
-
# stop if the text is too long (longer than truncation limit)
|
184 |
-
if len(text_token) > max_length:
|
185 |
-
truncated = True
|
186 |
-
break
|
187 |
-
# truncate
|
188 |
-
if len(text_token) > max_length:
|
189 |
-
truncated = True
|
190 |
-
text_token = text_token[:max_length]
|
191 |
-
text_weight = text_weight[:max_length]
|
192 |
-
tokens.append(text_token)
|
193 |
-
weights.append(text_weight)
|
194 |
-
if truncated:
|
195 |
-
logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
|
196 |
-
return tokens, weights
|
197 |
-
|
198 |
-
|
199 |
-
def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
|
200 |
-
r"""
|
201 |
-
Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
|
202 |
-
"""
|
203 |
-
max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
|
204 |
-
weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
|
205 |
-
for i in range(len(tokens)):
|
206 |
-
tokens[i] = [bos] + tokens[i] + [pad] * (max_length - 1 - len(tokens[i]) - 1) + [eos]
|
207 |
-
if no_boseos_middle:
|
208 |
-
weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
|
209 |
-
else:
|
210 |
-
w = []
|
211 |
-
if len(weights[i]) == 0:
|
212 |
-
w = [1.0] * weights_length
|
213 |
-
else:
|
214 |
-
for j in range(max_embeddings_multiples):
|
215 |
-
w.append(1.0) # weight for starting token in this chunk
|
216 |
-
w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
|
217 |
-
w.append(1.0) # weight for ending token in this chunk
|
218 |
-
w += [1.0] * (weights_length - len(w))
|
219 |
-
weights[i] = w[:]
|
220 |
-
|
221 |
-
return tokens, weights
|
222 |
-
|
223 |
-
|
224 |
-
def get_unweighted_text_embeddings(
|
225 |
-
pipe,
|
226 |
-
text_input: np.array,
|
227 |
-
chunk_length: int,
|
228 |
-
no_boseos_middle: Optional[bool] = True,
|
229 |
-
):
|
230 |
-
"""
|
231 |
-
When the length of tokens is a multiple of the capacity of the text encoder,
|
232 |
-
it should be split into chunks and sent to the text encoder individually.
|
233 |
-
"""
|
234 |
-
max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
|
235 |
-
if max_embeddings_multiples > 1:
|
236 |
-
text_embeddings = []
|
237 |
-
for i in range(max_embeddings_multiples):
|
238 |
-
# extract the i-th chunk
|
239 |
-
text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].copy()
|
240 |
-
|
241 |
-
# cover the head and the tail by the starting and the ending tokens
|
242 |
-
text_input_chunk[:, 0] = text_input[0, 0]
|
243 |
-
text_input_chunk[:, -1] = text_input[0, -1]
|
244 |
-
|
245 |
-
text_embedding = pipe.text_encoder(input_ids=text_input_chunk)[0]
|
246 |
-
|
247 |
-
if no_boseos_middle:
|
248 |
-
if i == 0:
|
249 |
-
# discard the ending token
|
250 |
-
text_embedding = text_embedding[:, :-1]
|
251 |
-
elif i == max_embeddings_multiples - 1:
|
252 |
-
# discard the starting token
|
253 |
-
text_embedding = text_embedding[:, 1:]
|
254 |
-
else:
|
255 |
-
# discard both starting and ending tokens
|
256 |
-
text_embedding = text_embedding[:, 1:-1]
|
257 |
-
|
258 |
-
text_embeddings.append(text_embedding)
|
259 |
-
text_embeddings = np.concatenate(text_embeddings, axis=1)
|
260 |
-
else:
|
261 |
-
text_embeddings = pipe.text_encoder(input_ids=text_input)[0]
|
262 |
-
return text_embeddings
|
263 |
-
|
264 |
-
|
265 |
-
def get_weighted_text_embeddings(
|
266 |
-
pipe,
|
267 |
-
prompt: Union[str, List[str]],
|
268 |
-
uncond_prompt: Optional[Union[str, List[str]]] = None,
|
269 |
-
max_embeddings_multiples: Optional[int] = 4,
|
270 |
-
no_boseos_middle: Optional[bool] = False,
|
271 |
-
skip_parsing: Optional[bool] = False,
|
272 |
-
skip_weighting: Optional[bool] = False,
|
273 |
-
**kwargs,
|
274 |
-
):
|
275 |
-
r"""
|
276 |
-
Prompts can be assigned with local weights using brackets. For example,
|
277 |
-
prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
|
278 |
-
and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
|
279 |
-
|
280 |
-
Also, to regularize of the embedding, the weighted embedding would be scaled to preserve the original mean.
|
281 |
-
|
282 |
-
Args:
|
283 |
-
pipe (`OnnxStableDiffusionPipeline`):
|
284 |
-
Pipe to provide access to the tokenizer and the text encoder.
|
285 |
-
prompt (`str` or `List[str]`):
|
286 |
-
The prompt or prompts to guide the image generation.
|
287 |
-
uncond_prompt (`str` or `List[str]`):
|
288 |
-
The unconditional prompt or prompts for guide the image generation. If unconditional prompt
|
289 |
-
is provided, the embeddings of prompt and uncond_prompt are concatenated.
|
290 |
-
max_embeddings_multiples (`int`, *optional*, defaults to `1`):
|
291 |
-
The max multiple length of prompt embeddings compared to the max output length of text encoder.
|
292 |
-
no_boseos_middle (`bool`, *optional*, defaults to `False`):
|
293 |
-
If the length of text token is multiples of the capacity of text encoder, whether reserve the starting and
|
294 |
-
ending token in each of the chunk in the middle.
|
295 |
-
skip_parsing (`bool`, *optional*, defaults to `False`):
|
296 |
-
Skip the parsing of brackets.
|
297 |
-
skip_weighting (`bool`, *optional*, defaults to `False`):
|
298 |
-
Skip the weighting. When the parsing is skipped, it is forced True.
|
299 |
-
"""
|
300 |
-
max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
|
301 |
-
if isinstance(prompt, str):
|
302 |
-
prompt = [prompt]
|
303 |
-
|
304 |
-
if not skip_parsing:
|
305 |
-
prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
|
306 |
-
if uncond_prompt is not None:
|
307 |
-
if isinstance(uncond_prompt, str):
|
308 |
-
uncond_prompt = [uncond_prompt]
|
309 |
-
uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
|
310 |
-
else:
|
311 |
-
prompt_tokens = [
|
312 |
-
token[1:-1]
|
313 |
-
for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True, return_tensors="np").input_ids
|
314 |
-
]
|
315 |
-
prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
|
316 |
-
if uncond_prompt is not None:
|
317 |
-
if isinstance(uncond_prompt, str):
|
318 |
-
uncond_prompt = [uncond_prompt]
|
319 |
-
uncond_tokens = [
|
320 |
-
token[1:-1]
|
321 |
-
for token in pipe.tokenizer(
|
322 |
-
uncond_prompt,
|
323 |
-
max_length=max_length,
|
324 |
-
truncation=True,
|
325 |
-
return_tensors="np",
|
326 |
-
).input_ids
|
327 |
-
]
|
328 |
-
uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
|
329 |
-
|
330 |
-
# round up the longest length of tokens to a multiple of (model_max_length - 2)
|
331 |
-
max_length = max([len(token) for token in prompt_tokens])
|
332 |
-
if uncond_prompt is not None:
|
333 |
-
max_length = max(max_length, max([len(token) for token in uncond_tokens]))
|
334 |
-
|
335 |
-
max_embeddings_multiples = min(
|
336 |
-
max_embeddings_multiples,
|
337 |
-
(max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
|
338 |
-
)
|
339 |
-
max_embeddings_multiples = max(1, max_embeddings_multiples)
|
340 |
-
max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
|
341 |
-
|
342 |
-
# pad the length of tokens and weights
|
343 |
-
bos = pipe.tokenizer.bos_token_id
|
344 |
-
eos = pipe.tokenizer.eos_token_id
|
345 |
-
pad = getattr(pipe.tokenizer, "pad_token_id", eos)
|
346 |
-
prompt_tokens, prompt_weights = pad_tokens_and_weights(
|
347 |
-
prompt_tokens,
|
348 |
-
prompt_weights,
|
349 |
-
max_length,
|
350 |
-
bos,
|
351 |
-
eos,
|
352 |
-
pad,
|
353 |
-
no_boseos_middle=no_boseos_middle,
|
354 |
-
chunk_length=pipe.tokenizer.model_max_length,
|
355 |
-
)
|
356 |
-
prompt_tokens = np.array(prompt_tokens, dtype=np.int32)
|
357 |
-
if uncond_prompt is not None:
|
358 |
-
uncond_tokens, uncond_weights = pad_tokens_and_weights(
|
359 |
-
uncond_tokens,
|
360 |
-
uncond_weights,
|
361 |
-
max_length,
|
362 |
-
bos,
|
363 |
-
eos,
|
364 |
-
pad,
|
365 |
-
no_boseos_middle=no_boseos_middle,
|
366 |
-
chunk_length=pipe.tokenizer.model_max_length,
|
367 |
-
)
|
368 |
-
uncond_tokens = np.array(uncond_tokens, dtype=np.int32)
|
369 |
-
|
370 |
-
# get the embeddings
|
371 |
-
text_embeddings = get_unweighted_text_embeddings(
|
372 |
-
pipe,
|
373 |
-
prompt_tokens,
|
374 |
-
pipe.tokenizer.model_max_length,
|
375 |
-
no_boseos_middle=no_boseos_middle,
|
376 |
-
)
|
377 |
-
prompt_weights = np.array(prompt_weights, dtype=text_embeddings.dtype)
|
378 |
-
if uncond_prompt is not None:
|
379 |
-
uncond_embeddings = get_unweighted_text_embeddings(
|
380 |
-
pipe,
|
381 |
-
uncond_tokens,
|
382 |
-
pipe.tokenizer.model_max_length,
|
383 |
-
no_boseos_middle=no_boseos_middle,
|
384 |
-
)
|
385 |
-
uncond_weights = np.array(uncond_weights, dtype=uncond_embeddings.dtype)
|
386 |
-
|
387 |
-
# assign weights to the prompts and normalize in the sense of mean
|
388 |
-
# TODO: should we normalize by chunk or in a whole (current implementation)?
|
389 |
-
if (not skip_parsing) and (not skip_weighting):
|
390 |
-
previous_mean = text_embeddings.mean(axis=(-2, -1))
|
391 |
-
text_embeddings *= prompt_weights[:, :, None]
|
392 |
-
text_embeddings *= (previous_mean / text_embeddings.mean(axis=(-2, -1)))[:, None, None]
|
393 |
-
if uncond_prompt is not None:
|
394 |
-
previous_mean = uncond_embeddings.mean(axis=(-2, -1))
|
395 |
-
uncond_embeddings *= uncond_weights[:, :, None]
|
396 |
-
uncond_embeddings *= (previous_mean / uncond_embeddings.mean(axis=(-2, -1)))[:, None, None]
|
397 |
-
|
398 |
-
# For classifier free guidance, we need to do two forward passes.
|
399 |
-
# Here we concatenate the unconditional and text embeddings into a single batch
|
400 |
-
# to avoid doing two forward passes
|
401 |
-
if uncond_prompt is not None:
|
402 |
-
return text_embeddings, uncond_embeddings
|
403 |
-
|
404 |
-
return text_embeddings
|
405 |
-
|
406 |
-
|
407 |
-
def preprocess_image(image):
|
408 |
-
w, h = image.size
|
409 |
-
w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
|
410 |
-
image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
|
411 |
-
image = np.array(image).astype(np.float32) / 255.0
|
412 |
-
image = image[None].transpose(0, 3, 1, 2)
|
413 |
-
return 2.0 * image - 1.0
|
414 |
-
|
415 |
-
|
416 |
-
def preprocess_mask(mask, scale_factor=8):
|
417 |
-
mask = mask.convert("L")
|
418 |
-
w, h = mask.size
|
419 |
-
w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
|
420 |
-
mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
|
421 |
-
mask = np.array(mask).astype(np.float32) / 255.0
|
422 |
-
mask = np.tile(mask, (4, 1, 1))
|
423 |
-
mask = mask[None].transpose(0, 1, 2, 3) # what does this step do?
|
424 |
-
mask = 1 - mask # repaint white, keep black
|
425 |
-
return mask
|
426 |
-
|
427 |
-
|
428 |
-
class OnnxStableDiffusionLongPromptWeightingPipeline(OnnxStableDiffusionPipeline):
|
429 |
-
r"""
|
430 |
-
Pipeline for text-to-image generation using Stable Diffusion without tokens length limit, and support parsing
|
431 |
-
weighting in prompt.
|
432 |
-
|
433 |
-
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
|
434 |
-
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
|
435 |
-
"""
|
436 |
-
if version.parse(version.parse(diffusers.__version__).base_version) >= version.parse("0.9.0"):
|
437 |
-
|
438 |
-
def __init__(
|
439 |
-
self,
|
440 |
-
vae_encoder: OnnxRuntimeModel,
|
441 |
-
vae_decoder: OnnxRuntimeModel,
|
442 |
-
text_encoder: OnnxRuntimeModel,
|
443 |
-
tokenizer: CLIPTokenizer,
|
444 |
-
unet: OnnxRuntimeModel,
|
445 |
-
scheduler: SchedulerMixin,
|
446 |
-
safety_checker: OnnxRuntimeModel,
|
447 |
-
feature_extractor: CLIPImageProcessor,
|
448 |
-
requires_safety_checker: bool = True,
|
449 |
-
):
|
450 |
-
super().__init__(
|
451 |
-
vae_encoder=vae_encoder,
|
452 |
-
vae_decoder=vae_decoder,
|
453 |
-
text_encoder=text_encoder,
|
454 |
-
tokenizer=tokenizer,
|
455 |
-
unet=unet,
|
456 |
-
scheduler=scheduler,
|
457 |
-
safety_checker=safety_checker,
|
458 |
-
feature_extractor=feature_extractor,
|
459 |
-
requires_safety_checker=requires_safety_checker,
|
460 |
-
)
|
461 |
-
self.__init__additional__()
|
462 |
-
|
463 |
-
else:
|
464 |
-
|
465 |
-
def __init__(
|
466 |
-
self,
|
467 |
-
vae_encoder: OnnxRuntimeModel,
|
468 |
-
vae_decoder: OnnxRuntimeModel,
|
469 |
-
text_encoder: OnnxRuntimeModel,
|
470 |
-
tokenizer: CLIPTokenizer,
|
471 |
-
unet: OnnxRuntimeModel,
|
472 |
-
scheduler: SchedulerMixin,
|
473 |
-
safety_checker: OnnxRuntimeModel,
|
474 |
-
feature_extractor: CLIPImageProcessor,
|
475 |
-
):
|
476 |
-
super().__init__(
|
477 |
-
vae_encoder=vae_encoder,
|
478 |
-
vae_decoder=vae_decoder,
|
479 |
-
text_encoder=text_encoder,
|
480 |
-
tokenizer=tokenizer,
|
481 |
-
unet=unet,
|
482 |
-
scheduler=scheduler,
|
483 |
-
safety_checker=safety_checker,
|
484 |
-
feature_extractor=feature_extractor,
|
485 |
-
)
|
486 |
-
self.__init__additional__()
|
487 |
-
|
488 |
-
def __init__additional__(self):
|
489 |
-
self.unet.config.in_channels = 4
|
490 |
-
self.vae_scale_factor = 8
|
491 |
-
|
492 |
-
def _encode_prompt(
|
493 |
-
self,
|
494 |
-
prompt,
|
495 |
-
num_images_per_prompt,
|
496 |
-
do_classifier_free_guidance,
|
497 |
-
negative_prompt,
|
498 |
-
max_embeddings_multiples,
|
499 |
-
):
|
500 |
-
r"""
|
501 |
-
Encodes the prompt into text encoder hidden states.
|
502 |
-
|
503 |
-
Args:
|
504 |
-
prompt (`str` or `list(int)`):
|
505 |
-
prompt to be encoded
|
506 |
-
num_images_per_prompt (`int`):
|
507 |
-
number of images that should be generated per prompt
|
508 |
-
do_classifier_free_guidance (`bool`):
|
509 |
-
whether to use classifier free guidance or not
|
510 |
-
negative_prompt (`str` or `List[str]`):
|
511 |
-
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
|
512 |
-
if `guidance_scale` is less than `1`).
|
513 |
-
max_embeddings_multiples (`int`, *optional*, defaults to `3`):
|
514 |
-
The max multiple length of prompt embeddings compared to the max output length of text encoder.
|
515 |
-
"""
|
516 |
-
batch_size = len(prompt) if isinstance(prompt, list) else 1
|
517 |
-
|
518 |
-
if negative_prompt is None:
|
519 |
-
negative_prompt = [""] * batch_size
|
520 |
-
elif isinstance(negative_prompt, str):
|
521 |
-
negative_prompt = [negative_prompt] * batch_size
|
522 |
-
if batch_size != len(negative_prompt):
|
523 |
-
raise ValueError(
|
524 |
-
f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
|
525 |
-
f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
|
526 |
-
" the batch size of `prompt`."
|
527 |
-
)
|
528 |
-
|
529 |
-
text_embeddings, uncond_embeddings = get_weighted_text_embeddings(
|
530 |
-
pipe=self,
|
531 |
-
prompt=prompt,
|
532 |
-
uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
|
533 |
-
max_embeddings_multiples=max_embeddings_multiples,
|
534 |
-
)
|
535 |
-
|
536 |
-
text_embeddings = text_embeddings.repeat(num_images_per_prompt, 0)
|
537 |
-
if do_classifier_free_guidance:
|
538 |
-
uncond_embeddings = uncond_embeddings.repeat(num_images_per_prompt, 0)
|
539 |
-
text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
|
540 |
-
|
541 |
-
return text_embeddings
|
542 |
-
|
543 |
-
def check_inputs(self, prompt, height, width, strength, callback_steps):
|
544 |
-
if not isinstance(prompt, str) and not isinstance(prompt, list):
|
545 |
-
raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
|
546 |
-
|
547 |
-
if strength < 0 or strength > 1:
|
548 |
-
raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
|
549 |
-
|
550 |
-
if height % 8 != 0 or width % 8 != 0:
|
551 |
-
raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
|
552 |
-
|
553 |
-
if (callback_steps is None) or (
|
554 |
-
callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
|
555 |
-
):
|
556 |
-
raise ValueError(
|
557 |
-
f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
|
558 |
-
f" {type(callback_steps)}."
|
559 |
-
)
|
560 |
-
|
561 |
-
def get_timesteps(self, num_inference_steps, strength, is_text2img):
|
562 |
-
if is_text2img:
|
563 |
-
return self.scheduler.timesteps, num_inference_steps
|
564 |
-
else:
|
565 |
-
# get the original timestep using init_timestep
|
566 |
-
offset = self.scheduler.config.get("steps_offset", 0)
|
567 |
-
init_timestep = int(num_inference_steps * strength) + offset
|
568 |
-
init_timestep = min(init_timestep, num_inference_steps)
|
569 |
-
|
570 |
-
t_start = max(num_inference_steps - init_timestep + offset, 0)
|
571 |
-
timesteps = self.scheduler.timesteps[t_start:]
|
572 |
-
return timesteps, num_inference_steps - t_start
|
573 |
-
|
574 |
-
def run_safety_checker(self, image):
|
575 |
-
if self.safety_checker is not None:
|
576 |
-
safety_checker_input = self.feature_extractor(
|
577 |
-
self.numpy_to_pil(image), return_tensors="np"
|
578 |
-
).pixel_values.astype(image.dtype)
|
579 |
-
# There will throw an error if use safety_checker directly and batchsize>1
|
580 |
-
images, has_nsfw_concept = [], []
|
581 |
-
for i in range(image.shape[0]):
|
582 |
-
image_i, has_nsfw_concept_i = self.safety_checker(
|
583 |
-
clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
|
584 |
-
)
|
585 |
-
images.append(image_i)
|
586 |
-
has_nsfw_concept.append(has_nsfw_concept_i[0])
|
587 |
-
image = np.concatenate(images)
|
588 |
-
else:
|
589 |
-
has_nsfw_concept = None
|
590 |
-
return image, has_nsfw_concept
|
591 |
-
|
592 |
-
def decode_latents(self, latents):
|
593 |
-
latents = 1 / 0.18215 * latents
|
594 |
-
# image = self.vae_decoder(latent_sample=latents)[0]
|
595 |
-
# it seems likes there is a strange result for using half-precision vae decoder if batchsize>1
|
596 |
-
image = np.concatenate(
|
597 |
-
[self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
|
598 |
-
)
|
599 |
-
image = np.clip(image / 2 + 0.5, 0, 1)
|
600 |
-
image = image.transpose((0, 2, 3, 1))
|
601 |
-
return image
|
602 |
-
|
603 |
-
def prepare_extra_step_kwargs(self, generator, eta):
|
604 |
-
# prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
|
605 |
-
# eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
|
606 |
-
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
|
607 |
-
# and should be between [0, 1]
|
608 |
-
|
609 |
-
accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
|
610 |
-
extra_step_kwargs = {}
|
611 |
-
if accepts_eta:
|
612 |
-
extra_step_kwargs["eta"] = eta
|
613 |
-
|
614 |
-
# check if the scheduler accepts generator
|
615 |
-
accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
|
616 |
-
if accepts_generator:
|
617 |
-
extra_step_kwargs["generator"] = generator
|
618 |
-
return extra_step_kwargs
|
619 |
-
|
620 |
-
def prepare_latents(self, image, timestep, batch_size, height, width, dtype, generator, latents=None):
|
621 |
-
if image is None:
|
622 |
-
shape = (
|
623 |
-
batch_size,
|
624 |
-
self.unet.config.in_channels,
|
625 |
-
height // self.vae_scale_factor,
|
626 |
-
width // self.vae_scale_factor,
|
627 |
-
)
|
628 |
-
|
629 |
-
if latents is None:
|
630 |
-
latents = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
|
631 |
-
else:
|
632 |
-
if latents.shape != shape:
|
633 |
-
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
|
634 |
-
|
635 |
-
# scale the initial noise by the standard deviation required by the scheduler
|
636 |
-
latents = (torch.from_numpy(latents) * self.scheduler.init_noise_sigma).numpy()
|
637 |
-
return latents, None, None
|
638 |
-
else:
|
639 |
-
init_latents = self.vae_encoder(sample=image)[0]
|
640 |
-
init_latents = 0.18215 * init_latents
|
641 |
-
init_latents = np.concatenate([init_latents] * batch_size, axis=0)
|
642 |
-
init_latents_orig = init_latents
|
643 |
-
shape = init_latents.shape
|
644 |
-
|
645 |
-
# add noise to latents using the timesteps
|
646 |
-
noise = torch.randn(shape, generator=generator, device="cpu").numpy().astype(dtype)
|
647 |
-
latents = self.scheduler.add_noise(
|
648 |
-
torch.from_numpy(init_latents), torch.from_numpy(noise), timestep
|
649 |
-
).numpy()
|
650 |
-
return latents, init_latents_orig, noise
|
651 |
-
|
652 |
-
@torch.no_grad()
|
653 |
-
def __call__(
|
654 |
-
self,
|
655 |
-
prompt: Union[str, List[str]],
|
656 |
-
negative_prompt: Optional[Union[str, List[str]]] = None,
|
657 |
-
image: Union[np.ndarray, PIL.Image.Image] = None,
|
658 |
-
mask_image: Union[np.ndarray, PIL.Image.Image] = None,
|
659 |
-
height: int = 512,
|
660 |
-
width: int = 512,
|
661 |
-
num_inference_steps: int = 50,
|
662 |
-
guidance_scale: float = 7.5,
|
663 |
-
strength: float = 0.8,
|
664 |
-
num_images_per_prompt: Optional[int] = 1,
|
665 |
-
eta: float = 0.0,
|
666 |
-
generator: Optional[torch.Generator] = None,
|
667 |
-
latents: Optional[np.ndarray] = None,
|
668 |
-
max_embeddings_multiples: Optional[int] = 3,
|
669 |
-
output_type: Optional[str] = "pil",
|
670 |
-
return_dict: bool = True,
|
671 |
-
callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
|
672 |
-
is_cancelled_callback: Optional[Callable[[], bool]] = None,
|
673 |
-
callback_steps: int = 1,
|
674 |
-
**kwargs,
|
675 |
-
):
|
676 |
-
r"""
|
677 |
-
Function invoked when calling the pipeline for generation.
|
678 |
-
|
679 |
-
Args:
|
680 |
-
prompt (`str` or `List[str]`):
|
681 |
-
The prompt or prompts to guide the image generation.
|
682 |
-
negative_prompt (`str` or `List[str]`, *optional*):
|
683 |
-
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
|
684 |
-
if `guidance_scale` is less than `1`).
|
685 |
-
image (`np.ndarray` or `PIL.Image.Image`):
|
686 |
-
`Image`, or tensor representing an image batch, that will be used as the starting point for the
|
687 |
-
process.
|
688 |
-
mask_image (`np.ndarray` or `PIL.Image.Image`):
|
689 |
-
`Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
|
690 |
-
replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
|
691 |
-
PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
|
692 |
-
contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
|
693 |
-
height (`int`, *optional*, defaults to 512):
|
694 |
-
The height in pixels of the generated image.
|
695 |
-
width (`int`, *optional*, defaults to 512):
|
696 |
-
The width in pixels of the generated image.
|
697 |
-
num_inference_steps (`int`, *optional*, defaults to 50):
|
698 |
-
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
|
699 |
-
expense of slower inference.
|
700 |
-
guidance_scale (`float`, *optional*, defaults to 7.5):
|
701 |
-
Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
|
702 |
-
`guidance_scale` is defined as `w` of equation 2. of [Imagen
|
703 |
-
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
|
704 |
-
1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
|
705 |
-
usually at the expense of lower image quality.
|
706 |
-
strength (`float`, *optional*, defaults to 0.8):
|
707 |
-
Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
|
708 |
-
`image` will be used as a starting point, adding more noise to it the larger the `strength`. The
|
709 |
-
number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
|
710 |
-
noise will be maximum and the denoising process will run for the full number of iterations specified in
|
711 |
-
`num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
|
712 |
-
num_images_per_prompt (`int`, *optional*, defaults to 1):
|
713 |
-
The number of images to generate per prompt.
|
714 |
-
eta (`float`, *optional*, defaults to 0.0):
|
715 |
-
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
|
716 |
-
[`schedulers.DDIMScheduler`], will be ignored for others.
|
717 |
-
generator (`torch.Generator`, *optional*):
|
718 |
-
A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
|
719 |
-
deterministic.
|
720 |
-
latents (`np.ndarray`, *optional*):
|
721 |
-
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
|
722 |
-
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
|
723 |
-
tensor will ge generated by sampling using the supplied random `generator`.
|
724 |
-
max_embeddings_multiples (`int`, *optional*, defaults to `3`):
|
725 |
-
The max multiple length of prompt embeddings compared to the max output length of text encoder.
|
726 |
-
output_type (`str`, *optional*, defaults to `"pil"`):
|
727 |
-
The output format of the generate image. Choose between
|
728 |
-
[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
|
729 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
730 |
-
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
731 |
-
plain tuple.
|
732 |
-
callback (`Callable`, *optional*):
|
733 |
-
A function that will be called every `callback_steps` steps during inference. The function will be
|
734 |
-
called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
|
735 |
-
is_cancelled_callback (`Callable`, *optional*):
|
736 |
-
A function that will be called every `callback_steps` steps during inference. If the function returns
|
737 |
-
`True`, the inference will be cancelled.
|
738 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
739 |
-
The frequency at which the `callback` function will be called. If not specified, the callback will be
|
740 |
-
called at every step.
|
741 |
-
|
742 |
-
Returns:
|
743 |
-
`None` if cancelled by `is_cancelled_callback`,
|
744 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
745 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
|
746 |
-
When returning a tuple, the first element is a list with the generated images, and the second element is a
|
747 |
-
list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
|
748 |
-
(nsfw) content, according to the `safety_checker`.
|
749 |
-
"""
|
750 |
-
# 0. Default height and width to unet
|
751 |
-
height = height or self.unet.config.sample_size * self.vae_scale_factor
|
752 |
-
width = width or self.unet.config.sample_size * self.vae_scale_factor
|
753 |
-
|
754 |
-
# 1. Check inputs. Raise error if not correct
|
755 |
-
self.check_inputs(prompt, height, width, strength, callback_steps)
|
756 |
-
|
757 |
-
# 2. Define call parameters
|
758 |
-
batch_size = 1 if isinstance(prompt, str) else len(prompt)
|
759 |
-
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
|
760 |
-
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
|
761 |
-
# corresponds to doing no classifier free guidance.
|
762 |
-
do_classifier_free_guidance = guidance_scale > 1.0
|
763 |
-
|
764 |
-
# 3. Encode input prompt
|
765 |
-
text_embeddings = self._encode_prompt(
|
766 |
-
prompt,
|
767 |
-
num_images_per_prompt,
|
768 |
-
do_classifier_free_guidance,
|
769 |
-
negative_prompt,
|
770 |
-
max_embeddings_multiples,
|
771 |
-
)
|
772 |
-
dtype = text_embeddings.dtype
|
773 |
-
|
774 |
-
# 4. Preprocess image and mask
|
775 |
-
if isinstance(image, PIL.Image.Image):
|
776 |
-
image = preprocess_image(image)
|
777 |
-
if image is not None:
|
778 |
-
image = image.astype(dtype)
|
779 |
-
if isinstance(mask_image, PIL.Image.Image):
|
780 |
-
mask_image = preprocess_mask(mask_image, self.vae_scale_factor)
|
781 |
-
if mask_image is not None:
|
782 |
-
mask = mask_image.astype(dtype)
|
783 |
-
mask = np.concatenate([mask] * batch_size * num_images_per_prompt)
|
784 |
-
else:
|
785 |
-
mask = None
|
786 |
-
|
787 |
-
# 5. set timesteps
|
788 |
-
self.scheduler.set_timesteps(num_inference_steps)
|
789 |
-
timestep_dtype = next(
|
790 |
-
(input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)"
|
791 |
-
)
|
792 |
-
timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
|
793 |
-
timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, image is None)
|
794 |
-
latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
|
795 |
-
|
796 |
-
# 6. Prepare latent variables
|
797 |
-
latents, init_latents_orig, noise = self.prepare_latents(
|
798 |
-
image,
|
799 |
-
latent_timestep,
|
800 |
-
batch_size * num_images_per_prompt,
|
801 |
-
height,
|
802 |
-
width,
|
803 |
-
dtype,
|
804 |
-
generator,
|
805 |
-
latents,
|
806 |
-
)
|
807 |
-
|
808 |
-
# 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
|
809 |
-
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
|
810 |
-
|
811 |
-
# 8. Denoising loop
|
812 |
-
for i, t in enumerate(self.progress_bar(timesteps)):
|
813 |
-
# expand the latents if we are doing classifier free guidance
|
814 |
-
latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
|
815 |
-
latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t)
|
816 |
-
latent_model_input = latent_model_input.numpy()
|
817 |
-
|
818 |
-
# predict the noise residual
|
819 |
-
noise_pred = self.unet(
|
820 |
-
sample=latent_model_input,
|
821 |
-
timestep=np.array([t], dtype=timestep_dtype),
|
822 |
-
encoder_hidden_states=text_embeddings,
|
823 |
-
)
|
824 |
-
noise_pred = noise_pred[0]
|
825 |
-
|
826 |
-
# perform guidance
|
827 |
-
if do_classifier_free_guidance:
|
828 |
-
noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
|
829 |
-
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
|
830 |
-
|
831 |
-
# compute the previous noisy sample x_t -> x_t-1
|
832 |
-
scheduler_output = self.scheduler.step(
|
833 |
-
torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs
|
834 |
-
)
|
835 |
-
latents = scheduler_output.prev_sample.numpy()
|
836 |
-
|
837 |
-
if mask is not None:
|
838 |
-
# masking
|
839 |
-
init_latents_proper = self.scheduler.add_noise(
|
840 |
-
torch.from_numpy(init_latents_orig),
|
841 |
-
torch.from_numpy(noise),
|
842 |
-
t,
|
843 |
-
).numpy()
|
844 |
-
latents = (init_latents_proper * mask) + (latents * (1 - mask))
|
845 |
-
|
846 |
-
# call the callback, if provided
|
847 |
-
if i % callback_steps == 0:
|
848 |
-
if callback is not None:
|
849 |
-
callback(i, t, latents)
|
850 |
-
if is_cancelled_callback is not None and is_cancelled_callback():
|
851 |
-
return None
|
852 |
-
|
853 |
-
# 9. Post-processing
|
854 |
-
image = self.decode_latents(latents)
|
855 |
-
|
856 |
-
# 10. Run safety checker
|
857 |
-
image, has_nsfw_concept = self.run_safety_checker(image)
|
858 |
-
|
859 |
-
# 11. Convert to PIL
|
860 |
-
if output_type == "pil":
|
861 |
-
image = self.numpy_to_pil(image)
|
862 |
-
|
863 |
-
if not return_dict:
|
864 |
-
return image, has_nsfw_concept
|
865 |
-
|
866 |
-
return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
|
867 |
-
|
    def text2img(
        self,
        prompt: Union[str, List[str]],
        negative_prompt: Optional[Union[str, List[str]]] = None,
        height: int = 512,
        width: int = 512,
        num_inference_steps: int = 50,
        guidance_scale: float = 7.5,
        num_images_per_prompt: Optional[int] = 1,
        eta: float = 0.0,
        generator: Optional[torch.Generator] = None,
        latents: Optional[np.ndarray] = None,
        max_embeddings_multiples: Optional[int] = 3,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
        callback_steps: int = 1,
        **kwargs,
    ):
        r"""
        Function for text-to-image generation.
        Args:
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `guidance_scale` is less than `1`).
            height (`int`, *optional*, defaults to 512):
                The height in pixels of the generated image.
            width (`int`, *optional*, defaults to 512):
                The width in pixels of the generated image.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            guidance_scale (`float`, *optional*, defaults to 7.5):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
                [`schedulers.DDIMScheduler`], will be ignored for others.
            generator (`torch.Generator`, *optional*):
                A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
                deterministic.
            latents (`np.ndarray`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor will be generated by sampling using the supplied random `generator`.
            max_embeddings_multiples (`int`, *optional*, defaults to `3`):
                The max multiple length of prompt embeddings compared to the max output length of the text encoder.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Callable`, *optional*):
                A function that will be called every `callback_steps` steps during inference. The function will be
                called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function will be called. If not specified, the callback will be
                called at every step.
        Returns:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
            When returning a tuple, the first element is a list with the generated images, and the second element is a
            list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
            (nsfw) content, according to the `safety_checker`.
        """
        return self.__call__(
            prompt=prompt,
            negative_prompt=negative_prompt,
            height=height,
            width=width,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
            num_images_per_prompt=num_images_per_prompt,
            eta=eta,
            generator=generator,
            latents=latents,
            max_embeddings_multiples=max_embeddings_multiples,
            output_type=output_type,
            return_dict=return_dict,
            callback=callback,
            callback_steps=callback_steps,
            **kwargs,
        )
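    # Usage sketch (illustrative, not from the original file). It assumes the
    # pipeline has already been instantiated as `pipe` from an ONNX Stable
    # Diffusion checkpoint; only `text2img` and `StableDiffusionPipelineOutput.images`
    # below come from this file:
    #
    #     result = pipe.text2img(
    #         prompt="a photo of an astronaut riding a horse",
    #         negative_prompt="low quality, blurry",
    #         width=512,
    #         height=512,
    #         num_inference_steps=50,
    #         guidance_scale=7.5,
    #     )
    #     result.images[0].save("astronaut.png")  # .images is a list of PIL images
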

    def img2img(
        self,
        image: Union[np.ndarray, PIL.Image.Image],
        prompt: Union[str, List[str]],
        negative_prompt: Optional[Union[str, List[str]]] = None,
        strength: float = 0.8,
        num_inference_steps: Optional[int] = 50,
        guidance_scale: Optional[float] = 7.5,
        num_images_per_prompt: Optional[int] = 1,
        eta: Optional[float] = 0.0,
        generator: Optional[torch.Generator] = None,
        max_embeddings_multiples: Optional[int] = 3,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
        callback_steps: int = 1,
        **kwargs,
    ):
        r"""
        Function for image-to-image generation.
        Args:
            image (`np.ndarray` or `PIL.Image.Image`):
                `Image`, or ndarray representing an image batch, that will be used as the starting point for the
                process.
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `guidance_scale` is less than `1`).
            strength (`float`, *optional*, defaults to 0.8):
                Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
                `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
                number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
                noise will be maximum and the denoising process will run for the full number of iterations specified in
                `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference. This parameter will be modulated by `strength`.
            guidance_scale (`float`, *optional*, defaults to 7.5):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
                [`schedulers.DDIMScheduler`], will be ignored for others.
            generator (`torch.Generator`, *optional*):
                A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
                deterministic.
            max_embeddings_multiples (`int`, *optional*, defaults to `3`):
                The max multiple length of prompt embeddings compared to the max output length of the text encoder.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Callable`, *optional*):
                A function that will be called every `callback_steps` steps during inference. The function will be
                called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function will be called. If not specified, the callback will be
                called at every step.
        Returns:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
            When returning a tuple, the first element is a list with the generated images, and the second element is a
            list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
            (nsfw) content, according to the `safety_checker`.
        """
        return self.__call__(
            prompt=prompt,
            negative_prompt=negative_prompt,
            image=image,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
            strength=strength,
            num_images_per_prompt=num_images_per_prompt,
            eta=eta,
            generator=generator,
            max_embeddings_multiples=max_embeddings_multiples,
            output_type=output_type,
            return_dict=return_dict,
            callback=callback,
            callback_steps=callback_steps,
            **kwargs,
        )
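    # Usage sketch (illustrative; `pipe` and the file paths are assumptions,
    # only `img2img` itself comes from this file):
    #
    #     from PIL import Image
    #
    #     init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))
    #     result = pipe.img2img(
    #         image=init_image,
    #         prompt="a detailed oil painting of a castle",
    #         strength=0.75,  # 0 keeps the input unchanged, 1 ignores it entirely
    #         num_inference_steps=50,
    #     )
    #     result.images[0].save("castle.png")
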

    def inpaint(
        self,
        image: Union[np.ndarray, PIL.Image.Image],
        mask_image: Union[np.ndarray, PIL.Image.Image],
        prompt: Union[str, List[str]],
        negative_prompt: Optional[Union[str, List[str]]] = None,
        strength: float = 0.8,
        num_inference_steps: Optional[int] = 50,
        guidance_scale: Optional[float] = 7.5,
        num_images_per_prompt: Optional[int] = 1,
        eta: Optional[float] = 0.0,
        generator: Optional[torch.Generator] = None,
        max_embeddings_multiples: Optional[int] = 3,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
        callback_steps: int = 1,
        **kwargs,
    ):
        r"""
        Function for inpainting.
        Args:
            image (`np.ndarray` or `PIL.Image.Image`):
                `Image`, or tensor representing an image batch, that will be used as the starting point for the
                process. This is the image whose masked region will be inpainted.
            mask_image (`np.ndarray` or `PIL.Image.Image`):
                `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
                replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
                PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
                contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
                if `guidance_scale` is less than `1`).
            strength (`float`, *optional*, defaults to 0.8):
                Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
                is 1, the denoising process will be run on the masked area for the full number of iterations specified
                in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
                noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
                the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
            guidance_scale (`float`, *optional*, defaults to 7.5):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
                [`schedulers.DDIMScheduler`], will be ignored for others.
            generator (`torch.Generator`, *optional*):
                A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
                deterministic.
            max_embeddings_multiples (`int`, *optional*, defaults to `3`):
                The max multiple length of prompt embeddings compared to the max output length of the text encoder.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Callable`, *optional*):
                A function that will be called every `callback_steps` steps during inference. The function will be
                called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function will be called. If not specified, the callback will be
                called at every step.
        Returns:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
            When returning a tuple, the first element is a list with the generated images, and the second element is a
            list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
            (nsfw) content, according to the `safety_checker`.
        """
        return self.__call__(
            prompt=prompt,
            negative_prompt=negative_prompt,
            image=image,
            mask_image=mask_image,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
            strength=strength,
            num_images_per_prompt=num_images_per_prompt,
            eta=eta,
            generator=generator,
            max_embeddings_multiples=max_embeddings_multiples,
            output_type=output_type,
            return_dict=return_dict,
            callback=callback,
            callback_steps=callback_steps,
            **kwargs,
        )
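    # Usage sketch (illustrative; `pipe` and the file paths are assumptions,
    # only `inpaint` itself comes from this file):
    #
    #     from PIL import Image
    #
    #     image = Image.open("photo.png").convert("RGB").resize((512, 512))
    #     mask_image = Image.open("mask.png").convert("L").resize((512, 512))  # white = repaint
    #     result = pipe.inpaint(
    #         image=image,
    #         mask_image=mask_image,
    #         prompt="a bouquet of flowers on the table",
    #         strength=0.8,
    #     )
    #     result.images[0].save("inpainted.png")
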
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/custom_diffusion/train_custom_diffusion.py
DELETED
@@ -1,1306 +0,0 @@
#!/usr/bin/env python
# coding=utf-8
# Copyright 2023 Custom Diffusion authors and the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import hashlib
import itertools
import json
import logging
import math
import os
import random
import shutil
import warnings
from pathlib import Path

import numpy as np
import torch
import torch.nn.functional as F
import torch.utils.checkpoint
import transformers
from accelerate import Accelerator
from accelerate.logging import get_logger
from accelerate.utils import ProjectConfiguration, set_seed
from huggingface_hub import HfApi, create_repo
from packaging import version
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
from tqdm.auto import tqdm
from transformers import AutoTokenizer, PretrainedConfig

import diffusers
from diffusers import (
    AutoencoderKL,
    DDPMScheduler,
    DiffusionPipeline,
    DPMSolverMultistepScheduler,
    UNet2DConditionModel,
)
from diffusers.loaders import AttnProcsLayers
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor, CustomDiffusionXFormersAttnProcessor
from diffusers.optimization import get_scheduler
from diffusers.utils import check_min_version, is_wandb_available
from diffusers.utils.import_utils import is_xformers_available


# Will error if the minimal version of diffusers is not installed. Remove at your own risk.
check_min_version("0.19.0")

logger = get_logger(__name__)


def freeze_params(params):
    for param in params:
        param.requires_grad = False


def save_model_card(repo_id: str, images=None, base_model=str, prompt=str, repo_folder=None):
    img_str = ""
    for i, image in enumerate(images):
        image.save(os.path.join(repo_folder, f"image_{i}.png"))
        img_str += f"![img_{i}](./image_{i}.png)\n"

    yaml = f"""
---
license: creativeml-openrail-m
base_model: {base_model}
instance_prompt: {prompt}
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
    """
    model_card = f"""
# Custom Diffusion - {repo_id}

These are Custom Diffusion adaptation weights for {base_model}. The weights were trained on {prompt} using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following. \n
{img_str}

\nFor more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
"""
    with open(os.path.join(repo_folder, "README.md"), "w") as f:
        f.write(yaml + model_card)


def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
    text_encoder_config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path,
        subfolder="text_encoder",
        revision=revision,
    )
    model_class = text_encoder_config.architectures[0]

    if model_class == "CLIPTextModel":
        from transformers import CLIPTextModel

        return CLIPTextModel
    elif model_class == "RobertaSeriesModelWithTransformation":
        from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation

        return RobertaSeriesModelWithTransformation
    else:
        raise ValueError(f"{model_class} is not supported.")


def collate_fn(examples, with_prior_preservation):
    input_ids = [example["instance_prompt_ids"] for example in examples]
    pixel_values = [example["instance_images"] for example in examples]
    mask = [example["mask"] for example in examples]
    # Concat class and instance examples for prior preservation.
    # We do this to avoid doing two forward passes.
    if with_prior_preservation:
        input_ids += [example["class_prompt_ids"] for example in examples]
        pixel_values += [example["class_images"] for example in examples]
        mask += [example["class_mask"] for example in examples]

    input_ids = torch.cat(input_ids, dim=0)
    pixel_values = torch.stack(pixel_values)
    mask = torch.stack(mask)
    pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
    mask = mask.to(memory_format=torch.contiguous_format).float()

    batch = {"input_ids": input_ids, "pixel_values": pixel_values, "mask": mask.unsqueeze(1)}
    return batch
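A quick way to see what `collate_fn` returns is to feed it dummy examples shaped like the ones the dataset below produces (512x512 images, 64x64 masks, 77-token prompt ids). This is a minimal sketch; the dummy tensors and their sizes are illustrative, not from the original script:

import torch

dummy = {
    "instance_prompt_ids": torch.zeros(1, 77, dtype=torch.long),  # tokenized prompt
    "instance_images": torch.zeros(3, 512, 512),                  # normalized image tensor
    "mask": torch.zeros(64, 64),                                  # valid-region mask
    "class_prompt_ids": torch.zeros(1, 77, dtype=torch.long),
    "class_images": torch.zeros(3, 512, 512),
    "class_mask": torch.ones(64, 64),
}
batch = collate_fn([dummy, dummy], with_prior_preservation=True)
print(batch["input_ids"].shape)     # torch.Size([4, 77])  - instance + class, concatenated
print(batch["pixel_values"].shape)  # torch.Size([4, 3, 512, 512])
print(batch["mask"].shape)          # torch.Size([4, 1, 64, 64]) - after unsqueeze(1)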


class PromptDataset(Dataset):
    "A simple dataset to prepare the prompts to generate class images on multiple GPUs."

    def __init__(self, prompt, num_samples):
        self.prompt = prompt
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, index):
        example = {}
        example["prompt"] = self.prompt
        example["index"] = index
        return example


class CustomDiffusionDataset(Dataset):
    """
    A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
    It pre-processes the images and tokenizes the prompts.
    """

    def __init__(
        self,
        concepts_list,
        tokenizer,
        size=512,
        mask_size=64,
        center_crop=False,
        with_prior_preservation=False,
        num_class_images=200,
        hflip=False,
        aug=True,
    ):
        self.size = size
        self.mask_size = mask_size
        self.center_crop = center_crop
        self.tokenizer = tokenizer
        self.interpolation = Image.BILINEAR
        self.aug = aug

        self.instance_images_path = []
        self.class_images_path = []
        self.with_prior_preservation = with_prior_preservation
        for concept in concepts_list:
            inst_img_path = [
                (x, concept["instance_prompt"]) for x in Path(concept["instance_data_dir"]).iterdir() if x.is_file()
            ]
            self.instance_images_path.extend(inst_img_path)

            if with_prior_preservation:
                class_data_root = Path(concept["class_data_dir"])
                if os.path.isdir(class_data_root):
                    class_images_path = list(class_data_root.iterdir())
                    class_prompt = [concept["class_prompt"] for _ in range(len(class_images_path))]
                else:
                    with open(class_data_root, "r") as f:
                        class_images_path = f.read().splitlines()
                    with open(concept["class_prompt"], "r") as f:
                        class_prompt = f.read().splitlines()

                class_img_path = [(x, y) for (x, y) in zip(class_images_path, class_prompt)]
                self.class_images_path.extend(class_img_path[:num_class_images])

        random.shuffle(self.instance_images_path)
        self.num_instance_images = len(self.instance_images_path)
        self.num_class_images = len(self.class_images_path)
        self._length = max(self.num_class_images, self.num_instance_images)
        self.flip = transforms.RandomHorizontalFlip(0.5 * hflip)

        self.image_transforms = transforms.Compose(
            [
                self.flip,
                transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
                transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
                transforms.ToTensor(),
                transforms.Normalize([0.5], [0.5]),
            ]
        )

    def __len__(self):
        return self._length

    def preprocess(self, image, scale, resample):
        outer, inner = self.size, scale
        factor = self.size // self.mask_size
        if scale > self.size:
            outer, inner = scale, self.size
        top, left = np.random.randint(0, outer - inner + 1), np.random.randint(0, outer - inner + 1)
        image = image.resize((scale, scale), resample=resample)
        image = np.array(image).astype(np.uint8)
        image = (image / 127.5 - 1.0).astype(np.float32)
        instance_image = np.zeros((self.size, self.size, 3), dtype=np.float32)
        mask = np.zeros((self.size // factor, self.size // factor))
        if scale > self.size:
            instance_image = image[top : top + inner, left : left + inner, :]
            mask = np.ones((self.size // factor, self.size // factor))
        else:
            instance_image[top : top + inner, left : left + inner, :] = image
            mask[
                top // factor + 1 : (top + scale) // factor - 1, left // factor + 1 : (left + scale) // factor - 1
            ] = 1.0
        return instance_image, mask
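    # Worked example of the arithmetic above (illustrative, with hypothetical
    # random draws): assuming the defaults size=512 and mask_size=64, so
    # factor = 8, and a downscale draw of scale=256 with top = left = 64:
    #   lo = top // factor + 1 = 9
    #   hi = (top + scale) // factor - 1 = 39
    # i.e. the 256x256 resized image lands at canvas[64:320, 64:320] and
    # mask[9:39, 9:39] is set to 1.0, one mask cell inside the pasted region.
    # For an upscale draw (scale > size), a random size x size crop is taken
    # instead and the mask is all ones.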

    def __getitem__(self, index):
        example = {}
        instance_image, instance_prompt = self.instance_images_path[index % self.num_instance_images]
        instance_image = Image.open(instance_image)
        if not instance_image.mode == "RGB":
            instance_image = instance_image.convert("RGB")
        instance_image = self.flip(instance_image)

        # apply resize augmentation and create a valid image region mask
        random_scale = self.size
        if self.aug:
            random_scale = (
                np.random.randint(self.size // 3, self.size + 1)
                if np.random.uniform() < 0.66
                else np.random.randint(int(1.2 * self.size), int(1.4 * self.size))
            )
        instance_image, mask = self.preprocess(instance_image, random_scale, self.interpolation)

        if random_scale < 0.6 * self.size:
            instance_prompt = np.random.choice(["a far away ", "very small "]) + instance_prompt
        elif random_scale > self.size:
            instance_prompt = np.random.choice(["zoomed in ", "close up "]) + instance_prompt

        example["instance_images"] = torch.from_numpy(instance_image).permute(2, 0, 1)
        example["mask"] = torch.from_numpy(mask)
        example["instance_prompt_ids"] = self.tokenizer(
            instance_prompt,
            truncation=True,
            padding="max_length",
            max_length=self.tokenizer.model_max_length,
            return_tensors="pt",
        ).input_ids

        if self.with_prior_preservation:
            class_image, class_prompt = self.class_images_path[index % self.num_class_images]
            class_image = Image.open(class_image)
            if not class_image.mode == "RGB":
                class_image = class_image.convert("RGB")
            example["class_images"] = self.image_transforms(class_image)
            example["class_mask"] = torch.ones_like(example["mask"])
            example["class_prompt_ids"] = self.tokenizer(
                class_prompt,
                truncation=True,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                return_tensors="pt",
            ).input_ids

        return example
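A minimal sketch of wiring the dataset and collate function together, assuming the directories exist and using placeholder prompts and paths (the checkpoint name, `<new1>` token, and `functools.partial` wrapper are assumptions; the training script itself builds the equivalent objects from `args`):

from functools import partial
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer", use_fast=False
)
concepts = [{
    "instance_prompt": "photo of a <new1> cat",
    "class_prompt": "photo of a cat",
    "instance_data_dir": "./data/cat",       # placeholder paths
    "class_data_dir": "./class_data/cat",
}]
dataset = CustomDiffusionDataset(
    concepts, tokenizer, size=512, mask_size=64, with_prior_preservation=True
)
loader = DataLoader(
    dataset,
    batch_size=2,
    shuffle=True,
    collate_fn=partial(collate_fn, with_prior_preservation=True),
)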


def save_new_embed(text_encoder, modifier_token_id, accelerator, args, output_dir):
    """Saves the new token embeddings from the text encoder."""
    logger.info("Saving embeddings")
    learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight
    for x, y in zip(modifier_token_id, args.modifier_token):
        learned_embeds_dict = {}
        learned_embeds_dict[y] = learned_embeds[x]
        torch.save(learned_embeds_dict, f"{output_dir}/{y}.bin")


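Each `.bin` file written by `save_new_embed` holds a single-entry dict, `{token: embedding}`. A hedged sketch of reading one back (the output path and `<new1>` token name are illustrative; recent diffusers releases can also ingest these files at inference time via `pipe.load_textual_inversion`):

import torch

learned = torch.load("custom-diffusion-model/<new1>.bin")  # {"<new1>": 1-D embedding tensor}
token, embedding = next(iter(learned.items()))
print(token, embedding.shape)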
309 |
-
def parse_args(input_args=None):
|
310 |
-
parser = argparse.ArgumentParser(description="Custom Diffusion training script.")
|
311 |
-
parser.add_argument(
|
312 |
-
"--pretrained_model_name_or_path",
|
313 |
-
type=str,
|
314 |
-
default=None,
|
315 |
-
required=True,
|
316 |
-
help="Path to pretrained model or model identifier from huggingface.co/models.",
|
317 |
-
)
|
318 |
-
parser.add_argument(
|
319 |
-
"--revision",
|
320 |
-
type=str,
|
321 |
-
default=None,
|
322 |
-
required=False,
|
323 |
-
help="Revision of pretrained model identifier from huggingface.co/models.",
|
324 |
-
)
|
325 |
-
parser.add_argument(
|
326 |
-
"--tokenizer_name",
|
327 |
-
type=str,
|
328 |
-
default=None,
|
329 |
-
help="Pretrained tokenizer name or path if not the same as model_name",
|
330 |
-
)
|
331 |
-
parser.add_argument(
|
332 |
-
"--instance_data_dir",
|
333 |
-
type=str,
|
334 |
-
default=None,
|
335 |
-
help="A folder containing the training data of instance images.",
|
336 |
-
)
|
337 |
-
parser.add_argument(
|
338 |
-
"--class_data_dir",
|
339 |
-
type=str,
|
340 |
-
default=None,
|
341 |
-
help="A folder containing the training data of class images.",
|
342 |
-
)
|
343 |
-
parser.add_argument(
|
344 |
-
"--instance_prompt",
|
345 |
-
type=str,
|
346 |
-
default=None,
|
347 |
-
help="The prompt with identifier specifying the instance",
|
348 |
-
)
|
349 |
-
parser.add_argument(
|
350 |
-
"--class_prompt",
|
351 |
-
type=str,
|
352 |
-
default=None,
|
353 |
-
help="The prompt to specify images in the same class as provided instance images.",
|
354 |
-
)
|
355 |
-
parser.add_argument(
|
356 |
-
"--validation_prompt",
|
357 |
-
type=str,
|
358 |
-
default=None,
|
359 |
-
help="A prompt that is used during validation to verify that the model is learning.",
|
360 |
-
)
|
361 |
-
parser.add_argument(
|
362 |
-
"--num_validation_images",
|
363 |
-
type=int,
|
364 |
-
default=2,
|
365 |
-
help="Number of images that should be generated during validation with `validation_prompt`.",
|
366 |
-
)
|
367 |
-
parser.add_argument(
|
368 |
-
"--validation_steps",
|
369 |
-
type=int,
|
370 |
-
default=50,
|
371 |
-
help=(
|
372 |
-
"Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt"
|
373 |
-
" `args.validation_prompt` multiple times: `args.num_validation_images`."
|
374 |
-
),
|
375 |
-
)
|
376 |
-
parser.add_argument(
|
377 |
-
"--with_prior_preservation",
|
378 |
-
default=False,
|
379 |
-
action="store_true",
|
380 |
-
help="Flag to add prior preservation loss.",
|
381 |
-
)
|
382 |
-
parser.add_argument(
|
383 |
-
"--real_prior",
|
384 |
-
default=False,
|
385 |
-
action="store_true",
|
386 |
-
help="real images as prior.",
|
387 |
-
)
|
388 |
-
parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
|
389 |
-
parser.add_argument(
|
390 |
-
"--num_class_images",
|
391 |
-
type=int,
|
392 |
-
default=200,
|
393 |
-
help=(
|
394 |
-
"Minimal class images for prior preservation loss. If there are not enough images already present in"
|
395 |
-
" class_data_dir, additional images will be sampled with class_prompt."
|
396 |
-
),
|
397 |
-
)
|
398 |
-
parser.add_argument(
|
399 |
-
"--output_dir",
|
400 |
-
type=str,
|
401 |
-
default="custom-diffusion-model",
|
402 |
-
help="The output directory where the model predictions and checkpoints will be written.",
|
403 |
-
)
|
404 |
-
parser.add_argument("--seed", type=int, default=42, help="A seed for reproducible training.")
|
405 |
-
parser.add_argument(
|
406 |
-
"--resolution",
|
407 |
-
type=int,
|
408 |
-
default=512,
|
409 |
-
help=(
|
410 |
-
"The resolution for input images, all the images in the train/validation dataset will be resized to this"
|
411 |
-
" resolution"
|
412 |
-
),
|
413 |
-
)
|
414 |
-
parser.add_argument(
|
415 |
-
"--center_crop",
|
416 |
-
default=False,
|
417 |
-
action="store_true",
|
418 |
-
help=(
|
419 |
-
"Whether to center crop the input images to the resolution. If not set, the images will be randomly"
|
420 |
-
" cropped. The images will be resized to the resolution first before cropping."
|
421 |
-
),
|
422 |
-
)
|
423 |
-
parser.add_argument(
|
424 |
-
"--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
|
425 |
-
)
|
426 |
-
parser.add_argument(
|
427 |
-
"--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
|
428 |
-
)
|
429 |
-
parser.add_argument("--num_train_epochs", type=int, default=1)
|
430 |
-
parser.add_argument(
|
431 |
-
"--max_train_steps",
|
432 |
-
type=int,
|
433 |
-
default=None,
|
434 |
-
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
|
435 |
-
)
|
436 |
-
parser.add_argument(
|
437 |
-
"--checkpointing_steps",
|
438 |
-
type=int,
|
439 |
-
default=250,
|
440 |
-
help=(
|
441 |
-
"Save a checkpoint of the training state every X updates. These checkpoints can be used both as final"
|
442 |
-
" checkpoints in case they are better than the last checkpoint, and are also suitable for resuming"
|
443 |
-
" training using `--resume_from_checkpoint`."
|
444 |
-
),
|
445 |
-
)
|
446 |
-
parser.add_argument(
|
447 |
-
"--checkpoints_total_limit",
|
448 |
-
type=int,
|
449 |
-
default=None,
|
450 |
-
help=("Max number of checkpoints to store."),
|
451 |
-
)
|
452 |
-
parser.add_argument(
|
453 |
-
"--resume_from_checkpoint",
|
454 |
-
type=str,
|
455 |
-
default=None,
|
456 |
-
help=(
|
457 |
-
"Whether training should be resumed from a previous checkpoint. Use a path saved by"
|
458 |
-
' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
|
459 |
-
),
|
460 |
-
)
|
461 |
-
parser.add_argument(
|
462 |
-
"--gradient_accumulation_steps",
|
463 |
-
type=int,
|
464 |
-
default=1,
|
465 |
-
help="Number of updates steps to accumulate before performing a backward/update pass.",
|
466 |
-
)
|
467 |
-
parser.add_argument(
|
468 |
-
"--gradient_checkpointing",
|
469 |
-
action="store_true",
|
470 |
-
help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
|
471 |
-
)
|
472 |
-
parser.add_argument(
|
473 |
-
"--learning_rate",
|
474 |
-
type=float,
|
475 |
-
default=1e-5,
|
476 |
-
help="Initial learning rate (after the potential warmup period) to use.",
|
477 |
-
)
|
478 |
-
parser.add_argument(
|
479 |
-
"--scale_lr",
|
480 |
-
action="store_true",
|
481 |
-
default=False,
|
482 |
-
help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
|
483 |
-
)
|
484 |
-
parser.add_argument(
|
485 |
-
"--dataloader_num_workers",
|
486 |
-
type=int,
|
487 |
-
default=2,
|
488 |
-
help=(
|
489 |
-
"Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
|
490 |
-
),
|
491 |
-
)
|
492 |
-
parser.add_argument(
|
493 |
-
"--freeze_model",
|
494 |
-
type=str,
|
495 |
-
default="crossattn_kv",
|
496 |
-
choices=["crossattn_kv", "crossattn"],
|
497 |
-
help="crossattn to enable fine-tuning of all params in the cross attention",
|
498 |
-
)
|
499 |
-
parser.add_argument(
|
500 |
-
"--lr_scheduler",
|
501 |
-
type=str,
|
502 |
-
default="constant",
|
503 |
-
help=(
|
504 |
-
'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
|
505 |
-
' "constant", "constant_with_warmup"]'
|
506 |
-
),
|
507 |
-
)
|
508 |
-
parser.add_argument(
|
509 |
-
"--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
|
510 |
-
)
|
511 |
-
parser.add_argument(
|
512 |
-
"--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
|
513 |
-
)
|
514 |
-
parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
|
515 |
-
parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
|
516 |
-
parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
|
517 |
-
parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
|
518 |
-
parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
|
519 |
-
parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
|
520 |
-
parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
|
521 |
-
parser.add_argument(
|
522 |
-
"--hub_model_id",
|
523 |
-
type=str,
|
524 |
-
default=None,
|
525 |
-
help="The name of the repository to keep in sync with the local `output_dir`.",
|
526 |
-
)
|
527 |
-
parser.add_argument(
|
528 |
-
"--logging_dir",
|
529 |
-
type=str,
|
530 |
-
default="logs",
|
531 |
-
help=(
|
532 |
-
"[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
|
533 |
-
" *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
|
534 |
-
),
|
535 |
-
)
|
536 |
-
parser.add_argument(
|
537 |
-
"--allow_tf32",
|
538 |
-
action="store_true",
|
539 |
-
help=(
|
540 |
-
"Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
|
541 |
-
" https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
|
542 |
-
),
|
543 |
-
)
|
544 |
-
parser.add_argument(
|
545 |
-
"--report_to",
|
546 |
-
type=str,
|
547 |
-
default="tensorboard",
|
548 |
-
help=(
|
549 |
-
'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
|
550 |
-
' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
|
551 |
-
),
|
552 |
-
)
|
553 |
-
parser.add_argument(
|
554 |
-
"--mixed_precision",
|
555 |
-
type=str,
|
556 |
-
default=None,
|
557 |
-
choices=["no", "fp16", "bf16"],
|
558 |
-
help=(
|
559 |
-
"Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
|
560 |
-
" 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
|
561 |
-
" flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
|
562 |
-
),
|
563 |
-
)
|
564 |
-
parser.add_argument(
|
565 |
-
"--prior_generation_precision",
|
566 |
-
type=str,
|
567 |
-
default=None,
|
568 |
-
choices=["no", "fp32", "fp16", "bf16"],
|
569 |
-
help=(
|
570 |
-
"Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
|
571 |
-
" 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32."
|
572 |
-
),
|
573 |
-
)
|
574 |
-
parser.add_argument(
|
575 |
-
"--concepts_list",
|
576 |
-
type=str,
|
577 |
-
default=None,
|
578 |
-
help="Path to json containing multiple concepts, will overwrite parameters like instance_prompt, class_prompt, etc.",
|
579 |
-
)
|
580 |
-
parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
|
581 |
-
parser.add_argument(
|
582 |
-
"--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
|
583 |
-
)
|
584 |
-
parser.add_argument(
|
585 |
-
"--set_grads_to_none",
|
586 |
-
action="store_true",
|
587 |
-
help=(
|
588 |
-
"Save more memory by using setting grads to None instead of zero. Be aware, that this changes certain"
|
589 |
-
" behaviors, so disable this argument if it causes any problems. More info:"
|
590 |
-
" https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html"
|
591 |
-
),
|
592 |
-
)
|
593 |
-
parser.add_argument(
|
594 |
-
"--modifier_token",
|
595 |
-
type=str,
|
596 |
-
default=None,
|
597 |
-
help="A token to use as a modifier for the concept.",
|
598 |
-
)
|
599 |
-
parser.add_argument(
|
600 |
-
"--initializer_token", type=str, default="ktn+pll+ucd", help="A token to use as initializer word."
|
601 |
-
)
|
602 |
-
parser.add_argument("--hflip", action="store_true", help="Apply horizontal flip data augmentation.")
|
603 |
-
parser.add_argument(
|
604 |
-
"--noaug",
|
605 |
-
action="store_true",
|
606 |
-
help="Dont apply augmentation during data augmentation when this flag is enabled.",
|
607 |
-
)
|
608 |
-
|
609 |
-
if input_args is not None:
|
610 |
-
args = parser.parse_args(input_args)
|
611 |
-
else:
|
612 |
-
args = parser.parse_args()
|
613 |
-
|
614 |
-
env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
|
615 |
-
if env_local_rank != -1 and env_local_rank != args.local_rank:
|
616 |
-
args.local_rank = env_local_rank
|
617 |
-
|
618 |
-
if args.with_prior_preservation:
|
619 |
-
if args.concepts_list is None:
|
620 |
-
if args.class_data_dir is None:
|
621 |
-
raise ValueError("You must specify a data directory for class images.")
|
622 |
-
if args.class_prompt is None:
|
623 |
-
raise ValueError("You must specify prompt for class images.")
|
624 |
-
else:
|
625 |
-
# logger is not available yet
|
626 |
-
if args.class_data_dir is not None:
|
627 |
-
warnings.warn("You need not use --class_data_dir without --with_prior_preservation.")
|
628 |
-
if args.class_prompt is not None:
|
629 |
-
warnings.warn("You need not use --class_prompt without --with_prior_preservation.")
|
630 |
-
|
631 |
-
return args
|
632 |
-
|
633 |
-
|
634 |
-
def main(args):
|
635 |
-
logging_dir = Path(args.output_dir, args.logging_dir)
|
636 |
-
|
637 |
-
accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
|
638 |
-
|
639 |
-
accelerator = Accelerator(
|
640 |
-
gradient_accumulation_steps=args.gradient_accumulation_steps,
|
641 |
-
mixed_precision=args.mixed_precision,
|
642 |
-
log_with=args.report_to,
|
643 |
-
project_config=accelerator_project_config,
|
644 |
-
)
|
645 |
-
|
646 |
-
if args.report_to == "wandb":
|
647 |
-
if not is_wandb_available():
|
648 |
-
raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
|
649 |
-
import wandb
|
650 |
-
|
651 |
-
# Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
|
652 |
-
# This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
|
653 |
-
# TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
|
654 |
-
# Make one log on every process with the configuration for debugging.
|
655 |
-
logging.basicConfig(
|
656 |
-
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
|
657 |
-
datefmt="%m/%d/%Y %H:%M:%S",
|
658 |
-
level=logging.INFO,
|
659 |
-
)
|
660 |
-
logger.info(accelerator.state, main_process_only=False)
|
661 |
-
if accelerator.is_local_main_process:
|
662 |
-
transformers.utils.logging.set_verbosity_warning()
|
663 |
-
diffusers.utils.logging.set_verbosity_info()
|
664 |
-
else:
|
665 |
-
transformers.utils.logging.set_verbosity_error()
|
666 |
-
diffusers.utils.logging.set_verbosity_error()
|
667 |
-
|
668 |
-
# We need to initialize the trackers we use, and also store our configuration.
|
669 |
-
# The trackers initializes automatically on the main process.
|
670 |
-
if accelerator.is_main_process:
|
671 |
-
accelerator.init_trackers("custom-diffusion", config=vars(args))
|
672 |
-
|
673 |
-
# If passed along, set the training seed now.
|
674 |
-
if args.seed is not None:
|
675 |
-
set_seed(args.seed)
|
676 |
-
if args.concepts_list is None:
|
677 |
-
args.concepts_list = [
|
678 |
-
{
|
679 |
-
"instance_prompt": args.instance_prompt,
|
680 |
-
"class_prompt": args.class_prompt,
|
681 |
-
"instance_data_dir": args.instance_data_dir,
|
682 |
-
"class_data_dir": args.class_data_dir,
|
683 |
-
}
|
684 |
-
]
|
685 |
-
else:
|
686 |
-
with open(args.concepts_list, "r") as f:
|
687 |
-
args.concepts_list = json.load(f)
|
688 |
-
|
689 |
-
# Generate class images if prior preservation is enabled.
|
690 |
-
if args.with_prior_preservation:
|
691 |
-
for i, concept in enumerate(args.concepts_list):
|
692 |
-
class_images_dir = Path(concept["class_data_dir"])
|
693 |
-
if not class_images_dir.exists():
|
694 |
-
class_images_dir.mkdir(parents=True, exist_ok=True)
|
695 |
-
if args.real_prior:
|
696 |
-
assert (
|
697 |
-
class_images_dir / "images"
|
698 |
-
).exists(), f"Please run: python retrieve.py --class_prompt \"{concept['class_prompt']}\" --class_data_dir {class_images_dir} --num_class_images {args.num_class_images}"
|
699 |
-
assert (
|
700 |
-
len(list((class_images_dir / "images").iterdir())) == args.num_class_images
|
701 |
-
), f"Please run: python retrieve.py --class_prompt \"{concept['class_prompt']}\" --class_data_dir {class_images_dir} --num_class_images {args.num_class_images}"
|
702 |
-
assert (
|
703 |
-
class_images_dir / "caption.txt"
|
704 |
-
).exists(), f"Please run: python retrieve.py --class_prompt \"{concept['class_prompt']}\" --class_data_dir {class_images_dir} --num_class_images {args.num_class_images}"
|
705 |
-
assert (
|
706 |
-
class_images_dir / "images.txt"
|
707 |
-
).exists(), f"Please run: python retrieve.py --class_prompt \"{concept['class_prompt']}\" --class_data_dir {class_images_dir} --num_class_images {args.num_class_images}"
|
708 |
-
concept["class_prompt"] = os.path.join(class_images_dir, "caption.txt")
|
709 |
-
concept["class_data_dir"] = os.path.join(class_images_dir, "images.txt")
|
710 |
-
args.concepts_list[i] = concept
|
711 |
-
accelerator.wait_for_everyone()
|
712 |
-
else:
|
713 |
-
cur_class_images = len(list(class_images_dir.iterdir()))
|
714 |
-
|
715 |
-
if cur_class_images < args.num_class_images:
|
716 |
-
torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
|
717 |
-
if args.prior_generation_precision == "fp32":
|
718 |
-
torch_dtype = torch.float32
|
719 |
-
elif args.prior_generation_precision == "fp16":
|
720 |
-
torch_dtype = torch.float16
|
721 |
-
elif args.prior_generation_precision == "bf16":
|
722 |
-
torch_dtype = torch.bfloat16
|
723 |
-
pipeline = DiffusionPipeline.from_pretrained(
|
724 |
-
args.pretrained_model_name_or_path,
|
725 |
-
torch_dtype=torch_dtype,
|
726 |
-
safety_checker=None,
|
727 |
-
revision=args.revision,
|
728 |
-
)
|
729 |
-
pipeline.set_progress_bar_config(disable=True)
|
730 |
-
|
731 |
-
num_new_images = args.num_class_images - cur_class_images
|
732 |
-
logger.info(f"Number of class images to sample: {num_new_images}.")
|
733 |
-
|
734 |
-
sample_dataset = PromptDataset(args.class_prompt, num_new_images)
|
735 |
-
sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
|
736 |
-
|
737 |
-
sample_dataloader = accelerator.prepare(sample_dataloader)
|
738 |
-
pipeline.to(accelerator.device)
|
739 |
-
|
740 |
-
for example in tqdm(
|
741 |
-
sample_dataloader,
|
742 |
-
desc="Generating class images",
|
743 |
-
disable=not accelerator.is_local_main_process,
|
744 |
-
):
|
745 |
-
images = pipeline(example["prompt"]).images
|
746 |
-
|
747 |
-
for i, image in enumerate(images):
|
748 |
-
hash_image = hashlib.sha1(image.tobytes()).hexdigest()
|
749 |
-
image_filename = (
|
750 |
-
class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg"
|
751 |
-
)
|
752 |
-
image.save(image_filename)
|
753 |
-
|
754 |
-
del pipeline
|
755 |
-
if torch.cuda.is_available():
|
756 |
-
torch.cuda.empty_cache()
|
757 |
-
|
758 |
-
# Handle the repository creation
|
759 |
-
if accelerator.is_main_process:
|
760 |
-
if args.output_dir is not None:
|
761 |
-
os.makedirs(args.output_dir, exist_ok=True)
|
762 |
-
|
763 |
-
if args.push_to_hub:
|
764 |
-
repo_id = create_repo(
|
765 |
-
repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
|
766 |
-
).repo_id
|
767 |
-
|
768 |
-
# Load the tokenizer
|
769 |
-
if args.tokenizer_name:
|
770 |
-
tokenizer = AutoTokenizer.from_pretrained(
|
771 |
-
args.tokenizer_name,
|
772 |
-
revision=args.revision,
|
773 |
-
use_fast=False,
|
774 |
-
)
|
775 |
-
elif args.pretrained_model_name_or_path:
|
776 |
-
tokenizer = AutoTokenizer.from_pretrained(
|
777 |
-
args.pretrained_model_name_or_path,
|
778 |
-
subfolder="tokenizer",
|
779 |
-
revision=args.revision,
|
780 |
-
use_fast=False,
|
781 |
-
)
|
782 |
-
|
783 |
-
# import correct text encoder class
|
784 |
-
text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision)
|
785 |
-
|
786 |
-
# Load scheduler and models
|
787 |
-
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
|
788 |
-
text_encoder = text_encoder_cls.from_pretrained(
|
789 |
-
args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
|
790 |
-
)
|
791 |
-
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
|
792 |
-
unet = UNet2DConditionModel.from_pretrained(
|
793 |
-
args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
|
794 |
-
)
|
795 |
-
|
796 |
-
# Adding a modifier token which is optimized ####
|
797 |
-
# Code taken from https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py
|
798 |
-
modifier_token_id = []
|
799 |
-
initializer_token_id = []
|
800 |
-
if args.modifier_token is not None:
|
801 |
-
args.modifier_token = args.modifier_token.split("+")
|
802 |
-
args.initializer_token = args.initializer_token.split("+")
|
803 |
-
if len(args.modifier_token) > len(args.initializer_token):
|
804 |
-
raise ValueError("You must specify + separated initializer token for each modifier token.")
|
805 |
-
-        for modifier_token, initializer_token in zip(
-            args.modifier_token, args.initializer_token[: len(args.modifier_token)]
-        ):
-            # Add the placeholder token in tokenizer
-            num_added_tokens = tokenizer.add_tokens(modifier_token)
-            if num_added_tokens == 0:
-                raise ValueError(
-                    f"The tokenizer already contains the token {modifier_token}. Please pass a different"
-                    " `modifier_token` that is not already in the tokenizer."
-                )
-
-            # Convert the initializer_token, placeholder_token to ids
-            token_ids = tokenizer.encode([initializer_token], add_special_tokens=False)
-            print(token_ids)
-            # Check if initializer_token is a single token or a sequence of tokens
-            if len(token_ids) > 1:
-                raise ValueError("The initializer token must be a single token.")
-
-            initializer_token_id.append(token_ids[0])
-            modifier_token_id.append(tokenizer.convert_tokens_to_ids(modifier_token))
-
-        # Resize the token embeddings as we are adding new special tokens to the tokenizer
-        text_encoder.resize_token_embeddings(len(tokenizer))
-
-        # Initialise the newly added placeholder token with the embeddings of the initializer token
-        token_embeds = text_encoder.get_input_embeddings().weight.data
-        for x, y in zip(modifier_token_id, initializer_token_id):
-            token_embeds[x] = token_embeds[y]
-
-        # Freeze all parameters except for the token embeddings in text encoder
-        params_to_freeze = itertools.chain(
-            text_encoder.text_model.encoder.parameters(),
-            text_encoder.text_model.final_layer_norm.parameters(),
-            text_encoder.text_model.embeddings.position_embedding.parameters(),
-        )
-        freeze_params(params_to_freeze)
-    ########################################################
-    ########################################################
-
-    vae.requires_grad_(False)
-    if args.modifier_token is None:
-        text_encoder.requires_grad_(False)
-    unet.requires_grad_(False)
-    # For mixed precision training we cast the text_encoder and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
-    weight_dtype = torch.float32
-    if accelerator.mixed_precision == "fp16":
-        weight_dtype = torch.float16
-    elif accelerator.mixed_precision == "bf16":
-        weight_dtype = torch.bfloat16
-
-    # Move unet, vae and text_encoder to device and cast to weight_dtype
-    if accelerator.mixed_precision != "fp16" and args.modifier_token is not None:
-        text_encoder.to(accelerator.device, dtype=weight_dtype)
-    unet.to(accelerator.device, dtype=weight_dtype)
-    vae.to(accelerator.device, dtype=weight_dtype)
-
-    attention_class = CustomDiffusionAttnProcessor
-    if args.enable_xformers_memory_efficient_attention:
-        if is_xformers_available():
-            import xformers
-
-            xformers_version = version.parse(xformers.__version__)
-            if xformers_version == version.parse("0.0.16"):
-                logger.warn(
-                    "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
-                )
-            attention_class = CustomDiffusionXFormersAttnProcessor
-        else:
-            raise ValueError("xformers is not available. Make sure it is installed correctly")
-
-    # now we will add new Custom Diffusion weights to the attention layers
-    # It's important to realize here how many attention weights will be added and of which sizes
-    # The sizes of the attention layers consist only of two different variables:
-    # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`.
-    # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`.
-
-    # Let's first see how many attention processors we will have to set.
-    # For Stable Diffusion, it should be equal to:
-    # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12
-    # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2
-    # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18
-    # => 32 layers
-
-    # Only train key, value projection layers if freeze_model = 'crossattn_kv' else train all params in the cross attention layer
-    train_kv = True
-    train_q_out = False if args.freeze_model == "crossattn_kv" else True
-    custom_diffusion_attn_procs = {}
-
-    st = unet.state_dict()
-    for name, _ in unet.attn_processors.items():
-        cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
-        if name.startswith("mid_block"):
-            hidden_size = unet.config.block_out_channels[-1]
-        elif name.startswith("up_blocks"):
-            block_id = int(name[len("up_blocks.")])
-            hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
-        elif name.startswith("down_blocks"):
-            block_id = int(name[len("down_blocks.")])
-            hidden_size = unet.config.block_out_channels[block_id]
-        layer_name = name.split(".processor")[0]
-        weights = {
-            "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"],
-            "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"],
-        }
-        if train_q_out:
-            weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"]
-            weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"]
-            weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"]
-        if cross_attention_dim is not None:
-            custom_diffusion_attn_procs[name] = attention_class(
-                train_kv=train_kv,
-                train_q_out=train_q_out,
-                hidden_size=hidden_size,
-                cross_attention_dim=cross_attention_dim,
-            ).to(unet.device)
-            custom_diffusion_attn_procs[name].load_state_dict(weights)
-        else:
-            custom_diffusion_attn_procs[name] = attention_class(
-                train_kv=False,
-                train_q_out=False,
-                hidden_size=hidden_size,
-                cross_attention_dim=cross_attention_dim,
-            )
-    del st
-    unet.set_attn_processor(custom_diffusion_attn_procs)
-    custom_diffusion_layers = AttnProcsLayers(unet.attn_processors)
-
-    accelerator.register_for_checkpointing(custom_diffusion_layers)
-
-    if args.gradient_checkpointing:
-        unet.enable_gradient_checkpointing()
-        if args.modifier_token is not None:
-            text_encoder.gradient_checkpointing_enable()
-    # Enable TF32 for faster training on Ampere GPUs,
-    # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
-    if args.allow_tf32:
-        torch.backends.cuda.matmul.allow_tf32 = True
-
-    if args.scale_lr:
-        args.learning_rate = (
-            args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
-        )
-        if args.with_prior_preservation:
-            args.learning_rate = args.learning_rate * 2.0
-
-    # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
-    if args.use_8bit_adam:
-        try:
-            import bitsandbytes as bnb
-        except ImportError:
-            raise ImportError(
-                "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
-            )
-
-        optimizer_class = bnb.optim.AdamW8bit
-    else:
-        optimizer_class = torch.optim.AdamW
-
-    # Optimizer creation
-    optimizer = optimizer_class(
-        itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters())
-        if args.modifier_token is not None
-        else custom_diffusion_layers.parameters(),
-        lr=args.learning_rate,
-        betas=(args.adam_beta1, args.adam_beta2),
-        weight_decay=args.adam_weight_decay,
-        eps=args.adam_epsilon,
-    )
-
-    # Dataset and DataLoaders creation:
-    train_dataset = CustomDiffusionDataset(
-        concepts_list=args.concepts_list,
-        tokenizer=tokenizer,
-        with_prior_preservation=args.with_prior_preservation,
-        size=args.resolution,
-        mask_size=vae.encode(
-            torch.randn(1, 3, args.resolution, args.resolution).to(dtype=weight_dtype).to(accelerator.device)
-        )
-        .latent_dist.sample()
-        .size()[-1],
-        center_crop=args.center_crop,
-        num_class_images=args.num_class_images,
-        hflip=args.hflip,
-        aug=not args.noaug,
-    )
-
-    train_dataloader = torch.utils.data.DataLoader(
-        train_dataset,
-        batch_size=args.train_batch_size,
-        shuffle=True,
-        collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation),
-        num_workers=args.dataloader_num_workers,
-    )
-
-    # Scheduler and math around the number of training steps.
-    overrode_max_train_steps = False
-    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
-    if args.max_train_steps is None:
-        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-        overrode_max_train_steps = True
-
-    lr_scheduler = get_scheduler(
-        args.lr_scheduler,
-        optimizer=optimizer,
-        num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
-        num_training_steps=args.max_train_steps * accelerator.num_processes,
-    )
-
-    # Prepare everything with our `accelerator`.
-    if args.modifier_token is not None:
-        custom_diffusion_layers, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
-            custom_diffusion_layers, text_encoder, optimizer, train_dataloader, lr_scheduler
-        )
-    else:
-        custom_diffusion_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
-            custom_diffusion_layers, optimizer, train_dataloader, lr_scheduler
-        )
-
-    # We need to recalculate our total training steps as the size of the training dataloader may have changed.
-    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
-    if overrode_max_train_steps:
-        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-    # Afterwards we recalculate our number of training epochs
-    args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
-    # Train!
-    total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
-    logger.info("***** Running training *****")
-    logger.info(f" Num examples = {len(train_dataset)}")
-    logger.info(f" Num batches each epoch = {len(train_dataloader)}")
-    logger.info(f" Num Epochs = {args.num_train_epochs}")
-    logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
-    logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
-    logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
-    logger.info(f" Total optimization steps = {args.max_train_steps}")
-    global_step = 0
-    first_epoch = 0
-
-    # Potentially load in the weights and states from a previous save
-    if args.resume_from_checkpoint:
-        if args.resume_from_checkpoint != "latest":
-            path = os.path.basename(args.resume_from_checkpoint)
-        else:
-            # Get the most recent checkpoint
-            dirs = os.listdir(args.output_dir)
-            dirs = [d for d in dirs if d.startswith("checkpoint")]
-            dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
-            path = dirs[-1] if len(dirs) > 0 else None
-
-        if path is None:
-            accelerator.print(
-                f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
-            )
-            args.resume_from_checkpoint = None
-        else:
-            accelerator.print(f"Resuming from checkpoint {path}")
-            accelerator.load_state(os.path.join(args.output_dir, path))
-            global_step = int(path.split("-")[1])
-
-            resume_global_step = global_step * args.gradient_accumulation_steps
-            first_epoch = global_step // num_update_steps_per_epoch
-            resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
-
-    # Only show the progress bar once on each machine.
-    progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
-    progress_bar.set_description("Steps")
-
-    for epoch in range(first_epoch, args.num_train_epochs):
-        unet.train()
-        if args.modifier_token is not None:
-            text_encoder.train()
-        for step, batch in enumerate(train_dataloader):
-            # Skip steps until we reach the resumed step
-            if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
-                if step % args.gradient_accumulation_steps == 0:
-                    progress_bar.update(1)
-                continue
-
-            with accelerator.accumulate(unet), accelerator.accumulate(text_encoder):
-                # Convert images to latent space
-                latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
-                latents = latents * vae.config.scaling_factor
-
-                # Sample noise that we'll add to the latents
-                noise = torch.randn_like(latents)
-                bsz = latents.shape[0]
-                # Sample a random timestep for each image
-                timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
-                timesteps = timesteps.long()
-
-                # Add noise to the latents according to the noise magnitude at each timestep
-                # (this is the forward diffusion process)
-                noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
-                # Get the text embedding for conditioning
-                encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
-                # Predict the noise residual
-                model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
-                # Get the target for loss depending on the prediction type
-                if noise_scheduler.config.prediction_type == "epsilon":
-                    target = noise
-                elif noise_scheduler.config.prediction_type == "v_prediction":
-                    target = noise_scheduler.get_velocity(latents, noise, timesteps)
-                else:
-                    raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
-                if args.with_prior_preservation:
-                    # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
-                    model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
-                    target, target_prior = torch.chunk(target, 2, dim=0)
-                    mask = torch.chunk(batch["mask"], 2, dim=0)[0]
-                    # Compute instance loss
-                    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
-                    loss = ((loss * mask).sum([1, 2, 3]) / mask.sum([1, 2, 3])).mean()
-
-                    # Compute prior loss
-                    prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
-                    # Add the prior loss to the instance loss.
-                    loss = loss + args.prior_loss_weight * prior_loss
-                else:
-                    mask = batch["mask"]
-                    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
-                    loss = ((loss * mask).sum([1, 2, 3]) / mask.sum([1, 2, 3])).mean()
-                accelerator.backward(loss)
-                # Zero out the gradients for all token embeddings except the newly added
-                # embeddings for the concept, as we only want to optimize the concept embeddings
-                if args.modifier_token is not None:
-                    if accelerator.num_processes > 1:
-                        grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad
-                    else:
-                        grads_text_encoder = text_encoder.get_input_embeddings().weight.grad
-                    # Get the index for tokens that we want to zero the grads for
-                    index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0]
-                    for i in range(len(modifier_token_id[1:])):
-                        index_grads_to_zero = index_grads_to_zero & (
-                            torch.arange(len(tokenizer)) != modifier_token_id[i]
-                        )
-                    grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[
-                        index_grads_to_zero, :
-                    ].fill_(0)
-
-                if accelerator.sync_gradients:
-                    params_to_clip = (
-                        itertools.chain(text_encoder.parameters(), custom_diffusion_layers.parameters())
-                        if args.modifier_token is not None
-                        else custom_diffusion_layers.parameters()
-                    )
-                    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
-                optimizer.step()
-                lr_scheduler.step()
-                optimizer.zero_grad(set_to_none=args.set_grads_to_none)
-
-            # Checks if the accelerator has performed an optimization step behind the scenes
-            if accelerator.sync_gradients:
-                progress_bar.update(1)
-                global_step += 1
-
-                if global_step % args.checkpointing_steps == 0:
-                    if accelerator.is_main_process:
-                        # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
-                        if args.checkpoints_total_limit is not None:
-                            checkpoints = os.listdir(args.output_dir)
-                            checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
-                            checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
-
-                            # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
-                            if len(checkpoints) >= args.checkpoints_total_limit:
-                                num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
-                                removing_checkpoints = checkpoints[0:num_to_remove]
-
-                                logger.info(
-                                    f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
-                                )
-                                logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
-
-                                for removing_checkpoint in removing_checkpoints:
-                                    removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
-                                    shutil.rmtree(removing_checkpoint)
-
-                        save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
-                        accelerator.save_state(save_path)
-                        logger.info(f"Saved state to {save_path}")
-
-            logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
-            progress_bar.set_postfix(**logs)
-            accelerator.log(logs, step=global_step)
-
-            if global_step >= args.max_train_steps:
-                break
-
-            if accelerator.is_main_process:
-                if args.validation_prompt is not None and global_step % args.validation_steps == 0:
-                    logger.info(
-                        f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
-                        f" {args.validation_prompt}."
-                    )
-                    # create pipeline
-                    pipeline = DiffusionPipeline.from_pretrained(
-                        args.pretrained_model_name_or_path,
-                        unet=accelerator.unwrap_model(unet),
-                        text_encoder=accelerator.unwrap_model(text_encoder),
-                        tokenizer=tokenizer,
-                        revision=args.revision,
-                        torch_dtype=weight_dtype,
-                    )
-                    pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
-                    pipeline = pipeline.to(accelerator.device)
-                    pipeline.set_progress_bar_config(disable=True)
-
-                    # run inference
-                    generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
-                    images = [
-                        pipeline(args.validation_prompt, num_inference_steps=25, generator=generator, eta=1.0).images[0]
-                        for _ in range(args.num_validation_images)
-                    ]
-
-                    for tracker in accelerator.trackers:
-                        if tracker.name == "tensorboard":
-                            np_images = np.stack([np.asarray(img) for img in images])
-                            tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
-                        if tracker.name == "wandb":
-                            tracker.log(
-                                {
-                                    "validation": [
-                                        wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
-                                        for i, image in enumerate(images)
-                                    ]
-                                }
-                            )
-
-                    del pipeline
-                    torch.cuda.empty_cache()
-
-    # Save the custom diffusion layers
-    accelerator.wait_for_everyone()
-    if accelerator.is_main_process:
-        unet = unet.to(torch.float32)
-        unet.save_attn_procs(args.output_dir)
-        save_new_embed(text_encoder, modifier_token_id, accelerator, args, args.output_dir)
-
-        # Final inference
-        # Load previous pipeline
-        pipeline = DiffusionPipeline.from_pretrained(
-            args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype
-        )
-        pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
-        pipeline = pipeline.to(accelerator.device)
-
-        # load attention processors
-        pipeline.unet.load_attn_procs(args.output_dir, weight_name="pytorch_custom_diffusion_weights.bin")
-        for token in args.modifier_token:
-            pipeline.load_textual_inversion(args.output_dir, weight_name=f"{token}.bin")
-
-        # run inference
-        if args.validation_prompt and args.num_validation_images > 0:
-            generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None
-            images = [
-                pipeline(args.validation_prompt, num_inference_steps=25, generator=generator, eta=1.0).images[0]
-                for _ in range(args.num_validation_images)
-            ]
-
-            for tracker in accelerator.trackers:
-                if tracker.name == "tensorboard":
-                    np_images = np.stack([np.asarray(img) for img in images])
-                    tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC")
-                if tracker.name == "wandb":
-                    tracker.log(
-                        {
-                            "test": [
-                                wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
-                                for i, image in enumerate(images)
-                            ]
-                        }
-                    )
-
-        if args.push_to_hub:
-            save_model_card(
-                repo_id,
-                images=images,
-                base_model=args.pretrained_model_name_or_path,
-                prompt=args.instance_prompt,
-                repo_folder=args.output_dir,
-            )
-            api = HfApi(token=args.hub_token)
-            api.upload_folder(
-                repo_id=repo_id,
-                folder_path=args.output_dir,
-                commit_message="End of training",
-                ignore_patterns=["step_*", "epoch_*"],
-            )
-
-    accelerator.end_training()
-
-
-if __name__ == "__main__":
-    args = parse_args()
-    main(args)
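
The comment block in the deleted script reasons that a Stable Diffusion UNet exposes 32 attention processors (12 down + 2 mid + 18 up) and derives each processor's hidden_size from unet.config.block_out_channels. Below is a minimal standalone sketch (not part of the deleted script) to check that reasoning against a real UNet; runwayml/stable-diffusion-v1-5 is an assumed, publicly available SD 1.x checkpoint, and the sizing logic is copied from the training loop above:

from diffusers import UNet2DConditionModel

# Assumed example checkpoint; any SD 1.x UNet should give the same counts.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
names = list(unet.attn_processors.keys())
print(len(names))  # expected: 32 for SD 1.x

for name in names:
    # Recompute hidden_size exactly as the deleted script does.
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        hidden_size = list(reversed(unet.config.block_out_channels))[int(name[len("up_blocks.")])]
    elif name.startswith("down_blocks"):
        hidden_size = unet.config.block_out_channels[int(name[len("down_blocks.")])]
    # attn1 is self-attention (no cross-attention dim); attn2 attends to the text embedding.
    cross_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    print(name, hidden_size, cross_dim)
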
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py
DELETED
@@ -1,13 +0,0 @@
-_base_ = './faster_rcnn_r50_fpn_2x_coco.py'
-model = dict(
-    pretrained='open-mmlab://resnext101_32x4d',
-    backbone=dict(
-        type='ResNeXt',
-        depth=101,
-        groups=32,
-        base_width=4,
-        num_stages=4,
-        out_indices=(0, 1, 2, 3),
-        frozen_stages=1,
-        norm_cfg=dict(type='BN', requires_grad=True),
-        style='pytorch'))
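
This config only overrides the backbone; everything else comes from the `_base_` file, resolved at load time. A small sketch of how the merged result can be inspected, assuming an mmcv 1.x install and that the config tree above is checked out locally (in newer mmengine-based stacks the same class lives at mmengine.config.Config):

from mmcv import Config

cfg = Config.fromfile("configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py")
print(cfg.model.backbone.type)    # 'ResNeXt' -- overridden here
print(cfg.model.backbone.groups)  # 32
print(cfg.model.rpn_head.type)    # inherited unchanged from the _base_ config
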
spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_mixins.py
DELETED
@@ -1,104 +0,0 @@
-"""This module defines the :class:`NiceRepr` mixin class, which defines a
-``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__``
-method, which you must define. This means you only have to overload one
-function instead of two. Furthermore, if the object defines a ``__len__``
-method, then the ``__nice__`` method defaults to something sensible, otherwise
-it is treated as abstract and raises ``NotImplementedError``.
-
-To use simply have your object inherit from :class:`NiceRepr`
-(multi-inheritance should be ok).
-
-This code was copied from the ubelt library: https://github.com/Erotemic/ubelt
-
-Example:
-    >>> # Objects that define __nice__ have a default __str__ and __repr__
-    >>> class Student(NiceRepr):
-    ...    def __init__(self, name):
-    ...        self.name = name
-    ...    def __nice__(self):
-    ...        return self.name
-    >>> s1 = Student('Alice')
-    >>> s2 = Student('Bob')
-    >>> print(f's1 = {s1}')
-    >>> print(f's2 = {s2}')
-    s1 = <Student(Alice)>
-    s2 = <Student(Bob)>
-
-Example:
-    >>> # Objects that define __len__ have a default __nice__
-    >>> class Group(NiceRepr):
-    ...    def __init__(self, data):
-    ...        self.data = data
-    ...    def __len__(self):
-    ...        return len(self.data)
-    >>> g = Group([1, 2, 3])
-    >>> print(f'g = {g}')
-    g = <Group(3)>
-"""
-import warnings
-
-
-class NiceRepr(object):
-    """Inherit from this class and define ``__nice__`` to "nicely" print your
-    objects.
-
-    Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function
-    Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``.
-    If the inheriting class has a ``__len__``, method then the default
-    ``__nice__`` method will return its length.
-
-    Example:
-        >>> class Foo(NiceRepr):
-        ...    def __nice__(self):
-        ...        return 'info'
-        >>> foo = Foo()
-        >>> assert str(foo) == '<Foo(info)>'
-        >>> assert repr(foo).startswith('<Foo(info) at ')
-
-    Example:
-        >>> class Bar(NiceRepr):
-        ...    pass
-        >>> bar = Bar()
-        >>> import pytest
-        >>> with pytest.warns(None) as record:
-        >>>     assert 'object at' in str(bar)
-        >>>     assert 'object at' in repr(bar)
-
-    Example:
-        >>> class Baz(NiceRepr):
-        ...    def __len__(self):
-        ...        return 5
-        >>> baz = Baz()
-        >>> assert str(baz) == '<Baz(5)>'
-    """
-
-    def __nice__(self):
-        """str: a "nice" summary string describing this module"""
-        if hasattr(self, '__len__'):
-            # It is a common pattern for objects to use __len__ in __nice__
-            # As a convenience we define a default __nice__ for these objects
-            return str(len(self))
-        else:
-            # In all other cases force the subclass to overload __nice__
-            raise NotImplementedError(
-                f'Define the __nice__ method for {self.__class__!r}')
-
-    def __repr__(self):
-        """str: the string of the module"""
-        try:
-            nice = self.__nice__()
-            classname = self.__class__.__name__
-            return f'<{classname}({nice}) at {hex(id(self))}>'
-        except NotImplementedError as ex:
-            warnings.warn(str(ex), category=RuntimeWarning)
-            return object.__repr__(self)
-
-    def __str__(self):
-        """str: the string of the module"""
-        try:
-            classname = self.__class__.__name__
-            nice = self.__nice__()
-            return f'<{classname}({nice})>'
-        except NotImplementedError as ex:
-            warnings.warn(str(ex), category=RuntimeWarning)
-            return object.__repr__(self)
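
Beyond the doctests above, a quick usage sketch of the mixin in practice (BoxList is a hypothetical class; this assumes the module is importable as mmdet.utils.util_mixins): defining __nice__ is the only overload needed, and mixing with a concrete base such as list works as the docstring promises.

from mmdet.utils.util_mixins import NiceRepr

class BoxList(NiceRepr, list):  # hypothetical container for illustration
    def __nice__(self):
        return f'n={len(self)}'

boxes = BoxList([(0, 0, 10, 10)])
print(str(boxes))   # <BoxList(n=1)>
print(repr(boxes))  # <BoxList(n=1) at 0x...> (address varies)
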
spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_20k_voc12aug.py
DELETED
@@ -1,9 +0,0 @@
-_base_ = './ocrnet_hr18_512x512_20k_voc12aug.py'
-model = dict(
-    pretrained='open-mmlab://msra/hrnetv2_w18_small',
-    backbone=dict(
-        extra=dict(
-            stage1=dict(num_blocks=(2, )),
-            stage2=dict(num_blocks=(2, 2)),
-            stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
-            stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/text_generation.py
DELETED
@@ -1,397 +0,0 @@
-import ast
-import copy
-import html
-import random
-import re
-import time
-import traceback
-
-import numpy as np
-import torch
-import transformers
-from transformers import LogitsProcessorList
-
-import modules.shared as shared
-from modules.callbacks import (
-    Iteratorize,
-    Stream,
-    _StopEverythingStoppingCriteria
-)
-from modules.extensions import apply_extensions
-from modules.grammar import GrammarLogitsProcessor
-from modules.html_generator import generate_4chan_html, generate_basic_html
-from modules.logging_colors import logger
-from modules.models import clear_torch_cache, local_rank
-
-
-def generate_reply(*args, **kwargs):
-    shared.generation_lock.acquire()
-    try:
-        for result in _generate_reply(*args, **kwargs):
-            yield result
-    finally:
-        shared.generation_lock.release()
-
-
-def _generate_reply(question, state, stopping_strings=None, is_chat=False, escape_html=False):
-
-    # Find the appropriate generation function
-    generate_func = apply_extensions('custom_generate_reply')
-    if generate_func is None:
-        if shared.model_name == 'None' or shared.model is None:
-            logger.error("No model is loaded! Select one in the Model tab.")
-            yield ''
-            return
-
-        if shared.model.__class__.__name__ in ['LlamaCppModel', 'RWKVModel', 'ExllamaModel', 'Exllamav2Model', 'CtransformersModel']:
-            generate_func = generate_reply_custom
-        else:
-            generate_func = generate_reply_HF
-
-    # Prepare the input
-    original_question = question
-    if not is_chat:
-        state = apply_extensions('state', state)
-        question = apply_extensions('input', question, state)
-
-    # Find the stopping strings
-    all_stop_strings = []
-    for st in (stopping_strings, ast.literal_eval(f"[{state['custom_stopping_strings']}]")):
-        if type(st) is list and len(st) > 0:
-            all_stop_strings += st
-
-    if shared.args.verbose:
-        print(f'\n\n{question}\n--------------------\n')
-
-    shared.stop_everything = False
-    clear_torch_cache()
-    seed = set_manual_seed(state['seed'])
-    last_update = -1
-    reply = ''
-    is_stream = state['stream']
-    if len(all_stop_strings) > 0 and not state['stream']:
-        state = copy.deepcopy(state)
-        state['stream'] = True
-
-    # Generate
-    for reply in generate_func(question, original_question, seed, state, stopping_strings, is_chat=is_chat):
-        if escape_html:
-            reply = html.escape(reply)
-
-        reply, stop_found = apply_stopping_strings(reply, all_stop_strings)
-        if is_stream:
-            cur_time = time.time()
-
-            # Maximum number of tokens/second
-            if state['max_tokens_second'] > 0:
-                diff = 1 / state['max_tokens_second'] - (cur_time - last_update)
-                if diff > 0:
-                    time.sleep(diff)
-
-                last_update = time.time()
-                yield reply
-
-            # Limit updates to 24 per second to not stress low latency networks
-            else:
-                if cur_time - last_update > 0.041666666666666664:
-                    last_update = cur_time
-                    yield reply
-
-        if stop_found or (state['max_tokens_second'] > 0 and shared.stop_everything):
-            break
-
-    if not is_chat:
-        reply = apply_extensions('output', reply, state)
-
-    yield reply
-
-
-def encode(prompt, add_special_tokens=True, add_bos_token=True, truncation_length=None):
-    if shared.tokenizer is None:
-        raise ValueError('No tokenizer is loaded')
-
-    if shared.model.__class__.__name__ in ['LlamaCppModel', 'RWKVModel', 'CtransformersModel', 'Exllamav2Model']:
-        input_ids = shared.tokenizer.encode(str(prompt))
-        if shared.model.__class__.__name__ not in ['Exllamav2Model']:
-            input_ids = np.array(input_ids).reshape(1, len(input_ids))
-    else:
-        input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens)
-
-        # This is a hack for making replies more creative.
-        if not add_bos_token and input_ids[0][0] == shared.tokenizer.bos_token_id:
-            input_ids = input_ids[:, 1:]
-
-    # Handling truncation
-    if truncation_length is not None:
-        input_ids = input_ids[:, -truncation_length:]
-
-    if shared.model.__class__.__name__ in ['LlamaCppModel', 'RWKVModel', 'ExllamaModel', 'Exllamav2Model', 'CtransformersModel'] or shared.args.cpu:
-        return input_ids
-    elif shared.args.deepspeed:
-        return input_ids.to(device=local_rank)
-    elif torch.backends.mps.is_available():
-        device = torch.device('mps')
-        return input_ids.to(device)
-    else:
-        return input_ids.cuda()
-
-
-def decode(output_ids, skip_special_tokens=True):
-    if shared.tokenizer is None:
-        raise ValueError('No tokenizer is loaded')
-
-    return shared.tokenizer.decode(output_ids, skip_special_tokens)
-
-
-def get_encoded_length(prompt):
-    length_after_extensions = apply_extensions('tokenized_length', prompt)
-    if length_after_extensions is not None:
-        return length_after_extensions
-
-    return len(encode(prompt)[0])
-
-
-def get_token_ids(prompt):
-    tokens = encode(prompt)[0]
-    decoded_tokens = [shared.tokenizer.decode([i]) for i in tokens]
-
-    output = ''
-    for row in list(zip(tokens, decoded_tokens)):
-        output += f"{str(int(row[0])).ljust(5)} - {repr(row[1])}\n"
-
-    return output
-
-
-def get_max_prompt_length(state):
-    return state['truncation_length'] - state['max_new_tokens']
-
-
-def generate_reply_wrapper(question, state, stopping_strings=None):
-    """
-    Returns formatted outputs for the UI
-    """
-    reply = question if not shared.is_seq2seq else ''
-    yield formatted_outputs(reply, shared.model_name)
-
-    for reply in generate_reply(question, state, stopping_strings, is_chat=False, escape_html=True):
-        if not shared.is_seq2seq:
-            reply = question + reply
-
-        yield formatted_outputs(reply, shared.model_name)
-
-
-def formatted_outputs(reply, model_name):
-    if any(s in model_name for s in ['gpt-4chan', 'gpt4chan']):
-        reply = fix_gpt4chan(reply)
-        return html.unescape(reply), generate_4chan_html(reply)
-    else:
-        return html.unescape(reply), generate_basic_html(reply)
-
-
-def fix_gpt4chan(s):
-    """
-    Removes empty replies from gpt4chan outputs
-    """
-    for i in range(10):
-        s = re.sub("--- [0-9]*\n>>[0-9]*\n---", "---", s)
-        s = re.sub("--- [0-9]*\n *\n---", "---", s)
-        s = re.sub("--- [0-9]*\n\n\n---", "---", s)
-
-    return s
-
-
-def fix_galactica(s):
-    """
-    Fix the LaTeX equations in GALACTICA
-    """
-    s = s.replace(r'\[', r'$')
-    s = s.replace(r'\]', r'$')
-    s = s.replace(r'\(', r'$')
-    s = s.replace(r'\)', r'$')
-    s = s.replace(r'$$', r'$')
-    s = re.sub(r'\n', r'\n\n', s)
-    s = re.sub(r"\n{3,}", "\n\n", s)
-    return s
-
-
-def get_reply_from_output_ids(output_ids, input_ids, original_question, state, is_chat=False):
-    if shared.is_seq2seq:
-        reply = decode(output_ids, state['skip_special_tokens'])
-    else:
-        new_tokens = len(output_ids) - len(input_ids[0])
-        reply = decode(output_ids[-new_tokens:], state['skip_special_tokens'])
-        # Prevent LlamaTokenizer from skipping a space
-        if type(shared.tokenizer) in [transformers.LlamaTokenizer, transformers.LlamaTokenizerFast] and len(output_ids) > 0:
-            if shared.tokenizer.convert_ids_to_tokens(int(output_ids[-new_tokens])).startswith('▁'):
-                reply = ' ' + reply
-
-    return reply
-
-
-def set_manual_seed(seed):
-    seed = int(seed)
-    if seed == -1:
-        seed = random.randint(1, 2**31)
-
-    torch.manual_seed(seed)
-    if torch.cuda.is_available():
-        torch.cuda.manual_seed_all(seed)
-
-    return seed
-
-
-def stop_everything_event():
-    shared.stop_everything = True
-
-
-def apply_stopping_strings(reply, all_stop_strings):
-    stop_found = False
-    for string in all_stop_strings:
-        idx = reply.find(string)
-        if idx != -1:
-            reply = reply[:idx]
-            stop_found = True
-            break
-
-    if not stop_found:
-        # If something like "\nYo" is generated just before "\nYou:"
-        # is completed, trim it
-        for string in all_stop_strings:
-            for j in range(len(string) - 1, 0, -1):
-                if reply[-j:] == string[:j]:
-                    reply = reply[:-j]
-                    break
-            else:
-                continue
-
-            break
-
-    return reply, stop_found
-
-
-def generate_reply_HF(question, original_question, seed, state, stopping_strings=None, is_chat=False):
-    generate_params = {}
-    for k in ['max_new_tokens', 'do_sample', 'temperature', 'top_p', 'typical_p', 'repetition_penalty', 'repetition_penalty_range', 'encoder_repetition_penalty', 'top_k', 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', 'length_penalty', 'early_stopping', 'tfs', 'top_a', 'mirostat_mode', 'mirostat_tau', 'mirostat_eta', 'guidance_scale']:
-        generate_params[k] = state[k]
-
-    if state['negative_prompt'] != '':
-        generate_params['negative_prompt_ids'] = encode(state['negative_prompt'])
-
-    for k in ['epsilon_cutoff', 'eta_cutoff']:
-        if state[k] > 0:
-            generate_params[k] = state[k] * 1e-4
-
-    if state['ban_eos_token']:
-        generate_params['suppress_tokens'] = [shared.tokenizer.eos_token_id]
-
-    if state['custom_token_bans']:
-        to_ban = [int(x) for x in state['custom_token_bans'].split(',')]
-        if len(to_ban) > 0:
-            if generate_params.get('suppress_tokens', None):
-                generate_params['suppress_tokens'] += to_ban
-            else:
-                generate_params['suppress_tokens'] = to_ban
-
-    generate_params.update({'use_cache': not shared.args.no_cache})
-    if shared.args.deepspeed:
-        generate_params.update({'synced_gpus': True})
-
-    # Encode the input
-    input_ids = encode(question, add_bos_token=state['add_bos_token'], truncation_length=get_max_prompt_length(state))
-    output = input_ids[0]
-    cuda = not any((shared.args.cpu, shared.args.deepspeed))
-    if state['auto_max_new_tokens']:
-        generate_params['max_new_tokens'] = state['truncation_length'] - input_ids.shape[-1]
-
-    # Add the encoded tokens to generate_params
-    question, input_ids, inputs_embeds = apply_extensions('tokenizer', state, question, input_ids, None)
-    original_input_ids = input_ids
-    generate_params.update({'inputs': input_ids})
-    if inputs_embeds is not None:
-        generate_params.update({'inputs_embeds': inputs_embeds})
-
-    # Stopping criteria / eos token
-    eos_token_ids = [shared.tokenizer.eos_token_id] if shared.tokenizer.eos_token_id is not None else []
-    generate_params['eos_token_id'] = eos_token_ids
-    generate_params['stopping_criteria'] = transformers.StoppingCriteriaList()
-    generate_params['stopping_criteria'].append(_StopEverythingStoppingCriteria())
-
-    processor = state.get('logits_processor', LogitsProcessorList([]))
-    # In case a processor is passed by itself.
-    if not isinstance(processor, LogitsProcessorList):
-        processor = LogitsProcessorList([processor])
-    processor.append(GrammarLogitsProcessor(state['grammar_string']))
-    apply_extensions('logits_processor', processor, input_ids)
-    generate_params['logits_processor'] = processor
-
-    t0 = time.time()
-    try:
-        if not is_chat and not shared.is_seq2seq:
-            yield ''
-
-        # Generate the entire reply at once.
-        if not state['stream']:
-            with torch.no_grad():
-                output = shared.model.generate(**generate_params)[0]
-                if cuda:
-                    output = output.cuda()
-
-            yield get_reply_from_output_ids(output, input_ids, original_question, state, is_chat=is_chat)
-
-        # Stream the reply 1 token at a time.
-        # This is based on the trick of using 'stopping_criteria' to create an iterator.
-        else:
-
-            def generate_with_callback(callback=None, *args, **kwargs):
-                kwargs['stopping_criteria'].append(Stream(callback_func=callback))
-                clear_torch_cache()
-                with torch.no_grad():
-                    shared.model.generate(**kwargs)
-
-            def generate_with_streaming(**kwargs):
-                return Iteratorize(generate_with_callback, [], kwargs, callback=None)
-
-            with generate_with_streaming(**generate_params) as generator:
-                for output in generator:
-                    if output[-1] in eos_token_ids:
-                        break
-
-                    yield get_reply_from_output_ids(output, input_ids, original_question, state, is_chat=is_chat)
-
-    except Exception:
-        traceback.print_exc()
-    finally:
-        t1 = time.time()
-        original_tokens = len(original_input_ids[0])
-        new_tokens = len(output) - (original_tokens if not shared.is_seq2seq else 0)
-        print(f'Output generated in {(t1-t0):.2f} seconds ({new_tokens/(t1-t0):.2f} tokens/s, {new_tokens} tokens, context {original_tokens}, seed {seed})')
-        return
-
-
-def generate_reply_custom(question, original_question, seed, state, stopping_strings=None, is_chat=False):
-    """
-    For models that do not use the transformers library for sampling
-    """
-    seed = set_manual_seed(state['seed'])
-
-    t0 = time.time()
-    reply = ''
-    try:
-        if not is_chat:
-            yield ''
-
-        if not state['stream']:
-            reply = shared.model.generate(question, state)
-            yield reply
-        else:
-            for reply in shared.model.generate_with_streaming(question, state):
-                yield reply
-
-    except Exception:
-        traceback.print_exc()
-    finally:
-        t1 = time.time()
-        original_tokens = len(encode(original_question)[0])
-        new_tokens = len(encode(original_question + reply)[0]) - original_tokens
-        print(f'Output generated in {(t1-t0):.2f} seconds ({new_tokens/(t1-t0):.2f} tokens/s, {new_tokens} tokens, context {original_tokens}, seed {seed})')
-        return
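
apply_stopping_strings is the piece that makes streaming stop cleanly: a full match truncates the reply and signals a stop, while a partial suffix match only trims the tail so a half-generated stop string is never shown to the user. A small sketch checking both paths (assumes the module above is still importable; since this commit deletes it, you may need to paste the function into a scratch file instead):

from modules.text_generation import apply_stopping_strings

# Full stop string present: truncate at it and report stop_found=True.
print(apply_stopping_strings("Hi\nYou: next", ["\nYou:"]))  # ('Hi', True)

# Only a prefix of the stop string at the tail: trim it, keep streaming.
print(apply_stopping_strings("Hello\nYo", ["\nYou:"]))      # ('Hello', False)
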
spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/__init__.py
DELETED
File without changes
spaces/Arthur678/vits-uma-genshin-honkai/transforms.py
DELETED
@@ -1,193 +0,0 @@
|
|
1 |
-
import torch
|
2 |
-
from torch.nn import functional as F
|
3 |
-
|
4 |
-
import numpy as np
|
5 |
-
|
6 |
-
|
7 |
-
DEFAULT_MIN_BIN_WIDTH = 1e-3
|
8 |
-
DEFAULT_MIN_BIN_HEIGHT = 1e-3
|
9 |
-
DEFAULT_MIN_DERIVATIVE = 1e-3
|
10 |
-
|
11 |
-
|
12 |
-
def piecewise_rational_quadratic_transform(inputs,
|
13 |
-
unnormalized_widths,
|
14 |
-
unnormalized_heights,
|
15 |
-
unnormalized_derivatives,
|
16 |
-
inverse=False,
|
17 |
-
tails=None,
|
18 |
-
tail_bound=1.,
|
19 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
20 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
21 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE):
|
22 |
-
|
23 |
-
if tails is None:
|
24 |
-
spline_fn = rational_quadratic_spline
|
25 |
-
spline_kwargs = {}
|
26 |
-
else:
|
27 |
-
spline_fn = unconstrained_rational_quadratic_spline
|
28 |
-
spline_kwargs = {
|
29 |
-
'tails': tails,
|
30 |
-
'tail_bound': tail_bound
|
31 |
-
}
|
32 |
-
|
33 |
-
outputs, logabsdet = spline_fn(
|
34 |
-
inputs=inputs,
|
35 |
-
unnormalized_widths=unnormalized_widths,
|
36 |
-
unnormalized_heights=unnormalized_heights,
|
37 |
-
unnormalized_derivatives=unnormalized_derivatives,
|
38 |
-
inverse=inverse,
|
39 |
-
min_bin_width=min_bin_width,
|
40 |
-
min_bin_height=min_bin_height,
|
41 |
-
min_derivative=min_derivative,
|
42 |
-
**spline_kwargs
|
43 |
-
)
|
44 |
-
return outputs, logabsdet
|
45 |
-
|
46 |
-
|
47 |
-
def searchsorted(bin_locations, inputs, eps=1e-6):
|
48 |
-
bin_locations[..., -1] += eps
|
49 |
-
return torch.sum(
|
50 |
-
inputs[..., None] >= bin_locations,
|
51 |
-
dim=-1
|
52 |
-
) - 1
|
53 |
-
|
54 |
-
|
55 |
-
def unconstrained_rational_quadratic_spline(inputs,
|
56 |
-
unnormalized_widths,
|
57 |
-
unnormalized_heights,
|
58 |
-
unnormalized_derivatives,
|
59 |
-
inverse=False,
|
60 |
-
tails='linear',
|
61 |
-
tail_bound=1.,
|
62 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
63 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
64 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE):
|
65 |
-
inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
|
66 |
-
outside_interval_mask = ~inside_interval_mask
|
67 |
-
|
68 |
-
outputs = torch.zeros_like(inputs)
|
69 |
-
logabsdet = torch.zeros_like(inputs)
|
70 |
-
|
71 |
-
if tails == 'linear':
|
72 |
-
unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
|
73 |
-
constant = np.log(np.exp(1 - min_derivative) - 1)
|
74 |
-
unnormalized_derivatives[..., 0] = constant
|
75 |
-
unnormalized_derivatives[..., -1] = constant
|
76 |
-
|
77 |
-
outputs[outside_interval_mask] = inputs[outside_interval_mask]
|
78 |
-
logabsdet[outside_interval_mask] = 0
|
79 |
-
else:
|
80 |
-
raise RuntimeError('{} tails are not implemented.'.format(tails))
|
81 |
-
|
82 |
-
outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
|
83 |
-
inputs=inputs[inside_interval_mask],
|
84 |
-
unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
|
85 |
-
unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
|
86 |
-
unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
|
87 |
-
inverse=inverse,
|
88 |
-
left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
|
89 |
-
min_bin_width=min_bin_width,
|
90 |
-
min_bin_height=min_bin_height,
|
91 |
-
min_derivative=min_derivative
|
92 |
-
)
|
93 |
-
|
94 |
-
return outputs, logabsdet
|
95 |
-
|
96 |
-
def rational_quadratic_spline(inputs,
|
97 |
-
unnormalized_widths,
|
98 |
-
unnormalized_heights,
|
99 |
-
unnormalized_derivatives,
|
100 |
-
inverse=False,
|
101 |
-
left=0., right=1., bottom=0., top=1.,
|
102 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
103 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
104 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE):
|
105 |
-
    if torch.min(inputs) < left or torch.max(inputs) > right:
        raise ValueError('Input to a transform is not within its domain')

    num_bins = unnormalized_widths.shape[-1]

    if min_bin_width * num_bins > 1.0:
        raise ValueError('Minimal bin width too large for the number of bins')
    if min_bin_height * num_bins > 1.0:
        raise ValueError('Minimal bin height too large for the number of bins')

    widths = F.softmax(unnormalized_widths, dim=-1)
    widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
    cumwidths = torch.cumsum(widths, dim=-1)
    cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
    cumwidths = (right - left) * cumwidths + left
    cumwidths[..., 0] = left
    cumwidths[..., -1] = right
    widths = cumwidths[..., 1:] - cumwidths[..., :-1]

    derivatives = min_derivative + F.softplus(unnormalized_derivatives)

    heights = F.softmax(unnormalized_heights, dim=-1)
    heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
    cumheights = torch.cumsum(heights, dim=-1)
    cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
    cumheights = (top - bottom) * cumheights + bottom
    cumheights[..., 0] = bottom
    cumheights[..., -1] = top
    heights = cumheights[..., 1:] - cumheights[..., :-1]

    if inverse:
        bin_idx = searchsorted(cumheights, inputs)[..., None]
    else:
        bin_idx = searchsorted(cumwidths, inputs)[..., None]

    input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
    input_bin_widths = widths.gather(-1, bin_idx)[..., 0]

    input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
    delta = heights / widths
    input_delta = delta.gather(-1, bin_idx)[..., 0]

    input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
    input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]

    input_heights = heights.gather(-1, bin_idx)[..., 0]

    if inverse:
        a = (((inputs - input_cumheights) * (input_derivatives
                                             + input_derivatives_plus_one
                                             - 2 * input_delta)
              + input_heights * (input_delta - input_derivatives)))
        b = (input_heights * input_derivatives
             - (inputs - input_cumheights) * (input_derivatives
                                              + input_derivatives_plus_one
                                              - 2 * input_delta))
        c = -input_delta * (inputs - input_cumheights)

        discriminant = b.pow(2) - 4 * a * c
        assert (discriminant >= 0).all()

        root = (2 * c) / (-b - torch.sqrt(discriminant))
        outputs = root * input_bin_widths + input_cumwidths

        theta_one_minus_theta = root * (1 - root)
        denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
                                     * theta_one_minus_theta)
        derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
                                                     + 2 * input_delta * theta_one_minus_theta
                                                     + input_derivatives * (1 - root).pow(2))
        logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)

        return outputs, -logabsdet
    else:
        theta = (inputs - input_cumwidths) / input_bin_widths
        theta_one_minus_theta = theta * (1 - theta)

        numerator = input_heights * (input_delta * theta.pow(2)
                                     + input_derivatives * theta_one_minus_theta)
        denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
                                     * theta_one_minus_theta)
        outputs = input_cumheights + numerator / denominator

        derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
                                                     + 2 * input_delta * theta_one_minus_theta
                                                     + input_derivatives * (1 - theta).pow(2))
        logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)

        return outputs, logabsdet
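For reference, a minimal round-trip sketch of the spline above. It assumes the rational_quadratic_spline signature and its 0..1 default bounds from the top of this file, which sit above this hunk and are not shown here, so treat those defaults as an assumption rather than confirmed:

# Hedged sketch: assumes rational_quadratic_spline(inputs, unnormalized_widths,
# unnormalized_heights, unnormalized_derivatives, inverse=False, ...) with
# default bounds left=0, right=1, bottom=0, top=1 (an assumption; the
# signature is defined earlier in this file).
import torch

batch, num_bins = 4, 10
torch.manual_seed(0)
inputs = torch.rand(batch)             # values inside the assumed [0, 1] domain
uw = torch.randn(batch, num_bins)      # unnormalized bin widths
uh = torch.randn(batch, num_bins)      # unnormalized bin heights
ud = torch.randn(batch, num_bins + 1)  # unnormalized knot derivatives

y, logdet = rational_quadratic_spline(inputs, uw, uh, ud, inverse=False)
x, inv_logdet = rational_quadratic_spline(y, uw, uh, ud, inverse=True)

assert torch.allclose(x, inputs, atol=1e-4)            # bijective up to float error
assert torch.allclose(logdet, -inv_logdet, atol=1e-4)  # log-determinants negate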
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py
DELETED
@@ -1,721 +0,0 @@
"""Prepares a distribution for installation
"""

# The following comment should be removed at some point in the future.
# mypy: strict-optional=False

import logging
import mimetypes
import os
import shutil
from typing import Dict, Iterable, List, Optional

from pip._vendor.packaging.utils import canonicalize_name

from pip._internal.distributions import make_distribution_for_install_requirement
from pip._internal.distributions.installed import InstalledDistribution
from pip._internal.exceptions import (
    DirectoryUrlHashUnsupported,
    HashMismatch,
    HashUnpinned,
    InstallationError,
    MetadataInconsistent,
    NetworkConnectionError,
    PreviousBuildDirError,
    VcsHashUnsupported,
)
from pip._internal.index.package_finder import PackageFinder
from pip._internal.metadata import BaseDistribution, get_metadata_distribution
from pip._internal.models.direct_url import ArchiveInfo
from pip._internal.models.link import Link
from pip._internal.models.wheel import Wheel
from pip._internal.network.download import BatchDownloader, Downloader
from pip._internal.network.lazy_wheel import (
    HTTPRangeRequestUnsupported,
    dist_from_wheel_url,
)
from pip._internal.network.session import PipSession
from pip._internal.operations.build.build_tracker import BuildTracker
from pip._internal.req.req_install import InstallRequirement
from pip._internal.utils.direct_url_helpers import (
    direct_url_for_editable,
    direct_url_from_link,
)
from pip._internal.utils.hashes import Hashes, MissingHashes
from pip._internal.utils.logging import indent_log
from pip._internal.utils.misc import (
    display_path,
    hash_file,
    hide_url,
    is_installable_dir,
)
from pip._internal.utils.temp_dir import TempDirectory
from pip._internal.utils.unpacking import unpack_file
from pip._internal.vcs import vcs

logger = logging.getLogger(__name__)


def _get_prepared_distribution(
    req: InstallRequirement,
    build_tracker: BuildTracker,
    finder: PackageFinder,
    build_isolation: bool,
    check_build_deps: bool,
) -> BaseDistribution:
    """Prepare a distribution for installation."""
    abstract_dist = make_distribution_for_install_requirement(req)
    with build_tracker.track(req):
        abstract_dist.prepare_distribution_metadata(
            finder, build_isolation, check_build_deps
        )
    return abstract_dist.get_metadata_distribution()


def unpack_vcs_link(link: Link, location: str, verbosity: int) -> None:
    vcs_backend = vcs.get_backend_for_scheme(link.scheme)
    assert vcs_backend is not None
    vcs_backend.unpack(location, url=hide_url(link.url), verbosity=verbosity)


class File:
    def __init__(self, path: str, content_type: Optional[str]) -> None:
        self.path = path
        if content_type is None:
            self.content_type = mimetypes.guess_type(path)[0]
        else:
            self.content_type = content_type


def get_http_url(
    link: Link,
    download: Downloader,
    download_dir: Optional[str] = None,
    hashes: Optional[Hashes] = None,
) -> File:
    temp_dir = TempDirectory(kind="unpack", globally_managed=True)
    # If a download dir is specified, is the file already downloaded there?
    already_downloaded_path = None
    if download_dir:
        already_downloaded_path = _check_download_dir(link, download_dir, hashes)

    if already_downloaded_path:
        from_path = already_downloaded_path
        content_type = None
    else:
        # let's download to a tmp dir
        from_path, content_type = download(link, temp_dir.path)
        if hashes:
            hashes.check_against_path(from_path)

    return File(from_path, content_type)


def get_file_url(
    link: Link, download_dir: Optional[str] = None, hashes: Optional[Hashes] = None
) -> File:
    """Get file and optionally check its hash."""
    # If a download dir is specified, is the file already there and valid?
    already_downloaded_path = None
    if download_dir:
        already_downloaded_path = _check_download_dir(link, download_dir, hashes)

    if already_downloaded_path:
        from_path = already_downloaded_path
    else:
        from_path = link.file_path

    # If --require-hashes is off, `hashes` is either empty, the
    # link's embedded hash, or MissingHashes; it is required to
    # match. If --require-hashes is on, we are satisfied by any
    # hash in `hashes` matching: a URL-based or an option-based
    # one; no internet-sourced hash will be in `hashes`.
    if hashes:
        hashes.check_against_path(from_path)
    return File(from_path, None)


def unpack_url(
    link: Link,
    location: str,
    download: Downloader,
    verbosity: int,
    download_dir: Optional[str] = None,
    hashes: Optional[Hashes] = None,
) -> Optional[File]:
    """Unpack link into location, downloading if required.

    :param hashes: A Hashes object, one of whose embedded hashes must match,
        or HashMismatch will be raised. If the Hashes is empty, no matches are
        required, and unhashable types of requirements (like VCS ones, which
        would ordinarily raise HashUnsupported) are allowed.
    """
    # non-editable vcs urls
    if link.is_vcs:
        unpack_vcs_link(link, location, verbosity=verbosity)
        return None

    assert not link.is_existing_dir()

    # file urls
    if link.is_file:
        file = get_file_url(link, download_dir, hashes=hashes)

    # http urls
    else:
        file = get_http_url(
            link,
            download,
            download_dir,
            hashes=hashes,
        )

    # unpack the archive to the build dir location. even when only downloading
    # archives, they have to be unpacked to parse dependencies, except wheels
    if not link.is_wheel:
        unpack_file(file.path, location, file.content_type)

    return file


def _check_download_dir(
    link: Link,
    download_dir: str,
    hashes: Optional[Hashes],
    warn_on_hash_mismatch: bool = True,
) -> Optional[str]:
    """Check download_dir for previously downloaded file with correct hash
    If a correct file is found return its path else None
    """
    download_path = os.path.join(download_dir, link.filename)

    if not os.path.exists(download_path):
        return None

    # If already downloaded, does its hash match?
    logger.info("File was already downloaded %s", download_path)
    if hashes:
        try:
            hashes.check_against_path(download_path)
        except HashMismatch:
            if warn_on_hash_mismatch:
                logger.warning(
                    "Previously-downloaded file %s has bad hash. Re-downloading.",
                    download_path,
                )
            os.unlink(download_path)
            return None
    return download_path


class RequirementPreparer:
    """Prepares a Requirement"""

    def __init__(
        self,
        build_dir: str,
        download_dir: Optional[str],
        src_dir: str,
        build_isolation: bool,
        check_build_deps: bool,
        build_tracker: BuildTracker,
        session: PipSession,
        progress_bar: str,
        finder: PackageFinder,
        require_hashes: bool,
        use_user_site: bool,
        lazy_wheel: bool,
        verbosity: int,
    ) -> None:
        super().__init__()

        self.src_dir = src_dir
        self.build_dir = build_dir
        self.build_tracker = build_tracker
        self._session = session
        self._download = Downloader(session, progress_bar)
        self._batch_download = BatchDownloader(session, progress_bar)
        self.finder = finder

        # Where still-packed archives should be written to. If None, they are
        # not saved, and are deleted immediately after unpacking.
        self.download_dir = download_dir

        # Is build isolation allowed?
        self.build_isolation = build_isolation

        # Should check build dependencies?
        self.check_build_deps = check_build_deps

        # Should hash-checking be required?
        self.require_hashes = require_hashes

        # Should install in user site-packages?
        self.use_user_site = use_user_site

        # Should wheels be downloaded lazily?
        self.use_lazy_wheel = lazy_wheel

        # How verbose should underlying tooling be?
        self.verbosity = verbosity

        # Memoized downloaded files, as mapping of url: path.
        self._downloaded: Dict[str, str] = {}

        # Previous "header" printed for a link-based InstallRequirement
        self._previous_requirement_header = ("", "")

    def _log_preparing_link(self, req: InstallRequirement) -> None:
        """Provide context for the requirement being prepared."""
        if req.link.is_file and not req.is_wheel_from_cache:
            message = "Processing %s"
            information = str(display_path(req.link.file_path))
        else:
            message = "Collecting %s"
            information = str(req.req or req)

        # If we used req.req, inject requirement source if available (this
        # would already be included if we used req directly)
        if req.req and req.comes_from:
            if isinstance(req.comes_from, str):
                comes_from: Optional[str] = req.comes_from
            else:
                comes_from = req.comes_from.from_path()
            if comes_from:
                information += f" (from {comes_from})"

        if (message, information) != self._previous_requirement_header:
            self._previous_requirement_header = (message, information)
            logger.info(message, information)

        if req.is_wheel_from_cache:
            with indent_log():
                logger.info("Using cached %s", req.link.filename)

    def _ensure_link_req_src_dir(
        self, req: InstallRequirement, parallel_builds: bool
    ) -> None:
        """Ensure source_dir of a linked InstallRequirement."""
        # Since source_dir is only set for editable requirements.
        if req.link.is_wheel:
            # We don't need to unpack wheels, so no need for a source
            # directory.
            return
        assert req.source_dir is None
        if req.link.is_existing_dir():
            # build local directories in-tree
            req.source_dir = req.link.file_path
            return

        # We always delete unpacked sdists after pip runs.
        req.ensure_has_source_dir(
            self.build_dir,
            autodelete=True,
            parallel_builds=parallel_builds,
        )

        # If a checkout exists, it's unwise to keep going. version
        # inconsistencies are logged later, but do not fail the
        # installation.
        # FIXME: this won't upgrade when there's an existing
        # package unpacked in `req.source_dir`
        # TODO: this check is now probably dead code
        if is_installable_dir(req.source_dir):
            raise PreviousBuildDirError(
                "pip can't proceed with requirements '{}' due to a"
                "pre-existing build directory ({}). This is likely "
                "due to a previous installation that failed . pip is "
                "being responsible and not assuming it can delete this. "
                "Please delete it and try again.".format(req, req.source_dir)
            )

    def _get_linked_req_hashes(self, req: InstallRequirement) -> Hashes:
        # By the time this is called, the requirement's link should have
        # been checked so we can tell what kind of requirements req is
        # and raise some more informative errors than otherwise.
        # (For example, we can raise VcsHashUnsupported for a VCS URL
        # rather than HashMissing.)
        if not self.require_hashes:
            return req.hashes(trust_internet=True)

        # We could check these first 2 conditions inside unpack_url
        # and save repetition of conditions, but then we would
        # report less-useful error messages for unhashable
        # requirements, complaining that there's no hash provided.
        if req.link.is_vcs:
            raise VcsHashUnsupported()
        if req.link.is_existing_dir():
            raise DirectoryUrlHashUnsupported()

        # Unpinned packages are asking for trouble when a new version
        # is uploaded. This isn't a security check, but it saves users
        # a surprising hash mismatch in the future.
        # file:/// URLs aren't pinnable, so don't complain about them
        # not being pinned.
        if req.original_link is None and not req.is_pinned:
            raise HashUnpinned()

        # If known-good hashes are missing for this requirement,
        # shim it with a facade object that will provoke hash
        # computation and then raise a HashMissing exception
        # showing the user what the hash should be.
        return req.hashes(trust_internet=False) or MissingHashes()

    def _fetch_metadata_only(
        self,
        req: InstallRequirement,
    ) -> Optional[BaseDistribution]:
        if self.require_hashes:
            logger.debug(
                "Metadata-only fetching is not used as hash checking is required",
            )
            return None
        # Try PEP 658 metadata first, then fall back to lazy wheel if unavailable.
        return self._fetch_metadata_using_link_data_attr(
            req
        ) or self._fetch_metadata_using_lazy_wheel(req.link)

    def _fetch_metadata_using_link_data_attr(
        self,
        req: InstallRequirement,
    ) -> Optional[BaseDistribution]:
        """Fetch metadata from the data-dist-info-metadata attribute, if possible."""
        # (1) Get the link to the metadata file, if provided by the backend.
        metadata_link = req.link.metadata_link()
        if metadata_link is None:
            return None
        assert req.req is not None
        logger.info(
            "Obtaining dependency information for %s from %s",
            req.req,
            metadata_link,
        )
        # (2) Download the contents of the METADATA file, separate from the dist itself.
        metadata_file = get_http_url(
            metadata_link,
            self._download,
            hashes=metadata_link.as_hashes(),
        )
        with open(metadata_file.path, "rb") as f:
            metadata_contents = f.read()
        # (3) Generate a dist just from those file contents.
        metadata_dist = get_metadata_distribution(
            metadata_contents,
            req.link.filename,
            req.req.name,
        )
        # (4) Ensure the Name: field from the METADATA file matches the name from the
        #     install requirement.
        #
        # NB: raw_name will fall back to the name from the install requirement if
        # the Name: field is not present, but it's noted in the raw_name docstring
        # that that should NEVER happen anyway.
        if metadata_dist.raw_name != req.req.name:
            raise MetadataInconsistent(
                req, "Name", req.req.name, metadata_dist.raw_name
            )
        return metadata_dist

    def _fetch_metadata_using_lazy_wheel(
        self,
        link: Link,
    ) -> Optional[BaseDistribution]:
        """Fetch metadata using lazy wheel, if possible."""
        # --use-feature=fast-deps must be provided.
        if not self.use_lazy_wheel:
            return None
        if link.is_file or not link.is_wheel:
            logger.debug(
                "Lazy wheel is not used as %r does not point to a remote wheel",
                link,
            )
            return None

        wheel = Wheel(link.filename)
        name = canonicalize_name(wheel.name)
        logger.info(
            "Obtaining dependency information from %s %s",
            name,
            wheel.version,
        )
        url = link.url.split("#", 1)[0]
        try:
            return dist_from_wheel_url(name, url, self._session)
        except HTTPRangeRequestUnsupported:
            logger.debug("%s does not support range requests", url)
            return None

    def _complete_partial_requirements(
        self,
        partially_downloaded_reqs: Iterable[InstallRequirement],
        parallel_builds: bool = False,
    ) -> None:
        """Download any requirements which were only fetched by metadata."""
        # Download to a temporary directory. These will be copied over as
        # needed for downstream 'download', 'wheel', and 'install' commands.
        temp_dir = TempDirectory(kind="unpack", globally_managed=True).path

        # Map each link to the requirement that owns it. This allows us to set
        # `req.local_file_path` on the appropriate requirement after passing
        # all the links at once into BatchDownloader.
        links_to_fully_download: Dict[Link, InstallRequirement] = {}
        for req in partially_downloaded_reqs:
            assert req.link
            links_to_fully_download[req.link] = req

        batch_download = self._batch_download(
            links_to_fully_download.keys(),
            temp_dir,
        )
        for link, (filepath, _) in batch_download:
            logger.debug("Downloading link %s to %s", link, filepath)
            req = links_to_fully_download[link]
            req.local_file_path = filepath

        # This step is necessary to ensure all lazy wheels are processed
        # successfully by the 'download', 'wheel', and 'install' commands.
        for req in partially_downloaded_reqs:
            self._prepare_linked_requirement(req, parallel_builds)

    def prepare_linked_requirement(
        self, req: InstallRequirement, parallel_builds: bool = False
    ) -> BaseDistribution:
        """Prepare a requirement to be obtained from req.link."""
        assert req.link
        self._log_preparing_link(req)
        with indent_log():
            # Check if the relevant file is already available
            # in the download directory
            file_path = None
            if self.download_dir is not None and req.link.is_wheel:
                hashes = self._get_linked_req_hashes(req)
                file_path = _check_download_dir(
                    req.link,
                    self.download_dir,
                    hashes,
                    # When a locally built wheel has been found in cache, we don't warn
                    # about re-downloading when the already downloaded wheel hash does
                    # not match. This is because the hash must be checked against the
                    # original link, not the cached link. It that case the already
                    # downloaded file will be removed and re-fetched from cache (which
                    # implies a hash check against the cache entry's origin.json).
                    warn_on_hash_mismatch=not req.is_wheel_from_cache,
                )

            if file_path is not None:
                # The file is already available, so mark it as downloaded
                self._downloaded[req.link.url] = file_path
            else:
                # The file is not available, attempt to fetch only metadata
                metadata_dist = self._fetch_metadata_only(req)
                if metadata_dist is not None:
                    req.needs_more_preparation = True
                    return metadata_dist

            # None of the optimizations worked, fully prepare the requirement
            return self._prepare_linked_requirement(req, parallel_builds)

    def prepare_linked_requirements_more(
        self, reqs: Iterable[InstallRequirement], parallel_builds: bool = False
    ) -> None:
        """Prepare linked requirements more, if needed."""
        reqs = [req for req in reqs if req.needs_more_preparation]
        for req in reqs:
            # Determine if any of these requirements were already downloaded.
            if self.download_dir is not None and req.link.is_wheel:
                hashes = self._get_linked_req_hashes(req)
                file_path = _check_download_dir(req.link, self.download_dir, hashes)
                if file_path is not None:
                    self._downloaded[req.link.url] = file_path
                    req.needs_more_preparation = False

        # Prepare requirements we found were already downloaded for some
        # reason. The other downloads will be completed separately.
        partially_downloaded_reqs: List[InstallRequirement] = []
        for req in reqs:
            if req.needs_more_preparation:
                partially_downloaded_reqs.append(req)
            else:
                self._prepare_linked_requirement(req, parallel_builds)

        # TODO: separate this part out from RequirementPreparer when the v1
        # resolver can be removed!
        self._complete_partial_requirements(
            partially_downloaded_reqs,
            parallel_builds=parallel_builds,
        )

    def _prepare_linked_requirement(
        self, req: InstallRequirement, parallel_builds: bool
    ) -> BaseDistribution:
        assert req.link
        link = req.link

        hashes = self._get_linked_req_hashes(req)

        if hashes and req.is_wheel_from_cache:
            assert req.download_info is not None
            assert link.is_wheel
            assert link.is_file
            # We need to verify hashes, and we have found the requirement in the cache
            # of locally built wheels.
            if (
                isinstance(req.download_info.info, ArchiveInfo)
                and req.download_info.info.hashes
                and hashes.has_one_of(req.download_info.info.hashes)
            ):
                # At this point we know the requirement was built from a hashable source
                # artifact, and we verified that the cache entry's hash of the original
                # artifact matches one of the hashes we expect. We don't verify hashes
                # against the cached wheel, because the wheel is not the original.
                hashes = None
            else:
                logger.warning(
                    "The hashes of the source archive found in cache entry "
                    "don't match, ignoring cached built wheel "
                    "and re-downloading source."
                )
                req.link = req.cached_wheel_source_link
                link = req.link

        self._ensure_link_req_src_dir(req, parallel_builds)

        if link.is_existing_dir():
            local_file = None
        elif link.url not in self._downloaded:
            try:
                local_file = unpack_url(
                    link,
                    req.source_dir,
                    self._download,
                    self.verbosity,
                    self.download_dir,
                    hashes,
                )
            except NetworkConnectionError as exc:
                raise InstallationError(
                    "Could not install requirement {} because of HTTP "
                    "error {} for URL {}".format(req, exc, link)
                )
        else:
            file_path = self._downloaded[link.url]
            if hashes:
                hashes.check_against_path(file_path)
            local_file = File(file_path, content_type=None)

        # If download_info is set, we got it from the wheel cache.
        if req.download_info is None:
            # Editables don't go through this function (see
            # prepare_editable_requirement).
            assert not req.editable
            req.download_info = direct_url_from_link(link, req.source_dir)
            # Make sure we have a hash in download_info. If we got it as part of the
            # URL, it will have been verified and we can rely on it. Otherwise we
            # compute it from the downloaded file.
            # FIXME: https://github.com/pypa/pip/issues/11943
            if (
                isinstance(req.download_info.info, ArchiveInfo)
                and not req.download_info.info.hashes
                and local_file
            ):
                hash = hash_file(local_file.path)[0].hexdigest()
                # We populate info.hash for backward compatibility.
                # This will automatically populate info.hashes.
                req.download_info.info.hash = f"sha256={hash}"

        # For use in later processing,
        # preserve the file path on the requirement.
        if local_file:
            req.local_file_path = local_file.path

        dist = _get_prepared_distribution(
            req,
            self.build_tracker,
            self.finder,
            self.build_isolation,
            self.check_build_deps,
        )
        return dist

    def save_linked_requirement(self, req: InstallRequirement) -> None:
        assert self.download_dir is not None
        assert req.link is not None
        link = req.link
        if link.is_vcs or (link.is_existing_dir() and req.editable):
            # Make a .zip of the source_dir we already created.
            req.archive(self.download_dir)
            return

        if link.is_existing_dir():
            logger.debug(
                "Not copying link to destination directory "
                "since it is a directory: %s",
                link,
            )
            return
        if req.local_file_path is None:
            # No distribution was downloaded for this requirement.
            return

        download_location = os.path.join(self.download_dir, link.filename)
        if not os.path.exists(download_location):
            shutil.copy(req.local_file_path, download_location)
            download_path = display_path(download_location)
            logger.info("Saved %s", download_path)

    def prepare_editable_requirement(
        self,
        req: InstallRequirement,
    ) -> BaseDistribution:
        """Prepare an editable requirement."""
        assert req.editable, "cannot prepare a non-editable req as editable"

        logger.info("Obtaining %s", req)

        with indent_log():
            if self.require_hashes:
                raise InstallationError(
                    "The editable requirement {} cannot be installed when "
                    "requiring hashes, because there is no single file to "
                    "hash.".format(req)
                )
            req.ensure_has_source_dir(self.src_dir)
            req.update_editable()
            assert req.source_dir
            req.download_info = direct_url_for_editable(req.unpacked_source_directory)

            dist = _get_prepared_distribution(
                req,
                self.build_tracker,
                self.finder,
                self.build_isolation,
                self.check_build_deps,
            )

            req.check_if_exists(self.use_user_site)

        return dist

    def prepare_installed_requirement(
        self,
        req: InstallRequirement,
        skip_reason: str,
    ) -> BaseDistribution:
        """Prepare an already-installed requirement."""
        assert req.satisfied_by, "req should have been satisfied but isn't"
        assert skip_reason is not None, (
            "did not get skip reason skipped but req.satisfied_by "
            "is set to {}".format(req.satisfied_by)
        )
        logger.info(
            "Requirement %s: %s (%s)", skip_reason, req, req.satisfied_by.version
        )
        with indent_log():
            if self.require_hashes:
                logger.debug(
                    "Since it is already installed, we are trusting this "
                    "package without checking its hash. To ensure a "
                    "completely repeatable environment, install into an "
                    "empty virtualenv."
                )
            return InstalledDistribution(req).get_metadata_distribution()
spaces/Banbri/zcvzcv/src/lib/getImageDimension.ts
DELETED
@@ -1,16 +0,0 @@
export interface ImageDimension {
  width: number
  height: number
}

export async function getImageDimension(src: string): Promise<ImageDimension> {
  if (!src) {
    return { width: 0, height: 0 }
  }
  const img = new Image()
  img.src = src
  await img.decode()
  const width = img.width
  const height = img.height
  return { width, height }
}
spaces/Bart92/RVC_HF/lib/infer_pack/modules.py
DELETED
@@ -1,522 +0,0 @@
import copy
import math
import numpy as np
import scipy
import torch
from torch import nn
from torch.nn import functional as F

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm

from lib.infer_pack import commons
from lib.infer_pack.commons import init_weights, get_padding
from lib.infer_pack.transforms import piecewise_rational_quadratic_transform


LRELU_SLOPE = 0.1


class LayerNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.channels = channels
        self.eps = eps

        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        x = x.transpose(1, -1)
        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
        return x.transpose(1, -1)


class ConvReluNorm(nn.Module):
    def __init__(
        self,
        in_channels,
        hidden_channels,
        out_channels,
        kernel_size,
        n_layers,
        p_dropout,
    ):
        super().__init__()
        self.in_channels = in_channels
        self.hidden_channels = hidden_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout
        assert n_layers > 1, "Number of layers should be larger than 0."

        self.conv_layers = nn.ModuleList()
        self.norm_layers = nn.ModuleList()
        self.conv_layers.append(
            nn.Conv1d(
                in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
            )
        )
        self.norm_layers.append(LayerNorm(hidden_channels))
        self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
        for _ in range(n_layers - 1):
            self.conv_layers.append(
                nn.Conv1d(
                    hidden_channels,
                    hidden_channels,
                    kernel_size,
                    padding=kernel_size // 2,
                )
            )
            self.norm_layers.append(LayerNorm(hidden_channels))
        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask):
        x_org = x
        for i in range(self.n_layers):
            x = self.conv_layers[i](x * x_mask)
            x = self.norm_layers[i](x)
            x = self.relu_drop(x)
        x = x_org + self.proj(x)
        return x * x_mask


class DDSConv(nn.Module):
    """
    Dilated and Depth-Separable Convolution
    """

    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout

        self.drop = nn.Dropout(p_dropout)
        self.convs_sep = nn.ModuleList()
        self.convs_1x1 = nn.ModuleList()
        self.norms_1 = nn.ModuleList()
        self.norms_2 = nn.ModuleList()
        for i in range(n_layers):
            dilation = kernel_size**i
            padding = (kernel_size * dilation - dilation) // 2
            self.convs_sep.append(
                nn.Conv1d(
                    channels,
                    channels,
                    kernel_size,
                    groups=channels,
                    dilation=dilation,
                    padding=padding,
                )
            )
            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
            self.norms_1.append(LayerNorm(channels))
            self.norms_2.append(LayerNorm(channels))

    def forward(self, x, x_mask, g=None):
        if g is not None:
            x = x + g
        for i in range(self.n_layers):
            y = self.convs_sep[i](x * x_mask)
            y = self.norms_1[i](y)
            y = F.gelu(y)
            y = self.convs_1x1[i](y)
            y = self.norms_2[i](y)
            y = F.gelu(y)
            y = self.drop(y)
            x = x + y
        return x * x_mask


class WN(torch.nn.Module):
    def __init__(
        self,
        hidden_channels,
        kernel_size,
        dilation_rate,
        n_layers,
        gin_channels=0,
        p_dropout=0,
    ):
        super(WN, self).__init__()
        assert kernel_size % 2 == 1
        self.hidden_channels = hidden_channels
        self.kernel_size = (kernel_size,)
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels
        self.p_dropout = p_dropout

        self.in_layers = torch.nn.ModuleList()
        self.res_skip_layers = torch.nn.ModuleList()
        self.drop = nn.Dropout(p_dropout)

        if gin_channels != 0:
            cond_layer = torch.nn.Conv1d(
                gin_channels, 2 * hidden_channels * n_layers, 1
            )
            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")

        for i in range(n_layers):
            dilation = dilation_rate**i
            padding = int((kernel_size * dilation - dilation) / 2)
            in_layer = torch.nn.Conv1d(
                hidden_channels,
                2 * hidden_channels,
                kernel_size,
                dilation=dilation,
                padding=padding,
            )
            in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
            self.in_layers.append(in_layer)

            # last one is not necessary
            if i < n_layers - 1:
                res_skip_channels = 2 * hidden_channels
            else:
                res_skip_channels = hidden_channels

            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
            self.res_skip_layers.append(res_skip_layer)

    def forward(self, x, x_mask, g=None, **kwargs):
        output = torch.zeros_like(x)
        n_channels_tensor = torch.IntTensor([self.hidden_channels])

        if g is not None:
            g = self.cond_layer(g)

        for i in range(self.n_layers):
            x_in = self.in_layers[i](x)
            if g is not None:
                cond_offset = i * 2 * self.hidden_channels
                g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
            else:
                g_l = torch.zeros_like(x_in)

            acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
            acts = self.drop(acts)

            res_skip_acts = self.res_skip_layers[i](acts)
            if i < self.n_layers - 1:
                res_acts = res_skip_acts[:, : self.hidden_channels, :]
                x = (x + res_acts) * x_mask
                output = output + res_skip_acts[:, self.hidden_channels :, :]
            else:
                output = output + res_skip_acts
        return output * x_mask

    def remove_weight_norm(self):
        if self.gin_channels != 0:
            torch.nn.utils.remove_weight_norm(self.cond_layer)
        for l in self.in_layers:
            torch.nn.utils.remove_weight_norm(l)
        for l in self.res_skip_layers:
            torch.nn.utils.remove_weight_norm(l)


class ResBlock1(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
        super(ResBlock1, self).__init__()
        self.convs1 = nn.ModuleList(
            [
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[0],
                        padding=get_padding(kernel_size, dilation[0]),
                    )
                ),
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[1],
                        padding=get_padding(kernel_size, dilation[1]),
                    )
                ),
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[2],
                        padding=get_padding(kernel_size, dilation[2]),
                    )
                ),
            ]
        )
        self.convs1.apply(init_weights)

        self.convs2 = nn.ModuleList(
            [
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
            ]
        )
        self.convs2.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c1, c2 in zip(self.convs1, self.convs2):
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c1(xt)
            xt = F.leaky_relu(xt, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c2(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs1:
            remove_weight_norm(l)
        for l in self.convs2:
            remove_weight_norm(l)


class ResBlock2(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
        super(ResBlock2, self).__init__()
        self.convs = nn.ModuleList(
            [
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[0],
                        padding=get_padding(kernel_size, dilation[0]),
                    )
                ),
                weight_norm(
                    Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[1],
                        padding=get_padding(kernel_size, dilation[1]),
                    )
                ),
            ]
        )
        self.convs.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c in self.convs:
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs:
            remove_weight_norm(l)


class Log(nn.Module):
    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
            logdet = torch.sum(-y, [1, 2])
            return y, logdet
        else:
            x = torch.exp(x) * x_mask
            return x


class Flip(nn.Module):
    def forward(self, x, *args, reverse=False, **kwargs):
        x = torch.flip(x, [1])
        if not reverse:
            logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
            return x, logdet
        else:
            return x


class ElementwiseAffine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channels = channels
        self.m = nn.Parameter(torch.zeros(channels, 1))
        self.logs = nn.Parameter(torch.zeros(channels, 1))

    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = self.m + torch.exp(self.logs) * x
            y = y * x_mask
            logdet = torch.sum(self.logs * x_mask, [1, 2])
            return y, logdet
        else:
            x = (x - self.m) * torch.exp(-self.logs) * x_mask
            return x


class ResidualCouplingLayer(nn.Module):
    def __init__(
        self,
        channels,
        hidden_channels,
        kernel_size,
        dilation_rate,
        n_layers,
        p_dropout=0,
        gin_channels=0,
        mean_only=False,
    ):
        assert channels % 2 == 0, "channels should be divisible by 2"
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.half_channels = channels // 2
        self.mean_only = mean_only

        self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
        self.enc = WN(
            hidden_channels,
            kernel_size,
            dilation_rate,
            n_layers,
            p_dropout=p_dropout,
            gin_channels=gin_channels,
        )
        self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
        self.post.weight.data.zero_()
        self.post.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
        h = self.pre(x0) * x_mask
        h = self.enc(h, x_mask, g=g)
        stats = self.post(h) * x_mask
        if not self.mean_only:
            m, logs = torch.split(stats, [self.half_channels] * 2, 1)
        else:
            m = stats
            logs = torch.zeros_like(m)

        if not reverse:
            x1 = m + x1 * torch.exp(logs) * x_mask
            x = torch.cat([x0, x1], 1)
            logdet = torch.sum(logs, [1, 2])
            return x, logdet
        else:
            x1 = (x1 - m) * torch.exp(-logs) * x_mask
            x = torch.cat([x0, x1], 1)
            return x

    def remove_weight_norm(self):
        self.enc.remove_weight_norm()


class ConvFlow(nn.Module):
    def __init__(
        self,
        in_channels,
        filter_channels,
        kernel_size,
        n_layers,
        num_bins=10,
        tail_bound=5.0,
    ):
        super().__init__()
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.num_bins = num_bins
        self.tail_bound = tail_bound
        self.half_channels = in_channels // 2

        self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
        self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
        self.proj = nn.Conv1d(
            filter_channels, self.half_channels * (num_bins * 3 - 1), 1
        )
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
        h = self.pre(x0)
        h = self.convs(h, x_mask, g=g)
        h = self.proj(h) * x_mask

        b, c, t = x0.shape
        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, cx?, t] -> [b, c, t, ?]

        unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
        unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
            self.filter_channels
        )
        unnormalized_derivatives = h[..., 2 * self.num_bins :]

        x1, logabsdet = piecewise_rational_quadratic_transform(
            x1,
            unnormalized_widths,
            unnormalized_heights,
            unnormalized_derivatives,
            inverse=reverse,
            tails="linear",
            tail_bound=self.tail_bound,
        )

        x = torch.cat([x0, x1], 1) * x_mask
        logdet = torch.sum(logabsdet * x_mask, [1, 2])
        if not reverse:
            return x, logdet
        else:
            return x
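A quick sanity sketch for the flow layers defined above, using ElementwiseAffine because it depends only on torch (no commons import needed): the reverse pass must undo the forward pass, and the forward pass must return one summed log-determinant per batch item.

# Minimal round-trip check for the ElementwiseAffine flow defined above.
import torch

layer = ElementwiseAffine(channels=4)
with torch.no_grad():  # randomize the parameters so the check is non-trivial
    layer.m.normal_()
    layer.logs.uniform_(-1, 1)

x = torch.randn(2, 4, 7)      # [batch, channels, time]
x_mask = torch.ones(2, 1, 7)  # no padded frames

y, logdet = layer(x, x_mask)            # forward: y = (m + exp(logs) * x) * mask
x_rec = layer(y, x_mask, reverse=True)  # inverse recovers x

assert torch.allclose(x_rec, x, atol=1e-5)
assert logdet.shape == (2,)             # summed log|det| per batch item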
spaces/Bart92/RVC_HF/tools/infer/infer-pm-index256.py
DELETED
@@ -1,202 +0,0 @@
"""

Retrieval against the source features
"""
import os
import logging

logger = logging.getLogger(__name__)

import parselmouth
import torch

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# import torchcrepe
from time import time as ttime

# import pyworld
import librosa
import numpy as np
import soundfile as sf
import torch.nn.functional as F
from fairseq import checkpoint_utils

# from models import SynthesizerTrn256#hifigan_nonsf
# from lib.infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf
from infer.lib.infer_pack.models import (
    SynthesizerTrnMs256NSFsid as SynthesizerTrn256,
)  # hifigan_nsf
from scipy.io import wavfile

# from lib.infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf
# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf
# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = r"E:\codes\py39\vits_vc_gpu_train\assets\hubert\hubert_base.pt"  #
logger.info("Load model(s) from {}".format(model_path))
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    [model_path],
    suffix="",
)
model = models[0]
model = model.to(device)
model = model.half()
model.eval()

# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256
# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256
net_g = SynthesizerTrn256(
    1025,
    32,
    192,
    192,
    768,
    2,
    6,
    3,
    0,
    "1",
    [3, 7, 11],
    [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    [10, 10, 2, 2],
    512,
    [16, 16, 4, 4],
    183,
    256,
    is_half=True,
)  # hifigan#512#256#no_dropout
# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3
# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr
#
# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms
# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2

# weights=torch.load("infer/ft-mi_1k-noD.pt")
# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt")
# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt")
# weights=torch.load("infer/ft-mi-sim1k.pt")
weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt")
logger.debug(net_g.load_state_dict(weights, strict=True))

net_g.eval().to(device)
net_g.half()


def get_f0(x, p_len, f0_up_key=0):
    time_step = 160 / 16000 * 1000
    f0_min = 50
    f0_max = 1100
    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
    f0_mel_max = 1127 * np.log(1 + f0_max / 700)

    f0 = (
        parselmouth.Sound(x, 16000)
        .to_pitch_ac(
            time_step=time_step / 1000,
            voicing_threshold=0.6,
            pitch_floor=f0_min,
            pitch_ceiling=f0_max,
        )
        .selected_array["frequency"]
    )

    pad_size = (p_len - len(f0) + 1) // 2
    if pad_size > 0 or p_len - len(f0) - pad_size > 0:
        f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
    f0 *= pow(2, f0_up_key / 12)
    f0bak = f0.copy()

    f0_mel = 1127 * np.log(1 + f0 / 700)
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
        f0_mel_max - f0_mel_min
    ) + 1
    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > 255] = 255
    # f0_mel[f0_mel > 188] = 188
    f0_coarse = np.rint(f0_mel).astype(np.int32)
    return f0_coarse, f0bak


import faiss

index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index")
big_npy = np.load("infer/big_src_feature_mi.npy")
ta0 = ta1 = ta2 = 0
for idx, name in enumerate(
    [
        "冬之花clip1.wav",
    ]
):  ##
    wav_path = "todo-songs/%s" % name  #
    f0_up_key = -2  #
    audio, sampling_rate = sf.read(wav_path)
    if len(audio.shape) > 1:
        audio = librosa.to_mono(audio.transpose(1, 0))
    if sampling_rate != 16000:
        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)

    feats = torch.from_numpy(audio).float()
    if feats.dim() == 2:  # double channels
        feats = feats.mean(-1)
    assert feats.dim() == 1, feats.dim()
    feats = feats.view(1, -1)
    padding_mask = torch.BoolTensor(feats.shape).fill_(False)
    inputs = {
        "source": feats.half().to(device),
        "padding_mask": padding_mask.to(device),
        "output_layer": 9,  # layer 9
    }
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = ttime()
    with torch.no_grad():
        logits = model.extract_features(**inputs)
        feats = model.final_proj(logits[0])

    #### index-based retrieval
    npy = feats[0].cpu().numpy().astype("float32")
    D, I = index.search(npy, 1)
    feats = (
        torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device)
    )

    feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t1 = ttime()
    # p_len = min(feats.shape[1],10000,pitch.shape[0])  # too large would OOM the GPU
    p_len = min(feats.shape[1], 10000)  #
    pitch, pitchf = get_f0(audio, p_len, f0_up_key)
    p_len = min(feats.shape[1], 10000, pitch.shape[0])  # too large would OOM the GPU
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t2 = ttime()
    feats = feats[:, :p_len, :]
    pitch = pitch[:p_len]
    pitchf = pitchf[:p_len]
    p_len = torch.LongTensor([p_len]).to(device)
    pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
    sid = torch.LongTensor([0]).to(device)
    pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
    with torch.no_grad():
        audio = (
            net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
            .data.cpu()
            .float()
            .numpy()
        )  # nsf
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t3 = ttime()
    ta0 += t1 - t0
    ta1 += t2 - t1
    ta2 += t3 - t2
    # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)##
    # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)##
|
198 |
-
# wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)##
|
199 |
-
wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio) ##
|
200 |
-
|
201 |
-
|
202 |
-
logger.debug("%.2fs %.2fs %.2fs", ta0, ta1, ta2) #
spaces/Benson/text-generation/Examples/Baixar Mortal Kombat Trilogy Apk.md
DELETED
@@ -1,53 +0,0 @@

<br />
<h1>Reckless Getaway 2 2.2 6 Mod Apk: A Fun and Exciting Racing Game</h1>
<h2>Introduction</h2>
<p>Do you love racing games packed with action, adventure, and excitement? Do you want to feel the thrill of escaping the police and other enemies while driving fast and furious? If so, you should try Reckless Getaway 2, a popular racing game that will keep you on the edge of your seat.</p>
<h3>What is Reckless Getaway 2?</h3>
<p>Reckless Getaway 2 is a racing game developed by Pixelbite, the creators of other hit games such as Space Marshals and Xenowerk. In this game, you play a bank robber who has to escape the police and other rivals after a successful heist. You have to drive through different environments, such as city streets, highways, deserts, and mountains, while avoiding obstacles, traffic, and bullets. You can also perform stunts, smash cars, and collect coins along the way.</p>
<h2>baixar mortal kombat trilogy apk</h2><br /><p><b><b>Download Zip</b> ✦ <a href="https://bltlly.com/2v6K1O">https://bltlly.com/2v6K1O</a></b></p><br /><br />
<h3>What is the mod apk version?</h3>
<p>The mod apk version of Reckless Getaway 2 is a modified version of the original game that gives you access to unlimited money and unlocked features. With this version, you can buy any car you want, upgrade it, and customize it to your liking. You can also explore all the maps and modes without restrictions. On top of that, you can enjoy the game without annoying ads.</p>
<h3>Why should you play it?</h3>
<p>You should play Reckless Getaway 2 mod apk because it is a fun and exciting racing game that will test your skills and reflexes. You will love the fast-paced action, realistic physics, colorful graphics, and catchy sound effects. You will also enjoy the variety of cars, maps, modes, and missions that will keep you entertained for hours. Plus, you will have an edge over other players with unlimited money and unlocked features.</p>
<h2>Features of Reckless Getaway 2 2.2 6 Mod Apk</h2>
<h3>Unlimited money</h3>

<h3>Unlocked cars and maps</h3>
<p>Another great feature of Reckless Getaway 2 mod apk is that it unlocks all the cars and maps in the game. There are more than 50 cars to choose from, each with its own characteristics and abilities. Some cars are faster, some are more durable, some are more agile, and some have special features such as nitro boost or rocket launchers. You can also unlock all the maps in the game, which include different environments such as city streets, highways, deserts, mountains, and more. Each map has its own challenges and opportunities for stunts and hits.</p>
<h3>No ads</h3>
<p>Another benefit of Reckless Getaway 2 mod apk is that it removes all ads from the game. Ads can be annoying and distracting, especially when you are in the middle of a chase or a mission. They can also slow down the game and consume your data. With Reckless Getaway 2 mod apk, you can enjoy the game without interruptions or issues.</p>
<h3>High-quality graphics and sound</h3>
<p>Reckless Getaway 2 mod apk also improves the game's graphics and sound quality. The game has colorful, detailed graphics that create a realistic and immersive experience. It also has dynamic, realistic sound effects that match the action and the environment. You can hear the engine roar, the tires screech, the cars crash, and the bullets fly. You can also hear the catchy, upbeat music that adds to the fun and excitement of the game.</p>
<h3>Easy controls and gameplay</h3>
<p>Reckless Getaway 2 mod apk also makes the controls and gameplay easy and smooth. The game has simple, intuitive controls that let you steer, brake, accelerate, and shoot with ease. You can also switch between different camera angles to get a better view of the action. The game also has a user-friendly interface that shows your score, your health, your coins, and your objectives. There is also a tutorial mode that teaches you the basics of the game.</p>

<h3>Step 1: Download the mod apk file from a trusted source</h3>
<p>The first step to download and install Reckless Getaway 2 mod apk is to find a reliable, safe source that provides the mod apk file. You can search online for Reckless Getaway 2 mod apk or use the link below to download it directly.</p>
<p><a href=">Reckless Getaway 2 2.2 6 Mod Apk Download Link</a></p>
<p></p>
<h3>Step 2: Enable unknown sources in your device settings</h3>
<p>The second step to download and install Reckless Getaway 2 mod apk is to enable unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.</p>
<h3>Step 3: Install the mod apk file and launch the game</h3>
<p>The third step to download and install Reckless Getaway 2 mod apk is to install the mod apk file and launch the game. To do this, go to your file manager, find the downloaded mod apk file, and tap on it to install it. Once the installation is complete, you can open the game and enjoy it with unlimited money and unlocked features.</p>
<h3>Step 4: Enjoy the game with unlimited money and unlocked features</h3>
<p>The final step to download and install Reckless Getaway 2 mod apk is to enjoy the game with unlimited money and unlocked features. You can now buy any car you want, upgrade it, customize it, and explore all the maps and modes without restrictions. You can also play without ads or interruptions.</p>
<h2>Conclusion</h2>
<h3>Summary of the main points</h3>

<h3>Call to action</h3>
<p>If you are looking for a racing game full of action, adventure, and excitement, you should download Reckless Getaway 2 mod apk today. You won't regret it. It is one of the best racing games out there and will challenge your skills and reflexes. What are you waiting for? Download Reckless Getaway 2 mod apk now and have fun!</p>
<h4>Frequently asked questions</h4>
<ul>
<li><b>Is Reckless Getaway 2 mod apk safe to use?</b></li>
<p>Yes, Reckless Getaway 2 mod apk is safe to use as long as you download it from a trusted source. It does not contain any viruses or malware that could harm your device or data.</p>
<li><b>Do I need to root my device to use Reckless Getaway 2 mod apk?</b></li>
<p>No, you do not need to root your device to use Reckless Getaway 2 mod apk. It works on both rooted and unrooted devices.</p>
<li><b>What are the minimum requirements to play Reckless Getaway 2 mod apk?</b></li>
<p>The minimum requirements to play Reckless Getaway 2 mod apk are: - Android 4.3 or higher - 1 GB of RAM - 100 MB of free storage space - A stable Internet connection</p>
<li><b>Can I play Reckless Getaway 2 mod apk offline?</b></li>
<p>Yes, you can play Reckless Getaway 2 mod apk offline. However, you will not be able to access some features that require an Internet connection, such as leaderboards and achievements.</p>
<li><b>Can I play Reckless Getaway 2 mod apk with my friends?</b></li>
<p>Yes, you can play Reckless Getaway 2 mod apk with your friends. You can challenge them to beat your high scores and see who the best driver is. You can also share screenshots and videos of your gameplay with them.</p>
</ul></p> 64aa2da5cf<br />
<br />
<br />
spaces/Benson/text-generation/Examples/Coche Deriva Carreras Mod Apk 5play.md
DELETED
@@ -1,48 +0,0 @@


<h1>CarX Drift Racing Mod APK 5play: A Guide for Car Racing Enthusiasts</h1>
<p>If you are a fan of car racing games, you may have heard of CarX Drift Racing, one of the most popular and realistic drifting games on Android. But did you know that you can enjoy this game even more with CarX Drift Racing Mod APK 5play? In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, and how to download and install it on your device.</p>
<h2>What is CarX Drift Racing?</h2>
<p>CarX Drift Racing is a racing game developed by CarX Technologies, a company that specializes in creating realistic car physics simulations. The game lets you experience the thrill of drifting, a driving technique where the driver intentionally oversteers the car so that it slides sideways. You can choose from a variety of cars and tracks, customize your vehicle's appearance and performance, and compete against other players online or offline.</p>
<h2>coche deriva carreras mod apk 5play</h2><br /><p><b><b>Download</b> 🆓 <a href="https://bltlly.com/2v6Jsx">https://bltlly.com/2v6Jsx</a></b></p><br /><br />
<h3>Features of CarX Drift Racing</h3>
<p>Some of the features that make CarX Drift Racing stand out from other racing games are:</p>
<h4>Realistic physics and graphics</h4>
<p>The game uses a sophisticated physics engine that simulates the behavior of real cars on different surfaces and in different conditions. You can feel the difference between front-wheel-drive, rear-wheel-drive, and all-wheel-drive vehicles, as well as the impact of tire pressure, suspension, and engine power on your drifting performance. The game also features stunning graphics that create an immersive environment for your racing adventures.</p>
<h4>Customizable cars and tracks</h4>

<h4>Online and offline modes</h4>
<p>The game lets you play online with other players from around the world, or offline against AI opponents. You can join or create your own lobby, chat with other players, and challenge them to drift battles or tournaments. You can also play solo in career mode, where you can complete various missions and earn money and reputation. You can also practice your skills in free ride mode, where you can explore the tracks at your own pace.</p>
<h2>What is CarX Drift Racing Mod APK 5play?</h2>
<p>CarX Drift Racing Mod APK 5play is a modified version of the original game that gives you access to unlimited resources and features that are not available in the official version. With this mod apk, you can enjoy the game without limitations or restrictions.</p>
<h3>Benefits of CarX Drift Racing Mod APK 5play</h3>
<p>Some of the benefits you can get from using CarX Drift Racing Mod APK 5play are:</p>
<h4>Unlimited money and gold</h4>
<p>With this mod apk, you don't have to worry about running out of money or gold in the game. You can use them to buy and upgrade any car or track you want, without having to complete any missions or watch any ads. You can also use them to unlock premium features, such as VIP status, extra slots, and exclusive cars.</p>
<p></p>
<h4>All cars and tracks unlocked</h4>
<p>With this mod apk, you don't have to wait or grind to unlock all the cars and tracks in the game. You can access all of them from the start and enjoy the game's variety and diversity. You can also try different combinations of cars and tracks and find the ones that suit your style and preference.</p>
<h4>No ads or root required</h4>

<h2>How to download and install CarX Drift Racing Mod APK 5play?</h2>
<p>If you are interested in downloading and installing CarX Drift Racing Mod APK 5play on your device, you can follow these simple steps:</p>
<h3>Steps to download and install CarX Drift Racing Mod APK 5play</h3>
<h4>Step 1: Enable unknown sources on your device</h4>
<p>Before you can install any mod apk file on your device, you need to enable unknown sources in your security settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on.</p>
<h4>Step 2: Download the mod apk file from the link provided</h4>
<p>Next, you need to download the mod apk file from a reliable source. You can use the link provided below to download the latest version of CarX Drift Racing Mod APK 5play. The file size is about 500 MB, so make sure you have enough storage space and a stable Internet connection.</p>
<p><a href="">Download CarX Drift Racing Mod APK 5play</a></p>
<h4>Step 3: Install the mod apk file and enjoy the game</h4>
<p>Finally, you need to install the mod apk file on your device. To do this, find the downloaded file in your file manager and tap on it. Follow the on-screen instructions and wait for the installation process to finish. Once done, you can launch the game from the app drawer and enjoy the unlimited features of CarX Drift Racing Mod APK 5play.</p>
<h2>Conclusion</h2>
<p>CarX Drift Racing is a fun, realistic drifting game that will keep you entertained for hours. With CarX Drift Racing Mod APK 5play, you can improve your gaming experience by getting unlimited money and gold, unlocking all cars and tracks, and removing ads and the root requirement. You can download and install this mod apk easily by following the steps above. So what are you waiting for? Download CarX Drift Racing Mod APK 5play today and drift away!</p>
<h3>Frequently asked questions</h3>

<table>
<tr><td><b>Q: Is CarX Drift Racing Mod APK 5play safe to use?</b></td><td><b>A: Yes, CarX Drift Racing Mod APK 5play is safe to use as long as you download it from a trusted source. It does not contain any viruses or malware that could harm your device or data.</b></td></tr>
<tr><td><b>Q: Does CarX Drift Racing Mod APK 5play work on all devices?</b></td><td><b>A: Yes, CarX Drift Racing Mod APK 5play works on all Android devices that support the original game. However, some devices may have compatibility or performance issues due to different specifications.</b></td></tr>
<tr><td><b>Q: Can I play online with CarX Drift Racing Mod APK 5play?</b></td><td><b>A: Yes, you can play online with CarX Drift Racing Mod APK 5play, but you may face some difficulties or risks. For example, you may not be able to join some lobbies or tournaments, or the game developers may ban you for using a modified version of the game.</b></td></tr>
<tr><td><b>Q: Can I update CarX Drift Racing Mod APK 5play?</b></td><td><b>A: Yes, you can update CarX Drift Racing Mod APK 5play, but you may have to uninstall the previous version and install the new one. You may also lose your progress and data if you update the mod apk, so it is recommended to back up your data before updating.</b></td></tr>
<tr><td><b>Q: How can I contact the developers of CarX Drift Racing Mod APK 5play?</b></td><td><b>A: You can contact the developers of CarX Drift Racing Mod APK 5play by visiting their website or social media pages. However, they may not respond to your queries or complaints, since they are not affiliated with the official game developers.</b></td></tr>
</table></p> 64aa2da5cf<br />
<br />
<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/compat.py
DELETED
@@ -1,63 +0,0 @@

"""Stuff that differs in different Python versions and platform
distributions."""

import logging
import os
import sys

__all__ = ["get_path_uid", "stdlib_pkgs", "WINDOWS"]


logger = logging.getLogger(__name__)


def has_tls() -> bool:
    try:
        import _ssl  # noqa: F401 # ignore unused

        return True
    except ImportError:
        pass

    from pip._vendor.urllib3.util import IS_PYOPENSSL

    return IS_PYOPENSSL


def get_path_uid(path: str) -> int:
    """
    Return path's uid.

    Does not follow symlinks:
        https://github.com/pypa/pip/pull/935#discussion_r5307003

    Placed this function in compat due to differences on AIX and
    Jython, that should eventually go away.

    :raises OSError: When path is a symlink or can't be read.
    """
    if hasattr(os, "O_NOFOLLOW"):
        fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
        file_uid = os.fstat(fd).st_uid
        os.close(fd)
    else:  # AIX and Jython
        # WARNING: time of check vulnerability, but best we can do w/o NOFOLLOW
        if not os.path.islink(path):
            # older versions of Jython don't have `os.fstat`
            file_uid = os.stat(path).st_uid
        else:
            # raise OSError for parity with os.O_NOFOLLOW above
            raise OSError(f"{path} is a symlink; Will not return uid for symlinks")
    return file_uid


# packages in the stdlib that may have installation metadata, but should not be
# considered 'installed'. this theoretically could be determined based on
# dist.location (py27:`sysconfig.get_paths()['stdlib']`,
# py26:sysconfig.get_config_vars('LIBDEST')), but fear platform variation may
# make this ineffective, so hard-coding
stdlib_pkgs = {"python", "wsgiref", "argparse"}


# windows detection, covers cpython and ironpython
WINDOWS = sys.platform.startswith("win") or (sys.platform == "cli" and os.name == "nt")
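
Note: a quick usage sketch for the helpers above. pip's `_internal` namespace is not public API, so this is illustration only; the uid lookup shown takes the POSIX `O_NOFOLLOW` branch:

# Illustration only: pip._internal is not a supported public API.
import tempfile

from pip._internal.utils.compat import WINDOWS, get_path_uid, has_tls

with tempfile.NamedTemporaryFile() as f:
    print(get_path_uid(f.name))  # uid of the file owner, without following symlinks
print(WINDOWS)                   # False on Linux/macOS, True on Windows
print(has_tls())                 # True on any normal CPython build with ssl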
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/discovery.py
DELETED
@@ -1,600 +0,0 @@

"""Automatic discovery of Python modules and packages (for inclusion in the
distribution) and other config values.

For the purposes of this module, the following nomenclature is used:

- "src-layout": a directory representing a Python project that contains a "src"
  folder. Everything under the "src" folder is meant to be included in the
  distribution when packaging the project. Example::

    .
    ├── tox.ini
    ├── pyproject.toml
    └── src/
        └── mypkg/
            ├── __init__.py
            ├── mymodule.py
            └── my_data_file.txt

- "flat-layout": a Python project that does not use "src-layout" but instead
  have a directory under the project root for each package::

    .
    ├── tox.ini
    ├── pyproject.toml
    └── mypkg/
        ├── __init__.py
        ├── mymodule.py
        └── my_data_file.txt

- "single-module": a project that contains a single Python script direct under
  the project root (no directory used)::

    .
    ├── tox.ini
    ├── pyproject.toml
    └── mymodule.py

"""

import itertools
import os
from fnmatch import fnmatchcase
from glob import glob
from pathlib import Path
from typing import (
    TYPE_CHECKING,
    Callable,
    Dict,
    Iterable,
    Iterator,
    List,
    Mapping,
    Optional,
    Tuple,
    Union
)

import _distutils_hack.override  # noqa: F401

from distutils import log
from distutils.util import convert_path

_Path = Union[str, os.PathLike]
_Filter = Callable[[str], bool]
StrIter = Iterator[str]

chain_iter = itertools.chain.from_iterable

if TYPE_CHECKING:
    from setuptools import Distribution  # noqa


def _valid_name(path: _Path) -> bool:
    # Ignore invalid names that cannot be imported directly
    return os.path.basename(path).isidentifier()


class _Finder:
    """Base class that exposes functionality for module/package finders"""

    ALWAYS_EXCLUDE: Tuple[str, ...] = ()
    DEFAULT_EXCLUDE: Tuple[str, ...] = ()

    @classmethod
    def find(
        cls,
        where: _Path = '.',
        exclude: Iterable[str] = (),
        include: Iterable[str] = ('*',)
    ) -> List[str]:
        """Return a list of all Python items (packages or modules, depending on
        the finder implementation) found within directory 'where'.

        'where' is the root directory which will be searched.
        It should be supplied as a "cross-platform" (i.e. URL-style) path;
        it will be converted to the appropriate local path syntax.

        'exclude' is a sequence of names to exclude; '*' can be used
        as a wildcard in the names.
        When finding packages, 'foo.*' will exclude all subpackages of 'foo'
        (but not 'foo' itself).

        'include' is a sequence of names to include.
        If it's specified, only the named items will be included.
        If it's not specified, all found items will be included.
        'include' can contain shell style wildcard patterns just like
        'exclude'.
        """

        exclude = exclude or cls.DEFAULT_EXCLUDE
        return list(
            cls._find_iter(
                convert_path(str(where)),
                cls._build_filter(*cls.ALWAYS_EXCLUDE, *exclude),
                cls._build_filter(*include),
            )
        )

    @classmethod
    def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
        raise NotImplementedError

    @staticmethod
    def _build_filter(*patterns: str) -> _Filter:
        """
        Given a list of patterns, return a callable that will be true only if
        the input matches at least one of the patterns.
        """
        return lambda name: any(fnmatchcase(name, pat) for pat in patterns)


class PackageFinder(_Finder):
    """
    Generate a list of all Python packages found within a directory
    """

    ALWAYS_EXCLUDE = ("ez_setup", "*__pycache__")

    @classmethod
    def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
        """
        All the packages found in 'where' that pass the 'include' filter, but
        not the 'exclude' filter.
        """
        for root, dirs, files in os.walk(str(where), followlinks=True):
            # Copy dirs to iterate over it, then empty dirs.
            all_dirs = dirs[:]
            dirs[:] = []

            for dir in all_dirs:
                full_path = os.path.join(root, dir)
                rel_path = os.path.relpath(full_path, where)
                package = rel_path.replace(os.path.sep, '.')

                # Skip directory trees that are not valid packages
                if '.' in dir or not cls._looks_like_package(full_path, package):
                    continue

                # Should this package be included?
                if include(package) and not exclude(package):
                    yield package

                # Keep searching subdirectories, as there may be more packages
                # down there, even if the parent was excluded.
                dirs.append(dir)

    @staticmethod
    def _looks_like_package(path: _Path, _package_name: str) -> bool:
        """Does a directory look like a package?"""
        return os.path.isfile(os.path.join(path, '__init__.py'))


class PEP420PackageFinder(PackageFinder):
    @staticmethod
    def _looks_like_package(_path: _Path, _package_name: str) -> bool:
        return True


class ModuleFinder(_Finder):
    """Find isolated Python modules.
    This function will **not** recurse subdirectories.
    """

    @classmethod
    def _find_iter(cls, where: _Path, exclude: _Filter, include: _Filter) -> StrIter:
        for file in glob(os.path.join(where, "*.py")):
            module, _ext = os.path.splitext(os.path.basename(file))

            if not cls._looks_like_module(module):
                continue

            if include(module) and not exclude(module):
                yield module

    _looks_like_module = staticmethod(_valid_name)


# We have to be extra careful in the case of flat layout to not include files
# and directories not meant for distribution (e.g. tool-related)


class FlatLayoutPackageFinder(PEP420PackageFinder):
    _EXCLUDE = (
        "ci",
        "bin",
        "doc",
        "docs",
        "documentation",
        "manpages",
        "news",
        "changelog",
        "test",
        "tests",
        "unit_test",
        "unit_tests",
        "example",
        "examples",
        "scripts",
        "tools",
        "util",
        "utils",
        "python",
        "build",
        "dist",
        "venv",
        "env",
        "requirements",
        # ---- Task runners / Build tools ----
        "tasks",  # invoke
        "fabfile",  # fabric
        "site_scons",  # SCons
        # ---- Other tools ----
        "benchmark",
        "benchmarks",
        "exercise",
        "exercises",
        # ---- Hidden directories/Private packages ----
        "[._]*",
    )

    DEFAULT_EXCLUDE = tuple(chain_iter((p, f"{p}.*") for p in _EXCLUDE))
    """Reserved package names"""

    @staticmethod
    def _looks_like_package(_path: _Path, package_name: str) -> bool:
        names = package_name.split('.')
        # Consider PEP 561
        root_pkg_is_valid = names[0].isidentifier() or names[0].endswith("-stubs")
        return root_pkg_is_valid and all(name.isidentifier() for name in names[1:])


class FlatLayoutModuleFinder(ModuleFinder):
    DEFAULT_EXCLUDE = (
        "setup",
        "conftest",
        "test",
        "tests",
        "example",
        "examples",
        "build",
        # ---- Task runners ----
        "toxfile",
        "noxfile",
        "pavement",
        "dodo",
        "tasks",
        "fabfile",
        # ---- Other tools ----
        "[Ss][Cc]onstruct",  # SCons
        "conanfile",  # Connan: C/C++ build tool
        "manage",  # Django
        "benchmark",
        "benchmarks",
        "exercise",
        "exercises",
        # ---- Hidden files/Private modules ----
        "[._]*",
    )
    """Reserved top-level module names"""


def _find_packages_within(root_pkg: str, pkg_dir: _Path) -> List[str]:
    nested = PEP420PackageFinder.find(pkg_dir)
    return [root_pkg] + [".".join((root_pkg, n)) for n in nested]


class ConfigDiscovery:
    """Fill-in metadata and options that can be automatically derived
    (from other metadata/options, the file system or conventions)
    """

    def __init__(self, distribution: "Distribution"):
        self.dist = distribution
        self._called = False
        self._disabled = False
        self._skip_ext_modules = False

    def _disable(self):
        """Internal API to disable automatic discovery"""
        self._disabled = True

    def _ignore_ext_modules(self):
        """Internal API to disregard ext_modules.

        Normally auto-discovery would not be triggered if ``ext_modules`` are set
        (this is done for backward compatibility with existing packages relying on
        ``setup.py`` or ``setup.cfg``). However, ``setuptools`` can call this function
        to ignore given ``ext_modules`` and proceed with the auto-discovery if
        ``packages`` and ``py_modules`` are not given (e.g. when using pyproject.toml
        metadata).
        """
        self._skip_ext_modules = True

    @property
    def _root_dir(self) -> _Path:
        # The best is to wait until `src_root` is set in dist, before using _root_dir.
        return self.dist.src_root or os.curdir

    @property
    def _package_dir(self) -> Dict[str, str]:
        if self.dist.package_dir is None:
            return {}
        return self.dist.package_dir

    def __call__(self, force=False, name=True, ignore_ext_modules=False):
        """Automatically discover missing configuration fields
        and modifies the given ``distribution`` object in-place.

        Note that by default this will only have an effect the first time the
        ``ConfigDiscovery`` object is called.

        To repeatedly invoke automatic discovery (e.g. when the project
        directory changes), please use ``force=True`` (or create a new
        ``ConfigDiscovery`` instance).
        """
        if force is False and (self._called or self._disabled):
            # Avoid overhead of multiple calls
            return

        self._analyse_package_layout(ignore_ext_modules)
        if name:
            self.analyse_name()  # depends on ``packages`` and ``py_modules``

        self._called = True

    def _explicitly_specified(self, ignore_ext_modules: bool) -> bool:
        """``True`` if the user has specified some form of package/module listing"""
        ignore_ext_modules = ignore_ext_modules or self._skip_ext_modules
        ext_modules = not (self.dist.ext_modules is None or ignore_ext_modules)
        return (
            self.dist.packages is not None
            or self.dist.py_modules is not None
            or ext_modules
            or hasattr(self.dist, "configuration") and self.dist.configuration
            # ^ Some projects use numpy.distutils.misc_util.Configuration
        )

    def _analyse_package_layout(self, ignore_ext_modules: bool) -> bool:
        if self._explicitly_specified(ignore_ext_modules):
            # For backward compatibility, just try to find modules/packages
            # when nothing is given
            return True

        log.debug(
            "No `packages` or `py_modules` configuration, performing "
            "automatic discovery."
        )

        return (
            self._analyse_explicit_layout()
            or self._analyse_src_layout()
            # flat-layout is the trickiest for discovery so it should be last
            or self._analyse_flat_layout()
        )

    def _analyse_explicit_layout(self) -> bool:
        """The user can explicitly give a package layout via ``package_dir``"""
        package_dir = self._package_dir.copy()  # don't modify directly
        package_dir.pop("", None)  # This falls under the "src-layout" umbrella
        root_dir = self._root_dir

        if not package_dir:
            return False

        log.debug(f"`explicit-layout` detected -- analysing {package_dir}")
        pkgs = chain_iter(
            _find_packages_within(pkg, os.path.join(root_dir, parent_dir))
            for pkg, parent_dir in package_dir.items()
        )
        self.dist.packages = list(pkgs)
        log.debug(f"discovered packages -- {self.dist.packages}")
        return True

    def _analyse_src_layout(self) -> bool:
        """Try to find all packages or modules under the ``src`` directory
        (or anything pointed by ``package_dir[""]``).

        The "src-layout" is relatively safe for automatic discovery.
        We assume that everything within is meant to be included in the
        distribution.

        If ``package_dir[""]`` is not given, but the ``src`` directory exists,
        this function will set ``package_dir[""] = "src"``.
        """
        package_dir = self._package_dir
        src_dir = os.path.join(self._root_dir, package_dir.get("", "src"))
        if not os.path.isdir(src_dir):
            return False

        log.debug(f"`src-layout` detected -- analysing {src_dir}")
        package_dir.setdefault("", os.path.basename(src_dir))
        self.dist.package_dir = package_dir  # persist eventual modifications
        self.dist.packages = PEP420PackageFinder.find(src_dir)
        self.dist.py_modules = ModuleFinder.find(src_dir)
        log.debug(f"discovered packages -- {self.dist.packages}")
        log.debug(f"discovered py_modules -- {self.dist.py_modules}")
        return True

    def _analyse_flat_layout(self) -> bool:
        """Try to find all packages and modules under the project root.

        Since the ``flat-layout`` is more dangerous in terms of accidentally including
        extra files/directories, this function is more conservative and will raise an
        error if multiple packages or modules are found.

        This assumes that multi-package dists are uncommon and refuse to support that
        use case in order to be able to prevent unintended errors.
        """
        log.debug(f"`flat-layout` detected -- analysing {self._root_dir}")
        return self._analyse_flat_packages() or self._analyse_flat_modules()

    def _analyse_flat_packages(self) -> bool:
        self.dist.packages = FlatLayoutPackageFinder.find(self._root_dir)
        top_level = remove_nested_packages(remove_stubs(self.dist.packages))
        log.debug(f"discovered packages -- {self.dist.packages}")
        self._ensure_no_accidental_inclusion(top_level, "packages")
        return bool(top_level)

    def _analyse_flat_modules(self) -> bool:
        self.dist.py_modules = FlatLayoutModuleFinder.find(self._root_dir)
        log.debug(f"discovered py_modules -- {self.dist.py_modules}")
        self._ensure_no_accidental_inclusion(self.dist.py_modules, "modules")
        return bool(self.dist.py_modules)

    def _ensure_no_accidental_inclusion(self, detected: List[str], kind: str):
        if len(detected) > 1:
            from inspect import cleandoc

            from setuptools.errors import PackageDiscoveryError

            msg = f"""Multiple top-level {kind} discovered in a flat-layout: {detected}.

            To avoid accidental inclusion of unwanted files or directories,
            setuptools will not proceed with this build.

            If you are trying to create a single distribution with multiple {kind}
            on purpose, you should not rely on automatic discovery.
            Instead, consider the following options:

            1. set up custom discovery (`find` directive with `include` or `exclude`)
            2. use a `src-layout`
            3. explicitly set `py_modules` or `packages` with a list of names

            To find more information, look for "package discovery" on setuptools docs.
            """
            raise PackageDiscoveryError(cleandoc(msg))

    def analyse_name(self):
        """The packages/modules are the essential contribution of the author.
        Therefore the name of the distribution can be derived from them.
        """
        if self.dist.metadata.name or self.dist.name:
            # get_name() is not reliable (can return "UNKNOWN")
            return None

        log.debug("No `name` configuration, performing automatic discovery")

        name = (
            self._find_name_single_package_or_module()
            or self._find_name_from_packages()
        )
        if name:
            self.dist.metadata.name = name

    def _find_name_single_package_or_module(self) -> Optional[str]:
        """Exactly one module or package"""
        for field in ('packages', 'py_modules'):
            items = getattr(self.dist, field, None) or []
            if items and len(items) == 1:
                log.debug(f"Single module/package detected, name: {items[0]}")
                return items[0]

        return None

    def _find_name_from_packages(self) -> Optional[str]:
        """Try to find the root package that is not a PEP 420 namespace"""
        if not self.dist.packages:
            return None

        packages = remove_stubs(sorted(self.dist.packages, key=len))
        package_dir = self.dist.package_dir or {}

        parent_pkg = find_parent_package(packages, package_dir, self._root_dir)
        if parent_pkg:
            log.debug(f"Common parent package detected, name: {parent_pkg}")
            return parent_pkg

        log.warn("No parent package detected, impossible to derive `name`")
        return None


def remove_nested_packages(packages: List[str]) -> List[str]:
    """Remove nested packages from a list of packages.

    >>> remove_nested_packages(["a", "a.b1", "a.b2", "a.b1.c1"])
    ['a']
    >>> remove_nested_packages(["a", "b", "c.d", "c.d.e.f", "g.h", "a.a1"])
    ['a', 'b', 'c.d', 'g.h']
    """
    pkgs = sorted(packages, key=len)
    top_level = pkgs[:]
    size = len(pkgs)
    for i, name in enumerate(reversed(pkgs)):
        if any(name.startswith(f"{other}.") for other in top_level):
            top_level.pop(size - i - 1)

    return top_level


def remove_stubs(packages: List[str]) -> List[str]:
    """Remove type stubs (:pep:`561`) from a list of packages.

    >>> remove_stubs(["a", "a.b", "a-stubs", "a-stubs.b.c", "b", "c-stubs"])
    ['a', 'a.b', 'b']
    """
    return [pkg for pkg in packages if not pkg.split(".")[0].endswith("-stubs")]


def find_parent_package(
    packages: List[str], package_dir: Mapping[str, str], root_dir: _Path
) -> Optional[str]:
    """Find the parent package that is not a namespace."""
    packages = sorted(packages, key=len)
    common_ancestors = []
    for i, name in enumerate(packages):
        if not all(n.startswith(f"{name}.") for n in packages[i+1:]):
            # Since packages are sorted by length, this condition is able
            # to find a list of all common ancestors.
            # When there is divergence (e.g. multiple root packages)
            # the list will be empty
            break
        common_ancestors.append(name)

    for name in common_ancestors:
        pkg_path = find_package_path(name, package_dir, root_dir)
        init = os.path.join(pkg_path, "__init__.py")
        if os.path.isfile(init):
            return name

    return None


def find_package_path(
    name: str, package_dir: Mapping[str, str], root_dir: _Path
) -> str:
    """Given a package name, return the path where it should be found on
    disk, considering the ``package_dir`` option.

    >>> path = find_package_path("my.pkg", {"": "root/is/nested"}, ".")
    >>> path.replace(os.sep, "/")
    './root/is/nested/my/pkg'

    >>> path = find_package_path("my.pkg", {"my": "root/is/nested"}, ".")
    >>> path.replace(os.sep, "/")
    './root/is/nested/pkg'

    >>> path = find_package_path("my.pkg", {"my.pkg": "root/is/nested"}, ".")
    >>> path.replace(os.sep, "/")
    './root/is/nested'

    >>> path = find_package_path("other.pkg", {"my.pkg": "root/is/nested"}, ".")
    >>> path.replace(os.sep, "/")
    './other/pkg'
    """
    parts = name.split(".")
    for i in range(len(parts), 0, -1):
        # Look backwards, the most specific package_dir first
        partial_name = ".".join(parts[:i])
        if partial_name in package_dir:
            parent = package_dir[partial_name]
            return os.path.join(root_dir, parent, *parts[i:])

    parent = package_dir.get("") or ""
    return os.path.join(root_dir, *parent.split("/"), *parts)


def construct_package_dir(packages: List[str], package_path: _Path) -> Dict[str, str]:
    parent_pkgs = remove_nested_packages(packages)
    prefix = Path(package_path).parts
    return {pkg: "/".join([*prefix, *pkg.split(".")]) for pkg in parent_pkgs}
spaces/CVH-vn1210/make_hair/minigpt4/processors/blip_processors.py
DELETED
@@ -1,141 +0,0 @@

"""
 Copyright (c) 2022, salesforce.com, inc.
 All rights reserved.
 SPDX-License-Identifier: BSD-3-Clause
 For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""

import re

from minigpt4.common.registry import registry
from minigpt4.processors.base_processor import BaseProcessor
from minigpt4.processors.randaugment import RandomAugment
from omegaconf import OmegaConf
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode


class BlipImageBaseProcessor(BaseProcessor):
    def __init__(self, mean=None, std=None):
        if mean is None:
            mean = (0.48145466, 0.4578275, 0.40821073)
        if std is None:
            std = (0.26862954, 0.26130258, 0.27577711)

        self.normalize = transforms.Normalize(mean, std)


@registry.register_processor("blip_caption")
class BlipCaptionProcessor(BaseProcessor):
    def __init__(self, prompt="", max_words=50):
        self.prompt = prompt
        self.max_words = max_words

    def __call__(self, caption):
        caption = self.prompt + self.pre_caption(caption)

        return caption

    @classmethod
    def from_config(cls, cfg=None):
        if cfg is None:
            cfg = OmegaConf.create()

        prompt = cfg.get("prompt", "")
        max_words = cfg.get("max_words", 50)

        return cls(prompt=prompt, max_words=max_words)

    def pre_caption(self, caption):
        caption = re.sub(
            r"([.!\"()*#:;~])",
            " ",
            caption.lower(),
        )
        caption = re.sub(
            r"\s{2,}",
            " ",
            caption,
        )
        caption = caption.rstrip("\n")
        caption = caption.strip(" ")

        # truncate caption
        caption_words = caption.split(" ")
        if len(caption_words) > self.max_words:
            caption = " ".join(caption_words[: self.max_words])

        return caption


@registry.register_processor("blip2_image_train")
class Blip2ImageTrainProcessor(BlipImageBaseProcessor):
    def __init__(self, image_size=224, mean=None, std=None, min_scale=0.5, max_scale=1.0):
        super().__init__(mean=mean, std=std)

        self.transform = transforms.Compose(
            [
                transforms.RandomResizedCrop(
                    image_size,
                    scale=(min_scale, max_scale),
                    interpolation=InterpolationMode.BICUBIC,
                ),
                transforms.ToTensor(),
                self.normalize,
            ]
        )

    def __call__(self, item):
        return self.transform(item)

    @classmethod
    def from_config(cls, cfg=None):
        if cfg is None:
            cfg = OmegaConf.create()

        image_size = cfg.get("image_size", 224)

        mean = cfg.get("mean", None)
        std = cfg.get("std", None)

        min_scale = cfg.get("min_scale", 0.5)
        max_scale = cfg.get("max_scale", 1.0)

        return cls(
            image_size=image_size,
            mean=mean,
            std=std,
            min_scale=min_scale,
            max_scale=max_scale,
        )


@registry.register_processor("blip2_image_eval")
class Blip2ImageEvalProcessor(BlipImageBaseProcessor):
    def __init__(self, image_size=224, mean=None, std=None):
        super().__init__(mean=mean, std=std)

        self.transform = transforms.Compose(
            [
                transforms.Resize(
                    (image_size, image_size), interpolation=InterpolationMode.BICUBIC
                ),
                transforms.ToTensor(),
                self.normalize,
            ]
        )

    def __call__(self, item):
        return self.transform(item)

    @classmethod
    def from_config(cls, cfg=None):
        if cfg is None:
            cfg = OmegaConf.create()

        image_size = cfg.get("image_size", 224)

        mean = cfg.get("mean", None)
        std = cfg.get("std", None)

        return cls(image_size=image_size, mean=mean, std=std)
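
Note: a usage sketch for the eval processor above. It assumes the module is importable as minigpt4.processors.blip_processors with torch, torchvision, and Pillow installed; the solid-gray image is a stand-in for a real input. The default mean/std are the CLIP normalization constants:

# Usage sketch for Blip2ImageEvalProcessor above.
from PIL import Image

from minigpt4.processors.blip_processors import Blip2ImageEvalProcessor

processor = Blip2ImageEvalProcessor(image_size=224)
img = Image.new("RGB", (640, 480), color="gray")  # stand-in input image
tensor = processor(img)
print(tensor.shape)  # torch.Size([3, 224, 224]), bicubic-resized and normalized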
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/compat.py
DELETED
@@ -1,229 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
"""
Backward compatibility of configs.

Instructions to bump version:
+ It's not needed to bump version if new keys are added.
  It's only needed when backward-incompatible changes happen
  (i.e., some existing keys disappear, or the meaning of a key changes)
+ To bump version, do the following:
    1. Increment _C.VERSION in defaults.py
    2. Add a converter in this file.

       Each ConverterVX has a function "upgrade" which in-place upgrades config from X-1 to X,
       and a function "downgrade" which in-place downgrades config from X to X-1

       In each function, VERSION is left unchanged.

       Each converter assumes that its input has the relevant keys
       (i.e., the input is not a partial config).
    3. Run the tests (test_config.py) to make sure the upgrade & downgrade
       functions are consistent.
"""

import logging
from typing import List, Optional, Tuple

from .config import CfgNode as CN
from .defaults import _C

__all__ = ["upgrade_config", "downgrade_config"]


def upgrade_config(cfg: CN, to_version: Optional[int] = None) -> CN:
    """
    Upgrade a config from its current version to a newer version.

    Args:
        cfg (CfgNode):
        to_version (int): defaults to the latest version.
    """
    cfg = cfg.clone()
    if to_version is None:
        to_version = _C.VERSION

    assert cfg.VERSION <= to_version, "Cannot upgrade from v{} to v{}!".format(
        cfg.VERSION, to_version
    )
    for k in range(cfg.VERSION, to_version):
        converter = globals()["ConverterV" + str(k + 1)]
        converter.upgrade(cfg)
        cfg.VERSION = k + 1
    return cfg


def downgrade_config(cfg: CN, to_version: int) -> CN:
    """
    Downgrade a config from its current version to an older version.

    Args:
        cfg (CfgNode):
        to_version (int):

    Note:
        A general downgrade of arbitrary configs is not always possible due to the
        different functionalities in different versions.
        The purpose of downgrade is only to recover the defaults in old versions,
        allowing it to load an old partial yaml config.
        Therefore, the implementation only needs to fill in the default values
        in the old version when a general downgrade is not possible.
    """
    cfg = cfg.clone()
    assert cfg.VERSION >= to_version, "Cannot downgrade from v{} to v{}!".format(
        cfg.VERSION, to_version
    )
    for k in range(cfg.VERSION, to_version, -1):
        converter = globals()["ConverterV" + str(k)]
        converter.downgrade(cfg)
        cfg.VERSION = k - 1
    return cfg


def guess_version(cfg: CN, filename: str) -> int:
    """
    Guess the version of a partial config where the VERSION field is not specified.
    Returns the version, or the latest if cannot make a guess.

    This makes it easier for users to migrate.
    """
    logger = logging.getLogger(__name__)

    def _has(name: str) -> bool:
        cur = cfg
        for n in name.split("."):
            if n not in cur:
                return False
            cur = cur[n]
        return True

    # Most users' partial configs have "MODEL.WEIGHT", so guess on it
    ret = None
    if _has("MODEL.WEIGHT") or _has("TEST.AUG_ON"):
        ret = 1

    if ret is not None:
        logger.warning("Config '{}' has no VERSION. Assuming it to be v{}.".format(filename, ret))
    else:
        ret = _C.VERSION
        logger.warning(
            "Config '{}' has no VERSION. Assuming it to be compatible with latest v{}.".format(
                filename, ret
            )
        )
    return ret


def _rename(cfg: CN, old: str, new: str) -> None:
    old_keys = old.split(".")
    new_keys = new.split(".")

    def _set(key_seq: List[str], val: str) -> None:
        cur = cfg
        for k in key_seq[:-1]:
            if k not in cur:
                cur[k] = CN()
            cur = cur[k]
        cur[key_seq[-1]] = val

    def _get(key_seq: List[str]) -> CN:
        cur = cfg
        for k in key_seq:
            cur = cur[k]
        return cur

    def _del(key_seq: List[str]) -> None:
        cur = cfg
        for k in key_seq[:-1]:
            cur = cur[k]
        del cur[key_seq[-1]]
        if len(cur) == 0 and len(key_seq) > 1:
            _del(key_seq[:-1])

    _set(new_keys, _get(old_keys))
    _del(old_keys)


class _RenameConverter:
    """
    A converter that handles simple rename.
    """

    RENAME: List[Tuple[str, str]] = []  # list of tuples of (old name, new name)

    @classmethod
    def upgrade(cls, cfg: CN) -> None:
        for old, new in cls.RENAME:
            _rename(cfg, old, new)

    @classmethod
    def downgrade(cls, cfg: CN) -> None:
        for old, new in cls.RENAME[::-1]:
            _rename(cfg, new, old)


class ConverterV1(_RenameConverter):
    RENAME = [("MODEL.RPN_HEAD.NAME", "MODEL.RPN.HEAD_NAME")]


class ConverterV2(_RenameConverter):
    """
    A large bulk of rename, before public release.
    """

    RENAME = [
        ("MODEL.WEIGHT", "MODEL.WEIGHTS"),
        ("MODEL.PANOPTIC_FPN.SEMANTIC_LOSS_SCALE", "MODEL.SEM_SEG_HEAD.LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.RPN_LOSS_SCALE", "MODEL.RPN.LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.INSTANCE_LOSS_SCALE", "MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.COMBINE_ON", "MODEL.PANOPTIC_FPN.COMBINE.ENABLED"),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_OVERLAP_THRESHOLD",
            "MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH",
        ),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_STUFF_AREA_LIMIT",
            "MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT",
        ),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_INSTANCES_CONFIDENCE_THRESHOLD",
            "MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH",
        ),
        ("MODEL.ROI_HEADS.SCORE_THRESH", "MODEL.ROI_HEADS.SCORE_THRESH_TEST"),
        ("MODEL.ROI_HEADS.NMS", "MODEL.ROI_HEADS.NMS_THRESH_TEST"),
        ("MODEL.RETINANET.INFERENCE_SCORE_THRESHOLD", "MODEL.RETINANET.SCORE_THRESH_TEST"),
        ("MODEL.RETINANET.INFERENCE_TOPK_CANDIDATES", "MODEL.RETINANET.TOPK_CANDIDATES_TEST"),
        ("MODEL.RETINANET.INFERENCE_NMS_THRESHOLD", "MODEL.RETINANET.NMS_THRESH_TEST"),
        ("TEST.DETECTIONS_PER_IMG", "TEST.DETECTIONS_PER_IMAGE"),
        ("TEST.AUG_ON", "TEST.AUG.ENABLED"),
        ("TEST.AUG_MIN_SIZES", "TEST.AUG.MIN_SIZES"),
        ("TEST.AUG_MAX_SIZE", "TEST.AUG.MAX_SIZE"),
        ("TEST.AUG_FLIP", "TEST.AUG.FLIP"),
    ]

    @classmethod
    def upgrade(cls, cfg: CN) -> None:
        super().upgrade(cfg)

        if cfg.MODEL.META_ARCHITECTURE == "RetinaNet":
            _rename(
                cfg, "MODEL.RETINANET.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS"
            )
            _rename(cfg, "MODEL.RETINANET.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES")
            del cfg["MODEL"]["RPN"]["ANCHOR_SIZES"]
            del cfg["MODEL"]["RPN"]["ANCHOR_ASPECT_RATIOS"]
        else:
            _rename(cfg, "MODEL.RPN.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS")
            _rename(cfg, "MODEL.RPN.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES")
            del cfg["MODEL"]["RETINANET"]["ANCHOR_SIZES"]
            del cfg["MODEL"]["RETINANET"]["ANCHOR_ASPECT_RATIOS"]
            del cfg["MODEL"]["RETINANET"]["ANCHOR_STRIDES"]

    @classmethod
    def downgrade(cls, cfg: CN) -> None:
        super().downgrade(cfg)

        _rename(cfg, "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS", "MODEL.RPN.ANCHOR_ASPECT_RATIOS")
        _rename(cfg, "MODEL.ANCHOR_GENERATOR.SIZES", "MODEL.RPN.ANCHOR_SIZES")
        cfg.MODEL.RETINANET.ANCHOR_ASPECT_RATIOS = cfg.MODEL.RPN.ANCHOR_ASPECT_RATIOS
        cfg.MODEL.RETINANET.ANCHOR_SIZES = cfg.MODEL.RPN.ANCHOR_SIZES
        cfg.MODEL.RETINANET.ANCHOR_STRIDES = []  # this is not used anywhere in any version
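A brief round-trip sketch of how these converters are typically exercised (this mirrors what test_config.py checks; `get_cfg` returns a clone of the latest defaults):

    from detectron2.config import get_cfg

    cfg = get_cfg()                            # latest defaults, cfg.VERSION == _C.VERSION
    old = downgrade_config(cfg, to_version=1)  # ConverterV2.downgrade restores v1 key names
    assert "WEIGHT" in old.MODEL               # the pre-rename spelling of MODEL.WEIGHTS
    new = upgrade_config(old)                  # back up to the latest version
    assert new.MODEL.WEIGHTS == cfg.MODEL.WEIGHTS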
spaces/CVPR/LIVE/color.h
DELETED
@@ -1,63 +0,0 @@
#pragma once

#include "diffvg.h"
#include "vector.h"
#include "ptr.h"

enum class ColorType {
    Constant,
    LinearGradient,
    RadialGradient
};

struct Constant {
    Vector4f color;

    ptr<void> get_ptr() {
        return ptr<void>(this);
    }
};

struct LinearGradient {
    LinearGradient(const Vector2f &begin,
                   const Vector2f &end,
                   int num_stops,
                   ptr<float> stop_offsets,
                   ptr<float> stop_colors)
        : begin(begin), end(end), num_stops(num_stops),
          stop_offsets(stop_offsets.get()), stop_colors(stop_colors.get()) {}

    ptr<void> get_ptr() {
        return ptr<void>(this);
    }

    void copy_to(ptr<float> stop_offset,
                 ptr<float> stop_colors) const;

    Vector2f begin, end;
    int num_stops;
    float *stop_offsets;
    float *stop_colors; // rgba
};

struct RadialGradient {
    RadialGradient(const Vector2f &center,
                   const Vector2f &radius,
                   int num_stops,
                   ptr<float> stop_offsets,
                   ptr<float> stop_colors)
        : center(center), radius(radius), num_stops(num_stops),
          stop_offsets(stop_offsets.get()), stop_colors(stop_colors.get()) {}

    ptr<void> get_ptr() {
        return ptr<void>(this);
    }

    void copy_to(ptr<float> stop_offset,
                 ptr<float> stop_colors) const;

    Vector2f center, radius;
    int num_stops;
    float *stop_offsets;
    float *stop_colors; // rgba
};
spaces/CVPR/LIVE/thrust/thrust/swap.h
DELETED
@@ -1,191 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file swap.h
 *  \brief Functions for swapping the value of elements
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/detail/execution_policy.h>

// empty Doxygen comment below so namespace thrust's documentation will be extracted

/*!
 */
namespace thrust
{

/*! \addtogroup utility
 *  \{
 */

/*! \addtogroup swap
 *  \{
 */

/*! \p swap assigns the contents of \c a to \c b and the
 *  contents of \c b to \c a. This is used as a primitive operation
 *  by many other algorithms.
 *
 *  \param a The first value of interest. After completion,
 *           the value of b will be returned here.
 *  \param b The second value of interest. After completion,
 *           the value of a will be returned here.
 *
 *  \tparam Assignable is a model of <a href="http://www.sgi.com/tech/stl/Assignable.html">Assignable</a>.
 *
 *  The following code snippet demonstrates how to use \p swap to
 *  swap the contents of two variables.
 *
 *  \code
 *  #include <thrust/swap.h>
 *  ...
 *  int x = 1;
 *  int y = 2;
 *  thrust::swap(x, y);
 *
 *  // x == 2, y == 1
 *  \endcode
 */
template<typename Assignable1, typename Assignable2>
__host__ __device__
inline void swap(Assignable1 &a, Assignable2 &b);

/*! \} // swap
 */

/*! \} // utility
 */


/*! \addtogroup copying
 *  \{
 */


/*! \p swap_ranges swaps each of the elements in the range <tt>[first1, last1)</tt>
 *  with the corresponding element in the range <tt>[first2, first2 + (last1 - first1))</tt>.
 *  That is, for each integer \c n such that <tt>0 <= n < (last1 - first1)</tt>, it swaps
 *  <tt>*(first1 + n)</tt> and <tt>*(first2 + n)</tt>. The return value is
 *  <tt>first2 + (last1 - first1)</tt>.
 *
 *  The algorithm's execution is parallelized as determined by \p exec.
 *
 *  \param exec The execution policy to use for parallelization.
 *  \param first1 The beginning of the first sequence to swap.
 *  \param last1 One position past the last element of the first sequence to swap.
 *  \param first2 The beginning of the second sequence to swap.
 *  \return An iterator pointing to one position past the last element of the second
 *          sequence to swap.
 *
 *  \tparam DerivedPolicy The name of the derived execution policy.
 *  \tparam ForwardIterator1 is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator.html">Forward Iterator</a>,
 *          and \p ForwardIterator1's \c value_type must be convertible to \p ForwardIterator2's \c value_type.
 *  \tparam ForwardIterator2 is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator.html">Forward Iterator</a>,
 *          and \p ForwardIterator2's \c value_type must be convertible to \p ForwardIterator1's \c value_type.
 *
 *  \pre \p first1 may equal \p first2, but the range <tt>[first1, last1)</tt> shall not overlap the range <tt>[first2, first2 + (last1 - first1))</tt> otherwise.
 *
 *  The following code snippet demonstrates how to use \p swap_ranges to
 *  swap the contents of two \c thrust::device_vectors using the \p thrust::device execution
 *  policy for parallelization:
 *
 *  \code
 *  #include <thrust/swap.h>
 *  #include <thrust/device_vector.h>
 *  #include <thrust/execution_policy.h>
 *  ...
 *  thrust::device_vector<int> v1(2), v2(2);
 *  v1[0] = 1;
 *  v1[1] = 2;
 *  v2[0] = 3;
 *  v2[1] = 4;
 *
 *  thrust::swap_ranges(thrust::device, v1.begin(), v1.end(), v2.begin());
 *
 *  // v1[0] == 3, v1[1] == 4, v2[0] == 1, v2[1] == 2
 *  \endcode
 *
 *  \see http://www.sgi.com/tech/stl/swap_ranges.html
 *  \see \c swap
 */
template<typename DerivedPolicy,
         typename ForwardIterator1,
         typename ForwardIterator2>
__host__ __device__
ForwardIterator2 swap_ranges(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
                             ForwardIterator1 first1,
                             ForwardIterator1 last1,
                             ForwardIterator2 first2);


/*! \p swap_ranges swaps each of the elements in the range <tt>[first1, last1)</tt>
 *  with the corresponding element in the range <tt>[first2, first2 + (last1 - first1))</tt>.
 *  That is, for each integer \c n such that <tt>0 <= n < (last1 - first1)</tt>, it swaps
 *  <tt>*(first1 + n)</tt> and <tt>*(first2 + n)</tt>. The return value is
 *  <tt>first2 + (last1 - first1)</tt>.
 *
 *  \param first1 The beginning of the first sequence to swap.
 *  \param last1 One position past the last element of the first sequence to swap.
 *  \param first2 The beginning of the second sequence to swap.
 *  \return An iterator pointing to one position past the last element of the second
 *          sequence to swap.
 *
 *  \tparam ForwardIterator1 is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator.html">Forward Iterator</a>,
 *          and \p ForwardIterator1's \c value_type must be convertible to \p ForwardIterator2's \c value_type.
 *  \tparam ForwardIterator2 is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator.html">Forward Iterator</a>,
 *          and \p ForwardIterator2's \c value_type must be convertible to \p ForwardIterator1's \c value_type.
 *
 *  \pre \p first1 may equal \p first2, but the range <tt>[first1, last1)</tt> shall not overlap the range <tt>[first2, first2 + (last1 - first1))</tt> otherwise.
 *
 *  The following code snippet demonstrates how to use \p swap_ranges to
 *  swap the contents of two \c thrust::device_vectors.
 *
 *  \code
 *  #include <thrust/swap.h>
 *  #include <thrust/device_vector.h>
 *  ...
 *  thrust::device_vector<int> v1(2), v2(2);
 *  v1[0] = 1;
 *  v1[1] = 2;
 *  v2[0] = 3;
 *  v2[1] = 4;
 *
 *  thrust::swap_ranges(v1.begin(), v1.end(), v2.begin());
 *
 *  // v1[0] == 3, v1[1] == 4, v2[0] == 1, v2[1] == 2
 *  \endcode
 *
 *  \see http://www.sgi.com/tech/stl/swap_ranges.html
 *  \see \c swap
 */
template<typename ForwardIterator1,
         typename ForwardIterator2>
ForwardIterator2 swap_ranges(ForwardIterator1 first1,
                             ForwardIterator1 last1,
                             ForwardIterator2 first2);


/*! \} // copying
 */


} // end thrust

#include <thrust/detail/swap.inl>
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/malloc_and_free.h
DELETED
@@ -1,54 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/system/detail/sequential/execution_policy.h>
#include <cstdlib> // for malloc & free
#include <thrust/detail/raw_pointer_cast.h>

namespace thrust
{
namespace system
{
namespace detail
{
namespace sequential
{


template<typename DerivedPolicy>
inline __host__ __device__
void *malloc(execution_policy<DerivedPolicy> &, std::size_t n)
{
  return std::malloc(n);
} // end malloc()


template<typename DerivedPolicy, typename Pointer>
inline __host__ __device__
void free(sequential::execution_policy<DerivedPolicy> &, Pointer ptr)
{
  std::free(thrust::raw_pointer_cast(ptr));
} // end free()


} // end sequential
} // end detail
} // end system
} // end thrust
spaces/CVPR/WALT/mmdet/core/evaluation/class_names.py
DELETED
@@ -1,116 +0,0 @@
import mmcv


def wider_face_classes():
    return ['face']


def voc_classes():
    return [
        'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat',
        'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person',
        'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'
    ]


def imagenet_det_classes():
    return [
        'accordion', 'airplane', 'ant', 'antelope', 'apple', 'armadillo',
        'artichoke', 'axe', 'baby_bed', 'backpack', 'bagel', 'balance_beam',
        'banana', 'band_aid', 'banjo', 'baseball', 'basketball', 'bathing_cap',
        'beaker', 'bear', 'bee', 'bell_pepper', 'bench', 'bicycle', 'binder',
        'bird', 'bookshelf', 'bow_tie', 'bow', 'bowl', 'brassiere', 'burrito',
        'bus', 'butterfly', 'camel', 'can_opener', 'car', 'cart', 'cattle',
        'cello', 'centipede', 'chain_saw', 'chair', 'chime', 'cocktail_shaker',
        'coffee_maker', 'computer_keyboard', 'computer_mouse', 'corkscrew',
        'cream', 'croquet_ball', 'crutch', 'cucumber', 'cup_or_mug', 'diaper',
        'digital_clock', 'dishwasher', 'dog', 'domestic_cat', 'dragonfly',
        'drum', 'dumbbell', 'electric_fan', 'elephant', 'face_powder', 'fig',
        'filing_cabinet', 'flower_pot', 'flute', 'fox', 'french_horn', 'frog',
        'frying_pan', 'giant_panda', 'goldfish', 'golf_ball', 'golfcart',
        'guacamole', 'guitar', 'hair_dryer', 'hair_spray', 'hamburger',
        'hammer', 'hamster', 'harmonica', 'harp', 'hat_with_a_wide_brim',
        'head_cabbage', 'helmet', 'hippopotamus', 'horizontal_bar', 'horse',
        'hotdog', 'iPod', 'isopod', 'jellyfish', 'koala_bear', 'ladle',
        'ladybug', 'lamp', 'laptop', 'lemon', 'lion', 'lipstick', 'lizard',
        'lobster', 'maillot', 'maraca', 'microphone', 'microwave', 'milk_can',
        'miniskirt', 'monkey', 'motorcycle', 'mushroom', 'nail', 'neck_brace',
        'oboe', 'orange', 'otter', 'pencil_box', 'pencil_sharpener', 'perfume',
        'person', 'piano', 'pineapple', 'ping-pong_ball', 'pitcher', 'pizza',
        'plastic_bag', 'plate_rack', 'pomegranate', 'popsicle', 'porcupine',
        'power_drill', 'pretzel', 'printer', 'puck', 'punching_bag', 'purse',
        'rabbit', 'racket', 'ray', 'red_panda', 'refrigerator',
        'remote_control', 'rubber_eraser', 'rugby_ball', 'ruler',
        'salt_or_pepper_shaker', 'saxophone', 'scorpion', 'screwdriver',
        'seal', 'sheep', 'ski', 'skunk', 'snail', 'snake', 'snowmobile',
        'snowplow', 'soap_dispenser', 'soccer_ball', 'sofa', 'spatula',
        'squirrel', 'starfish', 'stethoscope', 'stove', 'strainer',
        'strawberry', 'stretcher', 'sunglasses', 'swimming_trunks', 'swine',
        'syringe', 'table', 'tape_player', 'tennis_ball', 'tick', 'tie',
        'tiger', 'toaster', 'traffic_light', 'train', 'trombone', 'trumpet',
        'turtle', 'tv_or_monitor', 'unicycle', 'vacuum', 'violin',
        'volleyball', 'waffle_iron', 'washer', 'water_bottle', 'watercraft',
        'whale', 'wine_bottle', 'zebra'
    ]


def imagenet_vid_classes():
    return [
        'airplane', 'antelope', 'bear', 'bicycle', 'bird', 'bus', 'car',
        'cattle', 'dog', 'domestic_cat', 'elephant', 'fox', 'giant_panda',
        'hamster', 'horse', 'lion', 'lizard', 'monkey', 'motorcycle', 'rabbit',
        'red_panda', 'sheep', 'snake', 'squirrel', 'tiger', 'train', 'turtle',
        'watercraft', 'whale', 'zebra'
    ]


def coco_classes():
    return [
        'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train',
        'truck', 'boat', 'traffic_light', 'fire_hydrant', 'stop_sign',
        'parking_meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
        'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
        'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
        'sports_ball', 'kite', 'baseball_bat', 'baseball_glove', 'skateboard',
        'surfboard', 'tennis_racket', 'bottle', 'wine_glass', 'cup', 'fork',
        'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange',
        'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut', 'cake', 'chair',
        'couch', 'potted_plant', 'bed', 'dining_table', 'toilet', 'tv',
        'laptop', 'mouse', 'remote', 'keyboard', 'cell_phone', 'microwave',
        'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase',
        'scissors', 'teddy_bear', 'hair_drier', 'toothbrush'
    ]


def cityscapes_classes():
    return [
        'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
        'bicycle'
    ]


dataset_aliases = {
    'voc': ['voc', 'pascal_voc', 'voc07', 'voc12'],
    'imagenet_det': ['det', 'imagenet_det', 'ilsvrc_det'],
    'imagenet_vid': ['vid', 'imagenet_vid', 'ilsvrc_vid'],
    'coco': ['coco', 'mscoco', 'ms_coco'],
    'wider_face': ['WIDERFaceDataset', 'wider_face', 'WIDERFace'],
    'cityscapes': ['cityscapes']
}


def get_classes(dataset):
    """Get class names of a dataset."""
    alias2name = {}
    for name, aliases in dataset_aliases.items():
        for alias in aliases:
            alias2name[alias] = name

    if mmcv.is_str(dataset):
        if dataset in alias2name:
            labels = eval(alias2name[dataset] + '_classes()')
        else:
            raise ValueError(f'Unrecognized dataset: {dataset}')
    else:
        raise TypeError(f'dataset must be a str, but got {type(dataset)}')
    return labels
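For reference, a few lookups through the alias table above (list lengths follow directly from the functions in this file):

    print(get_classes('voc'))           # the 20 PASCAL VOC class names
    print(len(get_classes('mscoco')))   # 80, resolved via the 'coco' alias
    print(get_classes('wider_face'))    # ['face']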
spaces/CVPR/regionclip-demo/detectron2/utils/logger.py
DELETED
@@ -1,237 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import atexit
import functools
import logging
import os
import sys
import time
from collections import Counter
import torch
from tabulate import tabulate
from termcolor import colored

from detectron2.utils.file_io import PathManager

__all__ = ["setup_logger", "log_first_n", "log_every_n", "log_every_n_seconds"]


class _ColorfulFormatter(logging.Formatter):
    def __init__(self, *args, **kwargs):
        self._root_name = kwargs.pop("root_name") + "."
        self._abbrev_name = kwargs.pop("abbrev_name", "")
        if len(self._abbrev_name):
            self._abbrev_name = self._abbrev_name + "."
        super(_ColorfulFormatter, self).__init__(*args, **kwargs)

    def formatMessage(self, record):
        record.name = record.name.replace(self._root_name, self._abbrev_name)
        log = super(_ColorfulFormatter, self).formatMessage(record)
        if record.levelno == logging.WARNING:
            prefix = colored("WARNING", "red", attrs=["blink"])
        elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL:
            prefix = colored("ERROR", "red", attrs=["blink", "underline"])
        else:
            return log
        return prefix + " " + log


@functools.lru_cache()  # so that calling setup_logger multiple times won't add many handlers
def setup_logger(
    output=None, distributed_rank=0, *, color=True, name="detectron2", abbrev_name=None
):
    """
    Initialize the detectron2 logger and set its verbosity level to "DEBUG".

    Args:
        output (str): a file name or a directory to save log. If None, will not save log file.
            If ends with ".txt" or ".log", assumed to be a file name.
            Otherwise, logs will be saved to `output/log.txt`.
        name (str): the root module name of this logger
        abbrev_name (str): an abbreviation of the module, to avoid long names in logs.
            Set to "" to not log the root module in logs.
            By default, will abbreviate "detectron2" to "d2" and leave other
            modules unchanged.

    Returns:
        logging.Logger: a logger
    """
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.propagate = False

    if abbrev_name is None:
        abbrev_name = "d2" if name == "detectron2" else name

    plain_formatter = logging.Formatter(
        "[%(asctime)s] %(name)s %(levelname)s: %(message)s", datefmt="%m/%d %H:%M:%S"
    )
    # stdout logging: master only
    if distributed_rank == 0:
        ch = logging.StreamHandler(stream=sys.stdout)
        ch.setLevel(logging.DEBUG)
        if color:
            formatter = _ColorfulFormatter(
                colored("[%(asctime)s %(name)s]: ", "green") + "%(message)s",
                datefmt="%m/%d %H:%M:%S",
                root_name=name,
                abbrev_name=str(abbrev_name),
            )
        else:
            formatter = plain_formatter
        ch.setFormatter(formatter)
        logger.addHandler(ch)

    # file logging: all workers
    if output is not None:
        if output.endswith(".txt") or output.endswith(".log"):
            filename = output
        else:
            filename = os.path.join(output, "log.txt")
        if distributed_rank > 0:
            filename = filename + ".rank{}".format(distributed_rank)
        PathManager.mkdirs(os.path.dirname(filename))

        fh = logging.StreamHandler(_cached_log_stream(filename))
        fh.setLevel(logging.DEBUG)
        fh.setFormatter(plain_formatter)
        logger.addHandler(fh)

    return logger


# cache the opened file object, so that different calls to `setup_logger`
# with the same file name can safely write to the same file.
@functools.lru_cache(maxsize=None)
def _cached_log_stream(filename):
    # use 1K buffer if writing to cloud storage
    io = PathManager.open(filename, "a", buffering=1024 if "://" in filename else -1)
    atexit.register(io.close)
    return io


"""
Below are some other convenient logging methods.
They are mainly adopted from
https://github.com/abseil/abseil-py/blob/master/absl/logging/__init__.py
"""


def _find_caller():
    """
    Returns:
        str: module name of the caller
        tuple: a hashable key to be used to identify different callers
    """
    frame = sys._getframe(2)
    while frame:
        code = frame.f_code
        if os.path.join("utils", "logger.") not in code.co_filename:
            mod_name = frame.f_globals["__name__"]
            if mod_name == "__main__":
                mod_name = "detectron2"
            return mod_name, (code.co_filename, frame.f_lineno, code.co_name)
        frame = frame.f_back


_LOG_COUNTER = Counter()
_LOG_TIMER = {}


def log_first_n(lvl, msg, n=1, *, name=None, key="caller"):
    """
    Log only for the first n times.

    Args:
        lvl (int): the logging level
        msg (str):
        n (int):
        name (str): name of the logger to use. Will use the caller's module by default.
        key (str or tuple[str]): the string(s) can be one of "caller" or
            "message", which defines how to identify duplicated logs.
            For example, if called with `n=1, key="caller"`, this function
            will only log the first call from the same caller, regardless of
            the message content.
            If called with `n=1, key="message"`, this function will log the
            same content only once, even if they are called from different places.
            If called with `n=1, key=("caller", "message")`, this function
            will not log only if the same caller has logged the same message before.
    """
    if isinstance(key, str):
        key = (key,)
    assert len(key) > 0

    caller_module, caller_key = _find_caller()
    hash_key = ()
    if "caller" in key:
        hash_key = hash_key + caller_key
    if "message" in key:
        hash_key = hash_key + (msg,)

    _LOG_COUNTER[hash_key] += 1
    if _LOG_COUNTER[hash_key] <= n:
        logging.getLogger(name or caller_module).log(lvl, msg)


def log_every_n(lvl, msg, n=1, *, name=None):
    """
    Log once per n times.

    Args:
        lvl (int): the logging level
        msg (str):
        n (int):
        name (str): name of the logger to use. Will use the caller's module by default.
    """
    caller_module, key = _find_caller()
    _LOG_COUNTER[key] += 1
    if n == 1 or _LOG_COUNTER[key] % n == 1:
        logging.getLogger(name or caller_module).log(lvl, msg)


def log_every_n_seconds(lvl, msg, n=1, *, name=None):
    """
    Log no more than once per n seconds.

    Args:
        lvl (int): the logging level
        msg (str):
        n (int):
        name (str): name of the logger to use. Will use the caller's module by default.
    """
    caller_module, key = _find_caller()
    last_logged = _LOG_TIMER.get(key, None)
    current_time = time.time()
    if last_logged is None or current_time - last_logged >= n:
        logging.getLogger(name or caller_module).log(lvl, msg)
        _LOG_TIMER[key] = current_time


def create_small_table(small_dict):
    """
    Create a small table using the keys of small_dict as headers. This is only
    suitable for small dictionaries.

    Args:
        small_dict (dict): a result dictionary of only a few items.

    Returns:
        str: the table as a string.
    """
    keys, values = tuple(zip(*small_dict.items()))
    table = tabulate(
        [values],
        headers=keys,
        tablefmt="pipe",
        floatfmt=".3f",
        stralign="center",
        numalign="center",
    )
    return table


def _log_api_usage(identifier: str):
    """
    Internal function used to log the usage of different detectron2 components
    inside facebook's infra.
    """
    torch._C._log_api_usage_once("detectron2." + identifier)
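A short usage sketch for the helpers above (the output directory is illustrative; since `setup_logger` is wrapped in `functools.lru_cache`, repeated calls with the same arguments reuse the same handlers):

    import logging

    logger = setup_logger(output="./output", distributed_rank=0, name="detectron2")
    logger.info("training started")

    for it in range(1000):
        log_first_n(logging.WARNING, "slow dataloader", n=5)           # logged at most 5 times
        log_every_n_seconds(logging.INFO, "iter {}".format(it), n=10)  # at most once per 10 s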
spaces/CVPR/transfiner/configs/common/data/coco_panoptic_separated.py
DELETED
@@ -1,26 +0,0 @@
from detectron2.config import LazyCall as L
from detectron2.evaluation import (
    COCOEvaluator,
    COCOPanopticEvaluator,
    DatasetEvaluators,
    SemSegEvaluator,
)

from .coco import dataloader

dataloader.train.dataset.names = "coco_2017_train_panoptic_separated"
dataloader.train.dataset.filter_empty = False
dataloader.test.dataset.names = "coco_2017_val_panoptic_separated"


dataloader.evaluator = [
    L(COCOEvaluator)(
        dataset_name="${...test.dataset.names}",
    ),
    L(SemSegEvaluator)(
        dataset_name="${...test.dataset.names}",
    ),
    L(COCOPanopticEvaluator)(
        dataset_name="${...test.dataset.names}",
    ),
]
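The "${...test.dataset.names}" values are OmegaConf relative interpolations that resolve to dataloader.test.dataset.names when the config is instantiated. A sketch of consuming this file (the load path is illustrative):

    from detectron2.config import LazyConfig, instantiate

    cfg = LazyConfig.load("configs/common/data/coco_panoptic_separated.py")
    test_loader = instantiate(cfg.dataloader.test)
    evaluators = [instantiate(e) for e in cfg.dataloader.evaluator]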
spaces/Cloudy1225/stackoverflow-sentiment-analysis/README.md
DELETED
@@ -1,15 +0,0 @@
---
title: Stackoverflow Sentiment Analysis
emoji: 📉
colorFrom: gray
colorTo: yellow
sdk: gradio
sdk_version: 3.33.1
app_file: app.py
pinned: false
license: openrail
---

# Sentiment Analysis on Software Engineer Texts

This is a demo for our fine-tuned model [stackoverflow-roberta-base-sentiment](https://huggingface.co/Cloudy1225/stackoverflow-roberta-base-sentiment).