Commit f645660
Parent(s): bc52a38

Update parquet files (step 13 of 476)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Analisis Literario Del Cuento El Hijo Horacio Quiroga Un Estudio Filolgico Y Crtico.md +0 -136
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cara Upgrade Windows 7 SP1 ke SP2 dengan Mudah dan Cepat.md +0 -115
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/GX Developer 8.7 Full Version Learn How to Create Edit Debug and Monitor PLC Programs.md +0 -205
- spaces/1gistliPinn/ChatGPT4/Examples/3planesoft Earth 3d Screensaver Keygen __FULL__.md +0 -9
- spaces/1gistliPinn/ChatGPT4/Examples/Castlevania Lords Of Shadow Mirror Of Fate HD-RELOADED Fitgirl Repack.md +0 -15
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download CarSim The Industry-Leading Tool for Vehicle Dynamics Analysis and Development.md +0 -189
- spaces/1phancelerku/anime-remove-background/Cmo jugar al tablero deuda eterna juego apk y aprender sobre la realidad latinoamericana.md +0 -92
- spaces/1phancelerku/anime-remove-background/Download Hacked My Talking Tom APK with Unlimited Money and Coins.md +0 -77
- spaces/1phancelerku/anime-remove-background/Download Pink Wallpaper Android - Amazing and Free HD Pink Wallpapers for Your Mobile and Desktop.md +0 -168
- spaces/2ndelement/voicevox/speaker_info/388f246b-8c41-4ac1-8e2d-5d79f3ff56d9/policy.md +0 -3
- spaces/2ndelement/voicevox/voicevox_engine/preset/Preset.py +0 -18
- spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/dataset.py +0 -124
- spaces/801artistry/RVC801/infer/modules/onnx/export.py +0 -52
- spaces/801artistry/RVC801/julius/filters.py +0 -258
- spaces/A00001/bingothoo/src/app/layout.tsx +0 -47
- spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Backend 137c41fa386f43249b249e956eb06bb0.md +0 -27
- spaces/AI4PD/hexviz/hexviz/pages/2_🦅Birds_Eye_View.py +0 -176
- spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/dpt_depth.py +0 -109
- spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/__init__.py +0 -5
- spaces/AIWaves/Software_Company/src/agents/Environment/base_environment.py +0 -167
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/voc/__init__.py +0 -0
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnext101_4xb16_1024e_4channel.py +0 -88
- spaces/Abhaykoul/Youtube_video_downloader/README.md +0 -13
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/EasyChat.py +0 -111
- spaces/AgentVerse/agentVerse/agentverse/demo.py +0 -487
- spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/concurrent.py +0 -80
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/lzstring.d.ts +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/states/MainState.js +0 -222
- spaces/Aki004/herta-so-vits/models.py +0 -420
- spaces/Alpaca233/SadTalker/src/face3d/util/visualizer.py +0 -227
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/__init__.py +0 -37
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audioldm/pipeline_audioldm.py +0 -559
- spaces/Andy1621/uniformer_image_detection/mmdet/core/__init__.py +0 -7
- spaces/Andy1621/uniformer_image_detection/tools/model_converters/detectron2pytorch.py +0 -82
- spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_80k_ade20k.py +0 -6
- spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_80k_pascal_context_59.py +0 -2
- spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k.py +0 -6
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/base_model.py +0 -16
- spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/autoencoder.py +0 -219
- spaces/ArchitSharma/Digital-Photo-Color-Restoration/app.py +0 -149
- spaces/AriaMei/TTSdemo/models.py +0 -537
- spaces/Atualli/yoloxTeste/configs/yolov3.py +0 -33
- spaces/Audio-AGI/AudioSep/models/CLAP/__init__.py +0 -0
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/roi_heads.py +0 -877
- spaces/BadRobot147/SFQ3/Dockerfile +0 -21
- spaces/Bart92/RVC_HF/infer/lib/infer_pack/models_onnx.py +0 -824
- spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_new.py +0 -133
- spaces/Benson/text-generation/Examples/Casa Escapar Hack Descargar.md +0 -56
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/lazy_wheel.py +0 -210
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/terminal256.py +0 -338
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Analisis Literario Del Cuento El Hijo Horacio Quiroga Un Estudio Filolgico Y Crtico.md
DELETED
@@ -1,136 +0,0 @@
-<br />
-<h1>Dragon Ball Z Sparking Mugen 2010 PC Download Torrent 15</h1>
-<p>If you are a fan of Dragon Ball Z, you might be interested in downloading and playing Dragon Ball Z Sparking Mugen 2010 PC Torrent 15. This is a fan-made game that combines the elements of Dragon Ball Z and Mugen, a popular engine for creating fighting games. In this article, we will show you how to download and play this game on your PC.</p>
-<h2>Introduction</h2>
-<h3>What is Dragon Ball Z Sparking Mugen 2010?</h3>
-<p>Dragon Ball Z Sparking Mugen 2010 is a game that was created by fans using the Mugen engine. It is based on the anime series Dragon Ball Z, which follows the adventures of Goku and his friends as they fight against various enemies and villains. The game features over 100 characters from the series, each with their own moves and abilities. You can choose from different game modes, such as arcade, team battle, survival, training, and more. You can also customize your own characters and stages using the built-in editor.</p>
-<h2>Dragon ball z sparking mugen 2010 pc download torrent 15</h2><br /><p><b><b>Download File</b> ⏩ <a href="https://byltly.com/2uKwOS">https://byltly.com/2uKwOS</a></b></p><br /><br />
-<h3>Why should you download it?</h3>
-<p>There are many reasons why you should download Dragon Ball Z Sparking Mugen 2010 PC Torrent 15. Here are some of them:</p>
-<ul>
-<li>It is free and easy to download and install.</li>
-<li>It has high-quality graphics and sound effects that capture the essence of the anime.</li>
-<li>It has a large roster of characters that you can play as or fight against.</li>
-<li>It has a variety of game modes that offer different challenges and fun.</li>
-<li>It has a loyal fan community that supports and updates the game regularly.</li>
-</ul>
-<h2>How to download Dragon Ball Z Sparking Mugen 2010 PC Torrent 15</h2>
-<h3>Step 1: Find a reliable torrent site</h3>
-<p>The first step to download Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is to find a reliable torrent site that hosts the game file. There are many torrent sites on the internet, but not all of them are safe and trustworthy. Some of them may contain malware, viruses, or fake files that can harm your computer or waste your time. Therefore, you should do some research before choosing a torrent site. You can read reviews, ratings, comments, or feedback from other users to see if the site is reputable and reliable. You can also use a VPN service to protect your privacy and security while browsing torrent sites.</p>
-<h3>Step 2: Search for the game torrent</h3>
-<p>The next step is to search for the game torrent on the torrent site that you have chosen. You can use keywords such as "Dragon Ball Z Sparking Mugen 2010 PC", "DBZ Sparking Mugen", or "DBZSM". You should look for the torrent that has the most seeders and leechers, as this indicates that the file is popular and fast to download. You should also check the file size, description, comments, and screenshots to make sure that it is the correct and complete version of the game.</p>
-<h3>Step 3: Download and install a torrent client</h3>
-<p>The third step is to download and install a torrent client on your PC. A torrent client is a software that allows you to download files from torrent sites using peer-to-peer technology. There are many torrent clients available online, such as uTorrent, BitTorrent, qBittorrent, etc. You can choose one that suits your preferences and needs. You should also make sure that your torrent client is updated and configured properly to optimize your download speed and performance.</p>
-<p>Dragon ball z sparking mugen 2010 pc game free download<br />
-How to download dragon ball z sparking mugen 2010 for pc<br />
-Dragon ball z sparking mugen 2010 pc full version torrent<br />
-Dragon ball z sparking mugen 2010 pc gameplay and review<br />
-Dragon ball z sparking mugen 2010 pc system requirements and installation guide<br />
-Dragon ball z sparking mugen 2010 pc cheats and mods<br />
-Dragon ball z sparking mugen 2010 pc best characters and stages<br />
-Dragon ball z sparking mugen 2010 pc online multiplayer mode<br />
-Dragon ball z sparking mugen 2010 pc iso file download<br />
-Dragon ball z sparking mugen 2010 pc rar password and crack<br />
-Dragon ball z sparking mugen 2010 pc update and patch download<br />
-Dragon ball z sparking mugen 2010 pc error fix and troubleshooting<br />
-Dragon ball z sparking mugen 2010 pc comparison with other dragon ball games<br />
-Dragon ball z sparking mugen 2010 pc fan made and custom content<br />
-Dragon ball z sparking mugen 2010 pc tips and tricks for beginners<br />
-Dragon ball z sparking mugen 2010 pc controller support and configuration<br />
-Dragon ball z sparking mugen 2010 pc high quality graphics and sound settings<br />
-Dragon ball z sparking mugen 2010 pc download link and torrent magnet<br />
-Dragon ball z sparking mugen 2010 pc safe and virus free download<br />
-Dragon ball z sparking mugen 2010 pc latest version and new features<br />
-Dragon ball z sparking mugen 2010 pc all transformations and fusions<br />
-Dragon ball z sparking mugen 2010 pc story mode and cutscenes<br />
-Dragon ball z sparking mugen 2010 pc unlockable characters and secrets<br />
-Dragon ball z sparking mugen 2010 pc minimum and recommended specs<br />
-Dragon ball z sparking mugen 2010 pc compatible windows versions and platforms<br />
-Dragon ball z sparking mugen 2010 pc original and official source<br />
-Dragon ball z sparking mugen 2010 pc alternative and similar games<br />
-Dragon ball z sparking mugen 2010 pc feedback and ratings from users<br />
-Dragon ball z sparking mugen 2010 pc trailer and screenshots<br />
-Dragon ball z sparking mugen 2010 pc how to burn to dvd or usb drive<br />
-Dragon ball z sparking mugen 2010 pc how to extract and run the game<br />
-Dragon ball z sparking mugen 2010 pc how to change the language and subtitles<br />
-Dragon ball z sparking mugen 2010 pc how to customize the controls and options<br />
-Dragon ball z sparking mugen 2010 pc how to add more characters and stages<br />
-Dragon ball z sparking mugen 2010 pc how to play with friends online or offline<br />
-Dragon ball z sparking mugen 2010 pc how to record and edit gameplay videos<br />
-Dragon ball z sparking mugen 2010 pc how to stream and share the game online<br />
-Dragon ball z sparking mugen 2010 pc how to speed up and optimize the game performance<br />
-Dragon ball z sparking mugen 2010 pc how to backup and restore the game data<br />
-Dragon ball z sparking mugen 2010 pc how to uninstall and remove the game completely<br />
-Dragon ball z sparking mugen 2010 pc pros and cons of downloading the game from torrent<br />
-Dragon ball z sparking mugen 2010 pc legal and ethical issues of downloading the game from torrent<br />
-Dragon ball z sparking mugen 2010 pc frequently asked questions and answers<br />
-Dragon ball z sparkling (sic) vs dragon ball z sparkin (sic) vs dragonballzsparkin (sic) vs dragonballzsparkling (sic)<br />
-What is dragonballzsparkin (sic) or dragonballzsparkling (sic) or dragonballzsparkin (sic) or dragonballzsparkling (sic)<br />
-What is dragonballzsparkin (sic) or dragonballzsparkling (sic) or dragonballzsparkin (sic) or dragonballzsparkling (sic) in Japanese?<br />
-What is dragonballzsparkin (sic) or dragonballzsparkling (sic) or dragonballzsparkin (sic) or dragonballzsparkling (sic) in English?<br />
-What is the difference between dragonballzsparkin (sic) or dragonballzsparkling (sic) or dragonballzsparkin (sic) or dragonballzsparkling (sic) in terms of gameplay, graphics, sound, etc.?</p>
-<h3>Step 4: Open the torrent file and start downloading</h3>
-<p>The fourth step is to open the torrent file that you have downloaded from the torrent site using your torrent client. The torrent file contains information about the game file, such as its name, size, hash, trackers, etc. You should double-click on the torrent file or drag it into your torrent client window to start downloading. You can monitor the progress of your download by looking at the status bar or details panel of your torrent client. You can also pause, resume, or cancel your download at any time.</p>
-<h3>Step 5: Extract the game files and run the setup</h3>
-the instructions on the screen to install the game on your PC. You may need to agree to some terms and conditions, choose a destination folder, and create a shortcut icon. Once the installation is complete, you can launch the game from your desktop or start menu.</p>
-<h2>How to play Dragon Ball Z Sparking Mugen 2010 PC</h2>
-<h3>The game modes</h3>
-<p>Dragon Ball Z Sparking Mugen 2010 PC has several game modes that you can choose from. Here are some of them:</p>
-<ul>
-<li>Arcade: This is the classic mode where you fight against a series of opponents until you reach the final boss. You can choose your difficulty level and the number of rounds per match.</li>
-<li>Team Battle: This is the mode where you can form a team of up to four characters and fight against another team. You can choose between single, simul, or turns mode. In single mode, you control one character at a time and switch between them when one is defeated. In simul mode, you control one character and the others are controlled by the computer or another player. In turns mode, you control one character per round and switch between them after each round.</li>
-<li>Survival: This is the mode where you try to survive as long as possible against an endless stream of opponents. You can choose your difficulty level and the number of lives you have. Your health does not regenerate between matches, so you have to be careful.</li>
-<li>Training: This is the mode where you can practice your moves and combos against a dummy opponent. You can adjust the settings such as the dummy's behavior, health, defense, etc. You can also view your inputs and damage output on the screen.</li>
-<li>Watch: This is the mode where you can watch two computer-controlled characters fight against each other. You can choose the characters, stage, and music. You can also pause, fast-forward, or rewind the match.</li>
-<li>Edit: This is the mode where you can create your own characters and stages using the built-in editor. You can modify various aspects such as the sprites, sounds, animations, moves, etc. You can also test your creations in a preview mode.</li>
-</ul>
-<h3>The game controls</h3>
-<p>Dragon Ball Z Sparking Mugen 2010 PC uses a keyboard or a gamepad as its input device. The default keyboard controls are as follows:</p>
-<table>
-<tr><td>Key</td><td>Function</td></tr>
-<tr><td>A</td><td>Light punch</td></tr>
-<tr><td>S</td><td>Medium punch</td></tr>
-<tr><td>D</td><td>Heavy punch</td></tr>
-<tr><td>Z</td><td>Light kick</td></tr>
-<tr><td>X</td><td>Medium kick</td></tr>
-<tr><td>C</td><td>Heavy kick</td></tr>
-<tr><td>V</td><td>Start button</td></tr>
-<tr><td>B</td><td>Coin button</td></tr>
-<tr><td>F1</td><td>Pause button</td></tr>
-<tr><td>F2</td><td>Restart button</td></tr>
-<tr><td>F4</td><td>Toggle fullscreen/windowed mode</td></tr>
-<tr><td>F12</td><td>Take screenshot</td></tr>
-<tr><td>Arrow keys</td><td>Move character</td></tr>
-<tr><td>Enter</td><td>Select option/menu item</td></tr>
-<tr><td>Esc</td><td>Cancel/back/exit option/menu item</td></tr>
-</table>
-<p>You can change the keyboard controls in the options menu or in the config file. You can also use a gamepad if you prefer. You will need to configure your gamepad buttons in the options menu or in the config file.</p>
-<h3>The game features</h3>
-<p>Dragon Ball Z Sparking Mugen 2010 PC has many features that make it an enjoyable and exciting game. Here are some of them:</p>
-<ul>
-<li>The game has over 100 characters from Dragon Ball Z and other related series, such as Dragon Ball GT, Dragon Ball Super, Dragon Ball Heroes, etc. You can play as your favorite heroes or villains, such as Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Buu, Broly, Beerus, Jiren, etc.</li>
-<li>The game has over 50 stages from different locations in the Dragon Ball universe, such as Earth, Namek, Planet Vegeta, Hell, Tournament of Power, etc. You can fight in different environments and scenarios, such as day or night, rain or snow, lava or water, etc.</li>
-<li>The game has over 200 music tracks from various sources related to Dragon Ball Z and other anime series. You can listen to iconic themes and songs that match the mood and atmosphere of each stage and character.</li>
-kicks, blocks, throws, dodges, counters, etc. You can also use special moves and super moves that are unique to each character, such as Kamehameha, Final Flash, Masenko, Death Beam, Spirit Bomb, etc. You can also transform into different forms and power-ups, such as Super Saiyan, Kaioken, Golden Frieza, Ultra Instinct, etc.</li>
-<li>The game has a realistic and immersive physics and collision system that makes the fights more dynamic and realistic. You can interact with the environment and objects in various ways, such as breaking walls, smashing rocks, creating craters, etc. You can also cause damage and injuries to your opponents and yourself, such as bruises, cuts, blood, etc.</li>
-<li>The game has a customizable and user-friendly interface that allows you to adjust various settings and preferences according to your liking. You can change the resolution, frame rate, sound volume, language, etc. You can also enable or disable various features and effects, such as shadows, reflections, particles, etc.</li>
-</ul>
-<h2>Conclusion</h2>
-<h3>Summary of the main points</h3>
-<p>In conclusion, Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is a fan-made game that lets you experience the thrill and excitement of Dragon Ball Z on your PC. It is free and easy to download and install. It has high-quality graphics and sound effects that resemble the anime. It has a large roster of characters that you can play as or fight against. It has a variety of game modes that offer different challenges and fun. It has a dynamic and responsive combat system that allows you to perform various moves and combos. It has a realistic and immersive physics and collision system that makes the fights more dynamic and realistic. It has a customizable and user-friendly interface that allows you to adjust various settings and preferences.</p>
-<h3>Call to action</h3>
-<p>If you are interested in downloading and playing Dragon Ball Z Sparking Mugen 2010 PC Torrent 15, you can follow the steps that we have outlined in this article. You will need a reliable torrent site, a torrent client, and a software to extract the game files. You will also need a PC that meets the minimum system requirements for the game. You can check the game's official website or forum for more information and updates. You can also join the game's fan community and share your feedback and suggestions with other players. We hope that you enjoy playing Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 as much as we do.</p>
-<h2>FAQs</h2>
-<h3>Q: Is Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 legal?</h3>
-<p>A: Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is a fan-made game that is not affiliated with or endorsed by the official creators or owners of Dragon Ball Z or Mugen. It is made for entertainment purposes only and does not intend to infringe any copyrights or trademarks. However, downloading and sharing torrents may be illegal in some countries or regions depending on their laws and regulations. Therefore, you should be careful and responsible when downloading and playing this game.</p>
-<h3>Q: Is Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 safe?</h3>
-<p>A: Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is safe to download and play as long as you use a reliable torrent site and a torrent client that are free from malware, viruses, or fake files. You should also scan your downloaded file with an antivirus software before extracting it. You should also backup your data before installing the game on your PC in case of any errors or problems.</p>
-<h3>Q: Is Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 compatible with Windows 10?</h3>
-Windows Vista, Windows 7, and Windows 8. However, you may need to run the game in compatibility mode or as an administrator if you encounter any issues or errors. You can also check the game's official website or forum for more troubleshooting tips and solutions.</p>
-<h3>Q: How can I update Dragon Ball Z Sparking Mugen 2010 PC Torrent 15?</h3>
-<p>A: Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is updated regularly by its fan community with new characters, stages, music, features, etc. You can check the game's official website or forum for the latest updates and patches. You can also download and install them manually or automatically using your torrent client. You should always backup your data before updating the game to avoid any errors or problems.</p>
-<h3>Q: How can I contact the developers of Dragon Ball Z Sparking Mugen 2010 PC Torrent 15?</h3>
-<p>A: Dragon Ball Z Sparking Mugen 2010 PC Torrent 15 is developed by a team of fans who are passionate about Dragon Ball Z and Mugen. You can contact them through their official website or forum where they post their news, announcements, updates, etc. You can also join their Discord server or Facebook group where they interact with other players and fans. You can also send them an email or a message if you have any questions, feedback, suggestions, etc.</p>
-</p> 0a6ba089eb<br />
-<br />
-<br />
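The download steps in the deleted article above hinge on confirming that the fetched archive is the correct, complete file before anything in it is run. A minimal Python sketch of one concrete form of that check, assuming the publisher provides a SHA-256 checksum; the archive name and digest below are placeholders, not values from the article:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in fixed-size chunks so large downloads fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the real archive name and the checksum
# published alongside the download.
ARCHIVE = "downloaded_game_archive.zip"
EXPECTED = "0" * 64

if sha256_of(ARCHIVE) == EXPECTED:
    print("Checksum matches: the file is the one the publisher released.")
else:
    print("Checksum mismatch: do not extract or run the installer.")
```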
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cara Upgrade Windows 7 SP1 ke SP2 dengan Mudah dan Cepat.md
DELETED
@@ -1,115 +0,0 @@
-
-<h1>Cara Upgrade Windows 7 SP1 ke SP2</h1>
-<p>Windows 7 is one of the most popular operating systems in the world, but it is not supported by Microsoft anymore. This means that you will not receive any security updates or bug fixes for your system. However, you can still upgrade your Windows 7 to the latest service pack, which is a collection of updates that improve the performance and stability of your system. In this article, we will show you how to upgrade your Windows 7 SP1 to SP2 offline, without using Windows Update. We will also show you how to install other important updates for your system.</p>
-<h2>What is Windows 7 SP1 and SP2?</h2>
-<p>Service packs are cumulative updates that contain all the previous updates for a specific version of Windows. They also include new features and enhancements that improve the functionality and security of your system. Service packs are usually released every few years by Microsoft.</p>
-<h2>cara upgrade windows 7 sp1 ke sp2</h2><br /><p><b><b>Download Zip</b> ☑ <a href="https://byltly.com/2uKvM0">https://byltly.com/2uKvM0</a></b></p><br /><br />
-<p>Windows 7 only has one official service pack, which is SP1. It was released in February 2011 and it contains all the updates that were released from the launch of Windows 7 in October 2009 until January 2011. It also includes some new features such as improved support for USB 3.0 devices, enhanced backup and restore options, and better compatibility with third-party software.</p>
-<p>However, Microsoft also released a convenience rollup update for Windows 7 in May 2016. This update is often called SP2 unofficially, because it contains all the updates that were released from SP1 until April 2016. It also includes some improvements such as reduced network bandwidth consumption, increased reliability of Windows Update, and faster installation time.</p>
-<p>cara install windows 7 sp1 dan sp2 offline[^1^]<br />
-how to obtain and install windows 7 sp2[^2^]<br />
-cara update windows 7 sp1 ke sp2 tanpa internet<br />
-how to integrate windows 7 convenience rollup into sp1 iso[^2^]<br />
-cara download windows 7 sp1 dan sp2 offline installer[^1^]<br />
-how to fix windows 7 update stuck on checking for updates[^2^]<br />
-cara mengatasi windows 7 sp1 yang tidak bisa upgrade ke sp2<br />
-how to install internet explorer 11 and .net framework on windows 7 sp2[^2^]<br />
-cara cek versi windows 7 sp1 atau sp2<br />
-how to uninstall windows 7 convenience rollup update<br />
-cara backup data sebelum upgrade windows 7 sp1 ke sp2<br />
-how to activate windows 7 after installing convenience rollup update<br />
-cara menghapus file sisa update windows 7 sp1 dan sp2<br />
-how to speed up windows 7 after applying convenience rollup update<br />
-cara menonaktifkan windows update setelah upgrade windows 7 sp1 ke sp2<br />
-how to troubleshoot windows 7 convenience rollup installation issues<br />
-cara mengembalikan windows 7 ke versi sebelumnya jika upgrade sp1 ke sp2 gagal<br />
-how to optimize windows 7 performance and security with convenience rollup update<br />
-cara membuat bootable usb windows 7 dengan service pack terbaru<br />
-how to slipstream windows 7 convenience rollup into a custom installation disc<br />
-cara memperbaiki error code saat upgrade windows 7 sp1 ke sp2<br />
-how to install additional language packs on windows 7 with convenience rollup update<br />
-cara menambahkan fitur baru di windows 7 setelah upgrade ke sp2<br />
-how to enable or disable automatic updates on windows 7 with convenience rollup update<br />
-cara mematikan notifikasi upgrade ke windows 10 di windows 7 sp2<br />
-how to check the compatibility of your hardware and software with windows 7 convenience rollup update<br />
-cara mengubah tampilan windows 7 menjadi lebih modern setelah upgrade ke sp2<br />
-how to create a system restore point before installing windows 7 convenience rollup update<br />
-cara membersihkan registry dan disk space di windows 7 setelah upgrade ke sp2<br />
-how to use dism tool to integrate convenience rollup update into your installation media[^2^]<br />
-cara mengatasi masalah driver dan perangkat keras di windows 7 setelah upgrade ke sp2<br />
-how to use system file checker tool to repair corrupted files on windows 7 with convenience rollup update<br />
-cara menginstal aplikasi dan game yang tidak kompatibel dengan windows 7 sp2<br />
-how to use compatibility mode and troubleshooter to run older programs on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan power plan dan sleep mode di windows 7 setelah upgrade ke sp2<br />
-how to adjust display settings and themes on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan jaringan dan firewall di windows 7 setelah upgrade ke sp2<br />
-how to configure network and internet settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan suara dan audio di windows 7 setelah upgrade ke sp2<br />
-how to change sound and audio settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan mouse dan keyboard di windows 7 setelah upgrade ke sp2<br />
-how to change mouse and keyboard settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan user account dan parental control di windows 7 setelah upgrade ke sp2<br />
-how to change user account and parental control settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan security dan privacy di windows 7 setelah upgrade ke sp2<br />
-how to change security and privacy settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan system dan recovery di windows 7 setelah upgrade ke sp2<br />
-how to change system and recovery settings on windows 7 with convenience rollup update<br />
-cara mengubah pengaturan date and time di windows 7 setelah upgrade ke sp2<br />
-how to change date and time settings on windows 7 with convenience rollup update</p>
-<p>By installing both SP1 and SP2, you can bring your Windows 7 up to date with the latest patches and fixes that Microsoft has provided. This will make your system more secure, stable, and efficient.</p>
-<h2>How to check your Windows 7 version and edition?</h2>
-<p>Before you download and install any service pack, you need to check what version and edition of Windows 7 you have on your system. This will help you choose the right file for your system.</p>
-<p>To check your Windows 7 version and edition, follow these steps:</p>
-<ol>
-<li>Click on the Start button and type <code>system</code> in the search box.</li>
-<li>Click on <code>System</code> in the list of results.</li>
-<li>In the System window, look for <code>Windows edition</code> section. Here you will see the name and edition of your Windows 7 (such as Home Premium, Professional, or Ultimate).</li>
-<li>Look for <code>System type</code> section. Here you will see whether your system is <code>32-bit</code> or <code>64-bit</code>.</li>
-<li>Look for <code>Service Pack</code> section. Here you will see whether you have SP1 installed or not.</li>
-</ol>
-<p>If you have SP1 installed, you will see <code>Service Pack 1</code> next to <code>Windows edition</code>. If you don't have SP1 installed, you will not see anything there.</p>
-<h2>How to download and install Windows 7 SP1 offline?</h2>
-<p>If you don't have SP1 installed on your system, you need to download and install it before you can install SP2. You can do this offline, without using Windows Update. This is useful if you have a slow or unreliable internet connection, or if you want to save bandwidth and time.</p>
-<p>To download and install Windows 7 SP1 offline, follow these steps:</p>
-<ol>
-<li>Go to the Microsoft website and download the appropriate file for your system. You can use these links:<br/>
-<a href="https://www.microsoft.com/en-us/download/details.aspx?id=5842">Windows 7 Service Pack 1 for x64-based Systems (KB976932)</a>, this is for Windows 7 64-bit.<br/>
-<a href="https://www.microsoft.com/en-us/download/details.aspx?id=3132">Windows 7 Service Pack 1 (KB976932)</a>, this is for Windows 7 32-bit.</li>
-<li>Save the file to a convenient location on your system or on a removable media such as a USB flash drive.</li>
-<li>Double-click on the file to run it. Follow the on-screen instructions to install SP1.</li>
-<li>The installation process may take a while. Please be patient and do not turn off your system.</li>
-<li>When the installation is complete, your system will restart automatically.</li>
-</ol>
-<h2>How to download and install Windows 7 SP2 offline?</h2>
-<p>If you have SP1 installed on your system, you can proceed to download and install SP2 (or convenience rollup update). You can also do this offline, without using Windows Update.</p>
-<p>To download and install Windows 7 SP2 offline, follow these steps:</p>
-<ol>
-<li>Go to the Microsoft website and download the appropriate file for your system. You can use these links:<br/>
-<a href="https://www.catalog.update.microsoft.com/Search.aspx?q=KB3125574">Update for Windows 7 for x64-based Systems (KB3125574)</a>, this is for systems with x64 architecture.<br/>
-<a href="https://www.catalog.update.microsoft.com/Search.aspx?q=KB3125574">Update for Windows 7 (KB3125574)</a>, this is for systems with x86 architecture.</li>
-<li>The file type is .MSU, not .EXE like before. Save the file to a convenient location on your system or on a removable media such as a USB flash drive.</li>
-<li>Double-click on the file to run it. Follow the on-screen instructions to install SP2.</li>
-<li>The installation process may take a while. Please be patient and do not turn off your system.</li>
-<li>When the installation is complete, your system will restart automatically.</li>
-</ol>
-<h2>How to install other Windows updates?</h2>
-<p>The convenience rollup update only contains the updates released after Service Pack 1 and before April 2016. Any updates that Microsoft has released since then will not be installed automatically. You should run Windows Update manually to install any other available updates for your system.</p>
-<p>To run Windows Update manually, follow these steps:</p>
-<ol>
-<li>Click on the Start button and type <code>windows update</code> in the search box.</li>
-<li>Click on <code>Windows Update</code> in the list of results.</li>
-<li>In the Windows Update window, click on <code>Check for updates</code>. Wait for a few minutes while Windows searches for new updates.</li>
-<li>If there are any important or optional updates available, select them and click on <code>Install updates</code>. Follow the on-screen instructions to complete the installation.</li>
-<li>You may need to restart your system after installing some updates.</li>
-</ol>
-<h3>Note:</h3>
-<p>The convenience update does not include Internet Explorer 11 or the latest .NET Framework Updates. You should install those packages manually or from Windows Update if you need them.</p>
-<h3>Tips:</h3>
-<ul>
-<li>You can change the settings of Windows Update to automatically download and install updates in the background. This way, you don't have to worry about missing any important updates in the future.</li>
-<li>You can also use third-party tools such as WSUS Offline Update or AutoPatcher to download and install multiple updates at once offline.</li>
-</ul>
-<h1>Conclusion</h1>
-<p>In this article, we have shown you how to upgrade your Windows 7 SP1 to SP2 offline, without using Windows Update. We have also shown you how to check your Windows version and edition, how</p> 0a6ba089eb<br />
-<br />
-<br />
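The version-check section of the deleted article above reads the edition, architecture, and service pack out of the Control Panel by hand. The same facts can be read programmatically; a minimal sketch using only the Python standard library, assuming it runs on the Windows machine being upgraded:

```python
import platform

# platform.win32_ver() returns (release, version, csd, ptype);
# csd is the service pack string, e.g. 'SP1' on Windows 7 SP1.
release, version, csd, _ptype = platform.win32_ver()

print("Windows release:", release)             # e.g. '7'
print("Build version  :", version)             # e.g. '6.1.7601'
print("Service pack   :", csd or "none")       # empty string means no SP
print("Architecture   :", platform.machine())  # e.g. 'AMD64' (64-bit) or 'x86'
```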
spaces/1acneusushi/gradio-2dmoleculeeditor/data/GX Developer 8.7 Full Version Learn How to Create Edit Debug and Monitor PLC Programs.md
DELETED
@@ -1,205 +0,0 @@
-
-<h1>GX Developer 8.7 Full Version: A Comprehensive Guide</h1>
-<p>If you are looking for a reliable and easy-to-use programming software for Mitsubishi PLCs, you might want to check out GX Developer 8.7. This software is the latest version of the GX Developer series, which supports various PLC models and operating systems. In this article, we will give you a comprehensive guide on what GX Developer is, how to download and install it, how to use it, and some tips and tricks for using it effectively.</p>
-<h2>What is GX Developer?</h2>
-<p>GX Developer is a programming software for Mitsubishi PLCs that allows you to create, edit, monitor, debug, and transfer programs using ladder logic or structured text. It also supports other programming languages such as SFC, MELSAP-L, IL, ST, LD, FBD, and SFC/GRAFCET.</p>
-<h2>gx developer 8.7 full version</h2><br /><p><b><b>Download Zip</b> ✒ <a href="https://byltly.com/2uKz5w">https://byltly.com/2uKz5w</a></b></p><br /><br />
-<p>GX Developer is part of the MELSOFT series of engineering software that includes other products such as GX Works2, GX Works3, GT Designer, MX Component, etc. You can use these software products together to create a complete automation system with Mitsubishi devices.</p>
-<h3>Features and benefits of GX Developer</h3>
-<p>Some of the features and benefits of using GX Developer are:</p>
-<ul>
-<li>It has a user-friendly interface that allows you to easily create and edit programs using drag-and-drop, copy-and-paste, undo-and-redo, etc.</li>
-<li>It has a powerful simulation function that allows you to test and debug your program without connecting to a PLC.</li>
-<li>It has a built-in online monitoring function that allows you to view and modify the status of devices, registers, timers, counters, etc. in real time.</li>
-<li>It has a comprehensive help function that provides detailed information on functions, instructions, devices, error codes, etc.</li>
-<li>It has a batch processing function that allows you to perform multiple operations such as printing, searching, replacing, etc. at once.</li>
-<li>It has a data backup function that allows you to save and restore your program data in case of power failure or other accidents.</li>
-<li>It has a password protection function that allows you to prevent unauthorized access or modification of your program data.</li>
-<li>It has a library function that allows you to store and reuse frequently used programs or devices.</li>
-<li>It has a template function that allows you to create standard programs or devices based on predefined formats.</li>
-</ul>
-<h3>Supported PLC series and operating systems</h3>
-<p>GX Developer 8.7 supports various PLC series such as:</p>
-<ul>
-<li>FX Series: FX0(S), FX0N, FX1(S), FX1N, FX2N, FX3G, FX3U</li>
-<li>Q Series: Q00U/J/UDE(H), Q01U/J/UDE(H), Q02U/J/UDE(H), Q03U/J/UDE(H), Q04U/J/UDE(H), Q06U/J/UDE(H), Q10U/J/UDE(H), Q13U/J/UDE(H), Q20U/J/UDE(H), Q26U/J/UDE(H), Q50U/J/UDE(H), Q100U/J/UDE(H)</li>
-<li>A Series: A0J2H/A1SJH/A1SH/A2SH/A2USH CPU (S) type only</li>
-<li>AnS Series: AnS/QnAS (Small Type) CPU only</li>
-<li>QnA Series: A2ACPU/A2NCPU/A2SCPU/A35B/Q2ACPU/Q2NCPU/Q2ASHCPU/Q2ASLCPU/QnACPU only</li>
-<li>Q Process Series: Q02PHCPU/Q06PHCPU/Q12PHCPU/Q25PHCPU only</li>
-<li>QS Safety Series: QS001CPU only</li>
-<li>L Series: L02CPU/L02SCPU/L06CPU/L06SCPU/L26CPU-BT/L26CPU-BT-S only</li>
-<li>Motion (SCPU): Q172DCPU/Q173DCPU only</li>
-</ul>
-<p>GX Developer 8.7 is compatible with the following operating systems:</p>
-<ul>
-<li>Windows XP (SP3 or later)</li>
-<li>Windows Vista (SP1 or later)</li>
-<li>Windows 7 (SP1 or later)</li>
-<li>Windows 8/8.1</li>
-<li>Windows 10</li>
-</ul>
-<h2>How to download and install GX Developer 8.7</h2>
-<h3>Download link and password</h3>
-<p>You can download GX Developer 8.7 from the following link:</p>
-<a href="https://plc4me.com/download-gx-developer-v8-98c-new-version-2022/">https://plc4me.com/download-gx-developer-v8-98c-new-version-2022/</a>
-<p>The password for extracting the file is: plc4me.com</p>
-<p>gx developer software mitsubishi electric americas<br />
-gx developer v8.98c new version 2022 download<br />
-gx developer programming software for melsec fx series<br />
-gx developer trouble viewing this page download the pdf<br />
-gx developer product key 352-100201687<br />
-gx developer v8.91 software plc mitsubishi install<br />
-gx developer simulator converter configurator-en<br />
-gx developer 8.7 full version t-luscious tea<br />
-gx developer basic controller programming environment<br />
-gx developer support q process q l fx series<br />
-gx developer comprehensive line of factory automation solutions<br />
-gx developer plc4me.com home downloads mitsubishi software<br />
-gx developer compatible with operating systems windows xp/7/8/10<br />
-gx developer support programming for mitsubishi plc series<br />
-gx developer instructions for installing gx-developer<br />
-gx developer v8.98c updated 06/2022<br />
-gx developer v8.98c software download backup link<br />
-gx developer password extract plc4me.com<br />
-gx developer comment below the article if the link is broken<br />
-gx developer gt designer 3 v1.260w full new version<br />
-gx developer sysmac studio v1.45 full googledrive<br />
-gx developer mx component ver.5 mitsubishi googledrive link<br />
-gx developer gx works2 v1.610l new version 2022<br />
-gx developer gx works3 v1.080j software full version<br />
-gx developer cx-one v4.51 cx-programmer v9.73 full version<br />
-gx developer sim ekb install 2022 11 27 for siemens software<br />
-gx developer tia portal v18 full googledrive link<br />
-gx developer blog-teknisi.com install-gx-developer-v891-software-plc.html<br />
-gx developer bytlly.com/2sH3fz download-gx-developer-8.7-full<br />
-gx developer legacy controllers a and ans series<br />
-gx developer q00u q02u q04udh older versions will not support fx3g fx3u plcs<br />
-gx developer feel free to contact us for input product id serial number<br />
-gx developer fx0 s fx0n fx1 s fx1n fx2n fx3g fx3u plcs<br />
-gx developer environment of melsoft setup install software install simulator install converter install configurator-en <br />
-gx developer us.mitsubishielectric.com fa en products controllers programmable controllers melsec engineering software other engineering softwares <br />
-gx developer plc4me.com article 2336385-korean-nuclear-fusion-reactor achieves 100 millionc for 30 seconds <br />
-gx developer nssdc.gsfc.nasa.gov planetary factsheet sunfact.html <br />
-gx developer solar.physics.montana.edu YPOP Spotlight SunInfo Core.html <br />
-gx developer curious.astro.cornell.edu about-us 54 our-solar-system the-sun interior 206 how-hot-is-each-one-of-the-layers-of-the-sun-beginner <br />
-gx developer en.wikipedia.org wiki Sun core temperature kelvin <br />
-gx developer en.wikipedia.org wiki Solar_core <br />
-gx developer news.yahoo.com nuclear-fusion-breakthrough-reactor-runs 130157687.html <br />
-gx developer the-sun.com news 4381435-holy-grail-fusion-experiments-breakthrough-race-unlimited-energy <br />
-gx developer thesun.co.uk news 17143468-holy-grail-fusion-experiments-breakthrough-race-unlimited-energy <br />
-gx developer newscientist.com article 2336385-korean-nuclear-fusion-reactor achieves 100 millionc for 30 seconds</p>
-<h3>Installation steps and product key</h3>
-<p>To install GX Developer 8.7 on your computer, follow these steps:</p>
-<ol>
-<li>Click on the file "autorun" in the extracted folder.</li>
-<li>Select "Setup Environment of MELSOFT" and click "Next". Follow the instructions on the screen to install the environment components.</li>
-<li>Select "Install GX Developer Software" and click "Next". Follow the instructions on the screen to install the software components.</li>
-<li>Select "Install GX Simulator Software" and click "Next". Follow the instructions on the screen to install the simulator components.</li>
-<li>When prompted for the product key, enter: 352-100201687</li>
-<li>Finish the installation process and restart your computer if necessary.</li>
-</ol>
-<h2>How to use GX Developer 8.7</h2>
-<h3>Creating a new project</h3>
-<p>To create a new project in GX Developer 8.7, follow these steps:</p>
-<ol>
-<li>Open GX Developer from the Start menu or desktop shortcut.</li>
-<li>Select "File" -> "New Project" from the menu bar or click on the "New Project" icon on the toolbar.</li>
-<li>Select the PLC type that matches your target device from the list of supported models.</li>
-<li>Select the communication method that you will use to connect your PC to the PLC from the list of available options.</li>
-<li>Select the programming language that you will use to create your program from the list of supported languages.</li>
-<li>Select "OK" to create a new project with default settings or select "Advanced Settings" to customize your project settings such as device comment file name, device range setting file name, etc.</li>
-<li>A new project window will open with an empty program area where you can start writing your program.</li>
-</ol>
-<h3>Programming with ladder logic</h3>
-<p>Ladder logic is one of the most common programming languages used for PLCs. It consists of graphical symbols that represent contacts, coils, timers, counters, etc., arranged in horizontal rows called rungs. Each rung represents a logical condition that controls an output device or operation.</p>
-<p>To program with ladder logic in GX Developer 8.7, follow these steps:</p>
-<ol>
-<li>Select "Edit" -> "Ladder" from the menu bar or click on the "Ladder" icon on the toolbar to switch to ladder mode.</li>
-<li>Select "Insert" -> "Rung" from the menu bar or click on the "Insert Rung" icon on the toolbar to insert a new rung at the end of your program.</li>
-the rung. You can select a device symbol from the device list on the left side of the screen or type the device name in the input box.</li>
-<li>Repeat step 3 to insert more device symbols on the same rung or on different rungs until you complete your program logic.</li>
-<li>Select "Edit" -> "Check" from the menu bar or click on the "Check" icon on the toolbar to check your program for errors and warnings. If there are any errors or warnings, they will be displayed in the error list at the bottom of the screen. You can double-click on an error or warning to jump to the corresponding location in your program and fix it.</li>
-<li>Select "File" -> "Save" from the menu bar or click on the "Save" icon on the toolbar to save your program.</li>
-</ol>
-<h3>Simulating and debugging the program</h3>
-<p>Simulating and debugging your program is an important step to verify its functionality and performance before transferring it to the PLC. You can use GX Simulator to simulate your program without connecting to a PLC and monitor and modify the status of devices, registers, timers, counters, etc.</p>
-<p>To simulate and debug your program in GX Developer 8.7, follow these steps:</p>
-<ol>
-<li>Select "Tools" -> "GX Simulator" from the menu bar or click on the "GX Simulator" icon on the toolbar to launch GX Simulator.</li>
-<li>Select "File" -> "Open Project" from the menu bar or click on the "Open Project" icon on the toolbar to open your project file in GX Simulator.</li>
-<li>Select "Simulation" -> "Start Simulation" from the menu bar or click on the "Start Simulation" icon on the toolbar to start simulating your program. The simulation status will be displayed in the status bar at the bottom of the screen.</li>
-<li>Select "View" -> "Device Monitor" from the menu bar or click on the "Device Monitor" icon on the toolbar to open the device monitor window. You can view and modify the status of devices, registers, timers, counters, etc. in this window.</li>
-<li>Select "View" -> "Debug Monitor" from the menu bar or click on the "Debug Monitor" icon on the toolbar to open the debug monitor window. You can view and modify the status of internal relays, data registers, special relays, special registers, etc. in this window.</li>
-<li>Select "Simulation" -> "Stop Simulation" from the menu bar or click on the "Stop Simulation" icon on the toolbar to stop simulating your program.</li>
-the toolbar to close your project file in GX Simulator.</li>
-<li>Select "File" -> "Exit" from the menu bar or click on the "Exit" icon on the toolbar to exit GX Simulator.</li>
-</ol>
-<h3>Transferring the program to the PLC</h3>
-<p>After you have created and tested your program, you can transfer it to the PLC using a communication cable or a memory card. You can also transfer the program from the PLC to your PC for backup or modification.</p>
-<p>To transfer your program to the PLC in GX Developer 8.7, follow these steps:</p>
-<ol>
-<li>Connect your PC to the PLC using a communication cable or insert a memory card into the PLC.</li>
-<li>Select "Online" -> "Transfer Setup" from the menu bar or click on the "Transfer Setup" icon on the toolbar to open the transfer setup window. You can select the transfer mode, direction, device range, etc. in this window.</li>
-<li>Select "OK" to confirm your transfer settings or select "Cancel" to cancel the transfer.</li>
-<li>Select "Online" -> "Write to PLC" from the menu bar or click on the "Write to PLC" icon on the toolbar to transfer your program from your PC to the PLC. A progress bar will show the transfer status.</li>
-<li>Select "Online" -> "Read from PLC" from the menu bar or click on the "Read from PLC" icon on the toolbar to transfer your program from the PLC to your PC. A progress bar will show the transfer status.</li>
-</ol>
-<h2>Tips and tricks for using GX Developer 8.7</h2>
-<h3>Using shortcuts and hotkeys</h3>
-<p>You can use shortcuts and hotkeys to perform various operations quickly and conveniently in GX Developer 8.7. Here are some of the most useful shortcuts and hotkeys:</p>
-<ul>
-<li>Ctrl+N: Create a new project</li>
-<li>Ctrl+O: Open an existing project</li>
-<li>Ctrl+S: Save the current project</li>
-<li>Ctrl+P: Print the current project</li>
-<li>Ctrl+Z: Undo the last operation</li>
-<li>Ctrl+Y: Redo the last operation</li>
-<li>Ctrl+C: Copy the selected device or rung</li>
-<li>Ctrl+X: Cut the selected device or rung</li>
-<li>Ctrl+V: Paste the copied or cut device or rung</li>
-<li>Ctrl+F: Find a device or text in the current project</li>
-<li>Ctrl+H: Replace a device or text in the current project</li>
-<li>F1: Open the help function</li>
-<li>F5: Start simulation</li>
-<li>F6: Stop simulation</li>
-<li>F7: Write to PLC</li>
-<li>F8: Read from PLC</li>
-<li>F9: Insert a new rung</li>
-<li>F10: Delete a rung</li>
-<li>F11: Insert a device</li>
-<li>F12: Delete a device</li>
-</ul>
-<h3>Using comments and labels</h3>
-<p>You can use comments and labels to add descriptive information to your program and make it easier to understand and maintain. Comments are text that explain the purpose or function of a device, rung, or program. Labels are names that identify a device, rung, or program.</p>
-<p>To use comments and labels in GX Developer 8.7, follow these steps:</p>
-<ol>
-the menu bar or click on the "Comment/Label Display Setting" icon on the toolbar to open the comment/label display setting window. You can select the display mode, font size, color, etc. for comments and labels in this window.</li>
-<li>Select "OK" to confirm your settings or select "Cancel" to cancel the settings.</li>
-<li>Select "Edit" -> "Comment/Label" from the menu bar or click on the "Comment/Label" icon on the toolbar to open the comment/label edit window. You can add, edit, or delete comments and labels for devices, rungs, or programs in this window.</li>
-<li>Select "OK" to save your changes or select "Cancel" to discard your changes.</li>
-</ol>
-<h3>Using libraries and templates</h3>
-<p>You can use libraries and templates to store and reuse frequently used programs or devices. Libraries are files that contain programs or devices that you can import into your project. Templates are files that contain standard programs or devices that you can create based on predefined formats.</p>
-<p>To use libraries and templates in GX Developer 8.7, follow these steps:</p>
-<ol>
-<li>Select "File" -> "Library" from the menu bar or click on the "Library" icon on the toolbar to open the library window. You can create, open, save, import, or export library files in this window.</li>
-<li>Select "File" -> "Template" from the menu bar or click on the "Template" icon on the toolbar to open the template window. You can create, open, save, import, or export template files in this window.</li>
-</ol>
-<h2>Conclusion</h2>
-<p>GX Developer 8.7 is a powerful and versatile programming software for Mitsubishi PLCs that supports various PLC models and operating systems. It allows you to create, edit, monitor, debug, and transfer programs using ladder logic or other programming languages. It also has many features and benefits that make it easy and convenient to use. In this article, we have given you a comprehensive guide on how to download and install GX Developer 8.7, how to use it, and some tips and tricks for using it effectively. We hope you have found this article helpful and informative.</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about GX Developer 8.7:</p>
-<h4>Q: How can I update GX Developer 8.7 to the latest version?</h4>
-<p>A: You can update GX Developer 8.7 to the latest version by downloading and installing the update file from the Mitsubishi Electric website: <a href="https://www.mitsubishielectric.com/app/fa/download/search.do?kisyu=/melsoft&mode=software">https://www.mitsubishielectric.com/app/fa/download/search.do?kisyu=/melsoft&mode=software</a></p>
-<h4>Q: How can I get technical support for GX Developer 8.7?</h4>
-<p>A: You can get technical support for GX Developer 8.7 by contacting your local Mitsubishi Electric distributor or by visiting the Mitsubishi Electric website: <a href="https://www.mitsubishielectric.com/fa/support/index.html">https://www.mitsubishielectric.com/fa/support/index.html</a></p>
-<h4>Q: How can I learn more about GX Developer 8.7?</h4>
-<p>A: You can learn more about GX Developer 8.7 by reading the manuals that are included in the software package or by downloading them from the Mitsubishi Electric website: <a href="https://www.mitsubishielectric.com/app/fa/download/search.do?kisyu=/melsoft&mode=manual">https://www.mitsubishielectric.com/app/fa/download/search.do?kisyu=/melsoft&mode=manual</a></p>
-<h4>Q: How can I share my program with other users?</h4>
-<p>A: You can share your program with other users by saving it as a project file (.prg) or a library file (.lib) and sending it via email or other methods. You can also print your program as a document file (.doc) or a PDF file (.pdf) and share it as a hard copy.</p>
-<h4>Q: How can I convert my program from GX Developer 8.7 to GX Works2 or GX Works3?</h4>
-<p>A: You can convert your program from GX Developer 8.7 to GX Works2 or GX Works3 by using the conversion tool that is included in GX Works2 or GX Works3. You can access the conversion tool by selecting "Tools" -> "Conversion Tool" from the menu bar in GX Works2 or GX Works3.</p>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/3planesoft Earth 3d Screensaver Keygen __FULL__.md
DELETED
@@ -1,9 +0,0 @@
<p>Now we present our latest 3D screensaver collection, with more titles already lined up for our next releases. 3Planesoft Open Ocean 3D Screensaver. 3Planesoft. 2.0.2. 2.0.2 Crack, Serial Number 3D Earth Screensaver. 3Planesoft. 3.8.3.</p>
<p>3Planesoft is an independent screensaver developing company specializing in 3D screensavers. The company has released 48 titles and is currently the most popular maker of 3D screensavers. 3Planesoft. 3.3.3. Keygen Serial Number 3D Earth Screensaver. 3Planesoft. 3.3.3. Serial Number, Crack 3D Earth Screensaver, Keygen. 3Planesoft. 4.0.</p>
<h2>3planesoft earth 3d screensaver keygen</h2><br /><p><b><b>Download</b> ○ <a href="https://imgfil.com/2uy1pp">https://imgfil.com/2uy1pp</a></b></p><br /><br />
<p>Registration Key for the following 3Planesoft Screensavers: - Ancient Castle 3D Screensaver - Clock 3D Screensaver - Discovery 3D Screensaver - Earth 3D Screensaver. 3Planesoft Autumn Forest 3D Screensaver 1.0.0.1 crack, Hull City Tigers, 22KB, Vote!. Planet Earth 3D Screensaver 1.0 keygen, 8KB, Vote!</p>
<p>3Planesoft Mechanical Clock 3D Screensaver v1.0 build 3 crack by FFF. 3Planesoft Earth 3D Screensaver v1.0 DIR FIX keygen by Lz0. 613 records. 3D screensaver serial numbers are presented here. 3Planesoft Voyage of Columbus 3D Screensaver v1.0. Planet Earth 3D Screensaver 1.0.</p>
<p>FazionEgg3D - a user-friendly Windows desktop screensaver with ambient music that adds to the ambiance. Hundreds of items are displayed with realistic 3D graphics and animations, and you can place your own. Fancy Planet Earth Soft 3D Screensaver and Desktop Wallpaper - Dream Aquarium is a home aquarium simulation screensaver with a vast range of beautiful fish. Unlock Dream Aquarium 5.0 keygen. Aquarium Lava Scene Screensaver - Look, it's a complex structure, but don't take my word for it, look for yourself! The incredible beauty of this complicated scene. Aquarium Lava Scene Screensaver without registration. Dream Aquarium is a unique and extremely luxurious freshwater aquarium screensaver, which claims to be the real aquarium screensaver. Click to install Dream Aquarium 5.0. Dream Aquarium 5.0.5 Crack. Luna Wind (Aquarium Wallpaper 3D Aquarium Lava Scene Screensaver) Dream Aquarium.</p>
spaces/1gistliPinn/ChatGPT4/Examples/Castlevania Lords Of Shadow Mirror Of Fate HD-RELOADED Fitgirl Repack.md
DELETED
@@ -1,15 +0,0 @@
<h1>Review: Castlevania Lords of Shadow Mirror of Fate HD-RELOADED fitgirl repack</h1>
<p>If you are a fan of the Castlevania series, you might be interested in the fitgirl repack of Castlevania Lords of Shadow Mirror of Fate HD-RELOADED, a side-scrolling action-adventure game that follows the story of the Belmont family across generations. The game was originally released for Nintendo 3DS in 2013, and later ported to PC, PS3 and Xbox 360 with improved graphics and sound. The fitgirl repack is based on the PC version and reduces the original size of 2 GB to 576 MB, without compromising the quality or content of the game.</p>
<h2>Castlevania Lords of Shadow Mirror of Fate HD-RELOADED fitgirl repack</h2><br /><p><b><b>Download</b> –––––>>> <a href="https://imgfil.com/2uxZY3">https://imgfil.com/2uxZY3</a></b></p><br /><br />
<p>The game features four playable characters: Gabriel Belmont, the protagonist of Castlevania Lords of Shadow who became the vampire lord Dracula; Trevor Belmont, his son and a knight of the Brotherhood of Light; Simon Belmont, Trevor's son and a barbarian warrior; and Alucard, Dracula's son and a half-vampire. Each character has their own abilities, weapons and skills, and the game switches between them as the story progresses. The game combines platforming, combat, puzzles and exploration, with some elements borrowed from the Metroidvania subgenre. The game also has a boss rush mode and a leaderboards system.</p>
<p>The fitgirl repack includes the latest update v1.0.684579 that fixes some bugs and improves performance. It also has an optional Russian text/audio translation (compiled by Siberian Gremlin) that can be installed during setup. The repack is 100% lossless and MD5 perfect, meaning that all files are identical to originals after installation. The installation takes only 1-2 minutes, and the game can be run from the Language Selector.exe in the game root folder. The repack is compatible with some filehosters and torrent trackers, but not with others.</p>
<p>Castlevania Lords of Shadow Mirror of Fate HD-RELOADED fitgirl repack is a great way to enjoy this game on PC with minimal storage space and maximum quality. The game offers a rich and engaging story that expands the lore of the Castlevania universe, as well as a challenging and varied gameplay that will test your skills and reflexes. If you are looking for a thrilling and satisfying adventure in the dark and Gothic world of Castlevania, you should definitely check out this repack.</p>
<p>The game has received mostly positive reviews from critics and players alike, who praised its story, graphics, music and gameplay. Some of the drawbacks mentioned were the lack of replay value, the linear level design and the occasional technical issues. The game has a Metacritic score of 72/100 for PC, based on 16 reviews. The user score is 7.8/10, based on 95 ratings.</p>
<p>Before installing this repack, make sure you have at least 2 GB of free RAM and 2 GB of HDD space. You will also need a compatible gamepad to play the game, as it does not support mouse and keyboard controls. The game runs on Windows XP with Service Pack 3 or higher, and requires DirectX 9.0c or better.</p>
<p>Some other games that you might enjoy if you liked this one are Castlevania Lords of Shadow – Ultimate Edition, which is the first game in the Lords of Shadow trilogy and a reboot of the Castlevania series; Castlevania Lords of Shadow 2, which is the final game in the trilogy and follows Dracula's quest to end his immortality; and Castlevania: Symphony of the Night, which is a classic Metroidvania game that features Alucard as the main character.</p>
<p>In conclusion, Castlevania Lords of Shadow Mirror of Fate HD-RELOADED fitgirl repack is a worthy addition to your PC game collection, especially if you are a fan of the Castlevania franchise or the action-adventure genre. The game delivers a captivating story, a stunning visual presentation, a memorable soundtrack and a fun and challenging gameplay that will keep you hooked for hours. Don't miss this opportunity to experience this epic saga in a compact and convenient format.</p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download CarSim The Industry-Leading Tool for Vehicle Dynamics Analysis and Development.md
DELETED
@@ -1,189 +0,0 @@
<h1>How to Download CarSim: A Comprehensive Guide</h1>
<p>If you are interested in simulating the dynamic performance of cars, trucks, motorcycles, or specialty vehicles, you might have heard of CarSim, a software tool that can help you achieve accurate and efficient results. But how do you download CarSim and use it for your projects? In this article, we will provide you with a comprehensive guide on how to download CarSim for Windows or Linux, as well as how to use it for vehicle dynamics simulation. We will also cover some of the features, benefits, and alternatives of CarSim, so you can decide if it is the right tool for you.</p>
<h2>What is CarSim and why do you need it?</h2>
<p>CarSim is a commercial software package that predicts the performance of vehicles in response to driver controls (steering, throttle, brakes, clutch, and shifting) in a given environment (road geometry, coefficients of friction, wind). It uses 3D multibody dynamics models to accurately reproduce the physics of the vehicle in response to controls from the driver and/or automation. It also supports vehicle sensors and interactive traffic for V2V and ADAS development. CarSim can be used as a standalone application or in co-simulation with other software tools such as MATLAB/Simulink, NI LabVIEW, FMI/FMU, Unreal Engine, etc.</p>
<h2>download carsim</h2><br /><p><b><b>Download Zip</b> ✓ <a href="https://urlin.us/2uSTWz">https://urlin.us/2uSTWz</a></b></p><br /><br />
<h3>CarSim overview and features</h3>
<p>CarSim was first introduced in 1990 and has been developed continuously since then by Mechanical Simulation Corporation. It is used worldwide by over 110 OEMs and Tier 1 suppliers and over 200 universities and government research labs. There are more than 2600 active CarSim seats, not including driving simulators or students. Some of the features of CarSim include:</p>
<ul>
<li>An intuitive user interface and powerful analysis tools</li>
<li>A library of example vehicles, roads, procedures, controllers, sensors, etc.</li>
<li>A modular VS math model that can be extended with user-defined programs or connections to third-party software</li>
<li>A VS visualizer tool for viewing simulations with plots and photo-realistic animation</li>
<li>A VS scene builder tool for creating custom road networks and environments</li>
<li>A VS SDK tool for developing custom applications using VehicleSim technology</li>
<li>Support for real-time systems such as hardware-in-the-loop (HIL) and driver-in-the-loop (DIL)</li>
<li>Extensive documentation covering all aspects of the software</li>
</ul>
<h3>CarSim benefits and applications</h3>
<p>CarSim delivers the most accurate, detailed, and efficient methods for simulating the performance of passenger vehicles and light-duty trucks. It has been validated by automotive engineers for over twenty years and has shown close agreement between simulation predictions and test results. Some of the benefits and applications of CarSim are:</p>
<ul>
<li>It can help you analyze vehicle dynamics, develop active controllers, calculate vehicle performance characteristics, and engineer next-generation active safety systems</li>
<li>It can help you reduce development time and cost by enabling virtual testing and optimization before physical prototyping</li>
<li>It can help you improve vehicle quality and safety by identifying potential issues and solutions early in the design process</li>
<li>It can help you enhance vehicle performance and efficiency by exploring different design scenarios and trade-offs</li>
<li>It can help you demonstrate vehicle capabilities and features to customers and stakeholders by creating realistic simulations and animations</li>
</ul>
<h3>CarSim alternatives and competitors</h3>
<p>CarSim is not the only software tool available for vehicle dynamics simulation. There are several alternatives that offer similar or different features and capabilities. Some of the most popular ones are:</p>
<ul>
<li>ADAMS Car: A software tool from MSC Software that provides a comprehensive set of built-in components, templates, and test rigs for vehicle modeling and simulation. It can be used for vehicle dynamics, durability, ride and handling, and NVH analysis. It can also be integrated with other MSC products such as MSC Nastran, MSC Fatigue, etc.</li>
<li>Dymola: A software tool from Dassault Systèmes that uses the Modelica language for modeling and simulating complex and multidisciplinary systems. It can be used for vehicle dynamics, powertrain, thermal management, electric and hybrid vehicles, etc. It can also be coupled with other tools such as Simulink, FMI/FMU, etc.</li>
<li>IPG CarMaker: A software tool from IPG Automotive that provides a complete solution for virtual test driving. It can be used for vehicle dynamics, ADAS, autonomous driving, powertrain, chassis, etc. It can also be connected to real-time systems such as HIL and DIL.</li>
</ul>
<p>Each of these tools has its own advantages and disadvantages, depending on your specific needs and preferences. You can compare them based on various criteria such as features, accuracy, speed, ease of use, cost, support, etc. You can also try them out by requesting a free trial or a demo from their respective websites.</p>
<h2>How to download CarSim for Windows or Linux</h2>
<p>If you have decided that CarSim is the right tool for you, you might be wondering how to download it and install it on your computer. In this section, we will guide you through the process of downloading CarSim for Windows or Linux. We will also provide you with some tips and tricks to optimize CarSim performance on your system.</p>
<h3>Requirements and prerequisites</h3>
<p>Before you download CarSim, you need to make sure that your computer meets the minimum requirements for running the software. According to the official website, these are:</p>
<table>
<tr>
<th>Operating System</th>
<th>Processor</th>
<th>Memory</th>
<th>Disk Space</th>
<th>Graphics Card</th>
</tr>
<tr>
<td>Windows 10 (64-bit) or Linux (64-bit)</td>
<td>Intel Core i5 or equivalent</td>
<td>8 GB RAM or more</td>
<td>10 GB free disk space or more</td>
<td>NVIDIA GeForce GTX 1050 or equivalent</td>
</tr>
</table>
<p>In addition to these requirements, you also need to have a valid license for using CarSim. You can obtain a license by contacting Mechanical Simulation Corporation or one of their authorized distributors. You can choose from different types of licenses such as node-locked, floating, networked, academic, etc. depending on your needs and budget.</p>
<h3>Steps to download and install CarSim</h3>
<p>Once you have a license for CarSim, you can proceed to download and install the software on your computer. The steps are as follows:</p>
<ol>
<li>Go to the official website of Mechanical Simulation Corporation and log in with your username and password.</li>
<li>Go to the Downloads section and select the latest version of CarSim for your operating system (Windows or Linux).</li>
<li>Download the installation file (CarSim_Setup.exe for Windows or CarSim_Setup.run for Linux) and save it to your preferred location.</li>
<li>Run the installation file and follow the instructions on the screen. You will need to accept the license agreement, choose the installation directory, select the components to install, etc.</li>
<li>When the installation is complete, you will need to activate your license by entering your license code or connecting to your license server.</li>
<li>You can now launch CarSim from the Start menu (Windows) or the Applications menu (Linux) and start using it for your projects.</li>
</ol>
<h3>Tips and tricks to optimize CarSim performance</h3>
<p>To get the most out of CarSim, you might want to optimize its performance on your system. Here are some tips and tricks that can help you do that:</p>
<ul>
<li>Keep your system updated with the latest drivers, patches, and security updates.</li>
<li>Close any unnecessary programs or processes that might be consuming CPU, memory, disk space, or network bandwidth.</li>
<li>Adjust the graphics settings in CarSim according to your system capabilities and preferences. You can change the resolution, quality, anti-aliasing, shadows, etc. in the VS Visualizer options menu.</li>
<li>Use a wired connection instead of a wireless one if you are using CarSim in co-simulation with other software tools such as MATLAB/Simulink, NI LabVIEW, etc. This can improve the communication speed and reliability between the tools.</li>
<li>Use the VS Scene Builder tool to create custom road networks and environments for your simulation scenarios. You can import existing road data from OpenDRIVE, Google Maps, etc. or create your own roads using the graphical interface.</li>
<li>Use the VS SDK tool to develop custom applications using VehicleSim technology. You can create your own user interface, data processing, visualization, etc. using C++, C#, Python, or Java.</li>
</ul>
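<p>To make the VS SDK tip concrete, here is a minimal Python sketch of driving a CarSim run from your own code. It assumes the high-level <code>vs_run</code> entry point described in the VS API documentation; the DLL name and file paths below are illustrative assumptions, so check your own installation for the exact locations and names.</p>
<pre><code># Minimal sketch: run a CarSim simulation through the VS solver DLL.
# Assumptions: the solver DLL exposes vs_run(simfile) as documented in
# the VS API; both paths below are placeholders for your installation.
import ctypes

SOLVER_DLL = r"C:\Program Files (x86)\CarSim\Programs\Solvers\carsim_64.dll"  # placeholder
SIMFILE = r"C:\CarSim_Data\simfile.sim"  # run-control file written by the CarSim GUI

solver = ctypes.CDLL(SOLVER_DLL)
solver.vs_run.argtypes = [ctypes.c_char_p]  # takes the simfile path
solver.vs_run.restype = ctypes.c_int        # returns an error code

status = solver.vs_run(SIMFILE.encode("ascii"))
print("vs_run returned", status)  # 0 normally indicates a completed run
</code></pre>
<p>A wrapper like this is how batch studies are typically scripted: generate or edit the simfile, call the solver, then read the output files back for analysis.</p>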
<h2>How to use CarSim for vehicle dynamics simulation</h2>
<p>Now that you have downloaded and installed CarSim on your system, you might be wondering how to use it for vehicle dynamics simulation. In this section, we will guide you through the process of creating and running a simulation scenario, using built-in controllers and sensors, and analyzing and visualizing simulation results.</p>
<h3>How to create and run a simulation scenario</h3>
<p>A simulation scenario in CarSim consists of four main components: a vehicle model, a road model, a driver model, and a procedure. You can create and run a simulation scenario by following these steps:</p>
<ol>
<li>Launch CarSim from the Start menu (Windows) or the Applications menu (Linux) and select New from the File menu to create a new project.</li>
<li>Select a vehicle model from the library or create your own using the Vehicle Browser. You can modify the vehicle parameters such as mass, inertia, suspension, tires, brakes, steering, etc. using the Vehicle Editor.</li>
<li>Select a road model from the library or create your own using the Road Browser. You can modify the road parameters such as geometry, elevation, curvature, friction, etc. using the Road Editor.</li>
<li>Select a driver model from the library or create your own using the Driver Browser. You can modify the driver parameters such as reaction time, steering gain, speed profile, etc. using the Driver Editor.</li>
<li>Select a procedure from the library or create your own using the Procedure Browser. You can modify the procedure parameters such as initial conditions, events, outputs, etc. using the Procedure Editor.</li>
<li>Save your project and click on Run Simulation to start the simulation. You can monitor the simulation progress and status in the Simulation Control window.</li>
</ol>
<h3>How to use built-in controllers and sensors</h3>
<p>CarSim provides a number of built-in controllers and sensors that you can use for your simulation scenarios. These include:</p>
<ul>
<li>Controllers: ABS, TCS, ESC, ACC, LKA, AEB, etc.</li>
<li>Sensors: Radar, Lidar, Camera, GPS, IMU, etc.</li>
</ul>
<p>You can use these controllers and sensors by following these steps:</p>
<ol>
<li>Select a controller or sensor from the library or create your own using the Controller Browser or Sensor Browser. You can modify the controller or sensor parameters such as gains, thresholds, delays, fields of view, etc. using the Controller Editor or Sensor Editor.</li>
<li>Assign the controller or sensor to a vehicle component such as brakes, throttle, steering, etc. using the Vehicle Editor. You can also specify the connections and interactions between different controllers and sensors using the Controller Network Editor.</li>
<li>Save your project and run the simulation. You can observe the effects of the controllers and sensors on the vehicle behavior and performance in the VS Visualizer.</li>
</ol>
<h3>How to analyze and visualize simulation results</h3>
<p>After running a simulation, you can analyze and visualize the simulation results using the VS Visualizer tool. You can do this by following these steps:</p>
<ol>
<li>Open the VS Visualizer from the Tools menu or by clicking on the VS Visualizer icon in the Simulation Control window.</li>
<li>Select a simulation run from the list of available runs or browse for a run file (.sim) in your project folder.</li>
<li>Select a plot or animation type from the list of available types or create your own using the Plot Editor or Animation Editor. You can choose from various plot types such as time history, XY, polar, histogram, etc. and animation types such as 3D, 2D, video, etc.</li>
<li>Select the variables that you want to plot or animate from the list of available variables or create your own using the Variable Editor. You can choose from various variables such as vehicle states, driver inputs, controller outputs, sensor measurements, etc.</li>
<li>Adjust the plot or animation settings such as scale, range, color, font, legend, etc. using the Plot Options or Animation Options menu.</li>
<li>View the plot or animation in the VS Visualizer window. You can zoom, pan, rotate, pause, play, etc. using the mouse or keyboard commands.</li>
<li>Save or export the plot or animation as an image file (.png, .jpg, etc.) or a video file (.avi, .mp4, etc.) using the File menu.</li>
</ol>
<h2>Conclusion and FAQs</h2>
<p>In this article, we have provided you with a comprehensive guide on how to download CarSim for Windows or Linux, as well as how to use it for vehicle dynamics simulation. We have also covered some of the features, benefits, and alternatives of CarSim, so you can decide if it is the right tool for you. We hope that you have found this article useful and informative. If you have any questions or feedback, please feel free to contact us or leave a comment below. Here are some FAQs that might help you further:</p>
<h3>Summary of the main points</h3>
<ul>
<li>CarSim is a commercial software package that predicts the performance of vehicles in response to driver controls and environment conditions.</li>
<li>CarSim uses 3D multibody dynamics models to accurately reproduce the physics of the vehicle and supports vehicle sensors and interactive traffic.</li>
<li>CarSim can be used as a standalone application or in co-simulation with other software tools such as MATLAB/Simulink, NI LabVIEW, FMI/FMU, Unreal Engine, etc.</li>
<li>CarSim can help you analyze vehicle dynamics, develop active controllers, calculate vehicle performance characteristics, and engineer next-generation active safety systems.</li>
<li>CarSim can help you reduce development time and cost, improve vehicle quality and safety, enhance vehicle performance and efficiency, and demonstrate vehicle capabilities and features.</li>
<li>CarSim has several alternatives that offer similar or different features and capabilities, such as ADAMS Car, Dymola, IPG CarMaker, etc.</li>
<li>To download CarSim, you need to have a valid license and a computer that meets the minimum requirements. You can download the installation file from the official website and follow the instructions on the screen.</li>
<li>To use CarSim, you need to create and run a simulation scenario that consists of a vehicle model, a road model, a driver model, and a procedure. You can use built-in controllers and sensors or create your own. You can analyze and visualize the simulation results using the VS Visualizer tool.</li>
</ul>
<h3>FAQs</h3>
<ol>
<li>Q: How much does CarSim cost?<br>
A: The cost of CarSim depends on the type of license, the number of seats, the duration of use, and the level of support. You can contact Mechanical Simulation Corporation or one of their authorized distributors for a quote.</li>
<li>Q: How can I learn more about CarSim?<br>
A: You can learn more about CarSim by visiting the official website, reading the documentation, watching the tutorials, attending the webinars, or joining the user forum. You can also request a free trial or a demo to try out CarSim for yourself.</li>
<li>Q: How can I get technical support for CarSim?<br>
A: You can get technical support for CarSim by contacting Mechanical Simulation Corporation or one of their authorized distributors. You can also use the online help system, the FAQ page, or the user forum to find answers to common questions or issues.</li>
<li>Q: How can I upgrade to the latest version of CarSim?<br>
A: You can upgrade to the latest version of CarSim by downloading the installation file from the official website and running it on your system. You will need to have a valid license for the latest version or renew your license if it has expired.</li>
<li>Q: How can I share my CarSim projects with others?<br>
A: You can share your CarSim projects with others by exporting them as ZIP files or VS Browser files using the File menu. You can also share your simulation results as image files or video files using the VS Visualizer tool.</li>
</ol>
spaces/1phancelerku/anime-remove-background/Cmo jugar al tablero deuda eterna juego apk y aprender sobre la realidad latinoamericana.md
DELETED
@@ -1,92 +0,0 @@
<h1>What is Tablero Deuda Eterna Juego APK?</h1>
<p>If you are looking for a board game that combines strategy, education, and fun, you might want to check out Tablero Deuda Eterna Juego APK. This is a digital version of a classic board game that was created in Cuba in the 1960s as a response to Monopoly. The game simulates the economic and political situation of Latin America and challenges the players to defeat the International Monetary Fund (IMF) and achieve development and independence.</p>
<h2>tablero deuda eterna juego apk</h2><br /><p><b><b>DOWNLOAD</b> ►►►►► <a href="https://jinyurl.com/2uNKrs">https://jinyurl.com/2uNKrs</a></b></p><br /><br />
<h2>How to play Tablero Deuda Eterna Juego APK?</h2>
<h3>The objective of the game</h3>
<p>The game can be played by 2 to 6 players, who take on the role of governments of Third World countries. The goal is to use the natural resources of Latin America, industrialize them, and sell them in the Northern markets. To do so, each player has to become an entrepreneur and overcome all the obstacles that the game presents. The capital is provided by the IMF, which imposes conditions, devaluations, and other difficulties. The objective is to overcome these challenges and get rid of the IMF.</p>
<h3>The components of the game</h3>
<p>The game consists of a board that represents Latin America and North America, divided into 24 territories. Each territory has a different value and produces a different resource. The board also has spaces for cards, dice, money, and other elements. The cards are divided into three types: IMF cards, which contain instructions that affect all players; Event cards, which contain random events that affect one or more players; and Development cards, which contain benefits that players can buy with their money. The dice are used to determine the movement of players on the board and to resolve conflicts. The money is used to buy properties, industries, and development cards.</p>
<h3>The rules of the game</h3>
<p>The game follows these basic steps:</p>
<ul>
<li>Each player chooses a color and receives a starting amount of money.</li>
<li>Each player rolls a die and moves their pawn on the board according to the number.</li>
<li>If a player lands on an empty territory in Latin America, they can buy it with their money.</li>
<li>If a player lands on an occupied territory in Latin America, they have to pay rent to the owner.</li>
<li>If a player lands on an empty territory in North America, they can build a multinational industry with their money.</li>
<li>If a player lands on an occupied territory in North America, they have to pay royalties to the owner.</li>
<li>If a player lands on an IMF space, they have to draw an IMF card and follow its instructions.</li>
<li>If a player lands on an Event space, they have to draw an Event card and follow its instructions.</li>
<li>If a player lands on a Development space, they can buy a Development card with their money.</li>
<li>If two or more players land on the same space, they have to roll dice to determine who stays and who moves back one space.</li>
</ul>
<p>The game ends when one of these conditions is met:</p>
<ul>
<li>Tie: If one player, or a pair or trio of players, owns more than 50% of the territories in Latin America and North America, they win the game together.</li>
<li>Victory: If one player, or a pair or trio of players, manages to pay off their debt to the IMF and get rid of all the IMF cards, they win the game together.</li>
<li>Defeat: If all the players run out of money and cannot pay their debts, they lose the game together.</li>
</ul>
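<p>To see how these turn steps fit together, here is a small Python sketch of one simplified turn. It is only an illustration: the board layout, prices, rents, and royalties below are invented numbers, not the game's real values.</p>
<pre><code># Illustrative sketch of a turn: roll, move, then buy or pay.
# All amounts and the 24-territory layout are made up for this example.
import random

BOARD = ["latin"] * 18 + ["north"] * 6   # 24 territories, two regions

def take_turn(player, owners, money):
    roll = random.randint(1, 6)                      # roll a die and move
    player["pos"] = (player["pos"] + roll) % len(BOARD)
    pos, region = player["pos"], BOARD[player["pos"]]
    owner = owners.get(pos)
    if owner is None and money[player["name"]] >= 100:
        owners[pos] = player["name"]                 # buy land / build industry
        money[player["name"]] -= 100
    elif owner is not None and owner != player["name"]:
        fee = 20 if region == "latin" else 40        # rent vs. royalties
        money[player["name"]] -= fee
        money[owner] += fee

money = {"A": 500, "B": 500}
owners = {}
players = [{"name": "A", "pos": 0}, {"name": "B", "pos": 0}]
for _ in range(10):                                  # ten rounds of play
    for p in players:
        take_turn(p, owners, money)
print(money, owners)
</code></pre>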
<h2>Why should you play Tablero Deuda Eterna Juego APK?</h2>
<h3>The educational value of the game</h3>
<p>One of the main reasons to play Tablero Deuda Eterna Juego APK is that it is a great way to learn about the economic and political realities of Latin America and the world. The game exposes the players to the concepts of debt, interest, inflation, devaluation, trade, development, and imperialism. It also shows the historical and cultural aspects of Latin America, such as its diversity, richness, and struggles. The game encourages critical thinking and reflection on the causes and consequences of underdevelopment and dependency.</p>
<h3>The fun factor of the game</h3>
<p>Another reason to play Tablero Deuda Eterna Juego APK is that it is a lot of fun. The game is full of surprises, challenges, and opportunities. The players have to make strategic decisions, negotiate with each other, and deal with unexpected events. The game is also very social, as it fosters cooperation, competition, and communication among the players. The game can be played with friends, family, or online with other players from around the world.</p>
<h3>The availability of the game</h3>
<p>A final reason to play Tablero Deuda Eterna Juego APK is that it is very easy to access and enjoy. The game is available for free on Google Play Store for Android devices. You can download and install it in minutes and start playing right away. The game has a simple and intuitive interface, with clear instructions and graphics. The game also has a multiplayer mode, where you can join or create rooms with other players online.</p>
<h2>Conclusion</h2>
<p>Tablero Deuda Eterna Juego APK is a board game that offers a unique and engaging experience for anyone who wants to learn more about Latin America and the world. The game combines strategy, education, and fun in a way that challenges and entertains the players. The game is also very accessible and easy to play on Android devices. If you are looking for a board game that will make you think, laugh, and have a good time, you should definitely give Tablero Deuda Eterna Juego APK a try.</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about Tablero Deuda Eterna Juego APK:</p>
<ul>
<li><b>Q: How long does a game last?</b></li>
<li>A: The duration of a game depends on the number of players, the level of difficulty, and the luck factor. A typical game can last from 30 minutes to 2 hours.</li>
<li><b>Q: Can I play the game offline?</b></li>
<li>A: Yes, you can play the game offline with up to 6 players on one device. You can also play online with other players from different devices.</li>
<li><b>Q: What languages does the game support?</b></li>
<li>A: The game supports Spanish, English, Portuguese, French, German, Italian, Russian, Chinese, Japanese, Korean, Arabic, Hindi, and Turkish.</li>
<li><b>Q: Is the game based on real facts?</b></li>
<li>A: The game is based on historical and current facts about Latin America and the world. However, some aspects of the game are fictional or exaggerated for entertainment purposes.</li>
<li><b>Q: Where can I find more information about the game?</b></li>
<li>A: You can find more information about the game on its official website, its Facebook page, or its YouTube channel.</li>
</ul>
<p>References: official website (https://www.deudaeterna.com/), Facebook (https://www.facebook.com/deudaeterna/), YouTube (https://www.youtube.com/channel/UCwYkXtZ8v0a9yZyRlJg7oLw).</p>
spaces/1phancelerku/anime-remove-background/Download Hacked My Talking Tom APK with Unlimited Money and Coins.md
DELETED
@@ -1,77 +0,0 @@
<h1>Download Hacked My Talking Tom: Is It Worth It?</h1>
<p>My Talking Tom is one of the most popular virtual pet games in the world, with over a billion downloads on Google Play and App Store. But some people are not satisfied with the official version of the game and look for ways to download hacked versions that offer unlimited coins, unlocked features, and no ads. But is it worth it to download hacked My Talking Tom? What are the risks and consequences of doing so? In this article, we will explore these questions and give you some tips on how to play My Talking Tom safely and legally.</p>
<h2>What is My Talking Tom?</h2>
<h3>A popular virtual pet game</h3>
<p>My Talking Tom is a game developed by Outfit7, a company that specializes in creating games featuring animated characters that can talk and interact with the players. The game was released in 2013 and has since become a global phenomenon, spawning several sequels and spin-offs, such as My Talking Tom 2, My Talking Angela, My Talking Tom Friends, and more.</p>
<h2>download hacked my talking tom</h2><br /><p><b><b>DOWNLOAD</b> ✏ <a href="https://jinyurl.com/2uNO7k">https://jinyurl.com/2uNO7k</a></b></p><br /><br />
<h3>Features and gameplay</h3>
<p>The game allows you to adopt a cute kitten named Tom and take care of him as he grows up. You can feed him, play with him, dress him up, decorate his house, and travel to different destinations with him. You can also talk to him and he will repeat what you say in a funny voice. The game also features mini-games that add action, adventure, and fun to the gameplay. The game is free to download and play, but it contains in-app purchases and ads that can enhance or interrupt your experience.</p>
<h2>What are hacked games?</h2>
<h3>Games that have been cracked or modified</h3>
<p>Hacked games are games that have been cracked or modified by hackers or pirates to bypass the digital rights management (DRM) technology that protects the copyrighted content. By doing so, they allow users to access and play the games without paying the developers or distributors. Often, in-demand titles are singled out for this treatment as soon as they become available to buy, although there are even examples of pirates getting hold of and distributing free copies in advance of official release dates.</p>
<h3>Reasons why people download hacked games</h3>
<p>Some of the reasons why people download hacked games are:</p>
<ul>
<li>They want to save money by not paying for the games.</li>
<li>They want to access features or content that are locked or unavailable in the official version.</li>
<li>They want to avoid ads or in-app purchases that may annoy or tempt them.</li>
<li>They want to try out the games before buying them.</li>
<li>They want to challenge themselves or show off their skills by playing harder or modified versions of the games.</li>
</ul>
<h2>What are the risks of downloading hacked games?</h2>
<h3>Malware infection</h3>
<p>One of the biggest risks of downloading hacked games is malware infection. Malware is malicious software that can harm your device or data in various ways, such as stealing your personal information, locking your files, displaying unwanted ads, slowing down your performance, or even taking over your device. Hackers and pirates often use popular games as bait to lure users into downloading malware-infested software. They may do so by sharing messages on social media, phishing emails, or even through search engine optimization of their websites or P2P torrents. Often, the malware has been crafted to bypass traditional security filters or it may require the user to deactivate their anti-malware software altogether. It also typically asks for excessive permissions to run.</p>
<h3>Legal consequences</h3>
<p>Another risk of downloading hacked games is legal consequences. Downloading hacked games is a form of piracy, which is illegal in most countries. Piracy is the unauthorized use or distribution of copyrighted material, such as games, movies, music, books, etc. Piracy violates the intellectual property rights of the creators and owners of the content, who invest time, money, and effort to produce and distribute it. Piracy also harms the legitimate businesses and industries that depend on the revenue from the sales of the content. Piracy can result in civil or criminal penalties, such as fines, lawsuits, or even jail time, depending on the severity and frequency of the offense.</p>
<h3>Poor gaming experience</h3>
<p>A third risk of downloading hacked games is poor gaming experience. Hacked games may not work properly or at all on your device, as they may not be compatible with your operating system, hardware, or software. They may also contain bugs, glitches, errors, or crashes that can ruin your gameplay or cause you to lose your progress. Hacked games may also lack the updates, patches, or support that the official version receives from the developers or distributors. This means that you may miss out on new features, content, improvements, or fixes that can enhance your gaming experience. Hacked games may also be unfair or unbalanced, as they may give you an advantage or disadvantage over other players or the game itself.</p>
<h2>How to play My Talking Tom safely and legally?</h2>
<h3>Download from official sources</h3>
<p>The best way to play My Talking Tom safely and legally is to download it from official sources, such as Google Play or App Store. These platforms have strict policies and procedures to ensure that the games they offer are authentic, secure, and compliant with the law. They also have ratings, reviews, and feedback systems that can help you choose the best games for your preferences and needs. By downloading from official sources, you can also enjoy the benefits of cloud saving, achievements, leaderboards, and social features that can enrich your gaming experience.</p>
<h3>Use antivirus software and VPN</h3>
<p>Another way to play My Talking Tom safely and legally is to use antivirus software and a VPN (virtual private network) on your device. Antivirus software can protect your device from malware infection by scanning and removing any suspicious files or programs that you download or run. A VPN can protect your online privacy and security by encrypting your data and hiding your IP address from hackers, trackers, or spies. A VPN can also help you access geo-restricted content or bypass censorship that may prevent you from playing My Talking Tom in some regions.</p>
<h3>Support the developers</h3>
<p>A final way to play My Talking Tom safely and legally is to support the developers by paying for the game or making in-app purchases. By doing so, you can show your appreciation and gratitude for their hard work and creativity. You can also help them cover their costs and earn a profit that can motivate them to continue making more games for you to enjoy. You can also access premium features or content that can enhance your gameplay or customization options. You can also disable ads that may interrupt or distract you from your gameplay.</p>
<h2>Conclusion</h2>
<p>My Talking Tom is a fun and engaging virtual pet game that millions of people love and play around the world. However, some people may be tempted to download hacked versions of the game that offer unlimited coins, unlocked features, and no ads. This is not worth it, as it poses many risks and consequences for your device, data, legal status, and gaming experience. Instead, you should play My Talking Tom safely and legally by downloading it from official sources, using antivirus software and a VPN on your device, and supporting the developers by paying for the game or making in-app purchases.</p>
<h2>FAQs</h2>
<ul>
<li><b>Q: How do I download My Talking Tom?</b></li>
<li>A: You can download My Talking Tom from Google Play or App Store on your Android or iOS device.</li>
<li><b>Q: How do I update My Talking Tom?</b></li>
<li>A: You can update My Talking Tom by checking for updates on Google Play or App Store and following the instructions.</li>
<li><b>Q: How do I get more coins in My Talking Tom?</b></li>
<li>A: You can get more coins in My Talking Tom by playing mini-games, watching ads, completing tasks, or making in-app purchases.</li>
<li><b>Q: How do I unlock more features or content in My Talking Tom?</b></li>
<li>A: You can unlock more features or content in My Talking Tom by leveling up your Tom, traveling to different destinations, dressing him up with different outfits and accessories, decorating his house with various items and furniture, or making in-app purchases.</li>
<li><b>Q: How do I talk to my Tom in My Talking Tom?</b></li>
<li>A: You can talk to your Tom in My Talking Tom by tapping on the microphone icon and speaking into your device. Your Tom will repeat what you say in a funny voice.</li>
</ul>
spaces/1phancelerku/anime-remove-background/Download Pink Wallpaper Android - Amazing and Free HD Pink Wallpapers for Your Mobile and Desktop.md
DELETED
@@ -1,168 +0,0 @@
-<br />
-<h1>How to Download Pink Wallpaper for Android</h1>
-<p>If you are looking for a way to spice up your Android device with some color and style, you might want to consider downloading some pink wallpaper. Pink is a versatile and attractive color that can suit any mood, occasion, or personality. Whether you want a soft and romantic pink, a bright and cheerful pink, or a bold and edgy pink, there is a pink wallpaper for you.</p>
-<p>In this article, we will show you what pink wallpaper is, why you should choose it for your Android device, where to find it, and how to download and set it as your home screen or lock screen. By the end of this article, you will be able to enjoy the beauty and charm of pink wallpaper on your Android device.</p>
-<h2>download pink wallpaper android</h2><br /><p><b><b>DOWNLOAD</b> ✯ <a href="https://jinyurl.com/2uNSnE">https://jinyurl.com/2uNSnE</a></b></p><br /><br />
-<h2>What is Pink Wallpaper?</h2>
-<p>Pink wallpaper is a type of digital image that features the color pink as the main or dominant color. Pink wallpaper can have different shades, tones, patterns, textures, designs, or images that are related to the color pink. Some examples of pink wallpaper are:</p>
-<ul>
-<li>Pink flowers, such as roses, tulips, cherry blossoms, or peonies</li>
-<li>Pink animals, such as flamingos, pigs, cats, or unicorns</li>
-<li>Pink food, such as cupcakes, ice cream, candy, or strawberries</li>
-<li>Pink abstract shapes, such as circles, triangles, stars, or hearts</li>
-<li>Pink gradients, such as pastel pink, hot pink, neon pink, or magenta</li>
-<li>Pink quotes, such as "Think Pink", "Pink Power", "Pink is My Favorite Color", or "Pink Makes Me Happy"</li>
-</ul>
-<p>Pink wallpaper can be downloaded from various sources online, such as websites, apps, or social media platforms. You can also create your own pink wallpaper using photo editing software or online tools.</p>
-<h2>Why Choose Pink Wallpaper for Android?</h2>
-<p>Pink wallpaper is a great choice for your Android device because it has many benefits and advantages. Some of them are:</p>
-<ul>
-<li>Pink wallpaper can make your device look more attractive and stylish. Pink is a fashionable and trendy color that can match any theme or case.</li>
-<li>Pink wallpaper can boost your mood and energy. Pink is a positive and uplifting color that can inspire joy, happiness, love, or creativity.</li>
-<li>Pink wallpaper can express your personality and preferences. Pink is a versatile and diverse color that can reflect your interests, hobbies, passions, or values.</li>
-<li>Pink wallpaper can suit any occasion or season. Pink is a timeless and universal color that can fit any event or celebration.</li>
-</ul>
-<h3>Pink Wallpaper for Different Moods and Occasions</h3>
-<p>Depending on your mood or the occasion, you can choose different types of pink wallpaper for your Android device. Here are some examples of pink wallpaper for various themes and events:</p>
-<table>
-<tr>
-<th>Theme/Event</th>
-<th>Pink Wallpaper</th>
-</tr>
-<tr>
-<td>Valentine's Day</td>
-<td>Pink hearts, roses, chocolates, or love letters</td>
-</tr>
-<tr>
-<td>Birthday</td>
-<td>Pink balloons, cakes, candles, or confetti</td>
-</tr>
-<tr>
-<td>Spring</td>
-<td>Pink cherry blossoms, tulips, butterflies, or birds</td>
-</tr>
-<tr>
-<td>Summer</td>
-<td>Pink flamingos, watermelons, ice creams, or sunglasses</td>
-</tr>
-<tr>
-<td>Halloween</td>
-<td>Pink pumpkins, skulls, witches, or ghosts</td>
-</tr>
-<tr>
-<td>Christmas</td>
-<td>Pink snowflakes, trees, ornaments, or stockings</td>
-</tr>
-</table>
-<h3>Pink Wallpaper for Different Screen Sizes and Resolutions</h3>
-<p>Another factor to consider when choosing pink wallpaper for your Android device is the screen size and resolution. You want to make sure that the pink wallpaper you download is compatible with your device and does not look blurry, pixelated, or stretched. Here are some tips on how to choose the right pink wallpaper for your device:</p>
-<ul>
-<li>Check the screen size and resolution of your device. You can find this information in the settings menu or online. For example, if you have a Samsung Galaxy S10, your screen size is 6.1 inches and your resolution is 1440 x 3040 pixels.</li>
-<li>Look for pink wallpaper that matches or exceeds your screen resolution. You can use online tools or apps to resize or crop the pink wallpaper if needed. For example, if you find a pink wallpaper that is 1920 x 1080 pixels, you can resize it to fit your screen without losing quality.</li>
-<li>Avoid pink wallpaper that is too small or too large for your screen. If the pink wallpaper is too small, it will look blurry or pixelated when you zoom in. If the pink wallpaper is too large, it will take up more space and memory on your device and may slow down its performance.</li>
-<li>Preview the pink wallpaper before you download it. You can use the web browser or the app to see how the pink wallpaper looks on your device. You can also adjust the brightness, contrast, or saturation of the pink wallpaper to suit your preference.</li>
-</ul>
-<h2>Where to Find Pink Wallpaper for Android?</h2>
-<p>There are many sources and websites where you can find and download pink wallpaper for your Android device. Some of them are free and some of them are paid. Some of them offer high-quality and original pink wallpaper and some of them offer low-quality and generic pink wallpaper. You should be careful and selective when choosing where to download pink wallpaper from. Here are some of the best and most popular sources and websites for downloading pink wallpaper:</p>
-<h3>Free HD Pink Wallpaper from Unsplash</h3>
-<p>Unsplash is a website that offers free high-resolution photos that you can use for anything. You can find thousands of stunning and unique pink wallpaper images on Unsplash that are uploaded by talented photographers from around the world. You can browse by category, keyword, color, orientation, or popularity. You can also download as many pink wallpaper images as you want without any limit or watermark.</p>
-<p>download pink wallpaper android free<br />
-download pink wallpaper android hd<br />
-download pink wallpaper android 4k<br />
-download pink wallpaper android cute<br />
-download pink wallpaper android aesthetic<br />
-download pink wallpaper android glitter<br />
-download pink wallpaper android girly<br />
-download pink wallpaper android love<br />
-download pink wallpaper android floral<br />
-download pink wallpaper android abstract<br />
-download pink wallpaper android gradient<br />
-download pink wallpaper android marshmallow<br />
-download pink wallpaper android bokeh<br />
-download pink wallpaper android crystals<br />
-download pink wallpaper android silk<br />
-download pink wallpaper android nature<br />
-download pink wallpaper android sky<br />
-download pink wallpaper android city<br />
-download pink wallpaper android cars<br />
-download pink wallpaper android travel<br />
-download pink wallpaper android iphone<br />
-download pink wallpaper android unsplash<br />
-download pink wallpaper android pixabay<br />
-download pink wallpaper android blackpink<br />
-download pink wallpaper android quotes<br />
-download hd free pink wallpapers for android<br />
-download hd cute pink wallpapers for android<br />
-download hd aesthetic pink wallpapers for android<br />
-download hd glitter pink wallpapers for android<br />
-download hd girly pink wallpapers for android<br />
-download hd love pink wallpapers for android<br />
-download hd floral pink wallpapers for android<br />
-download hd abstract pink wallpapers for android<br />
-download hd gradient pink wallpapers for android<br />
-download hd marshmallow pink wallpapers for android<br />
-download hd bokeh pink wallpapers for android<br />
-download hd crystals pink wallpapers for android<br />
-download hd silk pink wallpapers for android<br />
-download hd nature pink wallpapers for android<br />
-download hd sky pink wallpapers for android<br />
-download hd city pink wallpapers for android<br />
-download hd cars pink wallpapers for android<br />
-download hd travel pink wallpapers for android<br />
-download hd iphone compatible pink wallpapers for android <br />
-download hd unsplash premium quality images of 500+ hq of free HD Pink Wallpapers for Android <br />
-Download HD Pixabay royalty-free images of 40,000+ Beautiful Pink Backgrounds for Android <br />
-Download HD Blackpink K-pop band themed Pink Wallpapers for Android <br />
-Download HD inspirational and motivational quotes Pink Wallpapers for Android</p>
-<h3>Awesome Pink Android Wallpaper from WallpaperAccess</h3>
-<p>WallpaperAccess is a website that offers awesome wallpapers for various devices and platforms. You can find hundreds of amazing and cool pink wallpaper images on WallpaperAccess that are curated by a team of editors and designers. You can browse by category, resolution, device, or popularity. You can also download any pink wallpaper image with one click without any registration or login.</p>
-<h3>Beautiful Pink Backgrounds from Pixabay</h3>
-<p>Pixabay is a website that offers free stock photos, vectors, illustrations, and videos that you can use for anything. You can find thousands of beautiful and creative pink wallpaper images on Pixabay that are uploaded by a community of artists and photographers from around the world. You can browse by category, keyword, color, size, or popularity. You can also download any pink wallpaper image without any attribution or license.</p>
-<h2>How to Download and Set Pink Wallpaper for Android?</h2>
-<p>Now that you know what pink wallpaper is, why you should choose it for your Android device, and where to find it, you might be wondering how to download and set it as your home screen or lock screen. Don't worry, it's very easy and simple to do. Just follow these steps:</p>
-<h3>How to Download Pink Wallpaper from a Website?</h3>
-<p>If you want to download pink wallpaper from a website, such as Unsplash, WallpaperAccess, or Pixabay, you can follow these steps:</p>
-<ol>
-<li>Open your web browser on your Android device and go to the website of your choice.</li>
-<li>Search for pink wallpaper images using the search bar or the filters on the website.</li>
-<li>Select the pink wallpaper image that you like and tap on it to open it in full size.</li>
-<li>Tap and hold on the pink wallpaper image and select "Download image" or "Save image" from the menu that appears.</li>
-<li>Choose a location on your device where you want to save the pink wallpaper image and tap on "Save" or "OK".</li>
-</ol>
-<p>You have successfully downloaded the pink wallpaper image from the website. You can find it in your gallery or file manager app.</p>
-<h3>How to Download Pink Wallpaper from an App?</h3>
-<p>If you want to download pink wallpaper from an app, such as Zedge, Walli, or Backgrounds HD, you can follow these steps:</p>
-<ol>
-<li>Open the Google Play Store on your Android device and search for the app of your choice.</li>
-<li>Install and open the app on your device and grant it the necessary permissions.</li>
-<li>Search for pink wallpaper images using the search bar or the categories on the app.</li>
-<li>Select the pink wallpaper image that you like and tap on it to open it in full size.</li>
-<li>Tap on the download icon or button on the bottom of the screen and wait for the download to complete.</li>
-</ol>
-<p>You have successfully downloaded the pink wallpaper image from the app. You can find it in your gallery or file manager app.</p>
-<h3>How to Set Pink Wallpaper as Home Screen or Lock Screen?</h3>
-<p>If you want to set pink wallpaper as your home screen or lock screen, you can follow these steps:</p>
-<ol>
-<li>Open your gallery or file manager app and find the pink wallpaper image that you downloaded.</li>
-<li>Tap on the pink wallpaper image to open it in full size.</li>
-<li>Tap on the menu icon or button on the top right corner of the screen and select "Set as" or "Use as" from the menu that appears.</li>
-<li>Choose whether you want to set the pink wallpaper as your home screen, lock screen, or both.</li>
-<li>Crop or adjust the pink wallpaper image if needed and tap on "Done" or "Apply".</li>
-</ol>
-<p>You have successfully set the pink wallpaper as your home screen or lock screen. You can enjoy the beauty and charm of pink wallpaper on your Android device.</p>
-<h2>Conclusion</h2>
-<p>Pink wallpaper is a wonderful way to customize and beautify your Android device. Pink is a color that can suit any mood, occasion, or personality. You can find and download various types of pink wallpaper images from different sources online, such as websites or apps. You can also easily set them as your home screen or lock screen with a few simple steps. We hope this article has helped you learn how to download pink wallpaper for Android and inspired you to try some of them out. If you have any questions or comments, feel free to leave them below.</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions and answers on pink wallpaper for Android:</p>
-<h4>Q: How can I create my own pink wallpaper for Android?</h4>
-<p>A: You can create your own pink wallpaper for Android using photo editing software or online tools. You can use your own photos or images, add text, stickers, filters, effects, or other elements to make them more pink and personalized. You can also use online generators or templates to create pink wallpaper for Android easily and quickly.</p>
-<h4>Q: How can I change my pink wallpaper for Android automatically?</h4>
-<p>A: You can change your pink wallpaper for Android automatically using apps that offer this feature. Some examples of these apps are Wallpaper Changer, Auto Wallpaper Changer, or DailyPic. You can set a time interval, a source folder, a category, or a keyword for changing your pink wallpaper for Android automatically. You can also choose whether you want to change your home screen, lock screen, or both.</p>
-<h4>Q: How can I share my pink wallpaper for Android with others?</h4>
-<p>A: You can share your pink wallpaper for Android with others using social media platforms, messaging apps, email, or Bluetooth. You can also upload your pink wallpaper for Android to online platforms, such as Pinterest, Tumblr, Instagram, or Reddit. You can also join online communities or groups that are dedicated to pink wallpaper for Android and share your creations with other fans and enthusiasts.</p>
-<h4>Q: How can I delete or remove my pink wallpaper for Android?</h4>
-<p>A: You can delete or remove your pink wallpaper for Android using your gallery or file manager app. You can find the pink wallpaper image that you downloaded or created and tap on it to open it. Then, you can tap on the menu icon or button and select "Delete" or "Remove" from the menu that appears. You can also select multiple pink wallpaper images and delete or remove them at once.</p>
-<h4>Q: How can I find more pink wallpaper for Android?</h4>
-<p>A: You can find more pink wallpaper for Android by exploring other sources and websites that offer them. You can also use search engines, such as Google or Bing, to look for more pink wallpaper images. You can also follow blogs, channels, pages, or accounts that post or share pink wallpaper images regularly. You can also ask for recommendations or suggestions from other users or friends who like pink wallpaper for Android.</p> 197e85843d<br />
-<br />
-<br />
spaces/2ndelement/voicevox/speaker_info/388f246b-8c41-4ac1-8e2d-5d79f3ff56d9/policy.md
DELETED
@@ -1,3 +0,0 @@
-dummy2 policy
-
-https://voicevox.hiroshiba.jp/
spaces/2ndelement/voicevox/voicevox_engine/preset/Preset.py
DELETED
@@ -1,18 +0,0 @@
-from pydantic import BaseModel, Field
-
-
-class Preset(BaseModel):
-    """
-    Preset information
-    """
-
-    id: int = Field(title="プリセットID")
-    name: str = Field(title="プリセット名")
-    speaker_uuid: str = Field(title="スピーカーのUUID")
-    style_id: int = Field(title="スタイルID")
-    speedScale: float = Field(title="全体の話速")
-    pitchScale: float = Field(title="全体の音高")
-    intonationScale: float = Field(title="全体の抑揚")
-    volumeScale: float = Field(title="全体の音量")
-    prePhonemeLength: float = Field(title="音声の前の無音時間")
-    postPhonemeLength: float = Field(title="音声の後の無音時間")
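For context, a minimal usage sketch of the deleted model (not part of the commit): it assumes pydantic v1 semantics, which the `Field(title=...)` style suggests, and all field values below are illustrative only.

# Hypothetical usage of the deleted Preset model; values are made up.
preset = Preset(
    id=1,
    name="default",
    speaker_uuid="388f246b-8c41-4ac1-8e2d-5d79f3ff56d9",
    style_id=0,
    speedScale=1.0,
    pitchScale=0.0,
    intonationScale=1.0,
    volumeScale=1.0,
    prePhonemeLength=0.1,
    postPhonemeLength=0.1,
)
print(preset.json())  # pydantic v1 JSON serialization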
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/dataset.py
DELETED
@@ -1,124 +0,0 @@
-import numbers
-import os
-import queue as Queue
-import threading
-
-import mxnet as mx
-import numpy as np
-import torch
-from torch.utils.data import DataLoader, Dataset
-from torchvision import transforms
-
-
-class BackgroundGenerator(threading.Thread):
-    def __init__(self, generator, local_rank, max_prefetch=6):
-        super(BackgroundGenerator, self).__init__()
-        self.queue = Queue.Queue(max_prefetch)
-        self.generator = generator
-        self.local_rank = local_rank
-        self.daemon = True
-        self.start()
-
-    def run(self):
-        torch.cuda.set_device(self.local_rank)
-        for item in self.generator:
-            self.queue.put(item)
-        self.queue.put(None)
-
-    def next(self):
-        next_item = self.queue.get()
-        if next_item is None:
-            raise StopIteration
-        return next_item
-
-    def __next__(self):
-        return self.next()
-
-    def __iter__(self):
-        return self
-
-
-class DataLoaderX(DataLoader):
-
-    def __init__(self, local_rank, **kwargs):
-        super(DataLoaderX, self).__init__(**kwargs)
-        self.stream = torch.cuda.Stream(local_rank)
-        self.local_rank = local_rank
-
-    def __iter__(self):
-        self.iter = super(DataLoaderX, self).__iter__()
-        self.iter = BackgroundGenerator(self.iter, self.local_rank)
-        self.preload()
-        return self
-
-    def preload(self):
-        self.batch = next(self.iter, None)
-        if self.batch is None:
-            return None
-        with torch.cuda.stream(self.stream):
-            for k in range(len(self.batch)):
-                self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True)
-
-    def __next__(self):
-        torch.cuda.current_stream().wait_stream(self.stream)
-        batch = self.batch
-        if batch is None:
-            raise StopIteration
-        self.preload()
-        return batch
-
-
-class MXFaceDataset(Dataset):
-    def __init__(self, root_dir, local_rank):
-        super(MXFaceDataset, self).__init__()
-        self.transform = transforms.Compose(
-            [transforms.ToPILImage(),
-             transforms.RandomHorizontalFlip(),
-             transforms.ToTensor(),
-             transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
-             ])
-        self.root_dir = root_dir
-        self.local_rank = local_rank
-        path_imgrec = os.path.join(root_dir, 'train.rec')
-        path_imgidx = os.path.join(root_dir, 'train.idx')
-        self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, 'r')
-        s = self.imgrec.read_idx(0)
-        header, _ = mx.recordio.unpack(s)
-        if header.flag > 0:
-            self.header0 = (int(header.label[0]), int(header.label[1]))
-            self.imgidx = np.array(range(1, int(header.label[0])))
-        else:
-            self.imgidx = np.array(list(self.imgrec.keys))
-
-    def __getitem__(self, index):
-        idx = self.imgidx[index]
-        s = self.imgrec.read_idx(idx)
-        header, img = mx.recordio.unpack(s)
-        label = header.label
-        if not isinstance(label, numbers.Number):
-            label = label[0]
-        label = torch.tensor(label, dtype=torch.long)
-        sample = mx.image.imdecode(img).asnumpy()
-        if self.transform is not None:
-            sample = self.transform(sample)
-        return sample, label
-
-    def __len__(self):
-        return len(self.imgidx)
-
-
-class SyntheticDataset(Dataset):
-    def __init__(self, local_rank):
-        super(SyntheticDataset, self).__init__()
-        img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32)
-        img = np.transpose(img, (2, 0, 1))
-        img = torch.from_numpy(img).squeeze(0).float()
-        img = ((img / 255) - 0.5) / 0.5
-        self.img = img
-        self.label = 1
-
-    def __getitem__(self, index):
-        return self.img, self.label
-
-    def __len__(self):
-        return 1000000
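For context, a sketch (not part of the commit) of how these classes were typically wired together, assuming a single GPU (`local_rank=0`) and a directory containing `train.rec`/`train.idx`; the path is a placeholder.

# Hypothetical single-GPU wiring of MXFaceDataset with the prefetching DataLoaderX.
trainset = MXFaceDataset(root_dir="/data/ms1m", local_rank=0)  # placeholder path
loader = DataLoaderX(
    local_rank=0,        # consumed by DataLoaderX for its CUDA stream
    dataset=trainset,    # remaining kwargs go to torch.utils.data.DataLoader
    batch_size=128,
    shuffle=True,
    num_workers=4,
    pin_memory=True,
)
images, labels = next(iter(loader))  # batch is already on cuda:0 after preload()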
spaces/801artistry/RVC801/infer/modules/onnx/export.py
DELETED
@@ -1,52 +0,0 @@
-import torch
-
-from infer.lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-
-
-def export_onnx(ModelPath, ExportedPath):
-    cpt = torch.load(ModelPath, map_location="cpu")
-    cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
-    vec_channels = 256 if cpt.get("version", "v1") == "v1" else 768
-
-    test_phone = torch.rand(1, 200, vec_channels)  # hidden unit
-    test_phone_lengths = torch.tensor([200]).long()  # hidden unit lengths (apparently unused)
-    test_pitch = torch.randint(size=(1, 200), low=5, high=255)  # fundamental frequency (in Hz)
-    test_pitchf = torch.rand(1, 200)  # NSF fundamental frequency
-    test_ds = torch.LongTensor([0])  # speaker ID
-    test_rnd = torch.rand(1, 192, 200)  # noise (adds a random factor)
-
-    device = "cpu"  # device used during export (does not affect how the model is used)
-
-    net_g = SynthesizerTrnMsNSFsidM(
-        *cpt["config"], is_half=False, version=cpt.get("version", "v1")
-    )  # fp32 export (supporting fp16 in C++ would require manually rearranging memory, so fp16 is not used for now)
-    net_g.load_state_dict(cpt["weight"], strict=False)
-    input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
-    output_names = [
-        "audio",
-    ]
-    # net_g.construct_spkmixmap(n_speaker)  # multi-speaker mix-track export
-    torch.onnx.export(
-        net_g,
-        (
-            test_phone.to(device),
-            test_phone_lengths.to(device),
-            test_pitch.to(device),
-            test_pitchf.to(device),
-            test_ds.to(device),
-            test_rnd.to(device),
-        ),
-        ExportedPath,
-        dynamic_axes={
-            "phone": [1],
-            "pitch": [1],
-            "pitchf": [1],
-            "rnd": [2],
-        },
-        do_constant_folding=False,
-        opset_version=13,
-        verbose=False,
-        input_names=input_names,
-        output_names=output_names,
-    )
-    return "Finished"
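Usage was a single call; a sketch with placeholder paths (not part of the commit):

# Hypothetical invocation; both paths are placeholders.
export_onnx("weights/voice.pth", "weights/voice.onnx")  # returns "Finished" on success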
spaces/801artistry/RVC801/julius/filters.py
DELETED
@@ -1,258 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2021
-"""
-FIR windowed sinc highpass and bandpass filters.
-Those are convenience wrappers around the filters defined in `julius.lowpass`.
-"""
-
-from typing import Sequence, Optional
-
-import torch
-
-# Import all lowpass filters for consistency.
-from .lowpass import lowpass_filter, lowpass_filters, LowPassFilter, LowPassFilters  # noqa
-from .utils import simple_repr
-
-
-class HighPassFilters(torch.nn.Module):
-    """
-    Bank of high pass filters. See `julius.lowpass.LowPassFilters` for more
-    details on the implementation.
-
-    Args:
-        cutoffs (list[float]): list of cutoff frequencies, in [0, 0.5] expressed as `f/f_s` where
-            f_s is the samplerate and `f` is the cutoff frequency.
-            The upper limit is 0.5, because a signal sampled at `f_s` contains only
-            frequencies under `f_s / 2`.
-        stride (int): how much to decimate the output. Probably not a good idea
-            to do so with high pass filters though...
-        pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
-            the output will have the same length as the input.
-        zeros (float): Number of zero crossings to keep.
-            Controls the receptive field of the Finite Impulse Response filter.
-            For filters with low cutoff frequency, e.g. 40Hz at 44.1kHz,
-            it is a bad idea to set this to a high value.
-            The default of 8 is likely appropriate for most use. Lower values
-            will result in a faster filter, but with a slower attenuation around the
-            cutoff frequency.
-        fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions.
-            If False, uses PyTorch convolutions. If None, either one will be chosen automatically
-            depending on the effective filter size.
-
-
-    ..warning::
-        All the filters will use the same filter size, aligned on the lowest
-        frequency provided. If you combine a lot of filters with very diverse frequencies, it might
-        be more efficient to split them over multiple modules with similar frequencies.
-
-    Shape:
-
-        - Input: `[*, T]`
-        - Output: `[F, *, T']`, with `T'=T` if `pad` is True and `stride` is 1, and
-          `F` is the number of cutoff frequencies.
-
-    >>> highpass = HighPassFilters([1/4])
-    >>> x = torch.randn(4, 12, 21, 1024)
-    >>> list(highpass(x).shape)
-    [1, 4, 12, 21, 1024]
-    """
-
-    def __init__(self, cutoffs: Sequence[float], stride: int = 1, pad: bool = True,
-                 zeros: float = 8, fft: Optional[bool] = None):
-        super().__init__()
-        self._lowpasses = LowPassFilters(cutoffs, stride, pad, zeros, fft)
-
-    @property
-    def cutoffs(self):
-        return self._lowpasses.cutoffs
-
-    @property
-    def stride(self):
-        return self._lowpasses.stride
-
-    @property
-    def pad(self):
-        return self._lowpasses.pad
-
-    @property
-    def zeros(self):
-        return self._lowpasses.zeros
-
-    @property
-    def fft(self):
-        return self._lowpasses.fft
-
-    def forward(self, input):
-        lows = self._lowpasses(input)
-
-        # We need to extract the right portion of the input in case
-        # pad is False or stride > 1
-        if self.pad:
-            start, end = 0, input.shape[-1]
-        else:
-            start = self._lowpasses.half_size
-            end = -start
-        input = input[..., start:end:self.stride]
-        highs = input - lows
-        return highs
-
-    def __repr__(self):
-        return simple_repr(self)
-
-
-class HighPassFilter(torch.nn.Module):
-    """
-    Same as `HighPassFilters` but applies a single high pass filter.
-
-    Shape:
-
-        - Input: `[*, T]`
-        - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1.
-
-    >>> highpass = HighPassFilter(1/4, stride=1)
-    >>> x = torch.randn(4, 124)
-    >>> list(highpass(x).shape)
-    [4, 124]
-    """
-
-    def __init__(self, cutoff: float, stride: int = 1, pad: bool = True,
-                 zeros: float = 8, fft: Optional[bool] = None):
-        super().__init__()
-        self._highpasses = HighPassFilters([cutoff], stride, pad, zeros, fft)
-
-    @property
-    def cutoff(self):
-        return self._highpasses.cutoffs[0]
-
-    @property
-    def stride(self):
-        return self._highpasses.stride
-
-    @property
-    def pad(self):
-        return self._highpasses.pad
-
-    @property
-    def zeros(self):
-        return self._highpasses.zeros
-
-    @property
-    def fft(self):
-        return self._highpasses.fft
-
-    def forward(self, input):
-        return self._highpasses(input)[0]
-
-    def __repr__(self):
-        return simple_repr(self)
-
-
-def highpass_filters(input: torch.Tensor, cutoffs: Sequence[float],
-                     stride: int = 1, pad: bool = True,
-                     zeros: float = 8, fft: Optional[bool] = None):
-    """
-    Functional version of `HighPassFilters`, refer to this class for more information.
-    """
-    return HighPassFilters(cutoffs, stride, pad, zeros, fft).to(input)(input)
-
-
-def highpass_filter(input: torch.Tensor, cutoff: float,
-                    stride: int = 1, pad: bool = True,
-                    zeros: float = 8, fft: Optional[bool] = None):
-    """
-    Functional version of `HighPassFilter`, refer to this class for more information.
-    Output will not have a dimension inserted in the front.
-    """
-    return highpass_filters(input, [cutoff], stride, pad, zeros, fft)[0]
-
-
-class BandPassFilter(torch.nn.Module):
-    """
-    Single band pass filter, implemented as the difference of two lowpass filters.
-
-    Args:
-        cutoff_low (float): lower cutoff frequency, in [0, 0.5] expressed as `f/f_s` where
-            f_s is the samplerate and `f` is the cutoff frequency.
-            The upper limit is 0.5, because a signal sampled at `f_s` contains only
-            frequencies under `f_s / 2`.
-        cutoff_high (float): higher cutoff frequency, in [0, 0.5] expressed as `f/f_s`.
-            This must be higher than cutoff_low. Note that due to the fact
-            that filters are not perfect, the output will be non zero even if
-            cutoff_high == cutoff_low.
-        stride (int): how much to decimate the output.
-        pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
-            the output will have the same length as the input.
-        zeros (float): Number of zero crossings to keep.
-            Controls the receptive field of the Finite Impulse Response filter.
-            For filters with low cutoff frequency, e.g. 40Hz at 44.1kHz,
-            it is a bad idea to set this to a high value.
-            The default of 8 is likely appropriate for most use. Lower values
-            will result in a faster filter, but with a slower attenuation around the
-            cutoff frequency.
-        fft (bool or None): if True, uses `julius.fftconv` rather than PyTorch convolutions.
-            If False, uses PyTorch convolutions. If None, either one will be chosen automatically
-            depending on the effective filter size.
-
-
-    Shape:
-
-        - Input: `[*, T]`
-        - Output: `[*, T']`, with `T'=T` if `pad` is True and `stride` is 1.
-
-    ..Note:: There is no BandPassFilters (bank of bandpasses) because its
-        signification would be the same as `julius.bands.SplitBands`.
-
-    >>> bandpass = BandPassFilter(1/4, 1/3)
-    >>> x = torch.randn(4, 12, 21, 1024)
-    >>> list(bandpass(x).shape)
-    [4, 12, 21, 1024]
-    """
-
-    def __init__(self, cutoff_low: float, cutoff_high: float, stride: int = 1, pad: bool = True,
-                 zeros: float = 8, fft: Optional[bool] = None):
-        super().__init__()
-        if cutoff_low > cutoff_high:
-            raise ValueError(f"Lower cutoff {cutoff_low} should be less than "
-                             f"higher cutoff {cutoff_high}.")
-        self._lowpasses = LowPassFilters([cutoff_low, cutoff_high], stride, pad, zeros, fft)
-
-    @property
-    def cutoff_low(self):
-        return self._lowpasses.cutoffs[0]
-
-    @property
-    def cutoff_high(self):
-        return self._lowpasses.cutoffs[1]
-
-    @property
-    def stride(self):
-        return self._lowpasses.stride
-
-    @property
-    def pad(self):
-        return self._lowpasses.pad
-
-    @property
-    def zeros(self):
-        return self._lowpasses.zeros
-
-    @property
-    def fft(self):
-        return self._lowpasses.fft
-
-    def forward(self, input):
-        lows = self._lowpasses(input)
-        return lows[1] - lows[0]
-
-    def __repr__(self):
-        return simple_repr(self)
-
-
-def bandpass_filter(input: torch.Tensor, cutoff_low: float, cutoff_high: float,
-                    stride: int = 1, pad: bool = True,
-                    zeros: float = 8, fft: Optional[bool] = None):
-    """
-    Functional version of `BandPassFilter`, refer to this class for more information.
-    Output will not have a dimension inserted in the front.
-    """
-    return BandPassFilter(cutoff_low, cutoff_high, stride, pad, zeros, fft).to(input)(input)
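For context, a small sketch (not part of the commit) of the functional API above, remembering that cutoffs are normalized as f/f_s in [0, 0.5]:

import torch

x = torch.randn(2, 16000)                            # two 1-second signals at 16 kHz
hp = highpass_filter(x, cutoff=300 / 16000)          # attenuate content below ~300 Hz
bp = bandpass_filter(x, 300 / 16000, 3400 / 16000)   # keep roughly the 300-3400 Hz band
assert hp.shape == x.shape and bp.shape == x.shape   # pad=True, stride=1 preserve length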
spaces/A00001/bingothoo/src/app/layout.tsx
DELETED
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
-  title: {
-    default: 'Bing AI Chatbot',
-    template: `%s - Bing AI Chatbot`
-  },
-  description: 'Bing AI Chatbot Web App.',
-  themeColor: [
-    { media: '(prefers-color-scheme: light)', color: 'white' },
-    { media: '(prefers-color-scheme: dark)', color: 'dark' }
-  ],
-  icons: {
-    icon: '/favicon.ico',
-    shortcut: '../assets/images/logo.svg',
-    apple: '../assets/images/logo.svg'
-  }
-}
-
-interface RootLayoutProps {
-  children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
-  return (
-    <html lang="zh-CN" suppressHydrationWarning>
-      <body>
-        <Toaster />
-        <Providers attribute="class" defaultTheme="system" enableSystem>
-          <div className="flex flex-col min-h-screen">
-            {/* @ts-ignore */}
-            <Header />
-            <main className="flex flex-col flex-1">{children}</main>
-          </div>
-          <TailwindIndicator />
-        </Providers>
-      </body>
-    </html>
-  )
-}
spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Backend 137c41fa386f43249b249e956eb06bb0.md
DELETED
@@ -1,27 +0,0 @@
-# Backend
-
-Last edited time: April 23, 2023 3:58 PM
-Owner: Anonymous
-Tags: Codebase
-
-## Tech stack
-
-Our tech stack is Spring WebFlux + Spring Boot + Spring Reactive Data Couchbase.
-
-**Spring WebFlux** is a reactive web framework introduced in Spring Framework 5.0 for building high-performance web applications. It supports an asynchronous programming model, can handle a large number of concurrent requests, and is well suited to high-load web applications. Compared with the traditional Servlet API, Spring WebFlux offers better performance, higher throughput, and lower resource consumption.
-
-**Spring Boot** is a framework for building standalone, production-grade Spring applications. It provides many out-of-the-box features, such as auto-configuration, embedded web servers, and health checks, helping developers build Spring applications quickly.
-
-**Spring Reactive Data Couchbase** is a Couchbase client library built on the reactive programming model that can be used together with Spring WebFlux and Spring Boot. It provides asynchronous, non-blocking APIs that can handle a large number of concurrent requests, along with useful features such as automatic mapping, complex queries, and transaction management.
-
-## Architecture style
-
-Smart Domain is a software architecture style that aims to improve maintainability and extensibility by separating business logic from data-access logic. In a Smart Domain architecture, business logic is defined as a domain model containing the application's core concepts and behaviors, while data-access logic is encapsulated in one or more data-access objects responsible for interacting with the database. This lets developers focus on the business logic, improving application quality and maintainability.
-
-## Couchbase
-
-Couchbase is a NoSQL database aimed at enterprise applications. It offers high performance, high availability, and scalability, which makes it a first choice for many enterprises. Couchbase supports JSON-like documents, key-value pairs, and graph data models, making it well suited to web, mobile, and IoT applications.
-
-For enterprises that need high availability and scalability, Couchbase provides very useful features such as dynamic data sharding, automatic failover, and cross-datacenter replication. Couchbase also ships a flexible query engine that makes querying data easy, and supports a wide range of client libraries, including Java, Python, C#, and Node.js.
-
-For more information about Couchbase, see the [Couchbase official website](https://www.couchbase.com/).
spaces/AI4PD/hexviz/hexviz/pages/2_🦅Birds_Eye_View.py
DELETED
@@ -1,176 +0,0 @@
-import re
-
-import py3Dmol
-import stmol
-import streamlit as st
-
-from hexviz.attention import (
-    clean_and_validate_sequence,
-    get_attention,
-    get_attention_pairs,
-    res_to_1letter,
-)
-from hexviz.models import Model, ModelType
-from hexviz.view import (
-    menu_items,
-    select_heads_and_layers,
-    select_model,
-    select_pdb,
-    select_protein,
-)
-
-st.set_page_config(layout="wide", menu_items=menu_items)
-st.title("Bird's Eye View of attention heads")
-
-
-for k, v in st.session_state.items():
-    st.session_state[k] = v
-
-models = [
-    Model(name=ModelType.TAPE_BERT, layers=12, heads=12),
-    Model(name=ModelType.ZymCTRL, layers=36, heads=16),
-    Model(name=ModelType.PROT_BERT, layers=30, heads=16),
-    Model(name=ModelType.PROT_T5, layers=24, heads=32),
-]
-
-with st.expander("Input a PDB id, upload a PDB file or input a sequence", expanded=True):
-    pdb_id = select_pdb() or "2FZ5"
-    uploaded_file = st.file_uploader("2.Upload PDB", type=["pdb"])
-    input_sequence = st.text_area("3.Input sequence", "", key="input_sequence", max_chars=400)
-    sequence, error = clean_and_validate_sequence(input_sequence)
-    if error:
-        st.error(error)
-    pdb_str, structure, source = select_protein(pdb_id, uploaded_file, sequence)
-    st.write(f"Visualizing: {source}")
-
-selected_model = select_model(models)
-
-
-if "viewer_width" not in st.session_state:
-    st.session_state.viewer_width = 1200
-viewer_width = st.sidebar.number_input(
-    label="Protein viewer width (px)",
-    min_value=0,
-    key="viewer_width",
-)
-
-chains = list(structure.get_chains())
-chain_ids = [chain.id for chain in chains]
-if "selected_chain" not in st.session_state:
-    st.session_state.selected_chain = chain_ids[0] if chain_ids else None
-chain_selection = st.sidebar.selectbox(
-    label="Select Chain",
-    options=chain_ids,
-    key="selected_chain",
-)
-
-selected_chain = next(chain for chain in chains if chain.id == chain_selection)
-
-ec_number = ""
-if selected_model.name == ModelType.ZymCTRL:
-    st.sidebar.markdown(
-        """
-        ZymCTRL EC number
-        ---
-        """
-    )
-    try:
-        ec_number = structure.header["compound"]["1"]["ec"]
-    except KeyError:
-        pass
-    ec_number = st.sidebar.text_input("Enzyme Commission number (EC)", ec_number)
-
-    # Validate EC number
-    if not re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ec_number):
-        st.sidebar.error(
-            """Please enter a valid Enzyme Commission number in the format of 4
-            integers separated by periods (e.g., 1.2.3.21)"""
-        )
-
-
-min_attn = st.sidebar.slider("Minimum attention", min_value=0.0, max_value=0.4, value=0.1)
-if "show_ligands" not in st.session_state:
-    st.session_state.show_ligands = True
-show_ligands = st.sidebar.checkbox("Show ligands", key="show_ligands")
-
-with st.sidebar.expander("Highlight residues"):
-    st.write("Residue will be highlighted in yellow")
-    hl_resi_list = st.multiselect(label="Selected Residues", options=list(range(1, 5000)))
-    highlight_resi = st.checkbox(label="Highlight residues", value=True)
-    label_resi = st.checkbox(label="Label residue names", value=False)
-layer_sequence, head_sequence = select_heads_and_layers(st.sidebar, selected_model)
-# TODO add slider for width of grid
-
-
-residues = [res for res in selected_chain.get_residues()]
-sequence = res_to_1letter(residues)
-
-attention, tokens = get_attention(
-    sequence=sequence,
-    model_type=selected_model.name,
-    ec_number=ec_number,
-)
-
-grid_rows = len(layer_sequence)
-grid_cols = len(head_sequence)
-cell_width = viewer_width / grid_cols
-viewer_height = int(cell_width * grid_rows)
-
-xyzview = py3Dmol.view(
-    width=viewer_width,
-    height=viewer_height,
-    viewergrid=(grid_rows, grid_cols),
-)
-
-xyzview.addModel(pdb_str, "pdb")
-xyzview.zoomTo()
-
-for row, layer in enumerate(layer_sequence):
-    for col, head in enumerate(head_sequence):
-        attention_pairs, top_residues = get_attention_pairs(
-            pdb_str=pdb_str,
-            chain_ids=None,
-            layer=layer,
-            head=head,
-            threshold=min_attn,
-            model_type=selected_model.name,
-            top_n=1,
-            ec_numbers=None,
-        )
-
-        for att_weight, first, second in attention_pairs:
-            cylradius = att_weight
-            cylColor = "red"
-            dashed = False
-            xyzview.addCylinder(
-                {
-                    "start": {"x": first[0], "y": first[1], "z": first[2]},
-                    "end": {"x": second[0], "y": second[1], "z": second[2]},
-                    "radius": cylradius,
-                    "fromCap": True,
-                    "toCap": True,
-                    "color": cylColor,
-                    "dashed": dashed,
-                },
-                viewer=(row, col),
-            )
-
-
-xyzview.setStyle({"cartoon": {"color": "white"}})
-if highlight_resi:
-    for res in hl_resi_list:
-        xyzview.setStyle({"resi": res}, {"cartoon": {"color": "yellow"}})
-if label_resi:
-    for hl_resi in hl_resi_list:
-        xyzview.addResLabels(
-            {"resi": hl_resi},
-            {
-                "backgroundColor": "lightgray",
-                "fontColor": "black",
-                "backgroundOpacity": 0.5,
-            },
-        )
-if show_ligands:
-    xyzview.addStyle({"hetflag": True}, {"stick": {"radius": 0.2}})
-
-stmol.showmol(xyzview, height=viewer_height, width=viewer_width)
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/dpt_depth.py
DELETED
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
-    FeatureFusionBlock,
-    FeatureFusionBlock_custom,
-    Interpolate,
-    _make_encoder,
-    forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
-    return FeatureFusionBlock_custom(
-        features,
-        nn.ReLU(False),
-        deconv=False,
-        bn=use_bn,
-        expand=False,
-        align_corners=True,
-    )
-
-
-class DPT(BaseModel):
-    def __init__(
-        self,
-        head,
-        features=256,
-        backbone="vitb_rn50_384",
-        readout="project",
-        channels_last=False,
-        use_bn=False,
-    ):
-
-        super(DPT, self).__init__()
-
-        self.channels_last = channels_last
-
-        hooks = {
-            "vitb_rn50_384": [0, 1, 8, 11],
-            "vitb16_384": [2, 5, 8, 11],
-            "vitl16_384": [5, 11, 17, 23],
-        }
-
-        # Instantiate backbone and reassemble blocks
-        self.pretrained, self.scratch = _make_encoder(
-            backbone,
-            features,
-            False,  # Set to true if you want to train from scratch; uses ImageNet weights
-            groups=1,
-            expand=False,
-            exportable=False,
-            hooks=hooks[backbone],
-            use_readout=readout,
-        )
-
-        self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
-        self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
-        self.scratch.output_conv = head
-
-
-    def forward(self, x):
-        if self.channels_last:
-            x.contiguous(memory_format=torch.channels_last)
-
-        layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
-        layer_1_rn = self.scratch.layer1_rn(layer_1)
-        layer_2_rn = self.scratch.layer2_rn(layer_2)
-        layer_3_rn = self.scratch.layer3_rn(layer_3)
-        layer_4_rn = self.scratch.layer4_rn(layer_4)
-
-        path_4 = self.scratch.refinenet4(layer_4_rn)
-        path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
-        path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
-        path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
-        out = self.scratch.output_conv(path_1)
-
-        return out
-
-
-class DPTDepthModel(DPT):
-    def __init__(self, path=None, non_negative=True, **kwargs):
-        features = kwargs["features"] if "features" in kwargs else 256
-
-        head = nn.Sequential(
-            nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
-            Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
-            nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
-            nn.ReLU(True),
-            nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
-            nn.ReLU(True) if non_negative else nn.Identity(),
-            nn.Identity(),
-        )
-
-        super().__init__(head, **kwargs)
-
-        if path is not None:
-            self.load(path)
-
-    def forward(self, x):
-        return super().forward(x).squeeze(dim=1)
-
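For context, a minimal inference sketch (not part of the commit); it assumes the ViT-hybrid backbone weights are obtainable and feeds a random input at the network's native 384x384 resolution:

import torch

# Hypothetical inference sketch; no depth checkpoint is loaded (path=None).
model = DPTDepthModel(path=None, backbone="vitb_rn50_384", non_negative=True)
model.eval()
x = torch.randn(1, 3, 384, 384)   # one RGB image, NCHW
with torch.no_grad():
    depth = model(x)              # squeeze(dim=1) leaves shape [1, 384, 384]
print(depth.shape)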
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/vocoder/parallel_wavegan/layers/__init__.py
DELETED
@@ -1,5 +0,0 @@
-from .causal_conv import *  # NOQA
-from .pqmf import *  # NOQA
-from .residual_block import *  # NOQA
-from text_to_speech.modules.vocoder.parallel_wavegan.layers.residual_stack import *  # NOQA
-from .upsample import *  # NOQA
spaces/AIWaves/Software_Company/src/agents/Environment/base_environment.py
DELETED
@@ -1,167 +0,0 @@
-from utils import get_relevant_history, get_embedding
-import torch
-from LLM.base_LLM import *
-from Memory import Memory
-from Prompt import *
-import json
-
-
-class Environment:
-    """
-    The place where the agents are active; responsible for storing some shared memories
-    """
-    def __init__(self, config) -> None:
-        self.shared_memory = {"long_term_memory": [], "short_term_memory": None}
-        self.agents = None
-
-        self.summary_system_prompt = {}
-        self.summary_last_prompt = {}
-        self.environment_prompt = {}
-        self.environment_type = config["environment_type"] if "environment_type" in config else "cooperative"
-        self.current_chat_history_idx = 0
-        self.LLMs = {}
-
-        # Initialize the summary method for each state
-        for state_name, state_dict in config["states"].items():
-            if state_name != "end_state":
-                self.summary_system_prompt[state_name] = (
-                    state_dict["summary_system_prompt"]
-                    if "summary_system_prompt" in state_dict
-                    else eval(Default_environment_summary_system_prompt)
-                )
-
-                self.summary_last_prompt[state_name] = (
-                    state_dict["summary_last_prompt"]
-                    if "summary_last_prompt" in state_dict
-                    else eval(Default_environment_summary_last_prompt)
-                )
-
-                self.environment_prompt[state_name] = (
-                    state_dict["environment_prompt"]
-                    if "environment_prompt" in state_dict
-                    else " "
-                )
-                self.LLMs[state_name] = init_LLM(f"logs/{state_name}", **state_dict)
-        self.roles_to_names = None
-        self.names_to_roles = None
-
-    @classmethod
-    def from_config(cls, config_path):
-        with open(config_path) as f:
-            config = json.load(f)
-        return cls(config)
-
-    def summary(self, current_state):
-        """
-        Summarize the situation in the current environment every once in a while
-        """
-        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
-        current_state_name = current_state.name
-
-        query = self.shared_memory["long_term_memory"][-1].content
-        relevant_history = get_relevant_history(
-            query,
-            self.shared_memory["long_term_memory"][:-1],
-            self.shared_memory["chat_embeddings"][:-1],
-        )
-
-        relevant_history = Memory.get_chat_history(relevant_history)
-        chat_history = Memory.get_chat_history(
-            self.shared_memory["long_term_memory"][-MAX_CHAT_HISTORY + 1 :]
-        )
-        summary = self.shared_memory["short_term_memory"]
-
-
-        # system prompt = environment prompt + current memory + system prompt
-        # current_memory = summary + chat history + relevant history
-        current_memory = eval(Environment_summary_memory)
-        environment_prompt = self.environment_prompt[current_state_name]
-        summary_system_prompt = self.summary_system_prompt[current_state_name]
-
-        environment_summary_system_prompt = eval(Environment_summary_system_prompt)
-        response = self.LLMs[current_state_name].get_response(None, environment_summary_system_prompt, stream=False)
-        return response
-
-    def update_memory(self, memory, current_state):
-        """
-        Update the chat embeddings, the long-term and short-term memory, and the agents' long-term memory
-        """
-        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
-        self.shared_memory["long_term_memory"].append(memory)
-        current_embedding = get_embedding(memory.content)
-        if "chat_embeddings" not in self.shared_memory:
-            self.shared_memory["chat_embeddings"] = current_embedding
-        else:
-            self.shared_memory["chat_embeddings"] = torch.cat(
-                [self.shared_memory["chat_embeddings"], current_embedding], dim=0
-            )
-        if len(self.shared_memory["long_term_memory"]) % MAX_CHAT_HISTORY == 0:
-            summary = self.summary(current_state)
-            self.shared_memory["short_term_memory"] = summary
-
-        self.agents[memory.send_name].update_memory(memory)
-
-
-    def _get_agent_last_conversation_idx(self, agent, current_long_term_memory):
-        last_conversation_idx = -1
-        for i, history in enumerate(current_long_term_memory):
-            if history.send_name == agent.name:
-                last_conversation_idx = i
-        return last_conversation_idx
-
-
-    def _get_agent_new_memory(self, agent, current_long_term_memory):
-        # get new conversation
-        last_conversation_idx = self._get_agent_last_conversation_idx(agent, current_long_term_memory)
-
-        if last_conversation_idx == -1:
-            new_conversation = current_long_term_memory
-        elif (
-            last_conversation_idx
-            == len(current_long_term_memory) - 1
-        ):
-            new_conversation = []
-        else:
-            new_conversation = current_long_term_memory[
-                last_conversation_idx + 1 :
-            ]
-
-        # get chat history from new conversation
-        return Memory.get_chat_history(new_conversation)
-
-
-    def _observe(self, agent):
-        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
-        current_state = agent.current_state
-        current_role = agent.state_roles[current_state.name]
-        current_component_dict = current_state.components[current_role]
-
-        # cooperative: information is shared between different states; competitive: no information is shared between states
-        current_chat_history_idx = self.current_chat_history_idx if self.environment_type == "competive" else 0
-        current_long_term_memory = self.shared_memory["long_term_memory"][current_chat_history_idx:]
-        current_chat_embbedings = self.shared_memory["chat_embeddings"][current_chat_history_idx:]
-
-
-        # relevant_memory
-        query = current_long_term_memory[-1].content
-
-        relevant_memory = get_relevant_history(
-            query,
-            current_long_term_memory[:-1],
-            current_chat_embbedings[:-1],
-        )
-        relevant_memory = Memory.get_chat_history(relevant_memory, agent.name)
-
-        relevant_memory = eval(Agent_observe_relevant_memory)
-        agent.relevant_memory = relevant_memory
-
-
-        # get chat history from new conversation
-        conversations = self._get_agent_new_memory(agent, current_long_term_memory)
-
-        # memory = relevant_memory + summary + history + query
-        query = current_long_term_memory[-1]
-        current_memory = eval(Agent_observe_memory)
-
-        return {"role": "user", "content": current_memory}
-
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/voc/__init__.py
DELETED
File without changes
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnext101_4xb16_1024e_4channel.py
DELETED
@@ -1,88 +0,0 @@
_base_ = [  # this config file inherits everything listed in `_base_`
    '../configs/_base_/schedules/custom_schedule.py',  # training schedule config
    '../configs/_base_/default_runtime.py'  # default runtime settings
]

default_hooks = dict(
    # print a log entry every 25 iterations
    logger=dict(type='LoggerHook', interval=25),
    # save a checkpoint every 16 epochs, keeping the best one
    checkpoint=dict(save_best='auto', interval=16)
)

visualizer = dict(
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='WandbVisBackend')])

dataset_type = 'CustomDataset'

# pipeline config
train_pipeline = [
    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'),  # read the image
    dict(type='RandomResizedCrop', scale=224),  # random resized crop
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),  # random horizontal flip
    dict(type='PackInputs'),  # pack the image and its label
]

test_pipeline = [
    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'),  # read the image
    dict(type='ResizeEdge', scale=256, edge='short'),  # resize the short edge to 256px
    dict(type='CenterCrop', crop_size=224),  # center crop
    dict(type='PackInputs'),  # pack the image and its label
]

# dataloader config
train_dataloader = dict(
    batch_size=16,  # batch size per GPU
    num_workers=5,  # number of workers per GPU
    dataset=dict(  # training dataset
        type=dataset_type,
        data_root='../2_preprocess_data_3000',
        with_label=True,
        ann_file='',
        data_prefix='train',
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),  # default sampler
    persistent_workers=True,  # keep worker processes alive to shorten per-epoch startup time
)

# validation dataloader
val_dataloader = dict(
    batch_size=16,
    num_workers=5,
    dataset=dict(
        type=dataset_type,
        data_root='../2_preprocess_data_3000',
        with_label=True,
        ann_file='',
        data_prefix='val',
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)

# evaluator for the validation dataset; here top-1 and top-3 accuracy
val_evaluator = dict(type='Accuracy', topk=(1, 3))

test_dataloader = val_dataloader
test_evaluator = val_evaluator

model = dict(
    type='ImageClassifier',  # main model type (`ImageClassifier` for image classification tasks)
    backbone=dict(
        type='ResNeXt',  # backbone type
        depth=101,
        in_channels=4,  # number of input channels
    ),
    neck=dict(type='GlobalAveragePooling'),  # neck type
    head=dict(
        type='LinearClsHead',  # classification head type
        # all fields except `type` come from the __init__ method of the `LinearClsHead` class;
        # see https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html
        num_classes=7,  # number of classes
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),  # loss function config
        topk=(1, 3),  # evaluation metric, top-k accuracy
    ))
spaces/Abhaykoul/Youtube_video_downloader/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Youtube Video Downloader
emoji: 💻
colorFrom: purple
colorTo: red
sdk: streamlit
sdk_version: 1.28.1
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/EasyChat.py
DELETED
@@ -1,111 +0,0 @@
from __future__ import annotations

import json
import random

import requests

from ...typing import Any, CreateResult
from ..base_provider import BaseProvider


class EasyChat(BaseProvider):
    url: str = "https://free.easychat.work"
    supports_stream = True
    supports_gpt_35_turbo = True
    working = False

    @staticmethod
    def create_completion(
            model: str,
            messages: list[dict[str, str]],
            stream: bool, **kwargs: Any) -> CreateResult:

        active_servers = [
            "https://chat10.fastgpt.me",
            "https://chat9.fastgpt.me",
            "https://chat1.fastgpt.me",
            "https://chat2.fastgpt.me",
            "https://chat3.fastgpt.me",
            "https://chat4.fastgpt.me",
            "https://gxos1h1ddt.fastgpt.me"
        ]

        server = active_servers[kwargs.get("active_server", random.randint(0, 5))]
        headers = {
            "authority"         : f"{server}".replace("https://", ""),
            "accept"            : "text/event-stream",
            "accept-language"   : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3,fa=0.2",
            "content-type"      : "application/json",
            "origin"            : f"{server}",
            "referer"           : f"{server}/",
            "x-requested-with"  : "XMLHttpRequest",
            'plugins'           : '0',
            'sec-ch-ua'         : '"Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"',
            'sec-ch-ua-mobile'  : '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest'    : 'empty',
            'sec-fetch-mode'    : 'cors',
            'sec-fetch-site'    : 'same-origin',
            'user-agent'        : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
            'usesearch'         : 'false',
        }

        json_data = {
            "messages"          : messages,
            "stream"            : stream,
            "model"             : model,
            "temperature"       : kwargs.get("temperature", 0.5),
            "presence_penalty"  : kwargs.get("presence_penalty", 0),
            "frequency_penalty" : kwargs.get("frequency_penalty", 0),
            "top_p"             : kwargs.get("top_p", 1)
        }

        session = requests.Session()
        # init cookies from the server
        session.get(f"{server}/")

        response = session.post(f"{server}/api/openai/v1/chat/completions",
                                headers=headers, json=json_data, stream=stream)

        if response.status_code == 200:
            if not stream:
                json_data = response.json()
                if "choices" in json_data:
                    yield json_data["choices"][0]["message"]["content"]
                else:
                    raise Exception("No response from server")
            else:
                for chunk in response.iter_lines():
                    if b"content" in chunk:
                        splitData = chunk.decode().split("data:")
                        if len(splitData) > 1:
                            yield json.loads(splitData[1])["choices"][0]["delta"]["content"]
        else:
            raise Exception(f"Error {response.status_code} from server : {response.reason}")

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("temperature", "float"),
            ("presence_penalty", "int"),
            ("frequency_penalty", "int"),
            ("top_p", "int"),
            ("active_server", "int"),
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
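Since create_completion is a generator, a caller iterates over it whether or not streaming is enabled. A hedged usage sketch (model name and prompt are illustrative, and the class is marked working = False, so this is historical):

for token in EasyChat.create_completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello."}],
        stream=True):
    print(token, end="", flush=True)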
spaces/AgentVerse/agentVerse/agentverse/demo.py
DELETED
@@ -1,487 +0,0 @@
import base64
import itertools
import json
from typing import Dict, List, Tuple

import cv2
import gradio as gr

from agentverse.agentverse import AgentVerse
from agentverse.message import Message


def cover_img(background, img, place: Tuple[int, int]):
    """
    Overlays the specified image at the specified position of the background image.
    :param background: background image
    :param img: the specified image
    :param place: the top-left coordinate of the target location
    """
    back_h, back_w, _ = background.shape
    height, width, _ = img.shape
    for i, j in itertools.product(range(height), range(width)):
        if img[i, j, 3]:
            background[place[0] + i, place[1] + j] = img[i, j, :3]


class UI:
    """
    the UI of the frontend
    """

    def __init__(self, task: str):
        """
        init a UI.
        default number of students is 0
        """
        self.messages = []
        self.task = task
        self.backend = AgentVerse.from_task(task)
        self.turns_remain = 0
        self.agent_id = {
            self.backend.agents[idx].name: idx
            for idx in range(len(self.backend.agents))
        }
        self.stu_num = len(self.agent_id) - 1
        self.autoplay = False
        self.image_now = None
        self.text_now = None
        self.tot_solutions = 5
        self.solution_status = [False] * self.tot_solutions

    def get_avatar(self, idx):
        if idx == -1:
            img = cv2.imread("./imgs/db_diag/-1.png")
        elif self.task == "prisoner_dilemma":
            img = cv2.imread(f"./imgs/prison/{idx}.png")
        elif self.task == "db_diag":
            img = cv2.imread(f"./imgs/db_diag/{idx}.png")
        elif "sde" in self.task:
            img = cv2.imread(f"./imgs/sde/{idx}.png")
        else:
            img = cv2.imread(f"./imgs/{idx}.png")
        base64_str = cv2.imencode(".png", img)[1].tobytes()
        return "data:image/png;base64," + base64.b64encode(base64_str).decode("utf-8")

    def stop_autoplay(self):
        self.autoplay = False
        return (
            gr.Button.update(interactive=False),
            gr.Button.update(interactive=False),
            gr.Button.update(interactive=False),
        )

    def start_autoplay(self):
        self.autoplay = True
        yield (
            self.image_now,
            self.text_now,
            gr.Button.update(interactive=False),
            gr.Button.update(interactive=True),
            gr.Button.update(interactive=False),
            *[gr.Button.update(visible=status) for status in self.solution_status],
            gr.Box.update(visible=any(self.solution_status)),
        )

        while self.autoplay and self.turns_remain > 0:
            outputs = self.gen_output()
            self.image_now, self.text_now = outputs

            yield (
                *outputs,
                gr.Button.update(interactive=not self.autoplay and self.turns_remain > 0),
                gr.Button.update(interactive=self.autoplay and self.turns_remain > 0),
                gr.Button.update(interactive=not self.autoplay and self.turns_remain > 0),
                *[gr.Button.update(visible=status) for status in self.solution_status],
                gr.Box.update(visible=any(self.solution_status))
            )

    def delay_gen_output(self):
        yield (
            self.image_now,
            self.text_now,
            gr.Button.update(interactive=False),
            gr.Button.update(interactive=False),
            *[gr.Button.update(visible=status) for status in self.solution_status],
            gr.Box.update(visible=any(self.solution_status))
        )

        outputs = self.gen_output()
        self.image_now, self.text_now = outputs

        yield (
            self.image_now,
            self.text_now,
            gr.Button.update(interactive=self.turns_remain > 0),
            gr.Button.update(interactive=self.turns_remain > 0),
            *[gr.Button.update(visible=status) for status in self.solution_status],
            gr.Box.update(visible=any(self.solution_status))
        )

    def delay_reset(self):
        self.autoplay = False
        self.image_now, self.text_now = self.reset()
        return (
            self.image_now,
            self.text_now,
            gr.Button.update(interactive=True),
            gr.Button.update(interactive=False),
            gr.Button.update(interactive=True),
            *[gr.Button.update(visible=status) for status in self.solution_status],
            gr.Box.update(visible=any(self.solution_status))
        )

    def reset(self, stu_num=0):
        """
        tell the backend the new number of students and generate a new empty image
        :param stu_num:
        :return: [empty image, empty message]
        """
        if not 0 <= stu_num <= 30:
            raise gr.Error("the number of students must be between 0 and 30.")

        """
        # [To-Do] Need to add a function to assign agent numbers into the backend.
        """
        # self.backend.reset(stu_num)
        # self.stu_num = stu_num

        """
        # [To-Do] Pass the parameters to reset
        """
        self.backend.reset()
        self.turns_remain = self.backend.environment.max_turns

        if self.task == "prisoner_dilemma":
            background = cv2.imread("./imgs/prison/case_1.png")
        elif self.task == "db_diag":
            background = cv2.imread("./imgs/db_diag/background.png")
        elif "sde" in self.task:
            background = cv2.imread("./imgs/sde/background.png")
        else:
            background = cv2.imread("./imgs/background.png")
        back_h, back_w, _ = background.shape
        stu_cnt = 0
        for h_begin, w_begin in itertools.product(
            range(800, back_h, 300), range(135, back_w - 200, 200)
        ):
            stu_cnt += 1
            img = cv2.imread(
                f"./imgs/{(stu_cnt - 1) % 11 + 1 if stu_cnt <= self.stu_num else 'empty'}.png",
                cv2.IMREAD_UNCHANGED,
            )
            cover_img(
                background,
                img,
                (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin),
            )
        self.messages = []
        self.solution_status = [False] * self.tot_solutions
        return [cv2.cvtColor(background, cv2.COLOR_BGR2RGB), ""]

    def gen_img(self, data: List[Dict]):
        """
        generate a new image with the sender rank
        :param data:
        :return: the new image
        """
        # The following code needs to be more general. This one is too task-specific.
        # if len(data) != self.stu_num:
        if len(data) != self.stu_num + 1:
            raise gr.Error("data length is not equal to the total number of students.")
        if self.task == "prisoner_dilemma":
            img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
            if (
                len(self.messages) < 2
                or self.messages[-1][0] == 1
                or self.messages[-2][0] == 2
            ):
                background = cv2.imread("./imgs/prison/case_1.png")
                if data[0]["message"] != "":
                    cover_img(background, img, (400, 480))
            else:
                background = cv2.imread("./imgs/prison/case_2.png")
                if data[0]["message"] != "":
                    cover_img(background, img, (400, 880))
            if data[1]["message"] != "":
                cover_img(background, img, (550, 480))
            if data[2]["message"] != "":
                cover_img(background, img, (550, 880))
        elif self.task == "db_diag":
            background = cv2.imread("./imgs/db_diag/background.png")
            img = cv2.imread("./imgs/db_diag/speaking.png", cv2.IMREAD_UNCHANGED)
            if data[0]["message"] != "":
                cover_img(background, img, (750, 80))
            if data[1]["message"] != "":
                cover_img(background, img, (310, 220))
            if data[2]["message"] != "":
                cover_img(background, img, (522, 11))
        elif "sde" in self.task:
            background = cv2.imread("./imgs/sde/background.png")
            img = cv2.imread("./imgs/sde/speaking.png", cv2.IMREAD_UNCHANGED)
            if data[0]["message"] != "":
                cover_img(background, img, (692, 330))
            if data[1]["message"] != "":
                cover_img(background, img, (692, 660))
            if data[2]["message"] != "":
                cover_img(background, img, (692, 990))
        else:
            background = cv2.imread("./imgs/background.png")
            back_h, back_w, _ = background.shape
            stu_cnt = 0
            if data[stu_cnt]["message"] not in ["", "[RaiseHand]"]:
                img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
                cover_img(background, img, (370, 1250))
            for h_begin, w_begin in itertools.product(
                range(800, back_h, 300), range(135, back_w - 200, 200)
            ):
                stu_cnt += 1
                if stu_cnt <= self.stu_num:
                    img = cv2.imread(
                        f"./imgs/{(stu_cnt - 1) % 11 + 1}.png", cv2.IMREAD_UNCHANGED
                    )
                    cover_img(
                        background,
                        img,
                        (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin),
                    )
                    if "[RaiseHand]" in data[stu_cnt]["message"]:
                        # elif data[stu_cnt]["message"] == "[RaiseHand]":
                        img = cv2.imread("./imgs/hand.png", cv2.IMREAD_UNCHANGED)
                        cover_img(background, img, (h_begin - 90, w_begin + 10))
                    elif data[stu_cnt]["message"] not in ["", "[RaiseHand]"]:
                        img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
                        cover_img(background, img, (h_begin - 90, w_begin + 10))
                else:
                    img = cv2.imread("./imgs/empty.png", cv2.IMREAD_UNCHANGED)
                    cover_img(background, img, (h_begin, w_begin))
        return cv2.cvtColor(background, cv2.COLOR_BGR2RGB)

    def return_format(self, messages: List[Message]):
        _format = [{"message": "", "sender": idx} for idx in range(len(self.agent_id))]

        for message in messages:
            if self.task == "db_diag":
                content_json: dict = message.content
                content_json["diagnose"] = f"[{message.sender}]: {content_json['diagnose']}"
                _format[self.agent_id[message.sender]]["message"] = json.dumps(content_json)
            elif "sde" in self.task:
                if message.sender == "code_tester":
                    pre_message, message_ = message.content.split("\n")
                    message_ = "{}\n{}".format(pre_message, json.loads(message_)["feedback"])
                    _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
                        message.sender, message_
                    )
                else:
                    _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
                        message.sender, message.content
                    )
            else:
                _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
                    message.sender, message.content
                )

        return _format

    def gen_output(self):
        """
        generate the new image and message of the next step
        :return: [new image, new message]
        """
        # data = self.backend.next_data()
        return_message = self.backend.next()
        data = self.return_format(return_message)

        # data.sort(key=lambda item: item["sender"])
        """
        # [To-Do] Check the message from the backend: only 1 person can speak
        """
        for item in data:
            if item["message"] not in ["", "[RaiseHand]"]:
                self.messages.append((item["sender"], item["message"]))

        message = self.gen_message()
        self.turns_remain -= 1
        return [self.gen_img(data), message]

    def gen_message(self):
        # If the backend cannot handle this error, use the following code.
        message = ""
        """
        for item in data:
            if item["message"] not in ["", "[RaiseHand]"]:
                message = item["message"]
                break
        """
        for sender, msg in self.messages:
            if sender == 0:
                avatar = self.get_avatar(0)
            elif sender == -1:
                avatar = self.get_avatar(-1)
            else:
                avatar = self.get_avatar((sender - 1) % 11 + 1)
            if self.task == "db_diag":
                msg_json = json.loads(msg)
                self.solution_status = [False] * self.tot_solutions
                msg = msg_json["diagnose"]
                if msg_json["solution"] != "":
                    solution: List[str] = msg_json["solution"]
                    for solu in solution:
                        if "query" in solu or "queries" in solu:
                            self.solution_status[0] = True
                            solu = solu.replace("query", '<span style="color:yellow;">query</span>')
                            solu = solu.replace("queries", '<span style="color:yellow;">queries</span>')
                        if "join" in solu:
                            self.solution_status[1] = True
                            solu = solu.replace("join", '<span style="color:yellow;">join</span>')
                        if "index" in solu:
                            self.solution_status[2] = True
                            solu = solu.replace("index", '<span style="color:yellow;">index</span>')
                        if "system configuration" in solu:
                            self.solution_status[3] = True
                            solu = solu.replace("system configuration",
                                                '<span style="color:yellow;">system configuration</span>')
                        if "monitor" in solu or "Monitor" in solu or "Investigate" in solu:
                            self.solution_status[4] = True
                            solu = solu.replace("monitor", '<span style="color:yellow;">monitor</span>')
                            solu = solu.replace("Monitor", '<span style="color:yellow;">Monitor</span>')
                            solu = solu.replace("Investigate", '<span style="color:yellow;">Investigate</span>')
                        msg = f"{msg}<br>{solu}"
                if msg_json["knowledge"] != "":
                    msg = f'{msg}<hr style="margin: 5px 0"><span style="font-style: italic">{msg_json["knowledge"]}<span>'
            else:
                msg = msg.replace("<", "&lt;")
                msg = msg.replace(">", "&gt;")
            message = (
                f'<div style="display: flex; align-items: center; margin-bottom: 10px;overflow:auto;">'
                f'<img src="{avatar}" style="width: 5%; height: 5%; border-radius: 25px; margin-right: 10px;">'
                f'<div style="background-color: gray; color: white; padding: 10px; border-radius: 10px;'
                f'max-width: 70%; white-space: pre-wrap">'
                f"{msg}"
                f"</div></div>" + message
            )
        message = '<div id="divDetail" style="height:600px;overflow:auto;">' + message + "</div>"
        return message

    def submit(self, message: str):
        """
        submit a message to the backend
        :param message: message
        :return: [new image, new message]
        """
        self.backend.submit(message)
        self.messages.append((-1, f"[User]: {message}"))
        return self.gen_img([{"message": ""}] * len(self.agent_id)), self.gen_message()

    def launch(self):
        """
        start the frontend
        """
        with gr.Blocks() as demo:
            with gr.Row():
                with gr.Column():
                    image_output = gr.Image()
                    with gr.Row():
                        reset_btn = gr.Button("Reset")
                        # next_btn = gr.Button("Next", variant="primary")
                        next_btn = gr.Button("Next", interactive=False)
                        stop_autoplay_btn = gr.Button(
                            "Stop Autoplay", interactive=False
                        )
                        start_autoplay_btn = gr.Button("Start Autoplay", interactive=False)
                    with gr.Box(visible=False) as solutions:
                        with gr.Column():
                            gr.HTML("Optimization Solutions:")
                            with gr.Row():
                                rewrite_slow_query_btn = gr.Button("Rewrite Slow Query", visible=False)
                                add_query_hints_btn = gr.Button("Add Query Hints", visible=False)
                                update_indexes_btn = gr.Button("Update Indexes", visible=False)
                                tune_parameters_btn = gr.Button("Tune Parameters", visible=False)
                                gather_more_info_btn = gr.Button("Gather More Info", visible=False)
                # text_output = gr.Textbox()
                text_output = gr.HTML(self.reset()[1])

            # Given a button to provide student numbers and their info.
            # stu_num = gr.Number(label="Student Number", precision=0)
            # stu_num = self.stu_num

            if self.task == "db_diag":
                user_msg = gr.Textbox()
                submit_btn = gr.Button("Submit", variant="primary")

                submit_btn.click(fn=self.submit, inputs=user_msg, outputs=[image_output, text_output], show_progress=False)

            # next_btn.click(fn=self.gen_output, inputs=None, outputs=[image_output, text_output], show_progress=False)
            next_btn.click(
                fn=self.delay_gen_output,
                inputs=None,
                outputs=[
                    image_output,
                    text_output,
                    next_btn,
                    start_autoplay_btn,
                    rewrite_slow_query_btn,
                    add_query_hints_btn,
                    update_indexes_btn,
                    tune_parameters_btn,
                    gather_more_info_btn,
                    solutions
                ],
                show_progress=False,
            )

            # [To-Do] Add button: re-start (load different people and env)
            # reset_btn.click(fn=self.reset, inputs=stu_num, outputs=[image_output, text_output], show_progress=False)
            # reset_btn.click(fn=self.reset, inputs=None, outputs=[image_output, text_output], show_progress=False)
            reset_btn.click(
                fn=self.delay_reset,
                inputs=None,
                outputs=[
                    image_output,
                    text_output,
                    next_btn,
                    stop_autoplay_btn,
                    start_autoplay_btn,
                    rewrite_slow_query_btn,
                    add_query_hints_btn,
                    update_indexes_btn,
                    tune_parameters_btn,
                    gather_more_info_btn,
                    solutions
                ],
                show_progress=False,
            )

            stop_autoplay_btn.click(
                fn=self.stop_autoplay,
                inputs=None,
                outputs=[next_btn, stop_autoplay_btn, start_autoplay_btn],
                show_progress=False,
            )
            start_autoplay_btn.click(
                fn=self.start_autoplay,
                inputs=None,
                outputs=[
                    image_output,
                    text_output,
                    next_btn,
                    stop_autoplay_btn,
                    start_autoplay_btn,
                    rewrite_slow_query_btn,
                    add_query_hints_btn,
                    update_indexes_btn,
                    tune_parameters_btn,
                    gather_more_info_btn,
                    solutions
                ],
                show_progress=False,
            )

        demo.queue(concurrency_count=5, max_size=20).launch()
        # demo.launch()
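The nested pixel loop in cover_img is O(H·W) in pure Python; a vectorized sketch with the same effect, assuming the overlay is RGBA as with the PNGs loaded via cv2.IMREAD_UNCHANGED (the function name is a stand-in, not part of the original file):

import numpy as np

def cover_img_fast(background, img, place):
    # Boolean mask from the alpha channel; copy RGB wherever alpha is non-zero.
    h, w = img.shape[:2]
    y, x = place
    mask = img[:, :, 3] > 0
    background[y:y + h, x:x + w][mask] = img[:, :, :3][mask]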
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/concurrent.py
DELETED
@@ -1,80 +0,0 @@
from __future__ import annotations
import asyncio
from colorama import Fore

from typing import TYPE_CHECKING, List

from . import decision_maker_registry
from .base import BaseDecisionMaker
from agentverse.logging import typewriter_log, logger

if TYPE_CHECKING:
    from agentverse.agents import BaseAgent, SolverAgent, CriticAgent
    from agentverse.message import Message, CriticMessage, SolverMessage


@decision_maker_registry.register("concurrent")
class ConcurrentDecisionMaker(BaseDecisionMaker):
    """
    Discuss in a concurrent manner.
    """

    name: str = "concurrent"
    max_inner_turns: int = 3

    async def astep(
        self,
        agents: List[BaseAgent],
        task_description: str,
        previous_plan: str = "No solution yet.",
        advice: str = "No advice yet.",
        *args,
        **kwargs,
    ) -> List[SolverMessage]:
        # Here we assume that the first agent is the solver.
        # The rest of the agents are the reviewers.
        last_reviews = []
        for i in range(self.max_inner_turns):
            reviews: List[CriticMessage] = await asyncio.gather(
                *[
                    agent.astep(previous_plan, advice, task_description)
                    for agent in agents[1:]
                ]
            )
            logger.info("", "Reviews:", Fore.YELLOW)
            logger.info(
                "",
                "\n".join(
                    [f"[{review.sender}]: {review.content}" for review in reviews]
                ),
                Fore.YELLOW,
            )
            nonempty_reviews = []
            for review in reviews:
                if not review.is_agree and review.content != "":
                    nonempty_reviews.append(review)
            self.broadcast_messages(agents[1:], nonempty_reviews)
            if len(nonempty_reviews) == 0:
                break
            last_reviews = nonempty_reviews

        agents[0].add_message_to_memory(last_reviews)
        result = agents[0].step(previous_plan, advice, task_description)
        # agents[0].add_message_to_memory([result])
        self.broadcast_messages(agents, [result])
        return [result]

    def broadcast_messages(self, agents, messages) -> None:
        for agent in agents:
            agent.add_message_to_memory(messages)

    def p2p_messages(self, agents, messages) -> None:
        agents[0].add_message_to_memory(messages)
        for message in messages:
            for agent in agents[1:]:
                if agent.name == message.sender:
                    agent.add_message_to_memory(messages)
                    break

    def reset(self):
        pass
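The heart of this decision maker is the asyncio.gather fan-out over the reviewer agents. A self-contained sketch of that pattern with stand-in agents (the Agent class and its astep signature here are illustrative, not the agentverse API):

import asyncio

class Agent:
    def __init__(self, name):
        self.name = name

    async def astep(self, plan, advice, task):
        await asyncio.sleep(0.1)  # stands in for an I/O-bound LLM call
        return f"[{self.name}] review of: {plan}"

async def main():
    reviewers = [Agent(f"critic-{i}") for i in range(3)]
    # All reviewers run concurrently; results arrive in list order.
    reviews = await asyncio.gather(
        *[a.astep("plan v1", "no advice", "demo task") for a in reviewers]
    )
    for r in reviews:
        print(r)

asyncio.run(main())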
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/lzstring.d.ts
DELETED
@@ -1,2 +0,0 @@
import LZString from './string/lzstring/LZString';
export default LZString;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/states/MainState.js
DELETED
@@ -1,222 +0,0 @@
import BaseState from './BaseState.js';
import MatchState from './MatchState.js';
// Actions
import SelectChess from '../actions/SelectChess.js';
import SwapChess from '../actions/SwapChess.js';

const GetValue = Phaser.Utils.Objects.GetValue;

class State extends BaseState {
    constructor(bejeweled, config) {
        super(bejeweled, config);
        // this.bejeweled = bejeweled; // Bejeweled
        // this.board = bejeweled.board; // Bejeweled.board

        this.selectedChess1;
        this.selectedChess2;
        this.matchState = new MatchState(bejeweled, config); // sub-state

        // Actions
        // select1 action
        this.select1Action = GetValue(config, 'select1Action', SelectChess);
        // select2 action
        this.select2Action = GetValue(config, 'select2Action', this.select1Action);
        // Swap action
        this.swapAction = GetValue(config, 'swapAction', SwapChess);
        // UndoSwap action
        this.undoSwapAction = GetValue(config, 'undoSwapAction', this.swapAction);

        var debug = GetValue(config, 'debug', false);
        if (debug) {
            this.on('statechange', this.printState, this);
        }
    }

    shutdown() {
        super.shutdown();

        this.matchState.shutdown();

        this.matchState = undefined;
        this.selectedChess1 = undefined;
        this.selectedChess2 = undefined;
        return this;
    }

    // START
    enter_START() {
        this.board.init(); // Fill background tiles
        this.next();
    }
    next_START() {
        return 'RESET';
    }
    // START

    // RESET
    enter_RESET() {
        this.board.reset(); // Refill chess
        this.next();
    }
    next_RESET() {
        return 'PRETEST';
    }
    // RESET

    // PRETEST
    enter_PRETEST() {
        this.next();
    }
    next_PRETEST() {
        var nextState;
        if (this.board.preTest()) {
            nextState = 'SELECT1START';
        } else {
            nextState = 'RESET';
        }
        return nextState;
    }
    // PRETEST

    // SELECT1START
    enter_SELECT1START() {
        this.selectedChess1 = undefined;
        this.selectedChess2 = undefined;

        this.bejeweled.emit('select1-start', this.board.board, this.bejeweled);
    }
    selectChess1(chess) {
        if (this.state === 'SELECT1START') {
            this.selectedChess1 = chess;
            this.next();
        }
        return this;
    }
    next_SELECT1START() {
        var nextState;
        if (this.selectedChess1) {
            nextState = 'SELECT1';
        }
        return nextState;
    }
    // SELECT1START

    // SELECT1
    enter_SELECT1() {
        var board = this.board.board,
            chess = this.selectedChess1;

        this.bejeweled.emit('select1', chess, board, this.bejeweled);

        this.select1Action(chess, board, this.bejeweled);

        // To next state when all completed
        this.next();
    }
    next_SELECT1() {
        return 'SELECT2START';
    }
    // SELECT1

    // SELECT2START
    enter_SELECT2START() {
        this.bejeweled.emit('select2-start', this.board.board, this.bejeweled);
    }
    selectChess2(chess) {
        if (this.state === 'SELECT2START') {
            this.selectedChess2 = chess;
            this.next();
        }
        return this;
    }
    next_SELECT2START() {
        var nextState;
        if (this.selectedChess2 &&
            this.board.board.areNeighbors(this.selectedChess1, this.selectedChess2)) {
            nextState = 'SELECT2';
        } else {
            nextState = 'SELECT1START';
        }
        return nextState;
    }
    // SELECT2START

    // SELECT2
    enter_SELECT2() {
        var board = this.board.board,
            chess = this.selectedChess2;

        this.bejeweled.emit('select2', chess, board, this.bejeweled);

        this.select2Action(chess, board, this.bejeweled);

        // To next state when all completed
        this.next();
    }
    next_SELECT2() {
        return 'SWAP';
    }
    // SELECT2

    // SWAP
    enter_SWAP() {
        var board = this.board.board,
            chess1 = this.selectedChess1,
            chess2 = this.selectedChess2;

        this.bejeweled.emit('swap', chess1, chess2, board, this.bejeweled);

        this.swapAction(chess1, chess2, board, this.bejeweled);

        // To next state when all completed
        this.next();
    }
    next_SWAP() {
        return 'MATCH3';
    }
    // SWAP

    // MATCH3
    enter_MATCH3() {
        this.matchState
            .once('complete', this.next, this)
            .goto('START');
    }
    next_MATCH3() {
        var nextState;
        if (this.matchState.totalMatchedLinesCount === 0) {
            nextState = 'UNDOSWAP';
        } else {
            nextState = 'PRETEST';
        }
        return nextState;
    }
    // MATCH3

    // UNDO_SWAP
    enter_UNDOSWAP() {
        var board = this.board.board,
            chess1 = this.selectedChess1,
            chess2 = this.selectedChess2;

        this.bejeweled.emit('undo-swap', chess1, chess2, board, this.bejeweled);

        this.undoSwapAction(chess1, chess2, board, this.bejeweled);

        // To next state when all completed
        this.next();
    }
    next_UNDOSWAP() {
        return 'SELECT1START';
    }
    // UNDO_SWAP

    // debug
    printState() {
        console.log('Main state: ' + this.prevState + ' -> ' + this.state);
    }

}

export default State;
spaces/Aki004/herta-so-vits/models.py
DELETED
@@ -1,420 +0,0 @@
|
|
1 |
-
import copy
|
2 |
-
import math
|
3 |
-
import torch
|
4 |
-
from torch import nn
|
5 |
-
from torch.nn import functional as F
|
6 |
-
|
7 |
-
import modules.attentions as attentions
|
8 |
-
import modules.commons as commons
|
9 |
-
import modules.modules as modules
|
10 |
-
|
11 |
-
from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
|
12 |
-
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
|
13 |
-
|
14 |
-
import utils
|
15 |
-
from modules.commons import init_weights, get_padding
|
16 |
-
from vdecoder.hifigan.models import Generator
|
17 |
-
from utils import f0_to_coarse
|
18 |
-
|
19 |
-
class ResidualCouplingBlock(nn.Module):
|
20 |
-
def __init__(self,
|
21 |
-
channels,
|
22 |
-
hidden_channels,
|
23 |
-
kernel_size,
|
24 |
-
dilation_rate,
|
25 |
-
n_layers,
|
26 |
-
n_flows=4,
|
27 |
-
gin_channels=0):
|
28 |
-
super().__init__()
|
29 |
-
self.channels = channels
|
30 |
-
self.hidden_channels = hidden_channels
|
31 |
-
self.kernel_size = kernel_size
|
32 |
-
self.dilation_rate = dilation_rate
|
33 |
-
self.n_layers = n_layers
|
34 |
-
self.n_flows = n_flows
|
35 |
-
self.gin_channels = gin_channels
|
36 |
-
|
37 |
-
self.flows = nn.ModuleList()
|
38 |
-
for i in range(n_flows):
|
39 |
-
self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
|
40 |
-
self.flows.append(modules.Flip())
|
41 |
-
|
42 |
-
def forward(self, x, x_mask, g=None, reverse=False):
|
43 |
-
if not reverse:
|
44 |
-
for flow in self.flows:
|
45 |
-
x, _ = flow(x, x_mask, g=g, reverse=reverse)
|
46 |
-
else:
|
47 |
-
for flow in reversed(self.flows):
|
48 |
-
x = flow(x, x_mask, g=g, reverse=reverse)
|
49 |
-
return x
|
50 |
-
|
51 |
-
|
52 |
-
class Encoder(nn.Module):
|
53 |
-
def __init__(self,
|
54 |
-
in_channels,
|
55 |
-
out_channels,
|
56 |
-
hidden_channels,
|
57 |
-
kernel_size,
|
58 |
-
dilation_rate,
|
59 |
-
n_layers,
|
60 |
-
gin_channels=0):
|
61 |
-
super().__init__()
|
62 |
-
self.in_channels = in_channels
|
63 |
-
self.out_channels = out_channels
|
64 |
-
self.hidden_channels = hidden_channels
|
65 |
-
self.kernel_size = kernel_size
|
66 |
-
self.dilation_rate = dilation_rate
|
67 |
-
self.n_layers = n_layers
|
68 |
-
self.gin_channels = gin_channels
|
69 |
-
|
70 |
-
self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
|
71 |
-
self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
|
72 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
73 |
-
|
74 |
-
def forward(self, x, x_lengths, g=None):
|
75 |
-
# print(x.shape,x_lengths.shape)
|
76 |
-
x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
|
77 |
-
x = self.pre(x) * x_mask
|
78 |
-
x = self.enc(x, x_mask, g=g)
|
79 |
-
stats = self.proj(x) * x_mask
|
80 |
-
m, logs = torch.split(stats, self.out_channels, dim=1)
|
81 |
-
z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
|
82 |
-
return z, m, logs, x_mask
|
83 |
-
|
84 |
-
|
85 |
-
class TextEncoder(nn.Module):
|
86 |
-
def __init__(self,
|
87 |
-
out_channels,
|
88 |
-
hidden_channels,
|
89 |
-
kernel_size,
|
90 |
-
n_layers,
|
91 |
-
gin_channels=0,
|
92 |
-
filter_channels=None,
|
93 |
-
n_heads=None,
|
94 |
-
p_dropout=None):
|
95 |
-
super().__init__()
|
96 |
-
self.out_channels = out_channels
|
97 |
-
self.hidden_channels = hidden_channels
|
98 |
-
self.kernel_size = kernel_size
|
99 |
-
self.n_layers = n_layers
|
100 |
-
self.gin_channels = gin_channels
|
101 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
102 |
-
self.f0_emb = nn.Embedding(256, hidden_channels)
|
103 |
-
|
104 |
-
self.enc_ = attentions.Encoder(
|
105 |
-
hidden_channels,
|
106 |
-
filter_channels,
|
107 |
-
n_heads,
|
108 |
-
n_layers,
|
109 |
-
kernel_size,
|
110 |
-
p_dropout)
|
111 |
-
|
112 |
-
def forward(self, x, x_mask, f0=None, noice_scale=1):
|
113 |
-
x = x + self.f0_emb(f0).transpose(1,2)
|
114 |
-
x = self.enc_(x * x_mask, x_mask)
|
115 |
-
stats = self.proj(x) * x_mask
|
116 |
-
m, logs = torch.split(stats, self.out_channels, dim=1)
|
117 |
-
z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask
|
118 |
-
|
119 |
-
return z, m, logs, x_mask
|
120 |
-
|
121 |
-
|
122 |
-
|
123 |
-
class DiscriminatorP(torch.nn.Module):
|
124 |
-
def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
|
125 |
-
super(DiscriminatorP, self).__init__()
|
126 |
-
self.period = period
|
127 |
-
self.use_spectral_norm = use_spectral_norm
|
128 |
-
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
|
129 |
-
self.convs = nn.ModuleList([
|
130 |
-
norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
|
131 |
-
norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
|
132 |
-
norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
|
133 |
-
norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
|
134 |
-
norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
|
135 |
-
])
|
136 |
-
self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
|
137 |
-
|
138 |
-
def forward(self, x):
|
139 |
-
fmap = []
|
140 |
-
|
141 |
-
# 1d to 2d
|
142 |
-
b, c, t = x.shape
|
143 |
-
if t % self.period != 0: # pad first
|
144 |
-
n_pad = self.period - (t % self.period)
|
145 |
-
x = F.pad(x, (0, n_pad), "reflect")
|
146 |
-
t = t + n_pad
|
147 |
-
x = x.view(b, c, t // self.period, self.period)
|
148 |
-
|
149 |
-
for l in self.convs:
|
150 |
-
x = l(x)
|
151 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
152 |
-
fmap.append(x)
|
153 |
-
x = self.conv_post(x)
|
154 |
-
fmap.append(x)
|
155 |
-
x = torch.flatten(x, 1, -1)
|
156 |
-
|
157 |
-
return x, fmap
|
158 |
-
|
159 |
-
|
160 |
-
class DiscriminatorS(torch.nn.Module):
|
161 |
-
def __init__(self, use_spectral_norm=False):
|
162 |
-
super(DiscriminatorS, self).__init__()
|
163 |
-
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
|
164 |
-
self.convs = nn.ModuleList([
|
165 |
-
norm_f(Conv1d(1, 16, 15, 1, padding=7)),
|
166 |
-
norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
|
167 |
-
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
|
168 |
-
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
|
169 |
-
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
|
170 |
-
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
|
171 |
-
])
|
172 |
-
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
|
173 |
-
|
174 |
-
def forward(self, x):
|
175 |
-
fmap = []
|
176 |
-
|
177 |
-
for l in self.convs:
|
178 |
-
x = l(x)
|
179 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
180 |
-
fmap.append(x)
|
181 |
-
x = self.conv_post(x)
|
182 |
-
fmap.append(x)
|
183 |
-
x = torch.flatten(x, 1, -1)
|
184 |
-
|
185 |
-
return x, fmap
|
186 |
-
|
187 |
-
|
188 |
-
class MultiPeriodDiscriminator(torch.nn.Module):
|
189 |
-
def __init__(self, use_spectral_norm=False):
|
190 |
-
super(MultiPeriodDiscriminator, self).__init__()
|
191 |
-
periods = [2,3,5,7,11]
|
192 |
-
|
193 |
-
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
|
194 |
-
discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
|
195 |
-
self.discriminators = nn.ModuleList(discs)
|
196 |
-
|
197 |
-
def forward(self, y, y_hat):
|
198 |
-
y_d_rs = []
|
199 |
-
y_d_gs = []
|
200 |
-
fmap_rs = []
|
201 |
-
fmap_gs = []
|
202 |
-
for i, d in enumerate(self.discriminators):
|
203 |
-
y_d_r, fmap_r = d(y)
|
204 |
-
y_d_g, fmap_g = d(y_hat)
|
205 |
-
y_d_rs.append(y_d_r)
|
206 |
-
y_d_gs.append(y_d_g)
|
207 |
-
fmap_rs.append(fmap_r)
|
208 |
-
fmap_gs.append(fmap_g)
|
209 |
-
|
210 |
-
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
|
211 |
-
|
212 |
-
|
213 |
-
class SpeakerEncoder(torch.nn.Module):
|
214 |
-
def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256):
|
215 |
-
super(SpeakerEncoder, self).__init__()
|
216 |
-
self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
|
217 |
-
self.linear = nn.Linear(model_hidden_size, model_embedding_size)
|
218 |
-
self.relu = nn.ReLU()
|
219 |
-
|
220 |
-
def forward(self, mels):
|
221 |
-
self.lstm.flatten_parameters()
|
222 |
-
_, (hidden, _) = self.lstm(mels)
|
223 |
-
embeds_raw = self.relu(self.linear(hidden[-1]))
|
224 |
-
return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
|
225 |
-
|
226 |
-
def compute_partial_slices(self, total_frames, partial_frames, partial_hop):
|
227 |
-
mel_slices = []
|
228 |
-
for i in range(0, total_frames-partial_frames, partial_hop):
|
229 |
-
mel_range = torch.arange(i, i+partial_frames)
|
230 |
-
mel_slices.append(mel_range)
|
231 |
-
|
232 |
-
return mel_slices
|
233 |
-
|
234 |
-
def embed_utterance(self, mel, partial_frames=128, partial_hop=64):
|
235 |
-
mel_len = mel.size(1)
|
236 |
-
last_mel = mel[:,-partial_frames:]
|
237 |
-
|
238 |
-
if mel_len > partial_frames:
|
239 |
-
mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop)
|
240 |
-
mels = list(mel[:,s] for s in mel_slices)
|
241 |
-
mels.append(last_mel)
|
242 |
-
mels = torch.stack(tuple(mels), 0).squeeze(1)
|
243 |
-
|
244 |
-
with torch.no_grad():
|
245 |
-
partial_embeds = self(mels)
|
246 |
-
embed = torch.mean(partial_embeds, axis=0).unsqueeze(0)
|
247 |
-
#embed = embed / torch.linalg.norm(embed, 2)
|
248 |
-
else:
|
249 |
-
with torch.no_grad():
|
250 |
-
embed = self(last_mel)
|
251 |
-
|
252 |
-
return embed
|
253 |
-
|
254 |
-
class F0Decoder(nn.Module):
|
255 |
-
def __init__(self,
|
256 |
-
out_channels,
|
257 |
-
hidden_channels,
|
258 |
-
filter_channels,
|
259 |
-
n_heads,
|
260 |
-
n_layers,
|
261 |
-
kernel_size,
|
262 |
-
p_dropout,
|
263 |
-
spk_channels=0):
|
264 |
-
super().__init__()
|
265 |
-
self.out_channels = out_channels
|
266 |
-
self.hidden_channels = hidden_channels
|
267 |
-
self.filter_channels = filter_channels
|
268 |
-
self.n_heads = n_heads
|
269 |
-
self.n_layers = n_layers
|
270 |
-
self.kernel_size = kernel_size
|
271 |
-
self.p_dropout = p_dropout
|
272 |
-
self.spk_channels = spk_channels
|
273 |
-
|
274 |
-
self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1)
|
275 |
-
self.decoder = attentions.FFT(
|
276 |
-
hidden_channels,
|
277 |
-
filter_channels,
|
278 |
-
n_heads,
|
279 |
-
n_layers,
|
280 |
-
kernel_size,
|
281 |
-
p_dropout)
|
282 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
|
283 |
-
self.f0_prenet = nn.Conv1d(1, hidden_channels , 3, padding=1)
|
284 |
-
self.cond = nn.Conv1d(spk_channels, hidden_channels, 1)
|
285 |
-
|
286 |
-
def forward(self, x, norm_f0, x_mask, spk_emb=None):
|
287 |
-
x = torch.detach(x)
|
288 |
-
if (spk_emb is not None):
|
289 |
-
x = x + self.cond(spk_emb)
|
290 |
-
x += self.f0_prenet(norm_f0)
|
291 |
-
x = self.prenet(x) * x_mask
|
292 |
-
x = self.decoder(x * x_mask, x_mask)
|
293 |
-
x = self.proj(x) * x_mask
|
294 |
-
return x
|
295 |
-
|
296 |
-
|
297 |
-
class SynthesizerTrn(nn.Module):
|
298 |
-
"""
|
299 |
-
Synthesizer for Training
|
300 |
-
"""
|
301 |
-
|
302 |
-
def __init__(self,
|
303 |
-
spec_channels,
|
304 |
-
segment_size,
|
305 |
-
inter_channels,
|
306 |
-
hidden_channels,
|
307 |
-
filter_channels,
|
308 |
-
n_heads,
|
309 |
-
n_layers,
|
310 |
-
kernel_size,
|
311 |
-
p_dropout,
|
312 |
-
resblock,
|
313 |
-
resblock_kernel_sizes,
|
314 |
-
resblock_dilation_sizes,
|
315 |
-
upsample_rates,
|
316 |
-
upsample_initial_channel,
|
317 |
-
upsample_kernel_sizes,
|
318 |
-
gin_channels,
|
319 |
-
ssl_dim,
|
320 |
-
n_speakers,
|
321 |
-
sampling_rate=44100,
|
322 |
-
**kwargs):
|
323 |
-
|
324 |
-
super().__init__()
|
325 |
-
self.spec_channels = spec_channels
|
326 |
-
self.inter_channels = inter_channels
|
327 |
-
self.hidden_channels = hidden_channels
|
328 |
-
self.filter_channels = filter_channels
|
329 |
-
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.gin_channels = gin_channels
        self.ssl_dim = ssl_dim
        self.emb_g = nn.Embedding(n_speakers, gin_channels)

        self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2)

        self.enc_p = TextEncoder(
            inter_channels,
            hidden_channels,
            filter_channels=filter_channels,
            n_heads=n_heads,
            n_layers=n_layers,
            kernel_size=kernel_size,
            p_dropout=p_dropout
        )
        hps = {
            "sampling_rate": sampling_rate,
            "inter_channels": inter_channels,
            "resblock": resblock,
            "resblock_kernel_sizes": resblock_kernel_sizes,
            "resblock_dilation_sizes": resblock_dilation_sizes,
            "upsample_rates": upsample_rates,
            "upsample_initial_channel": upsample_initial_channel,
            "upsample_kernel_sizes": upsample_kernel_sizes,
            "gin_channels": gin_channels,
        }
        self.dec = Generator(h=hps)
        self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
        self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
        self.f0_decoder = F0Decoder(
            1,
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout,
            spk_channels=gin_channels
        )
        self.emb_uv = nn.Embedding(2, hidden_channels)

    def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None):
        g = self.emb_g(g).transpose(1, 2)
        # ssl prenet
        x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
        x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2)

        # f0 predict
        lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
        norm_lf0 = utils.normalize_f0(lf0, x_mask, uv)
        pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)

        # encoder
        z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0))
        z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g)

        # flow
        z_p = self.flow(z, spec_mask, g=g)
        z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size)

        # nsf decoder
        o = self.dec(z_slice, g=g, f0=pitch_slice)

        return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0

    def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False):
        c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
        g = self.emb_g(g).transpose(1, 2)
        x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
        x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2)

        if predict_f0:
            lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
            norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False)
            pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)
            f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1)

        z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale)
        z = self.flow(z_p, c_mask, g=g, reverse=True)
        o = self.dec(z * c_mask, g=g, f0=f0)
        return o
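The forward/infer pair above converts F0 between Hz and a compressed mel-style log scale. A minimal sketch (hypothetical, not part of the file) verifying that the two formulas are exact inverses of each other:

# Hypothetical sketch (not part of the repo): the log-F0 transform used in
# forward() and its inverse used in infer(), checked for round-trip consistency.
import torch

f0 = torch.tensor([100.0, 220.0, 440.0])                # pitch contour in Hz
lf0 = 2595. * torch.log10(1. + f0 / 700.) / 500         # forward transform
f0_back = 700 * (torch.pow(10, lf0 * 500 / 2595) - 1)   # inverse transform
assert torch.allclose(f0, f0_back, atol=1e-3)           # exact round trip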
spaces/Alpaca233/SadTalker/src/face3d/util/visualizer.py
DELETED
@@ -1,227 +0,0 @@
"""This script defines the visualizer for Deep3DFaceRecon_pytorch
"""

import numpy as np
import os
import sys
import ntpath
import time
from . import util, html
from subprocess import Popen, PIPE
from torch.utils.tensorboard import SummaryWriter

def save_images(webpage, visuals, image_path, aspect_ratio=1.0, width=256):
    """Save images to the disk.

    Parameters:
        webpage (the HTML class) -- the HTML webpage class that stores these images (see html.py for more details)
        visuals (OrderedDict)    -- an ordered dictionary that stores (name, images (either tensor or numpy)) pairs
        image_path (str)         -- the string is used to create image paths
        aspect_ratio (float)     -- the aspect ratio of saved images
        width (int)              -- the images will be resized to width x width

    This function will save images stored in 'visuals' to the HTML file specified by 'webpage'.
    """
    image_dir = webpage.get_image_dir()
    short_path = ntpath.basename(image_path[0])
    name = os.path.splitext(short_path)[0]

    webpage.add_header(name)
    ims, txts, links = [], [], []

    for label, im_data in visuals.items():
        im = util.tensor2im(im_data)
        image_name = '%s/%s.png' % (label, name)
        os.makedirs(os.path.join(image_dir, label), exist_ok=True)
        save_path = os.path.join(image_dir, image_name)
        util.save_image(im, save_path, aspect_ratio=aspect_ratio)
        ims.append(image_name)
        txts.append(label)
        links.append(image_name)
    webpage.add_images(ims, txts, links, width=width)


class Visualizer():
    """This class includes several functions that can display/save images and print/save logging information.

    It uses the Python library tensorboardX for display, and a Python library 'dominate' (wrapped in 'HTML') for creating HTML files with images.
    """

    def __init__(self, opt):
        """Initialize the Visualizer class

        Parameters:
            opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
        Step 1: Cache the training/test options
        Step 2: create a tensorboard writer
        Step 3: create an HTML object for saving HTML filters
        Step 4: create a logging file to store training losses
        """
        self.opt = opt  # cache the option
        self.use_html = opt.isTrain and not opt.no_html
        self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, 'logs', opt.name))
        self.win_size = opt.display_winsize
        self.name = opt.name
        self.saved = False
        if self.use_html:  # create an HTML object at <checkpoints_dir>/web/; images will be saved under <checkpoints_dir>/web/images/
            self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')
            self.img_dir = os.path.join(self.web_dir, 'images')
            print('create web directory %s...' % self.web_dir)
            util.mkdirs([self.web_dir, self.img_dir])
        # create a logging file to store training losses
        self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
        with open(self.log_name, "a") as log_file:
            now = time.strftime("%c")
            log_file.write('================ Training Loss (%s) ================\n' % now)

    def reset(self):
        """Reset the self.saved status"""
        self.saved = False


    def display_current_results(self, visuals, total_iters, epoch, save_result):
        """Display current results on tensorboard; save current results to an HTML file.

        Parameters:
            visuals (OrderedDict) -- dictionary of images to display or save
            total_iters (int)     -- total iterations
            epoch (int)           -- the current epoch
            save_result (bool)    -- if save the current results to an HTML file
        """
        for label, image in visuals.items():
            self.writer.add_image(label, util.tensor2im(image), total_iters, dataformats='HWC')

        if self.use_html and (save_result or not self.saved):  # save images to an HTML file if they haven't been saved.
            self.saved = True
            # save images to the disk
            for label, image in visuals.items():
                image_numpy = util.tensor2im(image)
                img_path = os.path.join(self.img_dir, 'epoch%.3d_%s.png' % (epoch, label))
                util.save_image(image_numpy, img_path)

            # update website
            webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=0)
            for n in range(epoch, 0, -1):
                webpage.add_header('epoch [%d]' % n)
                ims, txts, links = [], [], []

                for label, image_numpy in visuals.items():
                    image_numpy = util.tensor2im(image)
                    img_path = 'epoch%.3d_%s.png' % (n, label)
                    ims.append(img_path)
                    txts.append(label)
                    links.append(img_path)
                webpage.add_images(ims, txts, links, width=self.win_size)
            webpage.save()

    def plot_current_losses(self, total_iters, losses):
        # G_loss_collection = {}
        # D_loss_collection = {}
        # for name, value in losses.items():
        #     if 'G' in name or 'NCE' in name or 'idt' in name:
        #         G_loss_collection[name] = value
        #     else:
        #         D_loss_collection[name] = value
        # self.writer.add_scalars('G_collec', G_loss_collection, total_iters)
        # self.writer.add_scalars('D_collec', D_loss_collection, total_iters)
        for name, value in losses.items():
            self.writer.add_scalar(name, value, total_iters)

    # losses: same format as |losses| of plot_current_losses
    def print_current_losses(self, epoch, iters, losses, t_comp, t_data):
        """print current losses on console; also save the losses to the disk

        Parameters:
            epoch (int) -- current epoch
            iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
            losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
            t_comp (float) -- computational time per data point (normalized by batch_size)
            t_data (float) -- data loading time per data point (normalized by batch_size)
        """
        message = '(epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (epoch, iters, t_comp, t_data)
        for k, v in losses.items():
            message += '%s: %.3f ' % (k, v)

        print(message)  # print the message
        with open(self.log_name, "a") as log_file:
            log_file.write('%s\n' % message)  # save the message


class MyVisualizer:
    def __init__(self, opt):
        """Initialize the Visualizer class

        Parameters:
            opt -- stores all the experiment flags; needs to be a subclass of BaseOptions
        Step 1: Cache the training/test options
        Step 2: create a tensorboard writer
        Step 3: create an HTML object for saving HTML filters
        Step 4: create a logging file to store training losses
        """
        self.opt = opt  # cache the option
        self.name = opt.name
        self.img_dir = os.path.join(opt.checkpoints_dir, opt.name, 'results')

        if opt.phase != 'test':
            self.writer = SummaryWriter(os.path.join(opt.checkpoints_dir, opt.name, 'logs'))
            # create a logging file to store training losses
            self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')
            with open(self.log_name, "a") as log_file:
                now = time.strftime("%c")
                log_file.write('================ Training Loss (%s) ================\n' % now)


    def display_current_results(self, visuals, total_iters, epoch, dataset='train', save_results=False, count=0, name=None,
                                add_image=True):
        """Display current results on tensorboard; save current results to an HTML file.

        Parameters:
            visuals (OrderedDict) -- dictionary of images to display or save
            total_iters (int)     -- total iterations
            epoch (int)           -- the current epoch
            dataset (str)         -- 'train' or 'val' or 'test'
        """
        # if (not add_image) and (not save_results): return

        for label, image in visuals.items():
            for i in range(image.shape[0]):
                image_numpy = util.tensor2im(image[i])
                if add_image:
                    self.writer.add_image(label + '%s_%02d' % (dataset, i + count),
                                          image_numpy, total_iters, dataformats='HWC')

                if save_results:
                    save_path = os.path.join(self.img_dir, dataset, 'epoch_%s_%06d' % (epoch, total_iters))
                    if not os.path.isdir(save_path):
                        os.makedirs(save_path)

                    if name is not None:
                        img_path = os.path.join(save_path, '%s.png' % name)
                    else:
                        img_path = os.path.join(save_path, '%s_%03d.png' % (label, i + count))
                    util.save_image(image_numpy, img_path)


    def plot_current_losses(self, total_iters, losses, dataset='train'):
        for name, value in losses.items():
            self.writer.add_scalar(name + '/%s' % dataset, value, total_iters)

    # losses: same format as |losses| of plot_current_losses
    def print_current_losses(self, epoch, iters, losses, t_comp, t_data, dataset='train'):
        """print current losses on console; also save the losses to the disk

        Parameters:
            epoch (int) -- current epoch
            iters (int) -- current training iteration during this epoch (reset to 0 at the end of every epoch)
            losses (OrderedDict) -- training losses stored in the format of (name, float) pairs
            t_comp (float) -- computational time per data point (normalized by batch_size)
            t_data (float) -- data loading time per data point (normalized by batch_size)
        """
        message = '(dataset: %s, epoch: %d, iters: %d, time: %.3f, data: %.3f) ' % (
            dataset, epoch, iters, t_comp, t_data)
        for k, v in losses.items():
            message += '%s: %.3f ' % (k, v)

        print(message)  # print the message
        with open(self.log_name, "a") as log_file:
            log_file.write('%s\n' % message)  # save the message
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/__init__.py
DELETED
@@ -1,37 +0,0 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ..utils import is_flax_available, is_torch_available


if is_torch_available():
    from .adapter import MultiAdapter, T2IAdapter
    from .autoencoder_asym_kl import AsymmetricAutoencoderKL
    from .autoencoder_kl import AutoencoderKL
    from .controlnet import ControlNetModel
    from .dual_transformer_2d import DualTransformer2DModel
    from .modeling_utils import ModelMixin
    from .prior_transformer import PriorTransformer
    from .t5_film_transformer import T5FilmDecoder
    from .transformer_2d import Transformer2DModel
    from .unet_1d import UNet1DModel
    from .unet_2d import UNet2DModel
    from .unet_2d_condition import UNet2DConditionModel
    from .unet_3d_condition import UNet3DConditionModel
    from .vq_model import VQModel

if is_flax_available():
    from .controlnet_flax import FlaxControlNetModel
    from .unet_2d_condition_flax import FlaxUNet2DConditionModel
    from .vae_flax import FlaxAutoencoderKL
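Because the imports above are gated on backend availability, consumers can probe the same flags before touching torch-only or flax-only classes. A brief sketch (hypothetical; assumes a diffusers install with torch present):

# Hypothetical sketch: mirror the backend gating before importing a torch-only model.
from diffusers.utils import is_torch_available

if is_torch_available():
    from diffusers.models import UNet2DModel
    model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
    print(sum(p.numel() for p in model.parameters()))  # parameter count of the tiny UNet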
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/audioldm/pipeline_audioldm.py
DELETED
@@ -1,559 +0,0 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
from typing import Any, Callable, Dict, List, Optional, Union

import numpy as np
import torch
import torch.nn.functional as F
from transformers import ClapTextModelWithProjection, RobertaTokenizer, RobertaTokenizerFast, SpeechT5HifiGan

from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
from ...utils import logging, randn_tensor, replace_example_docstring
from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline


logger = logging.get_logger(__name__)  # pylint: disable=invalid-name

EXAMPLE_DOC_STRING = """
    Examples:
        ```py
        >>> from diffusers import AudioLDMPipeline
        >>> import torch
        >>> import scipy

        >>> repo_id = "cvssp/audioldm-s-full-v2"
        >>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
        >>> pipe = pipe.to("cuda")

        >>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
        >>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]

        >>> # save the audio sample as a .wav file
        >>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
        ```
"""


class AudioLDMPipeline(DiffusionPipeline):
    r"""
    Pipeline for text-to-audio generation using AudioLDM.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
    implemented for all pipelines (downloading, saving, running on a particular device, etc.).

    Args:
        vae ([`AutoencoderKL`]):
            Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
        text_encoder ([`~transformers.ClapTextModelWithProjection`]):
            Frozen text-encoder (`ClapTextModelWithProjection`, specifically the
            [laion/clap-htsat-unfused](https://huggingface.co/laion/clap-htsat-unfused) variant).
        tokenizer ([`PreTrainedTokenizer`]):
            A [`~transformers.RobertaTokenizer`] to tokenize text.
        unet ([`UNet2DConditionModel`]):
            A `UNet2DConditionModel` to denoise the encoded audio latents.
        scheduler ([`SchedulerMixin`]):
            A scheduler to be used in combination with `unet` to denoise the encoded audio latents. Can be one of
            [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
        vocoder ([`~transformers.SpeechT5HifiGan`]):
            Vocoder of class `SpeechT5HifiGan`.
    """

    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: ClapTextModelWithProjection,
        tokenizer: Union[RobertaTokenizer, RobertaTokenizerFast],
        unet: UNet2DConditionModel,
        scheduler: KarrasDiffusionSchedulers,
        vocoder: SpeechT5HifiGan,
    ):
        super().__init__()

        self.register_modules(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            vocoder=vocoder,
        )
        self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
    def enable_vae_slicing(self):
        r"""
        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
        """
        self.vae.enable_slicing()

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
    def disable_vae_slicing(self):
        r"""
        Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
        computing decoding in one step.
        """
        self.vae.disable_slicing()

    def _encode_prompt(
        self,
        prompt,
        device,
        num_waveforms_per_prompt,
        do_classifier_free_guidance,
        negative_prompt=None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
    ):
        r"""
        Encodes the prompt into text encoder hidden states.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                prompt to be encoded
            device (`torch.device`):
                torch device
            num_waveforms_per_prompt (`int`):
                number of waveforms that should be generated per prompt
            do_classifier_free_guidance (`bool`):
                whether to use classifier free guidance or not
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the audio generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale`
                is less than `1`).
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If
                not provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
        """
        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        if prompt_embeds is None:
            text_inputs = self.tokenizer(
                prompt,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )
            text_input_ids = text_inputs.input_ids
            attention_mask = text_inputs.attention_mask
            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids

            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
                text_input_ids, untruncated_ids
            ):
                removed_text = self.tokenizer.batch_decode(
                    untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
                )
                logger.warning(
                    "The following part of your input was truncated because CLAP can only handle sequences up to"
                    f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                )

            prompt_embeds = self.text_encoder(
                text_input_ids.to(device),
                attention_mask=attention_mask.to(device),
            )
            prompt_embeds = prompt_embeds.text_embeds
            # additional L_2 normalization over each hidden-state
            prompt_embeds = F.normalize(prompt_embeds, dim=-1)

        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)

        (
            bs_embed,
            seq_len,
        ) = prompt_embeds.shape
        # duplicate text embeddings for each generation per prompt, using mps friendly method
        prompt_embeds = prompt_embeds.repeat(1, num_waveforms_per_prompt)
        prompt_embeds = prompt_embeds.view(bs_embed * num_waveforms_per_prompt, seq_len)

        # get unconditional embeddings for classifier free guidance
        if do_classifier_free_guidance and negative_prompt_embeds is None:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            max_length = prompt_embeds.shape[1]
            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=max_length,
                truncation=True,
                return_tensors="pt",
            )

            uncond_input_ids = uncond_input.input_ids.to(device)
            attention_mask = uncond_input.attention_mask.to(device)

            negative_prompt_embeds = self.text_encoder(
                uncond_input_ids,
                attention_mask=attention_mask,
            )
            negative_prompt_embeds = negative_prompt_embeds.text_embeds
            # additional L_2 normalization over each hidden-state
            negative_prompt_embeds = F.normalize(negative_prompt_embeds, dim=-1)

        if do_classifier_free_guidance:
            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
            seq_len = negative_prompt_embeds.shape[1]

            negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)

            negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_waveforms_per_prompt)
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_waveforms_per_prompt, seq_len)

            # For classifier free guidance, we need to do two forward passes.
            # Here we concatenate the unconditional and text embeddings into a single batch
            # to avoid doing two forward passes
            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])

        return prompt_embeds

    def decode_latents(self, latents):
        latents = 1 / self.vae.config.scaling_factor * latents
        mel_spectrogram = self.vae.decode(latents).sample
        return mel_spectrogram

    def mel_spectrogram_to_waveform(self, mel_spectrogram):
        if mel_spectrogram.dim() == 4:
            mel_spectrogram = mel_spectrogram.squeeze(1)

        waveform = self.vocoder(mel_spectrogram)
        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
        waveform = waveform.cpu().float()
        return waveform

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
    def prepare_extra_step_kwargs(self, generator, eta):
        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
        # and should be between [0, 1]

        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
        extra_step_kwargs = {}
        if accepts_eta:
            extra_step_kwargs["eta"] = eta

        # check if the scheduler accepts generator
        accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
        if accepts_generator:
            extra_step_kwargs["generator"] = generator
        return extra_step_kwargs

    def check_inputs(
        self,
        prompt,
        audio_length_in_s,
        vocoder_upsample_factor,
        callback_steps,
        negative_prompt=None,
        prompt_embeds=None,
        negative_prompt_embeds=None,
    ):
        min_audio_length_in_s = vocoder_upsample_factor * self.vae_scale_factor
        if audio_length_in_s < min_audio_length_in_s:
            raise ValueError(
                f"`audio_length_in_s` has to be a positive value greater than or equal to {min_audio_length_in_s}, but "
                f"is {audio_length_in_s}."
            )

        if self.vocoder.config.model_in_dim % self.vae_scale_factor != 0:
            raise ValueError(
                f"The number of frequency bins in the vocoder's log-mel spectrogram has to be divisible by the "
                f"VAE scale factor, but got {self.vocoder.config.model_in_dim} bins and a scale factor of "
                f"{self.vae_scale_factor}."
            )

        if (callback_steps is None) or (
            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
        ):
            raise ValueError(
                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
                f" {type(callback_steps)}."
            )

        if prompt is not None and prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
                " only forward one of the two."
            )
        elif prompt is None and prompt_embeds is None:
            raise ValueError(
                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
            )
        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if negative_prompt is not None and negative_prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
            )

        if prompt_embeds is not None and negative_prompt_embeds is not None:
            if prompt_embeds.shape != negative_prompt_embeds.shape:
                raise ValueError(
                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
                    f" {negative_prompt_embeds.shape}."
                )

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents with width->self.vocoder.config.model_in_dim
    def prepare_latents(self, batch_size, num_channels_latents, height, dtype, device, generator, latents=None):
        shape = (
            batch_size,
            num_channels_latents,
            height // self.vae_scale_factor,
            self.vocoder.config.model_in_dim // self.vae_scale_factor,
        )
        if isinstance(generator, list) and len(generator) != batch_size:
            raise ValueError(
                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
            )

        if latents is None:
            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
        else:
            latents = latents.to(device)

        # scale the initial noise by the standard deviation required by the scheduler
        latents = latents * self.scheduler.init_noise_sigma
        return latents

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_DOC_STRING)
    def __call__(
        self,
        prompt: Union[str, List[str]] = None,
        audio_length_in_s: Optional[float] = None,
        num_inference_steps: int = 10,
        guidance_scale: float = 2.5,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        num_waveforms_per_prompt: Optional[int] = 1,
        eta: float = 0.0,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
        callback_steps: Optional[int] = 1,
        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
        output_type: Optional[str] = "np",
    ):
        r"""
        The call function to the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
            audio_length_in_s (`int`, *optional*, defaults to 5.12):
                The length of the generated audio sample in seconds.
            num_inference_steps (`int`, *optional*, defaults to 10):
                The number of denoising steps. More denoising steps usually lead to a higher quality audio at the
                expense of slower inference.
            guidance_scale (`float`, *optional*, defaults to 2.5):
                A higher guidance scale value encourages the model to generate audio that is closely linked to the
                text `prompt` at the expense of lower sound quality. Guidance scale is enabled when
                `guidance_scale > 1`.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
                pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
            num_waveforms_per_prompt (`int`, *optional*, defaults to 1):
                The number of waveforms to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
                to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
                generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a
                latents tensor is generated by sampling using the supplied random `generator`.
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
                provided, text embeddings are generated from the `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
                not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Callable`, *optional*):
                A function that calls every `callback_steps` steps during inference. The function is called with the
                following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function is called. If not specified, the callback is called at
                every step.
            cross_attention_kwargs (`dict`, *optional*):
                A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
                [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
            output_type (`str`, *optional*, defaults to `"np"`):
                The output format of the generated image. Choose between `"np"` to return a NumPy `np.ndarray` or
                `"pt"` to return a PyTorch `torch.Tensor` object.

        Examples:

        Returns:
            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
                If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
                otherwise a `tuple` is returned where the first element is a list with the generated audio.
        """
        # 0. Convert audio input length from seconds to spectrogram height
        vocoder_upsample_factor = np.prod(self.vocoder.config.upsample_rates) / self.vocoder.config.sampling_rate

        if audio_length_in_s is None:
            audio_length_in_s = self.unet.config.sample_size * self.vae_scale_factor * vocoder_upsample_factor

        height = int(audio_length_in_s / vocoder_upsample_factor)

        original_waveform_length = int(audio_length_in_s * self.vocoder.config.sampling_rate)
        if height % self.vae_scale_factor != 0:
            height = int(np.ceil(height / self.vae_scale_factor)) * self.vae_scale_factor
            logger.info(
                f"Audio length in seconds {audio_length_in_s} is increased to {height * vocoder_upsample_factor} "
                f"so that it can be handled by the model. It will be cut to {audio_length_in_s} after the "
                f"denoising process."
            )

        # 1. Check inputs. Raise error if not correct
        self.check_inputs(
            prompt,
            audio_length_in_s,
            vocoder_upsample_factor,
            callback_steps,
            negative_prompt,
            prompt_embeds,
            negative_prompt_embeds,
        )

        # 2. Define call parameters
        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        device = self._execution_device
        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
        # corresponds to doing no classifier free guidance.
        do_classifier_free_guidance = guidance_scale > 1.0

        # 3. Encode input prompt
        prompt_embeds = self._encode_prompt(
            prompt,
            device,
            num_waveforms_per_prompt,
            do_classifier_free_guidance,
            negative_prompt,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
        )

        # 4. Prepare timesteps
        self.scheduler.set_timesteps(num_inference_steps, device=device)
        timesteps = self.scheduler.timesteps

        # 5. Prepare latent variables
        num_channels_latents = self.unet.config.in_channels
        latents = self.prepare_latents(
            batch_size * num_waveforms_per_prompt,
            num_channels_latents,
            height,
            prompt_embeds.dtype,
            device,
            generator,
            latents,
        )

        # 6. Prepare extra step kwargs
        extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)

        # 7. Denoising loop
        num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
        with self.progress_bar(total=num_inference_steps) as progress_bar:
            for i, t in enumerate(timesteps):
                # expand the latents if we are doing classifier free guidance
                latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
                latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)

                # predict the noise residual
                noise_pred = self.unet(
                    latent_model_input,
                    t,
                    encoder_hidden_states=None,
                    class_labels=prompt_embeds,
                    cross_attention_kwargs=cross_attention_kwargs,
                ).sample

                # perform guidance
                if do_classifier_free_guidance:
                    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
                    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

                # compute the previous noisy sample x_t -> x_t-1
                latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample

                # call the callback, if provided
                if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
                    progress_bar.update()
                    if callback is not None and i % callback_steps == 0:
                        callback(i, t, latents)

        # 8. Post-processing
        mel_spectrogram = self.decode_latents(latents)

        audio = self.mel_spectrogram_to_waveform(mel_spectrogram)

        audio = audio[:, :original_waveform_length]

        if output_type == "np":
            audio = audio.numpy()

        if not return_dict:
            return (audio,)

        return AudioPipelineOutput(audios=audio)
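A hedged sketch of the seconds-to-latent-height arithmetic from step 0 above, using assumed AudioLDM-like config values (vocoder hop of 256 samples at 16 kHz, VAE scale factor 4) rather than values read from a real checkpoint:

# Hypothetical sketch of the duration -> spectrogram-height conversion.
import numpy as np

upsample_factor = 256 / 16000   # seconds of audio produced per mel frame (assumed)
vae_scale_factor = 4            # assumed, matches a 4-level VAE
audio_length_in_s = 5.0
height = int(audio_length_in_s / upsample_factor)               # 312 mel frames
if height % vae_scale_factor != 0:                              # round up to a multiple of 4
    height = int(np.ceil(height / vae_scale_factor)) * vae_scale_factor
print(height, height * upsample_factor)                         # 312 frames ~= 4.992 s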
spaces/Andy1621/uniformer_image_detection/mmdet/core/__init__.py
DELETED
@@ -1,7 +0,0 @@
from .anchor import *  # noqa: F401, F403
from .bbox import *  # noqa: F401, F403
from .evaluation import *  # noqa: F401, F403
from .export import *  # noqa: F401, F403
from .mask import *  # noqa: F401, F403
from .post_processing import *  # noqa: F401, F403
from .utils import *  # noqa: F401, F403
spaces/Andy1621/uniformer_image_detection/tools/model_converters/detectron2pytorch.py
DELETED
@@ -1,82 +0,0 @@
import argparse
from collections import OrderedDict

import mmcv
import torch

arch_settings = {50: (3, 4, 6, 3), 101: (3, 4, 23, 3)}


def convert_bn(blobs, state_dict, caffe_name, torch_name, converted_names):
    # detectron replace bn with affine channel layer
    state_dict[torch_name + '.bias'] = torch.from_numpy(blobs[caffe_name + '_b'])
    state_dict[torch_name + '.weight'] = torch.from_numpy(blobs[caffe_name + '_s'])
    bn_size = state_dict[torch_name + '.weight'].size()
    state_dict[torch_name + '.running_mean'] = torch.zeros(bn_size)
    state_dict[torch_name + '.running_var'] = torch.ones(bn_size)
    converted_names.add(caffe_name + '_b')
    converted_names.add(caffe_name + '_s')


def convert_conv_fc(blobs, state_dict, caffe_name, torch_name, converted_names):
    state_dict[torch_name + '.weight'] = torch.from_numpy(blobs[caffe_name + '_w'])
    converted_names.add(caffe_name + '_w')
    if caffe_name + '_b' in blobs:
        state_dict[torch_name + '.bias'] = torch.from_numpy(blobs[caffe_name + '_b'])
        converted_names.add(caffe_name + '_b')


def convert(src, dst, depth):
    """Convert keys in detectron pretrained ResNet models to pytorch style."""
    # load arch_settings
    if depth not in arch_settings:
        raise ValueError('Only support ResNet-50 and ResNet-101 currently')
    block_nums = arch_settings[depth]
    # load caffe model
    caffe_model = mmcv.load(src, encoding='latin1')
    blobs = caffe_model['blobs'] if 'blobs' in caffe_model else caffe_model
    # convert to pytorch style
    state_dict = OrderedDict()
    converted_names = set()
    convert_conv_fc(blobs, state_dict, 'conv1', 'conv1', converted_names)
    convert_bn(blobs, state_dict, 'res_conv1_bn', 'bn1', converted_names)
    for i in range(1, len(block_nums) + 1):
        for j in range(block_nums[i - 1]):
            if j == 0:
                convert_conv_fc(blobs, state_dict, f'res{i + 1}_{j}_branch1',
                                f'layer{i}.{j}.downsample.0', converted_names)
                convert_bn(blobs, state_dict, f'res{i + 1}_{j}_branch1_bn',
                           f'layer{i}.{j}.downsample.1', converted_names)
            for k, letter in enumerate(['a', 'b', 'c']):
                convert_conv_fc(blobs, state_dict,
                                f'res{i + 1}_{j}_branch2{letter}',
                                f'layer{i}.{j}.conv{k + 1}', converted_names)
                convert_bn(blobs, state_dict,
                           f'res{i + 1}_{j}_branch2{letter}_bn',
                           f'layer{i}.{j}.bn{k + 1}', converted_names)
    # check if all layers are converted
    for key in blobs:
        if key not in converted_names:
            print(f'Not Convert: {key}')
    # save checkpoint
    checkpoint = dict()
    checkpoint['state_dict'] = state_dict
    torch.save(checkpoint, dst)


def main():
    parser = argparse.ArgumentParser(description='Convert model keys')
    parser.add_argument('src', help='src detectron model path')
    parser.add_argument('dst', help='save path')
    parser.add_argument('depth', type=int, help='ResNet model depth')
    args = parser.parse_args()
    convert(args.src, args.dst, args.depth)


if __name__ == '__main__':
    main()
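A hypothetical invocation (the file paths are placeholders): the converter can be driven either from the CLI or by calling convert() directly.

# Hypothetical usage sketch, e.g. converting a Detectron ResNet-50 pickle:
#
#   python detectron2pytorch.py R-50.pkl resnet50_detectron.pth 50
#
# or equivalently, from Python:
convert('R-50.pkl', 'resnet50_detectron.pth', depth=50)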
spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_80k_ade20k.py
DELETED
@@ -1,6 +0,0 @@
_base_ = [
    '../_base_/models/danet_r50-d8.py', '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
    decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
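A hedged sketch of how this `_base_` list is resolved (the config path is assumed relative to the repo root): mmcv's Config merges the four base files and then applies the `model` override on top.

# Hypothetical sketch of loading the merged config with mmcv.
import mmcv

cfg = mmcv.Config.fromfile('configs/danet/danet_r50-d8_512x512_80k_ade20k.py')
assert cfg.model.decode_head.num_classes == 150  # override applied over the base model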
spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_480x480_80k_pascal_context_59.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './fcn_r50-d8_480x480_80k_pascal_context_59.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k.py
DELETED
@@ -1,6 +0,0 @@
_base_ = [
    '../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
    decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/base_model.py
DELETED
@@ -1,16 +0,0 @@
import torch


class BaseModel(torch.nn.Module):
    def load(self, path):
        """Load model from file.

        Args:
            path (str): file path
        """
        parameters = torch.load(path, map_location=torch.device('cpu'))

        if "optimizer" in parameters:
            parameters = parameters["model"]

        self.load_state_dict(parameters)
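A minimal sketch (hypothetical, not part of the file) of the intended usage: a network subclasses BaseModel and inherits load(), which also unwraps checkpoints saved as {"model": ..., "optimizer": ...}.

# Hypothetical usage sketch with a toy subclass.
class TinyModel(BaseModel):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 1)

m = TinyModel()
torch.save({"model": m.state_dict(), "optimizer": {}}, "tiny.pt")
m.load("tiny.pt")  # strips the training wrapper, then calls load_state_dict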
spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/autoencoder.py
DELETED
@@ -1,219 +0,0 @@
|
|
1 |
-
import torch
|
2 |
-
import pytorch_lightning as pl
|
3 |
-
import torch.nn.functional as F
|
4 |
-
from contextlib import contextmanager
|
5 |
-
|
6 |
-
from ldm.modules.diffusionmodules.model import Encoder, Decoder
|
7 |
-
from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
|
8 |
-
|
9 |
-
from ldm.util import instantiate_from_config
|
10 |
-
from ldm.modules.ema import LitEma
|
11 |
-
|
12 |
-
|
13 |
-
class AutoencoderKL(pl.LightningModule):
|
14 |
-
def __init__(self,
|
15 |
-
ddconfig,
|
16 |
-
lossconfig,
|
17 |
-
embed_dim,
|
18 |
-
ckpt_path=None,
|
19 |
-
ignore_keys=[],
|
20 |
-
image_key="image",
|
21 |
-
colorize_nlabels=None,
|
22 |
-
monitor=None,
|
23 |
-
ema_decay=None,
|
24 |
-
learn_logvar=False
|
25 |
-
):
|
26 |
-
super().__init__()
|
27 |
-
self.learn_logvar = learn_logvar
|
28 |
-
self.image_key = image_key
|
29 |
-
self.encoder = Encoder(**ddconfig)
|
30 |
-
self.decoder = Decoder(**ddconfig)
|
31 |
-
self.loss = instantiate_from_config(lossconfig)
|
32 |
-
assert ddconfig["double_z"]
|
33 |
-
self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1)
|
34 |
-
self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
|
35 |
-
self.embed_dim = embed_dim
|
36 |
-
if colorize_nlabels is not None:
|
37 |
-
assert type(colorize_nlabels)==int
|
38 |
-
self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
|
39 |
-
if monitor is not None:
|
40 |
-
self.monitor = monitor
|
41 |
-
|
42 |
-
self.use_ema = ema_decay is not None
|
43 |
-
if self.use_ema:
|
44 |
-
self.ema_decay = ema_decay
|
45 |
-
assert 0. < ema_decay < 1.
|
46 |
-
            self.model_ema = LitEma(self, decay=ema_decay)
            print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")

        if ckpt_path is not None:
            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)

    def init_from_ckpt(self, path, ignore_keys=list()):
        sd = torch.load(path, map_location="cpu")["state_dict"]
        keys = list(sd.keys())
        for k in keys:
            for ik in ignore_keys:
                if k.startswith(ik):
                    print("Deleting key {} from state_dict.".format(k))
                    del sd[k]
        self.load_state_dict(sd, strict=False)
        print(f"Restored from {path}")

    @contextmanager
    def ema_scope(self, context=None):
        # Temporarily swap in the EMA weights; always restore on exit.
        if self.use_ema:
            self.model_ema.store(self.parameters())
            self.model_ema.copy_to(self)
            if context is not None:
                print(f"{context}: Switched to EMA weights")
        try:
            yield None
        finally:
            if self.use_ema:
                self.model_ema.restore(self.parameters())
                if context is not None:
                    print(f"{context}: Restored training weights")

    def on_train_batch_end(self, *args, **kwargs):
        if self.use_ema:
            self.model_ema(self)

    def encode(self, x):
        h = self.encoder(x)
        moments = self.quant_conv(h)
        posterior = DiagonalGaussianDistribution(moments)
        return posterior

    def decode(self, z):
        z = self.post_quant_conv(z)
        dec = self.decoder(z)
        return dec

    def forward(self, input, sample_posterior=True):
        posterior = self.encode(input)
        if sample_posterior:
            z = posterior.sample()
        else:
            z = posterior.mode()
        dec = self.decode(z)
        return dec, posterior

    def get_input(self, batch, k):
        x = batch[k]
        if len(x.shape) == 3:
            x = x[..., None]
        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
        return x

    def training_step(self, batch, batch_idx, optimizer_idx):
        inputs = self.get_input(batch, self.image_key)
        reconstructions, posterior = self(inputs)

        if optimizer_idx == 0:
            # train encoder+decoder+logvar
            aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
                                            last_layer=self.get_last_layer(), split="train")
            self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
            return aeloss

        if optimizer_idx == 1:
            # train the discriminator
            discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
                                                last_layer=self.get_last_layer(), split="train")

            self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
            return discloss

    def validation_step(self, batch, batch_idx):
        log_dict = self._validation_step(batch, batch_idx)
        with self.ema_scope():
            log_dict_ema = self._validation_step(batch, batch_idx, postfix="_ema")
        return log_dict

    def _validation_step(self, batch, batch_idx, postfix=""):
        inputs = self.get_input(batch, self.image_key)
        reconstructions, posterior = self(inputs)
        aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,
                                        last_layer=self.get_last_layer(), split="val" + postfix)

        discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,
                                            last_layer=self.get_last_layer(), split="val" + postfix)

        self.log(f"val{postfix}/rec_loss", log_dict_ae[f"val{postfix}/rec_loss"])
        self.log_dict(log_dict_ae)
        self.log_dict(log_dict_disc)
        return self.log_dict

    def configure_optimizers(self):
        lr = self.learning_rate
        ae_params_list = list(self.encoder.parameters()) + list(self.decoder.parameters()) + list(
            self.quant_conv.parameters()) + list(self.post_quant_conv.parameters())
        if self.learn_logvar:
            print(f"{self.__class__.__name__}: Learning logvar")
            ae_params_list.append(self.loss.logvar)
        opt_ae = torch.optim.Adam(ae_params_list,
                                  lr=lr, betas=(0.5, 0.9))
        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
                                    lr=lr, betas=(0.5, 0.9))
        return [opt_ae, opt_disc], []

    def get_last_layer(self):
        return self.decoder.conv_out.weight

    @torch.no_grad()
    def log_images(self, batch, only_inputs=False, log_ema=False, **kwargs):
        log = dict()
        x = self.get_input(batch, self.image_key)
        x = x.to(self.device)
        if not only_inputs:
            xrec, posterior = self(x)
            if x.shape[1] > 3:
                # colorize with random projection
                assert xrec.shape[1] > 3
                x = self.to_rgb(x)
                xrec = self.to_rgb(xrec)
            log["samples"] = self.decode(torch.randn_like(posterior.sample()))
            log["reconstructions"] = xrec
            if log_ema or self.use_ema:
                with self.ema_scope():
                    xrec_ema, posterior_ema = self(x)
                    if x.shape[1] > 3:
                        # colorize with random projection
                        assert xrec_ema.shape[1] > 3
                        xrec_ema = self.to_rgb(xrec_ema)
                    log["samples_ema"] = self.decode(torch.randn_like(posterior_ema.sample()))
                    log["reconstructions_ema"] = xrec_ema
        log["inputs"] = x
        return log

    def to_rgb(self, x):
        assert self.image_key == "segmentation"
        if not hasattr(self, "colorize"):
            self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
        x = F.conv2d(x, weight=self.colorize)
        x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
        return x


class IdentityFirstStage(torch.nn.Module):
    def __init__(self, *args, vq_interface=False, **kwargs):
        super().__init__()  # initialize nn.Module before setting attributes
        self.vq_interface = vq_interface

    def encode(self, x, *args, **kwargs):
        return x

    def decode(self, x, *args, **kwargs):
        return x

    def quantize(self, x, *args, **kwargs):
        if self.vq_interface:
            return x, None, [None, None, None]
        return x

    def forward(self, x, *args, **kwargs):
        return x
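
The core of encode() above is the reparameterization hidden inside DiagonalGaussianDistribution: quant_conv emits 2*z_channels "moments" that split into a mean and a log-variance, and sample() draws z = mean + std * eps. A minimal self-contained sketch of just that step (the tensor shapes and the clamp range are illustrative assumptions, not values read from this file):

import torch

moments = torch.randn(1, 8, 32, 32)            # stand-in for quant_conv(encoder(x))
mean, logvar = torch.chunk(moments, 2, dim=1)  # two [1, 4, 32, 32] halves
logvar = torch.clamp(logvar, -30.0, 20.0)      # assumed numerical-safety clamp
std = torch.exp(0.5 * logvar)
z = mean + std * torch.randn_like(mean)        # what posterior.sample() returns
z_mode = mean                                  # what posterior.mode() returns
print(z.shape, z_mode.shape)                   # torch.Size([1, 4, 32, 32]) twice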
spaces/ArchitSharma/Digital-Photo-Color-Restoration/app.py
DELETED
@@ -1,149 +0,0 @@
# Based on: https://github.com/jantic/DeOldify
import os

os.environ["TORCH_HOME"] = os.path.join(os.getcwd(), ".cache")
os.environ["XDG_CACHE_HOME"] = os.path.join(os.getcwd(), ".cache")

import streamlit as st
import PIL
import cv2
import numpy as np
from io import BytesIO
from datetime import datetime

from src.deoldify import device
from src.deoldify.device_id import DeviceId
from src.deoldify.visualize import *
from src.app_utils import get_model_bin


device.set(device=DeviceId.CPU)


@st.cache_resource
def load_model(model_dir, option):
    if option.lower() == 'artistic':
        model_url = 'https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth'
        get_model_bin(model_url, os.path.join(model_dir, "ColorizeArtistic_gen.pth"))
        colorizer = get_image_colorizer(artistic=True)
    elif option.lower() == 'stable':
        model_url = "https://www.dropbox.com/s/usf7uifrctqw9rl/ColorizeStable_gen.pth?dl=0"
        get_model_bin(model_url, os.path.join(model_dir, "ColorizeStable_gen.pth"))
        colorizer = get_image_colorizer(artistic=False)

    return colorizer


def resize_img(input_img, max_size):
    """Downscale so the longer side equals max_size, preserving aspect ratio."""
    img = input_img.copy()
    img_height, img_width = img.shape[0], img.shape[1]

    if max(img_height, img_width) > max_size:
        if img_height > img_width:
            new_height = max_size
            new_width = img_width * (max_size / img_height)
        else:
            # Width-dominant case: cap the width and scale the height.
            # (The original swapped these two values, distorting landscape images.)
            new_width = max_size
            new_height = img_height * (max_size / img_width)
        return cv2.resize(img, (int(new_width), int(new_height)))

    return img


def colorize_image(pil_image, img_size=800) -> "PIL.Image":
    # Open the image
    pil_img = pil_image.convert("RGB")
    img_rgb = np.array(pil_img)
    resized_img_rgb = resize_img(img_rgb, img_size)
    resized_pil_img = PIL.Image.fromarray(resized_img_rgb)

    # Send the image to the model
    output_pil_img = colorizer.plot_transformed_pil_image(resized_pil_img, render_factor=35, compare=False)

    return output_pil_img


def image_download_button(pil_image, filename: str, fmt: str, label="Download"):
    if fmt not in ["jpg", "png"]:
        raise Exception("Unknown image format (available: jpg, png - case sensitive)")

    pil_format = "JPEG" if fmt == "jpg" else "PNG"
    mime = "image/jpeg" if fmt == "jpg" else "image/png"

    buf = BytesIO()
    pil_image.save(buf, format=pil_format)

    return st.download_button(
        label=label,
        data=buf.getvalue(),
        file_name=f'{filename}.{fmt}',
        mime=mime,
    )


###########################
###### STREAMLIT CODE #####
###########################


st_color_option = "Artistic"

# Load models
try:
    with st.spinner("Loading..."):
        print('before loading the model')
        colorizer = load_model('models/', st_color_option)
        print('after loading the model')

except Exception as e:
    colorizer = None
    print('Error while loading the model. Please refresh the page')
    print(e)
    st.write("**App loading error. Please try again later.**")


if colorizer is not None:
    st.title("Digital Photo Color Restoration")

    uploaded_file = st.file_uploader("Upload photo", accept_multiple_files=False, type=["png", "jpg", "jpeg"])

    if uploaded_file is not None:
        bytes_data = uploaded_file.getvalue()
        img_input = PIL.Image.open(BytesIO(bytes_data)).convert("RGB")

        with st.expander("Original photo", True):
            st.image(img_input)

        if st.button("Restore Color!"):

            with st.spinner("AI is doing the magic!"):
                img_output = colorize_image(img_input)
                img_output = img_output.resize(img_input.size)

            # NOTE: Calm! I'm not logging the input and outputs.
            # It is impossible to access the filesystem in spaces environment.
            now = datetime.now().strftime("%Y%m%d-%H%M%S-%f")
            img_input.convert("RGB").save(f"./output/{now}-input.jpg")
            img_output.convert("RGB").save(f"./output/{now}-output.jpg")

            st.write("AI has finished the job!")
            st.image(img_output)
            # reuse = st.button('Edit again (Re-use this image)', on_click=set_image, args=(inpainted_img, ))

            uploaded_name = os.path.splitext(uploaded_file.name)[0]
            image_download_button(
                pil_image=img_output,
                filename=uploaded_name,
                fmt="jpg",
                label="Download Image"
            )
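
Given the corrected resize_img above, a quick shape check (the array sizes are arbitrary; this assumes the function as rewritten here plus the NumPy and OpenCV imports the app already makes):

import numpy as np

tall = np.zeros((1600, 800, 3), dtype=np.uint8)   # height-dominant image
wide = np.zeros((800, 1600, 3), dtype=np.uint8)   # width-dominant image
print(resize_img(tall, 800).shape)                # (800, 400, 3): height capped at 800
print(resize_img(wide, 800).shape)                # (400, 800, 3): width capped at 800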
spaces/AriaMei/TTSdemo/models.py
DELETED
@@ -1,537 +0,0 @@
import copy
import math
import torch
from torch import nn
from torch.nn import functional as F

import commons
import modules
import attentions

try:
    import monotonic_align  # used only by SynthesizerTrn.forward (training-time alignment search)
except ImportError:
    monotonic_align = None  # inference-only deployments may omit the compiled extension

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
from commons import init_weights, get_padding


class StochasticDurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
        super().__init__()
        filter_channels = in_channels  # it needs to be removed from future version.
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.log_flow = modules.Log()
        self.flows = nn.ModuleList()
        self.flows.append(modules.ElementwiseAffine(2))
        for i in range(n_flows):
            self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.flows.append(modules.Flip())

        self.post_pre = nn.Conv1d(1, filter_channels, 1)
        self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        self.post_flows = nn.ModuleList()
        self.post_flows.append(modules.ElementwiseAffine(2))
        for i in range(4):
            self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.post_flows.append(modules.Flip())

        self.pre = nn.Conv1d(in_channels, filter_channels, 1)
        self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, filter_channels, 1)

    def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
        x = torch.detach(x)
        x = self.pre(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.convs(x, x_mask)
        x = self.proj(x) * x_mask

        if not reverse:
            flows = self.flows
            assert w is not None

            logdet_tot_q = 0
            h_w = self.post_pre(w)
            h_w = self.post_convs(h_w, x_mask)
            h_w = self.post_proj(h_w) * x_mask
            e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
            z_q = e_q
            for flow in self.post_flows:
                z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
                logdet_tot_q += logdet_q
            z_u, z1 = torch.split(z_q, [1, 1], 1)
            u = torch.sigmoid(z_u) * x_mask
            z0 = (w - u) * x_mask
            logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
            logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q

            logdet_tot = 0
            z0, logdet = self.log_flow(z0, x_mask)
            logdet_tot += logdet
            z = torch.cat([z0, z1], 1)
            for flow in flows:
                z, logdet = flow(z, x_mask, g=x, reverse=reverse)
                logdet_tot = logdet_tot + logdet
            nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
            return nll + logq  # [b]
        else:
            flows = list(reversed(self.flows))
            flows = flows[:-2] + [flows[-1]]  # remove a useless vflow
            z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
            for flow in flows:
                z = flow(z, x_mask, g=x, reverse=reverse)
            z0, z1 = torch.split(z, [1, 1], 1)
            logw = z0
            return logw


class DurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
        super().__init__()

        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels

        self.drop = nn.Dropout(p_dropout)
        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_1 = modules.LayerNorm(filter_channels)
        self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_2 = modules.LayerNorm(filter_channels)
        self.proj = nn.Conv1d(filter_channels, 1, 1)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, in_channels, 1)

    def forward(self, x, x_mask, g=None):
        x = torch.detach(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.norm_1(x)
        x = self.drop(x)
        x = self.conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.norm_2(x)
        x = self.drop(x)
        x = self.proj(x * x_mask)
        return x * x_mask


class TextEncoder(nn.Module):
    def __init__(self,
                 n_vocab,
                 out_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout):
        super().__init__()
        self.n_vocab = n_vocab
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout

        self.emb = nn.Embedding(n_vocab, hidden_channels)
        self.emo_proj = nn.Linear(1024, hidden_channels)

        nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)

        self.encoder = attentions.Encoder(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, emo):
        x = self.emb(x) * math.sqrt(self.hidden_channels)  # [b, t, h]
        x = x + self.emo_proj(emo.unsqueeze(1))
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)

        x = self.encoder(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return x, m, logs, x_mask


class ResidualCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 n_flows=4,
                 gin_channels=0):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class PosteriorEncoder(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 gin_channels=0):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class Generator(torch.nn.Module):
    def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
        resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(weight_norm(
                ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
                                k, u, padding=(k - u) // 2)))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x

    def remove_weight_norm(self):
        print('Removing weight norm...')
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        self.convs = nn.ModuleList([
            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
        ])
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        self.convs = nn.ModuleList([
            norm_f(Conv1d(1, 16, 15, 1, padding=7)),
            norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
            norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
            norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
        ])
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminator, self).__init__()
        periods = [2, 3, 5, 7, 11]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class SynthesizerTrn(nn.Module):
    """
    Synthesizer for Training
    """

    def __init__(self,
                 n_vocab,
                 spec_channels,
                 segment_size,
                 inter_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 resblock,
                 resblock_kernel_sizes,
                 resblock_dilation_sizes,
                 upsample_rates,
                 upsample_initial_channel,
                 upsample_kernel_sizes,
                 n_speakers=0,
                 gin_channels=0,
                 use_sdp=True,
                 **kwargs):

        super().__init__()
        self.n_vocab = n_vocab
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.n_speakers = n_speakers
        self.gin_channels = gin_channels

        self.use_sdp = use_sdp

        self.enc_p = TextEncoder(n_vocab,
                                 inter_channels,
                                 hidden_channels,
                                 filter_channels,
                                 n_heads,
                                 n_layers,
                                 kernel_size,
                                 p_dropout)
        self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
        self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
        self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)

        if use_sdp:
            self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
        else:
            self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)

        if n_speakers > 1:
            self.emb_g = nn.Embedding(n_speakers, gin_channels)

    def forward(self, x, x_lengths, y, y_lengths, sid=None, emo=None):

        x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emo)
        if self.n_speakers > 0:
            g = self.emb_g(sid).unsqueeze(-1)  # [b, h, 1]
        else:
            g = None

        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
        z_p = self.flow(z, y_mask, g=g)

        with torch.no_grad():
            # negative cross-entropy
            s_p_sq_r = torch.exp(-2 * logs_p)  # [b, d, t]
            neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True)  # [b, 1, t_s]
            neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r)  # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
            neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r))  # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
            neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True)  # [b, 1, t_s]
            neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4

            attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
            attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()

        w = attn.sum(2)
        if self.use_sdp:
            l_length = self.dp(x, x_mask, w, g=g)
            l_length = l_length / torch.sum(x_mask)
        else:
            logw_ = torch.log(w + 1e-6) * x_mask
            logw = self.dp(x, x_mask, g=g)
            l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask)  # for averaging

        # expand prior
        m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
        logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)

        z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
        o = self.dec(z_slice, g=g)
        return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)

    def infer(self, x, x_lengths, sid=None, emo=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
        x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emo)
        if self.n_speakers > 0:
            g = self.emb_g(sid).unsqueeze(-1)  # [b, h, 1]
        else:
            g = None

        if self.use_sdp:
            logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
        else:
            logw = self.dp(x, x_mask, g=g)
        w = torch.exp(logw) * x_mask * length_scale
        w_ceil = torch.ceil(w)
        y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
        y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
        attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
        attn = commons.generate_path(w_ceil, attn_mask)

        m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
        logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']

        z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
        z = self.flow(z_p, y_mask, g=g, reverse=True)
        o = self.dec((z * y_mask)[:, :, :max_len], g=g)
        return o, attn, y_mask, (z, z_p, m_p, logs_p)

    def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
        assert self.n_speakers > 0, "n_speakers have to be larger than 0."
        g_src = self.emb_g(sid_src).unsqueeze(-1)
        g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
        z_p = self.flow(z, y_mask, g=g_src)
        z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
        o_hat = self.dec(z_hat * y_mask, g=g_tgt)
        return o_hat, y_mask, (z, z_p, z_hat)
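
For orientation, a hedged sketch of text-to-speech inference with SynthesizerTrn. Every hyperparameter below is a hypothetical placeholder (the Space's real values come from its JSON config, which this diff does not include), and the sketch runs only where the repo's commons, modules, and attentions helpers are importable:

import torch

net_g = SynthesizerTrn(
    n_vocab=100, spec_channels=513, segment_size=32,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock="1", resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2], upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
    n_speakers=2, gin_channels=256,
).eval()

phoneme_ids = torch.LongTensor([[1, 5, 9, 2]])     # [b, t] symbol ids (illustrative)
lengths = torch.LongTensor([phoneme_ids.size(1)])
sid = torch.LongTensor([0])                        # speaker id
emo = torch.randn(1, 1024)                         # emotion vector; emo_proj expects 1024-d input
with torch.no_grad():
    audio, *_ = net_g.infer(phoneme_ids, lengths, sid=sid, emo=emo,
                            noise_scale=0.667, length_scale=1.0, noise_scale_w=0.8)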
spaces/Atualli/yoloxTeste/configs/yolov3.py
DELETED
@@ -1,33 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.

import os

import torch.nn as nn

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.0
        self.width = 1.0
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

    def get_model(self, sublinear=False):
        def init_yolo(M):
            for m in M.modules():
                if isinstance(m, nn.BatchNorm2d):
                    m.eps = 1e-3
                    m.momentum = 0.03

        if "model" not in self.__dict__:
            from yolox.models import YOLOX, YOLOFPN, YOLOXHead
            backbone = YOLOFPN()
            head = YOLOXHead(self.num_classes, self.width, in_channels=[128, 256, 512], act="lrelu")
            self.model = YOLOX(backbone, head)
        self.model.apply(init_yolo)
        self.model.head.initialize_biases(1e-2)

        return self.model
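
A sketch of how YOLOX experiment files like this one are typically consumed. It assumes the yolox package is installed and that num_classes is inherited from the base Exp (80 for COCO); neither assumption is verifiable from this diff alone:

import torch

exp = Exp()
model = exp.get_model()
model.eval()                          # eval mode so BatchNorm uses running stats
dummy = torch.randn(1, 3, 640, 640)   # 640x640 is a common YOLOX input size
with torch.no_grad():
    outputs = model(dummy)            # decoded predictions in eval mode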
spaces/Audio-AGI/AudioSep/models/CLAP/__init__.py
DELETED
File without changes
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/roi_heads.py
DELETED
@@ -1,877 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import inspect
import logging
import numpy as np
from typing import Dict, List, Optional, Tuple
import torch
from torch import nn

from detectron2.config import configurable
from detectron2.layers import ShapeSpec, nonzero_tuple
from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
from detectron2.utils.events import get_event_storage
from detectron2.utils.registry import Registry

from ..backbone.resnet import BottleneckBlock, ResNet
from ..matcher import Matcher
from ..poolers import ROIPooler
from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals
from ..sampling import subsample_labels
from .box_head import build_box_head
from .fast_rcnn import FastRCNNOutputLayers
from .keypoint_head import build_keypoint_head
from .mask_head import build_mask_head

ROI_HEADS_REGISTRY = Registry("ROI_HEADS")
ROI_HEADS_REGISTRY.__doc__ = """
Registry for ROI heads in a generalized R-CNN model.
ROIHeads take feature maps and region proposals, and
perform per-region computation.

The registered object will be called with `obj(cfg, input_shape)`.
The call is expected to return an :class:`ROIHeads`.
"""

logger = logging.getLogger(__name__)


def build_roi_heads(cfg, input_shape):
    """
    Build ROIHeads defined by `cfg.MODEL.ROI_HEADS.NAME`.
    """
    name = cfg.MODEL.ROI_HEADS.NAME
    return ROI_HEADS_REGISTRY.get(name)(cfg, input_shape)


def select_foreground_proposals(
    proposals: List[Instances], bg_label: int
) -> Tuple[List[Instances], List[torch.Tensor]]:
    """
    Given a list of N Instances (for N images), each containing a `gt_classes` field,
    return a list of Instances that contain only instances with `gt_classes != -1 &&
    gt_classes != bg_label`.

    Args:
        proposals (list[Instances]): A list of N Instances, where N is the number of
            images in the batch.
        bg_label: label index of background class.

    Returns:
        list[Instances]: N Instances, each containing only the selected foreground instances.
        list[Tensor]: N boolean vectors, corresponding to the selection mask of
            each Instances object. True for selected instances.
    """
    assert isinstance(proposals, (list, tuple))
    assert isinstance(proposals[0], Instances)
    assert proposals[0].has("gt_classes")
    fg_proposals = []
    fg_selection_masks = []
    for proposals_per_image in proposals:
        gt_classes = proposals_per_image.gt_classes
        fg_selection_mask = (gt_classes != -1) & (gt_classes != bg_label)
        fg_idxs = fg_selection_mask.nonzero().squeeze(1)
        fg_proposals.append(proposals_per_image[fg_idxs])
        fg_selection_masks.append(fg_selection_mask)
    return fg_proposals, fg_selection_masks


def select_proposals_with_visible_keypoints(proposals: List[Instances]) -> List[Instances]:
    """
    Args:
        proposals (list[Instances]): a list of N Instances, where N is the
            number of images.

    Returns:
        proposals: only contains proposals with at least one visible keypoint.

    Note that this is still slightly different from Detectron.
    In Detectron, proposals for training keypoint head are re-sampled from
    all the proposals with IOU>threshold & >=1 visible keypoint.

    Here, the proposals are first sampled from all proposals with
    IOU>threshold, then proposals with no visible keypoint are filtered out.
    This strategy seems to make no difference on Detectron and is easier to implement.
    """
    ret = []
    all_num_fg = []
    for proposals_per_image in proposals:
        # If empty/unannotated image (hard negatives), skip filtering for train
        if len(proposals_per_image) == 0:
            ret.append(proposals_per_image)
            continue
        gt_keypoints = proposals_per_image.gt_keypoints.tensor
        # #fg x K x 3
        vis_mask = gt_keypoints[:, :, 2] >= 1
        xs, ys = gt_keypoints[:, :, 0], gt_keypoints[:, :, 1]
        proposal_boxes = proposals_per_image.proposal_boxes.tensor.unsqueeze(dim=1)  # #fg x 1 x 4
        kp_in_box = (
            (xs >= proposal_boxes[:, :, 0])
            & (xs <= proposal_boxes[:, :, 2])
            & (ys >= proposal_boxes[:, :, 1])
            & (ys <= proposal_boxes[:, :, 3])
        )
        selection = (kp_in_box & vis_mask).any(dim=1)
        selection_idxs = nonzero_tuple(selection)[0]
        all_num_fg.append(selection_idxs.numel())
        ret.append(proposals_per_image[selection_idxs])

    storage = get_event_storage()
    storage.put_scalar("keypoint_head/num_fg_samples", np.mean(all_num_fg))
    return ret


class ROIHeads(torch.nn.Module):
    """
    ROIHeads perform all per-region computation in an R-CNN.

    It typically contains logic to

    1. (in training only) match proposals with ground truth and sample them
    2. crop the regions and extract per-region features using proposals
    3. make per-region predictions with different heads

    It can have many variants, implemented as subclasses of this class.
    This base class contains the logic to match/sample proposals.
    But it is not necessary to inherit this class if the sampling logic is not needed.
    """

    @configurable
    def __init__(
        self,
        *,
        num_classes,
        batch_size_per_image,
        positive_fraction,
        proposal_matcher,
        proposal_append_gt=True,
    ):
        """
        NOTE: this interface is experimental.

        Args:
            num_classes (int): number of foreground classes (i.e. background is not included)
            batch_size_per_image (int): number of proposals to sample for training
            positive_fraction (float): fraction of positive (foreground) proposals
                to sample for training.
            proposal_matcher (Matcher): matcher that matches proposals and ground truth
            proposal_append_gt (bool): whether to include ground truth as proposals as well
        """
        super().__init__()
        self.batch_size_per_image = batch_size_per_image
        self.positive_fraction = positive_fraction
        self.num_classes = num_classes
        self.proposal_matcher = proposal_matcher
        self.proposal_append_gt = proposal_append_gt

    @classmethod
    def from_config(cls, cfg):
        return {
            "batch_size_per_image": cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE,
            "positive_fraction": cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION,
            "num_classes": cfg.MODEL.ROI_HEADS.NUM_CLASSES,
            "proposal_append_gt": cfg.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT,
            # Matcher to assign box proposals to gt boxes
            "proposal_matcher": Matcher(
                cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS,
                cfg.MODEL.ROI_HEADS.IOU_LABELS,
                allow_low_quality_matches=False,
            ),
        }

    def _sample_proposals(
        self, matched_idxs: torch.Tensor, matched_labels: torch.Tensor, gt_classes: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Based on the matching between N proposals and M groundtruth,
        sample the proposals and set their classification labels.

        Args:
            matched_idxs (Tensor): a vector of length N, each is the best-matched
                gt index in [0, M) for each proposal.
            matched_labels (Tensor): a vector of length N, the matcher's label
                (one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal.
            gt_classes (Tensor): a vector of length M.

        Returns:
            Tensor: a vector of indices of sampled proposals. Each is in [0, N).
            Tensor: a vector of the same length, the classification label for
                each sampled proposal. Each sample is labeled as either a category in
                [0, num_classes) or the background (num_classes).
        """
        has_gt = gt_classes.numel() > 0
        # Get the corresponding GT for each proposal
        if has_gt:
            gt_classes = gt_classes[matched_idxs]
            # Label unmatched proposals (0 label from matcher) as background (label=num_classes)
            gt_classes[matched_labels == 0] = self.num_classes
            # Label ignore proposals (-1 label)
            gt_classes[matched_labels == -1] = -1
        else:
            gt_classes = torch.zeros_like(matched_idxs) + self.num_classes

        sampled_fg_idxs, sampled_bg_idxs = subsample_labels(
            gt_classes, self.batch_size_per_image, self.positive_fraction, self.num_classes
        )

        sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0)
        return sampled_idxs, gt_classes[sampled_idxs]

    @torch.no_grad()
    def label_and_sample_proposals(
        self, proposals: List[Instances], targets: List[Instances]
    ) -> List[Instances]:
        """
        Prepare some proposals to be used to train the ROI heads.
        It performs box matching between `proposals` and `targets`, and assigns
        training labels to the proposals.
        It returns ``self.batch_size_per_image`` random samples from proposals and groundtruth
        boxes, with a fraction of positives that is no larger than
        ``self.positive_fraction``.

        Args:
            See :meth:`ROIHeads.forward`

        Returns:
            list[Instances]:
                length `N` list of `Instances`s containing the proposals
                sampled for training. Each `Instances` has the following fields:

                - proposal_boxes: the proposal boxes
                - gt_boxes: the ground-truth box that the proposal is assigned to
                  (this is only meaningful if the proposal has a label > 0; if label = 0
                  then the ground-truth box is random)

                Other fields such as "gt_classes", "gt_masks", that's included in `targets`.
        """
        # Augment proposals with ground-truth boxes.
        # In the case of learned proposals (e.g., RPN), when training starts
        # the proposals will be low quality due to random initialization.
        # It's possible that none of these initial
        # proposals have high enough overlap with the gt objects to be used
        # as positive examples for the second stage components (box head,
        # cls head, mask head). Adding the gt boxes to the set of proposals
        # ensures that the second stage components will have some positive
        # examples from the start of training. For RPN, this augmentation improves
        # convergence and empirically improves box AP on COCO by about 0.5
        # points (under one tested configuration).
        if self.proposal_append_gt:
            proposals = add_ground_truth_to_proposals(targets, proposals)

        proposals_with_gt = []

        num_fg_samples = []
        num_bg_samples = []
        for proposals_per_image, targets_per_image in zip(proposals, targets):
            has_gt = len(targets_per_image) > 0
            match_quality_matrix = pairwise_iou(
                targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
            )
            matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
            sampled_idxs, gt_classes = self._sample_proposals(
                matched_idxs, matched_labels, targets_per_image.gt_classes
            )

            # Set target attributes of the sampled proposals:
            proposals_per_image = proposals_per_image[sampled_idxs]
            proposals_per_image.gt_classes = gt_classes

            if has_gt:
                sampled_targets = matched_idxs[sampled_idxs]
                # We index all the attributes of targets that start with "gt_"
                # and have not been added to proposals yet (="gt_classes").
                # NOTE: here the indexing wastes some compute, because heads
                # like masks, keypoints, etc, will filter the proposals again,
                # (by foreground/background, or number of keypoints in the image, etc)
                # so we essentially index the data twice.
                for (trg_name, trg_value) in targets_per_image.get_fields().items():
                    if trg_name.startswith("gt_") and not proposals_per_image.has(trg_name):
                        proposals_per_image.set(trg_name, trg_value[sampled_targets])
            # If no GT is given in the image, we don't know what a dummy gt value can be.
            # Therefore the returned proposals won't have any gt_* fields, except for a
            # gt_classes full of background label.

            num_bg_samples.append((gt_classes == self.num_classes).sum().item())
            num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
            proposals_with_gt.append(proposals_per_image)

        # Log the number of fg/bg samples that are selected for training ROI heads
        storage = get_event_storage()
        storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
        storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))

        return proposals_with_gt

    def forward(
        self,
        images: ImageList,
        features: Dict[str, torch.Tensor],
        proposals: List[Instances],
        targets: Optional[List[Instances]] = None,
    ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
        """
        Args:
            images (ImageList):
            features (dict[str,Tensor]): input data as a mapping from feature
                map name to tensor. Axis 0 represents the number of images `N` in
                the input data; axes 1-3 are channels, height, and width, which may
                vary between feature maps (e.g., if a feature pyramid is used).
            proposals (list[Instances]): length `N` list of `Instances`. The i-th
                `Instances` contains object proposals for the i-th input image,
                with fields "proposal_boxes" and "objectness_logits".
            targets (list[Instances], optional): length `N` list of `Instances`. The i-th
                `Instances` contains the ground-truth per-instance annotations
                for the i-th input image. Specify `targets` during training only.
                It may have the following fields:

                - gt_boxes: the bounding box of each instance.
                - gt_classes: the label for each instance with a category ranging in [0, #class].
                - gt_masks: PolygonMasks or BitMasks, the ground-truth masks of each instance.
                - gt_keypoints: NxKx3, the ground-truth keypoints for each instance.

        Returns:
            list[Instances]: length `N` list of `Instances` containing the
            detected instances. Returned during inference only; may be [] during training.

            dict[str->Tensor]:
            mapping from a named loss to a tensor storing the loss. Used during training only.
        """
        raise NotImplementedError()


@ROI_HEADS_REGISTRY.register()
class Res5ROIHeads(ROIHeads):
    """
    The ROIHeads in a typical "C4" R-CNN model, where
    the box and mask head share the cropping and
    the per-region feature computation by a Res5 block.
    See :paper:`ResNet` Appendix A.
    """

    @configurable
    def __init__(
        self,
        *,
        in_features: List[str],
        pooler: ROIPooler,
        res5: nn.Module,
        box_predictor: nn.Module,
        mask_head: Optional[nn.Module] = None,
        **kwargs,
    ):
        """
        NOTE: this interface is experimental.

        Args:
            in_features (list[str]): list of backbone feature map names to use for
                feature extraction
            pooler (ROIPooler): pooler to extract region features from backbone
            res5 (nn.Sequential): a CNN to compute per-region features, to be used by
                ``box_predictor`` and ``mask_head``. Typically this is a "res5"
                block from a ResNet.
            box_predictor (nn.Module): make box predictions from the feature.
                Should have the same interface as :class:`FastRCNNOutputLayers`.
            mask_head (nn.Module): transform features to make mask predictions
        """
        super().__init__(**kwargs)
        self.in_features = in_features
        self.pooler = pooler
        if isinstance(res5, (list, tuple)):
            res5 = nn.Sequential(*res5)
        self.res5 = res5
        self.box_predictor = box_predictor
        self.mask_on = mask_head is not None
        if self.mask_on:
            self.mask_head = mask_head

    @classmethod
    def from_config(cls, cfg, input_shape):
        # fmt: off
        ret = super().from_config(cfg)
        in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES
        pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
        pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
        pooler_scales = (1.0 / input_shape[in_features[0]].stride, )
        sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
        mask_on = cfg.MODEL.MASK_ON
        # fmt: on
        assert not cfg.MODEL.KEYPOINT_ON
        assert len(in_features) == 1

        ret["pooler"] = ROIPooler(
            output_size=pooler_resolution,
            scales=pooler_scales,
            sampling_ratio=sampling_ratio,
            pooler_type=pooler_type,
        )

        # Compatibility with old moco code. Might be useful.
        # See notes in StandardROIHeads.from_config
        if not inspect.ismethod(cls._build_res5_block):
            logger.warning(
                "The behavior of _build_res5_block may change. "
                "Please do not depend on private methods."
            )
            cls._build_res5_block = classmethod(cls._build_res5_block)

        ret["res5"], out_channels = cls._build_res5_block(cfg)
        ret["box_predictor"] = FastRCNNOutputLayers(
            cfg, ShapeSpec(channels=out_channels, height=1, width=1)
        )

        if mask_on:
            ret["mask_head"] = build_mask_head(
                cfg,
                ShapeSpec(channels=out_channels, width=pooler_resolution, height=pooler_resolution),
            )
        return ret

    @classmethod
    def _build_res5_block(cls, cfg):
        # fmt: off
        stage_channel_factor = 2 ** 3  # res5 is 8x res2
        num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
        width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
        bottleneck_channels = num_groups * width_per_group * stage_channel_factor
        out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor
        stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1
        norm = cfg.MODEL.RESNETS.NORM
        assert not cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE[-1], \
            "Deformable conv is not yet supported in res5 head."
        # fmt: on

        blocks = ResNet.make_stage(
            BottleneckBlock,
            3,
            stride_per_block=[2, 1, 1],
            in_channels=out_channels // 2,
            bottleneck_channels=bottleneck_channels,
            out_channels=out_channels,
            num_groups=num_groups,
            norm=norm,
            stride_in_1x1=stride_in_1x1,
        )
        return nn.Sequential(*blocks), out_channels

    def _shared_roi_transform(self, features: List[torch.Tensor], boxes: List[Boxes]):
        x = self.pooler(features, boxes)
        return self.res5(x)

    def forward(
        self,
        images: ImageList,
        features: Dict[str, torch.Tensor],
        proposals: List[Instances],
        targets: Optional[List[Instances]] = None,
    ):
        """
        See :meth:`ROIHeads.forward`.
        """
        del images

        if self.training:
            assert targets
            proposals = self.label_and_sample_proposals(proposals, targets)
        del targets

        proposal_boxes = [x.proposal_boxes for x in proposals]
        box_features = self._shared_roi_transform(
            [features[f] for f in self.in_features], proposal_boxes
        )
        predictions = self.box_predictor(box_features.mean(dim=[2, 3]))

        if self.training:
            del features
            losses = self.box_predictor.losses(predictions, proposals)
            if self.mask_on:
                proposals, fg_selection_masks = select_foreground_proposals(
                    proposals, self.num_classes
                )
                # Since the ROI feature transform is shared between boxes and masks,
                # we don't need to recompute features. The mask loss is only defined
                # on foreground proposals, so we need to select out the foreground
                # features.
                mask_features = box_features[torch.cat(fg_selection_masks, dim=0)]
                del box_features
                losses.update(self.mask_head(mask_features, proposals))
            return [], losses
        else:
            pred_instances, _ = self.box_predictor.inference(predictions, proposals)
            pred_instances = self.forward_with_given_boxes(features, pred_instances)
            return pred_instances, {}

    def forward_with_given_boxes(
        self, features: Dict[str, torch.Tensor], instances: List[Instances]
    ) -> List[Instances]:
        """
        Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.

        Args:
            features: same as in `forward()`
            instances (list[Instances]): instances to predict other outputs. Expect the keys
                "pred_boxes" and "pred_classes" to exist.

        Returns:
            instances (Instances):
                the same `Instances` object, with extra
                fields such as `pred_masks` or `pred_keypoints`.
        """
        assert not self.training
        assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")

        if self.mask_on:
            feature_list = [features[f] for f in self.in_features]
            x = self._shared_roi_transform(feature_list, [x.pred_boxes for x in instances])
            return self.mask_head(x, instances)
        else:
            return instances


@ROI_HEADS_REGISTRY.register()
class StandardROIHeads(ROIHeads):
    """
    It's "standard" in a sense that there is no ROI transform sharing
    or feature sharing between tasks.
    Each head independently processes the input features by each head's
    own pooler and head.

    This class is used by most models, such as FPN and C5.
    To implement more models, you can subclass it and implement a different
    :meth:`forward()` or a head.
    """

    @configurable
    def __init__(
        self,
        *,
        box_in_features: List[str],
        box_pooler: ROIPooler,
        box_head: nn.Module,
        box_predictor: nn.Module,
        mask_in_features: Optional[List[str]] = None,
        mask_pooler: Optional[ROIPooler] = None,
        mask_head: Optional[nn.Module] = None,
|
553 |
-
keypoint_in_features: Optional[List[str]] = None,
|
554 |
-
keypoint_pooler: Optional[ROIPooler] = None,
|
555 |
-
keypoint_head: Optional[nn.Module] = None,
|
556 |
-
train_on_pred_boxes: bool = False,
|
557 |
-
**kwargs,
|
558 |
-
):
|
559 |
-
"""
|
560 |
-
NOTE: this interface is experimental.
|
561 |
-
|
562 |
-
Args:
|
563 |
-
box_in_features (list[str]): list of feature names to use for the box head.
|
564 |
-
box_pooler (ROIPooler): pooler to extra region features for box head
|
565 |
-
box_head (nn.Module): transform features to make box predictions
|
566 |
-
box_predictor (nn.Module): make box predictions from the feature.
|
567 |
-
Should have the same interface as :class:`FastRCNNOutputLayers`.
|
568 |
-
mask_in_features (list[str]): list of feature names to use for the mask
|
569 |
-
pooler or mask head. None if not using mask head.
|
570 |
-
mask_pooler (ROIPooler): pooler to extract region features from image features.
|
571 |
-
The mask head will then take region features to make predictions.
|
572 |
-
If None, the mask head will directly take the dict of image features
|
573 |
-
defined by `mask_in_features`
|
574 |
-
mask_head (nn.Module): transform features to make mask predictions
|
575 |
-
keypoint_in_features, keypoint_pooler, keypoint_head: similar to ``mask_*``.
|
576 |
-
train_on_pred_boxes (bool): whether to use proposal boxes or
|
577 |
-
predicted boxes from the box head to train other heads.
|
578 |
-
"""
|
579 |
-
super().__init__(**kwargs)
|
580 |
-
# keep self.in_features for backward compatibility
|
581 |
-
self.in_features = self.box_in_features = box_in_features
|
582 |
-
self.box_pooler = box_pooler
|
583 |
-
self.box_head = box_head
|
584 |
-
self.box_predictor = box_predictor
|
585 |
-
|
586 |
-
self.mask_on = mask_in_features is not None
|
587 |
-
if self.mask_on:
|
588 |
-
self.mask_in_features = mask_in_features
|
589 |
-
self.mask_pooler = mask_pooler
|
590 |
-
self.mask_head = mask_head
|
591 |
-
|
592 |
-
self.keypoint_on = keypoint_in_features is not None
|
593 |
-
if self.keypoint_on:
|
594 |
-
self.keypoint_in_features = keypoint_in_features
|
595 |
-
self.keypoint_pooler = keypoint_pooler
|
596 |
-
self.keypoint_head = keypoint_head
|
597 |
-
|
598 |
-
self.train_on_pred_boxes = train_on_pred_boxes
|
599 |
-
|
600 |
-
@classmethod
|
601 |
-
def from_config(cls, cfg, input_shape):
|
602 |
-
ret = super().from_config(cfg)
|
603 |
-
ret["train_on_pred_boxes"] = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES
|
604 |
-
# Subclasses that have not been updated to use from_config style construction
|
605 |
-
# may have overridden _init_*_head methods. In this case, those overridden methods
|
606 |
-
# will not be classmethods and we need to avoid trying to call them here.
|
607 |
-
# We test for this with ismethod which only returns True for bound methods of cls.
|
608 |
-
# Such subclasses will need to handle calling their overridden _init_*_head methods.
|
609 |
-
if inspect.ismethod(cls._init_box_head):
|
610 |
-
ret.update(cls._init_box_head(cfg, input_shape))
|
611 |
-
if inspect.ismethod(cls._init_mask_head):
|
612 |
-
ret.update(cls._init_mask_head(cfg, input_shape))
|
613 |
-
if inspect.ismethod(cls._init_keypoint_head):
|
614 |
-
ret.update(cls._init_keypoint_head(cfg, input_shape))
|
615 |
-
return ret
|
616 |
-
|
617 |
-
@classmethod
|
618 |
-
def _init_box_head(cls, cfg, input_shape):
|
619 |
-
# fmt: off
|
620 |
-
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
|
621 |
-
pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
|
622 |
-
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
|
623 |
-
sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
|
624 |
-
pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
|
625 |
-
# fmt: on
|
626 |
-
|
627 |
-
# If StandardROIHeads is applied on multiple feature maps (as in FPN),
|
628 |
-
# then we share the same predictors and therefore the channel counts must be the same
|
629 |
-
in_channels = [input_shape[f].channels for f in in_features]
|
630 |
-
# Check all channel counts are equal
|
631 |
-
assert len(set(in_channels)) == 1, in_channels
|
632 |
-
in_channels = in_channels[0]
|
633 |
-
|
634 |
-
box_pooler = ROIPooler(
|
635 |
-
output_size=pooler_resolution,
|
636 |
-
scales=pooler_scales,
|
637 |
-
sampling_ratio=sampling_ratio,
|
638 |
-
pooler_type=pooler_type,
|
639 |
-
)
|
640 |
-
# Here we split "box head" and "box predictor", which is mainly due to historical reasons.
|
641 |
-
# They are used together so the "box predictor" layers should be part of the "box head".
|
642 |
-
# New subclasses of ROIHeads do not need "box predictor"s.
|
643 |
-
box_head = build_box_head(
|
644 |
-
cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
|
645 |
-
)
|
646 |
-
box_predictor = FastRCNNOutputLayers(cfg, box_head.output_shape)
|
647 |
-
return {
|
648 |
-
"box_in_features": in_features,
|
649 |
-
"box_pooler": box_pooler,
|
650 |
-
"box_head": box_head,
|
651 |
-
"box_predictor": box_predictor,
|
652 |
-
}
|
653 |
-
|
654 |
-
@classmethod
|
655 |
-
def _init_mask_head(cls, cfg, input_shape):
|
656 |
-
if not cfg.MODEL.MASK_ON:
|
657 |
-
return {}
|
658 |
-
# fmt: off
|
659 |
-
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
|
660 |
-
pooler_resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION
|
661 |
-
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
|
662 |
-
sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO
|
663 |
-
pooler_type = cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE
|
664 |
-
# fmt: on
|
665 |
-
|
666 |
-
in_channels = [input_shape[f].channels for f in in_features][0]
|
667 |
-
|
668 |
-
ret = {"mask_in_features": in_features}
|
669 |
-
ret["mask_pooler"] = (
|
670 |
-
ROIPooler(
|
671 |
-
output_size=pooler_resolution,
|
672 |
-
scales=pooler_scales,
|
673 |
-
sampling_ratio=sampling_ratio,
|
674 |
-
pooler_type=pooler_type,
|
675 |
-
)
|
676 |
-
if pooler_type
|
677 |
-
else None
|
678 |
-
)
|
679 |
-
if pooler_type:
|
680 |
-
shape = ShapeSpec(
|
681 |
-
channels=in_channels, width=pooler_resolution, height=pooler_resolution
|
682 |
-
)
|
683 |
-
else:
|
684 |
-
shape = {f: input_shape[f] for f in in_features}
|
685 |
-
ret["mask_head"] = build_mask_head(cfg, shape)
|
686 |
-
return ret
|
687 |
-
|
688 |
-
@classmethod
|
689 |
-
def _init_keypoint_head(cls, cfg, input_shape):
|
690 |
-
if not cfg.MODEL.KEYPOINT_ON:
|
691 |
-
return {}
|
692 |
-
# fmt: off
|
693 |
-
in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
|
694 |
-
pooler_resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION
|
695 |
-
pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) # noqa
|
696 |
-
sampling_ratio = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO
|
697 |
-
pooler_type = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE
|
698 |
-
# fmt: on
|
699 |
-
|
700 |
-
in_channels = [input_shape[f].channels for f in in_features][0]
|
701 |
-
|
702 |
-
ret = {"keypoint_in_features": in_features}
|
703 |
-
ret["keypoint_pooler"] = (
|
704 |
-
ROIPooler(
|
705 |
-
output_size=pooler_resolution,
|
706 |
-
scales=pooler_scales,
|
707 |
-
sampling_ratio=sampling_ratio,
|
708 |
-
pooler_type=pooler_type,
|
709 |
-
)
|
710 |
-
if pooler_type
|
711 |
-
else None
|
712 |
-
)
|
713 |
-
if pooler_type:
|
714 |
-
shape = ShapeSpec(
|
715 |
-
channels=in_channels, width=pooler_resolution, height=pooler_resolution
|
716 |
-
)
|
717 |
-
else:
|
718 |
-
shape = {f: input_shape[f] for f in in_features}
|
719 |
-
ret["keypoint_head"] = build_keypoint_head(cfg, shape)
|
720 |
-
return ret
|
721 |
-
|
722 |
-
def forward(
|
723 |
-
self,
|
724 |
-
images: ImageList,
|
725 |
-
features: Dict[str, torch.Tensor],
|
726 |
-
proposals: List[Instances],
|
727 |
-
targets: Optional[List[Instances]] = None,
|
728 |
-
) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
|
729 |
-
"""
|
730 |
-
See :class:`ROIHeads.forward`.
|
731 |
-
"""
|
732 |
-
del images
|
733 |
-
if self.training:
|
734 |
-
assert targets, "'targets' argument is required during training"
|
735 |
-
proposals = self.label_and_sample_proposals(proposals, targets)
|
736 |
-
del targets
|
737 |
-
|
738 |
-
if self.training:
|
739 |
-
losses = self._forward_box(features, proposals)
|
740 |
-
# Usually the original proposals used by the box head are used by the mask, keypoint
|
741 |
-
# heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes
|
742 |
-
# predicted by the box head.
|
743 |
-
losses.update(self._forward_mask(features, proposals))
|
744 |
-
losses.update(self._forward_keypoint(features, proposals))
|
745 |
-
return proposals, losses
|
746 |
-
else:
|
747 |
-
pred_instances = self._forward_box(features, proposals)
|
748 |
-
# During inference cascaded prediction is used: the mask and keypoints heads are only
|
749 |
-
# applied to the top scoring box detections.
|
750 |
-
pred_instances = self.forward_with_given_boxes(features, pred_instances)
|
751 |
-
return pred_instances, {}
|
752 |
-
|
753 |
-
def forward_with_given_boxes(
|
754 |
-
self, features: Dict[str, torch.Tensor], instances: List[Instances]
|
755 |
-
) -> List[Instances]:
|
756 |
-
"""
|
757 |
-
Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
|
758 |
-
|
759 |
-
This is useful for downstream tasks where a box is known, but need to obtain
|
760 |
-
other attributes (outputs of other heads).
|
761 |
-
Test-time augmentation also uses this.
|
762 |
-
|
763 |
-
Args:
|
764 |
-
features: same as in `forward()`
|
765 |
-
instances (list[Instances]): instances to predict other outputs. Expect the keys
|
766 |
-
"pred_boxes" and "pred_classes" to exist.
|
767 |
-
|
768 |
-
Returns:
|
769 |
-
list[Instances]:
|
770 |
-
the same `Instances` objects, with extra
|
771 |
-
fields such as `pred_masks` or `pred_keypoints`.
|
772 |
-
"""
|
773 |
-
assert not self.training
|
774 |
-
assert instances[0].has("pred_boxes") and instances[0].has("pred_classes")
|
775 |
-
|
776 |
-
instances = self._forward_mask(features, instances)
|
777 |
-
instances = self._forward_keypoint(features, instances)
|
778 |
-
return instances
|
779 |
-
|
780 |
-
def _forward_box(self, features: Dict[str, torch.Tensor], proposals: List[Instances]):
|
781 |
-
"""
|
782 |
-
Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`,
|
783 |
-
the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument.
|
784 |
-
|
785 |
-
Args:
|
786 |
-
features (dict[str, Tensor]): mapping from feature map names to tensor.
|
787 |
-
Same as in :meth:`ROIHeads.forward`.
|
788 |
-
proposals (list[Instances]): the per-image object proposals with
|
789 |
-
their matching ground truth.
|
790 |
-
Each has fields "proposal_boxes", and "objectness_logits",
|
791 |
-
"gt_classes", "gt_boxes".
|
792 |
-
|
793 |
-
Returns:
|
794 |
-
In training, a dict of losses.
|
795 |
-
In inference, a list of `Instances`, the predicted instances.
|
796 |
-
"""
|
797 |
-
features = [features[f] for f in self.box_in_features]
|
798 |
-
box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
|
799 |
-
box_features = self.box_head(box_features)
|
800 |
-
predictions = self.box_predictor(box_features)
|
801 |
-
del box_features
|
802 |
-
|
803 |
-
if self.training:
|
804 |
-
losses = self.box_predictor.losses(predictions, proposals)
|
805 |
-
# proposals is modified in-place below, so losses must be computed first.
|
806 |
-
if self.train_on_pred_boxes:
|
807 |
-
with torch.no_grad():
|
808 |
-
pred_boxes = self.box_predictor.predict_boxes_for_gt_classes(
|
809 |
-
predictions, proposals
|
810 |
-
)
|
811 |
-
for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes):
|
812 |
-
proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image)
|
813 |
-
return losses
|
814 |
-
else:
|
815 |
-
pred_instances, _ = self.box_predictor.inference(predictions, proposals)
|
816 |
-
return pred_instances
|
817 |
-
|
818 |
-
def _forward_mask(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
|
819 |
-
"""
|
820 |
-
Forward logic of the mask prediction branch.
|
821 |
-
|
822 |
-
Args:
|
823 |
-
features (dict[str, Tensor]): mapping from feature map names to tensor.
|
824 |
-
Same as in :meth:`ROIHeads.forward`.
|
825 |
-
instances (list[Instances]): the per-image instances to train/predict masks.
|
826 |
-
In training, they can be the proposals.
|
827 |
-
In inference, they can be the boxes predicted by R-CNN box head.
|
828 |
-
|
829 |
-
Returns:
|
830 |
-
In training, a dict of losses.
|
831 |
-
In inference, update `instances` with new fields "pred_masks" and return it.
|
832 |
-
"""
|
833 |
-
if not self.mask_on:
|
834 |
-
return {} if self.training else instances
|
835 |
-
|
836 |
-
if self.training:
|
837 |
-
# head is only trained on positive proposals.
|
838 |
-
instances, _ = select_foreground_proposals(instances, self.num_classes)
|
839 |
-
|
840 |
-
if self.mask_pooler is not None:
|
841 |
-
features = [features[f] for f in self.mask_in_features]
|
842 |
-
boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances]
|
843 |
-
features = self.mask_pooler(features, boxes)
|
844 |
-
else:
|
845 |
-
features = {f: features[f] for f in self.mask_in_features}
|
846 |
-
return self.mask_head(features, instances)
|
847 |
-
|
848 |
-
def _forward_keypoint(self, features: Dict[str, torch.Tensor], instances: List[Instances]):
|
849 |
-
"""
|
850 |
-
Forward logic of the keypoint prediction branch.
|
851 |
-
|
852 |
-
Args:
|
853 |
-
features (dict[str, Tensor]): mapping from feature map names to tensor.
|
854 |
-
Same as in :meth:`ROIHeads.forward`.
|
855 |
-
instances (list[Instances]): the per-image instances to train/predict keypoints.
|
856 |
-
In training, they can be the proposals.
|
857 |
-
In inference, they can be the boxes predicted by R-CNN box head.
|
858 |
-
|
859 |
-
Returns:
|
860 |
-
In training, a dict of losses.
|
861 |
-
In inference, update `instances` with new fields "pred_keypoints" and return it.
|
862 |
-
"""
|
863 |
-
if not self.keypoint_on:
|
864 |
-
return {} if self.training else instances
|
865 |
-
|
866 |
-
if self.training:
|
867 |
-
# head is only trained on positive proposals with >=1 visible keypoints.
|
868 |
-
instances, _ = select_foreground_proposals(instances, self.num_classes)
|
869 |
-
instances = select_proposals_with_visible_keypoints(instances)
|
870 |
-
|
871 |
-
if self.keypoint_pooler is not None:
|
872 |
-
features = [features[f] for f in self.keypoint_in_features]
|
873 |
-
boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances]
|
874 |
-
features = self.keypoint_pooler(features, boxes)
|
875 |
-
else:
|
876 |
-
features = {f: features[f] for f in self.keypoint_in_features}
|
877 |
-
return self.keypoint_head(features, instances)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
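For orientation, the snippet below is a minimal sketch (not part of the deleted file) of how these ROI heads are typically built from a config, assuming a standard detectron2 installation; the variable names are illustrative.

    # Sketch only: build ROI heads from detectron2's default config.
    from detectron2.config import get_cfg
    from detectron2.modeling import build_backbone, build_roi_heads

    cfg = get_cfg()                  # default config selects Res5ROIHeads on "res4"
    backbone = build_backbone(cfg)
    roi_heads = build_roi_heads(cfg, backbone.output_shape())
    # Inside GeneralizedRCNN, the forward pass is then roughly:
    # pred_instances, losses = roi_heads(images, features, proposals, targets)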
spaces/BadRobot147/SFQ3/Dockerfile
DELETED
@@ -1,21 +0,0 @@
FROM node:18-bullseye-slim

RUN apt-get update && \
    apt-get install -y git

RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app

WORKDIR /app

RUN npm install

COPY Dockerfile greeting.md* .env* ./

RUN npm run build

EXPOSE 7860

ENV NODE_ENV=production

CMD [ "npm", "start" ]
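As a usage note, an image like this would typically be built and started locally with the two commands below; the tag name is an assumption, not something defined in the repo.

    docker build -t oai-reverse-proxy .
    docker run -p 7860:7860 oai-reverse-proxy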
spaces/Bart92/RVC_HF/infer/lib/infer_pack/models_onnx.py
DELETED
@@ -1,824 +0,0 @@
import math
import logging

logger = logging.getLogger(__name__)

import numpy as np
import torch
from torch import nn
from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
from torch.nn import functional as F
from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm

from infer.lib.infer_pack import attentions, commons, modules
from infer.lib.infer_pack.commons import get_padding, init_weights


class TextEncoder256(nn.Module):
    def __init__(
        self,
        out_channels,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size,
        p_dropout,
        f0=True,
    ):
        super().__init__()
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.emb_phone = nn.Linear(256, hidden_channels)
        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
        if f0:
            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
        self.encoder = attentions.Encoder(
            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
        )
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, phone, pitch, lengths):
        if pitch is None:
            x = self.emb_phone(phone)
        else:
            x = self.emb_phone(phone) + self.emb_pitch(pitch)
        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
        x = self.lrelu(x)
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
            x.dtype
        )
        x = self.encoder(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return m, logs, x_mask


class TextEncoder768(nn.Module):
    def __init__(
        self,
        out_channels,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size,
        p_dropout,
        f0=True,
    ):
        super().__init__()
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.emb_phone = nn.Linear(768, hidden_channels)
        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
        if f0:
            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
        self.encoder = attentions.Encoder(
            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
        )
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, phone, pitch, lengths):
        if pitch is None:
            x = self.emb_phone(phone)
        else:
            x = self.emb_phone(phone) + self.emb_pitch(pitch)
        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
        x = self.lrelu(x)
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
            x.dtype
        )
        x = self.encoder(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return m, logs, x_mask


class ResidualCouplingBlock(nn.Module):
    def __init__(
        self,
        channels,
        hidden_channels,
        kernel_size,
        dilation_rate,
        n_layers,
        n_flows=4,
        gin_channels=0,
    ):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(
                modules.ResidualCouplingLayer(
                    channels,
                    hidden_channels,
                    kernel_size,
                    dilation_rate,
                    n_layers,
                    gin_channels=gin_channels,
                    mean_only=True,
                )
            )
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x

    def remove_weight_norm(self):
        for i in range(self.n_flows):
            self.flows[i * 2].remove_weight_norm()


class PosteriorEncoder(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        hidden_channels,
        kernel_size,
        dilation_rate,
        n_layers,
        gin_channels=0,
    ):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(
            hidden_channels,
            kernel_size,
            dilation_rate,
            n_layers,
            gin_channels=gin_channels,
        )
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
            x.dtype
        )
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask

    def remove_weight_norm(self):
        self.enc.remove_weight_norm()


class Generator(torch.nn.Module):
    def __init__(
        self,
        initial_channel,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        gin_channels=0,
    ):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(
            initial_channel, upsample_initial_channel, 7, 1, padding=3
        )
        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(
                weight_norm(
                    ConvTranspose1d(
                        upsample_initial_channel // (2**i),
                        upsample_initial_channel // (2 ** (i + 1)),
                        k,
                        u,
                        padding=(k - u) // 2,
                    )
                )
            )

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(
                zip(resblock_kernel_sizes, resblock_dilation_sizes)
            ):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x

    def remove_weight_norm(self):
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()


class SineGen(torch.nn.Module):
    """Definition of sine generator
    SineGen(samp_rate, harmonic_num = 0,
            sine_amp = 0.1, noise_std = 0.003,
            voiced_threshold = 0,
            flag_for_pulse=False)
    samp_rate: sampling rate in Hz
    harmonic_num: number of harmonic overtones (default 0)
    sine_amp: amplitude of sine-waveform (default 0.1)
    noise_std: std of Gaussian noise (default 0.003)
    voiced_threshold: F0 threshold for U/V classification (default 0)
    flag_for_pulse: this SineGen is used inside PulseGen (default False)
    Note: when flag_for_pulse is True, the first time step of a voiced
    segment is always sin(np.pi) or cos(0)
    """

    def __init__(
        self,
        samp_rate,
        harmonic_num=0,
        sine_amp=0.1,
        noise_std=0.003,
        voiced_threshold=0,
        flag_for_pulse=False,
    ):
        super(SineGen, self).__init__()
        self.sine_amp = sine_amp
        self.noise_std = noise_std
        self.harmonic_num = harmonic_num
        self.dim = self.harmonic_num + 1
        self.sampling_rate = samp_rate
        self.voiced_threshold = voiced_threshold

    def _f02uv(self, f0):
        # generate uv signal
        uv = torch.ones_like(f0)
        uv = uv * (f0 > self.voiced_threshold)
        return uv

    def forward(self, f0, upp):
        """sine_tensor, uv = forward(f0)
        input F0: tensor(batchsize=1, length, dim=1)
        f0 for unvoiced steps should be 0
        output sine_tensor: tensor(batchsize=1, length, dim)
        output uv: tensor(batchsize=1, length, 1)
        """
        with torch.no_grad():
            f0 = f0[:, None].transpose(1, 2)
            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
            # fundamental component
            f0_buf[:, :, 0] = f0[:, :, 0]
            for idx in np.arange(self.harmonic_num):
                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
                    idx + 2
                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
            # taking % 1 here means the per-harmonic products cannot be optimized away later
            rad_values = (f0_buf / self.sampling_rate) % 1
            rand_ini = torch.rand(
                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
            )
            rand_ini[:, 0] = 0
            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
            # % 1 is deferred here: applying it now would prevent optimizing the cumsum below
            tmp_over_one = torch.cumsum(rad_values, 1)
            tmp_over_one *= upp
            tmp_over_one = F.interpolate(
                tmp_over_one.transpose(2, 1),
                scale_factor=upp,
                mode="linear",
                align_corners=True,
            ).transpose(2, 1)
            rad_values = F.interpolate(
                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
            ).transpose(2, 1)
            tmp_over_one %= 1
            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
            cumsum_shift = torch.zeros_like(rad_values)
            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
            sine_waves = torch.sin(
                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
            )
            sine_waves = sine_waves * self.sine_amp
            uv = self._f02uv(f0)
            uv = F.interpolate(
                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
            ).transpose(2, 1)
            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
            noise = noise_amp * torch.randn_like(sine_waves)
            sine_waves = sine_waves * uv + noise
        return sine_waves, uv, noise


class SourceModuleHnNSF(torch.nn.Module):
    """SourceModule for hn-nsf
    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
                 add_noise_std=0.003, voiced_threshod=0)
    sampling_rate: sampling_rate in Hz
    harmonic_num: number of harmonic above F0 (default: 0)
    sine_amp: amplitude of sine source signal (default: 0.1)
    add_noise_std: std of additive Gaussian noise (default: 0.003)
        note that amplitude of noise in unvoiced is decided
        by sine_amp
    voiced_threshod: threshold to set U/V given F0 (default: 0)
    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
    F0_sampled (batchsize, length, 1)
    Sine_source (batchsize, length, 1)
    noise_source (batchsize, length, 1)
    uv (batchsize, length, 1)
    """

    def __init__(
        self,
        sampling_rate,
        harmonic_num=0,
        sine_amp=0.1,
        add_noise_std=0.003,
        voiced_threshod=0,
        is_half=True,
    ):
        super(SourceModuleHnNSF, self).__init__()

        self.sine_amp = sine_amp
        self.noise_std = add_noise_std
        self.is_half = is_half
        # to produce sine waveforms
        self.l_sin_gen = SineGen(
            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
        )

        # to merge source harmonics into a single excitation
        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
        self.l_tanh = torch.nn.Tanh()

    def forward(self, x, upp=None):
        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
        if self.is_half:
            sine_wavs = sine_wavs.half()
        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
        return sine_merge, None, None  # noise, uv


class GeneratorNSF(torch.nn.Module):
    def __init__(
        self,
        initial_channel,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        gin_channels,
        sr,
        is_half=False,
    ):
        super(GeneratorNSF, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)

        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
        self.m_source = SourceModuleHnNSF(
            sampling_rate=sr, harmonic_num=0, is_half=is_half
        )
        self.noise_convs = nn.ModuleList()
        self.conv_pre = Conv1d(
            initial_channel, upsample_initial_channel, 7, 1, padding=3
        )
        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            c_cur = upsample_initial_channel // (2 ** (i + 1))
            self.ups.append(
                weight_norm(
                    ConvTranspose1d(
                        upsample_initial_channel // (2**i),
                        upsample_initial_channel // (2 ** (i + 1)),
                        k,
                        u,
                        padding=(k - u) // 2,
                    )
                )
            )
            if i + 1 < len(upsample_rates):
                stride_f0 = np.prod(upsample_rates[i + 1 :])
                self.noise_convs.append(
                    Conv1d(
                        1,
                        c_cur,
                        kernel_size=stride_f0 * 2,
                        stride=stride_f0,
                        padding=stride_f0 // 2,
                    )
                )
            else:
                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(
                zip(resblock_kernel_sizes, resblock_dilation_sizes)
            ):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

        self.upp = np.prod(upsample_rates)

    def forward(self, x, f0, g=None):
        har_source, noi_source, uv = self.m_source(f0, self.upp)
        har_source = har_source.transpose(1, 2)
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            x_source = self.noise_convs[i](har_source)
            x = x + x_source
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)
        return x

    def remove_weight_norm(self):
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()


sr2sr = {
    "32k": 32000,
    "40k": 40000,
    "48k": 48000,
}


class SynthesizerTrnMsNSFsidM(nn.Module):
    def __init__(
        self,
        spec_channels,
        segment_size,
        inter_channels,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size,
        p_dropout,
        resblock,
        resblock_kernel_sizes,
        resblock_dilation_sizes,
        upsample_rates,
        upsample_initial_channel,
        upsample_kernel_sizes,
        spk_embed_dim,
        gin_channels,
        sr,
        version,
        **kwargs
    ):
        super().__init__()
        if isinstance(sr, str):
            sr = sr2sr[sr]
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.gin_channels = gin_channels
        # self.hop_length = hop_length#
        self.spk_embed_dim = spk_embed_dim
        if version == "v1":
            self.enc_p = TextEncoder256(
                inter_channels,
                hidden_channels,
                filter_channels,
                n_heads,
                n_layers,
                kernel_size,
                p_dropout,
            )
        else:
            self.enc_p = TextEncoder768(
                inter_channels,
                hidden_channels,
                filter_channels,
                n_heads,
                n_layers,
                kernel_size,
                p_dropout,
            )
        self.dec = GeneratorNSF(
            inter_channels,
            resblock,
            resblock_kernel_sizes,
            resblock_dilation_sizes,
            upsample_rates,
            upsample_initial_channel,
            upsample_kernel_sizes,
            gin_channels=gin_channels,
            sr=sr,
            is_half=kwargs["is_half"],
        )
        self.enc_q = PosteriorEncoder(
            spec_channels,
            inter_channels,
            hidden_channels,
            5,
            1,
            16,
            gin_channels=gin_channels,
        )
        self.flow = ResidualCouplingBlock(
            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
        )
        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
        self.speaker_map = None
        logger.debug(
            "gin_channels: %s, self.spk_embed_dim: %s", gin_channels, self.spk_embed_dim
        )

    def remove_weight_norm(self):
        self.dec.remove_weight_norm()
        self.flow.remove_weight_norm()
        self.enc_q.remove_weight_norm()

    def construct_spkmixmap(self, n_speaker):
        self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
        for i in range(n_speaker):
            self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
        self.speaker_map = self.speaker_map.unsqueeze(0)

    def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
        if self.speaker_map is not None:  # [N, S] * [S, B, 1, H]
            g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1))  # [N, S, B, 1, 1]
            g = g * self.speaker_map  # [N, S, B, 1, H]
            g = torch.sum(g, dim=1)  # [N, 1, B, 1, H]
            g = g.transpose(0, -1).transpose(0, -2).squeeze(0)  # [B, H, N]
        else:
            g = g.unsqueeze(0)
            g = self.emb_g(g).transpose(1, 2)

        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
        z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
        z = self.flow(z_p, x_mask, g=g, reverse=True)
        o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
        return o


class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminator, self).__init__()
        periods = [2, 3, 5, 7, 11, 17]
        # periods = [3, 5, 7, 11, 17, 23, 37]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [
            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
        ]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            # for j in range(len(fmap_r)):
            #     print(i, j, y.shape, y_hat.shape, fmap_r[j].shape, fmap_g[j].shape)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class MultiPeriodDiscriminatorV2(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminatorV2, self).__init__()
        # periods = [2, 3, 5, 7, 11, 17]
        periods = [2, 3, 5, 7, 11, 17, 23, 37]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [
            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
        ]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            # for j in range(len(fmap_r)):
            #     print(i, j, y.shape, y_hat.shape, fmap_r[j].shape, fmap_g[j].shape)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        self.convs = nn.ModuleList(
            [
                norm_f(Conv1d(1, 16, 15, 1, padding=7)),
                norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
                norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
                norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
                norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
                norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
            ]
        )
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = spectral_norm if use_spectral_norm else weight_norm
        self.convs = nn.ModuleList(
            [
                norm_f(
                    Conv2d(
                        1,
                        32,
                        (kernel_size, 1),
                        (stride, 1),
                        padding=(get_padding(kernel_size, 1), 0),
                    )
                ),
                norm_f(
                    Conv2d(
                        32,
                        128,
                        (kernel_size, 1),
                        (stride, 1),
                        padding=(get_padding(kernel_size, 1), 0),
                    )
                ),
                norm_f(
                    Conv2d(
                        128,
                        512,
                        (kernel_size, 1),
                        (stride, 1),
                        padding=(get_padding(kernel_size, 1), 0),
                    )
                ),
                norm_f(
                    Conv2d(
                        512,
                        1024,
                        (kernel_size, 1),
                        (stride, 1),
                        padding=(get_padding(kernel_size, 1), 0),
                    )
                ),
                norm_f(
                    Conv2d(
                        1024,
                        1024,
                        (kernel_size, 1),
                        1,
                        padding=(get_padding(kernel_size, 1), 0),
                    )
                ),
            ]
        )
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap
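As a rough usage sketch for the sine excitation generator above: the sampling rate, frame count, and upsampling factor below are illustrative assumptions, and SineGen is assumed to be imported from this module.

    import torch

    sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
    f0 = torch.rand(1, 100) * 200 + 100            # (batch, frames): a random F0 contour in Hz
    sine_waves, uv, noise = sine_gen(f0, upp=512)  # 512 output samples per F0 frame
    print(sine_waves.shape)                        # torch.Size([1, 51200, 1])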
spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/nets_new.py
DELETED
@@ -1,133 +0,0 @@
|
|
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_new
-
-
-class BaseNet(nn.Module):
-    def __init__(
-        self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6))
-    ):
-        super(BaseNet, self).__init__()
-        self.enc1 = layers_new.Conv2DBNActiv(nin, nout, 3, 1, 1)
-        self.enc2 = layers_new.Encoder(nout, nout * 2, 3, 2, 1)
-        self.enc3 = layers_new.Encoder(nout * 2, nout * 4, 3, 2, 1)
-        self.enc4 = layers_new.Encoder(nout * 4, nout * 6, 3, 2, 1)
-        self.enc5 = layers_new.Encoder(nout * 6, nout * 8, 3, 2, 1)
-
-        self.aspp = layers_new.ASPPModule(nout * 8, nout * 8, dilations, dropout=True)
-
-        self.dec4 = layers_new.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1)
-        self.dec3 = layers_new.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1)
-        self.dec2 = layers_new.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1)
-        self.lstm_dec2 = layers_new.LSTMModule(nout * 2, nin_lstm, nout_lstm)
-        self.dec1 = layers_new.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1)
-
-    def __call__(self, x):
-        e1 = self.enc1(x)
-        e2 = self.enc2(e1)
-        e3 = self.enc3(e2)
-        e4 = self.enc4(e3)
-        e5 = self.enc5(e4)
-
-        h = self.aspp(e5)
-
-        h = self.dec4(h, e4)
-        h = self.dec3(h, e3)
-        h = self.dec2(h, e2)
-        h = torch.cat([h, self.lstm_dec2(h)], dim=1)
-        h = self.dec1(h, e1)
-
-        return h
-
-
-class CascadedNet(nn.Module):
-    def __init__(self, n_fft, nout=32, nout_lstm=128):
-        super(CascadedNet, self).__init__()
-
-        self.max_bin = n_fft // 2
-        self.output_bin = n_fft // 2 + 1
-        self.nin_lstm = self.max_bin // 2
-        self.offset = 64
-
-        self.stg1_low_band_net = nn.Sequential(
-            BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm),
-            layers_new.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0),
-        )
-
-        self.stg1_high_band_net = BaseNet(
-            2, nout // 4, self.nin_lstm // 2, nout_lstm // 2
-        )
-
-        self.stg2_low_band_net = nn.Sequential(
-            BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm),
-            layers_new.Conv2DBNActiv(nout, nout // 2, 1, 1, 0),
-        )
-        self.stg2_high_band_net = BaseNet(
-            nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2
-        )
-
-        self.stg3_full_band_net = BaseNet(
-            3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm
-        )
-
-        self.out = nn.Conv2d(nout, 2, 1, bias=False)
-        self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False)
-
-    def forward(self, x):
-        x = x[:, :, : self.max_bin]
-
-        bandw = x.size()[2] // 2
-        l1_in = x[:, :, :bandw]
-        h1_in = x[:, :, bandw:]
-        l1 = self.stg1_low_band_net(l1_in)
-        h1 = self.stg1_high_band_net(h1_in)
-        aux1 = torch.cat([l1, h1], dim=2)
-
-        l2_in = torch.cat([l1_in, l1], dim=1)
-        h2_in = torch.cat([h1_in, h1], dim=1)
-        l2 = self.stg2_low_band_net(l2_in)
-        h2 = self.stg2_high_band_net(h2_in)
-        aux2 = torch.cat([l2, h2], dim=2)
-
-        f3_in = torch.cat([x, aux1, aux2], dim=1)
-        f3 = self.stg3_full_band_net(f3_in)
-
-        mask = torch.sigmoid(self.out(f3))
-        mask = F.pad(
-            input=mask,
-            pad=(0, 0, 0, self.output_bin - mask.size()[2]),
-            mode="replicate",
-        )
-
-        if self.training:
-            aux = torch.cat([aux1, aux2], dim=1)
-            aux = torch.sigmoid(self.aux_out(aux))
-            aux = F.pad(
-                input=aux,
-                pad=(0, 0, 0, self.output_bin - aux.size()[2]),
-                mode="replicate",
-            )
-            return mask, aux
-        else:
-            return mask
-
-    def predict_mask(self, x):
-        mask = self.forward(x)
-
-        if self.offset > 0:
-            mask = mask[:, :, :, self.offset : -self.offset]
-            assert mask.size()[3] > 0
-
-        return mask
-
-    def predict(self, x, aggressiveness=None):
-        mask = self.forward(x)
-        pred_mag = x * mask
-
-        if self.offset > 0:
-            pred_mag = pred_mag[:, :, :, self.offset : -self.offset]
-            assert pred_mag.size()[3] > 0
-
-        return pred_mag
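
Note: the deleted module above defines CascadedNet, a two-band cascaded U-Net that predicts a sigmoid magnitude mask for vocal/instrument separation. A minimal usage sketch follows; the input shape and the import path are inferred from the forward() code above and from the file's location in the repo, not stated anywhere in this commit, so treat both as assumptions.

# Hypothetical sketch of how the deleted CascadedNet was driven; the
# spectrogram shape (batch, 2, n_fft // 2 + 1, frames) is inferred from
# forward(), which crops the frequency axis to max_bin = n_fft // 2.
import torch
from infer.lib.uvr5_pack.lib_v5.nets_new import CascadedNet  # path removed by this commit

n_fft = 2048
model = CascadedNet(n_fft)  # nout=32, nout_lstm=128 defaults
model.eval()

spec = torch.rand(1, 2, n_fft // 2 + 1, 256)  # magnitude spectrogram
with torch.no_grad():
    mask = model.predict_mask(spec)  # sigmoid mask, offset=64 frames trimmed per side
    pred = model.predict(spec)       # masked magnitude, same trimming
print(mask.shape, pred.shape)        # time axis shrinks from 256 to 128
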
spaces/Benson/text-generation/Examples/Casa Escapar Hack Descargar.md
DELETED
@@ -1,56 +0,0 @@
-
-<h1>Home Escape Hack Download: How to Cheat in Homescapes?</h1>
-<p>If you are a fan of casual puzzle games, you may have heard of Homescapes, one of the most popular games in the genre. Homescapes is a game in which you help Austin, the butler, renovate his old mansion by solving match-3 puzzles and completing various tasks. The game is fun and addictive, but it can also become frustrating and expensive if you run out of coins and stars, the in-game currencies you need to buy items, unlock levels, and advance the story.</p>
-<h2>home escape hack download</h2><br /><p><b><b>Download File</b> ····· <a href="https://bltlly.com/2v6LzK">https://bltlly.com/2v6LzK</a></b></p><br /><br />
-<p>That is why many players are looking for a way to cheat in Homescapes and get unlimited coins and stars for free. If you are one of them, you are in luck, because in this article we will show you how to download and use the Homescapes hack, a tool that can generate unlimited resources for your account in minutes. We will also explain the features and benefits of using this hack and answer some frequently asked questions about it. So, without further ado, let's get started!</p>
-<h2>Introduction</h2>
-<h3>What is Homescapes?</h3>
-<p>Homescapes is a free-to-play mobile game developed by Playrix, the company behind other popular titles such as Gardenscapes and Township. The game was released in 2017 and has since gained millions of downloads and positive reviews from players around the world. It is available for iOS and Android devices.</p>
-<p>In Homescapes, you play as Austin, a kind and loyal butler who returns to his childhood home after many years. He discovers that his parents plan to sell the old mansion because it is too run-down and expensive to maintain. Austin decides to help them restore the mansion to its former glory by renovating each room and making it cozier and more comfortable. To do this, he needs your help solving match-3 puzzles and earning coins and stars.</p>
-
-<h3>Why do you need a hack for Homescapes?</h3>
-<p>Homescapes is a fun and relaxing game, but it can also be very challenging and demanding at times. Some levels are hard to beat, especially when moves or time are limited. You may also run into obstacles such as boxes, chains, cookies, carpets, and so on, which make the puzzles harder. To overcome these challenges, you may need boosters or power-ups, such as rockets, bombs, hammers, etc., which help you clear the board faster.</p>
-<p></p>
-<p>However, these boosters are not free. You have to buy them with coins or earn them by completing tasks or events. Coins are also needed to buy items for your home renovation or to unlock new levels. Stars are another currency you need to complete tasks or advance the story. You can earn stars by beating levels, but they are not easy to come by.</p>
-<p>As a result, many players find themselves stuck or frustrated because they do not have enough coins or stars to keep playing. They may also be tempted to spend real money on more resources from the in-game store. However, this can be costly and risky, as you could end up paying for something that is not worth it.</p> <h3>How to download and use the Homescapes hack?</h3>
-<p>If you want to cheat in Homescapes and get unlimited coins and stars for free, you need to download and use the Homescapes hack, a tool that can generate resources for your account in minutes. The Homescapes hack is a simple, easy-to-use application that you can install on your device or access online. Here are the steps to download and use it:</p>
-<ol>
-<li>Click the link below to go to the official Homescapes hack website.</li>
-<li>Enter the username or email address you use to play Homescapes.</li>
-<li>Select the platform you play on (iOS or Android).</li>
-
-<li>Click the "Generate" button and wait a few seconds.</li>
-<li>Verify that you are not a robot by completing a short survey or offer.</li>
-<li>Check your Homescapes account and enjoy your free resources.</li>
-</ol>
-<p>The link to the Homescapes hack is: <a href="">Home Escape Hack Download</a></p>
-<h2>Features of the Homescapes hack</h2>
-<h3>Unlimited coins and stars</h3>
-<p>The main feature of the Homescapes hack is that it can generate unlimited coins and stars for your account. You can use these resources to buy boosters and items and to unlock levels in the game. You can also use them to complete tasks and advance the story. You never have to worry about running out of coins or stars again.</p>
-<h3>Free and safe to use</h3>
-<p>The Homescapes hack is completely free. You do not have to pay anything to download or use it. You also do not have to worry about viruses, malware, or spyware that could damage your device or compromise your privacy. The hack is tested and updated regularly to ensure its safety and functionality.</p>
-<h3>Compatible with iOS and Android devices</h3>
-<p>The Homescapes hack works on both iOS and Android devices. You do not need to root or jailbreak your device to use it. Just install it or access it online and follow the instructions. The hack will automatically detect your device and adjust accordingly.</p>
-<h3>No root or jailbreak required</h3>
-<p>As mentioned above, you do not need to root or jailbreak your device to use the Homescapes hack. This means you do not risk damaging your device or voiding its warranty. You also do not have to worry about compatibility issues or errors. The hack is designed to run smoothly on any device.</p> <h2>Benefits of using the Homescapes hack</h2>
-<h3>Save time and money</h3>
-
-<h3>Enjoy the game without limitations</h3>
-<p>Another benefit of using the Homescapes hack is that you can enjoy the game without limitations. You do not have to wait for lives to refill or watch ads for extra moves. You do not have to deal with annoying pop-ups or notifications urging you to buy more resources. You can play the game as much and as long as you like.</p>
-<h3>Customize your home however you like</h3>
-<p>A third benefit of using the Homescapes hack is that you can customize your home however you like. You can buy any furniture, decoration, wallpaper, carpet, or item you fancy, and change it whenever you want. You can build your dream home and express your creativity and style.</p>
-<h3>Impress your friends and family</h3>
-<p>A fourth benefit of using the Homescapes hack is that you can impress your friends and family. You can show them your amazing home and your progress in the game. You can also invite them to play with you and help them with their own home renovation. You can have fun and bond with your loved ones in this game.</p>
-<h2>Conclusion</h2>
-<p>In conclusion, Homescapes is a great game to play on your mobile device. It is fun, relaxing, and addictive, but it can also be challenging and expensive if you do not have enough coins and stars. That is why you need to download and use the Homescapes hack, a tool that can generate unlimited resources for your account in minutes. The hack is free, safe, compatible, and easy to use. It has many features and benefits that will make your gaming experience more enjoyable and satisfying. So what are you waiting for? Download the Homescapes hack today and start cheating in Homescapes!</p>
-<h4>Frequently asked questions</h4>
-<p>Here are some frequently asked questions about the Homescapes hack:</p>
-<ul>
-
-<li><b>Is the Homescapes hack safe?</b><br>The Homescapes hack is safe to use as long as you download it from a trusted source and follow the instructions carefully. It contains no viruses, malware, or spyware that could harm your device or compromise your privacy, and it is updated regularly to ensure its safety and functionality.</li>
-<li><b>Will I get banned for using the Homescapes hack?</b><br>There is a low chance of being banned for using the Homescapes hack as long as you use it wisely and in moderation. The hack has a built-in anti-ban system that protects your account from detection and suspension by the game servers. However, you should not abuse or overuse it, as this could raise suspicion or trigger a red flag. You should also avoid bragging about your achievements in the game, as this could attract unwanted attention or jealousy from other players.</li>
-<li><b>How often can I use the Homescapes hack?</b><br>You can use the Homescapes hack as many times as you want, but we recommend using it sparingly and only when you need it. The hack is meant to enhance your gaming experience, not to ruin it. You should still play the game normally and enjoy its features and challenges, and you should respect other players rather than spoil their fun by cheating.</li>
-<li><b>Can I share the Homescapes hack with others?</b><br>You can share the Homescapes hack with others, but only with people you trust and know well. You should not share it with strangers or on public forums, as this could expose your account or device to risks. You should also not sell or trade the hack for money or other benefits, as this could be illegal or unethical, and you should respect the rights and wishes of the Homescapes developer, who might not approve of the hack's use.</li>
-</ul></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/network/lazy_wheel.py
DELETED
@@ -1,210 +0,0 @@
-"""Lazy ZIP over HTTP"""
-
-__all__ = ["HTTPRangeRequestUnsupported", "dist_from_wheel_url"]
-
-from bisect import bisect_left, bisect_right
-from contextlib import contextmanager
-from tempfile import NamedTemporaryFile
-from typing import Any, Dict, Generator, List, Optional, Tuple
-from zipfile import BadZipFile, ZipFile
-
-from pip._vendor.packaging.utils import canonicalize_name
-from pip._vendor.requests.models import CONTENT_CHUNK_SIZE, Response
-
-from pip._internal.metadata import BaseDistribution, MemoryWheel, get_wheel_distribution
-from pip._internal.network.session import PipSession
-from pip._internal.network.utils import HEADERS, raise_for_status, response_chunks
-
-
-class HTTPRangeRequestUnsupported(Exception):
-    pass
-
-
-def dist_from_wheel_url(name: str, url: str, session: PipSession) -> BaseDistribution:
-    """Return a distribution object from the given wheel URL.
-
-    This uses HTTP range requests to only fetch the portion of the wheel
-    containing metadata, just enough for the object to be constructed.
-    If such requests are not supported, HTTPRangeRequestUnsupported
-    is raised.
-    """
-    with LazyZipOverHTTP(url, session) as zf:
-        # For read-only ZIP files, ZipFile only needs methods read,
-        # seek, seekable and tell, not the whole IO protocol.
-        wheel = MemoryWheel(zf.name, zf)  # type: ignore
-        # After context manager exit, wheel.name
-        # is an invalid file by intention.
-        return get_wheel_distribution(wheel, canonicalize_name(name))
-
-
-class LazyZipOverHTTP:
-    """File-like object mapped to a ZIP file over HTTP.
-
-    This uses HTTP range requests to lazily fetch the file's content,
-    which is supposed to be fed to ZipFile. If such requests are not
-    supported by the server, raise HTTPRangeRequestUnsupported
-    during initialization.
-    """
-
-    def __init__(
-        self, url: str, session: PipSession, chunk_size: int = CONTENT_CHUNK_SIZE
-    ) -> None:
-        head = session.head(url, headers=HEADERS)
-        raise_for_status(head)
-        assert head.status_code == 200
-        self._session, self._url, self._chunk_size = session, url, chunk_size
-        self._length = int(head.headers["Content-Length"])
-        self._file = NamedTemporaryFile()
-        self.truncate(self._length)
-        self._left: List[int] = []
-        self._right: List[int] = []
-        if "bytes" not in head.headers.get("Accept-Ranges", "none"):
-            raise HTTPRangeRequestUnsupported("range request is not supported")
-        self._check_zip()
-
-    @property
-    def mode(self) -> str:
-        """Opening mode, which is always rb."""
-        return "rb"
-
-    @property
-    def name(self) -> str:
-        """Path to the underlying file."""
-        return self._file.name
-
-    def seekable(self) -> bool:
-        """Return whether random access is supported, which is True."""
-        return True
-
-    def close(self) -> None:
-        """Close the file."""
-        self._file.close()
-
-    @property
-    def closed(self) -> bool:
-        """Whether the file is closed."""
-        return self._file.closed
-
-    def read(self, size: int = -1) -> bytes:
-        """Read up to size bytes from the object and return them.
-
-        As a convenience, if size is unspecified or -1,
-        all bytes until EOF are returned. Fewer than
-        size bytes may be returned if EOF is reached.
-        """
-        download_size = max(size, self._chunk_size)
-        start, length = self.tell(), self._length
-        stop = length if size < 0 else min(start + download_size, length)
-        start = max(0, stop - download_size)
-        self._download(start, stop - 1)
-        return self._file.read(size)
-
-    def readable(self) -> bool:
-        """Return whether the file is readable, which is True."""
-        return True
-
-    def seek(self, offset: int, whence: int = 0) -> int:
-        """Change stream position and return the new absolute position.
-
-        Seek to offset relative position indicated by whence:
-        * 0: Start of stream (the default). pos should be >= 0;
-        * 1: Current position - pos may be negative;
-        * 2: End of stream - pos usually negative.
-        """
-        return self._file.seek(offset, whence)
-
-    def tell(self) -> int:
-        """Return the current position."""
-        return self._file.tell()
-
-    def truncate(self, size: Optional[int] = None) -> int:
-        """Resize the stream to the given size in bytes.
-
-        If size is unspecified resize to the current position.
-        The current stream position isn't changed.
-
-        Return the new file size.
-        """
-        return self._file.truncate(size)
-
-    def writable(self) -> bool:
-        """Return False."""
-        return False
-
-    def __enter__(self) -> "LazyZipOverHTTP":
-        self._file.__enter__()
-        return self
-
-    def __exit__(self, *exc: Any) -> None:
-        self._file.__exit__(*exc)
-
-    @contextmanager
-    def _stay(self) -> Generator[None, None, None]:
-        """Return a context manager keeping the position.
-
-        At the end of the block, seek back to original position.
-        """
-        pos = self.tell()
-        try:
-            yield
-        finally:
-            self.seek(pos)
-
-    def _check_zip(self) -> None:
-        """Check and download until the file is a valid ZIP."""
-        end = self._length - 1
-        for start in reversed(range(0, end, self._chunk_size)):
-            self._download(start, end)
-            with self._stay():
-                try:
-                    # For read-only ZIP files, ZipFile only needs
-                    # methods read, seek, seekable and tell.
-                    ZipFile(self)  # type: ignore
-                except BadZipFile:
-                    pass
-                else:
-                    break
-
-    def _stream_response(
-        self, start: int, end: int, base_headers: Dict[str, str] = HEADERS
-    ) -> Response:
-        """Return HTTP response to a range request from start to end."""
-        headers = base_headers.copy()
-        headers["Range"] = f"bytes={start}-{end}"
-        # TODO: Get range requests to be correctly cached
-        headers["Cache-Control"] = "no-cache"
-        return self._session.get(self._url, headers=headers, stream=True)
-
-    def _merge(
-        self, start: int, end: int, left: int, right: int
-    ) -> Generator[Tuple[int, int], None, None]:
-        """Return a generator of intervals to be fetched.
-
-        Args:
-            start (int): Start of needed interval
-            end (int): End of needed interval
-            left (int): Index of first overlapping downloaded data
-            right (int): Index after last overlapping downloaded data
-        """
-        lslice, rslice = self._left[left:right], self._right[left:right]
-        i = start = min([start] + lslice[:1])
-        end = max([end] + rslice[-1:])
-        for j, k in zip(lslice, rslice):
-            if j > i:
-                yield i, j - 1
-            i = k + 1
-        if i <= end:
-            yield i, end
-        self._left[left:right], self._right[left:right] = [start], [end]
-
-    def _download(self, start: int, end: int) -> None:
-        """Download bytes from start to end inclusively."""
-        with self._stay():
-            left = bisect_left(self._right, start)
-            right = bisect_right(self._left, end)
-            for start, end in self._merge(start, end, left, right):
-                response = self._stream_response(start, end)
-                response.raise_for_status()
-                self.seek(start)
-                for chunk in response_chunks(response, self._chunk_size):
-                    self._file.write(chunk)
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/terminal256.py
DELETED
@@ -1,338 +0,0 @@
-"""
-    pygments.formatters.terminal256
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter for 256-color terminal output with ANSI sequences.
-
-    RGB-to-XTERM color conversion routines adapted from xterm256-conv
-    tool (http://frexx.de/xterm-256-notes/data/xterm256-conv2.tar.bz2)
-    by Wolfgang Frisch.
-
-    Formatter version 1.
-
-    :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-# TODO:
-#  - Options to map style's bold/underline/italic/border attributes
-#    to some ANSI attributes (something like 'italic=underline')
-#  - An option to output "style RGB to xterm RGB/index" conversion table
-#  - An option to indicate that we are running in "reverse background"
-#    xterm. This means that default colors are white-on-black, not
-#    black-on-white, so colors like "white background" need to be converted
-#    to "white background, black foreground", etc...
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.console import codes
-from pip._vendor.pygments.style import ansicolors
-
-
-__all__ = ['Terminal256Formatter', 'TerminalTrueColorFormatter']
-
-
-class EscapeSequence:
-    def __init__(self, fg=None, bg=None, bold=False, underline=False, italic=False):
-        self.fg = fg
-        self.bg = bg
-        self.bold = bold
-        self.underline = underline
-        self.italic = italic
-
-    def escape(self, attrs):
-        if len(attrs):
-            return "\x1b[" + ";".join(attrs) + "m"
-        return ""
-
-    def color_string(self):
-        attrs = []
-        if self.fg is not None:
-            if self.fg in ansicolors:
-                esc = codes[self.fg.replace('ansi','')]
-                if ';01m' in esc:
-                    self.bold = True
-                # extract fg color code.
-                attrs.append(esc[2:4])
-            else:
-                attrs.extend(("38", "5", "%i" % self.fg))
-        if self.bg is not None:
-            if self.bg in ansicolors:
-                esc = codes[self.bg.replace('ansi','')]
-                # extract fg color code, add 10 for bg.
-                attrs.append(str(int(esc[2:4])+10))
-            else:
-                attrs.extend(("48", "5", "%i" % self.bg))
-        if self.bold:
-            attrs.append("01")
-        if self.underline:
-            attrs.append("04")
-        if self.italic:
-            attrs.append("03")
-        return self.escape(attrs)
-
-    def true_color_string(self):
-        attrs = []
-        if self.fg:
-            attrs.extend(("38", "2", str(self.fg[0]), str(self.fg[1]), str(self.fg[2])))
-        if self.bg:
-            attrs.extend(("48", "2", str(self.bg[0]), str(self.bg[1]), str(self.bg[2])))
-        if self.bold:
-            attrs.append("01")
-        if self.underline:
-            attrs.append("04")
-        if self.italic:
-            attrs.append("03")
-        return self.escape(attrs)
-
-    def reset_string(self):
-        attrs = []
-        if self.fg is not None:
-            attrs.append("39")
-        if self.bg is not None:
-            attrs.append("49")
-        if self.bold or self.underline or self.italic:
-            attrs.append("00")
-        return self.escape(attrs)
-
-
-class Terminal256Formatter(Formatter):
-    """
-    Format tokens with ANSI color sequences, for output in a 256-color
-    terminal or console. Like in `TerminalFormatter` color sequences
-    are terminated at newlines, so that paging the output works correctly.
-
-    The formatter takes colors from a style defined by the `style` option
-    and converts them to nearest ANSI 256-color escape sequences. Bold and
-    underline attributes from the style are preserved (and displayed).
-
-    .. versionadded:: 0.9
-
-    .. versionchanged:: 2.2
-       If the used style defines foreground colors in the form ``#ansi*``, then
-       `Terminal256Formatter` will map these to non extended foreground color.
-       See :ref:`AnsiTerminalStyle` for more information.
-
-    .. versionchanged:: 2.4
-       The ANSI color names have been updated with names that are easier to
-       understand and align with colornames of other projects and terminals.
-       See :ref:`this table <new-ansi-color-names>` for more information.
-
-
-    Options accepted:
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``).
-
-    `linenos`
-        Set to ``True`` to have line numbers on the terminal output as well
-        (default: ``False`` = no line numbers).
-    """
-    name = 'Terminal256'
-    aliases = ['terminal256', 'console256', '256']
-    filenames = []
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-
-        self.xterm_colors = []
-        self.best_match = {}
-        self.style_string = {}
-
-        self.usebold = 'nobold' not in options
-        self.useunderline = 'nounderline' not in options
-        self.useitalic = 'noitalic' not in options
-
-        self._build_color_table()  # build an RGB-to-256 color conversion table
-        self._setup_styles()  # convert selected style's colors to term. colors
-
-        self.linenos = options.get('linenos', False)
-        self._lineno = 0
-
-    def _build_color_table(self):
-        # colors 0..15: 16 basic colors
-
-        self.xterm_colors.append((0x00, 0x00, 0x00))  # 0
-        self.xterm_colors.append((0xcd, 0x00, 0x00))  # 1
-        self.xterm_colors.append((0x00, 0xcd, 0x00))  # 2
-        self.xterm_colors.append((0xcd, 0xcd, 0x00))  # 3
-        self.xterm_colors.append((0x00, 0x00, 0xee))  # 4
-        self.xterm_colors.append((0xcd, 0x00, 0xcd))  # 5
-        self.xterm_colors.append((0x00, 0xcd, 0xcd))  # 6
-        self.xterm_colors.append((0xe5, 0xe5, 0xe5))  # 7
-        self.xterm_colors.append((0x7f, 0x7f, 0x7f))  # 8
-        self.xterm_colors.append((0xff, 0x00, 0x00))  # 9
-        self.xterm_colors.append((0x00, 0xff, 0x00))  # 10
-        self.xterm_colors.append((0xff, 0xff, 0x00))  # 11
-        self.xterm_colors.append((0x5c, 0x5c, 0xff))  # 12
-        self.xterm_colors.append((0xff, 0x00, 0xff))  # 13
-        self.xterm_colors.append((0x00, 0xff, 0xff))  # 14
-        self.xterm_colors.append((0xff, 0xff, 0xff))  # 15
-
-        # colors 16..232: the 6x6x6 color cube
-
-        valuerange = (0x00, 0x5f, 0x87, 0xaf, 0xd7, 0xff)
-
-        for i in range(217):
-            r = valuerange[(i // 36) % 6]
-            g = valuerange[(i // 6) % 6]
-            b = valuerange[i % 6]
-            self.xterm_colors.append((r, g, b))
-
-        # colors 233..253: grayscale
-
-        for i in range(1, 22):
-            v = 8 + i * 10
-            self.xterm_colors.append((v, v, v))
-
-    def _closest_color(self, r, g, b):
-        distance = 257*257*3  # "infinity" (>distance from #000000 to #ffffff)
-        match = 0
-
-        for i in range(0, 254):
-            values = self.xterm_colors[i]
-
-            rd = r - values[0]
-            gd = g - values[1]
-            bd = b - values[2]
-            d = rd*rd + gd*gd + bd*bd
-
-            if d < distance:
-                match = i
-                distance = d
-        return match
-
-    def _color_index(self, color):
-        index = self.best_match.get(color, None)
-        if color in ansicolors:
-            # strip the `ansi/#ansi` part and look up code
-            index = color
-            self.best_match[color] = index
-        if index is None:
-            try:
-                rgb = int(str(color), 16)
-            except ValueError:
-                rgb = 0
-
-            r = (rgb >> 16) & 0xff
-            g = (rgb >> 8) & 0xff
-            b = rgb & 0xff
-            index = self._closest_color(r, g, b)
-            self.best_match[color] = index
-        return index
-
-    def _setup_styles(self):
-        for ttype, ndef in self.style:
-            escape = EscapeSequence()
-            # get foreground from ansicolor if set
-            if ndef['ansicolor']:
-                escape.fg = self._color_index(ndef['ansicolor'])
-            elif ndef['color']:
-                escape.fg = self._color_index(ndef['color'])
-            if ndef['bgansicolor']:
-                escape.bg = self._color_index(ndef['bgansicolor'])
-            elif ndef['bgcolor']:
-                escape.bg = self._color_index(ndef['bgcolor'])
-            if self.usebold and ndef['bold']:
-                escape.bold = True
-            if self.useunderline and ndef['underline']:
-                escape.underline = True
-            if self.useitalic and ndef['italic']:
-                escape.italic = True
-            self.style_string[str(ttype)] = (escape.color_string(),
-                                             escape.reset_string())
-
-    def _write_lineno(self, outfile):
-        self._lineno += 1
-        outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno))
-
-    def format(self, tokensource, outfile):
-        return Formatter.format(self, tokensource, outfile)
-
-    def format_unencoded(self, tokensource, outfile):
-        if self.linenos:
-            self._write_lineno(outfile)
-
-        for ttype, value in tokensource:
-            not_found = True
-            while ttype and not_found:
-                try:
-                    # outfile.write( "<" + str(ttype) + ">" )
-                    on, off = self.style_string[str(ttype)]
-
-                    # Like TerminalFormatter, add "reset colors" escape sequence
-                    # on newline.
-                    spl = value.split('\n')
-                    for line in spl[:-1]:
-                        if line:
-                            outfile.write(on + line + off)
-                        if self.linenos:
-                            self._write_lineno(outfile)
-                        else:
-                            outfile.write('\n')
-
-                    if spl[-1]:
-                        outfile.write(on + spl[-1] + off)
-
-                    not_found = False
-                    # outfile.write( '#' + str(ttype) + '#' )
-
-                except KeyError:
-                    # ottype = ttype
-                    ttype = ttype.parent
-                    # outfile.write( '!' + str(ottype) + '->' + str(ttype) + '!' )
-
-            if not_found:
-                outfile.write(value)
-
-        if self.linenos:
-            outfile.write("\n")
-
-
-
-class TerminalTrueColorFormatter(Terminal256Formatter):
-    r"""
-    Format tokens with ANSI color sequences, for output in a true-color
-    terminal or console. Like in `TerminalFormatter` color sequences
-    are terminated at newlines, so that paging the output works correctly.
-
-    .. versionadded:: 2.1
-
-    Options accepted:
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``).
-    """
-    name = 'TerminalTrueColor'
-    aliases = ['terminal16m', 'console16m', '16m']
-    filenames = []
-
-    def _build_color_table(self):
-        pass
-
-    def _color_tuple(self, color):
-        try:
-            rgb = int(str(color), 16)
-        except ValueError:
-            return None
-        r = (rgb >> 16) & 0xff
-        g = (rgb >> 8) & 0xff
-        b = rgb & 0xff
-        return (r, g, b)
-
-    def _setup_styles(self):
-        for ttype, ndef in self.style:
-            escape = EscapeSequence()
-            if ndef['color']:
-                escape.fg = self._color_tuple(ndef['color'])
-            if ndef['bgcolor']:
-                escape.bg = self._color_tuple(ndef['bgcolor'])
-            if self.usebold and ndef['bold']:
-                escape.bold = True
-            if self.useunderline and ndef['underline']:
-                escape.underline = True
-            if self.useitalic and ndef['italic']:
-                escape.italic = True
-            self.style_string[str(ttype)] = (escape.true_color_string(),
-                                             escape.reset_string())
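
Note: the deleted file above is pip's vendored copy of Pygments' 256-color formatter; the same classes are available from the upstream pygments package. A quick sketch of their normal use (upstream import path, not the _vendor one):

# Sketch using upstream Pygments; pip's vendored copy exposes identical classes.
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import Terminal256Formatter, TerminalTrueColorFormatter

code = 'print("hello, 256 colors")\n'
# Nearest-match mapping of the style's RGB values onto the xterm-256 palette:
print(highlight(code, PythonLexer(), Terminal256Formatter(style="monokai")))
# True-color terminals can render the exact RGB values instead:
print(highlight(code, PythonLexer(), TerminalTrueColorFormatter(style="monokai")))
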