parquet-converter committed on
Commit cd1c29d · 1 parent: fcca8bc

Update parquet files (step 96 of 249)

This view is limited to 50 files because it contains too many changes. See the raw diff for the rest.
Files changed (50)
  1. spaces/101-5/gpt4free/testing/binghuan/README.md +0 -7
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cimatron E12 Crack Serial Download The Benefits and Risks of Using a Cracked Version.md +0 -112
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyprus Patch Football Manager 2008.md +0 -36
  4. spaces/1gistliPinn/ChatGPT4/Examples/Aqw Class Hack Downloadl.md +0 -13
  5. spaces/1gistliPinn/ChatGPT4/Examples/Download No Radar Pes 6 !!INSTALL!!.md +0 -9
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 How to Download and Play on Your Laptop.md +0 -103
  7. spaces/1phancelerku/anime-remove-background/Cube Rubik Solver APK The Ultimate Guide to Mastering the 3x3 Puzzle.md +0 -209
  8. spaces/1phancelerku/anime-remove-background/Download Draw Bridge Puzzle APK and Test Your Drawing Skills.md +0 -115
  9. spaces/1phancelerku/anime-remove-background/Download Last Pirate Island Survival MOD APK Terbaru and Experience a Unique Survival Game.md +0 -110
  10. spaces/4Taps/SadTalker/src/face3d/data/template_dataset.py +0 -75
  11. spaces/7hao/bingo/src/lib/bots/bing/types.ts +0 -259
  12. spaces/AIFILMS/generate_human_motion/VQ-Trans/VQ_eval.py +0 -95
  13. spaces/AIFILMS/generate_human_motion/pyrender/pyrender/renderer.py +0 -1339
  14. spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm.py +0 -233
  15. spaces/Aashiue/speech_to_text/app.py +0 -25
  16. spaces/AchyuthGamer/OpenGPT/client/css/message-input.css +0 -27
  17. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Cromicle.py +0 -50
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ModalMethods.js +0 -41
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/PerspectiveMethods.js +0 -67
  20. spaces/AkitoP/umamusume_bert_vits2/modules.py +0 -597
  21. spaces/AlanMars/QYL-AI-Space/locale/extract_locale.py +0 -26
  22. spaces/AlekseyKorshuk/huggingartists/app.py +0 -245
  23. spaces/AlekseyKorshuk/michellejieli-NSFW_text_classifier/app.py +0 -3
  24. spaces/Aloento/9Nine-PITS/models.py +0 -1383
  25. spaces/Andy1621/uniformer_image_detection/configs/atss/atss_r50_fpn_1x_coco.py +0 -62
  26. spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x512_80k_ade20k.py +0 -2
  27. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_instruct_style.css +0 -64
  28. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/__init__.py +0 -0
  29. spaces/Arnx/MusicGenXvAKN/audiocraft/data/audio_utils.py +0 -174
  30. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/utils.py +0 -136
  31. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/readers.py +0 -122
  32. spaces/AutoBG/Auto-BoardGame/Model_Constants_Template.py +0 -7
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/__main__.py +0 -31
  34. spaces/BulatF/StreamlitSentiment/README.md +0 -13
  35. spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/gather.h +0 -44
  36. spaces/CVPR/Object-Detection-With-DETR-and-YOLOS/README.md +0 -13
  37. spaces/CVPR/Text2Human/Text2Human/models/hierarchy_vqgan_model.py +0 -374
  38. spaces/CVPR/WALT/mmdet/datasets/lvis.py +0 -742
  39. spaces/CVPR/WALT/mmdet/models/roi_heads/standard_roi_head.py +0 -306
  40. spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py +0 -413
  41. spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/README.md +0 -12
  42. spaces/CognitiveLabs/GPT-auto-webscraping/AssistantService.py +0 -22
  43. spaces/Cong723/gpt-academic-public/crazy_functions/读文章写摘要.py +0 -67
  44. spaces/Cpp4App/Cpp4App/SEM/paragraph_bayesian.py +0 -52
  45. spaces/DEEMOSTECH/ChatAvatar/README.md +0 -12
  46. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/IcnsImagePlugin.py +0 -399
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_signals.py +0 -26
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dsv-576afacd.js +0 -6
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/TabItem.svelte_svelte_type_style_lang-ffbad424.js +0 -2
  50. spaces/DataScienceEngineering/1-SimPhysics-HTML5/README.md +0 -14
spaces/101-5/gpt4free/testing/binghuan/README.md DELETED
@@ -1,7 +0,0 @@
- https://github.com/xtekky/gpt4free/issues/40#issuecomment-1630946450
- The chat flow closely mimics the real Bing (create a conversation, listen to the websocket, and more),
- so I just took the Bing Provider code from the https://gitler.moe/g4f/gpt4free/ version, replaced the API endpoint and some conversation styles, and it works fine.
-
- However, Bing doesn't really support multi-turn/continuous conversation (it uses the prompt template from the original Provider: def convert(messages): https://github.com/xtekky/gpt4free/blob/e594500c4e7a8443e9b3f4af755c72f42dae83f0/g4f/Provider/Providers/Bing.py#L322).
-
- I also have a problem with emoji encoding and don't know how to fix it.
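For context, here is a minimal sketch of what a `convert(messages)`-style prompt template generally does: flatten the chat history into one prompt string. The message format and tag syntax are assumptions for illustration, not the Bing.py implementation linked above.

```python
# Illustrative sketch only -- not the linked Bing.py code. Assumes
# OpenAI-style message dicts: {"role": ..., "content": ...}.
def convert(messages: list[dict]) -> str:
    """Flatten a chat history into a single prompt string."""
    prompt = ""
    for message in messages:
        prompt += f"[{message['role']}](#message)\n{message['content']}\n\n"
    return prompt

# Because the whole history is re-sent as one flattened prompt,
# no real multi-turn state is kept on the provider's side.
print(convert([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]))
```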
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cimatron E12 Crack Serial Download The Benefits and Risks of Using a Cracked Version.md DELETED
@@ -1,112 +0,0 @@
- <br />
- <h1>Cimatron E12 Crack Serial Download: How to Get It and Why You Need It</h1>
- <p>If you are looking for a powerful and versatile CAD/CAM software for mold, die, and tool design and manufacturing, you might have heard of Cimatron E12. This software is one of the most popular and widely used solutions in the industry, offering a comprehensive set of features and benefits for various applications. However, you might also know that Cimatron E12 is not cheap, and you might not be able to afford it or justify its cost. That's why you might be interested in finding a way to get a Cimatron E12 crack serial download, which can allow you to use the software for free without any limitations. In this article, we will explain what Cimatron E12 is, what a crack serial is and why you need it, and how to download and install a Cimatron E12 crack serial safely and easily. Let's get started!</p>
- <h2>What is Cimatron E12?</h2>
- <p>Cimatron E12 is a CAD/CAM software that provides a complete solution for mold, die, and tool design and manufacturing. It enables you to design complex parts and assemblies, create high-quality molds and dies, optimize machining processes, and manage your projects efficiently. With Cimatron E12, you can benefit from:</p>
- <h2>cimatron e12 crack serial download</h2><br /><p><b><b>Download</b> ===> <a href="https://byltly.com/2uKwXh">https://byltly.com/2uKwXh</a></b></p><br /><br />
- <h3>Features and benefits of Cimatron E12</h3>
- <ul>
- <li>A user-friendly interface that allows you to work faster and more easily</li>
- <li>A powerful hybrid modeling engine that supports both parametric and direct modeling</li>
- <li>A comprehensive set of tools for mold design, including parting line analysis, core/cavity extraction, cooling system design, runner design, mold base design, electrode design, and more</li>
- <li>A robust solution for die design, including strip design, blanking analysis, progressive die design, transfer die design, springback compensation, punch design, and more</li>
- <li>An advanced CAM module that supports 2.5- to 5-axis milling, drilling, turning, wire EDM, laser cutting, additive manufacturing, and more</li>
- <li>A simulation module that allows you to verify your designs and machining operations before production</li>
- <li>A data management module that helps you organize your files, track revisions, collaborate with others, and integrate with other systems</li>
- <li>A customization module that enables you to tailor the software to your specific needs and preferences</li>
- </ul>
- <h3>System requirements and compatibility of Cimatron E12</h3>
- <p>To run Cimatron E12 smoothly on your computer, you need to meet the following minimum system requirements:</p>
- <table>
- <tr><th>Component</th><th>Minimum requirement</th></tr>
- <tr><td>Operating system</td><td>Windows 7/8/10 (64-bit)</td></tr>
- <tr><td>Processor</td><td>Intel Core i5 or higher</td></tr>
- <tr><td>Memory</td><td>8 GB RAM or higher</td></tr>
- <tr><td>Graphics card</td><td>NVIDIA Quadro or AMD FirePro with 2 GB VRAM or higher</td></tr>
- <tr><td>Hard disk space</td><td>20 GB or higher</td></tr>
- <tr><td>Internet connection</td><td>Required for activation and updates</td></tr>
- </table>
- <p>Cimatron E12 is compatible with various file formats, such as IGES, STEP, DXF/DWG, STL, Parasolid, CATIA V4/V5/V6/3DEXPERIENCE, SolidWorks, Solid Edge, NX, Creo, Inventor, and more. You can import and export files easily using the built-in translators.</p>
- <h2>What is a crack serial and why do you need it?</h2>
- <p>A crack serial is a program or code that can bypass the security measures of a piece of software and unlock its full features without paying for it. In other words, a crack serial can make the software think that it has been activated legally with a valid license key. By using a crack serial for Cimatron E12, you can enjoy all the benefits of the software without spending any money.</p>
- <h3>The advantages of using a crack serial for Cimatron E12</h3>
- <ul>
- <li>You can save a lot of money by not buying the software license.</li>
- <li>You can use the software without any time or functionality limitations.</li>
- <li>You can access all the updates and new features of the software.</li>
- <li>You can use the software on multiple computers without any restrictions.</li>
- <li>You can share the software with others who might need it.</li>
- </ul>
- <h3>The risks and challenges of using a crack serial for Cimatron E12</h3>
- <ul>
- <li>You might violate the intellectual property rights of the software developer.</li>
- <li>You might expose your computer to viruses or malware that might be hidden in the crack serial file.</li>
- <li>You might compromise your personal or professional data that might be accessed by hackers or third parties through the crack serial program.</li>
- <li>You might face legal consequences or penalties if you are caught using or distributing the crack serial.</li>
- <li>You might not get any technical support or customer service from the software developer.</li>
- <li>You might encounter errors or bugs that might affect your work quality or productivity.</li>
- </ul>
- <h2>How to download and install Cimatron E12 crack serial?</h2>
- <p>If you have decided to use a crack serial for Cimatron E12, you need to follow some steps carefully to ensure that you get it safely and successfully. Here are the steps:</p>
- <p>cimatron e12 full crack free download<br />
- cimatron e12 license key generator<br />
- cimatron e12 sp3p2 patch download<br />
- cimatron e12 cad cam software torrent<br />
- cimatron e12 activation code crack<br />
- cimatron e12 64 bit crack download<br />
- cimatron e12 serial number keygen<br />
- cimatron e12 sp1 x64 full version<br />
- cimatron e12 crack file download<br />
- cimatron e12 latest update download<br />
- cimatron e12 installation guide crack<br />
- cimatron e12 keygen download link<br />
- cimatron e12 cracked software download<br />
- cimatron e12 product key crack<br />
- cimatron e12 offline installer download<br />
- cimatron e12 registration code crack<br />
- cimatron e12 patch file free download<br />
- cimatron e12 full version download torrent<br />
- cimatron e12 license file crack<br />
- cimatron e12 crack download for windows 10<br />
- cimatron e12 activation key generator<br />
- cimatron e12 sp2 x64 crack download<br />
- cimatron e12 serial key crack<br />
- cimatron e12 full setup download link<br />
- cimatron e12 crack software free download<br />
- cimatron e12 license code crack<br />
- cimatron e12 sp4 x64 patch download<br />
- cimatron e12 keygen free download<br />
- cimatron e12 full crack download link<br />
- cimatron e12 cracked version download torrent<br />
- cimatron e12 activation file crack<br />
- cimatron e12 sp3 x64 full version download<br />
- cimatron e12 serial code keygen<br />
- cimatron e12 full package download link<br />
- cimatron e12 crack software torrent download<br />
- cimatron e12 license key crack<br />
- cimatron e12 sp5 x64 patch download link<br />
- cimatron e12 keygen torrent download link<br />
- cimatron e12 full cracked software download link<br />
- cimatron e12 cracked software torrent link</p>
- <h3>Step 1: Find a reliable source for the crack serial</h3>
- <p>The first step is to find a website or platform that offers the crack serial file for Cimatron E12. You need to be careful and cautious when choosing a source because not all of them are trustworthy or legitimate. Some sources might provide fake or outdated files that might not work or might harm your computer. To avoid this, you should look for sources that have positive reviews, feedback, testimonials, or ratings from other users who have tried them before. You should also check if the source has any guarantees, warranties, or refunds in case something goes wrong.</p>
- <h3>Step 2: Download the crack serial file and extract it</h3>
- <p>The next step is to download the crack serial file from the source that you have chosen. The file might be in a compressed format such as ZIP or RAR that needs to be extracted before using it. To extract it, you need to use a program such as WinRAR or 7-Zip that can open these formats. After extracting it, you should see a folder that contains the crack serial program and some instructions on how to use it.</p>
- <h3>Step 3: Run the crack serial program and follow the instructions</h3>
- <p>The third step is to run the crack serial program and follow the instructions that are provided in the folder or on the screen. The instructions might vary depending on the type of crack serial that you have downloaded, but generally they involve copying some files or codes into the installation directory of Cimatron E12 or entering some information such as your name or email address. You should follow these instructions carefully and accurately to ensure that the crack serial works properly.</p>
- <h3>Step 4: Enjoy your full version of Cimatron E12</h3>
- <p>The final step is to enjoy your full version of Cimatron E12 that has been activated by the crack serial. You can now use all the features and functions of the software without any limitations. You can also update the software regularly to get access to new features and improvements. However, you should also be aware of the risks and challenges that we mentioned earlier and take precautions accordingly.</p>
- <h2>Conclusion</h2>
- <p>In this article, we have explained what Cimatron E12 is, why you need it, and how to download and install a Cimatron E12 crack serial safely and easily. We have also discussed the advantages and disadvantages of using a crack serial for Cimatron E12, and the steps that you need to follow to get it. We hope that this article has been helpful and informative for you, and that you have learned something new and useful.</p>
- <p>However, we also want to remind you that using a crack serial for Cimatron E12 is not legal or ethical, and that it might cause some problems or issues for you or others. Therefore, we do not recommend or endorse using a crack serial for Cimatron E12 or any other software. We suggest that you respect the intellectual property rights of the software developer and purchase a legitimate license for Cimatron E12 if you want to use it. This way, you can support the software developer and enjoy the software without any worries or regrets.</p>
- <p>Thank you for reading this article, and we hope that you have a great day!</p>
- <h3>FAQs</h3>
- <ul>
- <li>Q: What is Cimatron E12? A: Cimatron E12 is a CAD/CAM software that provides a complete solution for mold, die, and tool design and manufacturing.</li>
- <li>Q: What is a crack serial? A: A crack serial is a program or code that can bypass the security measures of a piece of software and unlock its full features without paying for it.</li>
- <li>Q: Why do I need a crack serial for Cimatron E12? A: You might need a crack serial for Cimatron E12 if you want to use the software for free without any limitations.</li>
- <li>Q: How can I download and install Cimatron E12 crack serial? A: You can download and install a Cimatron E12 crack serial by following these steps: 1) Find a reliable source for the crack serial. 2) Download the crack serial file and extract it. 3) Run the crack serial program and follow the instructions. 4) Enjoy your full version of Cimatron E12.</li>
- <li>Q: What are the risks and challenges of using a crack serial for Cimatron E12? A: Some of the risks and challenges of using a crack serial for Cimatron E12 are: 1) You might violate the intellectual property rights of the software developer. 2) You might expose your computer to viruses or malware that might be hidden in the crack serial file. 3) You might compromise your personal or professional data that might be accessed by hackers or third parties through the crack serial program. 4) You might face legal consequences or penalties if you are caught using or distributing the crack serial. 5) You might not get any technical support or customer service from the software developer. 6) You might encounter errors or bugs that might affect your work quality or productivity.</li>
- </ul>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cyprus Patch Football Manager 2008.md DELETED
@@ -1,36 +0,0 @@
-
- <h1>How to Install Cyprus Patch for Football Manager 2008</h1>
- <p>If you are a fan of Football Manager 2008, you might want to spice up your game with some extra leagues and teams. One of the most popular patches for FM 2008 is the Cyprus Patch, which adds the Cypriot First and Second Division, as well as the Cup and Super Cup competitions. In this article, we will show you how to download and install the Cyprus Patch for Football Manager 2008.</p>
- <h2>Step 1: Download the Patch</h2>
- <p>The first thing you need to do is to download the patch file from one of the following mirrors:</p>
- <h2>Cyprus Patch Football Manager 2008</h2><br /><p><b><b>DOWNLOAD</b> &mdash;&mdash;&mdash; <a href="https://byltly.com/2uKxX2">https://byltly.com/2uKxX2</a></b></p><br /><br />
- <ul>
- <li>www.ut3.yourfirstcreditcard.com/Cyprus_Patch_08.rar</li>
- <li>rapidshare.com/files/.../Cyprus_Patch_08.rar.html</li>
- </ul>
- <p>Make sure you download the correct version of the patch for your version of Football Manager 2008. The patch is compatible with both PC and Mac versions of the game.</p>
- <h2>Step 2: Extract the Patch</h2>
- <p>Once you have downloaded the patch file, you need to extract it using a program like WinRAR or 7-Zip. You should get a folder called "Cyprus Patch 08" with two subfolders: "graphics" and "editor data".</p>
- <h2>Step 3: Copy the Patch Files</h2>
- <p>Now you need to copy the patch files to your Football Manager 2008 folder. Depending on your operating system and installation location, this folder might be different. Here are some common paths:</p>
- <ul>
- <li>C:\Program Files\Sports Interactive\Football Manager 2008\ (Windows)</li>
- <li>C:\Program Files (x86)\Sports Interactive\Football Manager 2008\ (Windows 64-bit)</li>
- <li>/Users/[username]/Library/Application Support/Sports Interactive/Football Manager 2008/ (Mac)</li>
- </ul>
- <p>You need to copy the "graphics" folder from the patch to the "graphics" folder in your FM 2008 folder. If you don't have a "graphics" folder in your FM 2008 folder, you can create one.</p>
- <p>You also need to copy the "editor data" folder from the patch to the "editor data" folder in your FM 2008 folder. If you don't have an "editor data" folder in your FM 2008 folder, you can create one.</p>
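The two copy operations above can also be scripted. A minimal sketch, assuming the patch was extracted to a Downloads folder and FM 2008 sits in the default Windows path; both paths are taken from the article and may differ on your machine:

```python
import shutil
from pathlib import Path

# Assumed paths -- adjust to where you extracted the patch and where
# FM 2008 is installed (common install paths are listed above).
patch_dir = Path(r"C:\Downloads\Cyprus Patch 08")
fm_dir = Path(r"C:\Program Files\Sports Interactive\Football Manager 2008")

for folder in ("graphics", "editor data"):
    # Merge each patch folder into the game folder, creating it if
    # missing and keeping any files already there (Python 3.8+).
    shutil.copytree(patch_dir / folder, fm_dir / folder, dirs_exist_ok=True)
```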
- <h2>Step 4: Start a New Game</h2>
- <p>Now you are ready to start a new game with the Cyprus Patch. Launch Football Manager 2008 and click on "New Game". In the database selection screen, make sure you tick the box next to "Cyprus Patch 08". You can also choose other databases and custom files if you want.</p>
- <p>Then proceed with the game setup as usual. You should be able to select Cyprus as a playable nation and choose from its clubs and leagues. Enjoy!</p>
- <p></p>
-
- <h2>Step 5: Customize Your Game</h2>
- <p>If you want to make your game more realistic and immersive, you can also download some extra files to enhance the Cyprus Patch. For example, you can download logos, kits, faces, and stadiums for the Cypriot teams and players. You can find these files on various websites and forums dedicated to Football Manager 2008.</p>
- <p>To install these files, you need to copy them to the appropriate folders in your FM 2008 folder. For example, logos go to the "graphics/logos" folder, kits go to the "graphics/kits" folder, faces go to the "graphics/players" folder, and stadiums go to the "graphics/backgrounds" folder. You might need to create these folders if they don't exist.</p>
- <p>After you copy the files, you need to reload the skin in your game. To do this, go to "Preferences" and click on "Reload Skin". You should see the new graphics in your game.</p>
- <h2>Step 6: Have Fun!</h2>
- <p>That's it! You have successfully installed the Cyprus Patch for Football Manager 2008. Now you can enjoy managing a Cypriot club or national team and compete with other European giants. You can also discover new talents and hidden gems from the island of Aphrodite. Who knows, maybe you can lead Cyprus to glory in the World Cup or the European Championship!</p>
- <p>We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Happy gaming!</p> 81aa517590<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Aqw Class Hack Downloadl.md DELETED
@@ -1,13 +0,0 @@
- <h2>Aqw Class Hack Downloadl</h2><br /><p><b><b>DOWNLOAD</b> &#9913; <a href="https://imgfil.com/2uxXcz">https://imgfil.com/2uxXcz</a></b></p><br /><br />
-
- FASTEST BOT TO GET LEVEL 100 - AQW. Here you can find the fastest way to reach level 100 with ... Read More
- BOTS IN THE BROWSER | HOW TO IMPROVE FPS IN CS:GO?
- More
- HOW TO GET LEVEL 100 FOR FREE IN CS:GO?
- HOW TO GET LEVEL 100 IN CS GO FOR FREE?!
- HOW TO GET LEVEL 100 IN CS:GO FOR FREE?
- HOW TO GET LEVEL 100 IN CS GO FOR FREE?
- HOW TO GET 100 U 8a78ff9644<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Download No Radar Pes 6 !!INSTALL!!.md DELETED
@@ -1,9 +0,0 @@
- <br />
- <p>MyRadar is a pretty comprehensive weather forecast app. It gives accurate and precise information about weather conditions in different areas around the world. Additionally, the application lets you monitor the maps for tornadoes, storms, hurricanes, etc. The program receives regular updates for both the paid and free versions. On the official website, you can check out important information around privacy policy, data protection, and ad preferences.</p>
- <h2>Download No Radar Pes 6</h2><br /><p><b><b>Download</b> &#10038; <a href="https://imgfil.com/2uxZl9">https://imgfil.com/2uxZl9</a></b></p><br /><br />
- <p>MyRadar is a pretty comprehensive weather tracking app. I like how the app automatically refreshes and the amount of information it provides. I use it for tracking when the weather is going to be good or bad for an upcoming trip.</p>
- <p>Radar is a free application to predict the weather in any part of the world. It can be used to plot and track storms, hurricanes, thunderstorms, tornadoes, earthquakes, and other natural disasters. The application has a user-friendly interface that allows you to choose the area in the world where you want to get information about weather conditions. The <strong>Radar</strong> application is available for both iOS and Android. It lets you track weather conditions in the world, check the forecast, and <strong>view the radar in real time</strong>. You can also get detailed information about current weather conditions in different areas of the world and see information about precipitation, thunderstorms, and weather forecasts.</p>
- <p>MyRadar is designed to help you catch the biggest weather events like storms, hurricanes, and tornadoes. You can get up-to-the-minute updates about the current weather conditions and forecast through the application. The program can give you detailed information about precipitation, thunderstorms, and weather forecasts. The application is packed with features like precipitation maps and radar. Plus, it allows you to track the location of a weather event, monitor the forecast, and view its latest news.</p>
- <p></p> 899543212b<br />
- <br />
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Simulator 2 How to Download and Play on Your Laptop.md DELETED
@@ -1,103 +0,0 @@
-
- <h1>Car Simulator 2: How to Download and Play on Your Laptop</h1>
- <h2>Introduction</h2>
- <p>Do you love cars and driving? Do you want to experience a realistic and fun simulation game that lets you explore a 3D open world, race with other players, customize your vehicles, and more? If you answered yes, then you should definitely try Car Simulator 2, one of the most popular and exciting car games for Android devices.</p>
- <p>But what if you don't have an Android device or you prefer playing on a bigger screen? Don't worry, because you can still enjoy Car Simulator 2 on your laptop with the help of an emulator. In this article, we will show you how to download and play Car Simulator 2 on your laptop using two different emulators: BlueStacks and GameLoop. Both of them are free, easy to use, and compatible with Windows and Mac operating systems. Let's get started!</p>
- <h2>car simulator 2 download for laptop</h2><br /><p><b><b>Download Zip</b> &#9675;&#9675;&#9675; <a href="https://urlin.us/2uSZ60">https://urlin.us/2uSZ60</a></b></p><br /><br />
- <h3>What is Car Simulator 2?</h3>
- <p>Car Simulator 2 is a simulation game developed by Oppana Games that lets you drive over 85 different cars in a realistic open world. You can play online with real players from all over the world, win races, earn money, buy new cars, upgrade them, and even buy a house. You can also choose from various game modes, such as quests, arcade challenges, cab fares, and more. You can also switch between first-person and third-person perspectives, interact with various elements in the car models, and enjoy realistic physics and sound effects.</p>
- <h3>Why play Car Simulator 2 on your laptop?</h3>
- <p>Playing Car Simulator 2 on your laptop has many advantages over playing it on your mobile device. Here are some of them:</p>
- <ul>
- <li>You can enjoy better graphics and performance on a larger screen.</li>
- <li>You can use your keyboard or gaming wheel to control your car more easily.</li>
- <li>You don't have to worry about battery drain or phone calls interrupting your game.</li>
- <li>You can access more features and settings with the emulator.</li>
- </ul>
- <h2>How to download and play Car Simulator 2 on your laptop</h2>
- <h3>Option 1: Use BlueStacks emulator</h3>
- <p>BlueStacks is one of the most popular and trusted Android emulators that allows you to play thousands of mobile games on your PC or Mac. It has a user-friendly interface, high compatibility, fast performance, and many customization options. Here are the steps to download and play Car Simulator 2 on your laptop using BlueStacks:</p>
- <h4>Step 1: Download and install BlueStacks on your PC</h4>
- <p>Go to the BlueStacks website and click on "Download BlueStacks". Once the download is complete, run the exe file and follow the instructions to install BlueStacks on your PC.</p>
- <p>car simulator 2 pc download free<br />
- car simulator 2 emulator for windows<br />
- car simulator 2 game download for laptop<br />
- car simulator 2 bluestacks on pc<br />
- car simulator 2 gameloop on pc<br />
- car simulator 2 windows pc download<br />
- car simulator 2 mac download free<br />
- car simulator 2 android emulator for laptop<br />
- car simulator 2 racing game for pc<br />
- car simulator 2 oppana games download<br />
- car simulator 2 online play on pc<br />
- car simulator 2 apk download for laptop<br />
- car simulator 2 install on windows<br />
- car simulator 2 simulation game for pc<br />
- car simulator 2 latest version download<br />
- car simulator 2 offline play on laptop<br />
- car simulator 2 mod apk for pc<br />
- car simulator 2 update download for laptop<br />
- car simulator 2 realistic driving game for pc<br />
- car simulator 2 open world game for laptop<br />
- car simulator 2 cheats and hacks for pc<br />
- car simulator 2 review and rating for laptop<br />
- car simulator 2 best cars and upgrades for pc<br />
- car simulator 2 tips and tricks for laptop<br />
- car simulator 2 gameplay and features for pc<br />
- car simulator 2 system requirements for laptop<br />
- car simulator 2 graphics and sound effects for pc<br />
- car simulator 2 multiplayer mode on laptop<br />
- car simulator 2 missions and quests for pc<br />
- car simulator 2 gas station and mechanic for laptop<br />
- car simulator 2 how to play on pc<br />
- car simulator 2 download size and speed for laptop<br />
- car simulator 2 fun and free game for pc<br />
- car simulator 2 new cars and parts for laptop<br />
- car simulator 2 police and traffic rules for pc<br />
- car simulator 2 cab fares and mob jobs for laptop<br />
- car simulator 2 beta versions and updates for pc<br />
- car simulator 2 keymapping and controls for laptop<br />
- car simulator 2 net energy gain experiment for pc<br />
- car simulator 2 mini sun fusion reactor for laptop<br />
- car simulator 2 kstar facility and korea institute of fusion energy for pc<br />
- car simulator 2 holy grail fusion experiment for laptop<br />
- car simulator 2 nuclear fusion reaction and temperature for pc<br />
- car simulator 2 sun core comparison and ratio for laptop<br />
- car simulator 2 first or third person perspective for pc<br />
- car simulator 2 interactive elements and models for laptop<br />
- car simulator 2 dynamic day-night cycle for pc<br />
- car simulator 2 facebook and vk pages for laptop<br />
- car simulator 2 feedback and comments for pc<br />
- car simulator 2 enjoy yourself and have fun on laptop</p>
- <h4>Step 2: Complete Google sign-in to access the Play Store</h4>
- <p>After installing BlueStacks, launch it and sign in with your Google account. This will allow you to access the Google Play Store from BlueStacks.</p>
- <h4>Step 3: Look for Car Simulator 2 in the search bar and click to install</h4><p>Enter Car Simulator 2 in the search bar at the top right corner of the BlueStacks home screen. You will see the game icon in the search results. Click on it and then click on "Install" to start downloading and installing Car Simulator 2 on your PC.</p>
- <h4>Step 4: Click the Car Simulator 2 icon on the home screen to start playing</h4>
- <p>Once the installation is complete, you will see the Car Simulator 2 icon on the BlueStacks home screen. Click on it and enjoy playing Car Simulator 2 on your laptop!</p>
- <h3>Option 2: Use GameLoop emulator</h3>
- <p>GameLoop is another popular and reliable Android emulator that is specially designed for gaming. It has a smooth and stable performance, high compatibility, low latency, and many optimization features. Here are the steps to download and play Car Simulator 2 on your laptop using GameLoop:</p>
- <h4>Step 1: Download and install GameLoop on your PC</h4>
- <p>Go to the GameLoop website and click on "Download". Once the download is complete, run the exe file and follow the instructions to install GameLoop on your PC.</p>
- <h4>Step 2: Open GameLoop and search for Car Simulator 2</h4>
- <p>After installing GameLoop, launch it and click on the "Game Center" tab. You will see a list of recommended games. Type Car Simulator 2 in the search box at the top right corner and press enter. You will see the game icon in the search results.</p>
- <h4>Step 3: Click to download and play Car Simulator 2 on PC</h4>
- <p>Click on the game icon and then click on "Download" to start downloading and installing Car Simulator 2 on your PC. Once the installation is complete, click on "Play" to start playing Car Simulator 2 on your laptop!</p>
- <h2>Conclusion</h2>
- <p>Car Simulator 2 is a fantastic game that lets you drive over 85 different cars in a realistic open world. You can play online with real players, win races, earn money, buy new cars, upgrade them, and more. You can also choose from various game modes, such as quests, arcade challenges, cab fares, and more.</p>
- <p>If you want to play Car Simulator 2 on your laptop, you can use an emulator like BlueStacks or GameLoop. Both of them are free, easy to use, and compatible with Windows and Mac operating systems. They also offer better graphics, performance, and control options than playing on your mobile device.</p>
- <p>So what are you waiting for? Download and play Car Simulator 2 on your laptop today and have fun!</p>
- <h3>Frequently Asked Questions</h3>
- <ul>
- <li><b>Is Car Simulator 2 free to play?</b></li>
- <p>Yes, Car Simulator 2 is free to download and play. However, it contains ads and in-app purchases that you can disable or buy with real money.</p>
- <li><b>Can I play Car Simulator 2 offline?</b></li>
- <p>Yes, you can play Car Simulator 2 offline without an internet connection. However, some features and modes may not be available or updated offline.</p>
- <li><b>How can I save my progress in Car Simulator 2?</b></li>
- <p>You can save your progress in Car Simulator 2 by signing in with your Google Play Games account or Facebook account. This will also allow you to sync your progress across different devices.</p>
- <li><b>How can I customize my car in Car Simulator 2?</b></li>
- <p>You can customize your car in Car Simulator 2 by going to the garage menu. There you can change the color, wheels, suspension, engine, transmission, brakes, nitro, turbo, and more of your car. You can also add stickers and decals to your car.</p>
- <li><b>How can I earn money in Car Simulator 2?</b></li>
- <p>You can earn money in Car Simulator 2 by winning races, completing quests, doing cab fares, watching ads, or buying it with real money.</p>
- </ul></p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Cube Rubik Solver APK The Ultimate Guide to Mastering the 3x3 Puzzle.md DELETED
@@ -1,209 +0,0 @@
-
- <h1>Cube Rubik Solver APK: How to Solve the Rubik's Cube with Your Android Device</h1>
- <h2>Introduction</h2>
- <p>Have you ever wondered how to solve the Rubik's Cube, the world's most famous and challenging puzzle? If you have, you are not alone. Millions of people around the world have tried to crack this colorful cube, but only a few have succeeded. Some people spend hours, days, or even years trying to figure out the right moves, while others give up in frustration.</p>
- <h2>cube rubik solver apk</h2><br /><p><b><b>DOWNLOAD</b> &hArr; <a href="https://jinyurl.com/2uNRRZ">https://jinyurl.com/2uNRRZ</a></b></p><br /><br />
- <p>But what if we told you that there is an easier way to solve the Rubik's Cube, using only your Android device? Yes, you read that right. Thanks to technology, you can now download and install a Cube Rubik Solver APK, an application that will guide you step by step to solve your cube in minutes or even seconds. Sounds amazing, right?</p>
- <p>In this article, we will explain what a Rubik's Cube is and why it is so popular, what a Cube Rubik Solver APK is and how it works, and how to download and install one on your device. We will also compare some of the most popular Cube Rubik Solver APKs available on the market, and give you some tips on how to use them effectively. By the end of this article, you will be able to solve any Rubik's Cube with ease and impress your friends and family.</p>
- <h2>What is a Rubik's Cube and why is it so popular?</h2>
- <h3>What is a Rubik's Cube?</h3>
- <p>A Rubik's Cube is a three-dimensional puzzle that consists of six faces, each divided into nine smaller squares of one of six colors: white, yellow, red, blue, green, and orange. The goal of the puzzle is to twist and turn the cube until each face has only one color.</p>
- <p>The Rubik's Cube was invented in 1974 by Ernő Rubik, a Hungarian professor of architecture and design. He originally created it as a model to demonstrate three-dimensional geometry, but soon realized that it could also be used as a toy. He patented his invention in 1975 and named it the "Magic Cube".</p>
- <p>How to solve a Rubik's cube with AZ Rubik's cube solver apk<br />
- Download Rubik's Solver apk for Android<br />
- Online Rubik's Cube Solver - 3D 3x3x3<br />
- AZ Rubik's cube solver apk - learn the best tricks and tips<br />
- Rubik's Solver apk - the official app from Rubik's<br />
- Solve your own puzzles with AZ Rubik's cube solver apk<br />
- Rubik's Solver apk - easy and clear steps to solve the cube<br />
- Online Rubik's Cube Solver - solve any scrambled cube<br />
- AZ Rubik's cube solver apk - practice with different cube sizes<br />
- Rubik's Solver apk - fast and optimal solution mode<br />
- Online Rubik's Cube Solver - watch the solution steps<br />
- AZ Rubik's cube solver apk - 3D graphics and free rotation<br />
- Rubik's Solver apk - learn the basics of the cube<br />
- Online Rubik's Cube Solver - enter the colors of your puzzle<br />
- AZ Rubik's cube solver apk - download for free from Uptodown<br />
- Rubik's Solver apk - download from APKCombo<br />
- Online Rubik's Cube Solver - free and online tool<br />
- AZ Rubik's cube solver apk - fun and educational game<br />
- Rubik's Solver apk - compatible with Android 5.0 or higher<br />
- Online Rubik's Cube Solver - how to use it?<br />
- AZ Rubik's cube solver apk - created by DotFinger Games<br />
- Rubik's Solver apk - developed by RubiksPhotoCube<br />
- Online Rubik's Cube Solver - powered by rubiks-cube-solver.com<br />
- AZ Rubik's cube solver apk - latest version 2.0.3<br />
- Rubik's Solver apk - latest version 1.0.1<br />
- Online Rubik's Cube Solver - updated regularly<br />
- AZ Rubik's cube solver apk - reviewed by New Scientist<br />
- Rubik's Solver apk - reviewed by APKCombo<br />
- Online Rubik's Cube Solver - trusted by millions of users<br />
- AZ Rubik's cube solver apk - solve the cube in seconds<br />
- Rubik's Solver apk - solve the cube in minutes<br />
- Online Rubik's Cube Solver - solve the cube in steps<br />
- AZ Rubik's cube solver apk - guide and timer features<br />
- Rubik's Solver apk - virtual cube and quick solution features<br />
- Online Rubik's Cube Solver - algorithm and notation features<br />
- AZ Rubik's cube solver apk - supports 2x2, 3x3, 4x4, 5x5, and 6x6 cubes<br />
- Rubik's Solver apk - supports 3x3 classic cube only<br />
- Online Rubik's Cube Solver - supports any valid starting position<br />
- AZ Rubik's cube solver apk - create your own custom cubes<br />
- Rubik's Solver apk - scan your real cube with your camera<br />
- Online Rubik's Cube Solver - customize your virtual cube colors<br />
- AZ Rubik's cube solver apk - learn how the cube works and rotates<br />
- Rubik's Solver apk - learn the logic and strategy behind the cube<br />
- Online Rubik's Cube Solver - learn the history and facts about the cube<br />
- AZ Rubik's cube solver apk - challenge yourself and improve your skills<br />
- Rubik's Solver apk - challenge your friends and compare your times<br />
- Online Rubik's Cube Solver - share your results and feedback online</p>
- <p>The Magic Cube was first sold in Hungary in 1977, but it was not until 1980 that it became an international sensation. That year, it was renamed the "Rubik's Cube" and licensed by Ideal Toy Corp., an American company that marketed it worldwide. The Rubik's Cube quickly became a best-selling toy, winning several awards and breaking records. By 1982, more than 100 million cubes had been sold.</p>
- <h3>Why is the Rubik's Cube so popular?</h3>
- <p>The Rubik's Cube is not only a toy, but also a cultural icon. It has inspired countless books, movies, songs, games, artworks, competitions, and even algorithms. It has been featured in museums, exhibitions, and festivals. It has been used as a symbol of intelligence, creativity, innovation, and problem-solving.</p>
- <p>Firstly, it is simple yet challenging. It looks easy to solve, but it has 43 quintillion possible combinations, making it extremely difficult to solve. It challenges the mind and the patience of anyone who tries it.</p>
- <p>Secondly, it is universal and timeless. It can be enjoyed by anyone, regardless of age, gender, culture, or language. It does not require batteries, electricity, or internet connection. It can be played anywhere, anytime, and with anyone. It never goes out of style or becomes obsolete.</p>
- <p>Thirdly, it is fun and rewarding. It provides a sense of accomplishment and satisfaction when solved. It stimulates the brain and improves memory, concentration, logic, and spatial awareness. It also fosters creativity and curiosity, as there are many ways to approach and solve it.</p>
- <h2>What is a Cube Rubik Solver APK and how does it work?</h2>
- <h3>What is a Cube Rubik Solver APK?</h3>
- <p>A Cube Rubik Solver APK is an application that can be downloaded and installed on an Android device, such as a smartphone or a tablet. It is designed to help users solve the Rubik's Cube by providing them with step-by-step instructions and animations.</p>
- <p>An APK (Android Package Kit) is a file format that contains all the elements needed to run an application on an Android device. It is similar to an EXE file for Windows or a DMG file for Mac. An APK file can be obtained from various sources, such as official app stores, third-party websites, or direct links.</p>
- <h3>How does a Cube Rubik Solver APK work?</h3>
- <p>A Cube Rubik Solver APK works by using the device's camera to scan the scrambled cube and analyze its colors and positions. Then, it applies a mathematical algorithm to find the optimal solution for the cube. Finally, it displays the solution on the screen in the form of text instructions and 3D animations that show how to rotate the cube.</p>
- <p>The user can choose between different modes of solving the cube, such as beginner, intermediate, advanced, or expert. The user can also adjust the speed and difficulty of the solution, as well as the color scheme and orientation of the cube.</p>
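As an illustration of the "analyze its colors and positions" step, here is a minimal sketch (not code from any specific app) of how a scanned 3x3 state can be represented and sanity-checked before solving: six faces of nine stickers, with every color appearing exactly nine times.

```python
from collections import Counter

# Illustrative only -- not taken from any of the apps discussed here.
def is_plausible_scan(faces: dict[str, list[str]]) -> bool:
    """faces maps each side (U, D, F, B, L, R) to its 9 sticker colors."""
    counts = Counter(color for stickers in faces.values() for color in stickers)
    return (
        len(faces) == 6
        and all(len(stickers) == 9 for stickers in faces.values())
        and len(counts) == 6
        and all(n == 9 for n in counts.values())
    )

# A solved cube: each face is nine stickers of a single color.
solved = {face: [color] * 9 for face, color in zip("UDFBLR", "WYRGBO")}
print(is_plausible_scan(solved))  # True
```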
- <h2>Benefits of using a Cube Rubik Solver APK</h2>
- <p>Using a Cube Rubik Solver APK has many benefits for users who want to solve the Rubik's Cube. Some of these benefits are:</p>
- <ul>
- <li>It saves time and effort. Instead of spending hours or days trying to figure out the cube by trial and error, users can solve it in minutes or seconds with the help of the app.</li>
- <li>It boosts confidence and motivation. Users can feel proud and happy when they solve the cube with ease and speed. They can also challenge themselves to improve their skills and beat their own records.</li>
- <li>It enhances learning and understanding. Users can learn the logic and principles behind the cube's movements and patterns. They can also understand how the app's algorithm works and how it finds the best solution.</li>
- <li>It increases fun and enjoyment. Users can have fun playing with the cube and watching the app's animations. They can also share their achievements with their friends and family or compete with other users online.</li>
- </ul>
- <h2>How to download and install a Cube Rubik Solver APK</h2>
- <p>If you want to use a Cube Rubik Solver APK on your Android device, you need to download and install it first. Here are the steps you need to follow:</p>
- <h3>Step 1: Find a reliable source for the APK file</h3>
- <p>The first thing you need to do is to find a trustworthy website that offers the APK file of the Cube Rubik Solver app you want to use. There are many websites that claim to provide free and safe APK files, but some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information.</p>
- <p>Therefore, you need to be careful and do some research before downloading any APK file from an unknown source. You can check the reviews, ratings, comments, and feedback of other users who have downloaded the same file. You can also use antivirus software or online tools to scan the file for any potential threats.</p>
- <p>Some of the reputable websites that offer Cube Rubik Solver APK files are:</p>
- <ul>
- <li>[APKPure]: This website provides a large collection of APK files for various apps and games, including Cube Rubik Solver apps. It also updates its files regularly and verifies their security and quality.</li>
- <li>[APKMirror]: This website is another popular source for APK files, especially for apps that are not available on the official app stores. It also ensures that its files are safe and authentic.</li>
- <li>[Uptodown]: This website is a global platform that offers APK files for thousands of apps and games in different languages and regions. It also checks its files for viruses and malware.</li>
- </ul>
- <h3>Step 2: Enable unknown sources on your device settings</h3>
- <p>The next thing you need to do is to allow your device to install apps from unknown sources. This is because most Android devices have a default setting that prevents them from installing apps that are not downloaded from the official app stores, such as Google Play Store or Amazon Appstore.</p>
- <p>To enable unknown sources on your device settings, you need to follow these steps:</p>
- <ol>
- <li>Go to your device's Settings menu and tap on Security or Privacy.</li>
- <li>Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.</li>
- <li>A warning message may appear, asking you to confirm your action. Tap on OK or Allow to proceed.</li>
- </ol>
- <p>Note: The exact steps may vary depending on your device model and Android version. You can also disable this option after installing the app if you want to.</p>
- <h3>Step 3: Download and install the APK file</h3>
- <p>The final thing you need to do is to download and install the APK file of the Cube Rubik Solver app you want to use. To do this, you need to follow these steps:</p>
- <ol>
- <li>Open your device's browser and go to the website where you found the APK file.</li>
- <li>Find the download button or link and tap on it. The file will start downloading automatically.</li>
- <li>Once the download is complete, go to your device's Downloads folder and find the APK file.</li>
- <li>Tap on the file and follow the installation instructions on the screen.</li>
- <li>Wait for the installation process to finish. You may see a message that says App Installed or Done.</li>
- <li>Tap on Open or Launch to start using the app.</li>
- </ol>
- <h2>How to use a Cube Rubik Solver APK</h2>
- <p>Now that you have downloaded and installed a Cube Rubik Solver APK on your Android device, you may be wondering how to use it to solve your Rubik's Cube. Don't worry, it's very easy and intuitive. Here are the steps you need to follow:</p>
- <h3>Step 1: Scan your scrambled cube with your device camera</h3>
- <p>The first thing you need to do is to scan your scrambled cube with your device camera. To do this, you need to follow these steps:</p>
- <ol>
- <li>Open the Cube Rubik Solver app on your device.</li>
- <li>Hold your cube in front of your device camera, making sure that the entire face is visible and well-lit.</li>
- <li>The app will automatically detect the colors and positions of the squares on the face.</li>
- <li>Repeat this process for all six faces of the cube, following the app's instructions on which face to scan next.</li>
- <li>The app will show you a 3D model of your scanned cube on the screen. You can rotate it and zoom in or out to check if it matches your real cube.</li>
- <li>If you notice any errors or discrepancies, you can tap on the Edit button and manually adjust the colors and positions of the squares.</li>
- <li>Once you are satisfied with the scanned cube, tap on the Solve button and wait for the app to find the solution.</li>
- </ol>
- <p>Note: The scanning process may vary depending on the app you are using. Some apps may require you to scan only one face at a time, while others may allow you to scan multiple faces at once. Some apps may also have different color schemes or orientations for the cube. You can check the app's settings or help section for more details.</p>
- <h3>Step 2: Choose a solution mode and follow the instructions</h3>
- <p>The next thing you need to do is to choose a solution mode and follow the instructions. To do this, you need to follow these steps:</p>
- <ol>
- <li>The app will show you several options for solving the cube, such as beginner, intermediate, advanced, or expert. You can choose the one that suits your skill level and preference.</li>
- <li>The app will also show you how many moves and how much time it will take to solve the cube in each mode. You can compare them and select the one that meets your goals.</li>
- <li>The app will then display the solution on the screen in the form of text instructions and 3D animations. The text instructions will tell you which face to rotate and in which direction, using standard notation such as R for right, L for left, U for up, D for down, F for front, B for back, and ' for counterclockwise (a short code sketch of this notation follows the note after this list). The 3D animations will show you how the cube changes after each move.</li>
- <li>You can follow the instructions and animations on your device screen and perform the same moves on your real cube. You can also pause, resume, rewind, or fast-forward the solution as needed.</li>
- <li>The app will keep track of your progress and tell you when you have solved the cube.</li>
- </ol>
- <p>Note: The solution mode and instructions may vary depending on the app you are using. Some apps may have different modes or levels of difficulty, such as easy, normal, hard, or expert. Some apps may also have different notations or formats for the instructions, such as arrows, symbols, or colors. You can check the app's settings or help section for more details.</p>
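To make the notation above concrete, here is a minimal sketch (not taken from any of these apps) that tokenizes a move sequence and inverts it; inverting is effectively what a "rewind" does. The "2" modifier for half turns is an assumption; it is common in solver output but not mentioned in the article.

```python
import re

# Each move: a face letter (R, L, U, D, F, B), optionally followed by
# ' (counterclockwise) or 2 (half turn -- assumed, common in solvers).
MOVE_RE = re.compile(r"([RLUDFB])(['2]?)")

def parse_moves(sequence: str) -> list[tuple[str, str]]:
    """Turn a string like "R U R' U2" into (face, direction) pairs."""
    names = {"": "cw", "'": "ccw", "2": "half"}
    return [(face, names[mod]) for face, mod in MOVE_RE.findall(sequence)]

def invert(sequence: str) -> str:
    """Undo a sequence: reverse the order and flip each turn direction."""
    flipped = {"": "'", "'": "", "2": "2"}
    return " ".join(face + flipped[mod]
                    for face, mod in reversed(MOVE_RE.findall(sequence)))

print(parse_moves("R U R' U'"))  # [('R', 'cw'), ('U', 'cw'), ('R', 'ccw'), ('U', 'ccw')]
print(invert("R U R' U'"))       # U R U' R'
```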
133
- <h3>Step 3: Enjoy solving your cube in minutes or seconds</h3>
134
- <p>The final thing you need to do is to enjoy solving your cube in minutes or seconds. To do this, you need to follow these steps:</p>
135
- <ol>
136
- <li>Congratulate yourself for solving the Rubik's Cube with ease and speed. You have just accomplished something that many people find impossible or extremely difficult.</li>
137
- <li>Feel free to share your achievement with your friends and family or post it on social media. You can also take a screenshot or a video of your solved cube and your solution time.</li>
138
- <li>If you want to challenge yourself further, you can try to solve the cube faster or with fewer moves. You can also try different types or sizes of cubes, such as 2x2x2, 4x4x4, 5x5x5, or even 7x7x7.</li>
139
- <li>If you want to learn more about the Rubik's Cube and its history, theory, methods, algorithms, competitions, and culture, you can visit some of these websites:</li>
140
- <ul>
141
- <li>[World Cube Association]: This is the official organization that governs Rubik's Cube competitions and records. It also provides information on events, rankings, regulations, and news.</li>
142
- <li>[Speedsolving.com]: This is a community website for speedcubers and puzzle enthusiasts. It features forums, articles, tutorials, resources, and tools.</li>
143
- <li>[Ruwix.com]: This is a website dedicated to the Rubik's Cube and other twisty puzzles. It offers online solvers, simulators, timers, guides, and trivia.</li>
144
- </ul>
145
- </ol>
146
- <p>You have now learned how to use a Cube Rubik Solver APK to solve the Rubik's Cube with your Android device. We hope you enjoyed this article and found it useful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy cubing!</p>
147
- <h2>Comparison of some popular Cube Rubik Solver APKs</h2>
148
- <p>As we mentioned earlier, there are many Cube Rubik Solver APKs available on the market, each with its own features and advantages. To help you choose the best one for your needs and preferences, we have compared some of the most popular ones in terms of their ratings, downloads, size, and functionality. Here is a table that summarizes our comparison:</p>
149
- <table>
150
- <tr>
151
- <th>Cube Rubik Solver APK</th>
152
- <th>Rating</th>
153
- <th>Downloads</th>
154
- <th>Size</th>
155
- <th>Functionality</th>
156
- </tr>
157
- <tr>
158
- <td>AZ Rubik's Cube Solver</td>
159
- <td>4.5/5</td>
160
- <td>1M+</td>
161
- <td>8.9 MB</td>
162
- <td>- Supports 2x2x2 to 7x7x7 cubes<br>- Offers beginner to expert modes<br>- Allows manual or automatic scanning<br>- Shows text and 3D animations<br>- Has customizable settings and themes<br>- Includes a timer and a leaderboard</td>
163
- </tr>
164
- <tr>
165
- <td>Rubik's Solver</td>
166
- <td>4.4/5</td>
167
- <td>500K+</td>
168
- <td>6.8 MB</td>
169
- <td>- Supports 3x3x3 cubes only<br>- Offers beginner to advanced modes<br>- Requires manual scanning<br>- Shows text and 2D animations<br>- Has simple settings and interface<br>- Includes a timer and a history</td>
170
- </tr>
171
- <tr>
172
- <td>Online Rubik's Cube Solver</td>
173
- <td>4.3/5</td>
174
- <td>100K+</td>
175
- <td>4.1 MB</td>
176
- <td>- Supports 2x2x2 to 6x6x6 cubes<br>- Offers easy to expert modes<br>- Allows manual or automatic scanning<br>- Shows text and 3D animations<br>- Has adjustable settings and colors<br>- Includes a timer and a statistics</td>
177
- </tr>
178
- </table>
179
- <p>Note: The information in this table is based on the data available at the time of writing this article. It may change or vary depending on the updates or changes made by the app developers or providers.</p>
180
- <h2>Conclusion</h2>
181
- <h3>Summary of the main points</h3>
182
- <p>In this article, we have covered the following topics:</p>
183
- <ul>
184
- <li>What is a Rubik's Cube and why is it so popular?</li>
185
- <li>What is a Cube Rubik Solver APK and how does it work?</li>
186
- <li>Benefits of using a Cube Rubik Solver APK</li>
187
- <li>How to download and install a Cube Rubik Solver APK</li>
188
- <li>How to use a Cube Rubik Solver APK</li>
189
- <li>Comparison of some popular Cube Rubik Solver APKs</li>
190
- </ul>
191
- <p>We have also provided you with some tips, resources, and examples to help you solve the Rubik's Cube with ease and speed using your Android device.</p>
192
- <h3>Call to action and final remarks</h3>
193
- <p>If you are interested in trying out a Cube Rubik Solver APK, we recommend you to download one of the apps we have compared in this article and follow our instructions on how to use it. You can also explore other apps that may suit your needs and preferences better.</p>
194
- <p>We hope you enjoyed this article and found it useful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.</p>
195
- <p>Thank you for reading this article and happy cubing!</p>
196
- <h2>Frequently Asked Questions (FAQs)</h2>
197
- <p>Here are some of the most common questions that people ask about Cube Rubik Solver APKs:</p>
198
- <h3>Q: Is using a Cube Rubik Solver APK cheating?</h3>
199
- <p>A: No, using a Cube Rubik Solver APK is not cheating. It is a tool that can help you learn and improve your skills in solving the Rubik's Cube. It can also provide you with fun and entertainment. However, if you are participating in a competition or a challenge, you should not use the app, as it may be considered unfair or dishonest by the organizers or the other participants.</p>
200
- <h3>Q: How accurate and reliable are Cube Rubik Solver APKs?</h3>
201
- <p>A: Cube Rubik Solver APKs are generally accurate and reliable, as they use mathematical algorithms and formulas to find the optimal solution for the cube. However, some factors may affect their accuracy and reliability, such as the quality of the device camera, the lighting conditions, the color recognition, and the scanning process. Therefore, you should always check the scanned cube and the solution on the screen before following them on your real cube. You should also make sure that the app is updated and compatible with your device.</p>
202
- <h3>Q: Are Cube Rubik Solver APKs safe and secure?</h3>
203
- <p>A: Cube Rubik Solver APKs are usually safe and secure, as they do not require any special permissions or access to your device's data or functions. However, as with any APK file, you should always download and install them from reputable and trustworthy sources, such as official app stores or websites. You should also scan them for any viruses, malware, or spyware before installing them on your device. You should also read the app's privacy policy and terms of service to understand how it collects, uses, and protects your personal information.</p>
204
- <h3>Q: Do Cube Rubik Solver APKs work offline?</h3>
205
- <p>A: Most Cube Rubik Solver APKs work offline, as they do not require any internet connection to scan the cube or find the solution. However, some apps may require an internet connection for some features or functions, such as downloading updates, accessing online resources, or sharing your results. You should check the app's description or settings to see if it works offline or not.</p>
206
- <h3>Q: Can I use Cube Rubik Solver APKs on other devices besides Android?</h3>
207
- <p>A: No, Cube Rubik Solver APKs are designed to work only on Android devices, such as smartphones or tablets. They are not compatible with other devices or operating systems, such as iOS, Windows, Mac, or Linux. However, there may be other apps or websites that offer similar services for other devices or platforms. You can search for them online or ask for recommendations from other users.</p>
spaces/1phancelerku/anime-remove-background/Download Draw Bridge Puzzle APK and Test Your Drawing Skills.md DELETED
@@ -1,115 +0,0 @@
1
-
2
- <h1>Draw Bridge Puzzle APK: A Fun and Challenging Game for Android Users</h1>
3
- <p>Do you love puzzle games that test your logic and creativity? Do you enjoy drawing lines and shapes to solve problems? If you answered yes to these questions, then you should try Draw Bridge Puzzle APK, a new and exciting game for Android devices. In this article, we will tell you everything you need to know about this game, including what it is, how to play it, why you should download it, and how to get it on your device. Let's get started!</p>
4
- <h2>draw bridge puzzle apk</h2><br /><p><b><b>DOWNLOAD</b> ::: <a href="https://jinyurl.com/2uNK0M">https://jinyurl.com/2uNK0M</a></b></p><br /><br />
5
- <h2>What is Draw Bridge Puzzle APK?</h2>
6
- <p>Draw Bridge Puzzle APK is a game developed by Weegoon, a studio that specializes in creating fun and addictive games for mobile platforms. The game is based on the concept of drawing bridges to connect two points and guide a car to the goal. The game has hundreds of levels with different scenarios and obstacles, such as gaps, hills, spikes, enemies, and more. The game also has simple and colorful graphics, smooth animations, and relaxing music that make it enjoyable to play.</p>
7
- <h3>A brief introduction to the game and its features</h3>
8
- <p>The game is easy to play but hard to master. All you need to do is drag your finger on the screen to create a path for the car. You can only draw the line once, so be careful not to make any mistakes. You also need to consider the physics and gravity of the game, as well as the speed and direction of the car. The game will reward you with stars based on how well you complete each level. You can use these stars to unlock new cars and skins that have different abilities and appearances.</p>
9
- <h2>How to Play Draw Bridge Puzzle APK?</h2>
10
- <p>The game has a simple and intuitive interface that allows you to start playing right away. You can choose from different modes, such as classic, challenge, arcade, or custom. You can also adjust the settings, such as sound, music, vibration, language, or theme. The game also has a leaderboard and achievements system that lets you compare your scores and progress with other players around the world.</p>
11
- <h3>The basic rules and mechanics of the game</h3>
12
- <p>The game consists of several stages, each with a number of levels. Each level has a start point, an end point, and some obstacles in between. Your goal is to draw a bridge that connects the start point and the end point without touching any obstacles or falling off the screen. You also need to make sure that the car can cross the bridge safely without crashing or getting stuck. The game will show you a preview of your bridge before you release your finger, so you can check if it is feasible or not.</p>
13
- <h4>Tips and tricks to solve the puzzles</h4>
14
- <ul>
15
- <li>Use curved lines instead of straight lines to create smoother bridges.</li>
16
- <li>Use short lines instead of long lines to save ink and avoid unnecessary loops.</li>
17
- <li>Use multiple lines instead of one line to create more stable bridges.</li>
18
- <li>Use trial and error method to find the best solution for each level.</li>
19
- <li>Watch ads or use hints if you get stuck or need some help.</li>
20
- </ul>
21
- <h2>Why You Should Download Draw Bridge Puzzle APK?</h2>
22
- <p>If you are looking for a game that can keep you entertained and challenged for hours, then Draw Bridge Puzzle APK is the perfect choice for you. Here are some of the reasons why you should download this game:</p>
23
- <h3>The benefits and advantages of playing the game</h3>
24
- <ul>
25
- <li>It improves your logical thinking and problem-solving skills <li>It stimulates your creativity and imagination</li>
26
- <li>It provides you with fun and relaxation</li>
27
- <li>It offers you a variety of levels and modes to suit your preferences and skills</li>
28
- <li>It updates regularly with new features and content</li>
29
- </ul>
30
- <h4>The positive reviews and ratings from other players</h4>
31
- <p>Don't just take our word for it, see what other players have to say about Draw Bridge Puzzle APK. The game has received thousands of positive reviews and ratings on Google Play Store, with an average score of 4.5 out of 5 stars. Here are some of the comments from satisfied users:</p>
81
- <blockquote>
82
- <p>"This game is awesome. It's very addictive and challenging. I love the graphics and the sound effects. It's a great way to pass time and have fun."</p>
83
- <p>"I really enjoy this game. It makes me think and use my brain. It's not too easy or too hard. It's just perfect."</p>
84
- <p>"This game is amazing. It's so creative and original. I like how you can draw different bridges and see how they work. It's very satisfying."</p>
85
- </blockquote>
86
- <h2>How to Download and Install Draw Bridge Puzzle APK?</h2>
87
- <p>If you are ready to join the fun and challenge of Draw Bridge Puzzle APK, follow these simple steps to download and install the game on your Android device (a scripted alternative is sketched just after the list):</p>
88
- <h3>The steps and requirements to get the game on your device</h3>
89
- <ol>
90
- <li>Make sure that your device meets the minimum requirements for the game, which are: Android 4.4 or higher, 50 MB of free storage space, and an internet connection.</li>
91
- <li>Go to the official website of Draw Bridge Puzzle APK, which is [Draw Bridge Puzzle APK], or scan the QR code below with your device's camera.</li>
92
- <li>Click on the download button and wait for the game file to be downloaded on your device.</li>
93
- <li>Once the download is complete, locate the game file in your device's file manager and tap on it to start the installation process.</li>
94
- <li>Follow the instructions on the screen and grant the necessary permissions for the game to run properly.</li>
95
- <li>After the installation is done, you can launch the game from your device's home screen or app drawer.</li>
96
- </ol>
97
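- <p>For readers comfortable with a command line, the manual steps above can also be scripted over USB debugging. The snippet below is a minimal sketch, assuming the Android SDK's adb tool is installed and the APK has already been downloaded; the file name is a hypothetical placeholder.</p>
- <pre><code>import subprocess
- 
- APK = "draw-bridge-puzzle.apk"  # hypothetical file name; point this at your download
- 
- # List connected devices to confirm USB debugging is enabled
- subprocess.run(["adb", "devices"], check=True)
- 
- # Sideload the APK; -r replaces the app if an older version is installed
- subprocess.run(["adb", "install", "-r", APK], check=True)
- </code></pre>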
- <h4>The safety and security of the game file</h4>
98
- <p>You might be wondering if Draw Bridge Puzzle APK is safe and secure to download and install on your device. The answer is yes, it is. The game file is scanned by various antivirus programs and verified by Google Play Protect, which ensures that it is free from any malware, viruses, or harmful code. The game also respects your privacy and does not collect or share any personal or sensitive information from your device.</p>
99
- <h2>Conclusion</h2>
100
- <p>In conclusion, Draw Bridge Puzzle APK is a fun and challenging game that will test your logic and creativity while providing you with hours of entertainment and relaxation. The game has hundreds of levels with different scenarios and obstacles, simple and colorful graphics, smooth animations, relaxing music, various modes, settings, cars, skins, leaderboards, achievements, hints, ads, and more. The game is easy to play but hard to master, so you will never get bored or frustrated. The game is also safe and secure to download and install on your Android device, as it is scanned by antivirus programs and verified by Google Play Protect. So what are you waiting for? Download Draw Bridge Puzzle APK today and enjoy drawing bridges to solve puzzles!</p>
101
- <h3>Frequently Asked Questions</h3>
102
- <ul>
103
- <li><strong>Q: How much does Draw Bridge Puzzle APK cost?</strong></li>
104
- <li>A: Draw Bridge Puzzle APK is free to download and play. However, it contains some optional in-app purchases that can enhance your gaming experience, such as removing ads, buying hints, or unlocking cars and skins.</li>
105
- <li><strong>Q: How can I contact the developer of Draw Bridge Puzzle APK?</strong></li>
106
- <li>A: You can contact the developer of Draw Bridge Puzzle APK by sending an email to [email protected] or visiting their Facebook page at [Weegoon].</li>
107
- <li><strong>Q: How can I support the developer of Draw Bridge Puzzle APK?</strong></li>
108
- <li>A: You can support the developer of Draw Bridge Puzzle APK by rating and reviewing the game on Google Play Store, sharing it with your friends and family, or making an in-app purchase if you like.</li>
109
- <li><strong>Q: How can I update Draw Bridge Puzzle APK?</strong></li>
110
- <li>A: You can update Draw Bridge Puzzle APK by visiting its official website or Google Play Store page regularly and checking for any new versions available. You can also enable automatic updates on your device to receive the latest updates automatically.</li>
111
- <li><strong>Q: How can I uninstall Draw Bridge Puzzle APK?</strong></li>
112
- <li>A: You can uninstall Draw Bridge Puzzle APK by going to your device's settings, selecting apps, finding Draw Bridge Puzzle APK, and tapping on uninstall. You can also long-press the game icon on your home screen or app drawer and drag it to the uninstall option.</li>
113
- </ul>
spaces/1phancelerku/anime-remove-background/Download Last Pirate Island Survival MOD APK Terbaru and Experience a Unique Survival Game.md DELETED
@@ -1,110 +0,0 @@
1
-
2
- <h1>Download Last Pirate Island Survival Mod Apk Terbaru: A Guide for Android Users</h1>
3
- <p>Do you love pirate games? Do you enjoy survival and adventure games? If you answered yes to both questions, then you should try Last Pirate Island Survival, a game that combines both genres in an immersive and realistic way. In this game, you will be stranded on a deserted island, where you will have to fight for your life against zombies, wild animals, and other pirates. You will also have to craft weapons, build shelters, and explore the island for resources and secrets. Sounds exciting, right?</p>
4
- <h2>download last pirate island survival mod apk terbaru</h2><br /><p><b><b>Download</b> &hArr; <a href="https://jinyurl.com/2uNMgU">https://jinyurl.com/2uNMgU</a></b></p><br /><br />
5
- <p>But wait, there's more. You can also download the mod apk version of the game, which will give you access to unlimited money, free craft, god mode, and other features that will make your gameplay more fun and easy. In this article, we will tell you everything you need to know about Last Pirate Island Survival and how to download the mod apk terbaru version for your Android device. Let's get started!</p>
6
- <h2>What is Last Pirate Island Survival?</h2>
7
- <p>Last Pirate Island Survival is a survival and pirate game developed by RetroStyle Games UA. It was released in 2019 and has since gained over 10 million downloads on Google Play Store. The game is set in a post-apocalyptic world, where a deadly virus has turned most people into zombies. You are one of the few survivors who managed to escape to an island, where you hope to find a safe haven. However, you soon realize that the island is not as peaceful as it seems. You will have to face many dangers and enemies, such as:</p>
8
- <ul>
9
- <li>Zombies: These are the most common threat on the island. They are fast, aggressive, and hungry for flesh. You will need to use your weapons and skills to fend them off or avoid them.</li>
10
- <li>Wild animals: The island is home to various animals, such as wolves, bears, crocodiles, and sharks. Some of them are friendly and can be tamed as pets, while others are hostile and will attack you on sight.</li>
11
- <li>Other pirates: You are not the only pirate on the island. There are other groups of pirates who want to take over the island and loot its treasures. You will have to fight them or ally with them depending on your strategy.</li>
12
- </ul>
13
- <h3>Features of the game</h3>
14
- <p>Last Pirate Island Survival has many features that make it a unique and enjoyable game. Some of them are:</p>
40
- <ul>
41
- <li>Realistic graphics and sound effects: The game has stunning 3D graphics that create a realistic atmosphere of the island. You can see the details of the environment, such as the trees, rocks, water, and weather. The sound effects also add to the immersion, such as the waves crashing, the wind blowing, and the zombies groaning.</li>
42
- <li>Crafting and building system: The game allows you to craft various items and weapons using the resources you find on the island. You can make swords, axes, bows, guns, bombs, and more. You can also build shelters, traps, fences, and other structures to protect yourself from enemies and weather.</li>
43
- <li>Exploration and discovery: The game has a large open world map that you can explore freely. You can find hidden caves, shipwrecks, treasure chests, and other secrets on the island. You can also interact with different objects and NPCs on the island.</li>
44
- <li>Pet system: The game lets you tame some of the animals on the island as your pets. You can feed them, play with them, and use them as your companions in combat or exploration.</li>
45
- </ul>
46
- <h3>Challenges and tips</h3>
47
- <p>Last Pirate Island Survival is not an easy game. It has many challenges that will test your skills and patience. Some of them are:</p>
48
- <ul>
49
- <li>Hunger and thirst: You will have to monitor your hunger and thirst levels constantly. If they drop too low, you will lose health and stamina. You will have to find food and water sources on the island, such as fruits, vegetables, fish, and rainwater. You can also cook your food using a fire or a stove.</li>
50
- <li>Health and stamina: You will also have to watch your health and stamina levels. If you get injured or exhausted, you will need to heal yourself using bandages, potions, or resting. You can also improve your health and stamina by leveling up your skills and attributes.</li>
51
- <li>Enemies and combat: You will have to face many enemies on the island, such as zombies, animals, and pirates. You will need to use your weapons and tactics to defeat them or escape from them. You can also use stealth, traps, and explosives to gain an advantage in combat.</li>
52
- </ul>
53
- <p>Here are some tips that will help you survive and thrive on the island:</p>
54
- <ul>
55
- <li>Gather resources: The island has many resources that you can collect and use for crafting and building. You can chop trees, mine rocks, hunt animals, fish in the sea, and loot chests. You can also trade with other pirates or raid their camps for more resources.</li>
56
- <li>Upgrade your equipment: You can upgrade your weapons, armor, and tools using the crafting system. You can also find or buy better equipment from other pirates or merchants. Upgrading your equipment will increase your damage, defense, and durability.</li>
57
- <li>Explore the island: The island has many secrets and surprises that you can discover by exploring it. You can find hidden locations, quests, items, and events that will enrich your gameplay. You can also learn more about the island's history and lore by reading notes, books, and journals.</li>
58
- <li>Customize your character: You can customize your character's appearance, skills, and attributes using the game's options. You can choose your gender, hair style, skin color, clothes, and accessories. You can also level up your skills and attributes by gaining experience points from various activities.</li>
59
- </ul>
60
- <h2>Why download the mod apk version?</h2>
61
- <p>If you want to enjoy Last Pirate Island Survival without any limitations or restrictions, you should download the mod apk version of the game. The mod apk version is a modified version of the game that has some extra features that are not available in the original version. Some of these features are:</p>
62
- <h3>Benefits of the mod apk</h3>
63
- <p>The mod apk version of Last Pirate Island Survival has many benefits that will make your gameplay more fun and easy. Some of them are:</p>
64
- <ul>
65
- <li>Unlimited money: The mod apk version gives you unlimited money that you can use to buy anything you want in the game. You can buy weapons, armor, tools, items, pets, and more without worrying about running out of money.</li>
66
- <li>Free craft: The mod apk version allows you to craft anything you want without needing any resources or materials. You can craft weapons, armor, tools, items, buildings, and more without having to gather any resources.</li>
67
- <li>God mode: The mod apk version enables you to activate god mode that makes you invincible in the game. You can survive any attack from enemies or hazards without losing any health or stamina.</li>
68
- <li>No ads: The mod apk version removes all the ads that appear in the game. You can play the game without any interruptions or distractions from annoying ads.</li>
69
- </ul>
70
- <h3>How to download and install the mod apk</h3>
71
- <p>If you want to download and install the mod apk version of Last Pirate Island Survival on your Android device, you will need to follow these simple steps (a checksum-verification sketch follows the list):</p>
72
- <ol>
73
- <li>Download the mod apk file from a reliable source on the internet. You can search for "download last pirate island survival mod apk terbaru" on Google or any other search engine.</li>
74
- <li>Enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.</li>
75
- <li>Locate the downloaded mod apk file on your device storage and tap on it to install it.</li>
76
- <li>Wait for the installation process to finish and then launch the game from your app drawer or home screen.</li>
77
- <li>Enjoy playing Last Pirate Island Survival with unlimited money, free craft, god mode, and no ads!</li>
78
- </ol>
79
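- <p>As a companion to the steps above, it is worth verifying that a file downloaded from outside an official store is the one you intended to get. Below is a minimal sketch, assuming the distributor publishes a SHA-256 checksum; the file name and checksum are hypothetical placeholders.</p>
- <pre><code>import hashlib
- 
- APK_PATH = "last-pirate-island-survival.apk"  # hypothetical file name
- EXPECTED_SHA256 = "publisher-provided-checksum-goes-here"  # placeholder
- 
- digest = hashlib.sha256()
- with open(APK_PATH, "rb") as f:
-     # Hash the file in 1 MiB chunks to keep memory use flat
-     for chunk in iter(lambda: f.read(1024 * 1024), b""):
-         digest.update(chunk)
- 
- if digest.hexdigest() == EXPECTED_SHA256.lower():
-     print("Checksum matches - file is intact")
- else:
-     print("Checksum MISMATCH - do not install this file")
- </code></pre>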
- <h2>Conclusion</h2>
80
- <p>Last Pirate Island Survival is a survival and pirate game that offers a lot of fun and excitement for Android users. You can experience a realistic and immersive gameplay on a deserted island full of dangers and secrets. You can also download the mod apk version of the game that gives you access to unlimited money, free craft, god mode, and no ads. If you are looking for a game that combines survival and adventure with pirate themes, then you should try Last Pirate Island Survival today!</p>
81
- <h3>Summary of the main points</h3> <p>In this article, we have covered the following main points:</p>
82
- <ul>
83
- <li>Last Pirate Island Survival is a survival and pirate game that lets you explore, craft, build, and fight on a deserted island.</li>
84
- <li>The game has realistic graphics, sound effects, and gameplay that create an immersive and challenging experience.</li>
85
- <li>The game has many features, such as crafting, building, exploration, discovery, pet system, customization, and more.</li>
86
- <li>The game has many challenges, such as hunger, thirst, health, stamina, enemies, and combat.</li>
87
- <li>The game also has a mod apk version that gives you unlimited money, free craft, god mode, and no ads.</li>
88
- <li>The mod apk version can be downloaded and installed easily on your Android device by following some simple steps.</li>
89
- </ul>
90
- <h3>FAQs</h3>
91
- <p>Here are some frequently asked questions about Last Pirate Island Survival and its mod apk version:</p>
92
- <ol>
93
- <li>Q: Is Last Pirate Island Survival free to play?</li>
94
- <li>A: Yes, Last Pirate Island Survival is free to play. However, it contains some in-app purchases that can enhance your gameplay. You can also download the mod apk version that gives you unlimited money for free.</li>
95
- <li>Q: Is Last Pirate Island Survival online or offline?</li>
96
- <li>A: Last Pirate Island Survival is an offline game. You can play it without an internet connection. However, some features may require an internet connection, such as updates, cloud save, and social media integration.</li>
97
- <li>Q: Is Last Pirate Island Survival safe to download and install?</li>
98
- <li>A: Yes, Last Pirate Island Survival is safe to download and install. The game does not contain any viruses or malware that can harm your device. However, you should always download the game from a trusted source and enable unknown sources on your device settings before installing it.</li>
99
- <li>Q: What are the minimum requirements to play Last Pirate Island Survival?</li>
100
- <li>A: The minimum requirements to play Last Pirate Island Survival are:</li>
101
- <ul>
102
- <li>Android version: 4.4 or higher</li>
103
- <li>RAM: 2 GB or higher</li>
104
- <li>Storage space: 200 MB or higher</li>
105
- </ul>
106
- <li>Q: How can I contact the developers of Last Pirate Island Survival?</li>
107
- <li>A: You can contact the developers of Last Pirate Island Survival by sending them an email at [email protected] or visiting their website at https://retrostylegames.com/.</li>
108
- </ol>
spaces/4Taps/SadTalker/src/face3d/data/template_dataset.py DELETED
@@ -1,75 +0,0 @@
1
- """Dataset class template
2
-
3
- This module provides a template for users to implement custom datasets.
4
- You can specify '--dataset_mode template' to use this dataset.
5
- The class name should be consistent with both the filename and its dataset_mode option.
6
- The filename should be <dataset_mode>_dataset.py
7
- The class name should be <Dataset_mode>Dataset
8
- You need to implement the following functions:
9
- -- <modify_commandline_options>: Add dataset-specific options and rewrite default values for existing options.
10
- -- <__init__>: Initialize this dataset class.
11
- -- <__getitem__>: Return a data point and its metadata information.
12
- -- <__len__>: Return the number of images.
13
- """
14
- from data.base_dataset import BaseDataset, get_transform
15
- # from data.image_folder import make_dataset
16
- # from PIL import Image
17
-
18
-
19
- class TemplateDataset(BaseDataset):
20
- """A template dataset class for you to implement custom datasets."""
21
- @staticmethod
22
- def modify_commandline_options(parser, is_train):
23
- """Add new dataset-specific options, and rewrite default values for existing options.
24
-
25
- Parameters:
26
- parser -- original option parser
27
- is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
28
-
29
- Returns:
30
- the modified parser.
31
- """
32
- parser.add_argument('--new_dataset_option', type=float, default=1.0, help='new dataset option')
33
- parser.set_defaults(max_dataset_size=10, new_dataset_option=2.0) # specify dataset-specific default values
34
- return parser
35
-
36
- def __init__(self, opt):
37
- """Initialize this dataset class.
38
-
39
- Parameters:
40
- opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
41
-
42
- A few things can be done here.
43
- - save the options (have been done in BaseDataset)
44
- - get image paths and meta information of the dataset.
45
- - define the image transformation.
46
- """
47
- # save the option and dataset root
48
- BaseDataset.__init__(self, opt)
49
- # get the image paths of your dataset;
50
- self.image_paths = [] # You can call sorted(make_dataset(self.root, opt.max_dataset_size)) to get all the image paths under the directory self.root
51
- # define the default transform function. You can use <base_dataset.get_transform>; You can also define your custom transform function
52
- self.transform = get_transform(opt)
53
-
54
- def __getitem__(self, index):
55
- """Return a data point and its metadata information.
56
-
57
- Parameters:
58
- index -- a random integer for data indexing
59
-
60
- Returns:
61
- a dictionary of data with their names. It usually contains the data itself and its metadata information.
62
-
63
- Step 1: get a random image path: e.g., path = self.image_paths[index]
64
- Step 2: load your data from the disk: e.g., image = Image.open(path).convert('RGB').
65
- Step 3: convert your data to a PyTorch tensor. You can use helper functions such as self.transform. e.g., data = self.transform(image)
66
- Step 4: return a data point as a dictionary.
67
- """
68
- path = 'temp' # needs to be a string
69
- data_A = None # needs to be a tensor
70
- data_B = None # needs to be a tensor
71
- return {'data_A': data_A, 'data_B': data_B, 'path': path}
72
-
73
- def __len__(self):
74
- """Return the total number of images."""
75
- return len(self.image_paths)
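The file above is only scaffolding. To make it concrete, here is a minimal sketch of how the hooks might be filled in for a plain image-folder dataset, following the hints in the template's own docstrings (make_dataset, Image.open, self.transform). The class name is illustrative, and the data.* imports assume the repository layout the template itself uses:

```python
from data.base_dataset import BaseDataset, get_transform
from data.image_folder import make_dataset
from PIL import Image


class SingleImageDataset(BaseDataset):
    """Illustrative dataset: yields one transformed RGB image per index."""

    def __init__(self, opt):
        BaseDataset.__init__(self, opt)
        # Gather image paths under the dataset root, capped at max_dataset_size
        self.image_paths = sorted(make_dataset(self.root, opt.max_dataset_size))
        self.transform = get_transform(opt)

    def __getitem__(self, index):
        path = self.image_paths[index]
        image = Image.open(path).convert('RGB')
        data = self.transform(image)  # PIL image -> normalized tensor
        return {'data_A': data, 'path': path}

    def __len__(self):
        return len(self.image_paths)
```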
spaces/7hao/bingo/src/lib/bots/bing/types.ts DELETED
@@ -1,259 +0,0 @@
1
- export type Author = 'user' | 'system' | 'bot'
2
-
3
- export type BotId = 'bing'
4
-
5
- export enum BingConversationStyle {
6
- Creative = 'Creative',
7
- Balanced = 'Balanced',
8
- Precise = 'Precise'
9
- }
10
-
11
- export enum ErrorCode {
12
- CONVERSATION_LIMIT = 'CONVERSATION_LIMIT',
13
- BING_UNAUTHORIZED = 'BING_UNAUTHORIZED',
14
- BING_FORBIDDEN = 'BING_FORBIDDEN',
15
- BING_CAPTCHA = 'BING_CAPTCHA',
16
- THROTTLE_LIMIT = 'THROTTLE_LIMIT',
17
- NOTFOUND_ERROR = 'NOT_FOUND_ERROR',
18
- UNKOWN_ERROR = 'UNKOWN_ERROR',
19
- NETWORK_ERROR = 'NETWORK_ERROR',
20
- }
21
-
22
- export class ChatError extends Error {
23
- code: ErrorCode
24
- constructor(message: string, code: ErrorCode) {
25
- super(message)
26
- this.code = code
27
- }
28
- }
29
-
30
- export type ChatMessageModel = {
31
- id: string
32
- author: Author
33
- text: string
34
- error?: ChatError
35
- throttling?: Throttling
36
- sourceAttributions?: SourceAttribution[]
37
- suggestedResponses?: SuggestedResponse[]
38
- }
39
-
40
- export interface ConversationModel {
41
- messages: ChatMessageModel[]
42
- }
43
-
44
- export type Event =
45
- | {
46
- type: 'UPDATE_ANSWER'
47
- data: {
48
- text: string
49
- spokenText?: string
50
- sourceAttributions?: SourceAttribution[]
51
- suggestedResponses?: SuggestedResponse[]
52
- throttling?: Throttling
53
- }
54
- }
55
- | {
56
- type: 'DONE'
57
- }
58
- | {
59
- type: 'ERROR'
60
- error: ChatError
61
- }
62
-
63
- export interface SendMessageParams<T> {
64
- prompt: string
65
- imageUrl?: string
66
- options: T
67
- onEvent: (event: Event) => void
68
- signal?: AbortSignal
69
- }
70
-
71
- export interface ConversationResponse {
72
- conversationId: string
73
- clientId: string
74
- conversationSignature: string
75
- result: {
76
- value: string
77
- message?: string
78
- }
79
- }
80
-
81
- export interface Telemetry {
82
- metrics?: null
83
- startTime: string
84
- }
85
-
86
- export interface ChatUpdateArgument {
87
- messages?: ChatResponseMessage[]
88
- throttling?: Throttling
89
- requestId: string
90
- result: null
91
- }
92
-
93
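- // The numeric `type` tags below are SignalR frame kinds; they match the
- // InvocationEventType enum further down (1 = Invocation, 2 = StreamItem,
- // 3 = Completion, 6/7 = Ping/Close).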
- export type ChatUpdateCompleteResponse = {
94
- type: 2
95
- invocationId: string
96
- item: ChatResponseItem
97
- } | {
98
- type: 1
99
- target: string
100
- arguments: ChatUpdateArgument[]
101
- } | {
102
- type: 3
103
- invocationId: string
104
- } | {
105
- type: 6 | 7
106
- }
107
-
108
- export interface ChatRequestResult {
109
- value: string
110
- serviceVersion: string
111
- error?: string
112
- }
113
-
114
- export interface ChatResponseItem {
115
- messages: ChatResponseMessage[]
116
- firstNewMessageIndex: number
117
- suggestedResponses: null
118
- conversationId: string
119
- requestId: string
120
- conversationExpiryTime: string
121
- telemetry: Telemetry
122
- result: ChatRequestResult
123
- throttling: Throttling
124
- }
125
- export enum InvocationEventType {
126
- Invocation = 1,
127
- StreamItem = 2,
128
- Completion = 3,
129
- StreamInvocation = 4,
130
- CancelInvocation = 5,
131
- Ping = 6,
132
- Close = 7,
133
- }
134
-
135
- // https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts
136
-
137
- export interface ConversationInfo {
138
- conversationId: string
139
- clientId: string
140
- conversationSignature: string
141
- invocationId: number
142
- conversationStyle: BingConversationStyle
143
- prompt: string
144
- imageUrl?: string
145
- }
146
-
147
- export interface BingChatResponse {
148
- conversationSignature: string
149
- conversationId: string
150
- clientId: string
151
- invocationId: number
152
- conversationExpiryTime: Date
153
- response: string
154
- details: ChatResponseMessage
155
- }
156
-
157
- export interface Throttling {
158
- maxNumLongDocSummaryUserMessagesInConversation: number
159
- maxNumUserMessagesInConversation: number
160
- numLongDocSummaryUserMessagesInConversation: number
161
- numUserMessagesInConversation: number
162
- }
163
-
164
- export interface ChatResponseMessage {
165
- text: string
166
- spokenText?: string
167
- author: string
168
- createdAt: Date
169
- timestamp: Date
170
- messageId: string
171
- requestId: string
172
- offense: string
173
- adaptiveCards: AdaptiveCard[]
174
- sourceAttributions: SourceAttribution[]
175
- feedback: Feedback
176
- contentOrigin: string
177
- messageType?: string
178
- contentType?: string
179
- privacy: null
180
- suggestedResponses: SuggestedResponse[]
181
- }
182
-
183
- export interface AdaptiveCard {
184
- type: string
185
- version: string
186
- body: Body[]
187
- }
188
-
189
- export interface Body {
190
- type: string
191
- text: string
192
- wrap: boolean
193
- size?: string
194
- }
195
-
196
- export interface Feedback {
197
- tag: null
198
- updatedOn: null
199
- type: string
200
- }
201
-
202
- export interface SourceAttribution {
203
- providerDisplayName: string
204
- seeMoreUrl: string
205
- searchQuery: string
206
- }
207
-
208
- export interface SuggestedResponse {
209
- text: string
210
- author?: Author
211
- createdAt?: Date
212
- timestamp?: Date
213
- messageId?: string
214
- messageType?: string
215
- offense?: string
216
- feedback?: Feedback
217
- contentOrigin?: string
218
- privacy?: null
219
- }
220
-
221
- export interface KBlobRequest {
222
- knowledgeRequest: KnowledgeRequestContext
223
- imageBase64?: string
224
- }
225
-
226
- export interface KBlobResponse {
227
- blobId: string
228
- processedBlobId?: string
229
- }
230
-
231
- export interface KnowledgeRequestContext {
232
- imageInfo: ImageInfo;
233
- knowledgeRequest: KnowledgeRequest;
234
- }
235
-
236
- export interface ImageInfo {
237
- url?: string;
238
- }
239
-
240
- export interface KnowledgeRequest {
241
- invokedSkills: string[];
242
- subscriptionId: string;
243
- invokedSkillsRequestData: InvokedSkillsRequestData;
244
- convoData: ConvoData;
245
- }
246
-
247
- export interface ConvoData {
248
- convoid: string;
249
- convotone: BingConversationStyle;
250
- }
251
-
252
- export interface InvokedSkillsRequestData {
253
- enableFaceBlur: boolean;
254
- }
255
-
256
- export interface FileItem {
257
- url: string;
258
- status?: 'loading' | 'error' | 'loaded'
259
- }
spaces/AIFILMS/generate_human_motion/VQ-Trans/VQ_eval.py DELETED
@@ -1,95 +0,0 @@
1
- import os
2
- import json
3
-
4
- import torch
5
- from torch.utils.tensorboard import SummaryWriter
6
- import numpy as np
7
- import models.vqvae as vqvae
8
- import options.option_vq as option_vq
9
- import utils.utils_model as utils_model
10
- from dataset import dataset_TM_eval
11
- import utils.eval_trans as eval_trans
12
- from options.get_eval_option import get_opt
13
- from models.evaluator_wrapper import EvaluatorModelWrapper
14
- import warnings
15
- warnings.filterwarnings('ignore')
17
- ##### ---- Exp dirs ---- #####
18
- args = option_vq.get_args_parser()
19
- torch.manual_seed(args.seed)
20
-
21
- args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
22
- os.makedirs(args.out_dir, exist_ok = True)
23
-
24
- ##### ---- Logger ---- #####
25
- logger = utils_model.get_logger(args.out_dir)
26
- writer = SummaryWriter(args.out_dir)
27
- logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
28
-
29
-
30
- from utils.word_vectorizer import WordVectorizer
31
- w_vectorizer = WordVectorizer('./glove', 'our_vab')
32
-
33
-
34
- dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
35
-
36
- wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
37
- eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
38
-
39
-
40
- ##### ---- Dataloader ---- #####
41
- args.nb_joints = 21 if args.dataname == 'kit' else 22
42
-
43
- val_loader = dataset_TM_eval.DATALoader(args.dataname, True, 32, w_vectorizer, unit_length=2**args.down_t)
44
-
45
- ##### ---- Network ---- #####
46
- net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
47
- args.nb_code,
48
- args.code_dim,
49
- args.output_emb_width,
50
- args.down_t,
51
- args.stride_t,
52
- args.width,
53
- args.depth,
54
- args.dilation_growth_rate,
55
- args.vq_act,
56
- args.vq_norm)
57
-
58
- if args.resume_pth :
59
- logger.info('loading checkpoint from {}'.format(args.resume_pth))
60
- ckpt = torch.load(args.resume_pth, map_location='cpu')
61
- net.load_state_dict(ckpt['net'], strict=True)
62
- net.train()
63
- net.cuda()
64
-
65
- fid = []
66
- div = []
67
- top1 = []
68
- top2 = []
69
- top3 = []
70
- matching = []
71
- repeat_time = 20
72
- for i in range(repeat_time):
73
- best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, writer, logger = eval_trans.evaluation_vqvae(args.out_dir, val_loader, net, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, eval_wrapper=eval_wrapper, draw=False, save=False, savenpy=(i==0))
74
- fid.append(best_fid)
75
- div.append(best_div)
76
- top1.append(best_top1)
77
- top2.append(best_top2)
78
- top3.append(best_top3)
79
- matching.append(best_matching)
80
- print('final result:')
81
- print('fid: ', sum(fid)/repeat_time)
82
- print('div: ', sum(div)/repeat_time)
83
- print('top1: ', sum(top1)/repeat_time)
84
- print('top2: ', sum(top2)/repeat_time)
85
- print('top3: ', sum(top3)/repeat_time)
86
- print('matching: ', sum(matching)/repeat_time)
87
-
88
- fid = np.array(fid)
89
- div = np.array(div)
90
- top1 = np.array(top1)
91
- top2 = np.array(top2)
92
- top3 = np.array(top3)
93
- matching = np.array(matching)
94
- msg_final = f"FID. {np.mean(fid):.3f}, conf. {np.std(fid)*1.96/np.sqrt(repeat_time):.3f}, Diversity. {np.mean(div):.3f}, conf. {np.std(div)*1.96/np.sqrt(repeat_time):.3f}, TOP1. {np.mean(top1):.3f}, conf. {np.std(top1)*1.96/np.sqrt(repeat_time):.3f}, TOP2. {np.mean(top2):.3f}, conf. {np.std(top2)*1.96/np.sqrt(repeat_time):.3f}, TOP3. {np.mean(top3):.3f}, conf. {np.std(top3)*1.96/np.sqrt(repeat_time):.3f}, Matching. {np.mean(matching):.3f}, conf. {np.std(matching)*1.96/np.sqrt(repeat_time):.3f}"
95
- logger.info(msg_final)
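The msg_final line above reports each metric as a mean together with a 95% confidence half-width, computed as 1.96 * std / sqrt(n) over the repeat_time evaluation runs. As a standalone illustration of that formula (the per-run values here are made up):

```python
import numpy as np

fid_runs = np.array([3.1, 2.9, 3.3, 3.0, 3.2])  # hypothetical per-run FID values

mean = fid_runs.mean()
# 95% confidence half-width under a normal approximation
conf = fid_runs.std() * 1.96 / np.sqrt(len(fid_runs))
print(f"FID. {mean:.3f}, conf. {conf:.3f}")
```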
spaces/AIFILMS/generate_human_motion/pyrender/pyrender/renderer.py DELETED
@@ -1,1339 +0,0 @@
1
- """PBR renderer for Python.
2
-
3
- Author: Matthew Matl
4
- """
5
- import sys
6
-
7
- import numpy as np
8
- import PIL
9
-
10
- from .constants import (RenderFlags, TextAlign, GLTF, BufFlags, TexFlags,
11
- ProgramFlags, DEFAULT_Z_FAR, DEFAULT_Z_NEAR,
12
- SHADOW_TEX_SZ, MAX_N_LIGHTS)
13
- from .shader_program import ShaderProgramCache
14
- from .material import MetallicRoughnessMaterial, SpecularGlossinessMaterial
15
- from .light import PointLight, SpotLight, DirectionalLight
16
- from .font import FontCache
17
- from .utils import format_color_vector
18
-
19
- from OpenGL.GL import *
20
-
21
-
22
- class Renderer(object):
23
- """Class for handling all rendering operations on a scene.
24
-
25
- Note
26
- ----
27
- This renderer relies on the existence of an OpenGL context and
28
- does not create one on its own.
29
-
30
- Parameters
31
- ----------
32
- viewport_width : int
33
- Width of the viewport in pixels.
34
- viewport_height : int
35
- Height of the viewport in pixels.
36
- point_size : float, optional
37
- Size of points in pixels. Defaults to 1.0.
38
- """
39
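- # Example usage (sketch): a current OpenGL context must already exist; in
- # practice this class is driven by pyrender's offscreen or viewer wrappers.
- # Roughly:
- #     renderer = Renderer(640, 480)
- #     color, depth = renderer.render(scene, RenderFlags.OFFSCREEN)
- #     renderer.delete()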
-
40
- def __init__(self, viewport_width, viewport_height, point_size=1.0):
41
- self.dpscale = 1
42
- # Scaling needed on retina displays
43
- if sys.platform == 'darwin':
44
- self.dpscale = 2
45
-
46
- self.viewport_width = viewport_width
47
- self.viewport_height = viewport_height
48
- self.point_size = point_size
49
-
50
- # Optional framebuffer for offscreen renders
51
- self._main_fb = None
52
- self._main_cb = None
53
- self._main_db = None
54
- self._main_fb_ms = None
55
- self._main_cb_ms = None
56
- self._main_db_ms = None
57
- self._main_fb_dims = (None, None)
58
- self._shadow_fb = None
59
- self._latest_znear = DEFAULT_Z_NEAR
60
- self._latest_zfar = DEFAULT_Z_FAR
61
-
62
- # Shader Program Cache
63
- self._program_cache = ShaderProgramCache()
64
- self._font_cache = FontCache()
65
- self._meshes = set()
66
- self._mesh_textures = set()
67
- self._shadow_textures = set()
68
- self._texture_alloc_idx = 0
69
-
70
- @property
71
- def viewport_width(self):
72
- """int : The width of the main viewport, in pixels.
73
- """
74
- return self._viewport_width
75
-
76
- @viewport_width.setter
77
- def viewport_width(self, value):
78
- self._viewport_width = self.dpscale * value
79
-
80
- @property
81
- def viewport_height(self):
82
- """int : The height of the main viewport, in pixels.
83
- """
84
- return self._viewport_height
85
-
86
- @viewport_height.setter
87
- def viewport_height(self, value):
88
- self._viewport_height = self.dpscale * value
89
-
90
- @property
91
- def point_size(self):
92
- """float : The size of screen-space points, in pixels.
93
- """
94
- return self._point_size
95
-
96
- @point_size.setter
97
- def point_size(self, value):
98
- self._point_size = float(value)
99
-
100
- def render(self, scene, flags, seg_node_map=None):
101
- """Render a scene with the given set of flags.
102
-
103
- Parameters
104
- ----------
105
- scene : :class:`Scene`
106
- A scene to render.
107
- flags : int
108
- A specification from :class:`.RenderFlags`.
109
- seg_node_map : dict
110
- A map from :class:`.Node` objects to (3,) colors for each.
111
- If specified along with flags set to :attr:`.RenderFlags.SEG`,
112
- the color image will be a segmentation image.
113
-
114
- Returns
115
- -------
116
- color_im : (h, w, 3) uint8 or (h, w, 4) uint8
117
- If :attr:`RenderFlags.OFFSCREEN` is set, the color buffer. This is
118
- normally an RGB buffer, but if :attr:`.RenderFlags.RGBA` is set,
119
- the buffer will be a full RGBA buffer.
120
- depth_im : (h, w) float32
121
- If :attr:`RenderFlags.OFFSCREEN` is set, the depth buffer
122
- in linear units.
123
- """
124
- # Update context with meshes and textures
125
- self._update_context(scene, flags)
126
-
127
- # Render necessary shadow maps
128
- if not bool(flags & RenderFlags.DEPTH_ONLY or flags & RenderFlags.SEG):
129
- for ln in scene.light_nodes:
130
- take_pass = False
131
- if (isinstance(ln.light, DirectionalLight) and
132
- bool(flags & RenderFlags.SHADOWS_DIRECTIONAL)):
133
- take_pass = True
134
- elif (isinstance(ln.light, SpotLight) and
135
- bool(flags & RenderFlags.SHADOWS_SPOT)):
136
- take_pass = True
137
- elif (isinstance(ln.light, PointLight) and
138
- bool(flags & RenderFlags.SHADOWS_POINT)):
139
- take_pass = True
140
- if take_pass:
141
- self._shadow_mapping_pass(scene, ln, flags)
142
-
143
- # Make forward pass
144
- retval = self._forward_pass(scene, flags, seg_node_map=seg_node_map)
145
-
146
- # If necessary, make normals pass
147
- if flags & (RenderFlags.VERTEX_NORMALS | RenderFlags.FACE_NORMALS):
148
- self._normals_pass(scene, flags)
149
-
150
- # Update camera settings for retrieving depth buffers
151
- self._latest_znear = scene.main_camera_node.camera.znear
152
- self._latest_zfar = scene.main_camera_node.camera.zfar
153
-
154
- return retval
155
-
156
- def render_text(self, text, x, y, font_name='OpenSans-Regular',
157
- font_pt=40, color=None, scale=1.0,
158
- align=TextAlign.BOTTOM_LEFT):
159
- """Render text into the current viewport.
160
-
161
- Note
162
- ----
163
- This cannot be done into an offscreen buffer.
164
-
165
- Parameters
166
- ----------
167
- text : str
168
- The text to render.
169
- x : int
170
- Horizontal pixel location of text.
171
- y : int
172
- Vertical pixel location of text.
173
- font_name : str
174
- Name of font, from the ``pyrender/fonts`` folder, or
175
- a path to a ``.ttf`` file.
176
- font_pt : int
177
- Height of the text, in font points.
178
- color : (4,) float
179
- The color of the text. Default is black.
180
- scale : int
181
- Scaling factor for text.
182
- align : int
183
- One of the :class:`TextAlign` options which specifies where the
184
- ``x`` and ``y`` parameters lie on the text. For example,
185
- :attr:`TextAlign.BOTTOM_LEFT` means that ``x`` and ``y`` indicate
186
- the position of the bottom-left corner of the textbox.
187
- """
188
- x *= self.dpscale
189
- y *= self.dpscale
190
- font_pt *= self.dpscale
191
-
192
- if color is None:
193
- color = np.array([0.0, 0.0, 0.0, 1.0])
194
- else:
195
- color = format_color_vector(color, 4)
196
-
197
- # Set up viewport for render
198
- self._configure_forward_pass_viewport(0)
199
-
200
- # Load font
201
- font = self._font_cache.get_font(font_name, font_pt)
202
- if not font._in_context():
203
- font._add_to_context()
204
-
205
- # Load program
206
- program = self._get_text_program()
207
- program._bind()
208
-
209
- # Set uniforms
210
- p = np.eye(4)
211
- p[0,0] = 2.0 / self.viewport_width
212
- p[0,3] = -1.0
213
- p[1,1] = 2.0 / self.viewport_height
214
- p[1,3] = -1.0
215
- program.set_uniform('projection', p)
216
- program.set_uniform('text_color', color)
217
-
218
- # Draw text
219
- font.render_string(text, x, y, scale, align)
220
-
221
- def read_color_buf(self):
222
- """Read and return the current viewport's color buffer.
223
-
224
- Alpha cannot be computed for an on-screen buffer.
225
-
226
- Returns
227
- -------
228
- color_im : (h, w, 3) uint8
229
- The color buffer in RGB byte format.
230
- """
231
- # Extract color image from frame buffer
232
- width, height = self.viewport_width, self.viewport_height
233
- glBindFramebuffer(GL_READ_FRAMEBUFFER, 0)
234
- glReadBuffer(GL_FRONT)
235
- color_buf = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
236
-
237
- # Re-format them into numpy arrays
238
- color_im = np.frombuffer(color_buf, dtype=np.uint8)
239
- color_im = color_im.reshape((height, width, 3))
240
- color_im = np.flip(color_im, axis=0)
241
-
242
- # Resize for macos if needed
243
- if sys.platform == 'darwin':
244
- color_im = self._resize_image(color_im, True)
245
-
246
- return color_im
247
-
248
- def read_depth_buf(self):
249
- """Read and return the current viewport's color buffer.
250
-
251
- Returns
252
- -------
253
- depth_im : (h, w) float32
254
- The depth buffer in linear units.
255
- """
256
- width, height = self.viewport_width, self.viewport_height
257
- glBindFramebuffer(GL_READ_FRAMEBUFFER, 0)
258
- glReadBuffer(GL_FRONT)
259
- depth_buf = glReadPixels(
260
- 0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT
261
- )
262
-
263
- depth_im = np.frombuffer(depth_buf, dtype=np.float32)
264
- depth_im = depth_im.reshape((height, width))
265
- depth_im = np.flip(depth_im, axis=0)
266
-
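- # A raw depth value of exactly 1.0 marks "no geometry"; the rest are mapped
- # from [0, 1] back to NDC z in [-1, 1], then through the inverse projection
- # to linear eye-space depth.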
267
- inf_inds = (depth_im == 1.0)
268
- depth_im = 2.0 * depth_im - 1.0
269
- z_near, z_far = self._latest_znear, self._latest_zfar
270
- noninf = np.logical_not(inf_inds)
271
- if z_far is None:
272
- depth_im[noninf] = 2 * z_near / (1.0 - depth_im[noninf])
273
- else:
274
- depth_im[noninf] = ((2.0 * z_near * z_far) /
275
- (z_far + z_near - depth_im[noninf] *
276
- (z_far - z_near)))
277
- depth_im[inf_inds] = 0.0
278
-
279
- # Resize for macos if needed
280
- if sys.platform == 'darwin':
281
- depth_im = self._resize_image(depth_im)
282
-
283
- return depth_im
284
-
285
- def delete(self):
286
- """Free all allocated OpenGL resources.
287
- """
288
- # Free shaders
289
- self._program_cache.clear()
290
-
291
- # Free fonts
292
- self._font_cache.clear()
293
-
294
- # Free meshes
295
- for mesh in self._meshes:
296
- for p in mesh.primitives:
297
- p.delete()
298
-
299
- # Free textures
300
- for mesh_texture in self._mesh_textures:
301
- mesh_texture.delete()
302
-
303
- for shadow_texture in self._shadow_textures:
304
- shadow_texture.delete()
305
-
306
- self._meshes = set()
307
- self._mesh_textures = set()
308
- self._shadow_textures = set()
309
- self._texture_alloc_idx = 0
310
-
311
- self._delete_main_framebuffer()
312
- self._delete_shadow_framebuffer()
313
-
314
- def __del__(self):
315
- try:
316
- self.delete()
317
- except Exception:
318
- pass
319
-
320
- ###########################################################################
321
- # Rendering passes
322
- ###########################################################################
323
-
324
- def _forward_pass(self, scene, flags, seg_node_map=None):
325
- # Set up viewport for render
326
-        self._configure_forward_pass_viewport(flags)
-
-        # Clear it
-        if bool(flags & RenderFlags.SEG):
-            glClearColor(0.0, 0.0, 0.0, 1.0)
-            if seg_node_map is None:
-                seg_node_map = {}
-        else:
-            glClearColor(*scene.bg_color)
-
-        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
-
-        if not bool(flags & RenderFlags.SEG):
-            glEnable(GL_MULTISAMPLE)
-        else:
-            glDisable(GL_MULTISAMPLE)
-
-        # Set up camera matrices
-        V, P = self._get_camera_matrices(scene)
-
-        program = None
-        # Now, render each object in sorted order
-        for node in self._sorted_mesh_nodes(scene):
-            mesh = node.mesh
-
-            # Skip the mesh if it's not visible
-            if not mesh.is_visible:
-                continue
-
-            # If SEG, set color
-            if bool(flags & RenderFlags.SEG):
-                if node not in seg_node_map:
-                    continue
-                color = seg_node_map[node]
-                if not isinstance(color, (list, tuple, np.ndarray)):
-                    color = np.repeat(color, 3)
-                else:
-                    color = np.asanyarray(color)
-                color = color / 255.0
-
-            for primitive in mesh.primitives:
-
-                # First, get and bind the appropriate program
-                program = self._get_primitive_program(
-                    primitive, flags, ProgramFlags.USE_MATERIAL
-                )
-                program._bind()
-
-                # Set the camera uniforms
-                program.set_uniform('V', V)
-                program.set_uniform('P', P)
-                program.set_uniform(
-                    'cam_pos', scene.get_pose(scene.main_camera_node)[:3,3]
-                )
-                if bool(flags & RenderFlags.SEG):
-                    program.set_uniform('color', color)
-
-                # Next, bind the lighting
-                if not (flags & RenderFlags.DEPTH_ONLY or flags & RenderFlags.FLAT or
-                        flags & RenderFlags.SEG):
-                    self._bind_lighting(scene, program, node, flags)
-
-                # Finally, bind and draw the primitive
-                self._bind_and_draw_primitive(
-                    primitive=primitive,
-                    pose=scene.get_pose(node),
-                    program=program,
-                    flags=flags
-                )
-                self._reset_active_textures()
-
-        # Unbind the shader and flush the output
-        if program is not None:
-            program._unbind()
-        glFlush()
-
-        # If doing offscreen render, copy result from framebuffer and return
-        if flags & RenderFlags.OFFSCREEN:
-            return self._read_main_framebuffer(scene, flags)
-        else:
-            return
-
-    def _shadow_mapping_pass(self, scene, light_node, flags):
-        light = light_node.light
-
-        # Set up viewport for render
-        self._configure_shadow_mapping_viewport(light, flags)
-
-        # Set up camera matrices
-        V, P = self._get_light_cam_matrices(scene, light_node, flags)
-
-        # Now, render each object in sorted order
-        program = None
-        for node in self._sorted_mesh_nodes(scene):
-            mesh = node.mesh
-
-            # Skip the mesh if it's not visible
-            if not mesh.is_visible:
-                continue
-
-            for primitive in mesh.primitives:
-
-                # First, get and bind the appropriate program
-                program = self._get_primitive_program(
-                    primitive, flags, ProgramFlags.NONE
-                )
-                program._bind()
-
-                # Set the camera uniforms
-                program.set_uniform('V', V)
-                program.set_uniform('P', P)
-                program.set_uniform(
-                    'cam_pos', scene.get_pose(scene.main_camera_node)[:3,3]
-                )
-
-                # Finally, bind and draw the primitive
-                self._bind_and_draw_primitive(
-                    primitive=primitive,
-                    pose=scene.get_pose(node),
-                    program=program,
-                    flags=RenderFlags.DEPTH_ONLY
-                )
-                self._reset_active_textures()
-
-        # Unbind the shader and flush the output
-        if program is not None:
-            program._unbind()
-        glFlush()
-
-    def _normals_pass(self, scene, flags):
-        # Set up viewport for render
-        self._configure_forward_pass_viewport(flags)
-        program = None
-
-        # Set up camera matrices
-        V, P = self._get_camera_matrices(scene)
-
-        # Now, render each object in sorted order
-        for node in self._sorted_mesh_nodes(scene):
-            mesh = node.mesh
-
-            # Skip the mesh if it's not visible
-            if not mesh.is_visible:
-                continue
-
-            for primitive in mesh.primitives:
-
-                # Skip objects that don't have normals
-                if not primitive.buf_flags & BufFlags.NORMAL:
-                    continue
-
-                # First, get and bind the appropriate program
-                pf = ProgramFlags.NONE
-                if flags & RenderFlags.VERTEX_NORMALS:
-                    pf = pf | ProgramFlags.VERTEX_NORMALS
-                if flags & RenderFlags.FACE_NORMALS:
-                    pf = pf | ProgramFlags.FACE_NORMALS
-                program = self._get_primitive_program(primitive, flags, pf)
-                program._bind()
-
-                # Set the camera uniforms
-                program.set_uniform('V', V)
-                program.set_uniform('P', P)
-                program.set_uniform('normal_magnitude', 0.05 * primitive.scale)
-                program.set_uniform(
-                    'normal_color', np.array([0.1, 0.1, 1.0, 1.0])
-                )
-
-                # Finally, bind and draw the primitive
-                self._bind_and_draw_primitive(
-                    primitive=primitive,
-                    pose=scene.get_pose(node),
-                    program=program,
-                    flags=RenderFlags.DEPTH_ONLY
-                )
-                self._reset_active_textures()
-
-        # Unbind the shader and flush the output
-        if program is not None:
-            program._unbind()
-        glFlush()
-
-    ###########################################################################
-    # Handlers for binding uniforms and drawing primitives
-    ###########################################################################
-
-    def _bind_and_draw_primitive(self, primitive, pose, program, flags):
-        # Set model pose matrix
-        program.set_uniform('M', pose)
-
-        # Bind mesh buffers
-        primitive._bind()
-
-        # Bind mesh material
-        if not (flags & RenderFlags.DEPTH_ONLY or flags & RenderFlags.SEG):
-            material = primitive.material
-
-            # Bind textures
-            tf = material.tex_flags
-            if tf & TexFlags.NORMAL:
-                self._bind_texture(material.normalTexture,
-                                   'material.normal_texture', program)
-            if tf & TexFlags.OCCLUSION:
-                self._bind_texture(material.occlusionTexture,
-                                   'material.occlusion_texture', program)
-            if tf & TexFlags.EMISSIVE:
-                self._bind_texture(material.emissiveTexture,
-                                   'material.emissive_texture', program)
-            if tf & TexFlags.BASE_COLOR:
-                self._bind_texture(material.baseColorTexture,
-                                   'material.base_color_texture', program)
-            if tf & TexFlags.METALLIC_ROUGHNESS:
-                self._bind_texture(material.metallicRoughnessTexture,
-                                   'material.metallic_roughness_texture',
-                                   program)
-            if tf & TexFlags.DIFFUSE:
-                self._bind_texture(material.diffuseTexture,
-                                   'material.diffuse_texture', program)
-            if tf & TexFlags.SPECULAR_GLOSSINESS:
-                self._bind_texture(material.specularGlossinessTexture,
-                                   'material.specular_glossiness_texture',
-                                   program)
-
-            # Bind other uniforms
-            b = 'material.{}'
-            program.set_uniform(b.format('emissive_factor'),
-                                material.emissiveFactor)
-            if isinstance(material, MetallicRoughnessMaterial):
-                program.set_uniform(b.format('base_color_factor'),
-                                    material.baseColorFactor)
-                program.set_uniform(b.format('metallic_factor'),
-                                    material.metallicFactor)
-                program.set_uniform(b.format('roughness_factor'),
-                                    material.roughnessFactor)
-            elif isinstance(material, SpecularGlossinessMaterial):
-                program.set_uniform(b.format('diffuse_factor'),
-                                    material.diffuseFactor)
-                program.set_uniform(b.format('specular_factor'),
-                                    material.specularFactor)
-                program.set_uniform(b.format('glossiness_factor'),
-                                    material.glossinessFactor)
-
-            # Set blending options
-            if material.alphaMode == 'BLEND':
-                glEnable(GL_BLEND)
-                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
-            else:
-                glEnable(GL_BLEND)
-                glBlendFunc(GL_ONE, GL_ZERO)
-
-            # Set wireframe mode
-            wf = material.wireframe
-            if flags & RenderFlags.FLIP_WIREFRAME:
-                wf = not wf
-            if (flags & RenderFlags.ALL_WIREFRAME) or wf:
-                glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
-            else:
-                glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-            # Set culling mode
-            if material.doubleSided or flags & RenderFlags.SKIP_CULL_FACES:
-                glDisable(GL_CULL_FACE)
-            else:
-                glEnable(GL_CULL_FACE)
-                glCullFace(GL_BACK)
-        else:
-            glEnable(GL_CULL_FACE)
-            glEnable(GL_BLEND)
-            glCullFace(GL_BACK)
-            glBlendFunc(GL_ONE, GL_ZERO)
-            glPolygonMode(GL_FRONT_AND_BACK, GL_FILL)
-
-        # Set point size if needed
-        glDisable(GL_PROGRAM_POINT_SIZE)
-        if primitive.mode == GLTF.POINTS:
-            glEnable(GL_PROGRAM_POINT_SIZE)
-            glPointSize(self.point_size)
-
-        # Render mesh
-        n_instances = 1
-        if primitive.poses is not None:
-            n_instances = len(primitive.poses)
-
-        if primitive.indices is not None:
-            glDrawElementsInstanced(
-                primitive.mode, primitive.indices.size, GL_UNSIGNED_INT,
-                ctypes.c_void_p(0), n_instances
-            )
-        else:
-            glDrawArraysInstanced(
-                primitive.mode, 0, len(primitive.positions), n_instances
-            )
-
-        # Unbind mesh buffers
-        primitive._unbind()
-
-    def _bind_lighting(self, scene, program, node, flags):
-        """Bind all lighting uniform values for a scene.
-        """
-        max_n_lights = self._compute_max_n_lights(flags)
-
-        n_d = min(len(scene.directional_light_nodes), max_n_lights[0])
-        n_s = min(len(scene.spot_light_nodes), max_n_lights[1])
-        n_p = min(len(scene.point_light_nodes), max_n_lights[2])
-        program.set_uniform('ambient_light', scene.ambient_light)
-        program.set_uniform('n_directional_lights', n_d)
-        program.set_uniform('n_spot_lights', n_s)
-        program.set_uniform('n_point_lights', n_p)
-        plc = 0
-        slc = 0
-        dlc = 0
-
-        light_nodes = scene.light_nodes
-        if (len(scene.directional_light_nodes) > max_n_lights[0] or
-                len(scene.spot_light_nodes) > max_n_lights[1] or
-                len(scene.point_light_nodes) > max_n_lights[2]):
-            light_nodes = self._sorted_nodes_by_distance(
-                scene, scene.light_nodes, node
-            )
-
-        for n in light_nodes:
-            light = n.light
-            pose = scene.get_pose(n)
-            position = pose[:3,3]
-            direction = -pose[:3,2]
-
-            if isinstance(light, PointLight):
-                if plc == max_n_lights[2]:
-                    continue
-                b = 'point_lights[{}].'.format(plc)
-                plc += 1
-                shadow = bool(flags & RenderFlags.SHADOWS_POINT)
-                program.set_uniform(b + 'position', position)
-            elif isinstance(light, SpotLight):
-                if slc == max_n_lights[1]:
-                    continue
-                b = 'spot_lights[{}].'.format(slc)
-                slc += 1
-                shadow = bool(flags & RenderFlags.SHADOWS_SPOT)
-                las = 1.0 / max(0.001, np.cos(light.innerConeAngle) -
-                                np.cos(light.outerConeAngle))
-                lao = -np.cos(light.outerConeAngle) * las
-                program.set_uniform(b + 'direction', direction)
-                program.set_uniform(b + 'position', position)
-                program.set_uniform(b + 'light_angle_scale', las)
-                program.set_uniform(b + 'light_angle_offset', lao)
-            else:
-                if dlc == max_n_lights[0]:
-                    continue
-                b = 'directional_lights[{}].'.format(dlc)
-                dlc += 1
-                shadow = bool(flags & RenderFlags.SHADOWS_DIRECTIONAL)
-                program.set_uniform(b + 'direction', direction)
-
-            program.set_uniform(b + 'color', light.color)
-            program.set_uniform(b + 'intensity', light.intensity)
-            # if light.range is not None:
-            #     program.set_uniform(b + 'range', light.range)
-            # else:
-            #     program.set_uniform(b + 'range', 0)
-
-            if shadow:
-                self._bind_texture(light.shadow_texture,
-                                   b + 'shadow_map', program)
-                if not isinstance(light, PointLight):
-                    V, P = self._get_light_cam_matrices(scene, n, flags)
-                    program.set_uniform(b + 'light_matrix', P.dot(V))
-                else:
-                    raise NotImplementedError(
-                        'Point light shadows not implemented'
-                    )
-
-    def _sorted_mesh_nodes(self, scene):
-        cam_loc = scene.get_pose(scene.main_camera_node)[:3,3]
-        solid_nodes = []
-        trans_nodes = []
-        for node in scene.mesh_nodes:
-            mesh = node.mesh
-            if mesh.is_transparent:
-                trans_nodes.append(node)
-            else:
-                solid_nodes.append(node)
-
-        # TODO BETTER SORTING METHOD
-        trans_nodes.sort(
-            key=lambda n: -np.linalg.norm(scene.get_pose(n)[:3,3] - cam_loc)
-        )
-        solid_nodes.sort(
-            key=lambda n: -np.linalg.norm(scene.get_pose(n)[:3,3] - cam_loc)
-        )
-
-        return solid_nodes + trans_nodes
-
-    def _sorted_nodes_by_distance(self, scene, nodes, compare_node):
-        nodes = list(nodes)
-        compare_posn = scene.get_pose(compare_node)[:3,3]
-        nodes.sort(key=lambda n: np.linalg.norm(
-            scene.get_pose(n)[:3,3] - compare_posn)
-        )
-        return nodes
-
-    ###########################################################################
-    # Context Management
-    ###########################################################################
-
-    def _update_context(self, scene, flags):
-
-        # Update meshes
-        scene_meshes = scene.meshes
-
-        # Add new meshes to context
-        for mesh in scene_meshes - self._meshes:
-            for p in mesh.primitives:
-                p._add_to_context()
-
-        # Remove old meshes from context
-        for mesh in self._meshes - scene_meshes:
-            for p in mesh.primitives:
-                p.delete()
-
-        self._meshes = scene_meshes.copy()
-
-        # Update mesh textures
-        mesh_textures = set()
-        for m in scene_meshes:
-            for p in m.primitives:
-                mesh_textures |= p.material.textures
-
-        # Add new textures to context
-        for texture in mesh_textures - self._mesh_textures:
-            texture._add_to_context()
-
-        # Remove old textures from context
-        for texture in self._mesh_textures - mesh_textures:
-            texture.delete()
-
-        self._mesh_textures = mesh_textures.copy()
-
-        shadow_textures = set()
-        for l in scene.lights:
-            # Create if needed
-            active = False
-            if (isinstance(l, DirectionalLight) and
-                    flags & RenderFlags.SHADOWS_DIRECTIONAL):
-                active = True
-            elif (isinstance(l, PointLight) and
-                    flags & RenderFlags.SHADOWS_POINT):
-                active = True
-            elif isinstance(l, SpotLight) and flags & RenderFlags.SHADOWS_SPOT:
-                active = True
-
-            if active and l.shadow_texture is None:
-                l._generate_shadow_texture()
-            if l.shadow_texture is not None:
-                shadow_textures.add(l.shadow_texture)
-
-        # Add new textures to context
-        for texture in shadow_textures - self._shadow_textures:
-            texture._add_to_context()
-
-        # Remove old textures from context
-        for texture in self._shadow_textures - shadow_textures:
-            texture.delete()
-
-        self._shadow_textures = shadow_textures.copy()
-
-    ###########################################################################
-    # Texture Management
-    ###########################################################################
-
-    def _bind_texture(self, texture, uniform_name, program):
-        """Bind a texture to the next available active texture unit and
-        point the given uniform at that unit.
-        """
-        tex_id = self._get_next_active_texture()
-        glActiveTexture(GL_TEXTURE0 + tex_id)
-        texture._bind()
-        program.set_uniform(uniform_name, tex_id)
-
-    def _get_next_active_texture(self):
-        val = self._texture_alloc_idx
-        self._texture_alloc_idx += 1
-        return val
-
-    def _reset_active_textures(self):
-        self._texture_alloc_idx = 0
-
-    ###########################################################################
-    # Camera Matrix Management
-    ###########################################################################
-
-    def _get_camera_matrices(self, scene):
-        main_camera_node = scene.main_camera_node
-        if main_camera_node is None:
-            raise ValueError('Cannot render scene without a camera')
-        P = main_camera_node.camera.get_projection_matrix(
-            width=self.viewport_width, height=self.viewport_height
-        )
-        pose = scene.get_pose(main_camera_node)
-        V = np.linalg.inv(pose)  # V maps from world to camera
-        return V, P
-
-    def _get_light_cam_matrices(self, scene, light_node, flags):
-        light = light_node.light
-        pose = scene.get_pose(light_node).copy()
-        s = scene.scale
-        camera = light._get_shadow_camera(s)
-        P = camera.get_projection_matrix()
-        if isinstance(light, DirectionalLight):
-            direction = -pose[:3,2]
-            c = scene.centroid
-            loc = c - direction * s
-            pose[:3,3] = loc
-        V = np.linalg.inv(pose)  # V maps from world to camera
-        return V, P
-
-    ###########################################################################
-    # Shader Program Management
-    ###########################################################################
-
-    def _get_text_program(self):
-        program = self._program_cache.get_program(
-            vertex_shader='text.vert',
-            fragment_shader='text.frag'
-        )
-
-        if not program._in_context():
-            program._add_to_context()
-
-        return program
-
-    def _compute_max_n_lights(self, flags):
-        max_n_lights = [MAX_N_LIGHTS, MAX_N_LIGHTS, MAX_N_LIGHTS]
-        n_tex_units = glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS)
-
-        # Reserved texture units: 6
-        #   Normal Map
-        #   Occlusion Map
-        #   Emissive Map
-        #   Base Color or Diffuse Map
-        #   MR or SG Map
-        #   Environment cubemap
-
-        n_reserved_textures = 6
-        n_available_textures = n_tex_units - n_reserved_textures
-
-        # Distribute textures evenly among lights with shadows, with
-        # a preference for directional lights
-        n_shadow_types = 0
-        if flags & RenderFlags.SHADOWS_DIRECTIONAL:
-            n_shadow_types += 1
-        if flags & RenderFlags.SHADOWS_SPOT:
-            n_shadow_types += 1
-        if flags & RenderFlags.SHADOWS_POINT:
-            n_shadow_types += 1
-
-        if n_shadow_types > 0:
-            tex_per_light = n_available_textures // n_shadow_types
-
-            if flags & RenderFlags.SHADOWS_DIRECTIONAL:
-                max_n_lights[0] = (
-                    tex_per_light +
-                    (n_available_textures - tex_per_light * n_shadow_types)
-                )
-            if flags & RenderFlags.SHADOWS_SPOT:
-                max_n_lights[1] = tex_per_light
-            if flags & RenderFlags.SHADOWS_POINT:
-                max_n_lights[2] = tex_per_light
-
-        return max_n_lights
-
-    def _get_primitive_program(self, primitive, flags, program_flags):
-        vertex_shader = None
-        fragment_shader = None
-        geometry_shader = None
-        defines = {}
-
-        if (bool(program_flags & ProgramFlags.USE_MATERIAL) and
-                not flags & RenderFlags.DEPTH_ONLY and
-                not flags & RenderFlags.FLAT and
-                not flags & RenderFlags.SEG):
-            vertex_shader = 'mesh.vert'
-            fragment_shader = 'mesh.frag'
-        elif bool(program_flags & (ProgramFlags.VERTEX_NORMALS |
-                                   ProgramFlags.FACE_NORMALS)):
-            vertex_shader = 'vertex_normals.vert'
-            if primitive.mode == GLTF.POINTS:
-                geometry_shader = 'vertex_normals_pc.geom'
-            else:
-                geometry_shader = 'vertex_normals.geom'
-            fragment_shader = 'vertex_normals.frag'
-        elif flags & RenderFlags.FLAT:
-            vertex_shader = 'flat.vert'
-            fragment_shader = 'flat.frag'
-        elif flags & RenderFlags.SEG:
-            vertex_shader = 'segmentation.vert'
-            fragment_shader = 'segmentation.frag'
-        else:
-            vertex_shader = 'mesh_depth.vert'
-            fragment_shader = 'mesh_depth.frag'
-
-        # Set up vertex buffer DEFINES
-        bf = primitive.buf_flags
-        buf_idx = 1
-        if bf & BufFlags.NORMAL:
-            defines['NORMAL_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.TANGENT:
-            defines['TANGENT_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.TEXCOORD_0:
-            defines['TEXCOORD_0_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.TEXCOORD_1:
-            defines['TEXCOORD_1_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.COLOR_0:
-            defines['COLOR_0_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.JOINTS_0:
-            defines['JOINTS_0_LOC'] = buf_idx
-            buf_idx += 1
-        if bf & BufFlags.WEIGHTS_0:
-            defines['WEIGHTS_0_LOC'] = buf_idx
-            buf_idx += 1
-        defines['INST_M_LOC'] = buf_idx
-
-        # Set up shadow mapping defines
-        if flags & RenderFlags.SHADOWS_DIRECTIONAL:
-            defines['DIRECTIONAL_LIGHT_SHADOWS'] = 1
-        if flags & RenderFlags.SHADOWS_SPOT:
-            defines['SPOT_LIGHT_SHADOWS'] = 1
-        if flags & RenderFlags.SHADOWS_POINT:
-            defines['POINT_LIGHT_SHADOWS'] = 1
-        max_n_lights = self._compute_max_n_lights(flags)
-        defines['MAX_DIRECTIONAL_LIGHTS'] = max_n_lights[0]
-        defines['MAX_SPOT_LIGHTS'] = max_n_lights[1]
-        defines['MAX_POINT_LIGHTS'] = max_n_lights[2]
-
-        # Set up vertex normal defines
-        if program_flags & ProgramFlags.VERTEX_NORMALS:
-            defines['VERTEX_NORMALS'] = 1
-        if program_flags & ProgramFlags.FACE_NORMALS:
-            defines['FACE_NORMALS'] = 1
-
-        # Set up material texture defines
-        if bool(program_flags & ProgramFlags.USE_MATERIAL):
-            tf = primitive.material.tex_flags
-            if tf & TexFlags.NORMAL:
-                defines['HAS_NORMAL_TEX'] = 1
-            if tf & TexFlags.OCCLUSION:
-                defines['HAS_OCCLUSION_TEX'] = 1
-            if tf & TexFlags.EMISSIVE:
-                defines['HAS_EMISSIVE_TEX'] = 1
-            if tf & TexFlags.BASE_COLOR:
-                defines['HAS_BASE_COLOR_TEX'] = 1
-            if tf & TexFlags.METALLIC_ROUGHNESS:
-                defines['HAS_METALLIC_ROUGHNESS_TEX'] = 1
-            if tf & TexFlags.DIFFUSE:
-                defines['HAS_DIFFUSE_TEX'] = 1
-            if tf & TexFlags.SPECULAR_GLOSSINESS:
-                defines['HAS_SPECULAR_GLOSSINESS_TEX'] = 1
-            if isinstance(primitive.material, MetallicRoughnessMaterial):
-                defines['USE_METALLIC_MATERIAL'] = 1
-            elif isinstance(primitive.material, SpecularGlossinessMaterial):
-                defines['USE_GLOSSY_MATERIAL'] = 1
-
-        program = self._program_cache.get_program(
-            vertex_shader=vertex_shader,
-            fragment_shader=fragment_shader,
-            geometry_shader=geometry_shader,
-            defines=defines
-        )
-
-        if not program._in_context():
-            program._add_to_context()
-
-        return program
-
-    ###########################################################################
-    # Viewport Management
-    ###########################################################################
-
-    def _configure_forward_pass_viewport(self, flags):
-
-        # If using offscreen render, bind main framebuffer
-        if flags & RenderFlags.OFFSCREEN:
-            self._configure_main_framebuffer()
-            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb_ms)
-        else:
-            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
-
-        glViewport(0, 0, self.viewport_width, self.viewport_height)
-        glEnable(GL_DEPTH_TEST)
-        glDepthMask(GL_TRUE)
-        glDepthFunc(GL_LESS)
-        glDepthRange(0.0, 1.0)
-
-    def _configure_shadow_mapping_viewport(self, light, flags):
-        self._configure_shadow_framebuffer()
-        glBindFramebuffer(GL_FRAMEBUFFER, self._shadow_fb)
-        light.shadow_texture._bind()
-        light.shadow_texture._bind_as_depth_attachment()
-        glActiveTexture(GL_TEXTURE0)
-        light.shadow_texture._bind()
-        glDrawBuffer(GL_NONE)
-        glReadBuffer(GL_NONE)
-
-        glClear(GL_DEPTH_BUFFER_BIT)
-        glViewport(0, 0, SHADOW_TEX_SZ, SHADOW_TEX_SZ)
-        glEnable(GL_DEPTH_TEST)
-        glDepthMask(GL_TRUE)
-        glDepthFunc(GL_LESS)
-        glDepthRange(0.0, 1.0)
-        glDisable(GL_CULL_FACE)
-        glDisable(GL_BLEND)
-
-    ###########################################################################
-    # Framebuffer Management
-    ###########################################################################
-
-    def _configure_shadow_framebuffer(self):
-        if self._shadow_fb is None:
-            self._shadow_fb = glGenFramebuffers(1)
-
-    def _delete_shadow_framebuffer(self):
-        if self._shadow_fb is not None:
-            glDeleteFramebuffers(1, [self._shadow_fb])
-
-    def _configure_main_framebuffer(self):
-        # If mismatch with prior framebuffer, delete it
-        if (self._main_fb is not None and
-                self.viewport_width != self._main_fb_dims[0] or
-                self.viewport_height != self._main_fb_dims[1]):
-            self._delete_main_framebuffer()
-
-        # If framebuffer doesn't exist, create it
-        if self._main_fb is None:
-            # Generate standard buffer
-            self._main_cb, self._main_db = glGenRenderbuffers(2)
-
-            glBindRenderbuffer(GL_RENDERBUFFER, self._main_cb)
-            glRenderbufferStorage(
-                GL_RENDERBUFFER, GL_RGBA,
-                self.viewport_width, self.viewport_height
-            )
-
-            glBindRenderbuffer(GL_RENDERBUFFER, self._main_db)
-            glRenderbufferStorage(
-                GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
-                self.viewport_width, self.viewport_height
-            )
-
-            self._main_fb = glGenFramebuffers(1)
-            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb)
-            glFramebufferRenderbuffer(
-                GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
-                GL_RENDERBUFFER, self._main_cb
-            )
-            glFramebufferRenderbuffer(
-                GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
-                GL_RENDERBUFFER, self._main_db
-            )
-
-            # Generate multisample buffer
-            self._main_cb_ms, self._main_db_ms = glGenRenderbuffers(2)
-            glBindRenderbuffer(GL_RENDERBUFFER, self._main_cb_ms)
-            # glRenderbufferStorageMultisample(
-            #     GL_RENDERBUFFER, 4, GL_RGBA,
-            #     self.viewport_width, self.viewport_height
-            # )
-            # glBindRenderbuffer(GL_RENDERBUFFER, self._main_db_ms)
-            # glRenderbufferStorageMultisample(
-            #     GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24,
-            #     self.viewport_width, self.viewport_height
-            # )
-            # Added: query the driver's sample limit before allocating
-            num_samples = min(glGetIntegerv(GL_MAX_SAMPLES), 4)  # No more than GL_MAX_SAMPLES
-
-            # Same as the commented-out call above, but with 4 replaced by num_samples
-            glRenderbufferStorageMultisample(GL_RENDERBUFFER, num_samples, GL_RGBA, self.viewport_width, self.viewport_height)
-
-            glBindRenderbuffer(GL_RENDERBUFFER, self._main_db_ms)  # unchanged
-
-            # This call likewise uses num_samples in place of 4
-            glRenderbufferStorageMultisample(GL_RENDERBUFFER, num_samples, GL_DEPTH_COMPONENT24, self.viewport_width, self.viewport_height)
-
-            self._main_fb_ms = glGenFramebuffers(1)
-            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb_ms)
-            glFramebufferRenderbuffer(
-                GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
-                GL_RENDERBUFFER, self._main_cb_ms
-            )
-            glFramebufferRenderbuffer(
-                GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
-                GL_RENDERBUFFER, self._main_db_ms
-            )
-
-            self._main_fb_dims = (self.viewport_width, self.viewport_height)
-
-    def _delete_main_framebuffer(self):
-        if self._main_fb is not None:
-            glDeleteFramebuffers(2, [self._main_fb, self._main_fb_ms])
-        if self._main_cb is not None:
-            glDeleteRenderbuffers(2, [self._main_cb, self._main_cb_ms])
-        if self._main_db is not None:
-            glDeleteRenderbuffers(2, [self._main_db, self._main_db_ms])
-
-        self._main_fb = None
-        self._main_cb = None
-        self._main_db = None
-        self._main_fb_ms = None
-        self._main_cb_ms = None
-        self._main_db_ms = None
-        self._main_fb_dims = (None, None)
-
-    def _read_main_framebuffer(self, scene, flags):
-        width, height = self._main_fb_dims[0], self._main_fb_dims[1]
-
-        # Bind framebuffer and blit buffers
-        glBindFramebuffer(GL_READ_FRAMEBUFFER, self._main_fb_ms)
-        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, self._main_fb)
-        glBlitFramebuffer(
-            0, 0, width, height, 0, 0, width, height,
-            GL_COLOR_BUFFER_BIT, GL_LINEAR
-        )
-        glBlitFramebuffer(
-            0, 0, width, height, 0, 0, width, height,
-            GL_DEPTH_BUFFER_BIT, GL_NEAREST
-        )
-        glBindFramebuffer(GL_READ_FRAMEBUFFER, self._main_fb)
-
-        # Read depth
-        depth_buf = glReadPixels(
-            0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT
-        )
-        depth_im = np.frombuffer(depth_buf, dtype=np.float32)
-        depth_im = depth_im.reshape((height, width))
-        depth_im = np.flip(depth_im, axis=0)
-        inf_inds = (depth_im == 1.0)
-        depth_im = 2.0 * depth_im - 1.0
-        z_near = scene.main_camera_node.camera.znear
-        z_far = scene.main_camera_node.camera.zfar
-        noninf = np.logical_not(inf_inds)
-        if z_far is None:
-            depth_im[noninf] = 2 * z_near / (1.0 - depth_im[noninf])
-        else:
-            depth_im[noninf] = ((2.0 * z_near * z_far) /
-                                (z_far + z_near - depth_im[noninf] *
-                                (z_far - z_near)))
-        depth_im[inf_inds] = 0.0
-
-        # Resize for macos if needed
-        if sys.platform == 'darwin':
-            depth_im = self._resize_image(depth_im)
-
-        if flags & RenderFlags.DEPTH_ONLY:
-            return depth_im
-
-        # Read color
-        if flags & RenderFlags.RGBA:
-            color_buf = glReadPixels(
-                0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE
-            )
-            color_im = np.frombuffer(color_buf, dtype=np.uint8)
-            color_im = color_im.reshape((height, width, 4))
-        else:
-            color_buf = glReadPixels(
-                0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE
-            )
-            color_im = np.frombuffer(color_buf, dtype=np.uint8)
-            color_im = color_im.reshape((height, width, 3))
-        color_im = np.flip(color_im, axis=0)
-
-        # Resize for macos if needed
-        if sys.platform == 'darwin':
-            color_im = self._resize_image(color_im, True)
-
-        return color_im, depth_im
-
-    def _resize_image(self, value, antialias=False):
-        """If needed, rescale the render for MacOS."""
-        img = PIL.Image.fromarray(value)
-        resample = PIL.Image.NEAREST
-        if antialias:
-            resample = PIL.Image.BILINEAR
-        size = (self.viewport_width // self.dpscale,
-                self.viewport_height // self.dpscale)
-        img = img.resize(size, resample=resample)
-        return np.array(img)
-
-    ###########################################################################
-    # Shadowmap Debugging
-    ###########################################################################
-
-    def _forward_pass_no_reset(self, scene, flags):
-        # Set up camera matrices
-        V, P = self._get_camera_matrices(scene)
-
-        # Now, render each object in sorted order
-        program = None
-        for node in self._sorted_mesh_nodes(scene):
-            mesh = node.mesh
-
-            # Skip the mesh if it's not visible
-            if not mesh.is_visible:
-                continue
-
-            for primitive in mesh.primitives:
-
-                # First, get and bind the appropriate program
-                program = self._get_primitive_program(
-                    primitive, flags, ProgramFlags.USE_MATERIAL
-                )
-                program._bind()
-
-                # Set the camera uniforms
-                program.set_uniform('V', V)
-                program.set_uniform('P', P)
-                program.set_uniform(
-                    'cam_pos', scene.get_pose(scene.main_camera_node)[:3,3]
-                )
-
-                # Next, bind the lighting
-                if not flags & RenderFlags.DEPTH_ONLY and not flags & RenderFlags.FLAT:
-                    self._bind_lighting(scene, program, node, flags)
-
-                # Finally, bind and draw the primitive
-                self._bind_and_draw_primitive(
-                    primitive=primitive,
-                    pose=scene.get_pose(node),
-                    program=program,
-                    flags=flags
-                )
-                self._reset_active_textures()
-
-        # Unbind the shader and flush the output
-        if program is not None:
-            program._unbind()
-        glFlush()
-
-    def _render_light_shadowmaps(self, scene, light_nodes, flags, tile=False):
-        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
-        glClearColor(*scene.bg_color)
-        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
-        glEnable(GL_DEPTH_TEST)
-        glDepthMask(GL_TRUE)
-        glDepthFunc(GL_LESS)
-        glDepthRange(0.0, 1.0)
-
-        w = self.viewport_width
-        h = self.viewport_height
-
-        num_nodes = len(light_nodes)
-        viewport_dims = {
-            (0, 2): [0, h // 2, w // 2, h],
-            (1, 2): [w // 2, h // 2, w, h],
-            (0, 3): [0, h // 2, w // 2, h],
-            (1, 3): [w // 2, h // 2, w, h],
-            (2, 3): [0, 0, w // 2, h // 2],
-            (0, 4): [0, h // 2, w // 2, h],
-            (1, 4): [w // 2, h // 2, w, h],
-            (2, 4): [0, 0, w // 2, h // 2],
-            (3, 4): [w // 2, 0, w, h // 2]
-        }
-
-        if tile:
-            for i, ln in enumerate(light_nodes):
-                light = ln.light
-
-                if light.shadow_texture is None:
-                    raise ValueError('Light does not have a shadow texture')
-
-                glViewport(*viewport_dims[(i, num_nodes + 1)])
-
-                program = self._get_debug_quad_program()
-                program._bind()
-                self._bind_texture(light.shadow_texture, 'depthMap', program)
-                self._render_debug_quad()
-                self._reset_active_textures()
-                glFlush()
-            i += 1
-            glViewport(*viewport_dims[(i, num_nodes + 1)])
-            self._forward_pass_no_reset(scene, flags)
-        else:
-            for i, ln in enumerate(light_nodes):
-                light = ln.light
-
-                if light.shadow_texture is None:
-                    raise ValueError('Light does not have a shadow texture')
-
-                glViewport(0, 0, self.viewport_width, self.viewport_height)
-
-                program = self._get_debug_quad_program()
-                program._bind()
-                self._bind_texture(light.shadow_texture, 'depthMap', program)
-                self._render_debug_quad()
-                self._reset_active_textures()
-                glFlush()
-            return
-
-    def _get_debug_quad_program(self):
-        program = self._program_cache.get_program(
-            vertex_shader='debug_quad.vert',
-            fragment_shader='debug_quad.frag'
-        )
-        if not program._in_context():
-            program._add_to_context()
-        return program
-
-    def _render_debug_quad(self):
-        x = glGenVertexArrays(1)
-        glBindVertexArray(x)
-        glDrawArrays(GL_TRIANGLES, 0, 6)
-        glBindVertexArray(0)
-        glDeleteVertexArrays(1, [x])
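
A note on the modification above: `_configure_main_framebuffer` was patched so that the MSAA sample count is clamped to the driver's `GL_MAX_SAMPLES` instead of hard-coding 4, since `glRenderbufferStorageMultisample` errors when asked for more samples than the driver supports. A minimal standalone sketch of the same pattern, assuming PyOpenGL and an already-current OpenGL context (the function name is illustrative):

import OpenGL.GL as gl

def make_msaa_color_buffer(width, height, requested_samples=4):
    # Clamp the request: drivers reject sample counts above GL_MAX_SAMPLES.
    num_samples = min(gl.glGetIntegerv(gl.GL_MAX_SAMPLES), requested_samples)
    rb = gl.glGenRenderbuffers(1)
    gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, rb)
    gl.glRenderbufferStorageMultisample(
        gl.GL_RENDERBUFFER, num_samples, gl.GL_RGBA, width, height
    )
    return rb
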
spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv_mlm.py DELETED
@@ -1,233 +0,0 @@
-import torch
-from torch import nn
-from tasks.tts.ps_adv import PortaSpeechAdvTask, FastSpeechTask
-from text_to_speech.utils.commons.hparams import hparams
-from text_to_speech.utils.nn.seq_utils import group_hidden_by_segs
-
-
-class PortaSpeechAdvMLMTask(PortaSpeechAdvTask):
-
-    def build_scheduler(self, optimizer):
-        return [
-            FastSpeechTask.build_scheduler(self, optimizer[0]),  # Generator Scheduler
-            torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1],  # Discriminator Scheduler
-                                            **hparams["discriminator_scheduler_params"]),
-        ]
-
-    def on_before_optimization(self, opt_idx):
-        if opt_idx in [0, 2]:
-            nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm'])
-            if self.use_bert:
-                nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm'])
-                nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm'])
-            else:
-                nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm'])
-        else:
-            nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"])
-
-    def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx):
-        if self.scheduler is not None:
-            self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches'])
-            self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches'])
-
-    def _training_step(self, sample, batch_idx, optimizer_idx):
-        loss_output = {}
-        loss_weights = {}
-        disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0
-        if optimizer_idx == 0:
-            #######################
-            #      Generator      #
-            #######################
-            loss_output, model_out = self.run_model(sample, infer=False)
-            self.model_out_gt = self.model_out = \
-                {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)}
-            if disc_start:
-                mel_p = model_out['mel_out']
-                if hasattr(self.model, 'out2mel'):
-                    mel_p = self.model.out2mel(mel_p)
-                o_ = self.mel_disc(mel_p)
-                p_, pc_ = o_['y'], o_['y_c']
-                if p_ is not None:
-                    loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size()))
-                    loss_weights['a'] = hparams['lambda_mel_adv']
-                if pc_ is not None:
-                    loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size()))
-                    loss_weights['ac'] = hparams['lambda_mel_adv']
-            else:
-                return None
-
-            loss_output2, model_out2 = self.run_contrastive_learning(sample)
-            loss_output.update(loss_output2)
-            model_out.update(model_out2)
-
-        elif optimizer_idx == 1:
-            #######################
-            #    Discriminator    #
-            #######################
-            if disc_start and self.global_step % hparams['disc_interval'] == 0:
-                model_out = self.model_out_gt
-                mel_g = sample['mels']
-                mel_p = model_out['mel_out']
-                o = self.mel_disc(mel_g)
-                p, pc = o['y'], o['y_c']
-                o_ = self.mel_disc(mel_p)
-                p_, pc_ = o_['y'], o_['y_c']
-                if p_ is not None:
-                    loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size()))
-                    loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size()))
-                if pc_ is not None:
-                    loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size()))
-                    loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size()))
-
-        total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items()
-                          if isinstance(v, torch.Tensor) and v.requires_grad])
-        loss_output['batch_size'] = sample['txt_tokens'].size()[0]
-        return total_loss, loss_output
-
-    def run_contrastive_learning(self, sample):
-        losses = {}
-        outputs = {}
-
-        bert = self.model.encoder.bert.bert
-        bert_for_mlm = self.model.encoder.bert
-        pooler = self.model.encoder.pooler
-        sim = self.model.encoder.sim
-        tokenizer = self.model.encoder.tokenizer
-        ph_encoder = self.model.encoder
-
-        if hparams['lambda_cl'] > 0:
-            if hparams.get("cl_version", "v1") == "v1":
-                cl_feats = sample['cl_feats']
-                bs, _, t = cl_feats['cl_input_ids'].shape
-                cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
-                cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
-                cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                pooler_output = pooler(cl_attention_mask, cl_output)
-                pooler_output = pooler_output.reshape([bs, 2, -1])
-                z1, z2 = pooler_output[:,0], pooler_output[:,1]
-
-                cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-                labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-                ce_fn = nn.CrossEntropyLoss()
-                cl_loss = ce_fn(cos_sim, labels)
-                losses['cl_v'] = cl_loss.detach()
-                losses['cl'] = cl_loss * hparams['lambda_cl']
-            elif hparams['cl_version'] == "v2":
-                # use the output of ph encoder as sentence embedding
-                cl_feats = sample['cl_feats']
-                bs, _, t = cl_feats['cl_input_ids'].shape
-                cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
-                cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
-                cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
-                txt_tokens = sample['txt_tokens']
-                bert_feats = sample['bert_feats']
-                src_nonpadding = (txt_tokens > 0).float()[:, :, None]
-                ph_encoder_out1 = ph_encoder(txt_tokens, bert_feats=bert_feats, ph2word=sample['ph2word']) * src_nonpadding
-                ph_encoder_out2 = ph_encoder(txt_tokens, bert_feats=bert_feats, ph2word=sample['ph2word']) * src_nonpadding
-                # word_encoding1 = group_hidden_by_segs(ph_encoder_out1, sample['ph2word'], sample['ph2word'].max().item())
-                # word_encoding2 = group_hidden_by_segs(ph_encoder_out2, sample['ph2word'], sample['ph2word'].max().item())
-                z1 = ((ph_encoder_out1 * src_nonpadding).sum(1) / src_nonpadding.sum(1))
-                z2 = ((ph_encoder_out2 * src_nonpadding).sum(1) / src_nonpadding.sum(1))
-
-                cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-                labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-                ce_fn = nn.CrossEntropyLoss()
-                cl_loss = ce_fn(cos_sim, labels)
-                losses['cl_v'] = cl_loss.detach()
-                losses['cl'] = cl_loss * hparams['lambda_cl']
-            elif hparams['cl_version'] == "v3":
-                # use the word-level contrastive learning
-                cl_feats = sample['cl_feats']
-                bs, _, t = cl_feats['cl_input_ids'].shape
-                cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
-                cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
-                cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                cl_output = cl_output.last_hidden_state.reshape([-1, 768])  # [bs*2,t_w,768] ==> [bs*2*t_w, 768]
-                cl_word_out = cl_output[cl_attention_mask.reshape([-1]).bool()]  # [num_word*2, 768]
-                cl_word_out = cl_word_out.view([-1, 2, 768])
-                z1_total, z2_total = cl_word_out[:,0], cl_word_out[:,1]  # [num_word, 768]
-                ce_fn = nn.CrossEntropyLoss()
-                start_idx = 0
-                lengths = cl_attention_mask.sum(-1)
-                cl_loss_accu = 0
-                for i in range(bs):
-                    length = lengths[i]
-                    z1 = z1_total[start_idx:start_idx + length]
-                    z2 = z2_total[start_idx:start_idx + length]
-                    start_idx += length
-                    cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-                    labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-                    cl_loss_accu += ce_fn(cos_sim, labels) * length
-                cl_loss = cl_loss_accu / lengths.sum()
-                losses['cl_v'] = cl_loss.detach()
-                losses['cl'] = cl_loss * hparams['lambda_cl']
-            elif hparams['cl_version'] == "v4":
-                # with Wiki dataset
-                cl_feats = sample['cl_feats']
-                bs, _, t = cl_feats['cl_input_ids'].shape
-                cl_input_ids = cl_feats['cl_input_ids'].reshape([bs*2, t])
-                cl_attention_mask = cl_feats['cl_attention_mask'].reshape([bs*2, t])
-                cl_token_type_ids = cl_feats['cl_token_type_ids'].reshape([bs*2, t])
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                pooler_output = pooler(cl_attention_mask, cl_output)
-                pooler_output = pooler_output.reshape([bs, 2, -1])
-                z1, z2 = pooler_output[:,0], pooler_output[:,1]
-
-                cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-                labels = torch.arange(cos_sim.size(0)).long().to(z1.device)
-                ce_fn = nn.CrossEntropyLoss()
-                cl_loss = ce_fn(cos_sim, labels)
-                losses['cl_v'] = cl_loss.detach()
-                losses['cl'] = cl_loss * hparams['lambda_cl']
-            elif hparams['cl_version'] == "v5":
-                # with NLI dataset
-                cl_feats = sample['cl_feats']
-                cl_input_ids = cl_feats['sent0']['cl_input_ids']
-                cl_attention_mask = cl_feats['sent0']['cl_attention_mask']
-                cl_token_type_ids = cl_feats['sent0']['cl_token_type_ids']
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                z1 = pooler_output_sent0 = pooler(cl_attention_mask, cl_output)
-
-                cl_input_ids = cl_feats['sent1']['cl_input_ids']
-                cl_attention_mask = cl_feats['sent1']['cl_attention_mask']
-                cl_token_type_ids = cl_feats['sent1']['cl_token_type_ids']
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                z2 = pooler_output_sent1 = pooler(cl_attention_mask, cl_output)
-
-                cl_input_ids = cl_feats['hard_neg']['cl_input_ids']
-                cl_attention_mask = cl_feats['hard_neg']['cl_attention_mask']
-                cl_token_type_ids = cl_feats['hard_neg']['cl_token_type_ids']
-                cl_output = bert(cl_input_ids, attention_mask=cl_attention_mask, token_type_ids=cl_token_type_ids)
-                z3 = pooler_output_neg = pooler(cl_attention_mask, cl_output)
-
-                cos_sim = sim(z1.unsqueeze(1), z2.unsqueeze(0))
-                z1_z3_cos = sim(z1.unsqueeze(1), z3.unsqueeze(0))
-                cos_sim = torch.cat([cos_sim, z1_z3_cos], 1)  # [n_sent, n_sent * 2]
-                labels = torch.arange(cos_sim.size(0)).long().to(cos_sim.device)  # [n_sent, ]
-                ce_fn = nn.CrossEntropyLoss()
-                cl_loss = ce_fn(cos_sim, labels)
-                losses['cl_v'] = cl_loss.detach()
-                losses['cl'] = cl_loss * hparams['lambda_cl']
-            else:
-                raise NotImplementedError()
-
-        if hparams['lambda_mlm'] > 0:
-            cl_feats = sample['cl_feats']
-            mlm_input_ids = cl_feats['mlm_input_ids']
-            bs, t = mlm_input_ids.shape
-            mlm_input_ids = mlm_input_ids.view((-1, mlm_input_ids.size(-1)))
-            mlm_labels = cl_feats['mlm_labels']
-            mlm_labels = mlm_labels.view(-1, mlm_labels.size(-1))
-            mlm_attention_mask = cl_feats['mlm_attention_mask']
-
-            prediction_scores = bert_for_mlm(mlm_input_ids, mlm_attention_mask).logits
-            ce_fn = nn.CrossEntropyLoss(reduction="none")
-            mlm_loss = ce_fn(prediction_scores.view(-1, tokenizer.vocab_size), mlm_labels.view(-1))
-            mlm_loss = mlm_loss[mlm_labels.view(-1) >= 0].mean()
-            losses['mlm'] = mlm_loss * hparams['lambda_mlm']
-            losses['mlm_v'] = mlm_loss.detach()
-
-        return losses, outputs
-
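
All of the `cl` variants above follow the same SimCSE-style recipe: embed two views of each sentence, build the pairwise similarity matrix with the model's `sim` head, and apply cross-entropy with the matching pairs on the diagonal. A minimal self-contained sketch of that loss (the function name and the 0.05 temperature are illustrative assumptions, not values from this repo):

import torch
import torch.nn.functional as F

def simcse_loss(z1, z2, temperature=0.05):
    # z1, z2: [batch, dim] embeddings of two views of the same sentences.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    cos_sim = z1 @ z2.t() / temperature              # [batch, batch]
    labels = torch.arange(cos_sim.size(0), device=cos_sim.device)
    return F.cross_entropy(cos_sim, labels)          # positives on the diagonal
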
spaces/Aashiue/speech_to_text/app.py DELETED
@@ -1,25 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
-transcribe = pipeline("automatic-speech-recognition")
-
-model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
-
-tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")
-def speech_to_text(audio):
-    text = transcribe(audio)["text"]
-
-    model_inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
-    generated_tokens = model.generate(
-        **model_inputs,
-        forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
-    )
-
-    translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
-
-    return translation
-
-gr.Interface(
-    fn=speech_to_text,
-    inputs=gr.Audio(source="microphone", type="filepath"),
-    outputs="text").launch()
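
The Space above chains two models: an ASR pipeline for transcription and MBart-50 for English-to-Hindi translation. The translation half can be exercised on its own without a microphone; a text-only sketch using the same model and language codes as the app:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"])
print(tokenizer.batch_decode(tokens, skip_special_tokens=True))
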
spaces/AchyuthGamer/OpenGPT/client/css/message-input.css DELETED
@@ -1,27 +0,0 @@
-#message-input {
-    margin-right: 30px;
-    height: 64px;
-}
-
-#message-input::-webkit-scrollbar {
-    width: 5px;
-}
-
-#message-input::-webkit-scrollbar-track {
-    background: #f1f1f1;
-}
-
-#message-input::-webkit-scrollbar-thumb {
-    background: #c7a2ff;
-}
-
-#message-input::-webkit-scrollbar-thumb:hover {
-    background: #8b3dff;
-}
-
-@media screen and (max-width: 360px) {
-    #message-input {
-        margin: 0;
-    }
-}
-
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Cromicle.py DELETED
@@ -1,50 +0,0 @@
-from __future__ import annotations
-
-from aiohttp import ClientSession
-from hashlib import sha256
-from typing import AsyncGenerator, Dict, List
-
-from .base_provider import AsyncGeneratorProvider
-from .helper import format_prompt
-
-
-class Cromicle(AsyncGeneratorProvider):
-    url: str = 'https://cromicle.top'
-    working: bool = True
-    supports_gpt_35_turbo: bool = True
-
-    @classmethod
-    async def create_async_generator(
-        cls,
-        model: str,
-        messages: List[Dict[str, str]],
-        proxy: str = None,
-        **kwargs
-    ) -> AsyncGenerator[str, None]:
-        async with ClientSession(
-            headers=_create_header()
-        ) as session:
-            async with session.post(
-                f'{cls.url}/chat',
-                proxy=proxy,
-                json=_create_payload(format_prompt(messages))
-            ) as response:
-                response.raise_for_status()
-                async for stream in response.content.iter_any():
-                    if stream:
-                        yield stream.decode()
-
-
-def _create_header() -> Dict[str, str]:
-    return {
-        'accept': '*/*',
-        'content-type': 'application/json',
-    }
-
-
-def _create_payload(message: str) -> Dict[str, str]:
-    return {
-        'message': message,
-        'token': 'abc',
-        'hash': sha256('abc'.encode() + message.encode()).hexdigest()
-    }
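
The provider signs each request by hashing a static token concatenated with the message, as `_create_payload` shows. A tiny sketch of that signature, verifiable with nothing but the standard library (the `sign` helper is illustrative):

from hashlib import sha256

def sign(message: str, token: str = 'abc') -> str:
    # SHA-256 over the token bytes followed by the message bytes.
    return sha256(token.encode() + message.encode()).hexdigest()

assert sign('hi') == sha256(b'abchi').hexdigest()
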
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/ModalMethods.js DELETED
@@ -1,41 +0,0 @@
-import { Modal, ModalClose } from '../modal/Modal.js';
-import IsFunction from '../../../plugins/utils/object/IsFunction.js';
-
-export default {
-    // Override
-    // onCreateModalBehavior(self, config) { },
-
-    modal(config, onClose) {
-        if (IsFunction(config)) {
-            onClose = config;
-            config = undefined;
-        }
-
-        if (this._modalBehavior === undefined) {
-            if (this.onCreateModalBehavior) {
-                this.onCreateModalBehavior(this, config);
-            }
-            this._modalBehavior = Modal(this, config);
-        }
-
-        if (onClose) {
-            this._modalBehavior.once('close', onClose);
-        }
-
-        this._modalBehavior.requestOpen();
-
-        return this;
-    },
-
-    modalPromise(config) {
-        var self = this;
-        return new Promise(function (resolve, reject) {
-            self.modal(config, resolve);
-        });
-    },
-
-    modalClose(closeEventData) {
-        ModalClose(this, closeEventData);
-        return this;
-    }
-}
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/PerspectiveMethods.js DELETED
@@ -1,67 +0,0 @@
-const FaceIndexMap = ['front', 'back'];
-
-export default {
-    enterPerspectiveMode() {
-        if (this.isInPerspectiveMode) {
-            return this;
-        }
-
-        // Set card's visible to true
-        this.setChildVisible(this.perspectiveCard, true);
-        // Snapshot front and back children to card's faces
-        this.snapshotFace(0);
-        this.snapshotFace(1);
-        // Set front and back children's visible to false
-        this.setChildVisible(this.childrenMap.front, false);
-        this.setChildVisible(this.childrenMap.back, false);
-        // Reset size of card
-        this.perspectiveCard.setSize(this.width, this.height);
-
-        return this;
-    },
-
-    exitPerspectiveMode() {
-        if (!this.isInPerspectiveMode) {
-            return this;
-        }
-
-        // Set card's visible to false
-        this.setChildVisible(this.perspectiveCard, false);
-        // Set front or back children's visible to true, according to card's face
-        var isFrontFace = (this.perspectiveCard.face === 0);
-        this.setChildVisible(this.childrenMap.front, isFrontFace);
-        this.setChildVisible(this.childrenMap.back, !isFrontFace);
-
-        return this;
-    },
-
-    setSnapshotPadding(padding) {
-        this.snapshotPadding = padding;
-        return this;
-    },
-
-    snapshotFace(face) {
-        if (typeof (face) === 'number') {
-            face = FaceIndexMap[face];
-        }
-
-        var cardFace = this.perspectiveCard.faces[face];
-        var faceChild = this.childrenMap[face];
-
-        cardFace.rt.clear();
-
-        var faceChildVisibleSave = faceChild.visible;
-        faceChild.visible = true;
-
-        var gameObjects = (faceChild.isRexContainerLite) ? faceChild.getAllVisibleChildren() : faceChild;
-        cardFace.snapshot(
-            gameObjects,
-            { padding: this.snapshotPadding }
-        );
-
-        faceChild.visible = faceChildVisibleSave;
-
-        return this;
-    }
-
-}
spaces/AkitoP/umamusume_bert_vits2/modules.py DELETED
@@ -1,597 +0,0 @@
1
- import math
2
- import torch
3
- from torch import nn
4
- from torch.nn import functional as F
5
-
6
- from torch.nn import Conv1d
7
- from torch.nn.utils import weight_norm, remove_weight_norm
8
-
9
- import commons
10
- from commons import init_weights, get_padding
11
- from transforms import piecewise_rational_quadratic_transform
12
- from attentions import Encoder
13
-
14
- LRELU_SLOPE = 0.1
15
-
16
-
17
- class LayerNorm(nn.Module):
18
- def __init__(self, channels, eps=1e-5):
19
- super().__init__()
20
- self.channels = channels
21
- self.eps = eps
22
-
23
- self.gamma = nn.Parameter(torch.ones(channels))
24
- self.beta = nn.Parameter(torch.zeros(channels))
25
-
26
- def forward(self, x):
27
- x = x.transpose(1, -1)
28
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
29
- return x.transpose(1, -1)
30
-
31
-
32
- class ConvReluNorm(nn.Module):
33
- def __init__(
34
- self,
35
- in_channels,
36
- hidden_channels,
37
- out_channels,
38
- kernel_size,
39
- n_layers,
40
- p_dropout,
41
- ):
42
- super().__init__()
43
- self.in_channels = in_channels
44
- self.hidden_channels = hidden_channels
45
- self.out_channels = out_channels
46
- self.kernel_size = kernel_size
47
- self.n_layers = n_layers
48
- self.p_dropout = p_dropout
49
- assert n_layers > 1, "Number of layers should be larger than 0."
50
-
51
- self.conv_layers = nn.ModuleList()
52
- self.norm_layers = nn.ModuleList()
53
- self.conv_layers.append(
54
- nn.Conv1d(
55
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
56
- )
57
- )
58
- self.norm_layers.append(LayerNorm(hidden_channels))
59
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
60
- for _ in range(n_layers - 1):
61
- self.conv_layers.append(
62
- nn.Conv1d(
63
- hidden_channels,
64
- hidden_channels,
65
- kernel_size,
66
- padding=kernel_size // 2,
67
- )
68
- )
69
- self.norm_layers.append(LayerNorm(hidden_channels))
70
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
71
- self.proj.weight.data.zero_()
72
- self.proj.bias.data.zero_()
73
-
74
- def forward(self, x, x_mask):
75
- x_org = x
76
- for i in range(self.n_layers):
77
- x = self.conv_layers[i](x * x_mask)
78
- x = self.norm_layers[i](x)
79
- x = self.relu_drop(x)
80
- x = x_org + self.proj(x)
81
- return x * x_mask
82
-
83
-
84
- class DDSConv(nn.Module):
85
- """
86
- Dialted and Depth-Separable Convolution
87
- """
88
-
89
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
90
- super().__init__()
91
- self.channels = channels
92
- self.kernel_size = kernel_size
93
- self.n_layers = n_layers
94
- self.p_dropout = p_dropout
95
-
96
- self.drop = nn.Dropout(p_dropout)
97
- self.convs_sep = nn.ModuleList()
98
- self.convs_1x1 = nn.ModuleList()
99
- self.norms_1 = nn.ModuleList()
100
- self.norms_2 = nn.ModuleList()
101
- for i in range(n_layers):
102
-             dilation = kernel_size**i
-             padding = (kernel_size * dilation - dilation) // 2
-             self.convs_sep.append(
-                 nn.Conv1d(
-                     channels,
-                     channels,
-                     kernel_size,
-                     groups=channels,
-                     dilation=dilation,
-                     padding=padding,
-                 )
-             )
-             self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-             self.norms_1.append(LayerNorm(channels))
-             self.norms_2.append(LayerNorm(channels))
-
-     def forward(self, x, x_mask, g=None):
-         if g is not None:
-             x = x + g
-         for i in range(self.n_layers):
-             y = self.convs_sep[i](x * x_mask)
-             y = self.norms_1[i](y)
-             y = F.gelu(y)
-             y = self.convs_1x1[i](y)
-             y = self.norms_2[i](y)
-             y = F.gelu(y)
-             y = self.drop(y)
-             x = x + y
-         return x * x_mask
-
-
- class WN(torch.nn.Module):
-     def __init__(
-         self,
-         hidden_channels,
-         kernel_size,
-         dilation_rate,
-         n_layers,
-         gin_channels=0,
-         p_dropout=0,
-     ):
-         super(WN, self).__init__()
-         assert kernel_size % 2 == 1
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.gin_channels = gin_channels
-         self.p_dropout = p_dropout
-
-         self.in_layers = torch.nn.ModuleList()
-         self.res_skip_layers = torch.nn.ModuleList()
-         self.drop = nn.Dropout(p_dropout)
-
-         if gin_channels != 0:
-             cond_layer = torch.nn.Conv1d(
-                 gin_channels, 2 * hidden_channels * n_layers, 1
-             )
-             self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
-         for i in range(n_layers):
-             dilation = dilation_rate**i
-             padding = int((kernel_size * dilation - dilation) / 2)
-             in_layer = torch.nn.Conv1d(
-                 hidden_channels,
-                 2 * hidden_channels,
-                 kernel_size,
-                 dilation=dilation,
-                 padding=padding,
-             )
-             in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
-             self.in_layers.append(in_layer)
-
-             # last one is not necessary
-             if i < n_layers - 1:
-                 res_skip_channels = 2 * hidden_channels
-             else:
-                 res_skip_channels = hidden_channels
-
-             res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-             res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
-             self.res_skip_layers.append(res_skip_layer)
-
-     def forward(self, x, x_mask, g=None, **kwargs):
-         output = torch.zeros_like(x)
-         n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
-         if g is not None:
-             g = self.cond_layer(g)
-
-         for i in range(self.n_layers):
-             x_in = self.in_layers[i](x)
-             if g is not None:
-                 cond_offset = i * 2 * self.hidden_channels
-                 g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
-             else:
-                 g_l = torch.zeros_like(x_in)
-
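-             # gated activation unit: tanh/sigmoid halves of the conditioned input, fused into one kernel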
-             acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
-             acts = self.drop(acts)
-
-             res_skip_acts = self.res_skip_layers[i](acts)
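-             # first half of res_skip_acts is the residual added back to x; second half accumulates into the skip output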
-             if i < self.n_layers - 1:
-                 res_acts = res_skip_acts[:, : self.hidden_channels, :]
-                 x = (x + res_acts) * x_mask
-                 output = output + res_skip_acts[:, self.hidden_channels :, :]
-             else:
-                 output = output + res_skip_acts
-         return output * x_mask
-
-     def remove_weight_norm(self):
-         if self.gin_channels != 0:
-             torch.nn.utils.remove_weight_norm(self.cond_layer)
-         for l in self.in_layers:
-             torch.nn.utils.remove_weight_norm(l)
-         for l in self.res_skip_layers:
-             torch.nn.utils.remove_weight_norm(l)
-
-
- class ResBlock1(torch.nn.Module):
-     def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
-         super(ResBlock1, self).__init__()
-         self.convs1 = nn.ModuleList(
-             [
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=dilation[0],
-                         padding=get_padding(kernel_size, dilation[0]),
-                     )
-                 ),
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=dilation[1],
-                         padding=get_padding(kernel_size, dilation[1]),
-                     )
-                 ),
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=dilation[2],
-                         padding=get_padding(kernel_size, dilation[2]),
-                     )
-                 ),
-             ]
-         )
-         self.convs1.apply(init_weights)
-
-         self.convs2 = nn.ModuleList(
-             [
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=1,
-                         padding=get_padding(kernel_size, 1),
-                     )
-                 ),
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=1,
-                         padding=get_padding(kernel_size, 1),
-                     )
-                 ),
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=1,
-                         padding=get_padding(kernel_size, 1),
-                     )
-                 ),
-             ]
-         )
-         self.convs2.apply(init_weights)
-
-     def forward(self, x, x_mask=None):
-         for c1, c2 in zip(self.convs1, self.convs2):
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c1(xt)
-             xt = F.leaky_relu(xt, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c2(xt)
-             x = xt + x
-         if x_mask is not None:
-             x = x * x_mask
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs1:
-             remove_weight_norm(l)
-         for l in self.convs2:
-             remove_weight_norm(l)
-
-
- class ResBlock2(torch.nn.Module):
-     def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
-         super(ResBlock2, self).__init__()
-         self.convs = nn.ModuleList(
-             [
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=dilation[0],
-                         padding=get_padding(kernel_size, dilation[0]),
-                     )
-                 ),
-                 weight_norm(
-                     Conv1d(
-                         channels,
-                         channels,
-                         kernel_size,
-                         1,
-                         dilation=dilation[1],
-                         padding=get_padding(kernel_size, dilation[1]),
-                     )
-                 ),
-             ]
-         )
-         self.convs.apply(init_weights)
-
-     def forward(self, x, x_mask=None):
-         for c in self.convs:
-             xt = F.leaky_relu(x, LRELU_SLOPE)
-             if x_mask is not None:
-                 xt = xt * x_mask
-             xt = c(xt)
-             x = xt + x
-         if x_mask is not None:
-             x = x * x_mask
-         return x
-
-     def remove_weight_norm(self):
-         for l in self.convs:
-             remove_weight_norm(l)
-
-
- class Log(nn.Module):
-     def forward(self, x, x_mask, reverse=False, **kwargs):
-         if not reverse:
-             y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
-             logdet = torch.sum(-y, [1, 2])
-             return y, logdet
-         else:
-             x = torch.exp(x) * x_mask
-             return x
-
-
- class Flip(nn.Module):
-     def forward(self, x, *args, reverse=False, **kwargs):
-         x = torch.flip(x, [1])
-         if not reverse:
-             logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
-             return x, logdet
-         else:
-             return x
-
-
- class ElementwiseAffine(nn.Module):
-     def __init__(self, channels):
-         super().__init__()
-         self.channels = channels
-         self.m = nn.Parameter(torch.zeros(channels, 1))
-         self.logs = nn.Parameter(torch.zeros(channels, 1))
-
-     def forward(self, x, x_mask, reverse=False, **kwargs):
-         if not reverse:
-             y = self.m + torch.exp(self.logs) * x
-             y = y * x_mask
-             logdet = torch.sum(self.logs * x_mask, [1, 2])
-             return y, logdet
-         else:
-             x = (x - self.m) * torch.exp(-self.logs) * x_mask
-             return x
-
-
- class ResidualCouplingLayer(nn.Module):
-     def __init__(
-         self,
-         channels,
-         hidden_channels,
-         kernel_size,
-         dilation_rate,
-         n_layers,
-         p_dropout=0,
-         gin_channels=0,
-         mean_only=False,
-     ):
-         assert channels % 2 == 0, "channels should be divisible by 2"
-         super().__init__()
-         self.channels = channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.half_channels = channels // 2
-         self.mean_only = mean_only
-
-         self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-         self.enc = WN(
-             hidden_channels,
-             kernel_size,
-             dilation_rate,
-             n_layers,
-             p_dropout=p_dropout,
-             gin_channels=gin_channels,
-         )
-         self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-         self.post.weight.data.zero_()
-         self.post.bias.data.zero_()
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
-         h = self.pre(x0) * x_mask
-         h = self.enc(h, x_mask, g=g)
-         stats = self.post(h) * x_mask
-         if not self.mean_only:
-             m, logs = torch.split(stats, [self.half_channels] * 2, 1)
-         else:
-             m = stats
-             logs = torch.zeros_like(m)
-
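-         # affine coupling: x1 is shifted by m and scaled by exp(logs), both predicted from x0, so the log-determinant is the sum of logs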
-         if not reverse:
-             x1 = m + x1 * torch.exp(logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             logdet = torch.sum(logs, [1, 2])
-             return x, logdet
-         else:
-             x1 = (x1 - m) * torch.exp(-logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             return x
-
-
- class ConvFlow(nn.Module):
-     def __init__(
-         self,
-         in_channels,
-         filter_channels,
-         kernel_size,
-         n_layers,
-         num_bins=10,
-         tail_bound=5.0,
-     ):
-         super().__init__()
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.n_layers = n_layers
-         self.num_bins = num_bins
-         self.tail_bound = tail_bound
-         self.half_channels = in_channels // 2
-
-         self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
-         self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
-         self.proj = nn.Conv1d(
-             filter_channels, self.half_channels * (num_bins * 3 - 1), 1
-         )
-         self.proj.weight.data.zero_()
-         self.proj.bias.data.zero_()
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
-         h = self.pre(x0)
-         h = self.convs(h, x_mask, g=g)
-         h = self.proj(h) * x_mask
-
-         b, c, t = x0.shape
-         h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, cx?, t] -> [b, c, t, ?]
-
-         unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
-         unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
-             self.filter_channels
-         )
-         unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
-         x1, logabsdet = piecewise_rational_quadratic_transform(
-             x1,
-             unnormalized_widths,
-             unnormalized_heights,
-             unnormalized_derivatives,
-             inverse=reverse,
-             tails="linear",
-             tail_bound=self.tail_bound,
-         )
-
-         x = torch.cat([x0, x1], 1) * x_mask
-         logdet = torch.sum(logabsdet * x_mask, [1, 2])
-         if not reverse:
-             return x, logdet
-         else:
-             return x
-
-
- class TransformerCouplingLayer(nn.Module):
-     def __init__(
-         self,
-         channels,
-         hidden_channels,
-         kernel_size,
-         n_layers,
-         n_heads,
-         p_dropout=0,
-         filter_channels=0,
-         mean_only=False,
-         wn_sharing_parameter=None,
-         gin_channels=0,
-     ):
-         assert channels % 2 == 0, "channels should be divisible by 2"
-         super().__init__()
-         self.channels = channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.n_layers = n_layers
-         self.half_channels = channels // 2
-         self.mean_only = mean_only
-
-         self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
-         self.enc = (
-             Encoder(
-                 hidden_channels,
-                 filter_channels,
-                 n_heads,
-                 n_layers,
-                 kernel_size,
-                 p_dropout,
-                 isflow=True,
-                 gin_channels=gin_channels,
-             )
-             if wn_sharing_parameter is None
-             else wn_sharing_parameter
-         )
-         self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
-         self.post.weight.data.zero_()
-         self.post.bias.data.zero_()
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
-         h = self.pre(x0) * x_mask
-         h = self.enc(h, x_mask, g=g)
-         stats = self.post(h) * x_mask
-         if not self.mean_only:
-             m, logs = torch.split(stats, [self.half_channels] * 2, 1)
-         else:
-             m = stats
-             logs = torch.zeros_like(m)
-
-         if not reverse:
-             x1 = m + x1 * torch.exp(logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             logdet = torch.sum(logs, [1, 2])
-             return x, logdet
-         else:
-             x1 = (x1 - m) * torch.exp(-logs) * x_mask
-             x = torch.cat([x0, x1], 1)
-             return x
spaces/AlanMars/QYL-AI-Space/locale/extract_locale.py DELETED
@@ -1,26 +0,0 @@
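- # extract_locale.py: collect i18n("...") string literals from app.py and modules/*.py into labels.json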
- import os
- import json
- import re
-
- # Define regular expression patterns
- pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)'
-
- # Load the .py file
- with open('app.py', 'r', encoding='utf-8') as f:
-     contents = f.read()
-
- # Load the .py files in the modules folder
- for filename in os.listdir("modules"):
-     if filename.endswith(".py"):
-         with open(os.path.join("modules", filename), "r", encoding="utf-8") as f:
-             contents += f.read()
-
- # Match i18n(...) calls with the regular expression
- matches = re.findall(pattern, contents, re.DOTALL)
-
- # Convert to key/value pairs
- data = {match.strip('()"'): '' for match in matches}
-
- # Save as a JSON file
- with open('labels.json', 'w', encoding='utf-8') as f:
-     json.dump(data, f, ensure_ascii=False, indent=4)
spaces/AlekseyKorshuk/huggingartists/app.py DELETED
@@ -1,245 +0,0 @@
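- # Streamlit demo: look up an artist on Genius, load the matching huggingartists model, and sample lyrics from a prompt.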
- import json
- import math
- import random
- import os
- import streamlit as st
- import lyricsgenius
- import transformers
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
-
- st.set_page_config(page_title="HuggingArtists")
-
- st.title("HuggingArtists")
- st.sidebar.markdown(
-     """
-     <style>
-     .aligncenter {
-         text-align: center;
-     }
-     </style>
-     <p class="aligncenter">
-         <img src="https://raw.githubusercontent.com/AlekseyKorshuk/huggingartists/master/img/logo.jpg" width="420" />
-     </p>
-     """,
-     unsafe_allow_html=True,
- )
- st.sidebar.markdown(
-     """
-     <style>
-     .aligncenter {
-         text-align: center;
-     }
-     </style>
-
-     <p style='text-align: center'>
-         <a href="https://github.com/AlekseyKorshuk/huggingartists" target="_blank">GitHub</a> | <a href="https://wandb.ai/huggingartists/huggingartists/reportlist" target="_blank">Project Report</a>
-     </p>
-
-     <p class="aligncenter">
-         <a href="https://github.com/AlekseyKorshuk/huggingartists" target="_blank">
-             <img src="https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social"/>
-         </a>
-     </p>
-     <p class="aligncenter">
-         <a href="https://t.me/joinchat/_CQ04KjcJ-4yZTky" target="_blank">
-             <img src="https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram"/>
-         </a>
-     </p>
-     <p class="aligncenter">
-         <a href="https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb" target="_blank">
-             <img src="https://colab.research.google.com/assets/colab-badge.svg"/>
-         </a>
-     </p>
-     """,
-     unsafe_allow_html=True,
- )
-
-
- st.sidebar.header("Generation settings:")
- num_sequences = st.sidebar.number_input(
-     "Number of sequences to generate",
-     min_value=1,
-     value=5,
-     help="The number of texts to generate",
- )
- min_length = st.sidebar.number_input(
-     "Minimum length of the sequence",
-     min_value=1,
-     value=100,
-     help="The minimum length of the sequence to be generated",
- )
- max_length = st.sidebar.number_input(
-     "Maximum length of the sequence",
-     min_value=1,
-     value=160,
-     help="The maximum length of the sequence to be generated",
- )
- temperature = st.sidebar.slider(
-     "Temperature",
-     min_value=0.0,
-     max_value=3.0,
-     step=0.01,
-     value=1.0,
-     help="The value used to modulate the next-token probabilities",
- )
- top_p = st.sidebar.slider(
-     "Top-P",
-     min_value=0.0,
-     max_value=1.0,
-     step=0.01,
-     value=0.95,
-     help="If set to a float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.",
- )
-
- top_k = st.sidebar.number_input(
-     "Top-K",
-     min_value=0,
-     value=50,
-     step=1,
-     help="The number of highest-probability vocabulary tokens to keep for top-k filtering.",
- )
-
- caption = (
-     "In [HuggingArtists](https://github.com/AlekseyKorshuk/huggingartist), we can generate lyrics by a specific artist. This was made by fine-tuning a pre-trained [HuggingFace Transformer](https://huggingface.co) on parsed datasets from [Genius](https://genius.com)."
- )
- st.markdown("[HuggingArtists](https://github.com/AlekseyKorshuk/huggingartist) - Train a model to generate lyrics 🎵")
- st.markdown(caption)
-
- st.subheader("Settings:")
- artist_name = st.text_input("Artist name:", "Eminem")
- start = st.text_input("Beginning of the song:", "But for me to rap like a computer")
-
- TOKEN = "q_JK_BFy9OMiG7fGTzL-nUto9JDv3iXI24aYRrQnkOvjSCSbY4BuFIindweRsr5I"
- genius = lyricsgenius.Genius(TOKEN)
-
- model_html = """
-
- <div class="inline-flex flex-col" style="line-height: 1.5;">
-     <div class="flex">
-         <div
- \t\t\tstyle="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;USER_PROFILE&#39;)">
-         </div>
-     </div>
-     <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
-     <div style="text-align: center; font-size: 16px; font-weight: 800">USER_NAME</div>
-     <a href="https://genius.com/artists/USER_HANDLE">
- \t<div style="text-align: center; font-size: 14px;">@USER_HANDLE</div>
-     </a>
- </div>
- """
-
-
- def post_process(output_sequences):
-     predictions = []
-     generated_sequences = []
-
-     max_repeat = 2
-
-     # decode predictions
-     for generated_sequence_idx, generated_sequence in enumerate(output_sequences):
-         generated_sequence = generated_sequence.tolist()
-         text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True, skip_special_tokens=True)
-         generated_sequences.append(text.strip())
-
-     for i, g in enumerate(generated_sequences):
-         res = str(g).replace('\n\n\n', '\n').replace('\n\n', '\n')
-         lines = res.split('\n')
-         # print(lines)
-         # i = max_repeat
-         # while i != len(lines):
-         #     remove_count = 0
-         #     for index in range(0, max_repeat):
-         #         # print(i - index - 1, i - index)
-         #         if lines[i - index - 1] == lines[i - index]:
-         #             remove_count += 1
-         #     if remove_count == max_repeat:
-         #         lines.pop(i)
-         #         i -= 1
-         #     else:
-         #         i += 1
-         predictions.append('\n'.join(lines))
-
-     return predictions
-
- if st.button("Run"):
-     model_name = None
-     with st.spinner(text=f"Searching for {artist_name} in Genius..."):
-         artist = genius.search_artist(artist_name, max_songs=0, get_full_info=False)
-         if artist is not None:
-             artist_dict = genius.artist(artist.id)['artist']
-             artist_url = str(artist_dict['url'])
-             model_name = artist_url[artist_url.rfind('/') + 1:].lower()
-             st.markdown(model_html.replace("USER_PROFILE", artist.image_url).replace("USER_NAME", artist.name).replace("USER_HANDLE", model_name), unsafe_allow_html=True)
-         else:
-             st.markdown(f"Could not find {artist_name}! Be sure that he/she exists in [Genius](https://genius.com/).")
-     if model_name is not None:
-         with st.spinner(text=f"Downloading the model of {artist_name}..."):
-             model = None
-             tokenizer = None
-             try:
-                 tokenizer = AutoTokenizer.from_pretrained(f"huggingartists/{model_name}")
-                 model = AutoModelForCausalLM.from_pretrained(f"huggingartists/{model_name}")
-             except Exception as ex:
-                 # st.markdown(ex)
-                 st.markdown("Model for this artist does not exist yet. Create it in just 5 min with [Colab Notebook](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb):")
-                 st.markdown(
-                     """
-                     <style>
-                     .aligncenter {
-                         text-align: center;
-                     }
-                     </style>
-                     <p class="aligncenter">
-                         <a href="https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb" target="_blank">
-                             <img src="https://colab.research.google.com/assets/colab-badge.svg"/>
-                         </a>
-                     </p>
-                     """,
-                     unsafe_allow_html=True,
-                 )
-         if model is not None:
-             with st.spinner(text="Generating lyrics..."):
-                 encoded_prompt = tokenizer(start, add_special_tokens=False, return_tensors="pt").input_ids
-                 encoded_prompt = encoded_prompt.to(model.device)
-                 # prediction
-                 output_sequences = model.generate(
-                     input_ids=encoded_prompt,
-                     max_length=max_length,
-                     min_length=min_length,
-                     temperature=float(temperature),
-                     top_p=float(top_p),
-                     top_k=int(top_k),
-                     do_sample=True,
-                     repetition_penalty=1.0,
-                     num_return_sequences=num_sequences
-                 )
-                 # Post-processing
-                 predictions = post_process(output_sequences)
-                 st.subheader("Results")
-                 for prediction in predictions:
-                     st.text(prediction)
-             st.subheader("Please star this repository and join my Telegram Channel:")
-             st.markdown(
-                 """
-                 <style>
-                 .aligncenter {
-                     text-align: center;
-                 }
-                 </style>
-                 <p class="aligncenter">
-                     <a href="https://github.com/AlekseyKorshuk/huggingartists" target="_blank">
-                         <img src="https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social"/>
-                     </a>
-                 </p>
-                 <p class="aligncenter">
-                     <a href="https://t.me/joinchat/_CQ04KjcJ-4yZTky" target="_blank">
-                         <img src="https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram"/>
-                     </a>
-                 </p>
-                 """,
-                 unsafe_allow_html=True,
-             )
spaces/AlekseyKorshuk/michellejieli-NSFW_text_classifier/app.py DELETED
@@ -1,3 +0,0 @@
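- # Minimal Gradio app that loads and serves the michellejieli/NSFW_text_classifier model.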
- import gradio as gr
-
- gr.Interface.load("models/michellejieli/NSFW_text_classifier").launch()
spaces/Aloento/9Nine-PITS/models.py DELETED
@@ -1,1383 +0,0 @@
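- # VITS-style synthesizer (SynthesizerTrn) with Avocodo discriminators and a YIN-based pitch decoder.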
- # from https://github.com/jaywalnut310/vits
- # from https://github.com/ncsoft/avocodo
- import math
-
- import torch
- from torch import nn
- from torch.nn import Conv1d, ConvTranspose1d, Conv2d
- from torch.nn import functional as F
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
- import attentions
- import commons
- import modules
- from analysis import Pitch
- from commons import init_weights, get_padding
- from pqmf import PQMF
-
-
- # for Q option
- # from functions import vq, vq_st
-
-
- class StochasticDurationPredictor(nn.Module):
-
-     def __init__(self,
-                  in_channels,
-                  filter_channels,
-                  kernel_size,
-                  p_dropout,
-                  n_flows=4,
-                  gin_channels=0):
-         super().__init__()
-         # this needs to be removed in a future version
-         filter_channels = in_channels
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.n_flows = n_flows
-         self.gin_channels = gin_channels
-
-         self.log_flow = modules.Log()
-         self.flows = nn.ModuleList()
-         self.flows.append(modules.ElementwiseAffine(2))
-         for i in range(n_flows):
-             self.flows.append(
-                 modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-             self.flows.append(modules.Flip())
-
-         self.post_pre = nn.Conv1d(1, filter_channels, 1)
-         self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
-         self.post_convs = modules.DDSConv(filter_channels,
-                                           kernel_size,
-                                           n_layers=3,
-                                           p_dropout=p_dropout)
-         self.post_flows = nn.ModuleList()
-         self.post_flows.append(modules.ElementwiseAffine(2))
-         for i in range(4):
-             self.post_flows.append(
-                 modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-             self.post_flows.append(modules.Flip())
-
-         self.pre = nn.Conv1d(in_channels, filter_channels, 1)
-         self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
-         self.convs = modules.DDSConv(filter_channels,
-                                      kernel_size,
-                                      n_layers=3,
-                                      p_dropout=p_dropout)
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
-     def forward(self,
-                 x,
-                 x_mask,
-                 w=None,
-                 g=None,
-                 reverse=False,
-                 noise_scale=1.0):
-         x = torch.detach(x)
-         x = self.pre(x)
-         if g is not None:
-             g = torch.detach(g)
-             x = x + self.cond(g)
-         x = self.convs(x, x_mask)
-         x = self.proj(x) * x_mask
-
-         if not reverse:
-             flows = self.flows
-             assert w is not None
-
-             logdet_tot_q = 0
-             h_w = self.post_pre(w)
-             h_w = self.post_convs(h_w, x_mask)
-             h_w = self.post_proj(h_w) * x_mask
-             e_q = torch.randn(w.size(0), 2, w.size(2)).to(
-                 device=x.device, dtype=x.dtype) * x_mask
-             z_q = e_q
-             for flow in self.post_flows:
-                 z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
-                 logdet_tot_q += logdet_q
-             z_u, z1 = torch.split(z_q, [1, 1], 1)
-             u = torch.sigmoid(z_u) * x_mask
-             z0 = (w - u) * x_mask
-             logdet_tot_q += torch.sum(
-                 (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
-             logq = torch.sum(
-                 -0.5 * (math.log(2 * math.pi) +
-                         (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
-             logdet_tot = 0
-             z0, logdet = self.log_flow(z0, x_mask)
-             logdet_tot += logdet
-             z = torch.cat([z0, z1], 1)
-             for flow in flows:
-                 z, logdet = flow(z, x_mask, g=x, reverse=reverse)
-                 logdet_tot = logdet_tot + logdet
-             nll = torch.sum(0.5 * (math.log(2 * math.pi) +
-                                    (z ** 2)) * x_mask, [1, 2]) - logdet_tot
-             return nll + logq  # [b]
-         else:
-             flows = list(reversed(self.flows))
-             flows = flows[:-2] + [flows[-1]]  # remove a useless vflow
-             z = torch.randn(x.size(0), 2, x.size(2)).to(
-                 device=x.device, dtype=x.dtype) * noise_scale
-             for flow in flows:
-                 z = flow(z, x_mask, g=x, reverse=reverse)
-             z0, z1 = torch.split(z, [1, 1], 1)
-             logw = z0
-             return logw
-
-
- class DurationPredictor(nn.Module):
-
-     def __init__(self,
-                  in_channels,
-                  filter_channels,
-                  kernel_size,
-                  p_dropout,
-                  gin_channels=0):
-         super().__init__()
-
-         self.in_channels = in_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.gin_channels = gin_channels
-
-         self.drop = nn.Dropout(p_dropout)
-         self.conv_1 = nn.Conv1d(in_channels,
-                                 filter_channels,
-                                 kernel_size,
-                                 padding=kernel_size // 2)
-         self.norm_1 = modules.LayerNorm(filter_channels)
-         self.conv_2 = nn.Conv1d(filter_channels,
-                                 filter_channels,
-                                 kernel_size,
-                                 padding=kernel_size // 2)
-         self.norm_2 = modules.LayerNorm(filter_channels)
-         self.proj = nn.Conv1d(filter_channels, 1, 1)
-
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
-     def forward(self, x, x_mask, g=None):
-         x = torch.detach(x)
-         if g is not None:
-             g = torch.detach(g)
-             x = x + self.cond(g)
-         x = self.conv_1(x * x_mask)
-         x = torch.relu(x)
-         x = self.norm_1(x)
-         x = self.drop(x)
-         x = self.conv_2(x * x_mask)
-         x = torch.relu(x)
-         x = self.norm_2(x)
-         x = self.drop(x)
-         x = self.proj(x * x_mask)
-         return x * x_mask
-
-
- class TextEncoder(nn.Module):
-
-     def __init__(self, n_vocab, out_channels, hidden_channels, filter_channels,
-                  n_heads, n_layers, kernel_size, p_dropout):
-         super().__init__()
-         self.n_vocab = n_vocab
-         self.out_channels = out_channels
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-
-         self.emb = nn.Embedding(n_vocab, hidden_channels)
-         nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-         self.emb_t = nn.Embedding(6, hidden_channels)
-         nn.init.normal_(self.emb_t.weight, 0.0, hidden_channels ** -0.5)
-
-         self.encoder = attentions.Encoder(hidden_channels, filter_channels,
-                                           n_heads, n_layers, kernel_size,
-                                           p_dropout)
-         self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-     def forward(self, x, t, x_lengths):
-         t_zero = (t == 0)
-         emb_t = self.emb_t(t)
-         emb_t[t_zero, :] = 0
-         x = (self.emb(x) + emb_t) * math.sqrt(
-             self.hidden_channels)  # [b, t, h]
-         # x = torch.transpose(x, 1, -1)  # [b, h, t]
-         x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(1)),
-                                  1).to(x.dtype)
-         # x = self.encoder(x * x_mask, x_mask)
-         x = torch.einsum('btd,but->bdt', x, x_mask)
-         x = self.encoder(x, x_mask)
-         stats = self.proj(x) * x_mask
-
-         m, logs = torch.split(stats, self.out_channels, dim=1)
-         return x, m, logs, x_mask
-
-
- class ResidualCouplingBlock(nn.Module):
-
-     def __init__(self,
-                  channels,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  n_flows=4,
-                  gin_channels=0):
-         super().__init__()
-         self.channels = channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.n_flows = n_flows
-         self.gin_channels = gin_channels
-
-         self.flows = nn.ModuleList()
-         for i in range(n_flows):
-             self.flows.append(
-                 modules.ResidualCouplingLayer(channels,
-                                               hidden_channels,
-                                               kernel_size,
-                                               dilation_rate,
-                                               n_layers,
-                                               gin_channels=gin_channels,
-                                               mean_only=True))
-             self.flows.append(modules.Flip())
-
-     def forward(self, x, x_mask, g=None, reverse=False):
-         if not reverse:
-             for flow in self.flows:
-                 x, _ = flow(x, x_mask, g=g, reverse=reverse)
-         else:
-             for flow in reversed(self.flows):
-                 x = flow(x, x_mask, g=g, reverse=reverse)
-         return x
-
-
- class PosteriorEncoder(nn.Module):
-
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  gin_channels=0):
-         super().__init__()
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.gin_channels = gin_channels
-
-         self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-         self.enc = modules.WN(hidden_channels,
-                               kernel_size,
-                               dilation_rate,
-                               n_layers,
-                               gin_channels=gin_channels)
-         self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-     def forward(self, x, x_lengths, g=None):
-         x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)),
-                                  1).to(x.dtype)
-         x = self.pre(x) * x_mask
-         x = self.enc(x, x_mask, g=g)
-         stats = self.proj(x) * x_mask
-         m, logs = torch.split(stats, self.out_channels, dim=1)
-         z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-         return z, m, logs, x_mask
-
-
- class Generator(nn.Module):
-
-     def __init__(self,
-                  initial_channel,
-                  resblock,
-                  resblock_kernel_sizes,
-                  resblock_dilation_sizes,
-                  upsample_rates,
-                  upsample_initial_channel,
-                  upsample_kernel_sizes,
-                  gin_channels=0):
-         super(Generator, self).__init__()
-         self.num_kernels = len(resblock_kernel_sizes)
-         self.num_upsamples = len(upsample_rates)
-         self.conv_pre = Conv1d(initial_channel,
-                                upsample_initial_channel,
-                                7,
-                                1,
-                                padding=3)
-         resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
-         self.ups = nn.ModuleList()
-         for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-             self.ups.append(
-                 weight_norm(
-                     ConvTranspose1d(upsample_initial_channel // (2 ** i),
-                                     upsample_initial_channel // (2 ** (i + 1)),
-                                     k,
-                                     u,
-                                     padding=(k - u) // 2)))
-
-         self.resblocks = nn.ModuleList()
-         self.conv_posts = nn.ModuleList()
-         for i in range(len(self.ups)):
-             ch = upsample_initial_channel // (2 ** (i + 1))
-             for j, (k, d) in enumerate(
-                     zip(resblock_kernel_sizes, resblock_dilation_sizes)):
-                 self.resblocks.append(resblock(ch, k, d))
-             if i >= len(self.ups) - 3:
-                 self.conv_posts.append(
-                     Conv1d(ch, 1, 7, 1, padding=3, bias=False))
-         self.ups.apply(init_weights)
-
-         if gin_channels != 0:
-             self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-     def forward(self, x, g=None):
-         x = self.conv_pre(x)
-         if g is not None:
-             x = x + self.cond(g)
-
-         for i in range(self.num_upsamples):
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             x = self.ups[i](x)
-             xs = None
-             for j in range(self.num_kernels):
-                 xs = xs + self.resblocks[i * self.num_kernels + j](x) if xs is not None \
-                     else self.resblocks[i * self.num_kernels + j](x)
-             x = xs / self.num_kernels
-         x = F.leaky_relu(x)
-         x = self.conv_posts[-1](x)
-         x = torch.tanh(x)
-
-         return x
-
-     def hier_forward(self, x, g=None):
-         outs = []
-         x = self.conv_pre(x)
-         if g is not None:
-             x = x + self.cond(g)
-
-         for i in range(self.num_upsamples):
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             x = self.ups[i](x)
-             xs = None
-             for j in range(self.num_kernels):
-                 xs = xs + self.resblocks[i * self.num_kernels + j](x) if xs is not None \
-                     else self.resblocks[i * self.num_kernels + j](x)
-             x = xs / self.num_kernels
-             if i >= self.num_upsamples - 3:
-                 _x = F.leaky_relu(x)
-                 _x = self.conv_posts[i - self.num_upsamples + 3](_x)
-                 _x = torch.tanh(_x)
-                 outs.append(_x)
-         return outs
-
-     def remove_weight_norm(self):
-         print('Removing weight norm...')
-         for l in self.ups:
-             remove_weight_norm(l)
-         for l in self.resblocks:
-             l.remove_weight_norm()
-
-
- class DiscriminatorP(nn.Module):
-
-     def __init__(self,
-                  period,
-                  kernel_size=5,
-                  stride=3,
-                  use_spectral_norm=False):
-         super(DiscriminatorP, self).__init__()
-         self.period = period
-         self.use_spectral_norm = use_spectral_norm
-         norm_f = weight_norm if not use_spectral_norm else spectral_norm
-         self.convs = nn.ModuleList([
-             norm_f(
-                 Conv2d(1,
-                        32, (kernel_size, 1), (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(
-                 Conv2d(32,
-                        128, (kernel_size, 1), (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(
-                 Conv2d(128,
-                        512, (kernel_size, 1), (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(
-                 Conv2d(512,
-                        1024, (kernel_size, 1), (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0))),
-             norm_f(
-                 Conv2d(1024,
-                        1024, (kernel_size, 1),
-                        1,
-                        padding=(get_padding(kernel_size, 1), 0))),
-         ])
-         self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-     def forward(self, x):
-         fmap = []
-
-         # 1d to 2d
-         b, c, t = x.shape
-         if t % self.period != 0:  # pad first
-             n_pad = self.period - (t % self.period)
-             x = F.pad(x, (0, n_pad), "reflect")
-             t = t + n_pad
-         x = x.view(b, c, t // self.period, self.period)
-
-         for l in self.convs:
-             x = l(x)
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             fmap.append(x)
-         x = self.conv_post(x)
-         fmap.append(x)
-         x = torch.flatten(x, 1, -1)
-
-         return x, fmap
-
-
- class DiscriminatorS(nn.Module):
-
-     def __init__(self, use_spectral_norm=False):
-         super(DiscriminatorS, self).__init__()
-         norm_f = weight_norm if not use_spectral_norm else spectral_norm
-         self.convs = nn.ModuleList([
-             norm_f(Conv1d(1, 16, 15, 1, padding=7)),
-             norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
-             norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
-             norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
-             norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
-             norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
-         ])
-         self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
-     def forward(self, x):
-         fmap = []
-
-         for l in self.convs:
-             x = l(x)
-             x = F.leaky_relu(x, modules.LRELU_SLOPE)
-             fmap.append(x)
-         x = self.conv_post(x)
-         fmap.append(x)
-         x = torch.flatten(x, 1, -1)
-
-         return x, fmap
-
-
- class MultiPeriodDiscriminator(nn.Module):
-
-     def __init__(self, use_spectral_norm=False):
-         super(MultiPeriodDiscriminator, self).__init__()
-         periods = [2, 3, 5, 7, 11]
-
-         discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
-         discs = discs + \
-             [DiscriminatorP(i, use_spectral_norm=use_spectral_norm)
-              for i in periods]
-         self.discriminators = nn.ModuleList(discs)
-
-     def forward(self, y, y_hat):
-         y_d_rs = []
-         y_d_gs = []
-         fmap_rs = []
-         fmap_gs = []
-         for i, d in enumerate(self.discriminators):
-             y_d_r, fmap_r = d(y)
-             y_d_g, fmap_g = d(y_hat)
-             y_d_rs.append(y_d_r)
-             y_d_gs.append(y_d_g)
-             fmap_rs.append(fmap_r)
-             fmap_gs.append(fmap_g)
-
-         return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
- ##### Avocodo
- class CoMBDBlock(torch.nn.Module):
-
-     def __init__(
-             self,
-             h_u,  # List[int]
-             d_k,  # List[int]
-             d_s,  # List[int]
-             d_d,  # List[int]
-             d_g,  # List[int]
-             d_p,  # List[int]
-             op_f,  # int
-             op_k,  # int
-             op_g,  # int
-             use_spectral_norm=False):
-         super(CoMBDBlock, self).__init__()
-         norm_f = weight_norm if not use_spectral_norm else spectral_norm
-
-         self.convs = nn.ModuleList()
-         filters = [[1, h_u[0]]]
-         for i in range(len(h_u) - 1):
-             filters.append([h_u[i], h_u[i + 1]])
-         for _f, _k, _s, _d, _g, _p in zip(filters, d_k, d_s, d_d, d_g, d_p):
-             self.convs.append(
-                 norm_f(
-                     Conv1d(in_channels=_f[0],
-                            out_channels=_f[1],
-                            kernel_size=_k,
-                            stride=_s,
-                            dilation=_d,
-                            groups=_g,
-                            padding=_p)))
-         self.projection_conv = norm_f(
-             Conv1d(in_channels=filters[-1][1],
-                    out_channels=op_f,
-                    kernel_size=op_k,
-                    groups=op_g))
-
-     def forward(self, x, b_y, b_y_hat):
-         fmap_r = []
-         fmap_g = []
-         for block in self.convs:
-             x = block(x)
-             x = F.leaky_relu(x, 0.2)
-             f_r, f_g = x.split([b_y, b_y_hat], dim=0)
-             fmap_r.append(f_r.tile([2, 1, 1]) if b_y < b_y_hat else f_r)
-             fmap_g.append(f_g)
-         x = self.projection_conv(x)
-         x_r, x_g = x.split([b_y, b_y_hat], dim=0)
-         return x_r.tile([2, 1, 1]) if b_y < b_y_hat else x_r, x_g, fmap_r, fmap_g
-
-
- class CoMBD(torch.nn.Module):
-
-     def __init__(self, use_spectral_norm=False):
-         super(CoMBD, self).__init__()
-         self.pqmf_list = nn.ModuleList([
-             PQMF(4, 192, 0.13, 10.0),  # lv2
-             PQMF(2, 256, 0.25, 10.0)  # lv1
-         ])
-         combd_h_u = [[16, 64, 256, 1024, 1024, 1024] for _ in range(3)]
-         combd_d_k = [[7, 11, 11, 11, 11, 5], [11, 21, 21, 21, 21, 5],
-                      [15, 41, 41, 41, 41, 5]]
-         combd_d_s = [[1, 1, 4, 4, 4, 1] for _ in range(3)]
-         combd_d_d = [[1, 1, 1, 1, 1, 1] for _ in range(3)]
-         combd_d_g = [[1, 4, 16, 64, 256, 1] for _ in range(3)]
-
-         combd_d_p = [[3, 5, 5, 5, 5, 2], [5, 10, 10, 10, 10, 2],
-                      [7, 20, 20, 20, 20, 2]]
-         combd_op_f = [1, 1, 1]
-         combd_op_k = [3, 3, 3]
-         combd_op_g = [1, 1, 1]
-
-         self.blocks = nn.ModuleList()
-         for _h_u, _d_k, _d_s, _d_d, _d_g, _d_p, _op_f, _op_k, _op_g in zip(
-                 combd_h_u,
-                 combd_d_k,
-                 combd_d_s,
-                 combd_d_d,
-                 combd_d_g,
-                 combd_d_p,
-                 combd_op_f,
-                 combd_op_k,
-                 combd_op_g,
-         ):
-             self.blocks.append(
-                 CoMBDBlock(
-                     _h_u,
-                     _d_k,
-                     _d_s,
-                     _d_d,
-                     _d_g,
-                     _d_p,
-                     _op_f,
-                     _op_k,
-                     _op_g,
-                 ))
-
-     def _block_forward(self, ys, ys_hat, blocks):
-         outs_real = []
-         outs_fake = []
-         f_maps_real = []
-         f_maps_fake = []
-         # y: B; y_hat: 2B if i != -1 else B
-         for y, y_hat, block in zip(ys, ys_hat, blocks):
-             b_y = y.shape[0]
-             b_y_hat = y_hat.shape[0]
-             cat_y = torch.cat([y, y_hat], dim=0)
-             out_real, out_fake, f_map_r, f_map_g = block(cat_y, b_y, b_y_hat)
-             outs_real.append(out_real)
-             outs_fake.append(out_fake)
-             f_maps_real.append(f_map_r)
-             f_maps_fake.append(f_map_g)
-         return outs_real, outs_fake, f_maps_real, f_maps_fake
-
-     def _pqmf_forward(self, ys, ys_hat):
-         # preprocess for multi_scale forward
-         multi_scale_inputs_hat = []
-         for pqmf_ in self.pqmf_list:
-             multi_scale_inputs_hat.append(pqmf_.analysis(ys_hat[-1])[:, :1, :])
-
-         # real
-         # for hierarchical forward
-         # outs_real_, f_maps_real_ = self._block_forward(
-         #     ys, self.blocks)
-
-         # for multi_scale forward
-         # outs_real, f_maps_real = self._block_forward(
-         #     ys[:-1], self.blocks[:-1], outs_real, f_maps_real)
-         # outs_real.extend(outs_real[:-1])
-         # f_maps_real.extend(f_maps_real[:-1])
-
-         # outs_real = [torch.cat([o, o], dim=0) if i != len(outs_real_) - 1 else o for i, o in enumerate(outs_real_)]
-         # f_maps_real = [[torch.cat([fmap, fmap], dim=0) if i != len(f_maps_real_) - 1 else fmap for fmap in fmaps] \
-         #                for i, fmaps in enumerate(f_maps_real_)]
-
-         inputs_fake = [
-             torch.cat([y, multi_scale_inputs_hat[i]], dim=0)
-             if i != len(ys_hat) - 1 else y for i, y in enumerate(ys_hat)
-         ]
-         outs_real, outs_fake, f_maps_real, f_maps_fake = self._block_forward(
-             ys, inputs_fake, self.blocks)
-
-         # predicted
-         # for hierarchical forward
-         # outs_fake, f_maps_fake = self._block_forward(
-         #     inputs_fake, self.blocks)
-
-         # outs_real_, f_maps_real_ = self._block_forward(
-         #     ys, self.blocks)
-         # for multi_scale forward
-         # outs_fake, f_maps_fake = self._block_forward(
-         #     multi_scale_inputs_hat, self.blocks[:-1], outs_fake, f_maps_fake)
-
-         return outs_real, outs_fake, f_maps_real, f_maps_fake
-
-     def forward(self, ys, ys_hat):
-         outs_real, outs_fake, f_maps_real, f_maps_fake = self._pqmf_forward(
-             ys, ys_hat)
-         return outs_real, outs_fake, f_maps_real, f_maps_fake
-
-
- class MDC(torch.nn.Module):
-
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  strides,
-                  kernel_size,
-                  dilations,
-                  use_spectral_norm=False):
-         super(MDC, self).__init__()
-         norm_f = weight_norm if not use_spectral_norm else spectral_norm
-         self.d_convs = nn.ModuleList()
-         for _k, _d in zip(kernel_size, dilations):
-             self.d_convs.append(
-                 norm_f(
-                     Conv1d(in_channels=in_channels,
-                            out_channels=out_channels,
-                            kernel_size=_k,
-                            dilation=_d,
-                            padding=get_padding(_k, _d))))
-         self.post_conv = norm_f(
-             Conv1d(in_channels=out_channels,
-                    out_channels=out_channels,
-                    kernel_size=3,
-                    stride=strides,
-                    padding=get_padding(_k, _d)))
-         self.softmax = torch.nn.Softmax(dim=-1)
-
-     def forward(self, x):
-         _out = None
-         for _l in self.d_convs:
-             _x = torch.unsqueeze(_l(x), -1)
-             _x = F.leaky_relu(_x, 0.2)
-             _out = torch.cat([_out, _x], dim=-1) if _out is not None \
-                 else _x
-         x = torch.sum(_out, dim=-1)
-         x = self.post_conv(x)
-         x = F.leaky_relu(x, 0.2)  # @@
-
-         return x
-
-
- class SBDBlock(torch.nn.Module):
-
-     def __init__(self,
-                  segment_dim,
-                  strides,
-                  filters,
-                  kernel_size,
-                  dilations,
-                  use_spectral_norm=False):
-         super(SBDBlock, self).__init__()
-         norm_f = weight_norm if not use_spectral_norm else spectral_norm
-         self.convs = nn.ModuleList()
-         filters_in_out = [(segment_dim, filters[0])]
-         for i in range(len(filters) - 1):
-             filters_in_out.append([filters[i], filters[i + 1]])
-
-         for _s, _f, _k, _d in zip(strides, filters_in_out, kernel_size,
-                                   dilations):
-             self.convs.append(
-                 MDC(in_channels=_f[0],
-                     out_channels=_f[1],
-                     strides=_s,
-                     kernel_size=_k,
-                     dilations=_d,
-                     use_spectral_norm=use_spectral_norm))
-         self.post_conv = norm_f(
-             Conv1d(in_channels=_f[1],
-                    out_channels=1,
-                    kernel_size=3,
-                    stride=1,
-                    padding=3 // 2))  # @@
-
-     def forward(self, x):
-         fmap_r = []
-         fmap_g = []
-         for _l in self.convs:
-             x = _l(x)
-             f_r, f_g = torch.chunk(x, 2, dim=0)
-             fmap_r.append(f_r)
-             fmap_g.append(f_g)
-         x = self.post_conv(x)  # @@
-         x_r, x_g = torch.chunk(x, 2, dim=0)
-         return x_r, x_g, fmap_r, fmap_g
-
-
- class MDCDConfig:
-
-     def __init__(self):
-         self.pqmf_params = [16, 256, 0.03, 10.0]
-         self.f_pqmf_params = [64, 256, 0.1, 9.0]
-         self.filters = [[64, 128, 256, 256, 256], [64, 128, 256, 256, 256],
-                         [64, 128, 256, 256, 256], [32, 64, 128, 128, 128]]
-         self.kernel_sizes = [[[7, 7, 7], [7, 7, 7], [7, 7, 7], [7, 7, 7],
-                               [7, 7, 7]],
-                              [[5, 5, 5], [5, 5, 5], [5, 5, 5], [5, 5, 5],
-                               [5, 5, 5]],
-                              [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3],
-                               [3, 3, 3]],
-                              [[5, 5, 5], [5, 5, 5], [5, 5, 5], [5, 5, 5],
-                               [5, 5, 5]]]
-         self.dilations = [[[5, 7, 11], [5, 7, 11], [5, 7, 11], [5, 7, 11],
-                            [5, 7, 11]],
-                           [[3, 5, 7], [3, 5, 7], [3, 5, 7], [3, 5, 7],
-                            [3, 5, 7]],
-                           [[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3],
-                            [1, 2, 3]],
-                           [[1, 2, 3], [1, 2, 3], [1, 2, 3], [2, 3, 5],
-                            [2, 3, 5]]]
-         self.strides = [[1, 1, 3, 3, 1], [1, 1, 3, 3, 1], [1, 1, 3, 3, 1],
-                         [1, 1, 3, 3, 1]]
-         self.band_ranges = [[0, 6], [0, 11], [0, 16], [0, 64]]
-         self.transpose = [False, False, False, True]
-         self.segment_size = 8192
-
-
- class SBD(torch.nn.Module):
-
-     def __init__(self, use_spectral_norm=False):
-         super(SBD, self).__init__()
-         self.config = MDCDConfig()
-         self.pqmf = PQMF(*self.config.pqmf_params)
-         if True in self.config.transpose:
-             self.f_pqmf = PQMF(*self.config.f_pqmf_params)
-         else:
-             self.f_pqmf = None
-
-         self.discriminators = torch.nn.ModuleList()
-
-         for _f, _k, _d, _s, _br, _tr in zip(self.config.filters,
-                                             self.config.kernel_sizes,
-                                             self.config.dilations,
-                                             self.config.strides,
-                                             self.config.band_ranges,
-                                             self.config.transpose):
-             if _tr:
-                 segment_dim = self.config.segment_size // _br[1] - _br[0]
-             else:
-                 segment_dim = _br[1] - _br[0]
-
-             self.discriminators.append(
-                 SBDBlock(segment_dim=segment_dim,
-                          filters=_f,
-                          kernel_size=_k,
-                          dilations=_d,
-                          strides=_s,
-                          use_spectral_norm=use_spectral_norm))
-
-     def forward(self, y, y_hat):
-         y_d_rs = []
-         y_d_gs = []
-         fmap_rs = []
-         fmap_gs = []
-         y_in = self.pqmf.analysis(y)
-         y_hat_in = self.pqmf.analysis(y_hat)
-         y_in_f = self.f_pqmf.analysis(y)
-         y_hat_in_f = self.f_pqmf.analysis(y_hat)
-
-         for d, br, tr in zip(self.discriminators, self.config.band_ranges,
-                              self.config.transpose):
-             if not tr:
-                 _y_in = y_in[:, br[0]:br[1], :]
-                 _y_hat_in = y_hat_in[:, br[0]:br[1], :]
-             else:
-                 _y_in = y_in_f[:, br[0]:br[1], :]
-                 _y_hat_in = y_hat_in_f[:, br[0]:br[1], :]
-                 _y_in = torch.transpose(_y_in, 1, 2)
-                 _y_hat_in = torch.transpose(_y_hat_in, 1, 2)
-             # y_d_r, fmap_r = d(_y_in)
-             # y_d_g, fmap_g = d(_y_hat_in)
-             cat_y = torch.cat([_y_in, _y_hat_in], dim=0)
-             y_d_r, y_d_g, fmap_r, fmap_g = d(cat_y)
-             y_d_rs.append(y_d_r)
-             fmap_rs.append(fmap_r)
-             y_d_gs.append(y_d_g)
-             fmap_gs.append(fmap_g)
-
-         return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
- class AvocodoDiscriminator(nn.Module):
-
-     def __init__(self, use_spectral_norm=False):
-         super(AvocodoDiscriminator, self).__init__()
-         self.combd = CoMBD(use_spectral_norm)
-         self.sbd = SBD(use_spectral_norm)
-
-     def forward(self, y, ys_hat):
-         ys = [
-             self.combd.pqmf_list[0].analysis(y)[:, :1],  # lv2
-             self.combd.pqmf_list[1].analysis(y)[:, :1],  # lv1
-             y
-         ]
-         y_c_rs, y_c_gs, fmap_c_rs, fmap_c_gs = self.combd(ys, ys_hat)
-         y_s_rs, y_s_gs, fmap_s_rs, fmap_s_gs = self.sbd(y, ys_hat[-1])
-         y_c_rs.extend(y_s_rs)
-         y_c_gs.extend(y_s_gs)
-         fmap_c_rs.extend(fmap_s_rs)
-         fmap_c_gs.extend(fmap_s_gs)
-         return y_c_rs, y_c_gs, fmap_c_rs, fmap_c_gs
-
-
- ##### Avocodo
-
-
- class YingDecoder(nn.Module):
-
-     def __init__(self,
-                  hidden_channels,
-                  kernel_size,
-                  dilation_rate,
-                  n_layers,
-                  yin_start,
-                  yin_scope,
-                  yin_shift_range,
-                  gin_channels=0):
-         super().__init__()
-         self.in_channels = yin_scope
-         self.out_channels = yin_scope
-         self.hidden_channels = hidden_channels
-         self.kernel_size = kernel_size
-         self.dilation_rate = dilation_rate
-         self.n_layers = n_layers
-         self.gin_channels = gin_channels
-
-         self.yin_start = yin_start
-         self.yin_scope = yin_scope
-         self.yin_shift_range = yin_shift_range
-
-         self.pre = nn.Conv1d(self.in_channels, hidden_channels, 1)
-         self.dec = modules.WN(hidden_channels,
-                               kernel_size,
-                               dilation_rate,
-                               n_layers,
-                               gin_channels=gin_channels)
-         self.proj = nn.Conv1d(hidden_channels, self.out_channels, 1)
-
-     def crop_scope(self, x, yin_start, scope_shift):
-         # x: tensor [B,C,T]; scope_shift: tensor [B]
-         return torch.stack([
-             x[i, yin_start + scope_shift[i]:yin_start + self.yin_scope + scope_shift[i], :]
-             for i in range(x.shape[0])
-         ], dim=0)
-
-     def infer(self, z_yin, z_mask, g=None):
-         B = z_yin.shape[0]
-         scope_shift = torch.randint(-self.yin_shift_range,
-                                     self.yin_shift_range, (B,),
-                                     dtype=torch.int)
-         z_yin_crop = self.crop_scope(z_yin, self.yin_start, scope_shift)
-         x = self.pre(z_yin_crop) * z_mask
-         x = self.dec(x, z_mask, g=g)
-         yin_hat_crop = self.proj(x) * z_mask
-         return yin_hat_crop
-
-     def forward(self, z_yin, yin_gt, z_mask, g=None):
-         B = z_yin.shape[0]
-         scope_shift = torch.randint(-self.yin_shift_range,
-                                     self.yin_shift_range, (B,),
-                                     dtype=torch.int)
-         z_yin_crop = self.crop_scope(z_yin, self.yin_start, scope_shift)
-         yin_gt_shifted_crop = self.crop_scope(yin_gt, self.yin_start,
-                                               scope_shift)
-         yin_gt_crop = self.crop_scope(yin_gt, self.yin_start,
-                                       torch.zeros_like(scope_shift))
-         x = self.pre(z_yin_crop) * z_mask
-         x = self.dec(x, z_mask, g=g)
-         yin_hat_crop = self.proj(x) * z_mask
-         return yin_gt_crop, yin_gt_shifted_crop, yin_hat_crop, z_yin_crop, scope_shift
-
-
- # For Q option
- # class VQEmbedding(nn.Module):
- #
- #     def __init__(self, codebook_size, code_channels):
- #         super().__init__()
- #         self.embedding = nn.Embedding(codebook_size, code_channels)
- #         self.embedding.weight.data.uniform_(-1. / codebook_size,
- #                                             1. / codebook_size)
- #
- #     def forward(self, z_e_x):
- #         z_e_x_ = z_e_x.permute(0, 2, 1).contiguous()
- #         latent_indices = vq(z_e_x_, self.embedding.weight)
- #         z_q = self.embedding(latent_indices).permute(0, 2, 1)
- #         return z_q
- #
- #     def straight_through(self, z_e_x):
- #         z_e_x_ = z_e_x.permute(0, 2, 1).contiguous()
- #         z_q_x_st_, indices = vq_st(z_e_x_, self.embedding.weight.detach())
- #         z_q_x_st = z_q_x_st_.permute(0, 2, 1).contiguous()
- #
- #         z_q_x_flatten = torch.index_select(self.embedding.weight,
- #                                            dim=0,
- #                                            index=indices)
- #         z_q_x_ = z_q_x_flatten.view_as(z_e_x_)
- #         z_q_x = z_q_x_.permute(0, 2, 1).contiguous()
- #         return z_q_x_st, z_q_x
-
-
- class SynthesizerTrn(nn.Module):
-     """
-     Synthesizer for Training
-     """
-
-     def __init__(
-             self,
-             n_vocab,
-             spec_channels,
-             segment_size,
-             midi_start,
-             midi_end,
-             octave_range,
-             inter_channels,
-             hidden_channels,
-             filter_channels,
-             n_heads,
-             n_layers,
-             kernel_size,
-             p_dropout,
-             resblock,
-             resblock_kernel_sizes,
-             resblock_dilation_sizes,
-             upsample_rates,
-             upsample_initial_channel,
-             upsample_kernel_sizes,
-             yin_channels,
-             yin_start,
-             yin_scope,
-             yin_shift_range,
-             n_speakers=0,
-             gin_channels=0,
-             use_sdp=True,
-             # codebook_size=256,  # for Q option
-             **kwargs):
-
-         super().__init__()
-         self.n_vocab = n_vocab
-         self.spec_channels = spec_channels
-         self.inter_channels = inter_channels
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.resblock = resblock
-         self.resblock_kernel_sizes = resblock_kernel_sizes
-         self.resblock_dilation_sizes = resblock_dilation_sizes
-         self.upsample_rates = upsample_rates
-         self.upsample_initial_channel = upsample_initial_channel
-         self.upsample_kernel_sizes = upsample_kernel_sizes
-         self.segment_size = segment_size
-         self.n_speakers = n_speakers
-         self.gin_channels = gin_channels
-
-         self.yin_channels = yin_channels
-         self.yin_start = yin_start
-         self.yin_scope = yin_scope
-
-         self.use_sdp = use_sdp
-         self.enc_p = TextEncoder(n_vocab, inter_channels, hidden_channels,
-                                  filter_channels, n_heads, n_layers,
-                                  kernel_size, p_dropout)
-         self.dec = Generator(
-             inter_channels - yin_channels + yin_scope,
-             resblock,
-             resblock_kernel_sizes,
-             resblock_dilation_sizes,
-             upsample_rates,
-             upsample_initial_channel,
-             upsample_kernel_sizes,
-             gin_channels=gin_channels)
-
-         self.enc_spec = PosteriorEncoder(spec_channels,
-                                          inter_channels - yin_channels,
-                                          inter_channels - yin_channels,
-                                          5,
-                                          1,
-                                          16,
-                                          gin_channels=gin_channels)
-
-         self.enc_pitch = PosteriorEncoder(yin_channels,
-                                           yin_channels,
-                                           yin_channels,
-                                           5,
-                                           1,
-                                           16,
-                                           gin_channels=gin_channels)
-
-         self.flow = ResidualCouplingBlock(inter_channels,
-                                           hidden_channels,
-                                           5,
-                                           1,
-                                           4,
-                                           gin_channels=gin_channels)
-
-         if use_sdp:
-             self.dp = StochasticDurationPredictor(hidden_channels,
-                                                   192,
-                                                   3,
-                                                   0.5,
-                                                   4,
-                                                   gin_channels=gin_channels)
-         else:
-             self.dp = DurationPredictor(hidden_channels,
-                                         256,
-                                         3,
-                                         0.5,
-                                         gin_channels=gin_channels)
-
-         self.yin_dec = YingDecoder(yin_scope,
-                                    5,
-                                    1,
-                                    4,
-                                    yin_start,
-                                    yin_scope,
-                                    yin_shift_range,
-                                    gin_channels=gin_channels)
-
-         # self.vq = VQEmbedding(codebook_size, inter_channels - yin_channels)  # inter_channels // 2
-         self.emb_g = nn.Embedding(self.n_speakers, gin_channels)
-
-         self.pitch = Pitch(midi_start=midi_start,
-                            midi_end=midi_end,
-                            octave_range=octave_range)
-
-     def crop_scope(self, x, scope_shift=0):
-         # x: list; needs modification for a non-scalar shift
-         return [
-             i[:, self.yin_start + scope_shift:self.yin_start + self.yin_scope +
-               scope_shift, :] for i in x
-         ]
-
-     def crop_scope_tensor(self, x, scope_shift):
-         # x: tensor [B,C,T]; scope_shift: tensor [B]
-         return torch.stack([
-             x[i, self.yin_start + scope_shift[i]:self.yin_start +
-               self.yin_scope + scope_shift[i], :] for i in range(x.shape[0])
-         ], dim=0)
-
-     def yin_dec_infer(self, z_yin, z_mask, sid=None):
-         if self.n_speakers > 0:
-             g = self.emb_g(sid).unsqueeze(-1)  # [b, h, 1]
-         else:
-             g = None
-         return self.yin_dec.infer(z_yin, z_mask, g)
-
-     def forward(self,
-                 x,
-                 t,
-                 x_lengths,
-                 y,
-                 y_lengths,
-                 ying,
-                 ying_lengths,
-                 sid=None,
-                 scope_shift=0):
-         x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths)
-         if self.n_speakers > 0:
-             g = self.emb_g(sid).unsqueeze(-1)  # [b, h, 1]
-         else:
-             g = None
-
-         z_spec, m_spec, logs_spec, spec_mask = self.enc_spec(y, y_lengths, g=g)
-
-         # for Q option
-         # z_spec_q_st, z_spec_q = self.vq.straight_through(z_spec)
-         # z_spec_q_st = z_spec_q_st * spec_mask
-         # z_spec_q = z_spec_q * spec_mask
-
-         z_yin, m_yin, logs_yin, yin_mask = self.enc_pitch(ying, y_lengths, g=g)
-         z_yin_crop, logs_yin_crop, m_yin_crop = self.crop_scope(
-             [z_yin, logs_yin, m_yin], scope_shift)
-
-         # yin dec loss
-         yin_gt_crop, yin_gt_shifted_crop, yin_dec_crop, z_yin_crop_shifted, scope_shift = self.yin_dec(
-             z_yin, ying, yin_mask, g)
-
-         z = torch.cat([z_spec, z_yin], dim=1)
-         logs_q = torch.cat([logs_spec, logs_yin], dim=1)
-         m_q = torch.cat([m_spec, m_yin], dim=1)
-         y_mask = spec_mask
-
-         z_p = self.flow(z, y_mask, g=g)
-
-         z_dec = torch.cat([z_spec, z_yin_crop], dim=1)
-
-         z_dec_shifted = torch.cat([z_spec.detach(), z_yin_crop_shifted], dim=1)
-         z_dec_ = torch.cat([z_dec, z_dec_shifted], dim=0)
-
-         with torch.no_grad():
-             # negative cross-entropy
-             s_p_sq_r = torch.exp(-2 * logs_p)  # [b, d, t]
-             # [b, 1, t_s]
-             neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1],
-                                   keepdim=True)
-             # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]; z_p: [b, d, t]
-             # neg_cent2 = torch.matmul(-0.5 * (z_p**2).transpose(1, 2), s_p_sq_r)
-             neg_cent2 = torch.einsum('bdt, bds -> bts', -0.5 * (z_p ** 2),
-                                      s_p_sq_r)
-             # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
-             # neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r))
-             neg_cent3 = torch.einsum('bdt, bds -> bts', z_p, (m_p * s_p_sq_r))
-             neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1],
-                                   keepdim=True)  # [b, 1, t_s]
-             neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
-             attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(
-                 y_mask, -1)
-             from monotonic_align import maximum_path
-             attn = maximum_path(neg_cent,
-                                 attn_mask.squeeze(1)).unsqueeze(1).detach()
1194
-
1195
- w = attn.sum(2)
1196
- if self.use_sdp:
1197
- l_length = self.dp(x, x_mask, w, g=g)
1198
- l_length = l_length / torch.sum(x_mask)
1199
- else:
1200
- logw_ = torch.log(w + 1e-6) * x_mask
1201
- logw = self.dp(x, x_mask, g=g)
1202
- l_length = torch.sum(
1203
- (logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
1204
-
1205
- # expand prior
1206
- m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p)
1207
- logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p)
1208
-
1209
- # z_slice, ids_slice = commons.rand_slice_segments(z_dec, y_lengths, self.segment_size)
1210
- # o = self.dec(z_slice, g=g)
1211
- z_slice, ids_slice = commons.rand_slice_segments_for_cat(
1212
- z_dec_, torch.cat([y_lengths, y_lengths], dim=0),
1213
- self.segment_size)
1214
- o_ = self.dec.hier_forward(z_slice, g=torch.cat([g, g], dim=0))
1215
- o = [torch.chunk(o_hier, 2, dim=0)[0] for o_hier in o_]
1216
-
1217
- o_pad = F.pad(o_[-1], (768, 768 + (-o_[-1].shape[-1]) % 256 + 256 *
1218
- (o_[-1].shape[-1] % 256 == 0)),
1219
- mode='constant').squeeze(1)
1220
- yin_hat = self.pitch.yingram(o_pad)
1221
- yin_hat_crop = self.crop_scope([yin_hat])[0]
1222
- yin_hat_shifted = self.crop_scope_tensor(
1223
- torch.chunk(yin_hat, 2, dim=0)[0], scope_shift)
1224
- return o, l_length, attn, ids_slice, x_mask, y_mask, o_, \
1225
- (z, z_p, m_p, logs_p, m_q, logs_q), \
1226
- (z_dec_), \
1227
- (z_spec, m_spec, logs_spec, spec_mask, z_yin, m_yin, logs_yin, yin_mask), \
1228
- (yin_gt_crop, yin_gt_shifted_crop, yin_dec_crop, yin_hat_crop, scope_shift, yin_hat_shifted)
1229
-
1230
- def infer(self,
1231
- x,
1232
- t,
1233
- x_lengths,
1234
- sid=None,
1235
- noise_scale=1,
1236
- length_scale=1,
1237
- noise_scale_w=1.,
1238
- max_len=None,
1239
- scope_shift=0): # need to fix #vector scope shift needed
1240
- x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths)
1241
- if self.n_speakers > 0:
1242
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
1243
- else:
1244
- g = None
1245
-
1246
- if self.use_sdp:
1247
- logw = self.dp(x,
1248
- x_mask,
1249
- g=g,
1250
- reverse=True,
1251
- noise_scale=noise_scale_w)
1252
- else:
1253
- logw = self.dp(x, x_mask, g=g)
1254
- w = torch.exp(logw) * x_mask * length_scale
1255
- w_ceil = torch.ceil(w)
1256
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
1257
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None),
1258
- 1).to(x_mask.dtype)
1259
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
1260
- attn = commons.generate_path(w_ceil, attn_mask)
1261
-
1262
- m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p)
1263
- logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p)
1264
-
1265
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
1266
- z = self.flow(z_p, y_mask, g=g, reverse=True)
1267
- z_spec, z_yin = torch.split(z,
1268
- self.inter_channels - self.yin_channels,
1269
- dim=1)
1270
- z_yin_crop = self.crop_scope([z_yin], scope_shift)[0]
1271
- z_crop = torch.cat([z_spec, z_yin_crop], dim=1)
1272
- o = self.dec((z_crop * y_mask)[:, :, :max_len], g=g)
1273
- return o, attn, y_mask, (z_crop, z, z_p, m_p, logs_p)
1274
-
1275
- def infer_pre_decoder(self,
1276
- x,
1277
- t,
1278
- x_lengths,
1279
- sid=None,
1280
- noise_scale=1.,
1281
- length_scale=1.,
1282
- noise_scale_w=1.,
1283
- max_len=None,
1284
- scope_shift=0):
1285
- x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths)
1286
- if self.n_speakers > 0:
1287
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
1288
- else:
1289
- g = None
1290
-
1291
- if self.use_sdp:
1292
- logw = self.dp(x,
1293
- x_mask,
1294
- g=g,
1295
- reverse=True,
1296
- noise_scale=noise_scale_w)
1297
- else:
1298
- logw = self.dp(x, x_mask, g=g)
1299
- w = torch.exp(logw) * x_mask * length_scale
1300
- w_ceil = torch.ceil(w)
1301
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
1302
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None),
1303
- 1).to(x_mask.dtype)
1304
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
1305
- attn = commons.generate_path(w_ceil, attn_mask)
1306
-
1307
- m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p)
1308
- logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p)
1309
-
1310
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
1311
- z = self.flow(z_p, y_mask, g=g, reverse=True)
1312
- z_spec, z_yin = torch.split(z,
1313
- self.inter_channels - self.yin_channels,
1314
- dim=1)
1315
- z_yin_crop = self.crop_scope([z_yin], scope_shift)[0]
1316
- z_crop = torch.cat([z_spec, z_yin_crop], dim=1)
1317
- decoder_inputs = z_crop * y_mask
1318
- return decoder_inputs, attn, y_mask, (z_crop, z, z_p, m_p, logs_p)
1319
-
1320
- def infer_pre_lr(
1321
- self,
1322
- x,
1323
- t,
1324
- x_lengths,
1325
- sid=None,
1326
- length_scale=1,
1327
- noise_scale_w=1.,
1328
- ):
1329
- x, m_p, logs_p, x_mask = self.enc_p(x, t, x_lengths)
1330
- if self.n_speakers > 0:
1331
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
1332
- else:
1333
- g = None
1334
-
1335
- if self.use_sdp:
1336
- logw = self.dp(x,
1337
- x_mask,
1338
- g=g,
1339
- reverse=True,
1340
- noise_scale=noise_scale_w)
1341
- else:
1342
- logw = self.dp(x, x_mask, g=g)
1343
- w = torch.exp(logw) * x_mask * length_scale
1344
- w_ceil = torch.ceil(w)
1345
- return w_ceil, x, m_p, logs_p, x_mask, g
1346
-
1347
- def infer_lr(self, w_ceil, x, m_p, logs_p, x_mask):
1348
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
1349
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None),
1350
- 1).to(x_mask.dtype)
1351
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
1352
- attn = commons.generate_path(w_ceil, attn_mask)
1353
-
1354
- m_p = torch.einsum('bctn, bdn -> bdt', attn, m_p)
1355
- logs_p = torch.einsum('bctn, bdn -> bdt', attn, logs_p)
1356
- return m_p, logs_p, y_mask
1357
-
1358
- def infer_post_lr_pre_decoder(self,
1359
- m_p,
1360
- logs_p,
1361
- g,
1362
- y_mask,
1363
- noise_scale=1,
1364
- scope_shift=0):
1365
-
1366
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
1367
- z = self.flow(z_p, y_mask, g=g, reverse=True)
1368
- z_spec, z_yin = torch.split(z,
1369
- self.inter_channels - self.yin_channels,
1370
- dim=1)
1371
-
1372
- z_yin_crop = self.crop_scope([z_yin], scope_shift)[0]
1373
- z_crop = torch.cat([z_spec, z_yin_crop], dim=1)
1374
- decoder_inputs = z_crop * y_mask
1375
-
1376
- return decoder_inputs, y_mask, (z_crop, z, z_p, m_p, logs_p)
1377
-
1378
- def infer_decode_chunk(self, decoder_inputs, sid=None):
1379
- if self.n_speakers > 0:
1380
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
1381
- else:
1382
- g = None
1383
- return self.dec(decoder_inputs, g=g)
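
The `infer` path above regenerates durations, runs the flow in reverse, then splits the latent into a spectrogram half and a cropped YIN/pitch half before decoding. A minimal inference sketch follows; it assumes this file is importable as `models`, and every hyperparameter and input shape below is an illustrative assumption, not this repository's actual config:

```python
# Hypothetical usage sketch for the deleted SynthesizerTrn; all values
# here are assumptions for illustration only.
import torch
from models import SynthesizerTrn  # assumes this file is models.py

net_g = SynthesizerTrn(
    n_vocab=178, spec_channels=513, segment_size=32,
    midi_start=-5, midi_end=75, octave_range=24,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock='1', resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2], upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
    yin_channels=80, yin_start=10, yin_scope=50, yin_shift_range=10,
    n_speakers=10, gin_channels=256).eval()

x = torch.randint(1, 178, (1, 50))   # phoneme ids (assumed encoding)
t = torch.zeros_like(x)              # tone/auxiliary ids; exact semantics assumed
x_lengths = torch.LongTensor([50])
with torch.no_grad():
    o, attn, y_mask, _ = net_g.infer(x, t, x_lengths,
                                     sid=torch.LongTensor([0]))
```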
spaces/Andy1621/uniformer_image_detection/configs/atss/atss_r50_fpn_1x_coco.py DELETED
@@ -1,62 +0,0 @@
- _base_ = [
-     '../_base_/datasets/coco_detection.py',
-     '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
- ]
- model = dict(
-     type='ATSS',
-     pretrained='torchvision://resnet50',
-     backbone=dict(
-         type='ResNet',
-         depth=50,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         frozen_stages=1,
-         norm_cfg=dict(type='BN', requires_grad=True),
-         norm_eval=True,
-         style='pytorch'),
-     neck=dict(
-         type='FPN',
-         in_channels=[256, 512, 1024, 2048],
-         out_channels=256,
-         start_level=1,
-         add_extra_convs='on_output',
-         num_outs=5),
-     bbox_head=dict(
-         type='ATSSHead',
-         num_classes=80,
-         in_channels=256,
-         stacked_convs=4,
-         feat_channels=256,
-         anchor_generator=dict(
-             type='AnchorGenerator',
-             ratios=[1.0],
-             octave_base_scale=8,
-             scales_per_octave=1,
-             strides=[8, 16, 32, 64, 128]),
-         bbox_coder=dict(
-             type='DeltaXYWHBBoxCoder',
-             target_means=[.0, .0, .0, .0],
-             target_stds=[0.1, 0.1, 0.2, 0.2]),
-         loss_cls=dict(
-             type='FocalLoss',
-             use_sigmoid=True,
-             gamma=2.0,
-             alpha=0.25,
-             loss_weight=1.0),
-         loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
-         loss_centerness=dict(
-             type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
-     # training and testing settings
-     train_cfg=dict(
-         assigner=dict(type='ATSSAssigner', topk=9),
-         allowed_border=-1,
-         pos_weight=-1,
-         debug=False),
-     test_cfg=dict(
-         nms_pre=1000,
-         min_bbox_size=0,
-         score_thr=0.05,
-         nms=dict(type='nms', iou_threshold=0.6),
-         max_per_img=100))
- # optimizer
- optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
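
This config only declares the model, training, and test settings; a sketch of how such a file is typically consumed through MMDetection's high-level inference API, assuming an MMDetection 2.x environment (the checkpoint path and test image below are placeholders):

```python
# Hypothetical usage of the deleted ATSS config via MMDetection's API;
# the file paths are illustrative placeholders.
from mmdet.apis import init_detector, inference_detector

cfg = 'configs/atss/atss_r50_fpn_1x_coco.py'
ckpt = 'checkpoints/atss_r50_fpn_1x_coco.pth'  # assumed local checkpoint
model = init_detector(cfg, ckpt, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # per-class bbox arrays
```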
spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x512_80k_ade20k.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './dnl_r50-d8_512x512_80k_ade20k.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/html_instruct_style.css DELETED
@@ -1,64 +0,0 @@
- .message {
-     display: grid;
-     grid-template-columns: 60px 1fr;
-     padding-bottom: 25px;
-     font-size: 15px;
-     font-family: 'Noto Sans', Helvetica, Arial, sans-serif;
-     line-height: 22px;
- }
- 
- .username {
-     display: none;
- }
- 
- .message-body p {
-     font-size: 15px !important;
-     line-height: 22px !important;
-     margin-bottom: 1.25em !important;
- }
- 
- .chat .message-body ul, .chat .message-body ol {
-     margin-bottom: 1.25em !important;
- }
- 
- .dark .message-body p em {
-     color: rgb(198, 202, 214) !important;
- }
- 
- .message-body p em {
-     color: rgb(110, 110, 110) !important;
- }
- 
- .gradio-container .chat .assistant-message {
-     padding: 15px;
-     border-radius: 20px;
-     background-color: #0000000f;
-     margin-top: 9px !important;
-     margin-bottom: 18px !important;
- }
- 
- .gradio-container .chat .user-message {
-     padding: 15px;
-     border-radius: 20px;
-     margin-bottom: 9px !important;
- }
- 
- .gradio-container .chat .assistant-message:last-child, .gradio-container .chat .user-message:last-child {
-     margin-bottom: 0px !important;
- }
- 
- .dark .chat .assistant-message {
-     background-color: #1f2937;
- }
- 
- .dark .chat .user-message {
-     background-color: transparent;
- }
- 
- code {
-     background-color: white !important;
- }
- 
- .dark code {
-     background-color: #0e1321 !important;
- }
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/__init__.py DELETED
File without changes
spaces/Arnx/MusicGenXvAKN/audiocraft/data/audio_utils.py DELETED
@@ -1,174 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- 
- import sys
- import typing as tp
- 
- import julius
- import torch
- import torchaudio
- 
- 
- def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
-     """Convert audio to the given number of channels.
- 
-     Args:
-         wav (torch.Tensor): Audio wave of shape [B, C, T].
-         channels (int): Expected number of channels as output.
-     Returns:
-         torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
-     """
-     *shape, src_channels, length = wav.shape
-     if src_channels == channels:
-         pass
-     elif channels == 1:
-         # Case 1:
-         # The caller asked for 1-channel audio, but the stream has multiple
-         # channels: downmix all channels.
-         wav = wav.mean(dim=-2, keepdim=True)
-     elif src_channels == 1:
-         # Case 2:
-         # The caller asked for multiple channels, but the input file has
-         # a single channel: replicate the audio over all channels.
-         wav = wav.expand(*shape, channels, length)
-     elif src_channels >= channels:
-         # Case 3:
-         # The caller asked for multiple channels, and the input file has
-         # more channels than requested. In that case return the first channels.
-         wav = wav[..., :channels, :]
-     else:
-         # Case 4: What is a reasonable choice here?
-         raise ValueError('The audio file has fewer channels than requested but is not mono.')
-     return wav
- 
- 
- def convert_audio(wav: torch.Tensor, from_rate: float,
-                   to_rate: float, to_channels: int) -> torch.Tensor:
-     """Convert audio to a new sample rate and number of audio channels.
-     """
-     wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
-     wav = convert_audio_channels(wav, to_channels)
-     return wav
- 
- 
- def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
-                        loudness_compressor: bool = False, energy_floor: float = 2e-3):
-     """Normalize an input signal to a target loudness in dB LKFS.
-     Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
- 
-     Args:
-         wav (torch.Tensor): Input multichannel audio data.
-         sample_rate (int): Sample rate.
-         loudness_headroom_db (float): Target loudness of the output in dB LUFS.
-         loudness_compressor (bool): Uses tanh for soft clipping.
-         energy_floor (float): anything below that RMS level will not be rescaled.
-     Returns:
-         output (torch.Tensor): Loudness normalized output data.
-     """
-     energy = wav.pow(2).mean().sqrt().item()
-     if energy < energy_floor:
-         return wav
-     transform = torchaudio.transforms.Loudness(sample_rate)
-     input_loudness_db = transform(wav).item()
-     # calculate the gain needed to scale to the desired loudness level
-     delta_loudness = -loudness_headroom_db - input_loudness_db
-     gain = 10.0 ** (delta_loudness / 20.0)
-     output = gain * wav
-     if loudness_compressor:
-         output = torch.tanh(output)
-     assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
-     return output
- 
- 
- def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
-     """Utility function to clip the audio, with logging if specified."""
-     max_scale = wav.abs().max()
-     if log_clipping and max_scale > 1:
-         clamp_prob = (wav.abs() > 1).float().mean().item()
-         print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
-               clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
-     wav.clamp_(-1, 1)
- 
- 
- def normalize_audio(wav: torch.Tensor, normalize: bool = True,
-                     strategy: str = 'peak', peak_clip_headroom_db: float = 1,
-                     rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
-                     loudness_compressor: bool = False, log_clipping: bool = False,
-                     sample_rate: tp.Optional[int] = None,
-                     stem_name: tp.Optional[str] = None) -> torch.Tensor:
-     """Normalize the audio according to the prescribed strategy (see below).
- 
-     Args:
-         wav (torch.Tensor): Audio data.
-         normalize (bool): if `True` (default), normalizes according to the prescribed
-             strategy (see below). If `False`, the strategy is only used in case clipping
-             would happen.
-         strategy (str): Can be either 'clip', 'peak', 'rms', or 'loudness'. Default is 'peak',
-             i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
-             with extra headroom to avoid clipping. 'clip' just clips.
-         peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
-         rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
-             than the `peak_clip` one to avoid further clipping.
-         loudness_headroom_db (float): Target loudness for loudness normalization.
-         loudness_compressor (bool): If True, uses tanh based soft clipping.
-         log_clipping (bool): If True, basic logging on stderr when clipping still
-             occurs despite strategy (only for 'rms').
-         sample_rate (int): Sample rate for the audio data (required for loudness).
-         stem_name (Optional[str]): Stem name for clipping logging.
-     Returns:
-         torch.Tensor: Normalized audio.
-     """
-     scale_peak = 10 ** (-peak_clip_headroom_db / 20)
-     scale_rms = 10 ** (-rms_headroom_db / 20)
-     if strategy == 'peak':
-         rescaling = (scale_peak / wav.abs().max())
-         if normalize or rescaling < 1:
-             wav = wav * rescaling
-     elif strategy == 'clip':
-         wav = wav.clamp(-scale_peak, scale_peak)
-     elif strategy == 'rms':
-         mono = wav.mean(dim=0)
-         rescaling = scale_rms / mono.pow(2).mean().sqrt()
-         if normalize or rescaling < 1:
-             wav = wav * rescaling
-         _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
-     elif strategy == 'loudness':
-         assert sample_rate is not None, "Loudness normalization requires sample rate."
-         wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
-         _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
-     else:
-         assert wav.abs().max() < 1
-         assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
-     return wav
- 
- 
- def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
-     """Convert audio to float 32-bit PCM format.
-     """
-     if wav.dtype.is_floating_point:
-         return wav
-     else:
-         assert wav.dtype == torch.int16
-         return wav.float() / 2**15
- 
- 
- def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
-     """Convert audio to int 16-bit PCM format.
- 
-     .. Warning:: There exist many formulas for doing this conversion. None are perfect
-     due to the asymmetry of the int16 range. One either has possible clipping, DC offset,
-     or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
-     it is possible that `i16_pcm(f32_pcm(wav)) != Identity`.
-     """
-     if wav.dtype.is_floating_point:
-         assert wav.abs().max() <= 1
-         candidate = (wav * 2 ** 15).round()
-         if candidate.max() >= 2 ** 15:  # clipping would occur
-             candidate = (wav * (2 ** 15 - 1)).round()
-         return candidate.short()
-     else:
-         assert wav.dtype == torch.int16
-         return wav
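
A short sketch of how these helpers compose into a typical preprocessing pipeline: resample to mono, then loudness-normalize. The file names and the 32 kHz target rate are illustrative assumptions:

```python
# Hypothetical pipeline using the deleted helpers; input/output paths
# and the target sample rate are placeholders.
import torchaudio
from audiocraft.data.audio_utils import convert_audio, normalize_audio

wav, sr = torchaudio.load('input.wav')                         # [C, T]
wav = convert_audio(wav, from_rate=sr, to_rate=32000, to_channels=1)
wav = normalize_audio(wav, strategy='loudness', sample_rate=32000)
torchaudio.save('output.wav', wav, 32000)
```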
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/utils.py DELETED
@@ -1,136 +0,0 @@
- # This file is dual licensed under the terms of the Apache License, Version
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
- # for complete details.
- 
- import re
- from typing import FrozenSet, NewType, Tuple, Union, cast
- 
- from .tags import Tag, parse_tag
- from .version import InvalidVersion, Version
- 
- BuildTag = Union[Tuple[()], Tuple[int, str]]
- NormalizedName = NewType("NormalizedName", str)
- 
- 
- class InvalidWheelFilename(ValueError):
-     """
-     An invalid wheel filename was found, users should refer to PEP 427.
-     """
- 
- 
- class InvalidSdistFilename(ValueError):
-     """
-     An invalid sdist filename was found, users should refer to the packaging user guide.
-     """
- 
- 
- _canonicalize_regex = re.compile(r"[-_.]+")
- # PEP 427: The build number must start with a digit.
- _build_tag_regex = re.compile(r"(\d+)(.*)")
- 
- 
- def canonicalize_name(name: str) -> NormalizedName:
-     # This is taken from PEP 503.
-     value = _canonicalize_regex.sub("-", name).lower()
-     return cast(NormalizedName, value)
- 
- 
- def canonicalize_version(version: Union[Version, str]) -> str:
-     """
-     This is very similar to Version.__str__, but has one subtle difference
-     with the way it handles the release segment.
-     """
-     if isinstance(version, str):
-         try:
-             parsed = Version(version)
-         except InvalidVersion:
-             # Legacy versions cannot be normalized
-             return version
-     else:
-         parsed = version
- 
-     parts = []
- 
-     # Epoch
-     if parsed.epoch != 0:
-         parts.append(f"{parsed.epoch}!")
- 
-     # Release segment
-     # NB: This strips trailing '.0's to normalize
-     parts.append(re.sub(r"(\.0)+$", "", ".".join(str(x) for x in parsed.release)))
- 
-     # Pre-release
-     if parsed.pre is not None:
-         parts.append("".join(str(x) for x in parsed.pre))
- 
-     # Post-release
-     if parsed.post is not None:
-         parts.append(f".post{parsed.post}")
- 
-     # Development release
-     if parsed.dev is not None:
-         parts.append(f".dev{parsed.dev}")
- 
-     # Local version segment
-     if parsed.local is not None:
-         parts.append(f"+{parsed.local}")
- 
-     return "".join(parts)
- 
- 
- def parse_wheel_filename(
-     filename: str,
- ) -> Tuple[NormalizedName, Version, BuildTag, FrozenSet[Tag]]:
-     if not filename.endswith(".whl"):
-         raise InvalidWheelFilename(
-             f"Invalid wheel filename (extension must be '.whl'): {filename}"
-         )
- 
-     filename = filename[:-4]
-     dashes = filename.count("-")
-     if dashes not in (4, 5):
-         raise InvalidWheelFilename(
-             f"Invalid wheel filename (wrong number of parts): {filename}"
-         )
- 
-     parts = filename.split("-", dashes - 2)
-     name_part = parts[0]
-     # See PEP 427 for the rules on escaping the project name
-     if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None:
-         raise InvalidWheelFilename(f"Invalid project name: {filename}")
-     name = canonicalize_name(name_part)
-     version = Version(parts[1])
-     if dashes == 5:
-         build_part = parts[2]
-         build_match = _build_tag_regex.match(build_part)
-         if build_match is None:
-             raise InvalidWheelFilename(
-                 f"Invalid build number: {build_part} in '{filename}'"
-             )
-         build = cast(BuildTag, (int(build_match.group(1)), build_match.group(2)))
-     else:
-         build = ()
-     tags = parse_tag(parts[-1])
-     return (name, version, build, tags)
- 
- 
- def parse_sdist_filename(filename: str) -> Tuple[NormalizedName, Version]:
-     if filename.endswith(".tar.gz"):
-         file_stem = filename[: -len(".tar.gz")]
-     elif filename.endswith(".zip"):
-         file_stem = filename[: -len(".zip")]
-     else:
-         raise InvalidSdistFilename(
-             f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):"
-             f" {filename}"
-         )
- 
-     # We are requiring a PEP 440 version, which cannot contain dashes,
-     # so we split on the last dash.
-     name_part, sep, version_part = file_stem.rpartition("-")
-     if not sep:
-         raise InvalidSdistFilename(f"Invalid sdist filename: {filename}")
- 
-     name = canonicalize_name(name_part)
-     version = Version(version_part)
-     return (name, version)
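
These helpers behave as follows on a representative input (the wheel filename itself is a made-up example):

```python
from pip._vendor.packaging.utils import (
    canonicalize_name, canonicalize_version, parse_wheel_filename)

print(canonicalize_name('Foo.Bar_baz'))    # 'foo-bar-baz' (PEP 503 form)
print(canonicalize_version('1.19.0'))      # '1.19' (trailing .0 stripped)
name, version, build, tags = parse_wheel_filename(
    'pip-23.0.1-py3-none-any.whl')         # illustrative wheel name
print(name, version, build)                # pip 23.0.1 ()
```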
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/readers.py DELETED
@@ -1,122 +0,0 @@
- import collections
- import pathlib
- import operator
- 
- from . import abc
- 
- from ._itertools import unique_everseen
- from ._compat import ZipPath
- 
- 
- def remove_duplicates(items):
-     return iter(collections.OrderedDict.fromkeys(items))
- 
- 
- class FileReader(abc.TraversableResources):
-     def __init__(self, loader):
-         self.path = pathlib.Path(loader.path).parent
- 
-     def resource_path(self, resource):
-         """
-         Return the file system path to prevent
-         `resources.path()` from creating a temporary
-         copy.
-         """
-         return str(self.path.joinpath(resource))
- 
-     def files(self):
-         return self.path
- 
- 
- class ZipReader(abc.TraversableResources):
-     def __init__(self, loader, module):
-         _, _, name = module.rpartition('.')
-         self.prefix = loader.prefix.replace('\\', '/') + name + '/'
-         self.archive = loader.archive
- 
-     def open_resource(self, resource):
-         try:
-             return super().open_resource(resource)
-         except KeyError as exc:
-             raise FileNotFoundError(exc.args[0])
- 
-     def is_resource(self, path):
-         # workaround for `zipfile.Path.is_file` returning true
-         # for non-existent paths.
-         target = self.files().joinpath(path)
-         return target.is_file() and target.exists()
- 
-     def files(self):
-         return ZipPath(self.archive, self.prefix)
- 
- 
- class MultiplexedPath(abc.Traversable):
-     """
-     Given a series of Traversable objects, implement a merged
-     version of the interface across all objects. Useful for
-     namespace packages which may be multihomed at a single
-     name.
-     """
- 
-     def __init__(self, *paths):
-         self._paths = list(map(pathlib.Path, remove_duplicates(paths)))
-         if not self._paths:
-             message = 'MultiplexedPath must contain at least one path'
-             raise FileNotFoundError(message)
-         if not all(path.is_dir() for path in self._paths):
-             raise NotADirectoryError('MultiplexedPath only supports directories')
- 
-     def iterdir(self):
-         files = (file for path in self._paths for file in path.iterdir())
-         return unique_everseen(files, key=operator.attrgetter('name'))
- 
-     def read_bytes(self):
-         raise FileNotFoundError(f'{self} is not a file')
- 
-     def read_text(self, *args, **kwargs):
-         raise FileNotFoundError(f'{self} is not a file')
- 
-     def is_dir(self):
-         return True
- 
-     def is_file(self):
-         return False
- 
-     def joinpath(self, child):
-         # first try to find child in current paths
-         for file in self.iterdir():
-             if file.name == child:
-                 return file
-         # if it does not exist, construct it with the first path
-         return self._paths[0] / child
- 
-     __truediv__ = joinpath
- 
-     def open(self, *args, **kwargs):
-         raise FileNotFoundError(f'{self} is not a file')
- 
-     @property
-     def name(self):
-         return self._paths[0].name
- 
-     def __repr__(self):
-         paths = ', '.join(f"'{path}'" for path in self._paths)
-         return f'MultiplexedPath({paths})'
- 
- 
- class NamespaceReader(abc.TraversableResources):
-     def __init__(self, namespace_path):
-         if 'NamespacePath' not in str(namespace_path):
-             raise ValueError('Invalid path')
-         self.path = MultiplexedPath(*list(namespace_path))
- 
-     def resource_path(self, resource):
-         """
-         Return the file system path to prevent
-         `resources.path()` from creating a temporary
-         copy.
-         """
-         return str(self.path.joinpath(resource))
- 
-     def files(self):
-         return self.path
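
`MultiplexedPath` is the piece that makes namespace packages traversable: it presents several directories as one merged view. A small sketch, assuming `pkg_a/data` and `pkg_b/data` are existing directories and that the upstream `importlib_resources` package is installed (this vendored copy would otherwise be imported from under `setuptools._vendor`):

```python
# Hypothetical: merge two directories into one Traversable view. Both
# paths must exist and be directories, or __init__ raises.
from importlib_resources.readers import MultiplexedPath

merged = MultiplexedPath('pkg_a/data', 'pkg_b/data')
for entry in merged.iterdir():       # union of both dirs, de-duplicated by name
    print(entry.name)
print(merged / 'config.json')        # first match wins, else a path under pkg_a
```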
spaces/AutoBG/Auto-BoardGame/Model_Constants_Template.py DELETED
@@ -1,7 +0,0 @@
- def SEND_KEY():
-     KEY = ""
-     return KEY
- 
- def SEND_MODEL():
-     OAI_MODEL = ""
-     return OAI_MODEL
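
A filled-in copy (presumably saved without the `_Template` suffix, though the expected filename is an assumption) would simply return real values; both strings below are placeholders:

```python
# Hypothetical filled-in version of the template; both values are
# placeholders, not real credentials.
def SEND_KEY():
    KEY = "sk-your-openai-key"   # placeholder, not a real key
    return KEY

def SEND_MODEL():
    OAI_MODEL = "gpt-3.5-turbo"  # assumed model name
    return OAI_MODEL
```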
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/__main__.py DELETED
@@ -1,31 +0,0 @@
- import os
- import sys
- import warnings
- 
- # Remove '' and current working directory from the first entry
- # of sys.path, if present, to avoid using the current directory
- # in pip commands check, freeze, install, list and show,
- # when invoked as python -m pip <command>
- if sys.path[0] in ("", os.getcwd()):
-     sys.path.pop(0)
- 
- # If we are running from a wheel, add the wheel to sys.path
- # This allows the usage python pip-*.whl/pip install pip-*.whl
- if __package__ == "":
-     # __file__ is pip-*.whl/pip/__main__.py
-     # first dirname call strips off '/__main__.py', second strips off '/pip'
-     # Resulting path is the name of the wheel itself
-     # Add that to sys.path so we can import pip
-     path = os.path.dirname(os.path.dirname(__file__))
-     sys.path.insert(0, path)
- 
- if __name__ == "__main__":
-     # Work around the error reported in #9540, pending a proper fix.
-     # Note: It is essential the warning filter is set *before* importing
-     # pip, as the deprecation happens at import time, not runtime.
-     warnings.filterwarnings(
-         "ignore", category=DeprecationWarning, module=".*packaging\\.version"
-     )
-     from pip._internal.cli.main import main as _main
- 
-     sys.exit(_main())
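
The `__main__` guard above is what makes `python -m pip` work. Equivalently, from code:

```python
# Demonstrates the module entry point that __main__.py enables: this is
# the programmatic equivalent of running `python -m pip --version`.
import subprocess
import sys

subprocess.run([sys.executable, '-m', 'pip', '--version'], check=True)
```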
spaces/BulatF/StreamlitSentiment/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: StreamlitSentiment
- emoji: 🔥
- colorFrom: purple
- colorTo: purple
- sdk: streamlit
- sdk_version: 1.21.0
- app_file: app.py
- pinned: false
- license: mit
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/gather.h DELETED
@@ -1,44 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
- 
- #pragma once
- 
- #include <thrust/detail/config.h>
- 
- // the purpose of this header is to #include the gather.h header
- // of the sequential, host, and device systems. It should be #included in any
- // code which uses adl to dispatch gather
- 
- #include <thrust/system/detail/sequential/gather.h>
- 
- // SCons can't see through the #defines below to figure out what this header
- // includes, so we fake it out by specifying all possible files we might end up
- // including inside an #if 0.
- #if 0
- #include <thrust/system/cpp/detail/gather.h>
- #include <thrust/system/cuda/detail/gather.h>
- #include <thrust/system/omp/detail/gather.h>
- #include <thrust/system/tbb/detail/gather.h>
- #endif
- 
- #define __THRUST_HOST_SYSTEM_GATHER_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/gather.h>
- #include __THRUST_HOST_SYSTEM_GATHER_HEADER
- #undef __THRUST_HOST_SYSTEM_GATHER_HEADER
- 
- #define __THRUST_DEVICE_SYSTEM_GATHER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/gather.h>
- #include __THRUST_DEVICE_SYSTEM_GATHER_HEADER
- #undef __THRUST_DEVICE_SYSTEM_GATHER_HEADER
- 
spaces/CVPR/Object-Detection-With-DETR-and-YOLOS/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Object Detection With DETR And YOLOS
- emoji: ⚡
- colorFrom: indigo
- colorTo: indigo
- sdk: gradio
- sdk_version: 3.0.19
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/Text2Human/Text2Human/models/hierarchy_vqgan_model.py DELETED
@@ -1,374 +0,0 @@
- import math
- import sys
- from collections import OrderedDict
- 
- sys.path.append('..')
- import lpips
- import torch
- import torch.nn.functional as F
- from torchvision.utils import save_image
- 
- from models.archs.vqgan_arch import (Decoder, DecoderRes, Discriminator,
-                                      Encoder,
-                                      VectorQuantizerSpatialTextureAware,
-                                      VectorQuantizerTexture)
- from models.losses.vqgan_loss import (DiffAugment, adopt_weight,
-                                       calculate_adaptive_weight, hinge_d_loss)
- 
- 
- class HierarchyVQSpatialTextureAwareModel():
- 
-     def __init__(self, opt):
-         self.opt = opt
-         self.device = torch.device('cuda')
-         self.top_encoder = Encoder(
-             ch=opt['top_ch'],
-             num_res_blocks=opt['top_num_res_blocks'],
-             attn_resolutions=opt['top_attn_resolutions'],
-             ch_mult=opt['top_ch_mult'],
-             in_channels=opt['top_in_channels'],
-             resolution=opt['top_resolution'],
-             z_channels=opt['top_z_channels'],
-             double_z=opt['top_double_z'],
-             dropout=opt['top_dropout']).to(self.device)
-         self.decoder = Decoder(
-             in_channels=opt['top_in_channels'],
-             resolution=opt['top_resolution'],
-             z_channels=opt['top_z_channels'],
-             ch=opt['top_ch'],
-             out_ch=opt['top_out_ch'],
-             num_res_blocks=opt['top_num_res_blocks'],
-             attn_resolutions=opt['top_attn_resolutions'],
-             ch_mult=opt['top_ch_mult'],
-             dropout=opt['top_dropout'],
-             resamp_with_conv=True,
-             give_pre_end=False).to(self.device)
-         self.top_quantize = VectorQuantizerTexture(
-             1024, opt['embed_dim'], beta=0.25).to(self.device)
-         self.top_quant_conv = torch.nn.Conv2d(opt["top_z_channels"],
-                                               opt['embed_dim'],
-                                               1).to(self.device)
-         self.top_post_quant_conv = torch.nn.Conv2d(opt['embed_dim'],
-                                                    opt["top_z_channels"],
-                                                    1).to(self.device)
-         self.load_top_pretrain_models()
- 
-         self.bot_encoder = Encoder(
-             ch=opt['bot_ch'],
-             num_res_blocks=opt['bot_num_res_blocks'],
-             attn_resolutions=opt['bot_attn_resolutions'],
-             ch_mult=opt['bot_ch_mult'],
-             in_channels=opt['bot_in_channels'],
-             resolution=opt['bot_resolution'],
-             z_channels=opt['bot_z_channels'],
-             double_z=opt['bot_double_z'],
-             dropout=opt['bot_dropout']).to(self.device)
-         self.bot_decoder_res = DecoderRes(
-             in_channels=opt['bot_in_channels'],
-             resolution=opt['bot_resolution'],
-             z_channels=opt['bot_z_channels'],
-             ch=opt['bot_ch'],
-             num_res_blocks=opt['bot_num_res_blocks'],
-             ch_mult=opt['bot_ch_mult'],
-             dropout=opt['bot_dropout'],
-             give_pre_end=False).to(self.device)
-         self.bot_quantize = VectorQuantizerSpatialTextureAware(
-             opt['bot_n_embed'],
-             opt['embed_dim'],
-             beta=0.25,
-             spatial_size=opt['codebook_spatial_size']).to(self.device)
-         self.bot_quant_conv = torch.nn.Conv2d(opt["bot_z_channels"],
-                                               opt['embed_dim'],
-                                               1).to(self.device)
-         self.bot_post_quant_conv = torch.nn.Conv2d(opt['embed_dim'],
-                                                    opt["bot_z_channels"],
-                                                    1).to(self.device)
- 
-         self.disc = Discriminator(
-             opt['n_channels'], opt['ndf'],
-             n_layers=opt['disc_layers']).to(self.device)
-         self.perceptual = lpips.LPIPS(net="vgg").to(self.device)
-         self.perceptual_weight = opt['perceptual_weight']
-         self.disc_start_step = opt['disc_start_step']
-         self.disc_weight_max = opt['disc_weight_max']
-         self.diff_aug = opt['diff_aug']
-         self.policy = "color,translation"
- 
-         self.load_discriminator_models()
- 
-         self.disc.train()
- 
-         self.fix_decoder = opt['fix_decoder']
- 
-         self.init_training_settings()
- 
-     def load_top_pretrain_models(self):
-         # load pretrained vqgan for segmentation mask
-         top_vae_checkpoint = torch.load(self.opt['top_vae_path'])
-         self.top_encoder.load_state_dict(
-             top_vae_checkpoint['encoder'], strict=True)
-         self.decoder.load_state_dict(
-             top_vae_checkpoint['decoder'], strict=True)
-         self.top_quantize.load_state_dict(
-             top_vae_checkpoint['quantize'], strict=True)
-         self.top_quant_conv.load_state_dict(
-             top_vae_checkpoint['quant_conv'], strict=True)
-         self.top_post_quant_conv.load_state_dict(
-             top_vae_checkpoint['post_quant_conv'], strict=True)
-         self.top_encoder.eval()
-         self.top_quantize.eval()
-         self.top_quant_conv.eval()
-         self.top_post_quant_conv.eval()
- 
-     def init_training_settings(self):
-         self.log_dict = OrderedDict()
-         self.configure_optimizers()
- 
-     def configure_optimizers(self):
-         optim_params = []
-         for v in self.bot_encoder.parameters():
-             if v.requires_grad:
-                 optim_params.append(v)
-         for v in self.bot_decoder_res.parameters():
-             if v.requires_grad:
-                 optim_params.append(v)
-         for v in self.bot_quantize.parameters():
-             if v.requires_grad:
-                 optim_params.append(v)
-         for v in self.bot_quant_conv.parameters():
-             if v.requires_grad:
-                 optim_params.append(v)
-         for v in self.bot_post_quant_conv.parameters():
-             if v.requires_grad:
-                 optim_params.append(v)
-         if not self.fix_decoder:
-             for name, v in self.decoder.named_parameters():
-                 if v.requires_grad:
-                     if 'up.0' in name:
-                         optim_params.append(v)
-                     if 'up.1' in name:
-                         optim_params.append(v)
-                     if 'up.2' in name:
-                         optim_params.append(v)
-                     if 'up.3' in name:
-                         optim_params.append(v)
- 
-         self.optimizer = torch.optim.Adam(optim_params, lr=self.opt['lr'])
- 
-         self.disc_optimizer = torch.optim.Adam(
-             self.disc.parameters(), lr=self.opt['lr'])
- 
-     def load_discriminator_models(self):
-         # load pretrained vqgan for segmentation mask
-         top_vae_checkpoint = torch.load(self.opt['top_vae_path'])
-         self.disc.load_state_dict(
-             top_vae_checkpoint['discriminator'], strict=True)
- 
-     def save_network(self, save_path):
-         """Save networks.
-         """
- 
-         save_dict = {}
-         save_dict['bot_encoder'] = self.bot_encoder.state_dict()
-         save_dict['bot_decoder_res'] = self.bot_decoder_res.state_dict()
-         save_dict['decoder'] = self.decoder.state_dict()
-         save_dict['bot_quantize'] = self.bot_quantize.state_dict()
-         save_dict['bot_quant_conv'] = self.bot_quant_conv.state_dict()
-         save_dict['bot_post_quant_conv'] = self.bot_post_quant_conv.state_dict()
-         save_dict['discriminator'] = self.disc.state_dict()
-         torch.save(save_dict, save_path)
- 
-     def load_network(self):
-         checkpoint = torch.load(self.opt['pretrained_models'])
-         self.bot_encoder.load_state_dict(
-             checkpoint['bot_encoder'], strict=True)
-         self.bot_decoder_res.load_state_dict(
-             checkpoint['bot_decoder_res'], strict=True)
-         self.decoder.load_state_dict(checkpoint['decoder'], strict=True)
-         self.bot_quantize.load_state_dict(
-             checkpoint['bot_quantize'], strict=True)
-         self.bot_quant_conv.load_state_dict(
-             checkpoint['bot_quant_conv'], strict=True)
-         self.bot_post_quant_conv.load_state_dict(
-             checkpoint['bot_post_quant_conv'], strict=True)
- 
-     def optimize_parameters(self, data, step):
-         self.bot_encoder.train()
-         self.bot_decoder_res.train()
-         if not self.fix_decoder:
-             self.decoder.train()
-         self.bot_quantize.train()
-         self.bot_quant_conv.train()
-         self.bot_post_quant_conv.train()
- 
-         loss, d_loss = self.training_step(data, step)
-         self.optimizer.zero_grad()
-         loss.backward()
-         self.optimizer.step()
- 
-         if step > self.disc_start_step:
-             self.disc_optimizer.zero_grad()
-             d_loss.backward()
-             self.disc_optimizer.step()
- 
-     def top_encode(self, x, mask):
-         h = self.top_encoder(x)
-         h = self.top_quant_conv(h)
-         quant, _, _ = self.top_quantize(h, mask)
-         quant = self.top_post_quant_conv(quant)
-         return quant
- 
-     def bot_encode(self, x, mask):
-         h = self.bot_encoder(x)
-         h = self.bot_quant_conv(h)
-         quant, emb_loss, info = self.bot_quantize(h, mask)
-         quant = self.bot_post_quant_conv(quant)
-         bot_dec_res = self.bot_decoder_res(quant)
-         return bot_dec_res, emb_loss, info
- 
-     def decode(self, quant_top, bot_dec_res):
-         dec = self.decoder(quant_top, bot_h=bot_dec_res)
-         return dec
- 
-     def forward_step(self, input, mask):
-         with torch.no_grad():
-             quant_top = self.top_encode(input, mask)
-         bot_dec_res, diff, _ = self.bot_encode(input, mask)
-         dec = self.decode(quant_top, bot_dec_res)
-         return dec, diff
- 
-     def feed_data(self, data):
-         x = data['image'].float().to(self.device)
-         mask = data['texture_mask'].float().to(self.device)
- 
-         return x, mask
- 
-     def training_step(self, data, step):
-         x, mask = self.feed_data(data)
-         xrec, codebook_loss = self.forward_step(x, mask)
- 
-         # get recon/perceptual loss
-         recon_loss = torch.abs(x.contiguous() - xrec.contiguous())
-         p_loss = self.perceptual(x.contiguous(), xrec.contiguous())
-         nll_loss = recon_loss + self.perceptual_weight * p_loss
-         nll_loss = torch.mean(nll_loss)
- 
-         # augment for input to discriminator
-         if self.diff_aug:
-             xrec = DiffAugment(xrec, policy=self.policy)
- 
-         # update generator
-         logits_fake = self.disc(xrec)
-         g_loss = -torch.mean(logits_fake)
-         last_layer = self.decoder.conv_out.weight
-         d_weight = calculate_adaptive_weight(nll_loss, g_loss, last_layer,
-                                              self.disc_weight_max)
-         d_weight *= adopt_weight(1, step, self.disc_start_step)
-         loss = nll_loss + d_weight * g_loss + codebook_loss
- 
-         self.log_dict["loss"] = loss
-         self.log_dict["l1"] = recon_loss.mean().item()
-         self.log_dict["perceptual"] = p_loss.mean().item()
-         self.log_dict["nll_loss"] = nll_loss.item()
-         self.log_dict["g_loss"] = g_loss.item()
-         self.log_dict["d_weight"] = d_weight
-         self.log_dict["codebook_loss"] = codebook_loss.item()
- 
-         if step > self.disc_start_step:
-             if self.diff_aug:
-                 logits_real = self.disc(
-                     DiffAugment(x.contiguous().detach(), policy=self.policy))
-             else:
-                 logits_real = self.disc(x.contiguous().detach())
-             # detach so that the generator isn't also updated
-             logits_fake = self.disc(xrec.contiguous().detach())
-             d_loss = hinge_d_loss(logits_real, logits_fake)
-             self.log_dict["d_loss"] = d_loss
-         else:
-             d_loss = None
- 
-         return loss, d_loss
- 
-     @torch.no_grad()
-     def inference(self, data_loader, save_dir):
-         self.bot_encoder.eval()
-         self.bot_decoder_res.eval()
-         self.decoder.eval()
-         self.bot_quantize.eval()
-         self.bot_quant_conv.eval()
-         self.bot_post_quant_conv.eval()
- 
-         loss_total = 0
-         num = 0
- 
-         for _, data in enumerate(data_loader):
-             img_name = data['img_name'][0]
-             x, mask = self.feed_data(data)
-             xrec, _ = self.forward_step(x, mask)
- 
-             recon_loss = torch.abs(x.contiguous() - xrec.contiguous())
-             p_loss = self.perceptual(x.contiguous(), xrec.contiguous())
-             nll_loss = recon_loss + self.perceptual_weight * p_loss
-             nll_loss = torch.mean(nll_loss)
-             loss_total += nll_loss
- 
-             num += x.size(0)
- 
-             if x.shape[1] > 3:
-                 # colorize with random projection
-                 assert xrec.shape[1] > 3
-                 # convert logits to indices
-                 xrec = torch.argmax(xrec, dim=1, keepdim=True)
-                 xrec = F.one_hot(xrec, num_classes=x.shape[1])
-                 xrec = xrec.squeeze(1).permute(0, 3, 1, 2).float()
-                 x = self.to_rgb(x)
-                 xrec = self.to_rgb(xrec)
- 
-             img_cat = torch.cat([x, xrec], dim=3).detach()
-             img_cat = ((img_cat + 1) / 2)
-             img_cat = img_cat.clamp_(0, 1)
-             save_image(
-                 img_cat, f'{save_dir}/{img_name}.png', nrow=1, padding=4)
- 
-         return (loss_total / num).item()
- 
-     def get_current_log(self):
-         return self.log_dict
- 
-     def update_learning_rate(self, epoch):
-         """Update learning rate.
- 
-         Args:
-             epoch (int): Current epoch.
-         """
-         lr = self.optimizer.param_groups[0]['lr']
- 
-         if self.opt['lr_decay'] == 'step':
-             lr = self.opt['lr'] * (
-                 self.opt['gamma']**(epoch // self.opt['step']))
-         elif self.opt['lr_decay'] == 'cos':
-             lr = self.opt['lr'] * (
-                 1 + math.cos(math.pi * epoch / self.opt['num_epochs'])) / 2
-         elif self.opt['lr_decay'] == 'linear':
-             lr = self.opt['lr'] * (1 - epoch / self.opt['num_epochs'])
-         elif self.opt['lr_decay'] == 'linear2exp':
-             if epoch < self.opt['turning_point'] + 1:
-                 # learning rate decays to 95%
-                 # at the turning point (1 / 95% = 1.0526)
-                 lr = self.opt['lr'] * (
-                     1 - epoch / int(self.opt['turning_point'] * 1.0526))
-             else:
-                 lr *= self.opt['gamma']
-         elif self.opt['lr_decay'] == 'schedule':
-             if epoch in self.opt['schedule']:
-                 lr *= self.opt['gamma']
-         else:
-             raise ValueError('Unknown lr mode {}'.format(self.opt['lr_decay']))
-         # set learning rate
-         for param_group in self.optimizer.param_groups:
-             param_group['lr'] = lr
- 
-         return lr
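
For reference, the `hinge_d_loss` imported above is conventionally the standard hinge GAN discriminator loss. A sketch of that formulation follows, assuming this repository follows the usual taming-transformers definition (this is a reference re-implementation, not code copied from `models/losses/vqgan_loss.py`):

```python
# Standard hinge discriminator loss; a sketch of the conventional
# formulation the imported hinge_d_loss presumably matches.
import torch
import torch.nn.functional as F

def hinge_d_loss(logits_real: torch.Tensor,
                 logits_fake: torch.Tensor) -> torch.Tensor:
    loss_real = torch.mean(F.relu(1.0 - logits_real))  # push real logits up
    loss_fake = torch.mean(F.relu(1.0 + logits_fake))  # push fake logits down
    return 0.5 * (loss_real + loss_fake)

# e.g. with PatchGAN-style logit maps of shape [B, 1, H, W]:
d_loss = hinge_d_loss(torch.randn(4, 1, 30, 30), torch.randn(4, 1, 30, 30))
```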
spaces/CVPR/WALT/mmdet/datasets/lvis.py DELETED
@@ -1,742 +0,0 @@
- import itertools
- import logging
- import os.path as osp
- import tempfile
- from collections import OrderedDict
- 
- import numpy as np
- from mmcv.utils import print_log
- from terminaltables import AsciiTable
- 
- from .builder import DATASETS
- from .coco import CocoDataset
- 
- 
- @DATASETS.register_module()
- class LVISV05Dataset(CocoDataset):
- 
-     CLASSES = (
-         'acorn', 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock',
-         'alcohol', 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet',
-         'antenna', 'apple', 'apple_juice', 'applesauce', 'apricot', 'apron',
-         'aquarium', 'armband', 'armchair', 'armoire', 'armor', 'artichoke',
-         'trash_can', 'ashtray', 'asparagus', 'atomizer', 'avocado', 'award',
-         'awning', 'ax', 'baby_buggy', 'basketball_backboard', 'backpack',
-         'handbag', 'suitcase', 'bagel', 'bagpipe', 'baguet', 'bait', 'ball',
-         'ballet_skirt', 'balloon', 'bamboo', 'banana', 'Band_Aid', 'bandage',
-         'bandanna', 'banjo', 'banner', 'barbell', 'barge', 'barrel',
-         'barrette', 'barrow', 'baseball_base', 'baseball', 'baseball_bat',
-         'baseball_cap', 'baseball_glove', 'basket', 'basketball_hoop',
-         'basketball', 'bass_horn', 'bat_(animal)', 'bath_mat', 'bath_towel',
-         'bathrobe', 'bathtub', 'batter_(food)', 'battery', 'beachball', 'bead',
-         'beaker', 'bean_curd', 'beanbag', 'beanie', 'bear', 'bed',
-         'bedspread', 'cow', 'beef_(food)', 'beeper', 'beer_bottle', 'beer_can',
-         'beetle', 'bell', 'bell_pepper', 'belt', 'belt_buckle', 'bench',
-         'beret', 'bib', 'Bible', 'bicycle', 'visor', 'binder', 'binoculars',
-         'bird', 'birdfeeder', 'birdbath', 'birdcage', 'birdhouse',
-         'birthday_cake', 'birthday_card', 'biscuit_(bread)', 'pirate_flag',
-         'black_sheep', 'blackboard', 'blanket', 'blazer', 'blender', 'blimp',
-         'blinker', 'blueberry', 'boar', 'gameboard', 'boat', 'bobbin',
-         'bobby_pin', 'boiled_egg', 'bolo_tie', 'deadbolt', 'bolt', 'bonnet',
-         'book', 'book_bag', 'bookcase', 'booklet', 'bookmark',
-         'boom_microphone', 'boot', 'bottle', 'bottle_opener', 'bouquet',
-         'bow_(weapon)', 'bow_(decorative_ribbons)', 'bow-tie', 'bowl',
-         'pipe_bowl', 'bowler_hat', 'bowling_ball', 'bowling_pin',
-         'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
-         'bread-bin', 'breechcloth', 'bridal_gown', 'briefcase',
-         'bristle_brush', 'broccoli', 'broach', 'broom', 'brownie',
-         'brussels_sprouts', 'bubble_gum', 'bucket', 'horse_buggy', 'bull',
-         'bulldog', 'bulldozer', 'bullet_train', 'bulletin_board',
-         'bulletproof_vest', 'bullhorn', 'corned_beef', 'bun', 'bunk_bed',
-         'buoy', 'burrito', 'bus_(vehicle)', 'business_card', 'butcher_knife',
-         'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
-         'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
-         'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
-         'can', 'can_opener', 'candelabrum', 'candle', 'candle_holder',
-         'candy_bar', 'candy_cane', 'walking_cane', 'canister', 'cannon',
-         'canoe', 'cantaloup', 'canteen', 'cap_(headwear)', 'bottle_cap',
-         'cape', 'cappuccino', 'car_(automobile)', 'railcar_(part_of_a_train)',
-         'elevator_car', 'car_battery', 'identity_card', 'card', 'cardigan',
-         'cargo_ship', 'carnation', 'horse_carriage', 'carrot', 'tote_bag',
-         'cart', 'carton', 'cash_register', 'casserole', 'cassette', 'cast',
-         'cat', 'cauliflower', 'caviar', 'cayenne_(spice)', 'CD_player',
-         'celery', 'cellular_telephone', 'chain_mail', 'chair', 'chaise_longue',
-         'champagne', 'chandelier', 'chap', 'checkbook', 'checkerboard',
-         'cherry', 'chessboard', 'chest_of_drawers_(furniture)',
-         'chicken_(animal)', 'chicken_wire', 'chickpea', 'Chihuahua',
-         'chili_(vegetable)', 'chime', 'chinaware', 'crisp_(potato_chip)',
-         'poker_chip', 'chocolate_bar', 'chocolate_cake', 'chocolate_milk',
-         'chocolate_mousse', 'choker', 'chopping_board', 'chopstick',
-         'Christmas_tree', 'slide', 'cider', 'cigar_box', 'cigarette',
-         'cigarette_case', 'cistern', 'clarinet', 'clasp', 'cleansing_agent',
-         'clementine', 'clip', 'clipboard', 'clock', 'clock_tower',
-         'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', 'coat',
-         'coat_hanger', 'coatrack', 'cock', 'coconut', 'coffee_filter',
-         'coffee_maker', 'coffee_table', 'coffeepot', 'coil', 'coin',
-         'colander', 'coleslaw', 'coloring_material', 'combination_lock',
-         'pacifier', 'comic_book', 'computer_keyboard', 'concrete_mixer',
-         'cone', 'control', 'convertible_(automobile)', 'sofa_bed', 'cookie',
-         'cookie_jar', 'cooking_utensil', 'cooler_(for_food)',
-         'cork_(bottle_plug)', 'corkboard', 'corkscrew', 'edible_corn',
-         'cornbread', 'cornet', 'cornice', 'cornmeal', 'corset',
-         'romaine_lettuce', 'costume', 'cougar', 'coverall', 'cowbell',
-         'cowboy_hat', 'crab_(animal)', 'cracker', 'crape', 'crate', 'crayon',
-         'cream_pitcher', 'credit_card', 'crescent_roll', 'crib', 'crock_pot',
-         'crossbar', 'crouton', 'crow', 'crown', 'crucifix', 'cruise_ship',
-         'police_cruiser', 'crumb', 'crutch', 'cub_(animal)', 'cube',
-         'cucumber', 'cufflink', 'cup', 'trophy_cup', 'cupcake', 'hair_curler',
-         'curling_iron', 'curtain', 'cushion', 'custard', 'cutting_tool',
-         'cylinder', 'cymbal', 'dachshund', 'dagger', 'dartboard',
-         'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
-         'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
-         'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
-         'dishwasher_detergent', 'diskette', 'dispenser', 'Dixie_cup', 'dog',
-         'dog_collar', 'doll', 'dollar', 'dolphin', 'domestic_ass', 'eye_mask',
-         'doorbell', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
-         'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
-         'dresser', 'drill', 'drinking_fountain', 'drone', 'dropper',
-         'drum_(musical_instrument)', 'drumstick', 'duck', 'duckling',
-         'duct_tape', 'duffel_bag', 'dumbbell', 'dumpster', 'dustpan',
-         'Dutch_oven', 'eagle', 'earphone', 'earplug', 'earring', 'easel',
-         'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
-         'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
-         'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
-         'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
-         'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
-         'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
-         'fireplug', 'fish', 'fish_(food)', 'fishbowl', 'fishing_boat',
-         'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flash',
-         'flashlight', 'fleece', 'flip-flop_(sandal)', 'flipper_(footwear)',
-         'flower_arrangement', 'flute_glass', 'foal', 'folding_chair',
-         'food_processor', 'football_(American)', 'football_helmet',
-         'footstool', 'fork', 'forklift', 'freight_car', 'French_toast',
-         'freshener', 'frisbee', 'frog', 'fruit_juice', 'fruit_salad',
-         'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
-         'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
-         'gasmask', 'gazelle', 'gelatin', 'gemstone', 'giant_panda',
-         'gift_wrap', 'ginger', 'giraffe', 'cincture',
-         'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
-         'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
-         'gorilla', 'gourd', 'surgical_gown', 'grape', 'grasshopper', 'grater',
-         'gravestone', 'gravy_boat', 'green_bean', 'green_onion', 'griddle',
-         'grillroom', 'grinder_(tool)', 'grits', 'grizzly', 'grocery_bag',
-         'guacamole', 'guitar', 'gull', 'gun', 'hair_spray', 'hairbrush',
-         'hairnet', 'hairpin', 'ham', 'hamburger', 'hammer', 'hammock',
-         'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
-         'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
-         'hardback_book', 'harmonium', 'hat', 'hatbox', 'hatch', 'veil',
-         'headband', 'headboard', 'headlight', 'headscarf', 'headset',
-         'headstall_(for_horses)', 'hearing_aid', 'heart', 'heater',
130
- 'helicopter', 'helmet', 'heron', 'highchair', 'hinge', 'hippopotamus',
131
- 'hockey_stick', 'hog', 'home_plate_(baseball)', 'honey', 'fume_hood',
132
- 'hook', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
133
- 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
134
- 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
135
- 'ice_tea', 'igniter', 'incense', 'inhaler', 'iPod',
136
- 'iron_(for_clothing)', 'ironing_board', 'jacket', 'jam', 'jean',
137
- 'jeep', 'jelly_bean', 'jersey', 'jet_plane', 'jewelry', 'joystick',
138
- 'jumpsuit', 'kayak', 'keg', 'kennel', 'kettle', 'key', 'keycard',
139
- 'kilt', 'kimono', 'kitchen_sink', 'kitchen_table', 'kite', 'kitten',
140
- 'kiwi_fruit', 'knee_pad', 'knife', 'knight_(chess_piece)',
141
- 'knitting_needle', 'knob', 'knocker_(on_a_door)', 'koala', 'lab_coat',
142
- 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', 'lamb-chop', 'lamp',
143
- 'lamppost', 'lampshade', 'lantern', 'lanyard', 'laptop_computer',
144
- 'lasagna', 'latch', 'lawn_mower', 'leather', 'legging_(clothing)',
145
- 'Lego', 'lemon', 'lemonade', 'lettuce', 'license_plate', 'life_buoy',
146
- 'life_jacket', 'lightbulb', 'lightning_rod', 'lime', 'limousine',
147
- 'linen_paper', 'lion', 'lip_balm', 'lipstick', 'liquor', 'lizard',
148
- 'Loafer_(type_of_shoe)', 'log', 'lollipop', 'lotion',
149
- 'speaker_(stero_equipment)', 'loveseat', 'machine_gun', 'magazine',
150
- 'magnet', 'mail_slot', 'mailbox_(at_home)', 'mallet', 'mammoth',
151
- 'mandarin_orange', 'manger', 'manhole', 'map', 'marker', 'martini',
152
- 'mascot', 'mashed_potato', 'masher', 'mask', 'mast',
153
- 'mat_(gym_equipment)', 'matchbox', 'mattress', 'measuring_cup',
154
- 'measuring_stick', 'meatball', 'medicine', 'melon', 'microphone',
155
- 'microscope', 'microwave_oven', 'milestone', 'milk', 'minivan',
156
- 'mint_candy', 'mirror', 'mitten', 'mixer_(kitchen_tool)', 'money',
157
- 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
158
- 'motor_scooter', 'motor_vehicle', 'motorboat', 'motorcycle',
159
- 'mound_(baseball)', 'mouse_(animal_rodent)',
160
- 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
161
- 'music_stool', 'musical_instrument', 'nailfile', 'nameplate', 'napkin',
162
- 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newsstand',
163
- 'nightshirt', 'nosebag_(for_animals)', 'noseband_(for_animals)',
164
- 'notebook', 'notepad', 'nut', 'nutcracker', 'oar', 'octopus_(food)',
165
- 'octopus_(animal)', 'oil_lamp', 'olive_oil', 'omelet', 'onion',
166
- 'orange_(fruit)', 'orange_juice', 'oregano', 'ostrich', 'ottoman',
167
- 'overalls_(clothing)', 'owl', 'packet', 'inkpad', 'pad', 'paddle',
168
- 'padlock', 'paintbox', 'paintbrush', 'painting', 'pajamas', 'palette',
169
- 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', 'pantyhose',
170
- 'papaya', 'paperclip', 'paper_plate', 'paper_towel', 'paperback_book',
171
- 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)',
172
- 'parchment', 'parka', 'parking_meter', 'parrot',
173
- 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
174
- 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
175
- 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'pegboard',
176
- 'pelican', 'pen', 'pencil', 'pencil_box', 'pencil_sharpener',
177
- 'pendulum', 'penguin', 'pennant', 'penny_(coin)', 'pepper',
178
- 'pepper_mill', 'perfume', 'persimmon', 'baby', 'pet', 'petfood',
179
- 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
180
- 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
181
- 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
182
- 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
183
- 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
184
- 'plate', 'platter', 'playing_card', 'playpen', 'pliers',
185
- 'plow_(farm_equipment)', 'pocket_watch', 'pocketknife',
186
- 'poker_(fire_stirring_tool)', 'pole', 'police_van', 'polo_shirt',
187
- 'poncho', 'pony', 'pool_table', 'pop_(soda)', 'portrait',
188
- 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
189
- 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'printer',
190
- 'projectile_(weapon)', 'projector', 'propeller', 'prune', 'pudding',
191
- 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', 'puppet',
192
- 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', 'race_car',
193
- 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', 'raft',
194
- 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
195
- 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
196
- 'recliner', 'record_player', 'red_cabbage', 'reflector',
197
- 'remote_control', 'rhinoceros', 'rib_(food)', 'rifle', 'ring',
198
- 'river_boat', 'road_map', 'robe', 'rocking_chair', 'roller_skate',
199
- 'Rollerblade', 'rolling_pin', 'root_beer',
200
- 'router_(computer_equipment)', 'rubber_band', 'runner_(carpet)',
201
- 'plastic_bag', 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag',
202
- 'safety_pin', 'sail', 'salad', 'salad_plate', 'salami',
203
- 'salmon_(fish)', 'salmon_(food)', 'salsa', 'saltshaker',
204
- 'sandal_(type_of_shoe)', 'sandwich', 'satchel', 'saucepan', 'saucer',
205
- 'sausage', 'sawhorse', 'saxophone', 'scale_(measuring_instrument)',
206
- 'scarecrow', 'scarf', 'school_bus', 'scissors', 'scoreboard',
207
- 'scrambled_eggs', 'scraper', 'scratcher', 'screwdriver',
208
- 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
209
- 'seashell', 'seedling', 'serving_dish', 'sewing_machine', 'shaker',
210
- 'shampoo', 'shark', 'sharpener', 'Sharpie', 'shaver_(electric)',
211
- 'shaving_cream', 'shawl', 'shears', 'sheep', 'shepherd_dog',
212
- 'sherbert', 'shield', 'shirt', 'shoe', 'shopping_bag', 'shopping_cart',
213
- 'short_pants', 'shot_glass', 'shoulder_bag', 'shovel', 'shower_head',
214
- 'shower_curtain', 'shredder_(for_paper)', 'sieve', 'signboard', 'silo',
215
- 'sink', 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka',
216
- 'ski_pole', 'skirt', 'sled', 'sleeping_bag', 'sling_(bandage)',
217
- 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
218
- 'snowmobile', 'soap', 'soccer_ball', 'sock', 'soda_fountain',
219
- 'carbonated_water', 'sofa', 'softball', 'solar_array', 'sombrero',
220
- 'soup', 'soup_bowl', 'soupspoon', 'sour_cream', 'soya_milk',
221
- 'space_shuttle', 'sparkler_(fireworks)', 'spatula', 'spear',
222
- 'spectacles', 'spice_rack', 'spider', 'sponge', 'spoon', 'sportswear',
223
- 'spotlight', 'squirrel', 'stapler_(stapling_machine)', 'starfish',
224
- 'statue_(sculpture)', 'steak_(food)', 'steak_knife',
225
- 'steamer_(kitchen_appliance)', 'steering_wheel', 'stencil',
226
- 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
227
- 'stirrup', 'stockings_(leg_wear)', 'stool', 'stop_sign', 'brake_light',
228
- 'stove', 'strainer', 'strap', 'straw_(for_drinking)', 'strawberry',
229
- 'street_sign', 'streetlight', 'string_cheese', 'stylus', 'subwoofer',
230
- 'sugar_bowl', 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower',
231
- 'sunglasses', 'sunhat', 'sunscreen', 'surfboard', 'sushi', 'mop',
232
- 'sweat_pants', 'sweatband', 'sweater', 'sweatshirt', 'sweet_potato',
233
- 'swimsuit', 'sword', 'syringe', 'Tabasco_sauce', 'table-tennis_table',
234
- 'table', 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag',
235
- 'taillight', 'tambourine', 'army_tank', 'tank_(storage_vessel)',
236
- 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
237
- 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
238
- 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
239
- 'telephone_pole', 'telephoto_lens', 'television_camera',
240
- 'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
241
- 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
242
- 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
243
- 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
244
- 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
245
- 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
246
- 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
247
- 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
248
- 'tray', 'tree_house', 'trench_coat', 'triangle_(musical_instrument)',
249
- 'tricycle', 'tripod', 'trousers', 'truck', 'truffle_(chocolate)',
250
- 'trunk', 'vat', 'turban', 'turkey_(bird)', 'turkey_(food)', 'turnip',
251
- 'turtle', 'turtleneck_(clothing)', 'typewriter', 'umbrella',
252
- 'underwear', 'unicycle', 'urinal', 'urn', 'vacuum_cleaner', 'valve',
253
- 'vase', 'vending_machine', 'vent', 'videotape', 'vinegar', 'violin',
254
- 'vodka', 'volleyball', 'vulture', 'waffle', 'waffle_iron', 'wagon',
255
- 'wagon_wheel', 'walking_stick', 'wall_clock', 'wall_socket', 'wallet',
256
- 'walrus', 'wardrobe', 'wasabi', 'automatic_washer', 'watch',
257
- 'water_bottle', 'water_cooler', 'water_faucet', 'water_filter',
258
- 'water_heater', 'water_jug', 'water_gun', 'water_scooter', 'water_ski',
259
- 'water_tower', 'watering_can', 'watermelon', 'weathervane', 'webcam',
260
- 'wedding_cake', 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair',
261
- 'whipped_cream', 'whiskey', 'whistle', 'wick', 'wig', 'wind_chime',
262
- 'windmill', 'window_box_(for_plants)', 'windshield_wiper', 'windsock',
263
- 'wine_bottle', 'wine_bucket', 'wineglass', 'wing_chair',
264
- 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', 'wreath',
265
- 'wrench', 'wristband', 'wristlet', 'yacht', 'yak', 'yogurt',
266
- 'yoke_(animal_equipment)', 'zebra', 'zucchini')
-
-    def load_annotations(self, ann_file):
-        """Load annotation from lvis style annotation file.
-
-        Args:
-            ann_file (str): Path of annotation file.
-
-        Returns:
-            list[dict]: Annotation info from LVIS api.
-        """
-
-        try:
-            import lvis
-            assert lvis.__version__ >= '10.5.3'
-            from lvis import LVIS
-        except AssertionError:
-            raise AssertionError('Incompatible version of lvis is installed. '
-                                 'Run pip uninstall lvis first. Then run pip '
-                                 'install mmlvis to install open-mmlab forked '
-                                 'lvis. ')
-        except ImportError:
-            raise ImportError('Package lvis is not installed. Please run pip '
-                              'install mmlvis to install open-mmlab forked '
-                              'lvis.')
-        self.coco = LVIS(ann_file)
-        self.cat_ids = self.coco.get_cat_ids()
-        self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
-        self.img_ids = self.coco.get_img_ids()
-        data_infos = []
-        for i in self.img_ids:
-            info = self.coco.load_imgs([i])[0]
-            if info['file_name'].startswith('COCO'):
-                # Convert from the COCO 2014 file naming convention of
-                # COCO_[train/val/test]2014_000000000000.jpg to the 2017
-                # naming convention of 000000000000.jpg
-                # (LVIS v1 will fix this naming issue)
-                info['filename'] = info['file_name'][-16:]
-            else:
-                info['filename'] = info['file_name']
-            data_infos.append(info)
-        return data_infos
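
As an aside for readers of the conversion above: the `[-16:]` slice works because 2014-style COCO names end in a fixed-width 12-digit image id plus `.jpg`. A minimal, dependency-free check:

```python
# The 2014-style name ends with a 12-digit id + '.jpg' (16 characters),
# which is exactly the 2017-style file name.
name_2014 = 'COCO_train2014_000000391895.jpg'
assert name_2014[-16:] == '000000391895.jpg'
```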
-
-    def evaluate(self,
-                 results,
-                 metric='bbox',
-                 logger=None,
-                 jsonfile_prefix=None,
-                 classwise=False,
-                 proposal_nums=(100, 300, 1000),
-                 iou_thrs=np.arange(0.5, 0.96, 0.05)):
-        """Evaluation in LVIS protocol.
-
-        Args:
-            results (list[list | tuple]): Testing results of the dataset.
-            metric (str | list[str]): Metrics to be evaluated. Options are
-                'bbox', 'segm', 'proposal', 'proposal_fast'.
-            logger (logging.Logger | str | None): Logger used for printing
-                related information during evaluation. Default: None.
-            jsonfile_prefix (str | None): The prefix of json files, including
-                the file path and the prefix of the filename, e.g.,
-                "a/b/prefix". If not specified, a temp file will be created.
-                Default: None.
-            classwise (bool): Whether to evaluate the AP for each class.
-            proposal_nums (Sequence[int]): Proposal number used for evaluating
-                recalls, such as recall@100, recall@1000.
-                Default: (100, 300, 1000).
-            iou_thrs (Sequence[float]): IoU threshold used for evaluating
-                recalls. If set to a list, the average recall of all IoUs will
-                also be computed. Default: np.arange(0.5, 0.96, 0.05).
-
-        Returns:
-            dict[str, float]: LVIS style metrics.
-        """
-
-        try:
-            import lvis
-            assert lvis.__version__ >= '10.5.3'
-            from lvis import LVISResults, LVISEval
-        except AssertionError:
-            raise AssertionError('Incompatible version of lvis is installed. '
-                                 'Run pip uninstall lvis first. Then run pip '
-                                 'install mmlvis to install open-mmlab forked '
-                                 'lvis. ')
-        except ImportError:
-            raise ImportError('Package lvis is not installed. Please run pip '
-                              'install mmlvis to install open-mmlab forked '
-                              'lvis.')
-        assert isinstance(results, list), 'results must be a list'
-        assert len(results) == len(self), (
-            'The length of results is not equal to the dataset len: {} != {}'.
-            format(len(results), len(self)))
-
-        metrics = metric if isinstance(metric, list) else [metric]
-        allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast']
-        for metric in metrics:
-            if metric not in allowed_metrics:
-                raise KeyError('metric {} is not supported'.format(metric))
-
-        if jsonfile_prefix is None:
-            tmp_dir = tempfile.TemporaryDirectory()
-            jsonfile_prefix = osp.join(tmp_dir.name, 'results')
-        else:
-            tmp_dir = None
-        result_files = self.results2json(results, jsonfile_prefix)
-
-        eval_results = OrderedDict()
-        # get original api
-        lvis_gt = self.coco
-        for metric in metrics:
-            msg = 'Evaluating {}...'.format(metric)
-            if logger is None:
-                msg = '\n' + msg
-            print_log(msg, logger=logger)
-
-            if metric == 'proposal_fast':
-                ar = self.fast_eval_recall(
-                    results, proposal_nums, iou_thrs, logger='silent')
-                log_msg = []
-                for i, num in enumerate(proposal_nums):
-                    eval_results['AR@{}'.format(num)] = ar[i]
-                    log_msg.append('\nAR@{}\t{:.4f}'.format(num, ar[i]))
-                log_msg = ''.join(log_msg)
-                print_log(log_msg, logger=logger)
-                continue
-
-            if metric not in result_files:
-                raise KeyError('{} is not in results'.format(metric))
-            try:
-                lvis_dt = LVISResults(lvis_gt, result_files[metric])
-            except IndexError:
-                print_log(
-                    'The testing results of the whole dataset are empty.',
-                    logger=logger,
-                    level=logging.ERROR)
-                break
-
-            iou_type = 'bbox' if metric == 'proposal' else metric
-            lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type)
-            lvis_eval.params.imgIds = self.img_ids
-            if metric == 'proposal':
-                lvis_eval.params.useCats = 0
-                lvis_eval.params.maxDets = list(proposal_nums)
-                lvis_eval.evaluate()
-                lvis_eval.accumulate()
-                lvis_eval.summarize()
-                for k, v in lvis_eval.get_results().items():
-                    if k.startswith('AR'):
-                        val = float('{:.3f}'.format(float(v)))
-                        eval_results[k] = val
-            else:
-                lvis_eval.evaluate()
-                lvis_eval.accumulate()
-                lvis_eval.summarize()
-                lvis_results = lvis_eval.get_results()
-                if classwise:  # Compute per-category AP
-                    # from https://github.com/facebookresearch/detectron2/
-                    precisions = lvis_eval.eval['precision']
-                    # precision: (iou, recall, cls, area range, max dets)
-                    assert len(self.cat_ids) == precisions.shape[2]
-
-                    results_per_category = []
-                    for idx, catId in enumerate(self.cat_ids):
-                        # area range index 0: all area ranges
-                        # max dets index -1: typically 100 per image
-                        nm = self.coco.load_cats(catId)[0]
-                        precision = precisions[:, :, idx, 0, -1]
-                        precision = precision[precision > -1]
-                        if precision.size:
-                            ap = np.mean(precision)
-                        else:
-                            ap = float('nan')
-                        results_per_category.append(
-                            (f'{nm["name"]}', f'{float(ap):0.3f}'))
-
-                    num_columns = min(6, len(results_per_category) * 2)
-                    results_flatten = list(
-                        itertools.chain(*results_per_category))
-                    headers = ['category', 'AP'] * (num_columns // 2)
-                    results_2d = itertools.zip_longest(*[
-                        results_flatten[i::num_columns]
-                        for i in range(num_columns)
-                    ])
-                    table_data = [headers]
-                    table_data += [result for result in results_2d]
-                    table = AsciiTable(table_data)
-                    print_log('\n' + table.table, logger=logger)
-
-                for k, v in lvis_results.items():
-                    if k.startswith('AP'):
-                        key = '{}_{}'.format(metric, k)
-                        val = float('{:.3f}'.format(float(v)))
-                        eval_results[key] = val
-                ap_summary = ' '.join([
-                    '{}:{:.3f}'.format(k, float(v))
-                    for k, v in lvis_results.items() if k.startswith('AP')
-                ])
-                eval_results['{}_mAP_copypaste'.format(metric)] = ap_summary
-            lvis_eval.print_results()
-        if tmp_dir is not None:
-            tmp_dir.cleanup()
-        return eval_results
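
To make the classwise table layout above concrete, here is a self-contained sketch of the same `itertools.zip_longest` trick with made-up category names and AP values (real entries come from `lvis_eval`):

```python
import itertools

from terminaltables import AsciiTable

# Hypothetical (category, AP) pairs; in evaluate() these are computed
# from lvis_eval.eval['precision'].
results_per_category = [('cat', '0.512'), ('dog', '0.433'),
                        ('kite', '0.291'), ('sofa', '0.377')]
num_columns = min(6, len(results_per_category) * 2)
results_flatten = list(itertools.chain(*results_per_category))
headers = ['category', 'AP'] * (num_columns // 2)
# Slice the flat list column-wise; zip_longest pads the last row.
results_2d = itertools.zip_longest(
    *[results_flatten[i::num_columns] for i in range(num_columns)])
table = AsciiTable([headers] + [list(row) for row in results_2d])
print(table.table)
```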
-
-
-LVISDataset = LVISV05Dataset
-DATASETS.register_module(name='LVISDataset', module=LVISDataset)
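
For context, a name registered this way is consumed from config dicts by the `DATASETS` registry; a minimal, hypothetical config fragment (paths are placeholders):

```python
# Hypothetical mmdet config fragment; the registered string 'LVISDataset'
# resolves to LVISV05Dataset through the alias above.
data = dict(
    train=dict(
        type='LVISDataset',
        ann_file='data/lvis_v0.5/annotations/lvis_v0.5_train.json',
        img_prefix='data/lvis_v0.5/train2017/'))
```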
-
-
-@DATASETS.register_module()
-class LVISV1Dataset(LVISDataset):
-
-    CLASSES = (
-        'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', 'alcohol',
-        'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', 'antenna',
-        'apple', 'applesauce', 'apricot', 'apron', 'aquarium',
-        'arctic_(type_of_shoe)', 'armband', 'armchair', 'armoire', 'armor',
-        'artichoke', 'trash_can', 'ashtray', 'asparagus', 'atomizer',
-        'avocado', 'award', 'awning', 'ax', 'baboon', 'baby_buggy',
-        'basketball_backboard', 'backpack', 'handbag', 'suitcase', 'bagel',
-        'bagpipe', 'baguet', 'bait', 'ball', 'ballet_skirt', 'balloon',
-        'bamboo', 'banana', 'Band_Aid', 'bandage', 'bandanna', 'banjo',
-        'banner', 'barbell', 'barge', 'barrel', 'barrette', 'barrow',
-        'baseball_base', 'baseball', 'baseball_bat', 'baseball_cap',
-        'baseball_glove', 'basket', 'basketball', 'bass_horn', 'bat_(animal)',
-        'bath_mat', 'bath_towel', 'bathrobe', 'bathtub', 'batter_(food)',
-        'battery', 'beachball', 'bead', 'bean_curd', 'beanbag', 'beanie',
-        'bear', 'bed', 'bedpan', 'bedspread', 'cow', 'beef_(food)', 'beeper',
-        'beer_bottle', 'beer_can', 'beetle', 'bell', 'bell_pepper', 'belt',
-        'belt_buckle', 'bench', 'beret', 'bib', 'Bible', 'bicycle', 'visor',
-        'billboard', 'binder', 'binoculars', 'bird', 'birdfeeder', 'birdbath',
-        'birdcage', 'birdhouse', 'birthday_cake', 'birthday_card',
-        'pirate_flag', 'black_sheep', 'blackberry', 'blackboard', 'blanket',
-        'blazer', 'blender', 'blimp', 'blinker', 'blouse', 'blueberry',
-        'gameboard', 'boat', 'bob', 'bobbin', 'bobby_pin', 'boiled_egg',
-        'bolo_tie', 'deadbolt', 'bolt', 'bonnet', 'book', 'bookcase',
-        'booklet', 'bookmark', 'boom_microphone', 'boot', 'bottle',
-        'bottle_opener', 'bouquet', 'bow_(weapon)', 'bow_(decorative_ribbons)',
-        'bow-tie', 'bowl', 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'box',
-        'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
-        'bread-bin', 'bread', 'breechcloth', 'bridal_gown', 'briefcase',
-        'broccoli', 'broach', 'broom', 'brownie', 'brussels_sprouts',
-        'bubble_gum', 'bucket', 'horse_buggy', 'bull', 'bulldog', 'bulldozer',
-        'bullet_train', 'bulletin_board', 'bulletproof_vest', 'bullhorn',
-        'bun', 'bunk_bed', 'buoy', 'burrito', 'bus_(vehicle)', 'business_card',
-        'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
-        'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
-        'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
-        'can', 'can_opener', 'candle', 'candle_holder', 'candy_bar',
-        'candy_cane', 'walking_cane', 'canister', 'canoe', 'cantaloup',
-        'canteen', 'cap_(headwear)', 'bottle_cap', 'cape', 'cappuccino',
-        'car_(automobile)', 'railcar_(part_of_a_train)', 'elevator_car',
-        'car_battery', 'identity_card', 'card', 'cardigan', 'cargo_ship',
-        'carnation', 'horse_carriage', 'carrot', 'tote_bag', 'cart', 'carton',
-        'cash_register', 'casserole', 'cassette', 'cast', 'cat', 'cauliflower',
-        'cayenne_(spice)', 'CD_player', 'celery', 'cellular_telephone',
-        'chain_mail', 'chair', 'chaise_longue', 'chalice', 'chandelier',
-        'chap', 'checkbook', 'checkerboard', 'cherry', 'chessboard',
-        'chicken_(animal)', 'chickpea', 'chili_(vegetable)', 'chime',
-        'chinaware', 'crisp_(potato_chip)', 'poker_chip', 'chocolate_bar',
-        'chocolate_cake', 'chocolate_milk', 'chocolate_mousse', 'choker',
-        'chopping_board', 'chopstick', 'Christmas_tree', 'slide', 'cider',
-        'cigar_box', 'cigarette', 'cigarette_case', 'cistern', 'clarinet',
-        'clasp', 'cleansing_agent', 'cleat_(for_securing_rope)', 'clementine',
-        'clip', 'clipboard', 'clippers_(for_plants)', 'cloak', 'clock',
-        'clock_tower', 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster',
-        'coat', 'coat_hanger', 'coatrack', 'cock', 'cockroach',
-        'cocoa_(beverage)', 'coconut', 'coffee_maker', 'coffee_table',
-        'coffeepot', 'coil', 'coin', 'colander', 'coleslaw',
-        'coloring_material', 'combination_lock', 'pacifier', 'comic_book',
-        'compass', 'computer_keyboard', 'condiment', 'cone', 'control',
-        'convertible_(automobile)', 'sofa_bed', 'cooker', 'cookie',
-        'cooking_utensil', 'cooler_(for_food)', 'cork_(bottle_plug)',
-        'corkboard', 'corkscrew', 'edible_corn', 'cornbread', 'cornet',
-        'cornice', 'cornmeal', 'corset', 'costume', 'cougar', 'coverall',
-        'cowbell', 'cowboy_hat', 'crab_(animal)', 'crabmeat', 'cracker',
-        'crape', 'crate', 'crayon', 'cream_pitcher', 'crescent_roll', 'crib',
-        'crock_pot', 'crossbar', 'crouton', 'crow', 'crowbar', 'crown',
-        'crucifix', 'cruise_ship', 'police_cruiser', 'crumb', 'crutch',
-        'cub_(animal)', 'cube', 'cucumber', 'cufflink', 'cup', 'trophy_cup',
-        'cupboard', 'cupcake', 'hair_curler', 'curling_iron', 'curtain',
-        'cushion', 'cylinder', 'cymbal', 'dagger', 'dalmatian', 'dartboard',
-        'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
-        'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
-        'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
-        'dishwasher_detergent', 'dispenser', 'diving_board', 'Dixie_cup',
-        'dog', 'dog_collar', 'doll', 'dollar', 'dollhouse', 'dolphin',
-        'domestic_ass', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
-        'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
-        'dresser', 'drill', 'drone', 'dropper', 'drum_(musical_instrument)',
-        'drumstick', 'duck', 'duckling', 'duct_tape', 'duffel_bag', 'dumbbell',
-        'dumpster', 'dustpan', 'eagle', 'earphone', 'earplug', 'earring',
-        'easel', 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
-        'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
-        'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
-        'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
-        'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
-        'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
-        'fireplug', 'first-aid_kit', 'fish', 'fish_(food)', 'fishbowl',
-        'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flap',
-        'flash', 'flashlight', 'fleece', 'flip-flop_(sandal)',
-        'flipper_(footwear)', 'flower_arrangement', 'flute_glass', 'foal',
-        'folding_chair', 'food_processor', 'football_(American)',
-        'football_helmet', 'footstool', 'fork', 'forklift', 'freight_car',
-        'French_toast', 'freshener', 'frisbee', 'frog', 'fruit_juice',
-        'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
-        'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
-        'gasmask', 'gazelle', 'gelatin', 'gemstone', 'generator',
-        'giant_panda', 'gift_wrap', 'ginger', 'giraffe', 'cincture',
-        'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
-        'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
-        'gorilla', 'gourd', 'grape', 'grater', 'gravestone', 'gravy_boat',
-        'green_bean', 'green_onion', 'griddle', 'grill', 'grits', 'grizzly',
-        'grocery_bag', 'guitar', 'gull', 'gun', 'hairbrush', 'hairnet',
-        'hairpin', 'halter_top', 'ham', 'hamburger', 'hammer', 'hammock',
-        'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
-        'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
-        'hardback_book', 'harmonium', 'hat', 'hatbox', 'veil', 'headband',
-        'headboard', 'headlight', 'headscarf', 'headset',
-        'headstall_(for_horses)', 'heart', 'heater', 'helicopter', 'helmet',
-        'heron', 'highchair', 'hinge', 'hippopotamus', 'hockey_stick', 'hog',
-        'home_plate_(baseball)', 'honey', 'fume_hood', 'hook', 'hookah',
-        'hornet', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
-        'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
-        'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
-        'igniter', 'inhaler', 'iPod', 'iron_(for_clothing)', 'ironing_board',
-        'jacket', 'jam', 'jar', 'jean', 'jeep', 'jelly_bean', 'jersey',
-        'jet_plane', 'jewel', 'jewelry', 'joystick', 'jumpsuit', 'kayak',
-        'keg', 'kennel', 'kettle', 'key', 'keycard', 'kilt', 'kimono',
-        'kitchen_sink', 'kitchen_table', 'kite', 'kitten', 'kiwi_fruit',
-        'knee_pad', 'knife', 'knitting_needle', 'knob', 'knocker_(on_a_door)',
-        'koala', 'lab_coat', 'ladder', 'ladle', 'ladybug', 'lamb_(animal)',
-        'lamb-chop', 'lamp', 'lamppost', 'lampshade', 'lantern', 'lanyard',
-        'laptop_computer', 'lasagna', 'latch', 'lawn_mower', 'leather',
-        'legging_(clothing)', 'Lego', 'legume', 'lemon', 'lemonade', 'lettuce',
-        'license_plate', 'life_buoy', 'life_jacket', 'lightbulb',
-        'lightning_rod', 'lime', 'limousine', 'lion', 'lip_balm', 'liquor',
-        'lizard', 'log', 'lollipop', 'speaker_(stero_equipment)', 'loveseat',
-        'machine_gun', 'magazine', 'magnet', 'mail_slot', 'mailbox_(at_home)',
-        'mallard', 'mallet', 'mammoth', 'manatee', 'mandarin_orange', 'manger',
-        'manhole', 'map', 'marker', 'martini', 'mascot', 'mashed_potato',
-        'masher', 'mask', 'mast', 'mat_(gym_equipment)', 'matchbox',
-        'mattress', 'measuring_cup', 'measuring_stick', 'meatball', 'medicine',
-        'melon', 'microphone', 'microscope', 'microwave_oven', 'milestone',
-        'milk', 'milk_can', 'milkshake', 'minivan', 'mint_candy', 'mirror',
-        'mitten', 'mixer_(kitchen_tool)', 'money',
-        'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
-        'motor_scooter', 'motor_vehicle', 'motorcycle', 'mound_(baseball)',
-        'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
-        'music_stool', 'musical_instrument', 'nailfile', 'napkin',
-        'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newspaper',
-        'newsstand', 'nightshirt', 'nosebag_(for_animals)',
-        'noseband_(for_animals)', 'notebook', 'notepad', 'nut', 'nutcracker',
-        'oar', 'octopus_(food)', 'octopus_(animal)', 'oil_lamp', 'olive_oil',
-        'omelet', 'onion', 'orange_(fruit)', 'orange_juice', 'ostrich',
-        'ottoman', 'oven', 'overalls_(clothing)', 'owl', 'packet', 'inkpad',
-        'pad', 'paddle', 'padlock', 'paintbrush', 'painting', 'pajamas',
-        'palette', 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake',
-        'pantyhose', 'papaya', 'paper_plate', 'paper_towel', 'paperback_book',
-        'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', 'parasol',
-        'parchment', 'parka', 'parking_meter', 'parrot',
-        'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
-        'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
-        'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'wooden_leg',
-        'pegboard', 'pelican', 'pen', 'pencil', 'pencil_box',
-        'pencil_sharpener', 'pendulum', 'penguin', 'pennant', 'penny_(coin)',
-        'pepper', 'pepper_mill', 'perfume', 'persimmon', 'person', 'pet',
-        'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
-        'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
-        'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
-        'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
-        'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
-        'plate', 'platter', 'playpen', 'pliers', 'plow_(farm_equipment)',
-        'plume', 'pocket_watch', 'pocketknife', 'poker_(fire_stirring_tool)',
-        'pole', 'polo_shirt', 'poncho', 'pony', 'pool_table', 'pop_(soda)',
-        'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
-        'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'pretzel',
-        'printer', 'projectile_(weapon)', 'projector', 'propeller', 'prune',
-        'pudding', 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher',
-        'puppet', 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit',
-        'race_car', 'racket', 'radar', 'radiator', 'radio_receiver', 'radish',
-        'raft', 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
-        'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
-        'recliner', 'record_player', 'reflector', 'remote_control',
-        'rhinoceros', 'rib_(food)', 'rifle', 'ring', 'river_boat', 'road_map',
-        'robe', 'rocking_chair', 'rodent', 'roller_skate', 'Rollerblade',
-        'rolling_pin', 'root_beer', 'router_(computer_equipment)',
-        'rubber_band', 'runner_(carpet)', 'plastic_bag',
-        'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', 'safety_pin',
-        'sail', 'salad', 'salad_plate', 'salami', 'salmon_(fish)',
-        'salmon_(food)', 'salsa', 'saltshaker', 'sandal_(type_of_shoe)',
-        'sandwich', 'satchel', 'saucepan', 'saucer', 'sausage', 'sawhorse',
-        'saxophone', 'scale_(measuring_instrument)', 'scarecrow', 'scarf',
-        'school_bus', 'scissors', 'scoreboard', 'scraper', 'screwdriver',
-        'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
-        'seashell', 'sewing_machine', 'shaker', 'shampoo', 'shark',
-        'sharpener', 'Sharpie', 'shaver_(electric)', 'shaving_cream', 'shawl',
-        'shears', 'sheep', 'shepherd_dog', 'sherbert', 'shield', 'shirt',
-        'shoe', 'shopping_bag', 'shopping_cart', 'short_pants', 'shot_glass',
-        'shoulder_bag', 'shovel', 'shower_head', 'shower_cap',
-        'shower_curtain', 'shredder_(for_paper)', 'signboard', 'silo', 'sink',
-        'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', 'ski_pole',
-        'skirt', 'skullcap', 'sled', 'sleeping_bag', 'sling_(bandage)',
-        'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
-        'snowmobile', 'soap', 'soccer_ball', 'sock', 'sofa', 'softball',
-        'solar_array', 'sombrero', 'soup', 'soup_bowl', 'soupspoon',
-        'sour_cream', 'soya_milk', 'space_shuttle', 'sparkler_(fireworks)',
-        'spatula', 'spear', 'spectacles', 'spice_rack', 'spider', 'crawfish',
-        'sponge', 'spoon', 'sportswear', 'spotlight', 'squid_(food)',
-        'squirrel', 'stagecoach', 'stapler_(stapling_machine)', 'starfish',
-        'statue_(sculpture)', 'steak_(food)', 'steak_knife', 'steering_wheel',
-        'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
-        'stirrup', 'stool', 'stop_sign', 'brake_light', 'stove', 'strainer',
-        'strap', 'straw_(for_drinking)', 'strawberry', 'street_sign',
-        'streetlight', 'string_cheese', 'stylus', 'subwoofer', 'sugar_bowl',
-        'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', 'sunglasses',
-        'sunhat', 'surfboard', 'sushi', 'mop', 'sweat_pants', 'sweatband',
-        'sweater', 'sweatshirt', 'sweet_potato', 'swimsuit', 'sword',
-        'syringe', 'Tabasco_sauce', 'table-tennis_table', 'table',
-        'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', 'taillight',
-        'tambourine', 'army_tank', 'tank_(storage_vessel)',
-        'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
-        'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
-        'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
-        'telephone_pole', 'telephoto_lens', 'television_camera',
-        'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
-        'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
-        'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
-        'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
-        'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
-        'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
-        'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
-        'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
-        'tray', 'trench_coat', 'triangle_(musical_instrument)', 'tricycle',
-        'tripod', 'trousers', 'truck', 'truffle_(chocolate)', 'trunk', 'vat',
-        'turban', 'turkey_(food)', 'turnip', 'turtle', 'turtleneck_(clothing)',
-        'typewriter', 'umbrella', 'underwear', 'unicycle', 'urinal', 'urn',
-        'vacuum_cleaner', 'vase', 'vending_machine', 'vent', 'vest',
-        'videotape', 'vinegar', 'violin', 'vodka', 'volleyball', 'vulture',
-        'waffle', 'waffle_iron', 'wagon', 'wagon_wheel', 'walking_stick',
-        'wall_clock', 'wall_socket', 'wallet', 'walrus', 'wardrobe',
-        'washbasin', 'automatic_washer', 'watch', 'water_bottle',
-        'water_cooler', 'water_faucet', 'water_heater', 'water_jug',
-        'water_gun', 'water_scooter', 'water_ski', 'water_tower',
-        'watering_can', 'watermelon', 'weathervane', 'webcam', 'wedding_cake',
-        'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', 'whipped_cream',
-        'whistle', 'wig', 'wind_chime', 'windmill', 'window_box_(for_plants)',
-        'windshield_wiper', 'windsock', 'wine_bottle', 'wine_bucket',
-        'wineglass', 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon',
-        'wreath', 'wrench', 'wristband', 'wristlet', 'yacht', 'yogurt',
-        'yoke_(animal_equipment)', 'zebra', 'zucchini')
-
-    def load_annotations(self, ann_file):
-        try:
-            import lvis
-            assert lvis.__version__ >= '10.5.3'
-            from lvis import LVIS
-        except AssertionError:
-            raise AssertionError('Incompatible version of lvis is installed. '
-                                 'Run pip uninstall lvis first. Then run pip '
-                                 'install mmlvis to install open-mmlab forked '
-                                 'lvis. ')
-        except ImportError:
-            raise ImportError('Package lvis is not installed. Please run pip '
-                              'install mmlvis to install open-mmlab forked '
-                              'lvis.')
-        self.coco = LVIS(ann_file)
-        self.cat_ids = self.coco.get_cat_ids()
-        self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
-        self.img_ids = self.coco.get_img_ids()
-        data_infos = []
-        for i in self.img_ids:
-            info = self.coco.load_imgs([i])[0]
-            # coco_url is used in LVIS v1 instead of file_name
-            # e.g. http://images.cocodataset.org/train2017/000000391895.jpg
-            # the train/val split is specified in the url
-            info['filename'] = info['coco_url'].replace(
-                'http://images.cocodataset.org/', '')
-            data_infos.append(info)
-        return data_infos
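
A quick, dependency-free illustration of the `coco_url` stripping above (the URL is the example from the comment):

```python
coco_url = 'http://images.cocodataset.org/train2017/000000391895.jpg'
filename = coco_url.replace('http://images.cocodataset.org/', '')
# The train/val split prefix survives as part of the relative path.
assert filename == 'train2017/000000391895.jpg'
```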
spaces/CVPR/WALT/mmdet/models/roi_heads/standard_roi_head.py DELETED
@@ -1,306 +0,0 @@
-import torch
-
-from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler
-from ..builder import HEADS, build_head, build_roi_extractor
-from .base_roi_head import BaseRoIHead
-from .test_mixins import BBoxTestMixin, MaskTestMixin
-
-
-@HEADS.register_module()
-class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin):
-    """Simplest base roi head including one bbox head and one mask head."""
-
-    def init_assigner_sampler(self):
-        """Initialize assigner and sampler."""
-        self.bbox_assigner = None
-        self.bbox_sampler = None
-        if self.train_cfg:
-            self.bbox_assigner = build_assigner(self.train_cfg.assigner)
-            self.bbox_sampler = build_sampler(
-                self.train_cfg.sampler, context=self)
-
-    def init_bbox_head(self, bbox_roi_extractor, bbox_head):
-        """Initialize ``bbox_head``"""
-        self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor)
-        self.bbox_head = build_head(bbox_head)
-
-    def init_mask_head(self, mask_roi_extractor, mask_head):
-        """Initialize ``mask_head``"""
-        if mask_roi_extractor is not None:
-            self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor)
-            self.share_roi_extractor = False
-        else:
-            self.share_roi_extractor = True
-            self.mask_roi_extractor = self.bbox_roi_extractor
-        self.mask_head = build_head(mask_head)
-
-    def init_gan_head(self, gan_roi_extractor, gan_head):
-        """Initialize ``gan_head``"""
-        if gan_roi_extractor is not None:
-            self.gan_roi_extractor = build_roi_extractor(gan_roi_extractor)
-            self.share_roi_extractor = False
-        else:
-            self.share_roi_extractor = True
-            self.gan_roi_extractor = self.bbox_roi_extractor
-        self.gan_head = build_head(gan_head)
-
-    def init_weights(self, pretrained):
-        """Initialize the weights in head.
-
-        Args:
-            pretrained (str, optional): Path to pre-trained weights.
-                Defaults to None.
-        """
-        if self.with_shared_head:
-            self.shared_head.init_weights(pretrained=pretrained)
-        if self.with_bbox:
-            self.bbox_roi_extractor.init_weights()
-            self.bbox_head.init_weights()
-        if self.with_mask:
-            self.mask_head.init_weights()
-            if not self.share_roi_extractor:
-                self.mask_roi_extractor.init_weights()
-
-    def forward_dummy(self, x, proposals):
-        """Dummy forward function."""
-        # bbox head
-        outs = ()
-        rois = bbox2roi([proposals])
-        if self.with_bbox:
-            bbox_results = self._bbox_forward(x, rois)
-            outs = outs + (bbox_results['cls_score'],
-                           bbox_results['bbox_pred'])
-        # mask head
-        if self.with_mask:
-            mask_rois = rois[:100]
-            mask_results = self._mask_forward(x, mask_rois)
-            outs = outs + (mask_results['mask_pred'], )
-        return outs
-
-    def forward_train(self,
-                      x,
-                      img_metas,
-                      proposal_list,
-                      gt_bboxes,
-                      gt_labels,
-                      gt_bboxes_ignore=None,
-                      gt_masks=None):
-        """
-        Args:
-            x (list[Tensor]): list of multi-level img features.
-            img_metas (list[dict]): list of image info dict where each dict
-                has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                For details on the values of these keys see
-                `mmdet/datasets/pipelines/formatting.py:Collect`.
-            proposal_list (list[Tensor]): list of region proposals.
-            gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
-                shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): class indices corresponding to each box.
-            gt_bboxes_ignore (None | list[Tensor]): specify which bounding
-                boxes can be ignored when computing the loss.
-            gt_masks (None | Tensor): true segmentation masks for each box
-                used if the architecture supports a segmentation task.
-
-        Returns:
-            dict[str, Tensor]: a dictionary of loss components
-        """
-        # assign gts and sample proposals
-        if self.with_bbox or self.with_mask:
-            num_imgs = len(img_metas)
-            if gt_bboxes_ignore is None:
-                gt_bboxes_ignore = [None for _ in range(num_imgs)]
-            sampling_results = []
-            for i in range(num_imgs):
-                assign_result = self.bbox_assigner.assign(
-                    proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],
-                    gt_labels[i])
-                sampling_result = self.bbox_sampler.sample(
-                    assign_result,
-                    proposal_list[i],
-                    gt_bboxes[i],
-                    gt_labels[i],
-                    feats=[lvl_feat[i][None] for lvl_feat in x])
-                sampling_results.append(sampling_result)
-
-        losses = dict()
-        # bbox head forward and loss
-        if self.with_bbox:
-            bbox_results = self._bbox_forward_train(x, sampling_results,
-                                                    gt_bboxes, gt_labels,
-                                                    img_metas)
-            losses.update(bbox_results['loss_bbox'])
-
-        # mask head forward and loss
-        if self.with_mask:
-            mask_results = self._mask_forward_train(x, sampling_results,
-                                                    bbox_results['bbox_feats'],
-                                                    gt_masks, img_metas)
-            losses.update(mask_results['loss_mask'])
-
-        return losses
-
-    def _bbox_forward(self, x, rois):
-        """Box head forward function used in both training and testing."""
-        # TODO: a more flexible way to decide which feature maps to use
-        bbox_feats = self.bbox_roi_extractor(
-            x[:self.bbox_roi_extractor.num_inputs], rois)
-        if self.with_shared_head:
-            bbox_feats = self.shared_head(bbox_feats)
-        cls_score, bbox_pred = self.bbox_head(bbox_feats)
-
-        bbox_results = dict(
-            cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats)
-        return bbox_results
-
-    def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,
-                            img_metas):
-        """Run forward function and calculate loss for box head in training."""
-        rois = bbox2roi([res.bboxes for res in sampling_results])
-        bbox_results = self._bbox_forward(x, rois)
-
-        bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,
-                                                  gt_labels, self.train_cfg)
-        loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],
-                                        bbox_results['bbox_pred'], rois,
-                                        *bbox_targets)
-
-        bbox_results.update(loss_bbox=loss_bbox)
-        return bbox_results
-
-    def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks,
-                            img_metas):
-        """Run forward function and calculate loss for mask head in
-        training."""
-        if not self.share_roi_extractor:
-            pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results])
-            mask_results = self._mask_forward(x, pos_rois)
-        else:
-            pos_inds = []
-            device = bbox_feats.device
-            for res in sampling_results:
-                pos_inds.append(
-                    torch.ones(
-                        res.pos_bboxes.shape[0],
-                        device=device,
-                        dtype=torch.uint8))
-                pos_inds.append(
-                    torch.zeros(
-                        res.neg_bboxes.shape[0],
-                        device=device,
-                        dtype=torch.uint8))
-            pos_inds = torch.cat(pos_inds)
-
-            mask_results = self._mask_forward(
-                x, pos_inds=pos_inds, bbox_feats=bbox_feats)
-
-        mask_targets = self.mask_head.get_targets(sampling_results, gt_masks,
-                                                  self.train_cfg)
-        pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
-        loss_mask = self.mask_head.loss(mask_results['mask_pred'],
-                                        mask_targets, pos_labels)
-
-        mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets)
-        return mask_results
-
-    def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None):
-        """Mask head forward function used in both training and testing."""
-        assert ((rois is not None) ^
-                (pos_inds is not None and bbox_feats is not None))
-        if rois is not None:
-            mask_feats = self.mask_roi_extractor(
-                x[:self.mask_roi_extractor.num_inputs], rois)
-            if self.with_shared_head:
-                mask_feats = self.shared_head(mask_feats)
-        else:
-            assert bbox_feats is not None
-            mask_feats = bbox_feats[pos_inds]
-
-        mask_pred = self.mask_head(mask_feats)
-        mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats)
-        return mask_results
-
-    async def async_simple_test(self,
-                                x,
-                                proposal_list,
-                                img_metas,
-                                proposals=None,
-                                rescale=False):
-        """Async test without augmentation."""
-        assert self.with_bbox, 'Bbox head must be implemented.'
-
-        det_bboxes, det_labels = await self.async_test_bboxes(
-            x, img_metas, proposal_list, self.test_cfg, rescale=rescale)
-        bbox_results = bbox2result(det_bboxes, det_labels,
-                                   self.bbox_head.num_classes)
-        if not self.with_mask:
-            return bbox_results
-        else:
-            segm_results = await self.async_test_mask(
-                x,
-                img_metas,
-                det_bboxes,
-                det_labels,
-                rescale=rescale,
-                mask_test_cfg=self.test_cfg.get('mask'))
-            return bbox_results, segm_results
-
-    def simple_test(self,
-                    x,
-                    proposal_list,
-                    img_metas,
-                    proposals=None,
-                    rescale=False):
-        """Test without augmentation."""
-        assert self.with_bbox, 'Bbox head must be implemented.'
-
-        det_bboxes, det_labels = self.simple_test_bboxes(
-            x, img_metas, proposal_list, self.test_cfg, rescale=rescale)
-        if torch.onnx.is_in_onnx_export():
-            if self.with_mask:
-                segm_results = self.simple_test_mask(
-                    x, img_metas, det_bboxes, det_labels, rescale=rescale)
-                return det_bboxes, det_labels, segm_results
-            else:
-                return det_bboxes, det_labels
-
-        bbox_results = [
-            bbox2result(det_bboxes[i], det_labels[i],
-                        self.bbox_head.num_classes)
-            for i in range(len(det_bboxes))
-        ]
-
-        if not self.with_mask:
-            return bbox_results
-        else:
-            segm_results = self.simple_test_mask(
-                x, img_metas, det_bboxes, det_labels, rescale=rescale)
-            return list(zip(bbox_results, segm_results))
-
-    def aug_test(self, x, proposal_list, img_metas, rescale=False):
-        """Test with augmentations.
-
-        If rescale is False, then returned bboxes and masks will fit the scale
-        of imgs[0].
-        """
-        det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas,
-                                                      proposal_list,
-                                                      self.test_cfg)
-
-        if rescale:
-            _det_bboxes = det_bboxes
-        else:
-            _det_bboxes = det_bboxes.clone()
-            _det_bboxes[:, :4] *= det_bboxes.new_tensor(
-                img_metas[0][0]['scale_factor'])
-        bbox_results = bbox2result(_det_bboxes, det_labels,
-                                   self.bbox_head.num_classes)
-
-        # det_bboxes always keep the original scale
-        if self.with_mask:
-            segm_results = self.aug_test_mask(x, img_metas, det_bboxes,
-                                              det_labels)
-            return [(bbox_results, segm_results)]
-        else:
-            return [bbox_results]
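
A minimal sketch of the `pos_inds` bookkeeping in `_mask_forward_train` above, with made-up sample counts: when the ROI extractor is shared, positive ROIs are selected out of the already-computed `bbox_feats` with a boolean mask built in the same per-image, positives-first order as the sampler output:

```python
import torch

# Hypothetical (num_pos, num_neg) counts per image; in training these
# come from the bbox sampler, positives first within each image.
samples = [(3, 5), (2, 6)]
pos_inds = []
for num_pos, num_neg in samples:
    pos_inds.append(torch.ones(num_pos, dtype=torch.uint8))
    pos_inds.append(torch.zeros(num_neg, dtype=torch.uint8))
pos_inds = torch.cat(pos_inds)

bbox_feats = torch.randn(pos_inds.numel(), 256, 7, 7)  # fake ROI features
mask_feats = bbox_feats[pos_inds.bool()]  # keep positive ROIs only
assert mask_feats.shape[0] == sum(p for p, _ in samples)
```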
spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py DELETED
@@ -1,413 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Deformable DETR
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------------------
-# Modified from:
-# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py
-# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py
-# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py
-# ------------------------------------------------------------------------------------------------
-
-import math
-import warnings
-from typing import Optional
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.init import constant_, xavier_uniform_
-
-try:
-    from groundingdino import _C
-except Exception:  # fall back to the pure-PyTorch path below
-    warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
-
-
-# helpers
-def _is_power_of_2(n):
-    if (not isinstance(n, int)) or (n < 0):
-        raise ValueError(
-            "invalid input for _is_power_of_2: {} (type: {})".format(n, type(n)))
-    return (n & (n - 1) == 0) and n != 0
-
-
-class MultiScaleDeformableAttnFunction(Function):
-    @staticmethod
-    def forward(
-        ctx,
-        value,
-        value_spatial_shapes,
-        value_level_start_index,
-        sampling_locations,
-        attention_weights,
-        im2col_step,
-    ):
-        ctx.im2col_step = im2col_step
-        output = _C.ms_deform_attn_forward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            ctx.im2col_step,
-        )
-        ctx.save_for_backward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-        )
-        return output
-
-    @staticmethod
-    @once_differentiable
-    def backward(ctx, grad_output):
-        (
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-        ) = ctx.saved_tensors
-        grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward(
-            value,
-            value_spatial_shapes,
-            value_level_start_index,
-            sampling_locations,
-            attention_weights,
-            grad_output,
-            ctx.im2col_step,
-        )
-
-        return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None
-
-
-def multi_scale_deformable_attn_pytorch(
-    value: torch.Tensor,
-    value_spatial_shapes: torch.Tensor,
-    sampling_locations: torch.Tensor,
-    attention_weights: torch.Tensor,
-) -> torch.Tensor:
-
-    bs, _, num_heads, embed_dims = value.shape
-    _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape
-    value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)
-    sampling_grids = 2 * sampling_locations - 1
-    sampling_value_list = []
-    for level, (H_, W_) in enumerate(value_spatial_shapes):
-        # bs, H_*W_, num_heads, embed_dims ->
-        # bs, H_*W_, num_heads*embed_dims ->
-        # bs, num_heads*embed_dims, H_*W_ ->
-        # bs*num_heads, embed_dims, H_, W_
-        value_l_ = (
-            value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_)
-        )
-        # bs, num_queries, num_heads, num_points, 2 ->
-        # bs, num_heads, num_queries, num_points, 2 ->
-        # bs*num_heads, num_queries, num_points, 2
-        sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1)
-        # bs*num_heads, embed_dims, num_queries, num_points
-        sampling_value_l_ = F.grid_sample(
-            value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False
-        )
-        sampling_value_list.append(sampling_value_l_)
-    # (bs, num_queries, num_heads, num_levels, num_points) ->
-    # (bs, num_heads, num_queries, num_levels, num_points) ->
-    # (bs, num_heads, 1, num_queries, num_levels*num_points)
-    attention_weights = attention_weights.transpose(1, 2).reshape(
-        bs * num_heads, 1, num_queries, num_levels * num_points
-    )
-    output = (
-        (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights)
-        .sum(-1)
-        .view(bs, num_heads * embed_dims, num_queries)
-    )
-    return output.transpose(1, 2).contiguous()
-
-
-class MultiScaleDeformableAttention(nn.Module):
-    """Multi-Scale Deformable Attention Module used in Deformable-DETR
-
-    `Deformable DETR: Deformable Transformers for End-to-End Object Detection.
-    <https://arxiv.org/pdf/2010.04159.pdf>`_.
-
-    Args:
-        embed_dim (int): The embedding dimension of Attention. Default: 256.
-        num_heads (int): The number of attention heads. Default: 8.
-        num_levels (int): The number of feature maps used in Attention.
-            Default: 4.
-        num_points (int): The number of sampling points for each query
-            in each head. Default: 4.
-        img2col_step (int): The step used in image_to_column. Default: 64.
-        batch_first (bool): If ``True``, the input and output tensors are
-            provided as ``(bs, n, embed_dim)``; otherwise as
-            ``(n, bs, embed_dim)``. Default: False.
-    """
-
-    def __init__(
-        self,
-        embed_dim: int = 256,
-        num_heads: int = 8,
-        num_levels: int = 4,
-        num_points: int = 4,
-        img2col_step: int = 64,
-        batch_first: bool = False,
-    ):
-        super().__init__()
-        if embed_dim % num_heads != 0:
-            raise ValueError(
-                "embed_dim must be divisible by num_heads, but got {} and {}".format(
-                    embed_dim, num_heads
-                )
-            )
-        head_dim = embed_dim // num_heads
-
-        self.batch_first = batch_first
-
-        if not _is_power_of_2(head_dim):
-            warnings.warn(
-                "You'd better set d_model in MSDeformAttn to make sure that "
-                "each dim of the attention head is a power of 2, which is "
-                "more efficient in our CUDA implementation."
-            )
-
-        self.im2col_step = img2col_step
-        self.embed_dim = embed_dim
-        self.num_heads = num_heads
-        self.num_levels = num_levels
-        self.num_points = num_points
-        self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2)
-        self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points)
-        self.value_proj = nn.Linear(embed_dim, embed_dim)
-        self.output_proj = nn.Linear(embed_dim, embed_dim)
-
-        self.init_weights()
-
-    def _reset_parameters(self):
-        return self.init_weights()
-
-    def init_weights(self):
-        """
-        Default initialization for Parameters of Module.
-        """
-        constant_(self.sampling_offsets.weight.data, 0.0)
-        thetas = torch.arange(self.num_heads, dtype=torch.float32) * (
-            2.0 * math.pi / self.num_heads
-        )
-        grid_init = torch.stack([thetas.cos(), thetas.sin()], -1)
-        grid_init = (
-            (grid_init / grid_init.abs().max(-1, keepdim=True)[0])
-            .view(self.num_heads, 1, 1, 2)
-            .repeat(1, self.num_levels, self.num_points, 1)
-        )
-        for i in range(self.num_points):
-            grid_init[:, :, i, :] *= i + 1
-        with torch.no_grad():
-            self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1))
-        constant_(self.attention_weights.weight.data, 0.0)
-        constant_(self.attention_weights.bias.data, 0.0)
-        xavier_uniform_(self.value_proj.weight.data)
-        constant_(self.value_proj.bias.data, 0.0)
-        xavier_uniform_(self.output_proj.weight.data)
-        constant_(self.output_proj.bias.data, 0.0)
-
-    def freeze_sampling_offsets(self):
-        print("Freeze sampling offsets")
-        self.sampling_offsets.weight.requires_grad = False
-        self.sampling_offsets.bias.requires_grad = False
226
-
227
- def freeze_attention_weights(self):
228
- print("Freeze attention weights")
229
- self.attention_weights.weight.requires_grad = False
230
- self.attention_weights.bias.requires_grad = False
231
-
232
- def forward(
233
- self,
234
- query: torch.Tensor,
235
- key: Optional[torch.Tensor] = None,
236
- value: Optional[torch.Tensor] = None,
237
- query_pos: Optional[torch.Tensor] = None,
238
- key_padding_mask: Optional[torch.Tensor] = None,
239
- reference_points: Optional[torch.Tensor] = None,
240
- spatial_shapes: Optional[torch.Tensor] = None,
241
- level_start_index: Optional[torch.Tensor] = None,
242
- **kwargs
243
- ) -> torch.Tensor:
244
-
245
- """Forward Function of MultiScaleDeformableAttention
246
-
247
- Args:
248
- query (torch.Tensor): Query embeddings with shape
249
- `(num_query, bs, embed_dim)`
250
- key (torch.Tensor): Key embeddings with shape
251
- `(num_key, bs, embed_dim)`
252
- value (torch.Tensor): Value embeddings with shape
253
- `(num_key, bs, embed_dim)`
254
- query_pos (torch.Tensor): The position embedding for `query`. Default: None.
255
- key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`,
256
- indicating which elements within `key` to be ignored in attention.
257
- reference_points (torch.Tensor): The normalized reference points
258
- with shape `(bs, num_query, num_levels, 2)`,
259
- all elements is range in [0, 1], top-left (0, 0),
260
- bottom-right (1, 1), including padding are.
261
- or `(N, Length_{query}, num_levels, 4)`, add additional
262
- two dimensions `(h, w)` to form reference boxes.
263
- spatial_shapes (torch.Tensor): Spatial shape of features in different levels.
264
- With shape `(num_levels, 2)`, last dimension represents `(h, w)`.
265
- level_start_index (torch.Tensor): The start index of each level. A tensor with
266
- shape `(num_levels, )` which can be represented as
267
- `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`.
268
-
269
- Returns:
270
- torch.Tensor: forward results with shape `(num_query, bs, embed_dim)`
271
- """
272
-
273
- if value is None:
274
- value = query
275
-
276
- if query_pos is not None:
277
- query = query + query_pos
278
-
279
- if not self.batch_first:
280
- # change to (bs, num_query ,embed_dims)
281
- query = query.permute(1, 0, 2)
282
- value = value.permute(1, 0, 2)
283
-
284
- bs, num_query, _ = query.shape
285
- bs, num_value, _ = value.shape
286
-
287
- assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value
288
-
289
- value = self.value_proj(value)
290
- if key_padding_mask is not None:
291
- value = value.masked_fill(key_padding_mask[..., None], float(0))
292
- value = value.view(bs, num_value, self.num_heads, -1)
293
- sampling_offsets = self.sampling_offsets(query).view(
294
- bs, num_query, self.num_heads, self.num_levels, self.num_points, 2
295
- )
296
- attention_weights = self.attention_weights(query).view(
297
- bs, num_query, self.num_heads, self.num_levels * self.num_points
298
- )
299
- attention_weights = attention_weights.softmax(-1)
300
- attention_weights = attention_weights.view(
301
- bs,
302
- num_query,
303
- self.num_heads,
304
- self.num_levels,
305
- self.num_points,
306
- )
307
-
308
- # bs, num_query, num_heads, num_levels, num_points, 2
309
- if reference_points.shape[-1] == 2:
310
- offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
311
- sampling_locations = (
312
- reference_points[:, :, None, :, None, :]
313
- + sampling_offsets / offset_normalizer[None, None, None, :, None, :]
314
- )
315
- elif reference_points.shape[-1] == 4:
316
- sampling_locations = (
317
- reference_points[:, :, None, :, None, :2]
318
- + sampling_offsets
319
- / self.num_points
320
- * reference_points[:, :, None, :, None, 2:]
321
- * 0.5
322
- )
323
- else:
324
- raise ValueError(
325
- "Last dim of reference_points must be 2 or 4, but get {} instead.".format(
326
- reference_points.shape[-1]
327
- )
328
- )
329
-
330
- if torch.cuda.is_available() and value.is_cuda:
331
- halffloat = False
332
- if value.dtype == torch.float16:
333
- halffloat = True
334
- value = value.float()
335
- sampling_locations = sampling_locations.float()
336
- attention_weights = attention_weights.float()
337
-
338
- output = MultiScaleDeformableAttnFunction.apply(
339
- value,
340
- spatial_shapes,
341
- level_start_index,
342
- sampling_locations,
343
- attention_weights,
344
- self.im2col_step,
345
- )
346
-
347
- if halffloat:
348
- output = output.half()
349
- else:
350
- output = multi_scale_deformable_attn_pytorch(
351
- value, spatial_shapes, sampling_locations, attention_weights
352
- )
353
-
354
- output = self.output_proj(output)
355
-
356
- if not self.batch_first:
357
- output = output.permute(1, 0, 2)
358
-
359
- return output
360
-
361
-
362
- def create_dummy_class(klass, dependency, message=""):
363
- """
364
- When a dependency of a class is not available, create a dummy class which throws ImportError
365
- when used.
366
-
367
- Args:
368
- klass (str): name of the class.
369
- dependency (str): name of the dependency.
370
- message: extra message to print
371
- Returns:
372
- class: a class object
373
- """
374
- err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass)
375
- if message:
376
- err = err + " " + message
377
-
378
- class _DummyMetaClass(type):
379
- # throw error on class attribute access
380
- def __getattr__(_, __): # noqa: B902
381
- raise ImportError(err)
382
-
383
- class _Dummy(object, metaclass=_DummyMetaClass):
384
- # throw error on constructor
385
- def __init__(self, *args, **kwargs):
386
- raise ImportError(err)
387
-
388
- return _Dummy
389
-
390
-
391
- def create_dummy_func(func, dependency, message=""):
392
- """
393
- When a dependency of a function is not available, create a dummy function which throws
394
- ImportError when used.
395
-
396
- Args:
397
- func (str): name of the function.
398
- dependency (str or list[str]): name(s) of the dependency.
399
- message: extra message to print
400
- Returns:
401
- function: a function object
402
- """
403
- err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func)
404
- if message:
405
- err = err + " " + message
406
-
407
- if isinstance(dependency, (list, tuple)):
408
- dependency = ",".join(dependency)
409
-
410
- def _dummy(*args, **kwargs):
411
- raise ImportError(err)
412
-
413
- return _dummy
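
The pure-PyTorch fallback above can be sanity-checked without the compiled `_C` extension. A minimal sketch with toy shapes; every dimension below is an arbitrary example value, not something the module prescribes:

```python
import torch

bs, num_heads, head_dim = 2, 8, 32
num_queries, num_levels, num_points = 10, 2, 4
spatial_shapes = torch.tensor([[8, 8], [4, 4]])  # (num_levels, 2) as (H, W)
num_value = int((spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum())

value = torch.rand(bs, num_value, num_heads, head_dim)
sampling_locations = torch.rand(bs, num_queries, num_heads, num_levels, num_points, 2)
attention_weights = torch.rand(bs, num_queries, num_heads, num_levels, num_points)
# Normalize over (levels, points), mirroring the softmax in the module's forward().
attention_weights = attention_weights / attention_weights.sum((-2, -1), keepdim=True)

out = multi_scale_deformable_attn_pytorch(
    value, spatial_shapes, sampling_locations, attention_weights
)
print(out.shape)  # torch.Size([2, 10, 256]) == (bs, num_queries, num_heads * head_dim)
```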
 
spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Andrew_AI
- emoji: 🐳
- colorFrom: yellow
- colorTo: purple
- sdk: docker
- pinned: false
- license: mit
- app_port: 7860
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/CognitiveLabs/GPT-auto-webscraping/AssistantService.py DELETED
@@ -1,22 +0,0 @@
- from langchain.chat_models import ChatOpenAI
- from chains.output_format.base import chain_output_format
- from chains.code_generator.base import chain_code_generator
- import os
-
- class GPTAssistant:
-     def __init__(self, api_key: str):
-         os.environ['OPENAI_API_KEY'] = api_key
-         self.llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo-16k', request_timeout=120, client=None)
-
-     def chain_response_format(self, html_content):
-         # prompt templates
-         output_format_chain = chain_output_format(self.llm)
-
-         # chain
-         return output_format_chain.run(html_content=html_content)
-
-     def chain_code_generator(self, output_format, html_content):
-         # prompt templates
-         script_chain = chain_code_generator(self.llm)
-
-         return script_chain.run(output_format=output_format, html_content=html_content)
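
For context, a hypothetical driver for the class above; the API key and HTML snippet are placeholders, and the `chains.*` modules come from the same repo:

```python
assistant = GPTAssistant(api_key="sk-...")             # placeholder key
html = "<ul><li class='price'>19.99</li></ul>"         # placeholder page fragment

output_format = assistant.chain_response_format(html)  # 1) infer an output schema
script = assistant.chain_code_generator(output_format, html)  # 2) generate scraper code
print(script)
```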
 
spaces/Cong723/gpt-academic-public/crazy_functions/读文章写摘要.py DELETED
@@ -1,67 +0,0 @@
- from toolbox import update_ui
- from toolbox import CatchException, report_execption, write_results_to_file
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
- fast_debug = False
-
-
- def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
-     import time, glob, os
-     print('begin analysis on:', file_manifest)
-     for index, fp in enumerate(file_manifest):
-         with open(fp, 'r', encoding='utf-8', errors='replace') as f:
-             file_content = f.read()
-
-         prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index == 0 else ""
-         i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
-         i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
-         chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-         if not fast_debug:
-             msg = '正常'
-             # ** gpt request **
-             gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt)  # with timeout countdown
-
-             chatbot[-1] = (i_say_show_user, gpt_say)
-             history.append(i_say_show_user); history.append(gpt_say)
-             yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-             if not fast_debug: time.sleep(2)
-
-     all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
-     i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
-     chatbot.append((i_say, "[Local Message] waiting gpt response."))
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-     if not fast_debug:
-         msg = '正常'
-         # ** gpt request **
-         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say, llm_kwargs, chatbot, history=history, sys_prompt=system_prompt)  # with timeout countdown
-
-         chatbot[-1] = (i_say, gpt_say)
-         history.append(i_say); history.append(gpt_say)
-         yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-         res = write_results_to_file(history)
-         chatbot.append(("完成了吗?", res))
-         yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-
-
- @CatchException
- def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-     history = []  # clear history to avoid input overflow
-     import glob, os
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]  # + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
 
spaces/Cpp4App/Cpp4App/SEM/paragraph_bayesian.py DELETED
@@ -1,52 +0,0 @@
- import csv
- import joblib
-
- from sklearn.naive_bayes import MultinomialNB
-
- from SEM.text_preprocessing import pre_process_title
- from sklearn.feature_extraction.text import TfidfVectorizer
-
-
- def readtrain():
-     with open('SEM/training_data/title.csv', 'rt') as csvfile:
-         reader = csv.reader(csvfile)
-         column1 = [row for row in reader]
-     content_train = [i[0] for i in column1[1:]]
-     opinion_train = [i[1] for i in column1[1:]]
-     train = [content_train, opinion_train]
-     return train
-
- def segmentWord(cont):
-     c = []
-     for i in cont:
-         clean_text = pre_process_title(i)
-         c.append(clean_text)
-     return c
-
- train = readtrain()
- content = segmentWord(train[1])
-
- textMark = train[0]
-
- train_content = content[:]
- # test_content = content[450:508]
- train_textMark = textMark[:]
- # test_textMark = textMark[450:508]
-
- tf = TfidfVectorizer(max_df=0.5)
-
- train_features = tf.fit_transform(train_content)
-
- load_pretrain_model = True
-
- if not load_pretrain_model:
-     clf = MultinomialNB(alpha=0.1)
-     clf.fit(train_features, train_textMark)
-
-     joblib.dump(clf, 'SEM/model/para_model.pkl')
- else:
-     clf = joblib.load('SEM/model/para_model.pkl')
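
With `tf` fitted and `clf` trained or loaded as above, classifying a new title is a transform-then-predict call. A minimal inference sketch; the sample title is invented:

```python
new_title = "How we collect and share your personal information"

features = tf.transform([pre_process_title(new_title)])  # reuse the fitted TF-IDF vocabulary
print(clf.predict(features))        # predicted category label
print(clf.predict_proba(features))  # per-class probabilities
```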
 
spaces/DEEMOSTECH/ChatAvatar/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Hyperhuman Hf
- emoji: ⚡
- colorFrom: indigo
- colorTo: pink
- sdk: static
- pinned: false
- ---
-
- This is the [Paper](https://arxiv.org/abs/2304.03117)
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/IcnsImagePlugin.py DELETED
@@ -1,399 +0,0 @@
- #
- # The Python Imaging Library.
- # $Id$
- #
- # macOS icns file decoder, based on icns.py by Bob Ippolito.
- #
- # history:
- # 2004-10-09 fl Turned into a PIL plugin; removed 2.3 dependencies.
- # 2020-04-04 Allow saving on all operating systems.
- #
- # Copyright (c) 2004 by Bob Ippolito.
- # Copyright (c) 2004 by Secret Labs.
- # Copyright (c) 2004 by Fredrik Lundh.
- # Copyright (c) 2014 by Alastair Houghton.
- # Copyright (c) 2020 by Pan Jing.
- #
- # See the README file for information on usage and redistribution.
- #
-
- import io
- import os
- import struct
- import sys
-
- from . import Image, ImageFile, PngImagePlugin, features
-
- enable_jpeg2k = features.check_codec("jpg_2000")
- if enable_jpeg2k:
-     from . import Jpeg2KImagePlugin
-
- MAGIC = b"icns"
- HEADERSIZE = 8
-
-
- def nextheader(fobj):
-     return struct.unpack(">4sI", fobj.read(HEADERSIZE))
-
-
- def read_32t(fobj, start_length, size):
-     # The 128x128 icon seems to have an extra header for some reason.
-     (start, length) = start_length
-     fobj.seek(start)
-     sig = fobj.read(4)
-     if sig != b"\x00\x00\x00\x00":
-         msg = "Unknown signature, expecting 0x00000000"
-         raise SyntaxError(msg)
-     return read_32(fobj, (start + 4, length - 4), size)
-
-
- def read_32(fobj, start_length, size):
-     """
-     Read a 32bit RGB icon resource. Seems to be either uncompressed or
-     an RLE packbits-like scheme.
-     """
-     (start, length) = start_length
-     fobj.seek(start)
-     pixel_size = (size[0] * size[2], size[1] * size[2])
-     sizesq = pixel_size[0] * pixel_size[1]
-     if length == sizesq * 3:
-         # uncompressed ("RGBRGB")
-         indata = fobj.read(length)
-         im = Image.frombuffer("RGB", pixel_size, indata, "raw", "RGB", 0, 1)
-     else:
-         # decode image
-         im = Image.new("RGB", pixel_size, None)
-         for band_ix in range(3):
-             data = []
-             bytesleft = sizesq
-             while bytesleft > 0:
-                 byte = fobj.read(1)
-                 if not byte:
-                     break
-                 byte = byte[0]
-                 if byte & 0x80:
-                     blocksize = byte - 125
-                     byte = fobj.read(1)
-                     for i in range(blocksize):
-                         data.append(byte)
-                 else:
-                     blocksize = byte + 1
-                     data.append(fobj.read(blocksize))
-                 bytesleft -= blocksize
-                 if bytesleft <= 0:
-                     break
-             if bytesleft != 0:
-                 msg = f"Error reading channel [{repr(bytesleft)} left]"
-                 raise SyntaxError(msg)
-             band = Image.frombuffer("L", pixel_size, b"".join(data), "raw", "L", 0, 1)
-             im.im.putband(band.im, band_ix)
-     return {"RGB": im}
-
-
- def read_mk(fobj, start_length, size):
-     # Alpha masks seem to be uncompressed
-     start = start_length[0]
-     fobj.seek(start)
-     pixel_size = (size[0] * size[2], size[1] * size[2])
-     sizesq = pixel_size[0] * pixel_size[1]
-     band = Image.frombuffer("L", pixel_size, fobj.read(sizesq), "raw", "L", 0, 1)
-     return {"A": band}
-
-
- def read_png_or_jpeg2000(fobj, start_length, size):
-     (start, length) = start_length
-     fobj.seek(start)
-     sig = fobj.read(12)
-     if sig[:8] == b"\x89PNG\x0d\x0a\x1a\x0a":
-         fobj.seek(start)
-         im = PngImagePlugin.PngImageFile(fobj)
-         Image._decompression_bomb_check(im.size)
-         return {"RGBA": im}
-     elif (
-         sig[:4] == b"\xff\x4f\xff\x51"
-         or sig[:4] == b"\x0d\x0a\x87\x0a"
-         or sig == b"\x00\x00\x00\x0cjP  \x0d\x0a\x87\x0a"
-     ):
-         if not enable_jpeg2k:
-             msg = (
-                 "Unsupported icon subimage format (rebuild PIL "
-                 "with JPEG 2000 support to fix this)"
-             )
-             raise ValueError(msg)
-         # j2k, jpc or j2c
-         fobj.seek(start)
-         jp2kstream = fobj.read(length)
-         f = io.BytesIO(jp2kstream)
-         im = Jpeg2KImagePlugin.Jpeg2KImageFile(f)
-         Image._decompression_bomb_check(im.size)
-         if im.mode != "RGBA":
-             im = im.convert("RGBA")
-         return {"RGBA": im}
-     else:
-         msg = "Unsupported icon subimage format"
-         raise ValueError(msg)
-
-
- class IcnsFile:
-     SIZES = {
-         (512, 512, 2): [(b"ic10", read_png_or_jpeg2000)],
-         (512, 512, 1): [(b"ic09", read_png_or_jpeg2000)],
-         (256, 256, 2): [(b"ic14", read_png_or_jpeg2000)],
-         (256, 256, 1): [(b"ic08", read_png_or_jpeg2000)],
-         (128, 128, 2): [(b"ic13", read_png_or_jpeg2000)],
-         (128, 128, 1): [
-             (b"ic07", read_png_or_jpeg2000),
-             (b"it32", read_32t),
-             (b"t8mk", read_mk),
-         ],
-         (64, 64, 1): [(b"icp6", read_png_or_jpeg2000)],
-         (32, 32, 2): [(b"ic12", read_png_or_jpeg2000)],
-         (48, 48, 1): [(b"ih32", read_32), (b"h8mk", read_mk)],
-         (32, 32, 1): [
-             (b"icp5", read_png_or_jpeg2000),
-             (b"il32", read_32),
-             (b"l8mk", read_mk),
-         ],
-         (16, 16, 2): [(b"ic11", read_png_or_jpeg2000)],
-         (16, 16, 1): [
-             (b"icp4", read_png_or_jpeg2000),
-             (b"is32", read_32),
-             (b"s8mk", read_mk),
-         ],
-     }
-
-     def __init__(self, fobj):
-         """
-         fobj is a file-like object as an icns resource
-         """
-         # signature : (start, length)
-         self.dct = dct = {}
-         self.fobj = fobj
-         sig, filesize = nextheader(fobj)
-         if not _accept(sig):
-             msg = "not an icns file"
-             raise SyntaxError(msg)
-         i = HEADERSIZE
-         while i < filesize:
-             sig, blocksize = nextheader(fobj)
-             if blocksize <= 0:
-                 msg = "invalid block header"
-                 raise SyntaxError(msg)
-             i += HEADERSIZE
-             blocksize -= HEADERSIZE
-             dct[sig] = (i, blocksize)
-             fobj.seek(blocksize, io.SEEK_CUR)
-             i += blocksize
-
-     def itersizes(self):
-         sizes = []
-         for size, fmts in self.SIZES.items():
-             for fmt, reader in fmts:
-                 if fmt in self.dct:
-                     sizes.append(size)
-                     break
-         return sizes
-
-     def bestsize(self):
-         sizes = self.itersizes()
-         if not sizes:
-             msg = "No 32bit icon resources found"
-             raise SyntaxError(msg)
-         return max(sizes)
-
-     def dataforsize(self, size):
-         """
-         Get an icon resource as {channel: array}. Note that
-         the arrays are bottom-up like windows bitmaps and will likely
-         need to be flipped or transposed in some way.
-         """
-         dct = {}
-         for code, reader in self.SIZES[size]:
-             desc = self.dct.get(code)
-             if desc is not None:
-                 dct.update(reader(self.fobj, desc, size))
-         return dct
-
-     def getimage(self, size=None):
-         if size is None:
-             size = self.bestsize()
-         if len(size) == 2:
-             size = (size[0], size[1], 1)
-         channels = self.dataforsize(size)
-
-         im = channels.get("RGBA", None)
-         if im:
-             return im
-
-         im = channels.get("RGB").copy()
-         try:
-             im.putalpha(channels["A"])
-         except KeyError:
-             pass
-         return im
-
-
- ##
- # Image plugin for Mac OS icons.
-
-
- class IcnsImageFile(ImageFile.ImageFile):
-     """
-     PIL image support for Mac OS .icns files.
-     Chooses the best resolution, but will possibly load
-     a different size image if you mutate the size attribute
-     before calling 'load'.
-
-     The info dictionary has a key 'sizes' that is a list
-     of sizes that the icns file has.
-     """
-
-     format = "ICNS"
-     format_description = "Mac OS icns resource"
-
-     def _open(self):
-         self.icns = IcnsFile(self.fp)
-         self.mode = "RGBA"
-         self.info["sizes"] = self.icns.itersizes()
-         self.best_size = self.icns.bestsize()
-         self.size = (
-             self.best_size[0] * self.best_size[2],
-             self.best_size[1] * self.best_size[2],
-         )
-
-     @property
-     def size(self):
-         return self._size
-
-     @size.setter
-     def size(self, value):
-         info_size = value
-         if info_size not in self.info["sizes"] and len(info_size) == 2:
-             info_size = (info_size[0], info_size[1], 1)
-         if (
-             info_size not in self.info["sizes"]
-             and len(info_size) == 3
-             and info_size[2] == 1
-         ):
-             simple_sizes = [
-                 (size[0] * size[2], size[1] * size[2]) for size in self.info["sizes"]
-             ]
-             if value in simple_sizes:
-                 info_size = self.info["sizes"][simple_sizes.index(value)]
-         if info_size not in self.info["sizes"]:
-             msg = "This is not one of the allowed sizes of this image"
-             raise ValueError(msg)
-         self._size = value
-
-     def load(self):
-         if len(self.size) == 3:
-             self.best_size = self.size
-             self.size = (
-                 self.best_size[0] * self.best_size[2],
-                 self.best_size[1] * self.best_size[2],
-             )
-
-         px = Image.Image.load(self)
-         if self.im is not None and self.im.size == self.size:
-             # Already loaded
-             return px
-         self.load_prepare()
-         # This is likely NOT the best way to do it, but whatever.
-         im = self.icns.getimage(self.best_size)
-
-         # If this is a PNG or JPEG 2000, it won't be loaded yet
-         px = im.load()
-
-         self.im = im.im
-         self.mode = im.mode
-         self.size = im.size
-
-         return px
-
-
- def _save(im, fp, filename):
-     """
-     Saves the image as a series of PNG files,
-     that are then combined into a .icns file.
-     """
-     if hasattr(fp, "flush"):
-         fp.flush()
-
-     sizes = {
-         b"ic07": 128,
-         b"ic08": 256,
-         b"ic09": 512,
-         b"ic10": 1024,
-         b"ic11": 32,
-         b"ic12": 64,
-         b"ic13": 256,
-         b"ic14": 512,
-     }
-     provided_images = {im.width: im for im in im.encoderinfo.get("append_images", [])}
-     size_streams = {}
-     for size in set(sizes.values()):
-         image = (
-             provided_images[size]
-             if size in provided_images
-             else im.resize((size, size))
-         )
-
-         temp = io.BytesIO()
-         image.save(temp, "png")
-         size_streams[size] = temp.getvalue()
-
-     entries = []
-     for type, size in sizes.items():
-         stream = size_streams[size]
-         entries.append(
-             {"type": type, "size": HEADERSIZE + len(stream), "stream": stream}
-         )
-
-     # Header
-     fp.write(MAGIC)
-     file_length = HEADERSIZE  # Header
-     file_length += HEADERSIZE + 8 * len(entries)  # TOC
-     file_length += sum(entry["size"] for entry in entries)
-     fp.write(struct.pack(">i", file_length))
-
-     # TOC
-     fp.write(b"TOC ")
-     fp.write(struct.pack(">i", HEADERSIZE + len(entries) * HEADERSIZE))
-     for entry in entries:
-         fp.write(entry["type"])
-         fp.write(struct.pack(">i", entry["size"]))
-
-     # Data
-     for entry in entries:
-         fp.write(entry["type"])
-         fp.write(struct.pack(">i", entry["size"]))
-         fp.write(entry["stream"])
-
-     if hasattr(fp, "flush"):
-         fp.flush()
-
-
- def _accept(prefix):
-     return prefix[:4] == MAGIC
-
-
- Image.register_open(IcnsImageFile.format, IcnsImageFile, _accept)
- Image.register_extension(IcnsImageFile.format, ".icns")
-
- Image.register_save(IcnsImageFile.format, _save)
- Image.register_mime(IcnsImageFile.format, "image/icns")
-
- if __name__ == "__main__":
-     if len(sys.argv) < 2:
-         print("Syntax: python3 IcnsImagePlugin.py [file]")
-         sys.exit()
-
-     with open(sys.argv[1], "rb") as fp:
-         imf = IcnsImageFile(fp)
-         for size in imf.info["sizes"]:
-             imf.size = size
-             imf.save("out-%s-%s-%s.png" % size)
-     with Image.open(sys.argv[1]) as im:
-         im.save("out.png")
-     if sys.platform == "win32":
-         os.startfile("out.png")
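
Because the plugin registers itself with `Image`, .icns files round-trip through the ordinary PIL API. A minimal sketch; the file paths are placeholders:

```python
from PIL import Image

with Image.open("app.icns") as im:
    print(im.format, im.info["sizes"])  # "ICNS" plus every embedded (w, h, scale)
    im.save("copy.icns")                # _save() re-encodes a fixed set of PNG subimages
```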
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_signals.py DELETED
@@ -1,26 +0,0 @@
- from __future__ import annotations
-
- from typing import AsyncIterator
-
- from ._compat import DeprecatedAsyncContextManager
- from ._eventloop import get_asynclib
-
-
- def open_signal_receiver(
-     *signals: int,
- ) -> DeprecatedAsyncContextManager[AsyncIterator[int]]:
-     """
-     Start receiving operating system signals.
-
-     :param signals: signals to receive (e.g. ``signal.SIGINT``)
-     :return: an asynchronous context manager for an asynchronous iterator which yields signal
-         numbers
-
-     .. warning:: Windows does not support signals natively so it is best to avoid relying on this
-         in cross-platform applications.
-
-     .. warning:: On asyncio, this permanently replaces any previous signal handler for the given
-         signals, as set via :meth:`~asyncio.loop.add_signal_handler`.
-
-     """
-     return get_asynclib().open_signal_receiver(*signals)
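
A minimal usage sketch for the wrapper above, on POSIX only per the docstring's warning; the handler body is illustrative:

```python
import signal

import anyio


async def main() -> None:
    with anyio.open_signal_receiver(signal.SIGINT, signal.SIGTERM) as signals:
        async for signum in signals:  # yields signal numbers as they arrive
            print(f"received signal {signum}, shutting down")
            return


anyio.run(main)
```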
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dsv-576afacd.js DELETED
@@ -1,6 +0,0 @@
- var D={},A={},E=34,m=10,R=13;function I(r){return new Function("d","return {"+r.map(function(t,e){return JSON.stringify(t)+": d["+e+'] || ""'}).join(",")+"}")}function B(r,t){var e=I(r);return function(a,c){return t(e(a),c,r)}}function F(r){var t=Object.create(null),e=[];return r.forEach(function(a){for(var c in a)c in t||e.push(t[c]=c)}),e}function f(r,t){var e=r+"",a=e.length;return a<t?new Array(t-a+1).join(0)+e:e}function L(r){return r<0?"-"+f(-r,6):r>9999?"+"+f(r,6):f(r,4)}function S(r){var t=r.getUTCHours(),e=r.getUTCMinutes(),a=r.getUTCSeconds(),c=r.getUTCMilliseconds();return isNaN(r)?"Invalid Date":L(r.getUTCFullYear())+"-"+f(r.getUTCMonth()+1,2)+"-"+f(r.getUTCDate(),2)+(c?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"."+f(c,3)+"Z":a?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"Z":e||t?"T"+f(t,2)+":"+f(e,2)+"Z":"")}function Z(r){var t=new RegExp('["'+r+`
- \r]`),e=r.charCodeAt(0);function a(n,o){var s,i,u=c(n,function(h,l){if(s)return s(h,l-1);i=h,s=o?B(h,o):I(h)});return u.columns=i||[],u}function c(n,o){var s=[],i=n.length,u=0,h=0,l,v=i<=0,C=!1;n.charCodeAt(i-1)===m&&--i,n.charCodeAt(i-1)===R&&--i;function w(){if(v)return A;if(C)return C=!1,D;var j,d=u,p;if(n.charCodeAt(d)===E){for(;u++<i&&n.charCodeAt(u)!==E||n.charCodeAt(++u)===E;);return(j=u)>=i?v=!0:(p=n.charCodeAt(u++))===m?C=!0:p===R&&(C=!0,n.charCodeAt(u)===m&&++u),n.slice(d+1,j-1).replace(/""/g,'"')}for(;u<i;){if((p=n.charCodeAt(j=u++))===m)C=!0;else if(p===R)C=!0,n.charCodeAt(u)===m&&++u;else if(p!==e)continue;return n.slice(d,j)}return v=!0,n.slice(d,i)}for(;(l=w())!==A;){for(var T=[];l!==D&&l!==A;)T.push(l),l=w();o&&(T=o(T,h++))==null||s.push(T)}return s}function U(n,o){return n.map(function(s){return o.map(function(i){return g(s[i])}).join(r)})}function O(n,o){return o==null&&(o=F(n)),[o.map(g).join(r)].concat(U(n,o)).join(`
- `)}function M(n,o){return o==null&&(o=F(n)),U(n,o).join(`
- `)}function b(n){return n.map(N).join(`
- `)}function N(n){return n.map(g).join(r)}function g(n){return n==null?"":n instanceof Date?S(n):t.test(n+="")?'"'+n.replace(/"/g,'""')+'"':n}return{parse:a,parseRows:c,format:O,formatBody:M,formatRows:b,formatRow:N,formatValue:g}}export{Z as d};
- //# sourceMappingURL=dsv-576afacd.js.map
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/TabItem.svelte_svelte_type_style_lang-ffbad424.js DELETED
@@ -1,2 +0,0 @@
- import{S as G,e as H,s as K,G as w,a9 as O,N as j,O as T,K as k,U as A,p as g,M as v,H as P,ay as Q,ab as R,ac as U,ad as F,z as J,v as L,A as p,w as I,a4 as S,B as V,D as W,m as B,aA as C,P as N,Q as X,R as z}from"./index-3370be2a.js";function D(n,e,l){const s=n.slice();return s[14]=e[l],s[16]=l,s}function Y(n){let e,l=n[14].name+"",s,f,d,_;function i(){return n[12](n[14],n[16])}return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","svelte-kqij2n")},m(u,m){g(u,e,m),v(e,s),v(e,f),d||(_=X(e,"click",i),d=!0)},p(u,m){n=u,m&8&&l!==(l=n[14].name+"")&&z(s,l)},d(u){u&&p(e),d=!1,_()}}}function Z(n){let e,l=n[14].name+"",s,f;return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","selected svelte-kqij2n")},m(d,_){g(d,e,_),v(e,s),v(e,f)},p(d,_){_&8&&l!==(l=d[14].name+"")&&z(s,l)},d(d){d&&p(e)}}}function M(n,e){let l,s;function f(i,u){return i[14].id===i[4]?Z:Y}let d=f(e),_=d(e);return{key:n,first:null,c(){l=B(),_.c(),s=B(),this.first=l},m(i,u){g(i,l,u),_.m(i,u),g(i,s,u)},p(i,u){e=i,d===(d=f(e))&&_?_.p(e,u):(_.d(1),_=d(e),_&&(_.c(),_.m(s.parentNode,s)))},d(i){i&&(p(l),p(s)),_.d(i)}}}function x(n){let e,l,s=[],f=new Map,d,_,i,u=w(n[3]);const m=t=>t[14].id;for(let t=0;t<u.length;t+=1){let o=D(n,u,t),r=m(o);f.set(r,s[t]=M(r,o))}const b=n[11].default,c=O(b,n,n[10],null);return{c(){e=j("div"),l=j("div");for(let t=0;t<s.length;t+=1)s[t].c();d=T(),c&&c.c(),k(l,"class","tab-nav scroll-hide svelte-kqij2n"),k(e,"class",_="tabs "+n[2].join(" ")+" svelte-kqij2n"),k(e,"id",n[1]),A(e,"hide",!n[0])},m(t,o){g(t,e,o),v(e,l);for(let r=0;r<s.length;r+=1)s[r]&&s[r].m(l,null);v(e,d),c&&c.m(e,null),i=!0},p(t,[o]){o&408&&(u=w(t[3]),s=P(s,o,m,1,t,u,f,l,Q,M,null,D)),c&&c.p&&(!i||o&1024)&&R(c,b,t,t[10],i?F(b,t[10],o,null):U(t[10]),null),(!i||o&4&&_!==(_="tabs "+t[2].join(" ")+" svelte-kqij2n"))&&k(e,"class",_),(!i||o&2)&&k(e,"id",t[1]),(!i||o&5)&&A(e,"hide",!t[0])},i(t){i||(J(c,t),i=!0)},o(t){L(c,t),i=!1},d(t){t&&p(e);for(let o=0;o<s.length;o+=1)s[o].d();c&&c.d(t)}}}const $={};function ee(n,e,l){let s,f,{$$slots:d={},$$scope:_}=e,{visible:i=!0}=e,{elem_id:u="id"}=e,{elem_classes:m=[]}=e,{selected:b}=e,c=[];const t=I(!1);S(n,t,a=>l(4,f=a));const o=I(0);S(n,o,a=>l(13,s=a));const r=V();W($,{register_tab:a=>(c.push({name:a.name,id:a.id}),t.update(h=>h??a.id),l(3,c),c.length-1),unregister_tab:a=>{const h=c.findIndex(y=>y.id===a.id);c.splice(h,1),t.update(y=>y===a.id?c[h]?.id||c[c.length-1]?.id:y)},selected_tab:t,selected_tab_index:o});function q(a){l(9,b=a),C(t,f=a,f),C(o,s=c.findIndex(h=>h.id===a),s),r("change")}const E=(a,h)=>{q(a.id),r("select",{value:a.name,index:h})};return n.$$set=a=>{"visible"in a&&l(0,i=a.visible),"elem_id"in a&&l(1,u=a.elem_id),"elem_classes"in a&&l(2,m=a.elem_classes),"selected"in a&&l(9,b=a.selected),"$$scope"in a&&l(10,_=a.$$scope)},n.$$.update=()=>{n.$$.dirty&512&&b!==null&&q(b)},[i,u,m,c,f,t,o,r,q,b,_,d,E]}class le extends G{constructor(e){super(),H(this,e,ee,x,K,{visible:0,elem_id:1,elem_classes:2,selected:9})}}export{le as T,$ as a};
- //# sourceMappingURL=TabItem.svelte_svelte_type_style_lang-ffbad424.js.map
 
spaces/DataScienceEngineering/1-SimPhysics-HTML5/README.md DELETED
@@ -1,14 +0,0 @@
- ---
- title: 🏖️PlayCanvas Simulation Vehicle Physics⛱️🌊 Live HTML5
- emoji: 1-Sim🌊
- colorFrom: green
- colorTo: gray
- sdk: static
- pinned: false
- ---
-
- Inspired by Danny Lange, VP of AI and ML at Unity
- Reference: https://youtu.be/YsEDv13W1RI?t=48
-
- Quote on ML-Agents: "... if you think about what I just said about evolution and the creation of tools for intelligence: you have basic nature, you have the 3D spatial environment, you have gravity and inertia and the physics engine, and now we throw in ML-Agents, which is a machine learning system."