Commit dfa9a4e · 1 Parent(s): 66e5b79
Update parquet files (step 16 of 397)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/DAEMON Tools Lite 44710333 Serial Number How to Get It for Free or Purchase a Paid License.md +0 -131
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ecmg Explorer Free Download How to Install and Use It Effectively.md +0 -123
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enhance Your Vocals with RVox Plugin Free Trial for Windows 10.md +0 -31
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/HAP Program Download Everything You Need to Know About This Powerful HVAC Software.md +0 -36
- spaces/1gistliPinn/ChatGPT4/Examples/Avenged Sevenfold 10 Multitracks OGG.md +0 -17
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Challenge Yourself with Dunk Shot APK Day and Earn Points for Every Basket.md +0 -86
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Crazy Car Stunts - Mega Ramp MOD APK The Best Way to Test Your Driving Skills.md +0 -93
- spaces/1phancelerku/anime-remove-background/Download Agar.io Mod Menu with Unlimited Coins DNA and God Mode.md +0 -115
- spaces/1toTree/lora_test/ppdiffusers/utils/dummy_paddle_and_scipy_objects.py +0 -49
- spaces/6Eternal9/ChatGPT4/app.py +0 -193
- spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py +0 -123
- spaces/801artistry/RVC801/infer/modules/uvr5/mdxnet.py +0 -246
- spaces/801artistry/RVC801/tools/dlmodels.sh +0 -566
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/text_encoder.py +0 -304
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192.py +0 -2861
- spaces/Abhilashvj/planogram-compliance/app_utils.py +0 -196
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Chatgpt4Online.py +0 -39
- spaces/AgentVerse/agentVerse/agentverse/__init__.py +0 -24
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ExpandSubMenu.js +0 -40
- spaces/AlanMars/QYL-AI-Space/modules/models/configuration_moss.py +0 -118
- spaces/Alesmikes/elvire01/app.py +0 -85
- spaces/AlgoveraAI/web3-wallet-streamlit/app.py +0 -59
- spaces/AmazonScience/QA-NLU/README.md +0 -37
- spaces/Amr453/Transcription/README.md +0 -12
- spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py +0 -658
- spaces/Andy1621/uniformer_image_detection/mmdet/utils/__init__.py +0 -5
- spaces/Andy1621/uniformer_light/uniformer_light_image.py +0 -535
- spaces/AndySAnker/DeepStruc/tools/module.py +0 -364
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/example/script.py +0 -139
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/__main__.py +0 -6
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/rcnn.py +0 -327
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/packaging/gen_wheel_index.sh +0 -46
- spaces/B1360976/waste-management-system/app.py +0 -125
- spaces/Bart92/RVC_HF/lib/infer_pack/modules/F0Predictor/__init__.py +0 -0
- spaces/Benson/text-generation/Examples/Buena Pizza Gran Pizza Descargar Mac.md +0 -101
- spaces/Benson/text-generation/Examples/Carreras De Coches Juego De Descarga Apk.md +0 -77
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/factory.py +0 -730
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/metadata.py +0 -1076
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_win32_console.py +0 -662
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/protocol.py +0 -42
- spaces/CVPR/LIVE/compute_distance.h +0 -949
- spaces/CVPR/LIVE/pybind11/tests/pybind11_tests.cpp +0 -91
- spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/shuffle.h +0 -54
- spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/malloc_and_free.h +0 -23
- spaces/CVPR/WALT/mmdet/models/necks/nasfcos_fpn.py +0 -161
- spaces/ChandraMohanNayal/AutoGPT/CODE_OF_CONDUCT.md +0 -40
- spaces/CofAI/chat.b4/g4f/Provider/Providers/ChatgptAi.py +0 -51
- spaces/Cong723/gpt-academic-public/docs/waifu_plugin/jquery-ui.min.js +0 -0
- spaces/Cropinky/hana_hanak_houses/realesrgan/version.py +0 -5
- spaces/DEBO-PROJECT/DEBO-V1/app.py +0 -868
spaces/1acneusushi/gradio-2dmoleculeeditor/data/DAEMON Tools Lite 44710333 Serial Number How to Get It for Free or Purchase a Paid License.md
DELETED
@@ -1,131 +0,0 @@
-
-<h1>DAEMON Tools Lite 44710333 Serial Number: How to Get It and Use It</h1>
-<p>DAEMON Tools Lite is a popular software that allows you to create and mount virtual disk images of various formats, such as ISO, MDS, MDF, MDX, B5T, B6T, NRG, CCD, CDI, CUE, ISZ and more. With DAEMON Tools Lite, you can emulate up to four virtual drives on your PC and access the content of your disk images without inserting the physical media. You can also create bootable USB sticks, compress and encrypt your disk images, protect them with passwords, and manage your image collection with ease.</p>
-<h2>DAEMON Tools Lite 44710333 Serial Number</h2><br /><p><b><b>DOWNLOAD</b> >>> <a href="https://byltly.com/2uKzWA">https://byltly.com/2uKzWA</a></b></p><br /><br />
-<h2>What is DAEMON Tools Lite?</h2>
-<h3>Features and benefits of DAEMON Tools Lite</h3>
-<p>DAEMON Tools Lite is a lightweight and user-friendly software that offers many features and benefits for its users. Some of the main features and benefits are:</p>
-<ul>
-<li>You can create disk images from physical media, such as CDs, DVDs, Blu-rays, hard disks, USB drives, etc.</li>
-<li>You can mount disk images to virtual drives and access them as if they were real disks.</li>
-<li>You can emulate up to four DT, SCSI or HDD devices on your PC.</li>
-<li>You can create bootable USB sticks for operating systems or recovery tools.</li>
-<li>You can compress and encrypt your disk images to save space and protect your data.</li>
-<li>You can password-protect your disk images and add digital signatures to verify their authenticity.</li>
-<li>You can manage your image collection with a built-in image catalog.</li>
-<li>You can customize the interface and settings of DAEMON Tools Lite according to your preferences.</li>
-</ul>
-<h3>How to download and install DAEMON Tools Lite</h3>
-<p>To download and install DAEMON Tools Lite on your PC, you need to follow these steps:</p>
-<ol>
-<li>Go to the official website of DAEMON Tools Lite at <a href="https://www.daemon-tools.cc/products/dtLite">https://www.daemon-tools.cc/products/dtLite</a> and click on the "Download" button.</li>
-<li>Choose the version of DAEMON Tools Lite that suits your needs. You can choose between the free version (with ads) or the paid version (without ads).</li>
-<li>Run the downloaded file and follow the instructions on the screen to complete the installation process.</li>
-<li>Launch DAEMON Tools Lite and enjoy its features.</li>
-</ol>
-<h2>What is a serial number and why do you need it?</h2>
-<h3>The difference between a serial number and a license key</h3>
-<p>A serial number is a unique code that identifies a specific copy of a software product. A license key is a code that activates a software product and grants its user certain rights and privileges. A serial number is usually required to install or register a software product, while a license key is usually required to activate or update a software product.</p>
-<p>In the case of DAEMON Tools Lite, you need both a serial number and a license key to use the full version of the software. The serial number is used to install DAEMON Tools Lite on your PC, while the license key is used to activate it online. Without a valid serial number and license key, you will not be able to access all the features and benefits of DAEMON Tools Lite.</p>
-<h3>The advantages of having a valid serial number for DAEMON Tools Lite</h3>
-<p>Having a valid serial number for DAEMON Tools Lite has many advantages for its users. Some of the main advantages are:</p>
-<ul>
-<li>You can use the full version of DAEMON Tools Lite without any limitations or restrictions.</li>
-<li>You can access all the features and benefits of DAEMON Tools Lite, such as creating bootable USB sticks, compressing and encrypting disk images, password-protecting disk images, etc.</li>
-<li>You can update DAEMON Tools Lite to the latest version whenever it is available.</li>
-<li>You can get technical support from the developers of DAEMON Tools Lite in case you encounter any problems or issues with the software.</li>
-<li>You can avoid any legal issues or penalties that may arise from using an illegal or pirated copy of DAEMON Tools Lite.</li>
-</ul>
-<h3>The risks of using a fake or cracked serial number for DAEMON Tools Lite</h3>
-<p>Using a fake or cracked serial number for DAEMON Tools Lite has many risks and disadvantages for its users. Some of the main risks are:</p>
-<p>How to activate DAEMON Tools Lite 10 with serial number<br />
-DAEMON Tools Lite 10 free serial key download<br />
-DAEMON Tools Lite 44710333 activation code<br />
-DAEMON Tools Lite 10 license key generator<br />
-DAEMON Tools Lite 44710333 crack full version<br />
-DAEMON Tools Lite 10 serial number and unlock key<br />
-How to get DAEMON Tools Lite 44710333 for free<br />
-DAEMON Tools Lite 10 full version with serial number<br />
-DAEMON Tools Lite 44710333 keygen download<br />
-DAEMON Tools Lite 10 activation options and features<br />
-DAEMON Tools Lite 44710333 serial number verification<br />
-DAEMON Tools Lite 10 trial period and purchase<br />
-How to use DAEMON Tools Lite 44710333 to create virtual drives<br />
-DAEMON Tools Lite 10 installation and activation guide<br />
-DAEMON Tools Lite 44710333 serial number and disclaimer<br />
-DAEMON Tools Lite 10 software review and rating<br />
-DAEMON Tools Lite 44710333 serial number and community support<br />
-DAEMON Tools Lite 10 alternative software and comparison<br />
-How to update DAEMON Tools Lite 44710333 to the latest version<br />
-DAEMON Tools Lite 10 serial number and system requirements<br />
-How to fix DAEMON Tools Lite 44710333 activation errors<br />
-DAEMON Tools Lite 10 serial number and compatibility issues<br />
-How to uninstall DAEMON Tools Lite 44710333 completely<br />
-DAEMON Tools Lite 10 serial number and refund policy<br />
-How to backup and restore DAEMON Tools Lite 44710333 settings<br />
-DAEMON Tools Lite 10 serial number and privacy policy<br />
-How to mount and unmount images with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and security features<br />
-How to customize and optimize DAEMON Tools Lite 44710333 preferences<br />
-DAEMON Tools Lite 10 serial number and user manual<br />
-How to burn and copy discs with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and technical support<br />
-How to create and edit images with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and license agreement<br />
-How to emulate and test drives with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and registration process<br />
-How to share and transfer images with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and upgrade options<br />
-How to troubleshoot and solve problems with DAEMON Tools Lite 44710333<br />
-DAEMON Tools Lite 10 serial number and feedback form</p>
-<ul>
-<li>You may not be able to use the full version of DAEMON Tools Lite or access all its features and benefits.</li>
-<li>You may encounter errors or bugs that may affect the performance or functionality of DAEMON Tools Lite.</li>
-<li>You may expose your PC to viruses, malware or spyware that may harm your system or compromise your data.</li>
-<li>You may violate the terms and conditions of use of DAEMON Tools Lite and infringe its intellectual property rights.</li>
-<li>You may face legal actions or penalties from the developers of DAEMON Tools Lite or other authorities for using an illegal or pirated copy of their software.</li>
-</ul>
-<h2>How to get a DAEMON Tools Lite 44710333 serial number for free?</h2>
-<h3>The official way: participate in promotions and giveaways</h3>
-<p>The official way to get a free serial number for DAEMON Tools Lite is to participate in promotions and giveaways that are organized by the developers of the software or their partners. These promotions and giveaways are usually announced on their official website, social media pages or newsletters. To participate in these promotions and giveaways, you may need to follow certain rules or requirements, such as liking their page, sharing their post, subscribing to their newsletter, answering some questions, etc. If you are lucky enough, you may win a free serial number for DAEMON Tools Lite that you can use to install and activate the software on your PC.</p>
-<h3>The alternative way: use a key generator or a serial number finder</h3>
-<p>The alternative way to get a free serial number for DAEMON Tools Lite is to use a key generator or a serial number finder. These are tools that can generate random codes that may match with valid serial numbers for various software products. You can find these tools online by searching for keywords like "DAEMON Tools Lite 44710333 Serial Number", "DAEMON Tools Lite Key Generator", "DAEMON Tools Lite Serial Number Finder", etc. However, you should be careful when using these tools as they may not be reliable or safe. Some of these tools may not work properly or generate invalid codes that may not activate your software. Some of these tools may also contain viruses, malware or spyware that may infect your PC or steal your data. Therefore, you should always scan these tools with an antivirus program before using them and use them at your own risk.</p>
-<h3>The illegal way: download a cracked version of DAEMON Tools Lite</h3>
-<p>The illegal way to get a free serial number for DAEMON Tools Lite is to download a cracked version of the software from unauthorized sources. A cracked version is a modified version of the software that bypasses its security mechanisms and allows its users to use it without paying for it. You can find these cracked versions online by searching for keywords like "DAEMON Tools Lite 44710333 Crack", "DAEMON Tools Lite 44710333 Patch", "DAEMON Tools Lite 44710333 Keygen", etc. However, you should avoid using these cracked versions as they are illegal and risky. Some trying to activate DAEMON Tools Lite with a serial number. Some of the common activation errors are:</p>
-<ul>
-<li>Invalid serial number: This means that the serial number you entered is not valid or does not match with the product you are trying to activate. This may happen if you mistype the serial number, use a serial number for a different product or version, or use a fake or cracked serial number. To fix this error, you need to check the spelling and format of your serial number and make sure it is correct and corresponds to the product you are using. If you have purchased a serial number from an authorized source, you can contact the support team for assistance.</li>
-<li>Activation limit exceeded: This means that you have already used your serial number to activate DAEMON Tools Lite on the maximum number of PCs allowed by your license. This may happen if you have installed DAEMON Tools Lite on multiple PCs or if you have changed your hardware configuration. To fix this error, you need to deactivate DAEMON Tools Lite on one of your PCs before activating it on another one. You can do this by going to the License wizard and clicking on the Deactivate button. If you have lost access to your previous PC or if you have any problems with deactivation, you can contact the support team for assistance.</li>
-<li>Activation server is unavailable: This means that there is a problem with the connection between your PC and the activation server of DAEMON Tools Lite. This may happen if you have a poor internet connection, a firewall or antivirus software blocking the connection, or if the activation server is down for maintenance. To fix this error, you need to check your internet connection and make sure it is stable and secure. You also need to check your firewall and antivirus settings and make sure they are not preventing DAEMON Tools Lite from accessing the activation server. You can also try to activate DAEMON Tools Lite later when the activation server is back online.</li>
-</ul>
-<h3>How to update DAEMON Tools Lite with a serial number</h3>
-<p>To update DAEMON Tools Lite with a serial number, you need to follow these steps:</p>
-<ol>
-<li>Go to the Settings of DAEMON Tools Lite and click on the General tile.</li>
-<li>Click on the Update now button and wait for DAEMON Tools Lite to check for updates.</li>
-<li>If there is a new version of DAEMON Tools Lite available, you will see a green icon in the upper right corner of the window. Click on it to download and install the latest version.</li>
-<li>Launch DAEMON Tools Lite and enjoy its new features and improvements.</li>
-</ol>
-<p>You can also check for updates by visiting your personal DAEMON Tools account on their website. You will see a note New version available near the product name if there is an update available.</p>
-<h2>Conclusion</h2>
-<p>DAEMON Tools Lite is a powerful and versatile software that allows you to create and mount virtual disk images of various formats. To use the full version of DAEMON Tools Lite, you need both a serial number and a license key that you can get by purchasing them from an authorized source or by participating in promotions and giveaways. You should avoid using fake or cracked serial numbers as they may not work properly or may expose your PC to risks and legal issues. You should also keep your DAEMON Tools Lite updated to enjoy its latest features and improvements.</p>
-<h2>FAQs</h2>
-<ul>
-<li>Q: What is the difference between DAEMON Tools Lite and DAEMON Tools Pro?</li>
-<li>A: DAEMON Tools Lite is a free version of DAEMON Tools that offers basic functionality and features for personal use. DAEMON Tools Pro is a paid version of DAEMON Tools that offers advanced functionality and features for professional use.</li>
-<li>Q: How many PCs can I activate with one serial number for DAEMON Tools Lite?</li>
-<li>A: You can activate up to three PCs with one serial number for DAEMON Tools Lite if you have purchased a Personal License. If you have purchased a Commercial License, you can activate only one PC with one serial number.</li>
-<li>Q: How can I recover my serial number for DAEMON Tools Lite if I lose it?</li>
-<li>A: You can recover your serial number for DAEMON Tools Lite by logging into your personal DAEMON Tools account on their website and checking your order history. You can also contact the support team with your order details and request them to resend your serial number.</li>
-<li>Q: How can I contact the support team of DAEMON Tools Lite if I have any questions or issues?</li>
-<li>A: You can contact the support team of DAEMON Tools Lite by filling out an online form on their website or by sending an email to [email protected]. You can also visit their forum or social media pages for more information and help.</li>
-<li>Q: How can I uninstall DAEMON Tools Lite from my PC?</li>
-<li>A: You can uninstall DAEMON Tools Lite from your PC by following these steps:</li>
-<ol>
-<li>Go to the Control Panel of your PC and click on Programs and Features.</li>
-<li>Select DAEMON Tools Lite from the list of programs and click on Uninstall.</li>
-<li>Follow the instructions on the screen to complete the uninstallation process.</li>
-</ol>
-</ul>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ecmg Explorer Free Download How to Install and Use It Effectively.md
DELETED
@@ -1,123 +0,0 @@
-
-<h1>Ecmg Explorer Free Download: A Simple Guide</h1>
-<p>If you are looking for a simple and effective way to create and validate rules for bulk data migration, you might want to check out Ecmg Explorer. Ecmg Explorer is a tool that allows you to work with ECMG's Content Definition (CDF) and Content Transformation (CTF) files, which are used to define the source and target data structures, mappings, transformations, and validations for data migration. Ecmg Explorer also lets you view Repository Information Files (RIFs), which contain information about the source and target repositories. In this article, we will explain what Ecmg Explorer is, why you need it, how to download and install it, and how to use it.</p>
-<h2>What is Ecmg Explorer?</h2>
-<p>Ecmg Explorer is a tool developed by ECMG, LLC, a company that specializes in enterprise content management (ECM) solutions. ECMG offers a range of products and services for data migration, data governance, data quality, data integration, and data analytics. One of their products is ECMG Reader, which is a simple reader for ECMG content. However, if you want to create and validate your own data migration rules, you will need Ecmg Explorer.</p>
-<h2>Ecmg explorer free download</h2><br /><p><b><b>Download</b> ➡ <a href="https://byltly.com/2uKyP7">https://byltly.com/2uKyP7</a></b></p><br /><br />
-<h3>Ecmg Explorer is a tool for creating and validating rules for bulk data migration</h3>
-<p>Data migration is the process of moving data from one system or repository to another. Data migration can be challenging because different systems or repositories may have different data structures, formats, standards, quality levels, and security requirements. To ensure a successful data migration, you need to define clear rules that specify how the source data should be transformed and validated before being loaded into the target system or repository. These rules are stored in CDF and CTF files.</p>
-<h3>Ecmg Explorer supports Content Definition (CDF) and Content Transformation (CTF) files</h3>
-<p>CDF and CTF files are XML-based files that contain the metadata and logic for data migration. A CDF file defines the source and target data structures, such as tables, columns, fields, indexes, keys, constraints, etc. A CTF file defines the mappings, transformations, validations, filters, etc., that should be applied to the source data before loading it into the target system or repository. For example, a CTF file can specify how to convert dates from one format to another, how to handle null values or duplicates, how to perform calculations or lookups, how to check data quality or integrity, etc.</p>
-<h3>Ecmg Explorer can also display Repository Information Files (RIFs)</h3>
-<p>A RIF file is an XML-based file that contains information about the source or target system or repository. A RIF file can include information such as the name, type, version, location, credentials, schema, etc., of the system or repository. A RIF file can also include information about the content types, properties, metadata models, security models, etc., of the system or repository. A RIF file can help you understand the characteristics and requirements of the source or target system or repository.</p>
-<h2>Why do you need Ecmg Explorer?</h2>
-<p>Ecmg Explorer is a useful tool for anyone who needs to perform bulk data migration between different systems or repositories. Whether you are an IT professional, a business analyst, a project manager, or a content manager,<strong> here are some reasons why you need Ecmg Explorer:</strong></p>
-<h3>Ecmg Explorer helps you to review, audit, and test your data migration rules</h3>
-<p>Ecmg Explorer allows you to open, edit, and save CDF and CTF files using a graphical editor. You can easily create, modify, or delete data structures, mappings, transformations, validations, etc., using drag-and-drop, context menus, or keyboard shortcuts. You can also view the XML code of the CDF and CTF files if you prefer.</p>
-<p>Ecmg Explorer also allows you to validate and test your CDF and CTF files using a simulator. You can run simulations on sample data sets or real data sets to check if your data migration rules work as expected. You can view the simulation results in tabular or graphical formats, and export them as CSV or XML files. You can also view error logs, warnings, or messages generated by the simulator.</p>
-<p>How to get Ecmg explorer for free<br />
-Ecmg explorer free trial download<br />
-Ecmg explorer free download full version<br />
-Ecmg explorer free download for Windows 10<br />
-Ecmg explorer free download for Mac<br />
-Ecmg explorer free download for Linux<br />
-Ecmg explorer free download for Android<br />
-Ecmg explorer free download for iOS<br />
-Ecmg explorer free download crack<br />
-Ecmg explorer free download serial key<br />
-Ecmg explorer free download with license<br />
-Ecmg explorer free download no survey<br />
-Ecmg explorer free download no virus<br />
-Ecmg explorer free download safe<br />
-Ecmg explorer free download latest version<br />
-Ecmg explorer free download 2023<br />
-Ecmg explorer free download offline installer<br />
-Ecmg explorer free download zip file<br />
-Ecmg explorer free download rar file<br />
-Ecmg explorer free download setup file<br />
-Ecmg explorer free download portable<br />
-Ecmg explorer free download software<br />
-Ecmg explorer free download tool<br />
-Ecmg explorer free download utility<br />
-Ecmg explorer free download app<br />
-Ecmg explorer free download online<br />
-Ecmg explorer free download web-based<br />
-Ecmg explorer free download cloud-based<br />
-Ecmg explorer free download features<br />
-Ecmg explorer free download benefits<br />
-Ecmg explorer free download advantages<br />
-Ecmg explorer free download disadvantages<br />
-Ecmg explorer free download pros and cons<br />
-Ecmg explorer free download review<br />
-Ecmg explorer free download testimonials<br />
-Ecmg explorer free download ratings<br />
-Ecmg explorer free download comparison<br />
-Ecmg explorer free download alternatives<br />
-Ecmg explorer free download competitors<br />
-Ecmg explorer free download vs other explorers<br />
-Ecmg explorer free download tutorial<br />
-Ecmg explorer free download guide<br />
-Ecmg explorer free download manual<br />
-Ecmg explorer free download instructions<br />
-Ecmg explorer free download tips and tricks<br />
-Ecmg explorer free download best practices<br />
-Ecmg explorer free download use cases<br />
-Ecmg explorer free download examples<br />
-How to use eCMG Explorer for data analysis and visualization?</p>
-<h3>Ecmg Explorer provides an intuitive and easy-to-use interface</h3>
-<p>Ecmg Explorer has a user-friendly interface that makes it easy to navigate and use. You can access all the features and functions of Ecmg Explorer from the main menu or toolbar. You can also customize your workspace by resizing or rearranging the windows or panels according to your preference.</p>
-<p>Ecmg Explorer also has a built-in help system that provides detailed information and instructions on how to use Ecmg Explorer. You can access the help system by clicking on the Help menu or pressing F1 on your keyboard.</p>
-<h3>Ecmg Explorer can handle complex data transformations and validations</h3>
-<p>Ecmg Explorer supports a wide range of data transformations and validations that can handle complex data scenarios and requirements. For example, you can use Ecmg Explorer to:</p>
-<ul>
-<li>Convert data types or formats</li>
-<li>Perform arithmetic or logical operations</li>
-<li>Apply conditional or looping logic</li>
-<li>Use variables or constants</li>
-<li>Use functions or expressions</li>
-<li>Perform lookups or joins</li>
-<li>Filter or sort data</li>
-<li>Check data quality or integrity</li>
-<li>Enforce business rules or constraints</li>
-<li>Handle errors or exceptions</li>
-</ul>
-<h2>How to download and install Ecmg Explorer?</h2>
-<p>If you are interested in downloading and installing Ecmg Explorer on your computer,<strong> here are the steps you need to follow:</strong></p>
-<h3>Ecmg Explorer is available for Windows operating systems</h3>
-<p>Ecmg Explorer is compatible with Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, and Windows Server 2008/2012/2016/2019 operating systems.<strong> You need to have at least 512 MB of RAM and 100 MB of free disk space on your computer.</strong></p>
-<h3>Ecmg Explorer can be downloaded from the official website of ECMG, LLC</h3>
-<p>You can download Ecmg Explorer from <a href="https://www.ecmg.com/products/ecmg-explorer/">https://www.ecmg.com/products/ecmg-explorer/</a>, which is the official website of ECMG, LLC.<strong> You need to fill out a form with your name, email address, company name, phone number, and country.</strong> After submitting the form, you will receive an email with a link to download Ecmg Explorer.<strong> The download size is about 50 MB.</strong></p>
-<h3>Ecmg Explorer requires a license key to activate</h3>
-<p>After downloading Ecmg Explorer,<strong> you need to run the setup file and follow the installation wizard.</strong> You can choose the default settings.</p> <p>After installing Ecmg Explorer, you need to activate it using a license key. You can obtain a license key from ECMG, LLC by contacting them at <a href="mailto:[email protected]">[email protected]</a> or calling them at +1 888-326-4261. You can also request a free trial license key for 30 days. To activate Ecmg Explorer, you need to enter the license key in the activation window that appears when you launch Ecmg Explorer for the first time.</p>
-<h2>How to use Ecmg Explorer?</h2>
-<p>Once you have downloaded, installed, and activated Ecmg Explorer, you can start using it to create and validate your data migration rules. Here are some basic steps on how to use Ecmg Explorer:</p>
-<h3>Ecmg Explorer allows you to create and edit CDF and CTF files using a graphical editor</h3>
-<p>To create a new CDF or CTF file, you need to click on the File menu and select New. You can then choose the type of file you want to create (CDF or CTF) and enter a name and location for the file. You can also open an existing CDF or CTF file by clicking on the File menu and selecting Open.</p>
-<p>To edit a CDF or CTF file, you need to use the graphical editor that appears in the main window of Ecmg Explorer. The graphical editor consists of three panels: the Project Explorer panel, the Editor panel, and the Properties panel. The Project Explorer panel shows the structure and components of the CDF or CTF file. The Editor panel shows the graphical representation of the data structures, mappings, transformations, validations, etc. The Properties panel shows the properties and attributes of the selected component.</p>
-<p>You can use the graphical editor to add, modify, or delete any component of the CDF or CTF file. You can use drag-and-drop, context menus, or keyboard shortcuts to perform various actions. For example, you can drag and drop tables, columns, fields, etc., from the Project Explorer panel to the Editor panel to create data structures. You can also drag and drop connectors, functions, expressions, etc., from the toolbar to the Editor panel to create mappings, transformations, validations, etc. You can also right-click on any component in the Editor panel or the Project Explorer panel to access context menus that allow you to edit, delete, copy, paste, rename, etc., the component. You can also use keyboard shortcuts such as Ctrl+C (copy), Ctrl+V (paste), Ctrl+Z (undo), Ctrl+Y (redo), etc., to perform various actions.</p>
97 |
-
<p>You can use the graphical editor to add, modify, or delete any component of the CDF or CTF file. You can use drag-and-drop, context menus, or keyboard shortcuts to perform various actions. For example, you can drag and drop tables, columns, fields, etc., from the Project Explorer panel to the Editor panel to create data structures. You can also drag and drop connectors, functions, expressions, etc., from the toolbar to the Editor panel to create mappings, transformations, validations, etc. You can also right-click on any component in the Editor panel or the Project Explorer panel to access context menus that allow you to edit, delete, copy, paste, rename, etc., the component. You can also use keyboard shortcuts such as Ctrl+C (copy), Ctrl+V (paste), Ctrl+Z (undo), Ctrl+Y (redo), etc., to perform various actions.</p>
|
98 |
-
<h3>Ecmg Explorer allows you to validate and test your CDF and CTF files using a simulator</h3>
<p>To validate and test your CDF and CTF files, use the simulator integrated with Ecmg Explorer. It lets you run simulations on sample or real data sets to check whether your data migration rules work as expected.</p>
<p>To run a simulation, click the Simulator menu and select Run Simulation. Choose the source and target systems or repositories for the simulation — either the sample data sets provided by Ecmg Explorer or real data sets stored in your local or remote systems — and select the CDF and CTF files to use.</p>
<p>After running a simulation, you can view the results in different formats. View them in tabular form by clicking the Simulator menu and selecting View Results in Table, in graphical form by selecting View Results in Graph, or export them as CSV or XML files by selecting Export Results.</p>
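<p>Exported CSV results can be post-processed with a short script. A minimal sketch follows; the column names (<code>record_id</code>, <code>status</code>, <code>message</code>) are hypothetical — the actual layout of an Ecmg Explorer export depends on your CDF/CTF definitions — but the pattern of loading the export and counting validation failures is the same:</p>

```python
import csv
import io

# Hypothetical excerpt of an exported simulation results file; the real
# column names depend on your CDF/CTF definitions and product version.
sample_export = """record_id,status,message
1001,PASS,
1002,FAIL,Missing target field
1003,PASS,
"""

def summarize(export_text):
    """Count total and failed records in an exported results CSV."""
    rows = list(csv.DictReader(io.StringIO(export_text)))
    failures = [r for r in rows if r["status"] == "FAIL"]
    return len(rows), len(failures)

total, failed = summarize(sample_export)
print(f"{failed} of {total} records failed validation")  # 1 of 3
```

<p>In practice you would open the exported file with <code>open(path, newline="")</code> instead of the inline sample string.</p>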
<p>You can also view the error logs, warnings, and messages generated by the simulator by clicking the Simulator menu and selecting View Logs.</p>
<h2>Conclusion</h2>
<p>Ecmg Explorer is a powerful and versatile tool for creating and validating rules for bulk data migration. It supports Content Definition (CDF) and Content Transformation (CTF) files, which define the source and target data structures, mappings, transformations, and validations for a migration, as well as Repository Information Files (RIFs), which describe the source and target systems or repositories.</p>
<p>Ecmg Explorer helps you review, audit, and test your data migration rules using a graphical editor and a simulator. Its intuitive interface is simple to navigate, and it can handle complex data scenarios and requirements with a wide range of transformations and validations.</p>
<p>Ecmg Explorer is available for Windows operating systems and can be downloaded from the official website of ECMG, LLC. It requires a license key to activate, which can be obtained from ECMG, LLC by email or phone.</p>
<p>If you are looking for a simple and effective way to create and validate rules for bulk data migration, you might want to check out Ecmg Explorer.</p>
<h3>FAQs</h3>
<ul>
<li><strong>What is ECMG?</strong></li>
<li>ECMG is a company that specializes in enterprise content management (ECM) solutions. It offers products and services for data migration, data governance, data quality, data integration, and data analytics.</li>
<li><strong>What is ECMG Reader?</strong></li>
<li>ECMG Reader is a simple viewer for ECMG content. It lets you view CDF, CTF, and RIF files, but not create or edit them.</li>
<li><strong>What is ECMG Explorer?</strong></li>
<li>ECMG Explorer is a tool for creating and validating rules for bulk data migration. It lets you create and edit CDF and CTF files in a graphical editor, validate and test them with a simulator, and view RIF files.</li>
<li><strong>How much does ECMG Explorer cost?</strong></li>
<li>The price depends on the number of licenses and the duration of usage. Contact ECMG, LLC at <a href="mailto:[email protected]">[email protected]</a> or +1 888-326-4261 for a quote.</li>
<li><strong>How can I get a free trial of ECMG Explorer?</strong></li>
<li>You can request a free 30-day trial license key by contacting ECMG, LLC at <a href="mailto:[email protected]">[email protected]</a> or +1 888-326-4261.</li>
</ul>
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Enhance Your Vocals with RVox Plugin Free Trial for Windows 10.md
DELETED
@@ -1,31 +0,0 @@
<h1>How to Get RVox Plugin Free Download for Windows 10</h1>
<p>If you are looking for a simple and effective way to enhance your vocals, you should try the RVox plugin. RVox is a vocal compressor and gate plugin that helps you achieve a smooth, consistent vocal sound with minimal tweaking. It is part of the Waves Renaissance bundle, a collection of high-quality audio plugins for mixing and mastering. If you don't want to buy the whole bundle, you can get the RVox plugin free download for Windows 10 by following these steps.</p>
<h2>rvox plugin free download windows 10</h2><br /><p><b><b>Download File</b> » <a href="https://byltly.com/2uKyDW">https://byltly.com/2uKyDW</a></b></p><br /><br />
<p>Step 1: Go to the official Waves website and create an account. Provide your name, email address, and password, then verify your email address by clicking the link that is sent to you.</p>
<p>Step 2: Once you have created your account, go to the Products page and scroll down to the Renaissance section. You will see a list of the plugins included in the bundle, such as RCompressor, RVerb, RDeEsser, and RVox. Click on the RVox plugin to open its product page.</p>
<p>Step 3: On the RVox product page, click the "Get Demo" button and log in with your Waves account. You will see a confirmation message that says "Your demo license has been activated", and you will receive an email with a download link and installation instructions.</p>
<p>Step 4: Download the installer file from the link in the email and run it on your Windows 10 computer. Follow the on-screen instructions and choose the option to install only the RVox plugin. You will also need to install Waves Central, the software that manages your Waves licenses and plugins.</p>
<p>Step 5: After installing the RVox plugin and Waves Central, launch your DAW (digital audio workstation) and scan for new plugins. You should find RVox in your plugin list and be able to use it on your vocal tracks. You will have 7 days to try the plugin for free before the demo expires.</p>
<p>That's it! You have successfully got the RVox plugin free download for Windows 10. If you like the plugin and want to keep using it after the trial period, you can buy it from the Waves website or from any authorized dealer. You can also buy the whole Renaissance bundle if you want more plugins for your music production. Enjoy!</p>
<h2>How to Use RVox Plugin on Your Vocals</h2>
<p>The RVox plugin is very easy to use and has only three main controls: Gate, Compressor, and Gain. Here is a brief explanation of what each control does and how to adjust it for your vocals.</p>
<p>Gate: Sets the threshold level below which the plugin mutes the signal. This is useful for reducing background noise and unwanted sounds in your vocal recordings. Turn the gate knob left or right to adjust the threshold level, or turn the gate off with the power button next to the knob.</p>
<p>Compressor: Sets the amount of compression applied to the signal. Compression reduces the dynamic range of the signal, making loud parts quieter and quiet parts louder, which evens out the volume and makes the vocals more consistent and clear. Turn the compressor knob to adjust the amount of compression, or turn it off with its power button.</p>
<p>Gain: Sets the output level of the plugin, letting you adjust the overall volume of your vocals after the gate and compressor have been applied. Turn the gain knob to adjust the output level, or turn it off with its power button.</p>
<p>RVox also has a meter that shows the input and output levels of the signal, as well as the amount of gain reduction the compressor is applying. Use it to monitor your levels and avoid clipping or distortion.</p>
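<p>RVox's actual DSP is proprietary, but the three-stage signal path its controls describe — gate, then compressor, then output gain — can be sketched in a few lines on normalized audio samples. The threshold and ratio values below are made up for illustration:</p>

```python
def process_sample(s, gate_thresh=0.02, comp_thresh=0.5, ratio=4.0, gain=1.2):
    """Apply a hard gate, a simple compressor, then make-up gain to one sample."""
    level = abs(s)
    if level < gate_thresh:
        return 0.0                      # gate: mute everything below the threshold
    if level > comp_thresh:
        # compressor: shrink the part of the level above the threshold by the ratio
        level = comp_thresh + (level - comp_thresh) / ratio
    out = level if s >= 0 else -level   # restore the sample's sign
    return out * gain                   # gain: set the overall output level

processed = [process_sample(s) for s in [0.01, 0.3, -0.9]]
print([round(x, 3) for x in processed])  # [0.0, 0.36, -0.72]
```

<p>A real plugin smooths these decisions over time (attack and release) rather than switching per sample, but the order of the stages matches the controls described above.</p>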
<h2>Why RVox Plugin is Great for Your Vocals</h2>
<p>RVox is one of the best plugins for vocal processing because it offers several benefits:</p>
<ul>
<li>It is simple and intuitive to use, with only three main controls and no complicated settings.</li>
<li>It works well on any type of vocals, from soft and smooth to loud and aggressive.</li>
<li>It handles both mono and stereo signals, making it suitable for single or doubled vocals.</li>
<li>It can be used on its own or combined with other plugins for more creative effects.</li>
<li>It has low CPU usage and does not affect your system performance.</li>
</ul>
<p>If you are looking for a plugin that can make your vocals sound better with minimal effort, you should definitely try RVox. You will be amazed by how much it can improve your vocal sound and quality.</p>
spaces/1acneusushi/gradio-2dmoleculeeditor/data/HAP Program Download Everything You Need to Know About This Powerful HVAC Software.md
DELETED
@@ -1,36 +0,0 @@
<h1>How to Download and Install HAP Program for HVAC Design</h1>
<p>HAP (Hourly Analysis Program) is a software tool that helps you design and analyze commercial building HVAC systems. It is developed by Carrier, one of the leading manufacturers of HVAC equipment. HAP can help you with tasks such as load estimation, energy simulation, equipment selection, and cost analysis. In this article, we will show you how to download and install the HAP program on your computer.</p>
<h2>Step 1: Visit Carrier's Website</h2>
<p>The first step is to visit Carrier's website at <a href="https://www.carrier.com/commercial/en/us/software/hvac-system-design/software-downloads/">https://www.carrier.com/commercial/en/us/software/hvac-system-design/software-downloads/</a>. Here you will find the latest versions of HAP and other eDesign programs from Carrier, along with release sheets and other resource material for each program.</p>
<h2>hap program download</h2><br /><p><b><b>Download File</b> ☆☆☆☆☆ <a href="https://byltly.com/2uKxhh">https://byltly.com/2uKxhh</a></b></p><br /><br />
<h2>Step 2: Choose the Program You Want to Download</h2>
<p>Next, choose the program you want to download. For HAP, you have two options: HAP v6.0 and HAP v4.16. The latest version is HAP v6.0, which has more features and capabilities than the previous version. However, if you have an older license key, or if you want to use a legacy program that is only compatible with HAP v4.16, you can choose that option instead.</p>
<h2>Step 3: Enter Your License Key or Access Code</h2>
<p>The third step is to enter your license key or access code to download the program. If you are a licensed Carrier customer, use your license key to download HAP v6.0 or any other eDesign program. If you are a US or Canadian customer, you can use your access code to download HAP v4.16 or any other legacy program. If you have neither, you can request a free 60-day trial of the eDesign software from Carrier's website.</p>
<h2>Step 4: Run the Installer and Follow the Instructions</h2>
<p>Finally, run the installer and follow the on-screen instructions. The installer will guide you through the installation process, asking you to accept the terms and conditions, choose the destination folder, and select the components to install. Once the installation is complete, you can launch HAP from your desktop or start menu.</p>
<h2>Conclusion</h2>
<p>HAP is a powerful and user-friendly software tool for designing and analyzing commercial building HVAC systems. It can help you optimize system performance, energy efficiency, and cost-effectiveness. To download and install HAP, visit Carrier's website, choose the program version, enter your license key or access code, and run the installer. If you need more help or information about HAP or other Carrier software tools, visit their website or contact their support team.</p>
<h2>Benefits of Using HAP Program for HVAC Design</h2>
<p>HAP offers many benefits for HVAC designers and engineers:</p>
<ul>
<li>It performs both design and simulation tasks, allowing you to compare design alternatives and evaluate their performance and energy impacts.</li>
<li>It handles complex and diverse HVAC systems, including variable air volume, chilled water, hot water, steam, heat recovery, and renewable energy systems.</li>
<li>It models various building types and zones, including offices, retail stores, hotels, schools, hospitals, and industrial facilities.</li>
<li>It uses weather data from over 8,000 locations around the world, or you can create your own custom weather data.</li>
<li>It generates detailed reports and graphs of your analysis results, such as load profiles, energy consumption, peak demand, utility costs, emissions, and LEED credits.</li>
</ul>
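<p>Load estimation is one of the calculations HAP automates. As a standalone point of reference (not HAP's internal method), the classic textbook formula for the sensible cooling load carried by an airstream under standard-air assumptions is Q = 1.08 × CFM × ΔT, in BTU/hr:</p>

```python
def sensible_load_btuh(airflow_cfm, delta_t_f):
    """Sensible heat carried by an airstream, in BTU/hr.

    Uses the standard-air approximation Q = 1.08 * CFM * dT (deg F).
    """
    return 1.08 * airflow_cfm * delta_t_f

# e.g. 1,000 CFM of supply air across a 20 degF temperature difference
print(sensible_load_btuh(1000, 20))  # 21600.0
```

<p>Tools like HAP go far beyond this single-formula sketch by simulating loads hour by hour over a full weather year.</p>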
<h2>How to Learn and Use HAP Program for HVAC Design</h2>
<p>HAP is designed to be easy to learn and use. It has a user-friendly interface that guides you through the steps of creating and running a project, and a comprehensive help system that provides explanations and examples for each feature and function. Here are some ways to learn and use HAP:</p>
<ul>
<li>Watch the tutorial videos that show you how to perform common tasks and operations in HAP.</li>
<li>Read the user manual, which covers all aspects of HAP in detail.</li>
<li>Attend online or in-person training courses that teach you how to use HAP effectively and efficiently.</li>
<li>Access the documentation menu, which contains reference materials and standards related to HVAC design and analysis.</li>
<li>Contact the technical support team, who can answer your questions and solve your problems.</li>
</ul>
spaces/1gistliPinn/ChatGPT4/Examples/Avenged Sevenfold 10 Multitracks OGG.md
DELETED
@@ -1,17 +0,0 @@
<h2>Avenged Sevenfold 10 Multitracks OGG</h2><br /><p><b><b>DOWNLOAD</b> • <a href="https://imgfil.com/2uxXlX">https://imgfil.com/2uxXlX</a></b></p><br /><br />
Download You Dont Mess With The Zohan Mp4 Mp3
In this video I will make my channel and my group in VK.
How to create a channel and group in VK.
How to promote a channel on YouTube.
How to make a group in VK.
How to make a YouTube channel.
How to create a YouTube channel.
Hello!
My name is Pavel Burenok.
I have been running a YouTube channel since 2013. As I said in this video, I will make my own channel and my group in VK.
How to make a YouTube channel.
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Challenge Yourself with Dunk Shot APK Day and Earn Points for Every Basket.md
DELETED
@@ -1,86 +0,0 @@
<h1>Dunk Shot APK Dayı: A Fun and Addictive Basketball Game for Android</h1>
<p>If you love basketball and arcade games, you will love Dunk Shot APK Dayı. This is a modified version of the popular game Dunk Shot by Ketchapp, which lets you shoot hoops with a free-falling basketball. You earn points for every basket you make, but each basket is higher and harder to reach than the last. You also unlock new balls, skins, and backgrounds as you play. In this article, we will cover everything you need to know about Dunk Shot APK Dayı: how to play it, how to download and install it, how to improve your skills and score higher, and what other players think about it.</p>
<h2>How to Play Dunk Shot APK Dayı</h2>
<h3>The Basics: Drag, Aim and Shoot</h3>
<p>The gameplay of Dunk Shot APK Dayı is simple and intuitive. Drag your finger on the screen to aim the ball towards the next basket; the longer you drag, the more power you give to your shot. A dotted line shows the trajectory of the ball. When you release your finger, the ball flies towards the basket. If you make it, you earn points and move on to the next basket. If you miss, the ball falls and you have to start over.</p>
<h2>dunk shot apk dayı</h2><br /><p><b><b>Download Zip</b> ➡ <a href="https://urlin.us/2uT1Qz">https://urlin.us/2uT1Qz</a></b></p><br /><br />
<h3>The Challenges: Moving Baskets, Perfect Shots and Combos</h3>
<p>As you progress, the baskets start moving in different directions and at different speeds, so you have to adjust your aim and timing accordingly. Some baskets are very close or very far from your current position, requiring extra precision and skill. You can also make perfect shots by sending the ball through the hoop without touching the rim. A perfect shot gives you double points and starts a combo: each consecutive perfect shot is worth more than the last. For example, three perfect shots in a row earn 2 + 3 + 4 = 9 points. Your ball also catches fire during a combo, which looks cool and gives you extra points.</p>
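<p>The combo arithmetic described above is easy to check: shot k of a streak is worth k + 1 points, so n consecutive perfect shots are worth 2 + 3 + … + (n + 1). A quick sketch:</p>

```python
def combo_points(n):
    """Total points for n consecutive perfect shots: 2 + 3 + ... + (n + 1)."""
    return sum(k + 1 for k in range(1, n + 1))

print(combo_points(3))  # 9, matching the 2 + 3 + 4 example
print(combo_points(1))  # 2: a single perfect shot scores double points
```

<p>Because the reward grows with every shot in the streak, long perfect-shot runs dominate your score far more than many isolated baskets.</p>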
<h3>The Rewards: Stars, Skins and Missions</h3>
<p>As you play Dunk Shot APK Dayı, you collect stars that you can spend on new balls and skins. There are over 40 different balls and skins to choose from, each with its own design and sound effects. You can also get free stars by watching ads or opening the gifts that appear randomly on the screen, and complete missions that challenge you to achieve certain goals, such as scoring a certain number of points or making a certain number of perfect shots. Completing missions rewards you with more stars or special balls.</p>
<p>dunk shot game download<br />
dunk shot mod apk unlimited money<br />
dunk shot online play<br />
dunk shot basketball game<br />
dunk shot apk pure<br />
dunk shot ketchapp<br />
dunk shot crazy games<br />
dunk shot apk mod<br />
dunk shot app store<br />
dunk shot unblocked<br />
dunk shot hack apk<br />
dunk shot cheats<br />
dunk shot tips and tricks<br />
dunk shot free fire<br />
dunk shot challenge<br />
dunk shot world record<br />
dunk shot highest score<br />
dunk shot best ball<br />
dunk shot 2 apk<br />
dunk shot 3d apk<br />
dunk shot android oyun club<br />
dunk shot apk indir<br />
dunk shot apk hile<br />
dunk shot apk son sürüm<br />
dunk shot apk para hilesi<br />
dunk shot apk güncel<br />
dunk shot apk dayı indir<br />
dunk shot apk dayı hileli<br />
dunk shot apk dayı mod menu<br />
dunk shot apk dayı son sürüm indir<br />
dunk shot apk dayı güncel hileli indir<br />
dunk shot apk dayı altın hilesi indir<br />
dunk shot apk dayı elmas hilesi indir<br />
dunk shot apk dayı can hilesi indir<br />
dunk shot apk dayı reklamsız indir<br />
dunk shot apk dayı sınırsız para indir<br />
dunk shot apk dayı sınırsız top indir<br />
dunk shot apk dayı sınırsız seviye indir<br />
dunk shot apk dayı sınırsız skor indir<br />
dunk shot apk dayı sınırsız süre indir<br />
dunk shot apk dayı sınırsız enerji indir<br />
dunk shot apk dayı sınırsız yükseltme indir<br />
dunk shot apk dayı sınırsız özellik indir<br />
dunk shot apk dayı sınırsız mod indir<br />
dunk shot apk dayı sınırsız oyun indir</p>
<h2>How to Download and Install Dunk Shot APK Dayı</h2>
<h3>The Steps: Find, Download and Install the APK File</h3>
<p>If you want to play Dunk Shot APK Dayı on your Android device, you will need to download and install an APK file. An APK file is an application package that contains all the data of the app you want to install. You can find the APK file for Dunk Shot APK Dayı on websites that offer free and safe downloads of Android apps, such as APKPure, APKMirror or APKDayı. Here are the steps to download and install it:</p>
<ul>
<li>Go to one of the websites that offer Dunk Shot APK Dayı and search for the app, or use the direct link to the download page on APKDayı.</li>
<li>Tap the download button and wait for the file to be downloaded to your device. You may need to allow downloads from unknown sources in your device settings.</li>
<li>Once the file is downloaded, tap on it to start the installation. You may need to grant some permissions to the app, such as access to your storage, network and device information.</li>
<li>Follow the instructions on the screen and wait for the installation to finish. You will see a confirmation message when the app is installed successfully.</li>
<li>Tap the app icon to launch it and enjoy playing Dunk Shot APK Dayı.</li>
</ul>
<h3>The Benefits: No Ads, Unlimited Stars and All Skins Unlocked</h3>
<p>One of the main reasons to download and install Dunk Shot APK Dayı is that it offers benefits you won't get in the original version of Dunk Shot by Ketchapp:</p>
<ul>
<li>No ads: You won't see any annoying ads popping up while you play, which makes the gaming experience smoother and more enjoyable.</li>
<li>Unlimited stars: You have unlimited stars to spend on new balls and skins, so you don't have to watch ads or complete missions to earn more.</li>
<li>All skins unlocked: You have access to every skin in the game without having to unlock them. You can choose any skin you like and change it at any time.</li>
</ul>
<h3>The Risks: Malware, Data Loss and Legal Issues</h3>
<p>However, downloading and installing Dunk Shot APK Dayı also comes with risks that you should be aware of:</p>
<ul>
<li>Malware: The APK file may contain malware or viruses that can harm your device or steal your data. Always scan the file with a reliable antivirus before opening it, and only download from trusted sources that guarantee safe and clean downloads.</li>
<li>Data loss: The app may not work properly on your device, causing errors or crashes that can result in losing your progress, settings or preferences. Always back up your data before installing any app that is not from the official Google Play Store.</li>
<li>Legal issues: The app may violate the terms of service or intellectual property rights of the original developer or publisher of Dunk Shot by Ketchapp, which could lead to legal action against you or the website that provides the APK file. Always respect the rights of the creators and owners of the apps you use.</li>
</ul>
<h2>How to Improve Your Skills and Score Higher in Dunk Shot APK Dayı</h2>
<h3>The Tips: Master the Controls, Practice Perfect Shots and Collect More Balls</h3>
<p>If you want to improve your skills and score higher in Dunk Shot APK Dayı, practice and learn these tips:</p>
<ul>
<li>Master the controls: The most important thing is to master dragging, aiming and shooting. Practice until you can control the power and direction of your shots with ease and accuracy, and learn to adjust your aim to the movement and distance of the baskets.</li>
<li>Practice perfect shots: Perfect shots are essential for high scores. They give you double points and start combos that multiply your score. Practice until you can make perfect shots consistently without touching the rim, and learn to time your shots to the speed and angle of the baskets.</li>
<li>Collect more balls: More balls means more chances to score points. Collect as many as you can by making baskets or opening gifts, and try balls with different effects, such as fire balls, ice balls or rainbow balls.</li>
</ul>
<h3>The Tricks: Bounce Shots, Fire Balls and Leaderboards</h3>
<p>Besides practicing, you can use a few tricks to gain an edge:</p>
<ul>
<li>Bounce shots: Sometimes you can make a shot by bouncing the ball off the wall or another basket before reaching your target. This helps you avoid obstacles or reach high baskets that are otherwise hard to hit.</li>
<li>Fire balls: When your ball is on fire, it burns through any basket it touches, making it easier to score. Fire balls can also break the chains or nets that block your way. You get fire balls by making combos or buying them with stars.</li>
<li>Leaderboards: Compete with players around the world by checking the leaderboards, where you can see your rank, score and name globally or locally. You can also challenge your friends or other players to beat your score and earn bragging rights.</li>
</ul>
<h3>The Reviews: What Other Players Say About Dunk Shot APK Dayı</h3>
<p>If you are still not convinced that Dunk Shot APK Dayı is a fun and addictive game, here are some reviews that players have left on the websites that offer the APK file:</p>
<ul>
<li>"This game is awesome. I love the graphics, the sounds and the gameplay. It is very challenging and addictive. I can play it for hours without getting bored. The best part is that there are no ads and I can get unlimited stars and skins. I highly recommend this game to anyone who loves basketball and arcade games."</li>
<li>"I have been playing Dunk Shot for a long time and I really enjoy it. But when I found out about Dunk Shot APK Dayı, I was amazed. It is like a whole new game with more features and benefits. It is much more fun and exciting than the original version. I especially like the fire balls and the bounce shots. They make the game more dynamic and unpredictable."</li>
<li>"Dunk Shot APK Dayı is a great game for basketball fans and casual gamers alike. It is easy to play but hard to master. It has a lot of variety and content to keep you entertained. It also has a nice design and sound effects that make you feel like you are in a real basketball court. The only downside is that it may not work on some devices or cause some errors or crashes."</li>
</ul>
<h2>Conclusion</h2>
|
72 |
-
<p>Dunk Shot APK Dayı is a modified version of the popular game Dunk Shot by Ketchapp, which lets you shoot hoops with a free-falling basketball. It offers some benefits that you won't get in the original version, such as no ads, unlimited stars and all skins unlocked. However, it also comes with some risks, such as malware, data loss and legal issues. You should always be careful when downloading and installing any app that is not from the official Google Play Store. You should also practice and learn some tips and tricks that can help you improve your skills and score higher in the game. Dunk Shot APK Dayı is a fun and addictive game that you can play on your Android device anytime and anywhere.</p>
|
73 |
-
<h2>FAQs</h2>
|
74 |
-
<p>Here are some frequently asked questions and answers about Dunk Shot APK Dayı:</p>
|
75 |
-
<h4>Q: Is Dunk Shot APK Dayı free to play?</h4>
|
76 |
-
<p>A: Yes, Dunk Shot APK Dayı is free to play. You don't need to pay anything to download or install it. However, you may need to allow some permissions or grant some access to the app when you install it.</p>
|
77 |
-
<h4>Q: Is Dunk Shot APK Dayı safe to download and install?</h4>
|
78 |
-
<p>A: Dunk Shot APK Dayı may not be safe to download and install on your device. It may contain malware or viruses that can harm your device or steal your data. You should always scan the file with a reliable antivirus software before opening it. You should also only download from trusted sources that guarantee safe and clean downloads.</p>
|
79 |
-
<h4>Q: Is Dunk Shot APK Dayı legal to use?</h4>
|
80 |
-
<p>A: Dunk Shot APK Dayı may not be legal to use on your device. It may violate some terms of service or intellectual property rights of the original developer or publisher of Dunk Shot by Ketchapp. This may result in legal actions or penalties against you or the website that provides the APK file. You should always respect the rights of the creators and owners of the apps you use.</p>
|
81 |
-
<h4>Q: How can I get more stars in Dunk Shot APK Dayı?</h4>
|
82 |
-
<p>A: In Dunk Shot APK Dayı, you will have unlimited stars to spend on buying new balls and skins. You won't have to watch ads or complete missions to earn more stars.</p>
|
83 |
-
<h4>Q: How can I unlock all the skins in Dunk Shot APK Dayı?</h4>
|
84 |
-
<p>A: In Dunk Shot APK Dayı, you will have access to all the skins available in the game, without having to unlock them by collecting stars or completing missions. You can choose any skin you like and change it anytime.</p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Crazy Car Stunts - Mega Ramp MOD APK The Best Way to Test Your Driving Skills.md
DELETED
@@ -1,93 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Crazy Car Stunts Mega Ramp Mod Apk: A Review</h1>
|
3 |
-
<p>Do you love car games? Do you enjoy performing crazy stunts on mega ramps? If yes, then you should try Crazy Car Stunts Mega Ramp Mod Apk. This is a modified version of the original game that gives you unlimited money and access to all the cars and tracks. In this article, we will review this mod apk and tell you how to download and install it on your Android device.</p>
|
4 |
-
<h2>What is Crazy Car Stunts Mega Ramp Mod Apk?</h2>
|
5 |
-
<p>Crazy Car Stunts Mega Ramp Mod Apk is a racing game that lets you drive various cars on different tracks. You can perform amazing stunts, jumps, flips, and drifts on the mega ramps. You can also customize your cars and upgrade their performance. The game has realistic physics and graphics that make the gameplay more immersive and fun.</p>
|
6 |
-
<h2>crazy car stunts mega ramp mod apk</h2><br /><p><b><b>DOWNLOAD</b> ⭐ <a href="https://urlin.us/2uT1sF">https://urlin.us/2uT1sF</a></b></p><br /><br />
|
7 |
-
<h3>Features of Crazy Car Stunts Mega Ramp Mod Apk</h3>
|
8 |
-
<h4>Unlimited Money</h4>
|
9 |
-
<p>One of the best features of this mod apk is that it gives you unlimited money. You can use this money to buy any car you want, from sports cars to monster trucks. You can also use it to upgrade your cars and make them faster, stronger, and more agile. You don't have to worry about running out of money or watching ads to earn more.</p>
|
10 |
-
<h4>Various Cars and Tracks</h4>
|
11 |
-
<p>Another feature of this mod apk is that it unlocks all the cars and tracks in the game. You can choose from over 50 cars, each with its own unique design and features. You can also drive on over 100 tracks, each with its own challenges and obstacles. You can explore different environments, such as city, desert, snow, forest, and more.</p>
|
12 |
-
<h4>Realistic Physics and Graphics</h4>
|
13 |
-
<p>The last feature of this mod apk is that it enhances the physics and graphics of the game. The game has realistic physics that make the car movements and collisions more believable. The game also has stunning graphics that make the cars and tracks look more detailed and vivid. You can enjoy the game in high resolution and smooth frame rate.</p>
|
14 |
-
<h3>How to Download and Install Crazy Car Stunts Mega Ramp Mod Apk?</h3>
|
15 |
-
<p>If you want to download and install this mod apk on your Android device, you need to follow these simple steps:</p>
|
16 |
-
<h4>Step 1: Enable Unknown Sources</h4>
|
17 |
-
<p>First, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</p>
|
18 |
-
<h4>Step 2: Download the Apk File</h4>
|
19 |
-
<p>Next, you need to download the apk file of this mod apk from a reliable source. You can use this link to download it directly. The file size is about 100 MB, so make sure you have enough space on your device.</p>
|
20 |
-
<h4>Step 3: Install the Apk File</h4>
|
21 |
-
<p>Finally, you need to install the apk file on your device. To do this, locate the file in your file manager and tap on it. Follow the instructions on the screen to complete the installation process.</p>
|
22 |
-
<p>* crazy car stunts mega ramp mod apk unlimited money<br />
|
23 |
-
* crazy car stunts mega ramp mod apk download<br />
|
24 |
-
* crazy car stunts mega ramp mod apk latest version<br />
|
25 |
-
* crazy car stunts mega ramp mod apk android 1<br />
|
26 |
-
* crazy car stunts mega ramp mod apk free<br />
|
27 |
-
* crazy car stunts mega ramp mod apk hack<br />
|
28 |
-
* crazy car stunts mega ramp mod apk offline<br />
|
29 |
-
* crazy car stunts mega ramp mod apk revdl<br />
|
30 |
-
* crazy car stunts mega ramp mod apk 2023<br />
|
31 |
-
* crazy car stunts mega ramp mod apk no ads<br />
|
32 |
-
* crazy car stunts mega ramp mod apk apkpure<br />
|
33 |
-
* crazy car stunts mega ramp mod apk rexdl<br />
|
34 |
-
* crazy car stunts mega ramp mod apk unlimited everything<br />
|
35 |
-
* crazy car stunts mega ramp mod apk for pc<br />
|
36 |
-
* crazy car stunts mega ramp mod apk online<br />
|
37 |
-
* crazy car stunts mega ramp mod apk happymod<br />
|
38 |
-
* crazy car stunts mega ramp mod apk unlocked<br />
|
39 |
-
* crazy car stunts mega ramp mod apk 8.0<br />
|
40 |
-
* crazy car stunts mega ramp mod apk apkloli<br />
|
41 |
-
* crazy car stunts mega ramp mod apk obb<br />
|
42 |
-
* crazy car stunts mega ramp mod apk all cars unlocked<br />
|
43 |
-
* crazy car stunts mega ramp mod apk 7.0.1<br />
|
44 |
-
* crazy car stunts mega ramp mod apk 7.0.2<br />
|
45 |
-
* crazy car stunts mega ramp mod apk 7.0.3<br />
|
46 |
-
* crazy car stunts mega ramp mod apk 7.0.4<br />
|
47 |
-
* crazy car stunts mega ramp mod apk 7.0.5<br />
|
48 |
-
* crazy car stunts mega ramp mod apk 7.0.6<br />
|
49 |
-
* crazy car stunts mega ramp mod apk 7.0.7<br />
|
50 |
-
* crazy car stunts mega ramp mod apk 7.0.8<br />
|
51 |
-
* crazy car stunts mega ramp mod apk 7.0.9<br />
|
52 |
-
* crazy car stunts mega ramp mod apk 7.1.0<br />
|
53 |
-
* crazy car stunts mega ramp mod apk 7.1.1<br />
|
54 |
-
* crazy car stunts mega ramp mod apk 7.1.2<br />
|
55 |
-
* crazy car stunts mega ramp mod apk 7.1.3<br />
|
56 |
-
* crazy car stunts mega ramp mod apk 7.1.4<br />
|
57 |
-
* crazy car stunts mega ramp mod apk 7.1.5<br />
|
58 |
-
* crazy car stunts mega ramp mod apk 7.1.6<br />
|
59 |
-
* crazy car stunts mega ramp mod apk 7.1.7<br />
|
60 |
-
* crazy car stunts mega ramp mod apk 7.1.8<br />
|
61 |
-
* crazy car stunts mega ramp mod apk 7.1.9</p>
|
62 |
-
<h3>Pros and Cons of Crazy Car Stunts Mega Ramp Mod Apk</h3>
|
63 |
-
<h4>Pros</h4>
|
64 |
-
<ul>
|
65 |
-
<li>You can enjoy unlimited money and access to all cars and tracks.</li>
|
66 |
-
<li>You can perform crazy stunts on mega ramps.</li>
|
67 |
-
<li>You can customize your cars and upgrade their performance.</li>
|
68 |
-
<li>You can enjoy realistic physics and graphics.</li>
|
69 |
-
</ul>
|
70 |
-
<h4>Cons</h4>
|
71 |
-
<ul>
|
72 |
-
<li>You may face some compatibility issues with some devices.</li>
|
73 |
-
<li>You may encounter some bugs and glitches in the game.</li>
|
74 |
-
<li>You may lose the thrill of earning money and unlocking cars and tracks by yourself.</li>
|
75 |
-
</ul>
|
76 |
-
<h3>Conclusion</h3>
|
77 |
-
<p>Crazy Car Stunts Mega Ramp Mod Apk is a fun and exciting racing game that lets you perform crazy stunts on mega ramps. You can enjoy unlimited money and access to all cars and tracks. You can also customize your cars and upgrade their performance. The game has realistic physics and graphics that make the gameplay more immersive and fun. If you want to download and install this mod apk on your Android device, you can follow the steps we mentioned above. However, you should also be aware of the pros and cons of this mod apk before you decide to use it.</p>
|
78 |
-
<h2>FAQs</h2>
|
79 |
-
<p>Here are some frequently asked questions about Crazy Car Stunts Mega Ramp Mod Apk:</p>
|
80 |
-
<ol>
|
81 |
-
<li>Is Crazy Car Stunts Mega Ramp Mod Apk safe to use?</li>
|
82 |
-
<p>Yes, this mod apk is safe to use as long as you download it from a trusted source. However, you should always scan the file for viruses before installing it on your device.</p>
|
83 |
-
<li>Is Crazy Car Stunts Mega Ramp Mod Apk free to use?</li>
|
84 |
-
<p>Yes, this mod apk is free to use and does not require any subscription or registration. You can enjoy all the features of the game without spending any money.</p>
|
85 |
-
<li>Can I play Crazy Car Stunts Mega Ramp Mod Apk offline?</li>
|
86 |
-
<p>Yes, you can play this mod apk offline without any internet connection. However, you may not be able to access some online features, such as leaderboards and achievements.</p>
|
87 |
-
<li>Can I play Crazy Car Stunts Mega Ramp Mod Apk with my friends?</li>
|
88 |
-
<p>Yes, you can play this mod apk with your friends online or locally. You can challenge them to races and stunts on the mega ramps. You can also chat with them and share your scores and achievements.</p>
|
89 |
-
<li>Can I update Crazy Car Stunts Mega Ramp Mod Apk?</li>
|
90 |
-
<p>Yes, you can update this mod apk whenever there is a new version available. However, you may lose your progress and data if you update it from a different source. Therefore, it is recommended that you backup your data before updating it.</p>
|
91 |
-
</ol>
spaces/1phancelerku/anime-remove-background/Download Agar.io Mod Menu with Unlimited Coins DNA and God Mode.md
DELETED
@@ -1,115 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>How to Download Agar.io Mod Menu for Android Devices</h1>
|
3 |
-
<p>If you are a fan of Agar.io, you might be wondering how to download agar io mod menu for your Android device. A mod menu is a tool that allows you to access various cheats and hacks in the game, such as god mode, macro, zoom hack, auto feed, aim-bot, coins hack, and DNA hack. With these features, you can have more fun, dominate the game, customize your cell, and impress your friends. However, you also need to be careful, as using a mod menu can risk getting banned, ruin the game balance, and lose the challenge. In this article, we will show you how to download and install agar io mod menu for Android devices, as well as what are the features, pros, and cons of using it.</p>
|
4 |
-
<h2>download agar io mod menu</h2><br /><p><b><b>Download File</b> » <a href="https://jinyurl.com/2uNPZ5">https://jinyurl.com/2uNPZ5</a></b></p><br /><br />
|
5 |
-
<h2>What is Agar.io and Why You Need a Mod Menu</h2>
|
6 |
-
<h3>Agar.io is a popular online multiplayer game where you control a cell and try to eat other cells</h3>
|
7 |
-
<p>Agar.io is a simple but addictive online multiplayer game that was released in 2015. The game is inspired by agar, a substance used in microbiology to culture bacteria. In the game, you control a cell that can move around on a map filled with other cells. Your goal is to eat smaller cells and avoid being eaten by bigger cells. As you eat more cells, you grow bigger and can split into smaller cells to move faster or escape enemies. You can also eject some mass to feed other cells or shoot viruses that can split bigger cells. The game has different modes, such as FFA (free-for-all), Teams, Experimental, Party, Battle Royale, and Zombie. You can also customize your cell with different skins and names.</p>
|
8 |
-
<h3>A mod menu is a tool that gives you access to various cheats and hacks in the game</h3>
|
9 |
-
<p>A mod menu is a tool that modifies the game code to give you access to various cheats and hacks in the game. A mod menu usually comes in the form of an apk file that you need to download and install on your device. Once installed, you can launch the game and activate the mod menu features from a hidden menu. Some of the features that a mod menu can offer are god mode, macro, zoom hack, auto feed, aim-bot, coins hack, and DNA hack. These features can give you an advantage over other players or make the game more fun and interesting.</p>
|
10 |
-
<h2>How to Download and Install Agar.io Mod Menu</h2>
|
11 |
-
<h3>Step 1: Find a reliable source for the mod menu apk file</h3>
|
12 |
-
<p>The first step to download agar io mod menu is to find a reliable source for the mod menu apk file. There are many websites that claim to offer agar io mod menu apk files, but not all of them are safe or working. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also have outdated or fake mod menu apk files that do not work or cause errors. Therefore, you need to be careful and do some research before downloading any mod menu apk file. You can check the reviews, ratings, comments, and feedback from other users who have downloaded the mod menu apk file. You can also use antivirus software or online scanners to scan the mod menu apk file for any potential threats. One of the reliable sources that we recommend for agar io mod menu apk file is [this website].</p>
|
13 |
-
<h3>Step 2: Enable unknown sources on your device settings</h3>
|
14 |
-
<p>The second step to download agar io mod menu is to enable unknown sources on your device settings. This is because the mod menu apk file is not from the official Google Play Store, and your device may block the installation of apps from unknown sources by default. To enable unknown sources, you need to go to your device settings and look for the security or privacy option. Then, you need to find the option that says "allow installation of apps from unknown sources" or something similar. You need to toggle this option on and confirm your choice. This will allow you to install the mod menu apk file on your device.</p>
|
15 |
-
<p>download agar io mod menu for android<br />
|
16 |
-
download agar io mod menu for ios<br />
|
17 |
-
download agar io mod menu for pc<br />
|
18 |
-
download agar io mod menu apk<br />
|
19 |
-
download agar io mod menu ipa<br />
|
20 |
-
download agar io mod menu with godmode<br />
|
21 |
-
download agar io mod menu with zoom hack<br />
|
22 |
-
download agar io mod menu with coins hack<br />
|
23 |
-
download agar io mod menu with dna hack<br />
|
24 |
-
download agar io mod menu with aim-bot<br />
|
25 |
-
download agar io mod menu with macro<br />
|
26 |
-
download agar io mod menu with auto feed<br />
|
27 |
-
download agar io mod menu with double split<br />
|
28 |
-
download agar io mod menu with minimap<br />
|
29 |
-
download agar io mod menu with chat<br />
|
30 |
-
download agar io mod menu with themes<br />
|
31 |
-
download agar io mod menu with fps booster<br />
|
32 |
-
download agar io mod menu with helpers<br />
|
33 |
-
download agar io mod menu with unlimited zoom<br />
|
34 |
-
download agar io mod menu with free coins<br />
|
35 |
-
download agar io mod menu delta extension<br />
|
36 |
-
download agar io mod menu agartool m pelea<br />
|
37 |
-
download agar io mod menu agario hacks<br />
|
38 |
-
download agar io mod menu agario vanilla<br />
|
39 |
-
download agar io mod menu pfmacro sigmally<br />
|
40 |
-
download agar io mod menu 2019 bots agario clones games<br />
|
41 |
-
download agar io mod menu zoom script by nel99<br />
|
42 |
-
download agar io mod menu free bots vanilla version<br />
|
43 |
-
download agar io mod menu free bots ogario version<br />
|
44 |
-
download agar io mod menu invert all colors<br />
|
45 |
-
download agar io mod menu core mutation by zagar<br />
|
46 |
-
download agar io mod menu the ultimate agario script<br />
|
47 |
-
how to install agar io mod menu on android<br />
|
48 |
-
how to install agar io mod menu on ios<br />
|
49 |
-
how to install agar io mod menu on pc<br />
|
50 |
-
how to use agar io mod menu on android<br />
|
51 |
-
how to use agar io mod menu on ios<br />
|
52 |
-
how to use agar io mod menu on pc<br />
|
53 |
-
where to find agar io mod menu for android<br />
|
54 |
-
where to find agar io mod menu for ios<br />
|
55 |
-
where to find agar io mod menu for pc<br />
|
56 |
-
where to find agar io mod menu apk<br />
|
57 |
-
where to find agar io mod menu ipa<br />
|
58 |
-
best site to download agar io mod menu for android<br />
|
59 |
-
best site to download agar io mod menu for ios<br />
|
60 |
-
best site to download agar io mod menu for pc<br />
|
61 |
-
best site to download agar io mod menu apk <br />
|
62 |
-
best site to download agar io mod menu ipa</p>
|
63 |
-
<h3>Step 3: Download and install the mod menu apk file</h3>
|
64 |
-
<p>The third step to download agar io mod menu is to download and install the mod menu apk file. To do this, you need to go to the website where you found the mod menu apk file and click on the download button. You may need to complete some verification steps, such as captcha, surveys, or offers, before you can download the mod menu apk file. Once you have downloaded the mod menu apk file, you need to locate it on your device storage and tap on it. This will start the installation process of the mod menu apk file. You may need to grant some permissions and accept some terms and conditions before you can install the mod menu apk file. Once the installation is complete, you will see a confirmation message on your screen.</p>
|
65 |
-
<h3>Step 4: Launch the game and enjoy the mod menu features</h3>
|
66 |
-
<p>The final step to download agar io mod menu is to launch the game and enjoy the mod menu features. To do this, you need to open the game app on your device and wait for it to load. You will see a hidden menu icon on the top left corner of your screen. You need to tap on this icon to open the mod menu. You will see a list of features that you can activate or deactivate by tapping on them. Some of the features that you can use are god mode, macro, zoom hack, auto feed, aim-bot, coins hack, and DNA hack. You can also adjust some settings, such as speed, size, color, and skin of your cell. You can now play the game with the mod menu features and have more fun.</p>
|
67 |
-
<h2>What are the Features of Agar.io Mod Menu</h2>
|
68 |
-
<h3>God Mode: Become invincible and immune to viruses and enemies</h3>
|
69 |
-
<p>One of the features of agar io mod menu is god mode. This feature allows you to become invincible and immune to viruses and enemies in the game. You can eat any cell without being eaten by bigger cells. You can also pass through viruses without being split by them. This feature can help you grow bigger and faster without any risk.</p>
|
70 |
-
<h3>Macro: Feed faster and split faster with one tap</h3>
|
71 |
-
<p>Another feature of agar io mod menu is macro. This feature allows you to feed faster and split faster with one tap in the game. You can eject mass or split your cell with one tap instead of holding or tapping multiple times. This feature can help you feed your allies or escape your enemies more quickly and easily.</p>
|
72 |
-
<h3>Zoom Hack: See more of the map and plan your moves</h3>
|
73 |
-
<p>A third feature of agar io mod menu is zoom hack. This feature allows you to see more of the map and plan your moves in the game. You can zoom in or out of the map with a slider or a button. This feature can help you see where other cells are and where you should go.</p>
|
74 |
-
<h3>Auto Feed: Automatically feed your cell without tapping</h3>
|
75 |
-
<p>A fourth feature of agar io mod menu is auto feed. This feature allows you to automatically feed your cell without tapping in the game. You can set the amount of mass that you want to eject per second. This feature can help you feed your cell without wasting time or energy.</p>
|
76 |
-
<h3>Aim-Bot: Automatically target and eat smaller cells</h3>
|
77 |
-
<p>A fifth feature of agar io mod menu is aim-bot. This feature allows you to automatically target and eat smaller cells in the game. You can set the range and the angle of the aim-bot. This feature can help you eat more cells without moving your finger or mouse.</p>
|
78 |
-
<h3>Coins Hack: Get unlimited coins to buy skins and boosts</h3>
|
79 |
-
<p>A sixth feature of agar io mod menu is coins hack. This feature allows you to get unlimited coins to buy skins and boosts in the game. You can set the amount of coins that you want to add to your account. This feature can help you customize your cell and enhance your performance without spending real money.</p>
|
80 |
-
<h3>DNA Hack: Get unlimited DNA to upgrade your cell</h3>
|
81 |
-
<p>A seventh feature of agar io mod menu is DNA hack. This feature allows you to get unlimited DNA to upgrade your cell in the game. You can set the amount of DNA that you want to add to your account. This feature can help you improve your cell's abilities and stats without playing for hours.</p>
|
82 |
-
<h2>Pros and Cons of Using Agar.io Mod Menu</h2>
|
83 |
-
<h3>Pros: Have more fun, dominate the game, customize your cell, and impress your friends</h3>
|
84 |
-
<p>There are many pros of using agar io mod menu in the game. Some of them are:</p>
|
85 |
-
<ul>
|
86 |
-
<li>You can have more fun by using different features and experimenting with different strategies.</li>
|
87 |
-
<li>You can dominate the game by becoming invincible, eating more cells, and avoiding enemies.</li>
|
88 |
-
<li>You can customize your cell by buying skins and boosts with unlimited coins.</li>
|
89 |
-
<li>You can impress your friends by showing off your skills and achievements.</li>
|
90 |
-
</ul>
|
91 |
-
<h3>Cons: Risk getting banned, ruin the game balance, and lose the challenge</h3>
|
92 |
-
<p>There are also some cons of using agar io mod menu in the game. Some of them are:</p>
|
93 |
-
<ul>
|
94 |
-
<li>You can risk getting banned by the game developers or reported by other players if they detect that you are using a mod menu.</li>
|
95 |
-
<li>You can ruin the game balance by making it unfair for other players who are playing legitimately.</li>
|
96 |
-
<li>You can lose the challenge by making the game too easy and boring for yourself.</li>
|
97 |
-
</ul>
|
98 |
-
<h2>Conclusion</h2>
|
99 |
-
<p>In conclusion, agar io mod menu is a tool that allows you to access various cheats and hacks in the game, such as god mode, macro, zoom hack, auto feed, aim-bot, coins hack, and DNA hack. With these features, you can have more fun, dominate the game, customize your cell, and impress your friends. However, you also need to be careful, as using a mod menu can risk getting banned, ruin the game balance, and lose the challenge. To download agar io mod menu for Android devices, you need to find a reliable source for the mod menu apk file, enable unknown sources on your device settings, download and install the mod menu apk file, and launch the game and enjoy the mod menu features. We hope this article was helpful for you and answered your question on how to download agar io mod menu for Android devices.</p>
|
100 |
-
<h2>FAQs</h2>
|
101 |
-
<p>Here are some frequently asked questions about agar io mod menu:</p>
|
102 |
-
<ol>
|
103 |
-
<li>Is agar io mod menu safe to use?</li>
|
104 |
-
<p>Agar io mod menu is not an official app from the game developers, so it is not guaranteed to be safe or working. There may be some risks involved in using a mod menu, such as viruses, malware, spyware, errors, or bans. Therefore, you should use a mod menu at your own discretion and responsibility.</p>
|
105 |
-
<li>Is agar io mod menu free to use?</li>
|
106 |
-
<p>Agar io mod menu is usually free to download and use from various websites that offer it. However, some websites may require you to complete some verification steps, such as captcha, surveys, or offers, before you can download the mod menu apk file. These steps may take some time or money to complete.</p>
|
107 |
-
<li>Can I use agar io mod menu on iOS devices?</li>
|
108 |
-
<p>Agar io mod menu is mainly designed for Android devices, so it may not work on iOS devices. However, some websites may claim to offer agar io mod menu for iOS devices as well. You should be careful and do some research before downloading any mod menu for iOS devices, as they may not be safe or working.</p>
|
109 |
-
<li>Can I use agar io mod menu on PC?</li>
|
110 |
-
<p>Agar io mod menu is mainly designed for mobile devices, so it may not work on PC. However, some websites may claim to offer agar io mod menu for PC as well. You should be careful and do some research before downloading any mod menu for PC, as they may not be safe or working.</p>
|
111 |
-
<li>How can I update agar io mod menu?</li>
|
112 |
-
<p>Agar io mod menu may need to be updated from time to time to keep up with the latest version of the game or fix any bugs or errors. To update agar io mod menu, you need to check the website where you downloaded the mod menu apk file and see if there is a new version available. If there is, you need to download and install the new version of the mod menu apk file on your device. You may also need to uninstall the old version of the mod menu apk file before installing the new one.</p>
|
113 |
-
</ol>
spaces/1toTree/lora_test/ppdiffusers/utils/dummy_paddle_and_scipy_objects.py
DELETED
@@ -1,49 +0,0 @@
|
|
1 |
-
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
|
2 |
-
# Copyright 2022 The HuggingFace Team. All rights reserved.
|
3 |
-
#
|
4 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
-
# you may not use this file except in compliance with the License.
|
6 |
-
# You may obtain a copy of the License at
|
7 |
-
#
|
8 |
-
# http://www.apache.org/licenses/LICENSE-2.0
|
9 |
-
#
|
10 |
-
# Unless required by applicable law or agreed to in writing, software
|
11 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
-
# See the License for the specific language governing permissions and
|
14 |
-
# limitations under the License.
|
15 |
-
|
16 |
-
# This file is autogenerated by the command `make fix-copies`, do not edit.
|
17 |
-
# flake8: noqa
|
18 |
-
|
19 |
-
from . import DummyObject, requires_backends
|
20 |
-
|
21 |
-
|
22 |
-
class LMSDiscreteScheduler(metaclass=DummyObject):
|
23 |
-
_backends = ["paddle", "scipy"]
|
24 |
-
|
25 |
-
def __init__(self, *args, **kwargs):
|
26 |
-
requires_backends(self, ["paddle", "scipy"])
|
27 |
-
|
28 |
-
@classmethod
|
29 |
-
def from_config(cls, *args, **kwargs):
|
30 |
-
requires_backends(cls, ["paddle", "scipy"])
|
31 |
-
|
32 |
-
@classmethod
|
33 |
-
def from_pretrained(cls, *args, **kwargs):
|
34 |
-
requires_backends(cls, ["paddle", "scipy"])
|
35 |
-
|
36 |
-
|
37 |
-
class PreconfigLMSDiscreteScheduler(metaclass=DummyObject):
|
38 |
-
_backends = ["paddle", "scipy"]
|
39 |
-
|
40 |
-
def __init__(self, *args, **kwargs):
|
41 |
-
requires_backends(self, ["paddle", "scipy"])
|
42 |
-
|
43 |
-
@classmethod
|
44 |
-
def from_config(cls, *args, **kwargs):
|
45 |
-
requires_backends(cls, ["paddle", "scipy"])
|
46 |
-
|
47 |
-
@classmethod
|
48 |
-
def from_pretrained(cls, *args, **kwargs):
|
49 |
-
requires_backends(cls, ["paddle", "scipy"])
|
spaces/6Eternal9/ChatGPT4/app.py
DELETED
@@ -1,193 +0,0 @@
|
|
1 |
-
import gradio as gr
|
2 |
-
import os
|
3 |
-
import json
|
4 |
-
import requests
|
5 |
-
|
6 |
-
#Streaming endpoint
|
7 |
-
API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
|
8 |
-
|
9 |
-
#Huggingface provided GPT4 OpenAI API Key
|
10 |
-
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
|
11 |
-
|
12 |
-
#Inference function
|
def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {OPENAI_API_KEY}"
    }
    print(f"system message is ^^ {system_msg}")
    if system_msg.strip() == '':
        initial_message = [{"role": "user", "content": f"{inputs}"},]
        multi_turn_message = []
    else:
        initial_message = [{"role": "system", "content": system_msg},
                           {"role": "user", "content": f"{inputs}"},]
        multi_turn_message = [{"role": "system", "content": system_msg},]

    if chat_counter == 0:
        payload = {
            "model": "gpt-4",
            "messages": initial_message,
            "temperature": 1.0,
            "top_p": 1.0,
            "n": 1,
            "stream": True,
            "presence_penalty": 0,
            "frequency_penalty": 0,
        }
        print(f"chat_counter - {chat_counter}")
    else:  # chat_counter != 0
        messages = multi_turn_message  # shaped like [{"role": "system", "content": system_msg},]
        for data in chatbot:
            user = {"role": "user", "content": data[0]}
            assistant = {"role": "assistant", "content": data[1]}
            messages.append(user)
            messages.append(assistant)
        messages.append({"role": "user", "content": inputs})
        payload = {
            "model": "gpt-4",
            "messages": messages,  # shaped like [{"role": "user", "content": f"{inputs}"}]
            "temperature": temperature,
            "top_p": top_p,
            "n": 1,
            "stream": True,
            "presence_penalty": 0,
            "frequency_penalty": 0,
        }

    chat_counter += 1

    history.append(inputs)
    print(f"Logging : payload is - {payload}")
    # make a POST request to the API endpoint using requests.post, passing in stream=True
    response = requests.post(API_URL, headers=headers, json=payload, stream=True)
    print(f"Logging : response code - {response}")
    token_counter = 0
    partial_words = ""

    counter = 0
    for chunk in response.iter_lines():
        # skip the first chunk
        if counter == 0:
            counter += 1
            continue
        # decode each line, as response data arrives in bytes; skip empty keep-alive lines
        if chunk.decode():
            chunk = chunk.decode()
            if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
                partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
                if token_counter == 0:
                    history.append(" " + partial_words)
                else:
                    history[-1] = partial_words
                chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)]  # convert to list of tuples
                token_counter += 1
                yield chat, history, chat_counter, response  # resembles {chatbot: chat, state: history}


# Reset the textbox to blank
def reset_textbox():
    return gr.update(value='')


# Set a component to visible=False
def set_visible_false():
    return gr.update(visible=False)


# Set a component to visible=True
def set_visible_true():
    return gr.update(visible=True)


title = """<h1 align="center">🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming</h1>"""

# Display message for the themes feature
theme_addon_msg = """<center>🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme and send it to the hub using a simple <code>theme.push_to_hub()</code>.
<br>🏆 Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - <a href="https://huggingface.co/Gradio-Themes" target="_blank">Gradio-Themes-Party🎨</a> 🏆</center>
"""

# Additional information about the system message in GPT-4
system_msg_info = """A conversation can begin with a system message to gently instruct the assistant.
The system message helps set the behavior of the AI assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'"""

# Modify an existing Gradio theme
theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
                       text_size=gr.themes.sizes.text_lg)

with gr.Blocks(css="""#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
               theme=theme) as demo:
    gr.HTML(title)
    gr.HTML("""<h3 align="center">🔥This Huggingface Gradio Demo provides you full access to the GPT4 API (4096 token limit). 🎉🥳🎉 You don't need any OPENAI API key 🙌</h3>""")
    gr.HTML(theme_addon_msg)
    gr.HTML('''<center><a href="https://huggingface.co/spaces/ysharma/ChatGPT4?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>Duplicate the Space and run securely with your OpenAI API key</center>''')

    with gr.Column(elem_id="col_container"):
        # The GPT4 API key is provided by Huggingface
        with gr.Accordion(label="System message:", open=False):
            system_msg = gr.Textbox(label="Instruct the AI Assistant to set its behaviour", info=system_msg_info, value="")
            accordion_msg = gr.HTML(value="🚧 To set the system message you will have to refresh the app", visible=False)
        chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot")
        inputs = gr.Textbox(placeholder="Hi there!", label="Type an input and press Enter")
        state = gr.State([])
        with gr.Row():
            with gr.Column(scale=7):
                b1 = gr.Button().style(full_width=True)
            with gr.Column(scale=3):
                server_status_code = gr.Textbox(label="Status code from OpenAI server")

        # top_p, temperature
        with gr.Accordion("Parameters", open=False):
            top_p = gr.Slider(minimum=0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)")
            temperature = gr.Slider(minimum=0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature")
            chat_counter = gr.Number(value=0, visible=False, precision=0)

    # Event handling
    inputs.submit(predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code])
    b1.click(predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code])

    inputs.submit(set_visible_false, [], [system_msg])
    b1.click(set_visible_false, [], [system_msg])
    inputs.submit(set_visible_true, [], [accordion_msg])
    b1.click(set_visible_true, [], [accordion_msg])

    b1.click(reset_textbox, [], [inputs])
    inputs.submit(reset_textbox, [], [inputs])

    # Examples
    with gr.Accordion(label="Examples for System message:", open=False):
        gr.Examples(
            examples=[
                ["""You are an AI programming assistant.

    - Follow the user's requirements carefully and to the letter.
    - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail.
    - Then output the code in a single code block.
    - Minimize any other prose."""],
                ["You are ComedianGPT who is a helpful assistant. You answer everything with a joke and witty replies."],
                ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."],
                ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."],
                ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."],
                ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."],
                ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."],
                ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."],
                ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."],
                ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."],
                ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."],
                ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."],
                ["You are a helpful assistant that provides detailed and accurate information."],
                ["You are an assistant that speaks like Shakespeare."],
                ["You are a friendly assistant who uses casual language and humor."],
                ["You are a financial advisor who gives expert advice on investments and budgeting."],
                ["You are a health and fitness expert who provides advice on nutrition and exercise."],
                ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."],
                ["You are a movie critic who shares insightful opinions on films and their themes."],
                ["You are a history enthusiast who loves to discuss historical events and figures."],
                ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."],
                ["You are an AI poet who can compose creative and evocative poems on any given topic."],
            ],
            inputs=system_msg,
        )

demo.queue(max_size=99, concurrency_count=20).launch(debug=True)
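The streaming loop above relies on each server-sent-events line carrying a `data: ` prefix, which `chunk[6:]` strips before JSON-decoding the delta. A minimal, self-contained sketch of that parsing step in isolation (the helper name `extract_delta` is mine, not part of the app):

```python
import json

def extract_delta(line):
    """Return the text delta carried by one SSE line, or None if there is none."""
    text = line.decode()
    if not text.startswith("data: "):
        return None                      # keep-alive blanks and other noise
    payload = text[6:]                   # drop the "data: " prefix, like chunk[6:] above
    if payload.strip() == "[DONE]":
        return None                      # end-of-stream sentinel
    delta = json.loads(payload)["choices"][0]["delta"]
    return delta.get("content")          # None for role-only / empty deltas

sample = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'
print(extract_delta(sample))  # -> Hello
```

The app accumulates these deltas into `partial_words` and rewrites `history[-1]` on every token, which is what makes the chatbot appear to type.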
spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/nets_537238KB.py
DELETED
@@ -1,123 +0,0 @@
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn

from . import layers_537238KB as layers


class BaseASPPNet(nn.Module):
    def __init__(self, nin, ch, dilations=(4, 8, 16)):
        super(BaseASPPNet, self).__init__()
        self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
        self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
        self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
        self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)

        self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)

        self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
        self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
        self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
        self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)

    def __call__(self, x):
        h, e1 = self.enc1(x)
        h, e2 = self.enc2(h)
        h, e3 = self.enc3(h)
        h, e4 = self.enc4(h)

        h = self.aspp(h)

        h = self.dec4(h, e4)
        h = self.dec3(h, e3)
        h = self.dec2(h, e2)
        h = self.dec1(h, e1)

        return h


class CascadedASPPNet(nn.Module):
    def __init__(self, n_fft):
        super(CascadedASPPNet, self).__init__()
        self.stg1_low_band_net = BaseASPPNet(2, 64)
        self.stg1_high_band_net = BaseASPPNet(2, 64)

        self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
        self.stg2_full_band_net = BaseASPPNet(32, 64)

        self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
        self.stg3_full_band_net = BaseASPPNet(64, 128)

        self.out = nn.Conv2d(128, 2, 1, bias=False)
        self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
        self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)

        self.max_bin = n_fft // 2
        self.output_bin = n_fft // 2 + 1

        self.offset = 128

    def forward(self, x, aggressiveness=None):
        mix = x.detach()
        x = x.clone()

        x = x[:, :, : self.max_bin]

        bandw = x.size()[2] // 2
        aux1 = torch.cat(
            [
                self.stg1_low_band_net(x[:, :, :bandw]),
                self.stg1_high_band_net(x[:, :, bandw:]),
            ],
            dim=2,
        )

        h = torch.cat([x, aux1], dim=1)
        aux2 = self.stg2_full_band_net(self.stg2_bridge(h))

        h = torch.cat([x, aux1, aux2], dim=1)
        h = self.stg3_full_band_net(self.stg3_bridge(h))

        mask = torch.sigmoid(self.out(h))
        mask = F.pad(
            input=mask,
            pad=(0, 0, 0, self.output_bin - mask.size()[2]),
            mode="replicate",
        )

        if self.training:
            aux1 = torch.sigmoid(self.aux1_out(aux1))
            aux1 = F.pad(
                input=aux1,
                pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
                mode="replicate",
            )
            aux2 = torch.sigmoid(self.aux2_out(aux2))
            aux2 = F.pad(
                input=aux2,
                pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
                mode="replicate",
            )
            return mask * mix, aux1 * mix, aux2 * mix
        else:
            if aggressiveness:
                mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
                    mask[:, :, : aggressiveness["split_bin"]],
                    1 + aggressiveness["value"] / 3,
                )
                mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
                    mask[:, :, aggressiveness["split_bin"] :],
                    1 + aggressiveness["value"],
                )

            return mask * mix

    def predict(self, x_mag, aggressiveness=None):
        h = self.forward(x_mag, aggressiveness)

        if self.offset > 0:
            h = h[:, :, :, self.offset : -self.offset]
            assert h.size()[3] > 0

        return h
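At inference time, `forward` above sharpens the sigmoid mask with a bin-dependent exponent: frequency bins below `split_bin` are raised to the power `1 + value/3`, the rest to `1 + value`. A toy sketch of that post-processing on plain Python lists (names and shapes simplified; this stands in for the tensor code, not replaces it):

```python
def apply_aggressiveness(mask, split_bin, value):
    """Raise mask values to a bin-dependent power, mirroring CascadedASPPNet.forward."""
    low = [m ** (1 + value / 3) for m in mask[:split_bin]]  # gentler exponent below split_bin
    high = [m ** (1 + value) for m in mask[split_bin:]]     # stronger exponent above it
    return low + high

out = apply_aggressiveness([0.5, 0.5, 0.5, 0.5], split_bin=2, value=3)
print(out)  # -> [0.25, 0.25, 0.0625, 0.0625]
```

Since mask values lie in (0, 1), a larger exponent pushes uncertain bins toward zero, i.e. a more aggressive separation.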
spaces/801artistry/RVC801/infer/modules/uvr5/mdxnet.py
DELETED
@@ -1,246 +0,0 @@
import os
import logging

logger = logging.getLogger(__name__)

import librosa
import numpy as np
import soundfile as sf
import torch
from tqdm import tqdm

cpu = torch.device("cpu")


class ConvTDFNetTrim:
    def __init__(
        self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024
    ):
        super(ConvTDFNetTrim, self).__init__()

        self.dim_f = dim_f
        self.dim_t = 2**dim_t
        self.n_fft = n_fft
        self.hop = hop
        self.n_bins = self.n_fft // 2 + 1
        self.chunk_size = hop * (self.dim_t - 1)
        self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(
            device
        )
        self.target_name = target_name
        self.blender = "blender" in model_name

        self.dim_c = 4
        out_c = self.dim_c * 4 if target_name == "*" else self.dim_c
        self.freq_pad = torch.zeros(
            [1, out_c, self.n_bins - self.dim_f, self.dim_t]
        ).to(device)

        self.n = L // 2

    def stft(self, x):
        x = x.reshape([-1, self.chunk_size])
        x = torch.stft(
            x,
            n_fft=self.n_fft,
            hop_length=self.hop,
            window=self.window,
            center=True,
            return_complex=True,
        )
        x = torch.view_as_real(x)
        x = x.permute([0, 3, 1, 2])
        x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape(
            [-1, self.dim_c, self.n_bins, self.dim_t]
        )
        return x[:, :, : self.dim_f]

    def istft(self, x, freq_pad=None):
        freq_pad = (
            self.freq_pad.repeat([x.shape[0], 1, 1, 1])
            if freq_pad is None
            else freq_pad
        )
        x = torch.cat([x, freq_pad], -2)
        c = 4 * 2 if self.target_name == "*" else 2
        x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape(
            [-1, 2, self.n_bins, self.dim_t]
        )
        x = x.permute([0, 2, 3, 1])
        x = x.contiguous()
        x = torch.view_as_complex(x)
        x = torch.istft(
            x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True
        )
        return x.reshape([-1, c, self.chunk_size])


def get_models(device, dim_f, dim_t, n_fft):
    return ConvTDFNetTrim(
        device=device,
        model_name="Conv-TDF",
        target_name="vocals",
        L=11,
        dim_f=dim_f,
        dim_t=dim_t,
        n_fft=n_fft,
    )


class Predictor:
    def __init__(self, args):
        import onnxruntime as ort

        logger.info(ort.get_available_providers())
        self.args = args
        self.model_ = get_models(
            device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft
        )
        self.model = ort.InferenceSession(
            os.path.join(args.onnx, self.model_.target_name + ".onnx"),
            providers=[
                "CUDAExecutionProvider",
                "DmlExecutionProvider",
                "CPUExecutionProvider",
            ],
        )
        logger.info("ONNX load done")

    def demix(self, mix):
        samples = mix.shape[-1]
        margin = self.args.margin
        chunk_size = self.args.chunks * 44100
        assert not margin == 0, "margin cannot be zero!"
        if margin > chunk_size:
            margin = chunk_size

        segmented_mix = {}

        if self.args.chunks == 0 or samples < chunk_size:
            chunk_size = samples

        counter = -1
        for skip in range(0, samples, chunk_size):
            counter += 1

            s_margin = 0 if counter == 0 else margin
            end = min(skip + chunk_size + margin, samples)

            start = skip - s_margin

            segmented_mix[skip] = mix[:, start:end].copy()
            if end == samples:
                break

        sources = self.demix_base(segmented_mix, margin_size=margin)
        """
        mix:(2,big_sample)
        segmented_mix:offset->(2,small_sample)
        sources:(1,2,big_sample)
        """
        return sources

    def demix_base(self, mixes, margin_size):
        chunked_sources = []
        progress_bar = tqdm(total=len(mixes))
        progress_bar.set_description("Processing")
        for mix in mixes:
            cmix = mixes[mix]
            sources = []
            n_sample = cmix.shape[1]
            model = self.model_
            trim = model.n_fft // 2
            gen_size = model.chunk_size - 2 * trim
            pad = gen_size - n_sample % gen_size
            mix_p = np.concatenate(
                (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1
            )
            mix_waves = []
            i = 0
            while i < n_sample + pad:
                waves = np.array(mix_p[:, i : i + model.chunk_size])
                mix_waves.append(waves)
                i += gen_size
            mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu)
            with torch.no_grad():
                _ort = self.model
                spek = model.stft(mix_waves)
                if self.args.denoise:
                    spec_pred = (
                        -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5
                        + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5
                    )
                    tar_waves = model.istft(torch.tensor(spec_pred))
                else:
                    tar_waves = model.istft(
                        torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0])
                    )
                tar_signal = (
                    tar_waves[:, :, trim:-trim]
                    .transpose(0, 1)
                    .reshape(2, -1)
                    .numpy()[:, :-pad]
                )

                start = 0 if mix == 0 else margin_size
                end = None if mix == list(mixes.keys())[::-1][0] else -margin_size
                if margin_size == 0:
                    end = None
                sources.append(tar_signal[:, start:end])

                progress_bar.update(1)

            chunked_sources.append(sources)
        _sources = np.concatenate(chunked_sources, axis=-1)
        # del self.model
        progress_bar.close()
        return _sources

    def prediction(self, m, vocal_root, others_root, format):
        os.makedirs(vocal_root, exist_ok=True)
        os.makedirs(others_root, exist_ok=True)
        basename = os.path.basename(m)
        mix, rate = librosa.load(m, mono=False, sr=44100)
        if mix.ndim == 1:
            mix = np.asfortranarray([mix, mix])
        mix = mix.T
        sources = self.demix(mix.T)
        opt = sources[0].T
        if format in ["wav", "flac"]:
            sf.write(
                "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate
            )
            sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate)
        else:
            path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename)
            path_other = "%s/%s_others.wav" % (others_root, basename)
            sf.write(path_vocal, mix - opt, rate)
            sf.write(path_other, opt, rate)
            if os.path.exists(path_vocal):
                os.system(
                    "ffmpeg -i %s -vn %s -q:a 2 -y"
                    % (path_vocal, path_vocal[:-4] + ".%s" % format)
                )
            if os.path.exists(path_other):
                os.system(
                    "ffmpeg -i %s -vn %s -q:a 2 -y"
                    % (path_other, path_other[:-4] + ".%s" % format)
                )


class MDXNetDereverb:
    def __init__(self, chunks, device):
        self.onnx = "assets/uvr5_weights/onnx_dereverb_By_FoxJoy"
        self.shifts = 10  # 'Predict with randomised equivariant stabilisation'
        self.mixing = "min_mag"  # ['default','min_mag','max_mag']
        self.chunks = chunks
        self.margin = 44100
        self.dim_t = 9
        self.dim_f = 3072
        self.n_fft = 6144
        self.denoise = True
        self.pred = Predictor(self)
        self.device = device

    def path_audio(self, input, vocal_root, others_root, format):
        self.pred.prediction(input, vocal_root, others_root, format)
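`demix` above cuts the waveform into `chunks * 44100`-sample windows with `margin` samples of shared context on each side, and `demix_base` later trims those margins (the first chunk keeps its head, the last keeps its tail) before concatenating, so chunk boundaries line up sample-exactly. The bookkeeping can be sketched with plain Python lists (helper names are mine, not from the module):

```python
def split_with_margin(samples, chunk_size, margin):
    """Cut `samples` into chunks keyed by offset, each padded with margin context."""
    segments = {}
    for skip in range(0, len(samples), chunk_size):
        start = skip - (0 if skip == 0 else margin)   # no left context on the first chunk
        end = min(skip + chunk_size + margin, len(samples))
        segments[skip] = samples[start:end]
        if end == len(samples):
            break
    return segments

def stitch(segments, margin):
    """Trim the margins back off each chunk and rejoin, as demix_base does."""
    keys = list(segments)
    out = []
    for k in keys:
        seg = segments[k]
        start = 0 if k == 0 else margin
        end = None if k == keys[-1] else -margin
        if margin == 0:
            end = None
        out.extend(seg[start:end])
    return out

data = list(range(10))
segs = split_with_margin(data, chunk_size=4, margin=1)
assert stitch(segs, margin=1) == data  # round-trips losslessly
```

The margins exist so the model sees real audio context at each boundary instead of a hard cut, which avoids audible seams in the separated output.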
spaces/801artistry/RVC801/tools/dlmodels.sh
DELETED
@@ -1,566 +0,0 @@
#!/bin/bash

echo working dir is $(pwd)
echo downloading requirement aria2 check.

if command -v aria2c &> /dev/null
then
    echo "aria2c command found"
else
    echo failed. please install aria2
    sleep 5
    exit 1
fi

d32="f0D32k.pth"
d40="f0D40k.pth"
d48="f0D48k.pth"
g32="f0G32k.pth"
g40="f0G40k.pth"
g48="f0G48k.pth"

d40v2="f0D40k.pth"
g40v2="f0G40k.pth"

dld32="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth"
dld40="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth"
dld48="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth"
dlg32="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth"
dlg40="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth"
dlg48="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth"

dld40v2="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth"
dlg40v2="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth"

hp2_all="HP2_all_vocals.pth"
hp3_all="HP3_all_vocals.pth"
hp5_only="HP5_only_main_vocal.pth"
VR_DeEchoAggressive="VR-DeEchoAggressive.pth"
VR_DeEchoDeReverb="VR-DeEchoDeReverb.pth"
VR_DeEchoNormal="VR-DeEchoNormal.pth"
onnx_dereverb="vocals.onnx"
rmvpe="rmvpe.pt"

dlhp2_all="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth"
dlhp3_all="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth"
dlhp5_only="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth"
dlVR_DeEchoAggressive="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth"
dlVR_DeEchoDeReverb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth"
dlVR_DeEchoNormal="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth"
dlonnx_dereverb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx"
dlrmvpe="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt"

hb="hubert_base.pt"

dlhb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt"

echo dir check start.

if [ -d "./assets/pretrained" ]; then
    echo dir ./assets/pretrained checked.
else
    echo failed. generating dir ./assets/pretrained.
    mkdir -p ./assets/pretrained
fi

if [ -d "./assets/pretrained_v2" ]; then
    echo dir ./assets/pretrained_v2 checked.
else
    echo failed. generating dir ./assets/pretrained_v2.
    mkdir -p ./assets/pretrained_v2
fi

if [ -d "./assets/uvr5_weights" ]; then
    echo dir ./assets/uvr5_weights checked.
else
    echo failed. generating dir ./assets/uvr5_weights.
    mkdir -p ./assets/uvr5_weights
fi

if [ -d "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy" ]; then
    echo dir ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy checked.
else
    echo failed. generating dir ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy.
    mkdir -p ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy
fi

echo dir check finished.

echo required files check start.

echo checking D32k.pth
if [ -f "./assets/pretrained/D32k.pth" ]; then
    echo D32k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d ./assets/pretrained -o D32k.pth
        if [ -f "./assets/pretrained/D32k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking D40k.pth
if [ -f "./assets/pretrained/D40k.pth" ]; then
    echo D40k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d ./assets/pretrained -o D40k.pth
        if [ -f "./assets/pretrained/D40k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking D40k.pth
if [ -f "./assets/pretrained_v2/D40k.pth" ]; then
    echo D40k.pth in ./assets/pretrained_v2 checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d ./assets/pretrained_v2 -o D40k.pth
        if [ -f "./assets/pretrained_v2/D40k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking D48k.pth
if [ -f "./assets/pretrained/D48k.pth" ]; then
    echo D48k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d ./assets/pretrained -o D48k.pth
        if [ -f "./assets/pretrained/D48k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking G32k.pth
if [ -f "./assets/pretrained/G32k.pth" ]; then
    echo G32k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d ./assets/pretrained -o G32k.pth
        if [ -f "./assets/pretrained/G32k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking G40k.pth
if [ -f "./assets/pretrained/G40k.pth" ]; then
    echo G40k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d ./assets/pretrained -o G40k.pth
        if [ -f "./assets/pretrained/G40k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking G40k.pth
if [ -f "./assets/pretrained_v2/G40k.pth" ]; then
    echo G40k.pth in ./assets/pretrained_v2 checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d ./assets/pretrained_v2 -o G40k.pth
        if [ -f "./assets/pretrained_v2/G40k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking G48k.pth
if [ -f "./assets/pretrained/G48k.pth" ]; then
    echo G48k.pth in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d ./assets/pretrained -o G48k.pth
        if [ -f "./assets/pretrained/G48k.pth" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
        exit 1
    fi
fi

echo checking $d32
if [ -f "./assets/pretrained/$d32" ]; then
    echo $d32 in ./assets/pretrained checked.
else
    echo failed. starting download from huggingface.
    if command -v aria2c &> /dev/null; then
        aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld32 -d ./assets/pretrained -o $d32
        if [ -f "./assets/pretrained/$d32" ]; then
            echo download successful.
        else
            echo please try again!
            exit 1
        fi
    else
        echo aria2c command not found. Please install aria2c and try again.
|
258 |
-
exit 1
|
259 |
-
fi
|
260 |
-
fi
|
261 |
-
|
262 |
-
echo checking $d40
|
263 |
-
if [ -f "./assets/pretrained/$d40" ]; then
|
264 |
-
echo $d40 in ./assets/pretrained checked.
|
265 |
-
else
|
266 |
-
echo failed. starting download from huggingface.
|
267 |
-
if command -v aria2c &> /dev/null; then
|
268 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld40 -d ./assets/pretrained -o $d40
|
269 |
-
if [ -f "./assets/pretrained/$d40" ]; then
|
270 |
-
echo download successful.
|
271 |
-
else
|
272 |
-
echo please try again!
|
273 |
-
exit 1
|
274 |
-
fi
|
275 |
-
else
|
276 |
-
echo aria2c command not found. Please install aria2c and try again.
|
277 |
-
exit 1
|
278 |
-
fi
|
279 |
-
fi
|
280 |
-
|
281 |
-
echo checking $d40v2
|
282 |
-
if [ -f "./assets/pretrained_v2/$d40v2" ]; then
|
283 |
-
echo $d40v2 in ./assets/pretrained_v2 checked.
|
284 |
-
else
|
285 |
-
echo failed. starting download from huggingface.
|
286 |
-
if command -v aria2c &> /dev/null; then
|
287 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld40v2 -d ./assets/pretrained_v2 -o $d40v2
|
288 |
-
if [ -f "./assets/pretrained_v2/$d40v2" ]; then
|
289 |
-
echo download successful.
|
290 |
-
else
|
291 |
-
echo please try again!
|
292 |
-
exit 1
|
293 |
-
fi
|
294 |
-
else
|
295 |
-
echo aria2c command not found. Please install aria2c and try again.
|
296 |
-
exit 1
|
297 |
-
fi
|
298 |
-
fi
|
299 |
-
|
300 |
-
echo checking $d48
|
301 |
-
if [ -f "./assets/pretrained/$d48" ]; then
|
302 |
-
echo $d48 in ./assets/pretrained checked.
|
303 |
-
else
|
304 |
-
echo failed. starting download from huggingface.
|
305 |
-
if command -v aria2c &> /dev/null; then
|
306 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld48 -d ./assets/pretrained -o $d48
|
307 |
-
if [ -f "./assets/pretrained/$d48" ]; then
|
308 |
-
echo download successful.
|
309 |
-
else
|
310 |
-
echo please try again!
|
311 |
-
exit 1
|
312 |
-
fi
|
313 |
-
else
|
314 |
-
echo aria2c command not found. Please install aria2c and try again.
|
315 |
-
exit 1
|
316 |
-
fi
|
317 |
-
fi
|
318 |
-
|
319 |
-
echo checking $g32
|
320 |
-
if [ -f "./assets/pretrained/$g32" ]; then
|
321 |
-
echo $g32 in ./assets/pretrained checked.
|
322 |
-
else
|
323 |
-
echo failed. starting download from huggingface.
|
324 |
-
if command -v aria2c &> /dev/null; then
|
325 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg32 -d ./assets/pretrained -o $g32
|
326 |
-
if [ -f "./assets/pretrained/$g32" ]; then
|
327 |
-
echo download successful.
|
328 |
-
else
|
329 |
-
echo please try again!
|
330 |
-
exit 1
|
331 |
-
fi
|
332 |
-
else
|
333 |
-
echo aria2c command not found. Please install aria2c and try again.
|
334 |
-
exit 1
|
335 |
-
fi
|
336 |
-
fi
|
337 |
-
|
338 |
-
echo checking $g40
|
339 |
-
if [ -f "./assets/pretrained/$g40" ]; then
|
340 |
-
echo $g40 in ./assets/pretrained checked.
|
341 |
-
else
|
342 |
-
echo failed. starting download from huggingface.
|
343 |
-
if command -v aria2c &> /dev/null; then
|
344 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg40 -d ./assets/pretrained -o $g40
|
345 |
-
if [ -f "./assets/pretrained/$g40" ]; then
|
346 |
-
echo download successful.
|
347 |
-
else
|
348 |
-
echo please try again!
|
349 |
-
exit 1
|
350 |
-
fi
|
351 |
-
else
|
352 |
-
echo aria2c command not found. Please install aria2c and try again.
|
353 |
-
exit 1
|
354 |
-
fi
|
355 |
-
fi
|
356 |
-
|
357 |
-
echo checking $g40v2
|
358 |
-
if [ -f "./assets/pretrained_v2/$g40v2" ]; then
|
359 |
-
echo $g40v2 in ./assets/pretrained_v2 checked.
|
360 |
-
else
|
361 |
-
echo failed. starting download from huggingface.
|
362 |
-
if command -v aria2c &> /dev/null; then
|
363 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg40v2 -d ./assets/pretrained_v2 -o $g40v2
|
364 |
-
if [ -f "./assets/pretrained_v2/$g40v2" ]; then
|
365 |
-
echo download successful.
|
366 |
-
else
|
367 |
-
echo please try again!
|
368 |
-
exit 1
|
369 |
-
fi
|
370 |
-
else
|
371 |
-
echo aria2c command not found. Please install aria2c and try again.
|
372 |
-
exit 1
|
373 |
-
fi
|
374 |
-
fi
|
375 |
-
|
376 |
-
echo checking $g48
|
377 |
-
if [ -f "./assets/pretrained/$g48" ]; then
|
378 |
-
echo $g48 in ./assets/pretrained checked.
|
379 |
-
else
|
380 |
-
echo failed. starting download from huggingface.
|
381 |
-
if command -v aria2c &> /dev/null; then
|
382 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg48 -d ./assets/pretrained -o $g48
|
383 |
-
if [ -f "./assets/pretrained/$g48" ]; then
|
384 |
-
echo download successful.
|
385 |
-
else
|
386 |
-
echo please try again!
|
387 |
-
exit 1
|
388 |
-
fi
|
389 |
-
else
|
390 |
-
echo aria2c command not found. Please install aria2c and try again.
|
391 |
-
exit 1
|
392 |
-
fi
|
393 |
-
fi
|
394 |
-
|
395 |
-
echo checking $hp2_all
|
396 |
-
if [ -f "./assets/uvr5_weights/$hp2_all" ]; then
|
397 |
-
echo $hp2_all in ./assets/uvr5_weights checked.
|
398 |
-
else
|
399 |
-
echo failed. starting download from huggingface.
|
400 |
-
if command -v aria2c &> /dev/null; then
|
401 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp2_all -d ./assets/uvr5_weights -o $hp2_all
|
402 |
-
if [ -f "./assets/uvr5_weights/$hp2_all" ]; then
|
403 |
-
echo download successful.
|
404 |
-
else
|
405 |
-
echo please try again!
|
406 |
-
exit 1
|
407 |
-
fi
|
408 |
-
else
|
409 |
-
echo aria2c command not found. Please install aria2c and try again.
|
410 |
-
exit 1
|
411 |
-
fi
|
412 |
-
fi
|
413 |
-
|
414 |
-
echo checking $hp3_all
|
415 |
-
if [ -f "./assets/uvr5_weights/$hp3_all" ]; then
|
416 |
-
echo $hp3_all in ./assets/uvr5_weights checked.
|
417 |
-
else
|
418 |
-
echo failed. starting download from huggingface.
|
419 |
-
if command -v aria2c &> /dev/null; then
|
420 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp3_all -d ./assets/uvr5_weights -o $hp3_all
|
421 |
-
if [ -f "./assets/uvr5_weights/$hp3_all" ]; then
|
422 |
-
echo download successful.
|
423 |
-
else
|
424 |
-
echo please try again!
|
425 |
-
exit 1
|
426 |
-
fi
|
427 |
-
else
|
428 |
-
echo aria2c command not found. Please install aria2c and try again.
|
429 |
-
exit 1
|
430 |
-
fi
|
431 |
-
fi
|
432 |
-
|
433 |
-
echo checking $hp5_only
|
434 |
-
if [ -f "./assets/uvr5_weights/$hp5_only" ]; then
|
435 |
-
echo $hp5_only in ./assets/uvr5_weights checked.
|
436 |
-
else
|
437 |
-
echo failed. starting download from huggingface.
|
438 |
-
if command -v aria2c &> /dev/null; then
|
439 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp5_only -d ./assets/uvr5_weights -o $hp5_only
|
440 |
-
if [ -f "./assets/uvr5_weights/$hp5_only" ]; then
|
441 |
-
echo download successful.
|
442 |
-
else
|
443 |
-
echo please try again!
|
444 |
-
exit 1
|
445 |
-
fi
|
446 |
-
else
|
447 |
-
echo aria2c command not found. Please install aria2c and try again.
|
448 |
-
exit 1
|
449 |
-
fi
|
450 |
-
fi
|
451 |
-
|
452 |
-
echo checking $VR_DeEchoAggressive
|
453 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoAggressive" ]; then
|
454 |
-
echo $VR_DeEchoAggressive in ./assets/uvr5_weights checked.
|
455 |
-
else
|
456 |
-
echo failed. starting download from huggingface.
|
457 |
-
if command -v aria2c &> /dev/null; then
|
458 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoAggressive -d ./assets/uvr5_weights -o $VR_DeEchoAggressive
|
459 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoAggressive" ]; then
|
460 |
-
echo download successful.
|
461 |
-
else
|
462 |
-
echo please try again!
|
463 |
-
exit 1
|
464 |
-
fi
|
465 |
-
else
|
466 |
-
echo aria2c command not found. Please install aria2c and try again.
|
467 |
-
exit 1
|
468 |
-
fi
|
469 |
-
fi
|
470 |
-
|
471 |
-
echo checking $VR_DeEchoDeReverb
|
472 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoDeReverb" ]; then
|
473 |
-
echo $VR_DeEchoDeReverb in ./assets/uvr5_weights checked.
|
474 |
-
else
|
475 |
-
echo failed. starting download from huggingface.
|
476 |
-
if command -v aria2c &> /dev/null; then
|
477 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoDeReverb -d ./assets/uvr5_weights -o $VR_DeEchoDeReverb
|
478 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoDeReverb" ]; then
|
479 |
-
echo download successful.
|
480 |
-
else
|
481 |
-
echo please try again!
|
482 |
-
exit 1
|
483 |
-
fi
|
484 |
-
else
|
485 |
-
echo aria2c command not found. Please install aria2c and try again.
|
486 |
-
exit 1
|
487 |
-
fi
|
488 |
-
fi
|
489 |
-
|
490 |
-
echo checking $VR_DeEchoNormal
|
491 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoNormal" ]; then
|
492 |
-
echo $VR_DeEchoNormal in ./assets/uvr5_weights checked.
|
493 |
-
else
|
494 |
-
echo failed. starting download from huggingface.
|
495 |
-
if command -v aria2c &> /dev/null; then
|
496 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoNormal -d ./assets/uvr5_weights -o $VR_DeEchoNormal
|
497 |
-
if [ -f "./assets/uvr5_weights/$VR_DeEchoNormal" ]; then
|
498 |
-
echo download successful.
|
499 |
-
else
|
500 |
-
echo please try again!
|
501 |
-
exit 1
|
502 |
-
fi
|
503 |
-
else
|
504 |
-
echo aria2c command not found. Please install aria2c and try again.
|
505 |
-
exit 1
|
506 |
-
fi
|
507 |
-
fi
|
508 |
-
|
509 |
-
echo checking $onnx_dereverb
|
510 |
-
if [ -f "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy/$onnx_dereverb" ]; then
|
511 |
-
echo $onnx_dereverb in ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy checked.
|
512 |
-
else
|
513 |
-
echo failed. starting download from huggingface.
|
514 |
-
if command -v aria2c &> /dev/null; then
|
515 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlonnx_dereverb -d ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy -o $onnx_dereverb
|
516 |
-
if [ -f "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy/$onnx_dereverb" ]; then
|
517 |
-
echo download successful.
|
518 |
-
else
|
519 |
-
echo please try again!
|
520 |
-
exit 1
|
521 |
-
fi
|
522 |
-
else
|
523 |
-
echo aria2c command not found. Please install aria2c and try again.
|
524 |
-
exit 1
|
525 |
-
fi
|
526 |
-
fi
|
527 |
-
|
528 |
-
echo checking $rmvpe
|
529 |
-
if [ -f "./assets/rmvpe/$rmvpe" ]; then
|
530 |
-
echo $rmvpe in ./assets/rmvpe checked.
|
531 |
-
else
|
532 |
-
echo failed. starting download from huggingface.
|
533 |
-
if command -v aria2c &> /dev/null; then
|
534 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlrmvpe -d ./assets/rmvpe -o $rmvpe
|
535 |
-
if [ -f "./assets/rmvpe/$rmvpe" ]; then
|
536 |
-
echo download successful.
|
537 |
-
else
|
538 |
-
echo please try again!
|
539 |
-
exit 1
|
540 |
-
fi
|
541 |
-
else
|
542 |
-
echo aria2c command not found. Please install aria2c and try again.
|
543 |
-
exit 1
|
544 |
-
fi
|
545 |
-
fi
|
546 |
-
|
547 |
-
echo checking $hb
|
548 |
-
if [ -f "./assets/hubert/$hb" ]; then
|
549 |
-
echo $hb in ./assets/hubert/pretrained checked.
|
550 |
-
else
|
551 |
-
echo failed. starting download from huggingface.
|
552 |
-
if command -v aria2c &> /dev/null; then
|
553 |
-
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhb -d ./assets/hubert/ -o $hb
|
554 |
-
if [ -f "./assets/hubert/$hb" ]; then
|
555 |
-
echo download successful.
|
556 |
-
else
|
557 |
-
echo please try again!
|
558 |
-
exit 1
|
559 |
-
fi
|
560 |
-
else
|
561 |
-
echo aria2c command not found. Please install aria2c and try again.
|
562 |
-
exit 1
|
563 |
-
fi
|
564 |
-
fi
|
565 |
-
|
566 |
-
echo required files check finished.
spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/text_encoder.py
DELETED
@@ -1,304 +0,0 @@
import re
import six
from six.moves import range  # pylint: disable=redefined-builtin

PAD = "<pad>"
EOS = "<EOS>"
UNK = "<UNK>"
SEG = "|"
RESERVED_TOKENS = [PAD, EOS, UNK]
NUM_RESERVED_TOKENS = len(RESERVED_TOKENS)
PAD_ID = RESERVED_TOKENS.index(PAD)  # Normally 0
EOS_ID = RESERVED_TOKENS.index(EOS)  # Normally 1
UNK_ID = RESERVED_TOKENS.index(UNK)  # Normally 2

if six.PY2:
    RESERVED_TOKENS_BYTES = RESERVED_TOKENS
else:
    RESERVED_TOKENS_BYTES = [bytes(PAD, "ascii"), bytes(EOS, "ascii")]

# Regular expression for unescaping token strings.
# '\u' is converted to '_'
# '\\' is converted to '\'
# '\213;' is converted to unichr(213)
_UNESCAPE_REGEX = re.compile(r"\\u|\\\\|\\([0-9]+);")
_ESCAPE_CHARS = set(u"\\_u;0123456789")


def strip_ids(ids, ids_to_strip):
    """Strip ids_to_strip from the end ids."""
    ids = list(ids)
    while ids and ids[-1] in ids_to_strip:
        ids.pop()
    return ids


class TextEncoder(object):
    """Base class for converting from ints to/from human readable strings."""

    def __init__(self, num_reserved_ids=NUM_RESERVED_TOKENS):
        self._num_reserved_ids = num_reserved_ids

    @property
    def num_reserved_ids(self):
        return self._num_reserved_ids

    def encode(self, s):
        """Transform a human-readable string into a sequence of int ids.

        The ids should be in the range [num_reserved_ids, vocab_size). Ids [0,
        num_reserved_ids) are reserved.

        EOS is not appended.

        Args:
          s: human-readable string to be converted.

        Returns:
          ids: list of integers
        """
        return [int(w) + self._num_reserved_ids for w in s.split()]

    def decode(self, ids, strip_extraneous=False):
        """Transform a sequence of int ids into a human-readable string.

        EOS is not expected in ids.

        Args:
          ids: list of integers to be converted.
          strip_extraneous: bool, whether to strip off extraneous tokens
            (EOS and PAD).

        Returns:
          s: human-readable string.
        """
        if strip_extraneous:
            ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
        return " ".join(self.decode_list(ids))

    def decode_list(self, ids):
        """Transform a sequence of int ids into a their string versions.

        This method supports transforming individual input/output ids to their
        string versions so that sequence to/from text conversions can be visualized
        in a human readable format.

        Args:
          ids: list of integers to be converted.

        Returns:
          strs: list of human-readable string.
        """
        decoded_ids = []
        for id_ in ids:
            if 0 <= id_ < self._num_reserved_ids:
                decoded_ids.append(RESERVED_TOKENS[int(id_)])
            else:
                decoded_ids.append(id_ - self._num_reserved_ids)
        return [str(d) for d in decoded_ids]

    @property
    def vocab_size(self):
        raise NotImplementedError()


class ByteTextEncoder(TextEncoder):
    """Encodes each byte to an id. For 8-bit strings only."""

    def encode(self, s):
        numres = self._num_reserved_ids
        if six.PY2:
            if isinstance(s, unicode):
                s = s.encode("utf-8")
            return [ord(c) + numres for c in s]
        # Python3: explicitly convert to UTF-8
        return [c + numres for c in s.encode("utf-8")]

    def decode(self, ids, strip_extraneous=False):
        if strip_extraneous:
            ids = strip_ids(ids, list(range(self._num_reserved_ids or 0)))
        numres = self._num_reserved_ids
        decoded_ids = []
        int2byte = six.int2byte
        for id_ in ids:
            if 0 <= id_ < numres:
                decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
            else:
                decoded_ids.append(int2byte(id_ - numres))
        if six.PY2:
            return "".join(decoded_ids)
        # Python3: join byte arrays and then decode string
        return b"".join(decoded_ids).decode("utf-8", "replace")

    def decode_list(self, ids):
        numres = self._num_reserved_ids
        decoded_ids = []
        int2byte = six.int2byte
        for id_ in ids:
            if 0 <= id_ < numres:
                decoded_ids.append(RESERVED_TOKENS_BYTES[int(id_)])
            else:
                decoded_ids.append(int2byte(id_ - numres))
        # Python3: join byte arrays and then decode string
        return decoded_ids

    @property
    def vocab_size(self):
        return 2**8 + self._num_reserved_ids


class ByteTextEncoderWithEos(ByteTextEncoder):
    """Encodes each byte to an id and appends the EOS token."""

    def encode(self, s):
        return super(ByteTextEncoderWithEos, self).encode(s) + [EOS_ID]


class TokenTextEncoder(TextEncoder):
    """Encoder based on a user-supplied vocabulary (file or list)."""

    def __init__(self,
                 vocab_filename,
                 reverse=False,
                 vocab_list=None,
                 replace_oov=None,
                 num_reserved_ids=NUM_RESERVED_TOKENS):
        """Initialize from a file or list, one token per line.

        Handling of reserved tokens works as follows:
        - When initializing from a list, we add reserved tokens to the vocab.
        - When initializing from a file, we do not add reserved tokens to the vocab.
        - When saving vocab files, we save reserved tokens to the file.

        Args:
          vocab_filename: If not None, the full filename to read vocab from. If this
            is not None, then vocab_list should be None.
          reverse: Boolean indicating if tokens should be reversed during encoding
            and decoding.
          vocab_list: If not None, a list of elements of the vocabulary. If this is
            not None, then vocab_filename should be None.
          replace_oov: If not None, every out-of-vocabulary token seen when
            encoding will be replaced by this string (which must be in vocab).
          num_reserved_ids: Number of IDs to save for reserved tokens like <EOS>.
        """
        super(TokenTextEncoder, self).__init__(num_reserved_ids=num_reserved_ids)
        self._reverse = reverse
        self._replace_oov = replace_oov
        if vocab_filename:
            self._init_vocab_from_file(vocab_filename)
        else:
            assert vocab_list is not None
            self._init_vocab_from_list(vocab_list)
        self.pad_index = self._token_to_id[PAD]
        self.eos_index = self._token_to_id[EOS]
        self.unk_index = self._token_to_id[UNK]
        self.seg_index = self._token_to_id[SEG] if SEG in self._token_to_id else self.eos_index

    def encode(self, s):
        """Converts a space-separated string of tokens to a list of ids."""
        sentence = s
        tokens = sentence.strip().split()
        if self._replace_oov is not None:
            tokens = [t if t in self._token_to_id else self._replace_oov
                      for t in tokens]
        ret = [self._token_to_id[tok] for tok in tokens]
        return ret[::-1] if self._reverse else ret

    def decode(self, ids, strip_eos=False, strip_padding=False):
        if strip_padding and self.pad() in list(ids):
            pad_pos = list(ids).index(self.pad())
            ids = ids[:pad_pos]
        if strip_eos and self.eos() in list(ids):
            eos_pos = list(ids).index(self.eos())
            ids = ids[:eos_pos]
        return " ".join(self.decode_list(ids))

    def decode_list(self, ids):
        seq = reversed(ids) if self._reverse else ids
        return [self._safe_id_to_token(i) for i in seq]

    @property
    def vocab_size(self):
        return len(self._id_to_token)

    def __len__(self):
        return self.vocab_size

    def _safe_id_to_token(self, idx):
        return self._id_to_token.get(idx, "ID_%d" % idx)

    def _init_vocab_from_file(self, filename):
        """Load vocab from a file.

        Args:
          filename: The file to load vocabulary from.
        """
        with open(filename) as f:
            tokens = [token.strip() for token in f.readlines()]

        def token_gen():
            for token in tokens:
                yield token

        self._init_vocab(token_gen(), add_reserved_tokens=False)

    def _init_vocab_from_list(self, vocab_list):
        """Initialize tokens from a list of tokens.

        It is ok if reserved tokens appear in the vocab list. They will be
        removed. The set of tokens in vocab_list should be unique.

        Args:
          vocab_list: A list of tokens.
        """
        def token_gen():
            for token in vocab_list:
                if token not in RESERVED_TOKENS:
                    yield token

        self._init_vocab(token_gen())

    def _init_vocab(self, token_generator, add_reserved_tokens=True):
        """Initialize vocabulary with tokens from token_generator."""

        self._id_to_token = {}
        non_reserved_start_index = 0

        if add_reserved_tokens:
            self._id_to_token.update(enumerate(RESERVED_TOKENS))
            non_reserved_start_index = len(RESERVED_TOKENS)

        self._id_to_token.update(
            enumerate(token_generator, start=non_reserved_start_index))

        # _token_to_id is the reverse of _id_to_token
        self._token_to_id = dict((v, k)
                                 for k, v in six.iteritems(self._id_to_token))

    def pad(self):
        return self.pad_index

    def eos(self):
        return self.eos_index

    def unk(self):
        return self.unk_index

    def seg(self):
        return self.seg_index

    def store_to_file(self, filename):
        """Write vocab file to disk.

        Vocab files have one token per line. The file ends in a newline. Reserved
        tokens are written to the vocab file as well.

        Args:
          filename: Full path of the file to store the vocab to.
        """
        with open(filename, "w") as f:
            for i in range(len(self._id_to_token)):
                f.write(self._id_to_token[i] + "\n")

    def sil_phonemes(self):
        return [p for p in self._id_to_token.values() if not p[0].isalpha()]
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192.py
DELETED
@@ -1,2861 +0,0 @@
default_scope = 'mmpose'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(
        type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='PoseVisualizationHook', enable=False))
custom_hooks = [dict(type='SyncBuffersHook')]
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='PoseLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='WandbVisBackend')],
    name='visualizer')
log_processor = dict(
    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
log_level = 'INFO'
load_from = None
resume = False
backend_args = dict(backend='local')
train_cfg = dict(by_epoch=True, max_epochs=210, val_interval=10)
val_cfg = dict()
test_cfg = dict()
colors = dict(
    sss=[255, 128, 0],
    lss=[255, 0, 128],
    sso=[128, 0, 255],
    lso=[0, 128, 255],
    vest=[0, 128, 128],
    sling=[0, 0, 128],
    shorts=[128, 128, 128],
    trousers=[128, 0, 128],
    skirt=[64, 128, 128],
    ssd=[64, 64, 128],
    lsd=[128, 64, 0],
    vd=[128, 64, 255],
    sd=[128, 64, 0])
dataset_info = dict(
    dataset_name='deepfashion2',
    paper_info=dict(
        author=
        'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
        title=
        'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
        container=
        'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
        year='2019',
        homepage='https://github.com/switchablenorms/DeepFashion2'),
    keypoint_info=dict({
        0:
        dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
        1:
        dict(
            name='sss_kpt2',
            id=1,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt6'),
        2:
        dict(
            name='sss_kpt3',
            id=2,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt5'),
        3:
        dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
        4:
        dict(
            name='sss_kpt5',
            id=4,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt3'),
        5:
        dict(
            name='sss_kpt6',
            id=5,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt2'),
        6:
        dict(
            name='sss_kpt7',
            id=6,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt25'),
        7:
        dict(
            name='sss_kpt8',
            id=7,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt24'),
        8:
        dict(
            name='sss_kpt9',
            id=8,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt23'),
        9:
        dict(
            name='sss_kpt10',
            id=9,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt22'),
        10:
        dict(
            name='sss_kpt11',
            id=10,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt21'),
        11:
        dict(
            name='sss_kpt12',
            id=11,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt20'),
        12:
        dict(
            name='sss_kpt13',
            id=12,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt19'),
        13:
        dict(
            name='sss_kpt14',
            id=13,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt18'),
        14:
        dict(
            name='sss_kpt15',
            id=14,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt17'),
        15:
        dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
        16:
        dict(
            name='sss_kpt17',
            id=16,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt15'),
        17:
        dict(
            name='sss_kpt18',
            id=17,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt14'),
        18:
        dict(
            name='sss_kpt19',
            id=18,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt13'),
        19:
        dict(
            name='sss_kpt20',
            id=19,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt12'),
        20:
        dict(
            name='sss_kpt21',
            id=20,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt11'),
        21:
        dict(
            name='sss_kpt22',
            id=21,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt10'),
        22:
        dict(
            name='sss_kpt23',
            id=22,
            color=[255, 128, 0],
            type='',
            swap='sss_kpt9'),
        23:
-
dict(
|
204 |
-
name='sss_kpt24',
|
205 |
-
id=23,
|
206 |
-
color=[255, 128, 0],
|
207 |
-
type='',
|
208 |
-
swap='sss_kpt8'),
|
209 |
-
24:
|
210 |
-
dict(
|
211 |
-
name='sss_kpt25',
|
212 |
-
id=24,
|
213 |
-
color=[255, 128, 0],
|
214 |
-
type='',
|
215 |
-
swap='sss_kpt7'),
|
216 |
-
25:
|
217 |
-
dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
|
218 |
-
26:
|
219 |
-
dict(
|
220 |
-
name='lss_kpt2',
|
221 |
-
id=26,
|
222 |
-
color=[255, 0, 128],
|
223 |
-
type='',
|
224 |
-
swap='lss_kpt6'),
|
225 |
-
27:
|
226 |
-
dict(
|
227 |
-
name='lss_kpt3',
|
228 |
-
id=27,
|
229 |
-
color=[255, 0, 128],
|
230 |
-
type='',
|
231 |
-
swap='lss_kpt5'),
|
232 |
-
28:
|
233 |
-
dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
|
234 |
-
29:
|
235 |
-
dict(
|
236 |
-
name='lss_kpt5',
|
237 |
-
id=29,
|
238 |
-
color=[255, 0, 128],
|
239 |
-
type='',
|
240 |
-
swap='lss_kpt3'),
|
241 |
-
30:
|
242 |
-
dict(
|
243 |
-
name='lss_kpt6',
|
244 |
-
id=30,
|
245 |
-
color=[255, 0, 128],
|
246 |
-
type='',
|
247 |
-
swap='lss_kpt2'),
|
248 |
-
31:
|
249 |
-
dict(
|
250 |
-
name='lss_kpt7',
|
251 |
-
id=31,
|
252 |
-
color=[255, 0, 128],
|
253 |
-
type='',
|
254 |
-
swap='lss_kpt33'),
|
255 |
-
32:
|
256 |
-
dict(
|
257 |
-
name='lss_kpt8',
|
258 |
-
id=32,
|
259 |
-
color=[255, 0, 128],
|
260 |
-
type='',
|
261 |
-
swap='lss_kpt32'),
|
262 |
-
33:
|
263 |
-
dict(
|
264 |
-
name='lss_kpt9',
|
265 |
-
id=33,
|
266 |
-
color=[255, 0, 128],
|
267 |
-
type='',
|
268 |
-
swap='lss_kpt31'),
|
269 |
-
34:
|
270 |
-
dict(
|
271 |
-
name='lss_kpt10',
|
272 |
-
id=34,
|
273 |
-
color=[255, 0, 128],
|
274 |
-
type='',
|
275 |
-
swap='lss_kpt30'),
|
276 |
-
35:
|
277 |
-
dict(
|
278 |
-
name='lss_kpt11',
|
279 |
-
id=35,
|
280 |
-
color=[255, 0, 128],
|
281 |
-
type='',
|
282 |
-
swap='lss_kpt29'),
|
283 |
-
36:
|
284 |
-
dict(
|
285 |
-
name='lss_kpt12',
|
286 |
-
id=36,
|
287 |
-
color=[255, 0, 128],
|
288 |
-
type='',
|
289 |
-
swap='lss_kpt28'),
|
290 |
-
37:
|
291 |
-
dict(
|
292 |
-
name='lss_kpt13',
|
293 |
-
id=37,
|
294 |
-
color=[255, 0, 128],
|
295 |
-
type='',
|
296 |
-
swap='lss_kpt27'),
|
297 |
-
38:
|
298 |
-
dict(
|
299 |
-
name='lss_kpt14',
|
300 |
-
id=38,
|
301 |
-
color=[255, 0, 128],
|
302 |
-
type='',
|
303 |
-
swap='lss_kpt26'),
|
304 |
-
39:
|
305 |
-
dict(
|
306 |
-
name='lss_kpt15',
|
307 |
-
id=39,
|
308 |
-
color=[255, 0, 128],
|
309 |
-
type='',
|
310 |
-
swap='lss_kpt25'),
|
311 |
-
40:
|
312 |
-
dict(
|
313 |
-
name='lss_kpt16',
|
314 |
-
id=40,
|
315 |
-
color=[255, 0, 128],
|
316 |
-
type='',
|
317 |
-
swap='lss_kpt24'),
|
318 |
-
41:
|
319 |
-
dict(
|
320 |
-
name='lss_kpt17',
|
321 |
-
id=41,
|
322 |
-
color=[255, 0, 128],
|
323 |
-
type='',
|
324 |
-
swap='lss_kpt23'),
|
325 |
-
42:
|
326 |
-
dict(
|
327 |
-
name='lss_kpt18',
|
328 |
-
id=42,
|
329 |
-
color=[255, 0, 128],
|
330 |
-
type='',
|
331 |
-
swap='lss_kpt22'),
|
332 |
-
43:
|
333 |
-
dict(
|
334 |
-
name='lss_kpt19',
|
335 |
-
id=43,
|
336 |
-
color=[255, 0, 128],
|
337 |
-
type='',
|
338 |
-
swap='lss_kpt21'),
|
339 |
-
44:
|
340 |
-
dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
|
341 |
-
45:
|
342 |
-
dict(
|
343 |
-
name='lss_kpt21',
|
344 |
-
id=45,
|
345 |
-
color=[255, 0, 128],
|
346 |
-
type='',
|
347 |
-
swap='lss_kpt19'),
|
348 |
-
46:
|
349 |
-
dict(
|
350 |
-
name='lss_kpt22',
|
351 |
-
id=46,
|
352 |
-
color=[255, 0, 128],
|
353 |
-
type='',
|
354 |
-
swap='lss_kpt18'),
|
355 |
-
47:
|
356 |
-
dict(
|
357 |
-
name='lss_kpt23',
|
358 |
-
id=47,
|
359 |
-
color=[255, 0, 128],
|
360 |
-
type='',
|
361 |
-
swap='lss_kpt17'),
|
362 |
-
48:
|
363 |
-
dict(
|
364 |
-
name='lss_kpt24',
|
365 |
-
id=48,
|
366 |
-
color=[255, 0, 128],
|
367 |
-
type='',
|
368 |
-
swap='lss_kpt16'),
|
369 |
-
49:
|
370 |
-
dict(
|
371 |
-
name='lss_kpt25',
|
372 |
-
id=49,
|
373 |
-
color=[255, 0, 128],
|
374 |
-
type='',
|
375 |
-
swap='lss_kpt15'),
|
376 |
-
50:
|
377 |
-
dict(
|
378 |
-
name='lss_kpt26',
|
379 |
-
id=50,
|
380 |
-
color=[255, 0, 128],
|
381 |
-
type='',
|
382 |
-
swap='lss_kpt14'),
|
383 |
-
51:
|
384 |
-
dict(
|
385 |
-
name='lss_kpt27',
|
386 |
-
id=51,
|
387 |
-
color=[255, 0, 128],
|
388 |
-
type='',
|
389 |
-
swap='lss_kpt13'),
|
390 |
-
52:
|
391 |
-
dict(
|
392 |
-
name='lss_kpt28',
|
393 |
-
id=52,
|
394 |
-
color=[255, 0, 128],
|
395 |
-
type='',
|
396 |
-
swap='lss_kpt12'),
|
397 |
-
53:
|
398 |
-
dict(
|
399 |
-
name='lss_kpt29',
|
400 |
-
id=53,
|
401 |
-
color=[255, 0, 128],
|
402 |
-
type='',
|
403 |
-
swap='lss_kpt11'),
|
404 |
-
54:
|
405 |
-
dict(
|
406 |
-
name='lss_kpt30',
|
407 |
-
id=54,
|
408 |
-
color=[255, 0, 128],
|
409 |
-
type='',
|
410 |
-
swap='lss_kpt10'),
|
411 |
-
55:
|
412 |
-
dict(
|
413 |
-
name='lss_kpt31',
|
414 |
-
id=55,
|
415 |
-
color=[255, 0, 128],
|
416 |
-
type='',
|
417 |
-
swap='lss_kpt9'),
|
418 |
-
56:
|
419 |
-
dict(
|
420 |
-
name='lss_kpt32',
|
421 |
-
id=56,
|
422 |
-
color=[255, 0, 128],
|
423 |
-
type='',
|
424 |
-
swap='lss_kpt8'),
|
425 |
-
57:
|
426 |
-
dict(
|
427 |
-
name='lss_kpt33',
|
428 |
-
id=57,
|
429 |
-
color=[255, 0, 128],
|
430 |
-
type='',
|
431 |
-
swap='lss_kpt7'),
|
432 |
-
58:
|
433 |
-
dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
|
434 |
-
59:
|
435 |
-
dict(
|
436 |
-
name='sso_kpt2',
|
437 |
-
id=59,
|
438 |
-
color=[128, 0, 255],
|
439 |
-
type='',
|
440 |
-
swap='sso_kpt26'),
|
441 |
-
60:
|
442 |
-
dict(
|
443 |
-
name='sso_kpt3',
|
444 |
-
id=60,
|
445 |
-
color=[128, 0, 255],
|
446 |
-
type='',
|
447 |
-
swap='sso_kpt5'),
|
448 |
-
61:
|
449 |
-
dict(
|
450 |
-
name='sso_kpt4',
|
451 |
-
id=61,
|
452 |
-
color=[128, 0, 255],
|
453 |
-
type='',
|
454 |
-
swap='sso_kpt6'),
|
455 |
-
62:
|
456 |
-
dict(
|
457 |
-
name='sso_kpt5',
|
458 |
-
id=62,
|
459 |
-
color=[128, 0, 255],
|
460 |
-
type='',
|
461 |
-
swap='sso_kpt3'),
|
462 |
-
63:
|
463 |
-
dict(
|
464 |
-
name='sso_kpt6',
|
465 |
-
id=63,
|
466 |
-
color=[128, 0, 255],
|
467 |
-
type='',
|
468 |
-
swap='sso_kpt4'),
|
469 |
-
64:
|
470 |
-
dict(
|
471 |
-
name='sso_kpt7',
|
472 |
-
id=64,
|
473 |
-
color=[128, 0, 255],
|
474 |
-
type='',
|
475 |
-
swap='sso_kpt25'),
|
476 |
-
65:
|
477 |
-
dict(
|
478 |
-
name='sso_kpt8',
|
479 |
-
id=65,
|
480 |
-
color=[128, 0, 255],
|
481 |
-
type='',
|
482 |
-
swap='sso_kpt24'),
|
483 |
-
66:
|
484 |
-
dict(
|
485 |
-
name='sso_kpt9',
|
486 |
-
id=66,
|
487 |
-
color=[128, 0, 255],
|
488 |
-
type='',
|
489 |
-
swap='sso_kpt23'),
|
490 |
-
67:
|
491 |
-
dict(
|
492 |
-
name='sso_kpt10',
|
493 |
-
id=67,
|
494 |
-
color=[128, 0, 255],
|
495 |
-
type='',
|
496 |
-
swap='sso_kpt22'),
|
497 |
-
68:
|
498 |
-
dict(
|
499 |
-
name='sso_kpt11',
|
500 |
-
id=68,
|
501 |
-
color=[128, 0, 255],
|
502 |
-
type='',
|
503 |
-
swap='sso_kpt21'),
|
504 |
-
69:
|
505 |
-
dict(
|
506 |
-
name='sso_kpt12',
|
507 |
-
id=69,
|
508 |
-
color=[128, 0, 255],
|
509 |
-
type='',
|
510 |
-
swap='sso_kpt20'),
|
511 |
-
70:
|
512 |
-
dict(
|
513 |
-
name='sso_kpt13',
|
514 |
-
id=70,
|
515 |
-
color=[128, 0, 255],
|
516 |
-
type='',
|
517 |
-
swap='sso_kpt19'),
|
518 |
-
71:
|
519 |
-
dict(
|
520 |
-
name='sso_kpt14',
|
521 |
-
id=71,
|
522 |
-
color=[128, 0, 255],
|
523 |
-
type='',
|
524 |
-
swap='sso_kpt18'),
|
525 |
-
72:
|
526 |
-
dict(
|
527 |
-
name='sso_kpt15',
|
528 |
-
id=72,
|
529 |
-
color=[128, 0, 255],
|
530 |
-
type='',
|
531 |
-
swap='sso_kpt17'),
|
532 |
-
73:
|
533 |
-
dict(
|
534 |
-
name='sso_kpt16',
|
535 |
-
id=73,
|
536 |
-
color=[128, 0, 255],
|
537 |
-
type='',
|
538 |
-
swap='sso_kpt29'),
|
539 |
-
74:
|
540 |
-
dict(
|
541 |
-
name='sso_kpt17',
|
542 |
-
id=74,
|
543 |
-
color=[128, 0, 255],
|
544 |
-
type='',
|
545 |
-
swap='sso_kpt15'),
|
546 |
-
75:
|
547 |
-
dict(
|
548 |
-
name='sso_kpt18',
|
549 |
-
id=75,
|
550 |
-
color=[128, 0, 255],
|
551 |
-
type='',
|
552 |
-
swap='sso_kpt14'),
|
553 |
-
76:
|
554 |
-
dict(
|
555 |
-
name='sso_kpt19',
|
556 |
-
id=76,
|
557 |
-
color=[128, 0, 255],
|
558 |
-
type='',
|
559 |
-
swap='sso_kpt13'),
|
560 |
-
77:
|
561 |
-
dict(
|
562 |
-
name='sso_kpt20',
|
563 |
-
id=77,
|
564 |
-
color=[128, 0, 255],
|
565 |
-
type='',
|
566 |
-
swap='sso_kpt12'),
|
567 |
-
78:
|
568 |
-
dict(
|
569 |
-
name='sso_kpt21',
|
570 |
-
id=78,
|
571 |
-
color=[128, 0, 255],
|
572 |
-
type='',
|
573 |
-
swap='sso_kpt11'),
|
574 |
-
79:
|
575 |
-
dict(
|
576 |
-
name='sso_kpt22',
|
577 |
-
id=79,
|
578 |
-
color=[128, 0, 255],
|
579 |
-
type='',
|
580 |
-
swap='sso_kpt10'),
|
581 |
-
80:
|
582 |
-
dict(
|
583 |
-
name='sso_kpt23',
|
584 |
-
id=80,
|
585 |
-
color=[128, 0, 255],
|
586 |
-
type='',
|
587 |
-
swap='sso_kpt9'),
|
588 |
-
81:
|
589 |
-
dict(
|
590 |
-
name='sso_kpt24',
|
591 |
-
id=81,
|
592 |
-
color=[128, 0, 255],
|
593 |
-
type='',
|
594 |
-
swap='sso_kpt8'),
|
595 |
-
82:
|
596 |
-
dict(
|
597 |
-
name='sso_kpt25',
|
598 |
-
id=82,
|
599 |
-
color=[128, 0, 255],
|
600 |
-
type='',
|
601 |
-
swap='sso_kpt7'),
|
602 |
-
83:
|
603 |
-
dict(
|
604 |
-
name='sso_kpt26',
|
605 |
-
id=83,
|
606 |
-
color=[128, 0, 255],
|
607 |
-
type='',
|
608 |
-
swap='sso_kpt2'),
|
609 |
-
84:
|
610 |
-
dict(
|
611 |
-
name='sso_kpt27',
|
612 |
-
id=84,
|
613 |
-
color=[128, 0, 255],
|
614 |
-
type='',
|
615 |
-
swap='sso_kpt30'),
|
616 |
-
85:
|
617 |
-
dict(
|
618 |
-
name='sso_kpt28',
|
619 |
-
id=85,
|
620 |
-
color=[128, 0, 255],
|
621 |
-
type='',
|
622 |
-
swap='sso_kpt31'),
|
623 |
-
86:
|
624 |
-
dict(
|
625 |
-
name='sso_kpt29',
|
626 |
-
id=86,
|
627 |
-
color=[128, 0, 255],
|
628 |
-
type='',
|
629 |
-
swap='sso_kpt16'),
|
630 |
-
87:
|
631 |
-
dict(
|
632 |
-
name='sso_kpt30',
|
633 |
-
id=87,
|
634 |
-
color=[128, 0, 255],
|
635 |
-
type='',
|
636 |
-
swap='sso_kpt27'),
|
637 |
-
88:
|
638 |
-
dict(
|
639 |
-
name='sso_kpt31',
|
640 |
-
id=88,
|
641 |
-
color=[128, 0, 255],
|
642 |
-
type='',
|
643 |
-
swap='sso_kpt28'),
|
644 |
-
89:
|
645 |
-
dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
|
646 |
-
90:
|
647 |
-
dict(
|
648 |
-
name='lso_kpt2',
|
649 |
-
id=90,
|
650 |
-
color=[0, 128, 255],
|
651 |
-
type='',
|
652 |
-
swap='lso_kpt6'),
|
653 |
-
91:
|
654 |
-
dict(
|
655 |
-
name='lso_kpt3',
|
656 |
-
id=91,
|
657 |
-
color=[0, 128, 255],
|
658 |
-
type='',
|
659 |
-
swap='lso_kpt5'),
|
660 |
-
92:
|
661 |
-
dict(
|
662 |
-
name='lso_kpt4',
|
663 |
-
id=92,
|
664 |
-
color=[0, 128, 255],
|
665 |
-
type='',
|
666 |
-
swap='lso_kpt34'),
|
667 |
-
93:
|
668 |
-
dict(
|
669 |
-
name='lso_kpt5',
|
670 |
-
id=93,
|
671 |
-
color=[0, 128, 255],
|
672 |
-
type='',
|
673 |
-
swap='lso_kpt3'),
|
674 |
-
94:
|
675 |
-
dict(
|
676 |
-
name='lso_kpt6',
|
677 |
-
id=94,
|
678 |
-
color=[0, 128, 255],
|
679 |
-
type='',
|
680 |
-
swap='lso_kpt2'),
|
681 |
-
95:
|
682 |
-
dict(
|
683 |
-
name='lso_kpt7',
|
684 |
-
id=95,
|
685 |
-
color=[0, 128, 255],
|
686 |
-
type='',
|
687 |
-
swap='lso_kpt33'),
|
688 |
-
96:
|
689 |
-
dict(
|
690 |
-
name='lso_kpt8',
|
691 |
-
id=96,
|
692 |
-
color=[0, 128, 255],
|
693 |
-
type='',
|
694 |
-
swap='lso_kpt32'),
|
695 |
-
97:
|
696 |
-
dict(
|
697 |
-
name='lso_kpt9',
|
698 |
-
id=97,
|
699 |
-
color=[0, 128, 255],
|
700 |
-
type='',
|
701 |
-
swap='lso_kpt31'),
|
702 |
-
98:
|
703 |
-
dict(
|
704 |
-
name='lso_kpt10',
|
705 |
-
id=98,
|
706 |
-
color=[0, 128, 255],
|
707 |
-
type='',
|
708 |
-
swap='lso_kpt30'),
|
709 |
-
99:
|
710 |
-
dict(
|
711 |
-
name='lso_kpt11',
|
712 |
-
id=99,
|
713 |
-
color=[0, 128, 255],
|
714 |
-
type='',
|
715 |
-
swap='lso_kpt29'),
|
716 |
-
100:
|
717 |
-
dict(
|
718 |
-
name='lso_kpt12',
|
719 |
-
id=100,
|
720 |
-
color=[0, 128, 255],
|
721 |
-
type='',
|
722 |
-
swap='lso_kpt28'),
|
723 |
-
101:
|
724 |
-
dict(
|
725 |
-
name='lso_kpt13',
|
726 |
-
id=101,
|
727 |
-
color=[0, 128, 255],
|
728 |
-
type='',
|
729 |
-
swap='lso_kpt27'),
|
730 |
-
102:
|
731 |
-
dict(
|
732 |
-
name='lso_kpt14',
|
733 |
-
id=102,
|
734 |
-
color=[0, 128, 255],
|
735 |
-
type='',
|
736 |
-
swap='lso_kpt26'),
|
737 |
-
103:
|
738 |
-
dict(
|
739 |
-
name='lso_kpt15',
|
740 |
-
id=103,
|
741 |
-
color=[0, 128, 255],
|
742 |
-
type='',
|
743 |
-
swap='lso_kpt25'),
|
744 |
-
104:
|
745 |
-
dict(
|
746 |
-
name='lso_kpt16',
|
747 |
-
id=104,
|
748 |
-
color=[0, 128, 255],
|
749 |
-
type='',
|
750 |
-
swap='lso_kpt24'),
|
751 |
-
105:
|
752 |
-
dict(
|
753 |
-
name='lso_kpt17',
|
754 |
-
id=105,
|
755 |
-
color=[0, 128, 255],
|
756 |
-
type='',
|
757 |
-
swap='lso_kpt23'),
|
758 |
-
106:
|
759 |
-
dict(
|
760 |
-
name='lso_kpt18',
|
761 |
-
id=106,
|
762 |
-
color=[0, 128, 255],
|
763 |
-
type='',
|
764 |
-
swap='lso_kpt22'),
|
765 |
-
107:
|
766 |
-
dict(
|
767 |
-
name='lso_kpt19',
|
768 |
-
id=107,
|
769 |
-
color=[0, 128, 255],
|
770 |
-
type='',
|
771 |
-
swap='lso_kpt21'),
|
772 |
-
108:
|
773 |
-
dict(
|
774 |
-
name='lso_kpt20',
|
775 |
-
id=108,
|
776 |
-
color=[0, 128, 255],
|
777 |
-
type='',
|
778 |
-
swap='lso_kpt37'),
|
779 |
-
109:
|
780 |
-
dict(
|
781 |
-
name='lso_kpt21',
|
782 |
-
id=109,
|
783 |
-
color=[0, 128, 255],
|
784 |
-
type='',
|
785 |
-
swap='lso_kpt19'),
|
786 |
-
110:
|
787 |
-
dict(
|
788 |
-
name='lso_kpt22',
|
789 |
-
id=110,
|
790 |
-
color=[0, 128, 255],
|
791 |
-
type='',
|
792 |
-
swap='lso_kpt18'),
|
793 |
-
111:
|
794 |
-
dict(
|
795 |
-
name='lso_kpt23',
|
796 |
-
id=111,
|
797 |
-
color=[0, 128, 255],
|
798 |
-
type='',
|
799 |
-
swap='lso_kpt17'),
|
800 |
-
112:
|
801 |
-
dict(
|
802 |
-
name='lso_kpt24',
|
803 |
-
id=112,
|
804 |
-
color=[0, 128, 255],
|
805 |
-
type='',
|
806 |
-
swap='lso_kpt16'),
|
807 |
-
113:
|
808 |
-
dict(
|
809 |
-
name='lso_kpt25',
|
810 |
-
id=113,
|
811 |
-
color=[0, 128, 255],
|
812 |
-
type='',
|
813 |
-
swap='lso_kpt15'),
|
814 |
-
114:
|
815 |
-
dict(
|
816 |
-
name='lso_kpt26',
|
817 |
-
id=114,
|
818 |
-
color=[0, 128, 255],
|
819 |
-
type='',
|
820 |
-
swap='lso_kpt14'),
|
821 |
-
115:
|
822 |
-
dict(
|
823 |
-
name='lso_kpt27',
|
824 |
-
id=115,
|
825 |
-
color=[0, 128, 255],
|
826 |
-
type='',
|
827 |
-
swap='lso_kpt13'),
|
828 |
-
116:
|
829 |
-
dict(
|
830 |
-
name='lso_kpt28',
|
831 |
-
id=116,
|
832 |
-
color=[0, 128, 255],
|
833 |
-
type='',
|
834 |
-
swap='lso_kpt12'),
|
835 |
-
117:
|
836 |
-
dict(
|
837 |
-
name='lso_kpt29',
|
838 |
-
id=117,
|
839 |
-
color=[0, 128, 255],
|
840 |
-
type='',
|
841 |
-
swap='lso_kpt11'),
|
842 |
-
118:
|
843 |
-
dict(
|
844 |
-
name='lso_kpt30',
|
845 |
-
id=118,
|
846 |
-
color=[0, 128, 255],
|
847 |
-
type='',
|
848 |
-
swap='lso_kpt10'),
|
849 |
-
119:
|
850 |
-
dict(
|
851 |
-
name='lso_kpt31',
|
852 |
-
id=119,
|
853 |
-
color=[0, 128, 255],
|
854 |
-
type='',
|
855 |
-
swap='lso_kpt9'),
|
856 |
-
120:
|
857 |
-
dict(
|
858 |
-
name='lso_kpt32',
|
859 |
-
id=120,
|
860 |
-
color=[0, 128, 255],
|
861 |
-
type='',
|
862 |
-
swap='lso_kpt8'),
|
863 |
-
121:
|
864 |
-
dict(
|
865 |
-
name='lso_kpt33',
|
866 |
-
id=121,
|
867 |
-
color=[0, 128, 255],
|
868 |
-
type='',
|
869 |
-
swap='lso_kpt7'),
|
870 |
-
122:
|
871 |
-
dict(
|
872 |
-
name='lso_kpt34',
|
873 |
-
id=122,
|
874 |
-
color=[0, 128, 255],
|
875 |
-
type='',
|
876 |
-
swap='lso_kpt4'),
|
877 |
-
123:
|
878 |
-
dict(
|
879 |
-
name='lso_kpt35',
|
880 |
-
id=123,
|
881 |
-
color=[0, 128, 255],
|
882 |
-
type='',
|
883 |
-
swap='lso_kpt38'),
|
884 |
-
124:
|
885 |
-
dict(
|
886 |
-
name='lso_kpt36',
|
887 |
-
id=124,
|
888 |
-
color=[0, 128, 255],
|
889 |
-
type='',
|
890 |
-
swap='lso_kpt39'),
|
891 |
-
125:
|
892 |
-
dict(
|
893 |
-
name='lso_kpt37',
|
894 |
-
id=125,
|
895 |
-
color=[0, 128, 255],
|
896 |
-
type='',
|
897 |
-
swap='lso_kpt20'),
|
898 |
-
126:
|
899 |
-
dict(
|
900 |
-
name='lso_kpt38',
|
901 |
-
id=126,
|
902 |
-
color=[0, 128, 255],
|
903 |
-
type='',
|
904 |
-
swap='lso_kpt35'),
|
905 |
-
127:
|
906 |
-
dict(
|
907 |
-
name='lso_kpt39',
|
908 |
-
id=127,
|
909 |
-
color=[0, 128, 255],
|
910 |
-
type='',
|
911 |
-
swap='lso_kpt36'),
|
912 |
-
128:
|
913 |
-
dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
|
914 |
-
129:
|
915 |
-
dict(
|
916 |
-
name='vest_kpt2',
|
917 |
-
id=129,
|
918 |
-
color=[0, 128, 128],
|
919 |
-
type='',
|
920 |
-
swap='vest_kpt6'),
|
921 |
-
130:
|
922 |
-
dict(
|
923 |
-
name='vest_kpt3',
|
924 |
-
id=130,
|
925 |
-
color=[0, 128, 128],
|
926 |
-
type='',
|
927 |
-
swap='vest_kpt5'),
|
928 |
-
131:
|
929 |
-
dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
|
930 |
-
132:
|
931 |
-
dict(
|
932 |
-
name='vest_kpt5',
|
933 |
-
id=132,
|
934 |
-
color=[0, 128, 128],
|
935 |
-
type='',
|
936 |
-
swap='vest_kpt3'),
|
937 |
-
133:
|
938 |
-
dict(
|
939 |
-
name='vest_kpt6',
|
940 |
-
id=133,
|
941 |
-
color=[0, 128, 128],
|
942 |
-
type='',
|
943 |
-
swap='vest_kpt2'),
|
944 |
-
134:
|
945 |
-
dict(
|
946 |
-
name='vest_kpt7',
|
947 |
-
id=134,
|
948 |
-
color=[0, 128, 128],
|
949 |
-
type='',
|
950 |
-
swap='vest_kpt15'),
|
951 |
-
135:
|
952 |
-
dict(
|
953 |
-
name='vest_kpt8',
|
954 |
-
id=135,
|
955 |
-
color=[0, 128, 128],
|
956 |
-
type='',
|
957 |
-
swap='vest_kpt14'),
|
958 |
-
136:
|
959 |
-
dict(
|
960 |
-
name='vest_kpt9',
|
961 |
-
id=136,
|
962 |
-
color=[0, 128, 128],
|
963 |
-
type='',
|
964 |
-
swap='vest_kpt13'),
|
965 |
-
137:
|
966 |
-
dict(
|
967 |
-
name='vest_kpt10',
|
968 |
-
id=137,
|
969 |
-
color=[0, 128, 128],
|
970 |
-
type='',
|
971 |
-
swap='vest_kpt12'),
|
972 |
-
138:
|
973 |
-
dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
|
974 |
-
139:
|
975 |
-
dict(
|
976 |
-
name='vest_kpt12',
|
977 |
-
id=139,
|
978 |
-
color=[0, 128, 128],
|
979 |
-
type='',
|
980 |
-
swap='vest_kpt10'),
|
981 |
-
140:
|
982 |
-
dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
|
983 |
-
141:
|
984 |
-
dict(
|
985 |
-
name='vest_kpt14',
|
986 |
-
id=141,
|
987 |
-
color=[0, 128, 128],
|
988 |
-
type='',
|
989 |
-
swap='vest_kpt8'),
|
990 |
-
142:
|
991 |
-
dict(
|
992 |
-
name='vest_kpt15',
|
993 |
-
id=142,
|
994 |
-
color=[0, 128, 128],
|
995 |
-
type='',
|
996 |
-
swap='vest_kpt7'),
|
997 |
-
143:
|
998 |
-
dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
|
999 |
-
144:
|
1000 |
-
dict(
|
1001 |
-
name='sling_kpt2',
|
1002 |
-
id=144,
|
1003 |
-
color=[0, 0, 128],
|
1004 |
-
type='',
|
1005 |
-
swap='sling_kpt6'),
|
1006 |
-
145:
|
1007 |
-
dict(
|
1008 |
-
name='sling_kpt3',
|
1009 |
-
id=145,
|
1010 |
-
color=[0, 0, 128],
|
1011 |
-
type='',
|
1012 |
-
swap='sling_kpt5'),
|
1013 |
-
146:
|
1014 |
-
dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
|
1015 |
-
147:
|
1016 |
-
dict(
|
1017 |
-
name='sling_kpt5',
|
1018 |
-
id=147,
|
1019 |
-
color=[0, 0, 128],
|
1020 |
-
type='',
|
1021 |
-
swap='sling_kpt3'),
|
1022 |
-
148:
|
1023 |
-
dict(
|
1024 |
-
name='sling_kpt6',
|
1025 |
-
id=148,
|
1026 |
-
color=[0, 0, 128],
|
1027 |
-
type='',
|
1028 |
-
swap='sling_kpt2'),
|
1029 |
-
149:
|
1030 |
-
dict(
|
1031 |
-
name='sling_kpt7',
|
1032 |
-
id=149,
|
1033 |
-
color=[0, 0, 128],
|
1034 |
-
type='',
|
1035 |
-
swap='sling_kpt15'),
|
1036 |
-
150:
|
1037 |
-
dict(
|
1038 |
-
name='sling_kpt8',
|
1039 |
-
id=150,
|
1040 |
-
color=[0, 0, 128],
|
1041 |
-
type='',
|
1042 |
-
swap='sling_kpt14'),
|
1043 |
-
151:
|
1044 |
-
dict(
|
1045 |
-
name='sling_kpt9',
|
1046 |
-
id=151,
|
1047 |
-
color=[0, 0, 128],
|
1048 |
-
type='',
|
1049 |
-
swap='sling_kpt13'),
|
1050 |
-
152:
|
1051 |
-
dict(
|
1052 |
-
name='sling_kpt10',
|
1053 |
-
id=152,
|
1054 |
-
color=[0, 0, 128],
|
1055 |
-
type='',
|
1056 |
-
swap='sling_kpt12'),
|
1057 |
-
153:
|
1058 |
-
dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
|
1059 |
-
154:
|
1060 |
-
dict(
|
1061 |
-
name='sling_kpt12',
|
1062 |
-
id=154,
|
1063 |
-
color=[0, 0, 128],
|
1064 |
-
type='',
|
1065 |
-
swap='sling_kpt10'),
|
1066 |
-
155:
|
1067 |
-
dict(
|
1068 |
-
name='sling_kpt13',
|
1069 |
-
id=155,
|
1070 |
-
color=[0, 0, 128],
|
1071 |
-
type='',
|
1072 |
-
swap='sling_kpt9'),
|
1073 |
-
156:
|
1074 |
-
dict(
|
1075 |
-
name='sling_kpt14',
|
1076 |
-
id=156,
|
1077 |
-
color=[0, 0, 128],
|
1078 |
-
type='',
|
1079 |
-
swap='sling_kpt8'),
|
1080 |
-
157:
|
1081 |
-
dict(
|
1082 |
-
name='sling_kpt15',
|
1083 |
-
id=157,
|
1084 |
-
color=[0, 0, 128],
|
1085 |
-
type='',
|
1086 |
-
swap='sling_kpt7'),
|
1087 |
-
158:
|
1088 |
-
dict(
|
1089 |
-
name='shorts_kpt1',
|
1090 |
-
id=158,
|
1091 |
-
color=[128, 128, 128],
|
1092 |
-
type='',
|
1093 |
-
swap='shorts_kpt3'),
|
1094 |
-
159:
|
1095 |
-
dict(
|
1096 |
-
name='shorts_kpt2',
|
1097 |
-
id=159,
|
1098 |
-
color=[128, 128, 128],
|
1099 |
-
type='',
|
1100 |
-
swap=''),
|
1101 |
-
160:
|
1102 |
-
dict(
|
1103 |
-
name='shorts_kpt3',
|
1104 |
-
id=160,
|
1105 |
-
color=[128, 128, 128],
|
1106 |
-
type='',
|
1107 |
-
swap='shorts_kpt1'),
|
1108 |
-
161:
|
1109 |
-
dict(
|
1110 |
-
name='shorts_kpt4',
|
1111 |
-
id=161,
|
1112 |
-
color=[128, 128, 128],
|
1113 |
-
type='',
|
1114 |
-
swap='shorts_kpt10'),
|
1115 |
-
162:
|
1116 |
-
dict(
|
1117 |
-
name='shorts_kpt5',
|
1118 |
-
id=162,
|
1119 |
-
color=[128, 128, 128],
|
1120 |
-
type='',
|
1121 |
-
swap='shorts_kpt9'),
|
1122 |
-
163:
|
1123 |
-
dict(
|
1124 |
-
name='shorts_kpt6',
|
1125 |
-
id=163,
|
1126 |
-
color=[128, 128, 128],
|
1127 |
-
type='',
|
1128 |
-
swap='shorts_kpt8'),
|
1129 |
-
164:
|
1130 |
-
dict(
|
1131 |
-
name='shorts_kpt7',
|
1132 |
-
id=164,
|
1133 |
-
color=[128, 128, 128],
|
1134 |
-
type='',
|
1135 |
-
swap=''),
|
1136 |
-
165:
|
1137 |
-
dict(
|
1138 |
-
name='shorts_kpt8',
|
1139 |
-
id=165,
|
1140 |
-
color=[128, 128, 128],
|
1141 |
-
type='',
|
1142 |
-
swap='shorts_kpt6'),
|
1143 |
-
166:
|
1144 |
-
dict(
|
1145 |
-
name='shorts_kpt9',
|
1146 |
-
id=166,
|
1147 |
-
color=[128, 128, 128],
|
1148 |
-
type='',
|
1149 |
-
swap='shorts_kpt5'),
|
1150 |
-
167:
|
1151 |
-
dict(
|
1152 |
-
name='shorts_kpt10',
|
1153 |
-
id=167,
|
1154 |
-
color=[128, 128, 128],
|
1155 |
-
type='',
|
1156 |
-
swap='shorts_kpt4'),
|
1157 |
-
168:
|
1158 |
-
dict(
|
1159 |
-
name='trousers_kpt1',
|
1160 |
-
id=168,
|
1161 |
-
color=[128, 0, 128],
|
1162 |
-
type='',
|
1163 |
-
swap='trousers_kpt3'),
|
1164 |
-
169:
|
1165 |
-
dict(
|
1166 |
-
name='trousers_kpt2',
|
1167 |
-
id=169,
|
1168 |
-
color=[128, 0, 128],
|
1169 |
-
type='',
|
1170 |
-
swap=''),
|
1171 |
-
170:
|
1172 |
-
dict(
|
1173 |
-
name='trousers_kpt3',
|
1174 |
-
id=170,
|
1175 |
-
color=[128, 0, 128],
|
1176 |
-
type='',
|
1177 |
-
swap='trousers_kpt1'),
|
1178 |
-
171:
|
1179 |
-
dict(
|
1180 |
-
name='trousers_kpt4',
|
1181 |
-
id=171,
|
1182 |
-
color=[128, 0, 128],
|
1183 |
-
type='',
|
1184 |
-
swap='trousers_kpt14'),
|
1185 |
-
172:
|
1186 |
-
dict(
|
1187 |
-
name='trousers_kpt5',
|
1188 |
-
id=172,
|
1189 |
-
color=[128, 0, 128],
|
1190 |
-
type='',
|
1191 |
-
swap='trousers_kpt13'),
|
1192 |
-
173:
|
1193 |
-
dict(
|
1194 |
-
name='trousers_kpt6',
|
1195 |
-
id=173,
|
1196 |
-
color=[128, 0, 128],
|
1197 |
-
type='',
|
1198 |
-
swap='trousers_kpt12'),
|
1199 |
-
174:
|
1200 |
-
dict(
|
1201 |
-
name='trousers_kpt7',
|
1202 |
-
id=174,
|
1203 |
-
color=[128, 0, 128],
|
1204 |
-
type='',
|
1205 |
-
swap='trousers_kpt11'),
|
1206 |
-
175:
|
1207 |
-
dict(
|
1208 |
-
name='trousers_kpt8',
|
1209 |
-
id=175,
|
1210 |
-
color=[128, 0, 128],
|
1211 |
-
type='',
|
1212 |
-
swap='trousers_kpt10'),
|
1213 |
-
176:
|
1214 |
-
dict(
|
1215 |
-
name='trousers_kpt9',
|
1216 |
-
id=176,
|
1217 |
-
color=[128, 0, 128],
|
1218 |
-
type='',
|
1219 |
-
swap=''),
|
1220 |
-
177:
|
1221 |
-
dict(
|
1222 |
-
name='trousers_kpt10',
|
1223 |
-
id=177,
|
1224 |
-
color=[128, 0, 128],
|
1225 |
-
type='',
|
1226 |
-
swap='trousers_kpt8'),
|
1227 |
-
178:
|
1228 |
-
dict(
|
1229 |
-
name='trousers_kpt11',
|
1230 |
-
id=178,
|
1231 |
-
color=[128, 0, 128],
|
1232 |
-
type='',
|
1233 |
-
swap='trousers_kpt7'),
|
1234 |
-
179:
|
1235 |
-
dict(
|
1236 |
-
name='trousers_kpt12',
|
1237 |
-
id=179,
|
1238 |
-
color=[128, 0, 128],
|
1239 |
-
type='',
|
1240 |
-
swap='trousers_kpt6'),
|
1241 |
-
180:
|
1242 |
-
dict(
|
1243 |
-
name='trousers_kpt13',
|
1244 |
-
id=180,
|
1245 |
-
color=[128, 0, 128],
|
1246 |
-
type='',
|
1247 |
-
swap='trousers_kpt5'),
|
1248 |
-
181:
|
1249 |
-
dict(
|
1250 |
-
name='trousers_kpt14',
|
1251 |
-
id=181,
|
1252 |
-
color=[128, 0, 128],
|
1253 |
-
type='',
|
1254 |
-
swap='trousers_kpt4'),
|
1255 |
-
182:
|
1256 |
-
dict(
|
1257 |
-
name='skirt_kpt1',
|
1258 |
-
id=182,
|
1259 |
-
color=[64, 128, 128],
|
1260 |
-
type='',
|
1261 |
-
swap='skirt_kpt3'),
|
1262 |
-
183:
|
1263 |
-
dict(
|
1264 |
-
name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
|
1265 |
-
184:
|
1266 |
-
dict(
|
1267 |
-
name='skirt_kpt3',
|
1268 |
-
id=184,
|
1269 |
-
color=[64, 128, 128],
|
1270 |
-
type='',
|
1271 |
-
swap='skirt_kpt1'),
|
1272 |
-
185:
|
1273 |
-
dict(
|
1274 |
-
name='skirt_kpt4',
|
1275 |
-
id=185,
|
1276 |
-
color=[64, 128, 128],
|
1277 |
-
type='',
|
1278 |
-
swap='skirt_kpt8'),
|
1279 |
-
186:
|
1280 |
-
dict(
|
1281 |
-
name='skirt_kpt5',
|
1282 |
-
id=186,
|
1283 |
-
color=[64, 128, 128],
|
1284 |
-
type='',
|
1285 |
-
swap='skirt_kpt7'),
|
1286 |
-
187:
|
1287 |
-
dict(
|
1288 |
-
name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
|
1289 |
-
188:
|
1290 |
-
dict(
|
1291 |
-
name='skirt_kpt7',
|
1292 |
-
id=188,
|
1293 |
-
color=[64, 128, 128],
|
1294 |
-
type='',
|
1295 |
-
swap='skirt_kpt5'),
|
1296 |
-
189:
|
1297 |
-
dict(
|
1298 |
-
name='skirt_kpt8',
|
1299 |
-
id=189,
|
1300 |
-
color=[64, 128, 128],
|
1301 |
-
type='',
|
1302 |
-
swap='skirt_kpt4'),
|
1303 |
-
190:
|
1304 |
-
dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
|
1305 |
-
191:
|
1306 |
-
dict(
|
1307 |
-
name='ssd_kpt2',
|
1308 |
-
id=191,
|
1309 |
-
color=[64, 64, 128],
|
1310 |
-
type='',
|
1311 |
-
swap='ssd_kpt6'),
|
1312 |
-
192:
|
1313 |
-
dict(
|
1314 |
-
name='ssd_kpt3',
|
1315 |
-
id=192,
|
1316 |
-
color=[64, 64, 128],
|
1317 |
-
type='',
|
1318 |
-
swap='ssd_kpt5'),
|
1319 |
-
193:
|
1320 |
-
dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
|
1321 |
-
194:
|
1322 |
-
dict(
|
1323 |
-
name='ssd_kpt5',
|
1324 |
-
id=194,
|
1325 |
-
color=[64, 64, 128],
|
1326 |
-
type='',
|
1327 |
-
swap='ssd_kpt3'),
|
1328 |
-
195:
|
1329 |
-
dict(
|
1330 |
-
name='ssd_kpt6',
|
1331 |
-
id=195,
|
1332 |
-
color=[64, 64, 128],
|
1333 |
-
type='',
|
1334 |
-
swap='ssd_kpt2'),
|
1335 |
-
196:
|
1336 |
-
dict(
|
1337 |
-
name='ssd_kpt7',
|
1338 |
-
id=196,
|
1339 |
-
color=[64, 64, 128],
|
1340 |
-
type='',
|
1341 |
-
swap='ssd_kpt29'),
|
1342 |
-
197:
|
1343 |
-
dict(
|
1344 |
-
name='ssd_kpt8',
|
1345 |
-
id=197,
|
1346 |
-
color=[64, 64, 128],
|
1347 |
-
type='',
|
1348 |
-
swap='ssd_kpt28'),
|
1349 |
-
198:
|
1350 |
-
dict(
|
1351 |
-
name='ssd_kpt9',
|
1352 |
-
id=198,
|
1353 |
-
color=[64, 64, 128],
|
1354 |
-
type='',
|
1355 |
-
swap='ssd_kpt27'),
|
1356 |
-
199:
|
1357 |
-
dict(
|
1358 |
-
name='ssd_kpt10',
|
1359 |
-
id=199,
|
1360 |
-
color=[64, 64, 128],
|
1361 |
-
type='',
|
1362 |
-
swap='ssd_kpt26'),
|
1363 |
-
200:
|
1364 |
-
dict(
|
1365 |
-
name='ssd_kpt11',
|
1366 |
-
id=200,
|
1367 |
-
color=[64, 64, 128],
|
1368 |
-
type='',
|
1369 |
-
swap='ssd_kpt25'),
|
1370 |
-
201:
|
1371 |
-
dict(
|
1372 |
-
name='ssd_kpt12',
|
1373 |
-
id=201,
|
1374 |
-
color=[64, 64, 128],
|
1375 |
-
type='',
|
1376 |
-
swap='ssd_kpt24'),
|
1377 |
-
202:
|
1378 |
-
dict(
|
1379 |
-
name='ssd_kpt13',
|
1380 |
-
id=202,
|
1381 |
-
color=[64, 64, 128],
|
1382 |
-
type='',
|
1383 |
-
swap='ssd_kpt23'),
|
1384 |
-
203:
|
1385 |
-
dict(
|
1386 |
-
name='ssd_kpt14',
|
1387 |
-
id=203,
|
1388 |
-
color=[64, 64, 128],
|
1389 |
-
type='',
|
1390 |
-
swap='ssd_kpt22'),
|
1391 |
-
204:
|
1392 |
-
dict(
|
1393 |
-
name='ssd_kpt15',
|
1394 |
-
id=204,
|
1395 |
-
color=[64, 64, 128],
|
1396 |
-
type='',
|
1397 |
-
swap='ssd_kpt21'),
|
1398 |
-
205:
|
1399 |
-
dict(
|
1400 |
-
name='ssd_kpt16',
|
1401 |
-
id=205,
|
1402 |
-
color=[64, 64, 128],
|
1403 |
-
type='',
|
1404 |
-
swap='ssd_kpt20'),
|
1405 |
-
206:
|
1406 |
-
dict(
|
1407 |
-
            name='ssd_kpt17',
            id=206,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt19'),
        207:
        dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
        208:
        dict(
            name='ssd_kpt19',
            id=208,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt17'),
        209:
        dict(
            name='ssd_kpt20',
            id=209,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt16'),
        210:
        dict(
            name='ssd_kpt21',
            id=210,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt15'),
        211:
        dict(
            name='ssd_kpt22',
            id=211,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt14'),
        212:
        dict(
            name='ssd_kpt23',
            id=212,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt13'),
        213:
        dict(
            name='ssd_kpt24',
            id=213,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt12'),
        214:
        dict(
            name='ssd_kpt25',
            id=214,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt11'),
        215:
        dict(
            name='ssd_kpt26',
            id=215,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt10'),
        216:
        dict(
            name='ssd_kpt27',
            id=216,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt9'),
        217:
        dict(
            name='ssd_kpt28',
            id=217,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt8'),
        218:
        dict(
            name='ssd_kpt29',
            id=218,
            color=[64, 64, 128],
            type='',
            swap='ssd_kpt7'),
        219:
        dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
        220:
        dict(
            name='lsd_kpt2',
            id=220,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt6'),
        221:
        dict(
            name='lsd_kpt3',
            id=221,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt5'),
        222:
        dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
        223:
        dict(
            name='lsd_kpt5',
            id=223,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt3'),
        224:
        dict(
            name='lsd_kpt6',
            id=224,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt2'),
        225:
        dict(
            name='lsd_kpt7',
            id=225,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt37'),
        226:
        dict(
            name='lsd_kpt8',
            id=226,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt36'),
        227:
        dict(
            name='lsd_kpt9',
            id=227,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt35'),
        228:
        dict(
            name='lsd_kpt10',
            id=228,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt34'),
        229:
        dict(
            name='lsd_kpt11',
            id=229,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt33'),
        230:
        dict(
            name='lsd_kpt12',
            id=230,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt32'),
        231:
        dict(
            name='lsd_kpt13',
            id=231,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt31'),
        232:
        dict(
            name='lsd_kpt14',
            id=232,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt30'),
        233:
        dict(
            name='lsd_kpt15',
            id=233,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt29'),
        234:
        dict(
            name='lsd_kpt16',
            id=234,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt28'),
        235:
        dict(
            name='lsd_kpt17',
            id=235,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt27'),
        236:
        dict(
            name='lsd_kpt18',
            id=236,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt26'),
        237:
        dict(
            name='lsd_kpt19',
            id=237,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt25'),
        238:
        dict(
            name='lsd_kpt20',
            id=238,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt24'),
        239:
        dict(
            name='lsd_kpt21',
            id=239,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt23'),
        240:
        dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
        241:
        dict(
            name='lsd_kpt23',
            id=241,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt21'),
        242:
        dict(
            name='lsd_kpt24',
            id=242,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt20'),
        243:
        dict(
            name='lsd_kpt25',
            id=243,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt19'),
        244:
        dict(
            name='lsd_kpt26',
            id=244,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt18'),
        245:
        dict(
            name='lsd_kpt27',
            id=245,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt17'),
        246:
        dict(
            name='lsd_kpt28',
            id=246,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt16'),
        247:
        dict(
            name='lsd_kpt29',
            id=247,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt15'),
        248:
        dict(
            name='lsd_kpt30',
            id=248,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt14'),
        249:
        dict(
            name='lsd_kpt31',
            id=249,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt13'),
        250:
        dict(
            name='lsd_kpt32',
            id=250,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt12'),
        251:
        dict(
            name='lsd_kpt33',
            id=251,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt11'),
        252:
        dict(
            name='lsd_kpt34',
            id=252,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt10'),
        253:
        dict(
            name='lsd_kpt35',
            id=253,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt9'),
        254:
        dict(
            name='lsd_kpt36',
            id=254,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt8'),
        255:
        dict(
            name='lsd_kpt37',
            id=255,
            color=[128, 64, 0],
            type='',
            swap='lsd_kpt7'),
        256:
        dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
        257:
        dict(
            name='vd_kpt2',
            id=257,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt6'),
        258:
        dict(
            name='vd_kpt3',
            id=258,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt5'),
        259:
        dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
        260:
        dict(
            name='vd_kpt5',
            id=260,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt3'),
        261:
        dict(
            name='vd_kpt6',
            id=261,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt2'),
        262:
        dict(
            name='vd_kpt7',
            id=262,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt19'),
        263:
        dict(
            name='vd_kpt8',
            id=263,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt18'),
        264:
        dict(
            name='vd_kpt9',
            id=264,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt17'),
        265:
        dict(
            name='vd_kpt10',
            id=265,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt16'),
        266:
        dict(
            name='vd_kpt11',
            id=266,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt15'),
        267:
        dict(
            name='vd_kpt12',
            id=267,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt14'),
        268:
        dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
        269:
        dict(
            name='vd_kpt14',
            id=269,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt12'),
        270:
        dict(
            name='vd_kpt15',
            id=270,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt11'),
        271:
        dict(
            name='vd_kpt16',
            id=271,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt10'),
        272:
        dict(
            name='vd_kpt17',
            id=272,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt9'),
        273:
        dict(
            name='vd_kpt18',
            id=273,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt8'),
        274:
        dict(
            name='vd_kpt19',
            id=274,
            color=[128, 64, 255],
            type='',
            swap='vd_kpt7'),
        275:
        dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
        276:
        dict(
            name='sd_kpt2',
            id=276,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt6'),
        277:
        dict(
            name='sd_kpt3',
            id=277,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt5'),
        278:
        dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
        279:
        dict(
            name='sd_kpt5',
            id=279,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt3'),
        280:
        dict(
            name='sd_kpt6',
            id=280,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt2'),
        281:
        dict(
            name='sd_kpt7',
            id=281,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt19'),
        282:
        dict(
            name='sd_kpt8',
            id=282,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt18'),
        283:
        dict(
            name='sd_kpt9',
            id=283,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt17'),
        284:
        dict(
            name='sd_kpt10',
            id=284,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt16'),
        285:
        dict(
            name='sd_kpt11',
            id=285,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt15'),
        286:
        dict(
            name='sd_kpt12',
            id=286,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt14'),
        287:
        dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
        288:
        dict(
            name='sd_kpt14',
            id=288,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt12'),
        289:
        dict(
            name='sd_kpt15',
            id=289,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt11'),
        290:
        dict(
            name='sd_kpt16',
            id=290,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt10'),
        291:
        dict(
            name='sd_kpt17',
            id=291,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt9'),
        292:
        dict(
            name='sd_kpt18',
            id=292,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt8'),
        293:
        dict(
            name='sd_kpt19',
            id=293,
            color=[128, 64, 0],
            type='',
            swap='sd_kpt7')
    }),
skeleton_info=dict({
|
1973 |
-
0:
|
1974 |
-
dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
|
1975 |
-
1:
|
1976 |
-
dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
|
1977 |
-
2:
|
1978 |
-
dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
|
1979 |
-
3:
|
1980 |
-
dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
|
1981 |
-
4:
|
1982 |
-
dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
|
1983 |
-
5:
|
1984 |
-
dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
|
1985 |
-
6:
|
1986 |
-
dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
|
1987 |
-
7:
|
1988 |
-
dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
|
1989 |
-
8:
|
1990 |
-
dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
|
1991 |
-
9:
|
1992 |
-
dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
|
1993 |
-
10:
|
1994 |
-
dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
|
1995 |
-
11:
|
1996 |
-
dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
|
1997 |
-
12:
|
1998 |
-
dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
|
1999 |
-
13:
|
2000 |
-
dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
|
2001 |
-
14:
|
2002 |
-
dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
|
2003 |
-
15:
|
2004 |
-
dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
|
2005 |
-
16:
|
2006 |
-
dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
|
2007 |
-
17:
|
2008 |
-
dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
|
2009 |
-
18:
|
2010 |
-
dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
|
2011 |
-
19:
|
2012 |
-
dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
|
2013 |
-
20:
|
2014 |
-
dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
|
2015 |
-
21:
|
2016 |
-
dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
|
2017 |
-
22:
|
2018 |
-
dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
|
2019 |
-
23:
|
2020 |
-
dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
|
2021 |
-
24:
|
2022 |
-
dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
|
2023 |
-
25:
|
2024 |
-
dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
|
2025 |
-
26:
|
2026 |
-
dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
|
2027 |
-
27:
|
2028 |
-
dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
|
2029 |
-
28:
|
2030 |
-
dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
|
2031 |
-
29:
|
2032 |
-
dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
|
2033 |
-
30:
|
2034 |
-
dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
|
2035 |
-
31:
|
2036 |
-
dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
|
2037 |
-
32:
|
2038 |
-
dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
|
2039 |
-
33:
|
2040 |
-
dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
|
2041 |
-
34:
|
2042 |
-
dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
|
2043 |
-
35:
|
2044 |
-
dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
|
2045 |
-
36:
|
2046 |
-
dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
|
2047 |
-
37:
|
2048 |
-
dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
|
2049 |
-
38:
|
2050 |
-
dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
|
2051 |
-
39:
|
2052 |
-
dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
|
2053 |
-
40:
|
2054 |
-
dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
|
2055 |
-
41:
|
2056 |
-
dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
|
2057 |
-
42:
|
2058 |
-
dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
|
2059 |
-
43:
|
2060 |
-
dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
|
2061 |
-
44:
|
2062 |
-
dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
|
2063 |
-
45:
|
2064 |
-
dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
|
2065 |
-
46:
|
2066 |
-
dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
|
2067 |
-
47:
|
2068 |
-
dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
|
2069 |
-
48:
|
2070 |
-
dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
|
2071 |
-
49:
|
2072 |
-
dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
|
2073 |
-
50:
|
2074 |
-
dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
|
2075 |
-
51:
|
2076 |
-
dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
|
2077 |
-
52:
|
2078 |
-
dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
|
2079 |
-
53:
|
2080 |
-
dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
|
2081 |
-
54:
|
2082 |
-
dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
|
2083 |
-
55:
|
2084 |
-
dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
|
2085 |
-
56:
|
2086 |
-
dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
|
2087 |
-
57:
|
2088 |
-
dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
|
2089 |
-
58:
|
2090 |
-
dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
|
2091 |
-
59:
|
2092 |
-
dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
|
2093 |
-
60:
|
2094 |
-
dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
|
2095 |
-
61:
|
2096 |
-
dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
|
2097 |
-
62:
|
2098 |
-
dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
|
2099 |
-
63:
|
2100 |
-
dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
|
2101 |
-
64:
|
2102 |
-
dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
|
2103 |
-
65:
|
2104 |
-
dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
|
2105 |
-
66:
|
2106 |
-
dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
|
2107 |
-
67:
|
2108 |
-
dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
|
2109 |
-
68:
|
2110 |
-
dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
|
2111 |
-
69:
|
2112 |
-
dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
|
2113 |
-
70:
|
2114 |
-
dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
|
2115 |
-
71:
|
2116 |
-
dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
|
2117 |
-
72:
|
2118 |
-
dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
|
2119 |
-
73:
|
2120 |
-
dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
|
2121 |
-
74:
|
2122 |
-
dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
|
2123 |
-
75:
|
2124 |
-
dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
|
2125 |
-
76:
|
2126 |
-
dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
|
2127 |
-
77:
|
2128 |
-
dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
|
2129 |
-
78:
|
2130 |
-
dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
|
2131 |
-
79:
|
2132 |
-
dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
|
2133 |
-
80:
|
2134 |
-
dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
|
2135 |
-
81:
|
2136 |
-
dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
|
2137 |
-
82:
|
2138 |
-
dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
|
2139 |
-
83:
|
2140 |
-
dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
|
2141 |
-
84:
|
2142 |
-
dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
|
2143 |
-
85:
|
2144 |
-
dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
|
2145 |
-
86:
|
2146 |
-
dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
|
2147 |
-
87:
|
2148 |
-
dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
|
2149 |
-
88:
|
2150 |
-
dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
|
2151 |
-
89:
|
2152 |
-
dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
|
2153 |
-
90:
|
2154 |
-
dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
|
2155 |
-
91:
|
2156 |
-
dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
|
2157 |
-
92:
|
2158 |
-
dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
|
2159 |
-
93:
|
2160 |
-
dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
|
2161 |
-
94:
|
2162 |
-
dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
|
2163 |
-
95:
|
2164 |
-
dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
|
2165 |
-
96:
|
2166 |
-
dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
|
2167 |
-
97:
|
2168 |
-
dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
|
2169 |
-
98:
|
2170 |
-
dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
|
2171 |
-
99:
|
2172 |
-
dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
|
2173 |
-
100:
|
2174 |
-
dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
|
2175 |
-
101:
|
2176 |
-
dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
|
2177 |
-
102:
|
2178 |
-
dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
|
2179 |
-
103:
|
2180 |
-
dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
|
2181 |
-
104:
|
2182 |
-
dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
|
2183 |
-
105:
|
2184 |
-
dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
|
2185 |
-
106:
|
2186 |
-
dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
|
2187 |
-
107:
|
2188 |
-
dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
|
2189 |
-
108:
|
2190 |
-
dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
|
2191 |
-
109:
|
2192 |
-
dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
|
2193 |
-
110:
|
2194 |
-
dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
|
2195 |
-
111:
|
2196 |
-
dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
|
2197 |
-
112:
|
2198 |
-
dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
|
2199 |
-
113:
|
2200 |
-
dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
|
2201 |
-
114:
|
2202 |
-
dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
|
2203 |
-
115:
|
2204 |
-
dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
|
2205 |
-
116:
|
2206 |
-
dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
|
2207 |
-
117:
|
2208 |
-
dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
|
2209 |
-
118:
|
2210 |
-
dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
|
2211 |
-
119:
|
2212 |
-
dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
|
2213 |
-
120:
|
2214 |
-
dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
|
2215 |
-
121:
|
2216 |
-
dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
|
2217 |
-
122:
|
2218 |
-
dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
|
2219 |
-
123:
|
2220 |
-
dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
|
2221 |
-
124:
|
2222 |
-
dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
|
2223 |
-
125:
|
2224 |
-
dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
|
2225 |
-
126:
|
2226 |
-
dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
|
2227 |
-
127:
|
2228 |
-
dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
|
2229 |
-
128:
|
2230 |
-
dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
|
2231 |
-
129:
|
2232 |
-
dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
|
2233 |
-
130:
|
2234 |
-
dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
|
2235 |
-
131:
|
2236 |
-
dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
|
2237 |
-
132:
|
2238 |
-
dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
|
2239 |
-
133:
|
2240 |
-
dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
|
2241 |
-
134:
|
2242 |
-
dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
|
2243 |
-
135:
|
2244 |
-
dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
|
2245 |
-
136:
|
2246 |
-
dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
|
2247 |
-
137:
|
2248 |
-
dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
|
2249 |
-
138:
|
2250 |
-
dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
|
2251 |
-
139:
|
2252 |
-
dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
|
2253 |
-
140:
|
2254 |
-
dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
|
2255 |
-
141:
|
2256 |
-
dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
|
2257 |
-
142:
|
2258 |
-
dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
|
2259 |
-
143:
|
2260 |
-
dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
|
2261 |
-
144:
|
2262 |
-
dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
|
2263 |
-
145:
|
2264 |
-
dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
|
2265 |
-
146:
|
2266 |
-
dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
|
2267 |
-
147:
|
2268 |
-
dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
|
2269 |
-
148:
|
2270 |
-
dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
|
2271 |
-
149:
|
2272 |
-
dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
|
2273 |
-
150:
|
2274 |
-
dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
|
2275 |
-
151:
|
2276 |
-
dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
|
2277 |
-
152:
|
2278 |
-
dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
|
2279 |
-
153:
|
2280 |
-
dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
|
2281 |
-
154:
|
2282 |
-
dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
|
2283 |
-
155:
|
2284 |
-
dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
|
2285 |
-
156:
|
2286 |
-
dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
|
2287 |
-
157:
|
2288 |
-
dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
|
2289 |
-
158:
|
2290 |
-
dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
|
2291 |
-
159:
|
2292 |
-
dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
|
2293 |
-
160:
|
2294 |
-
dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
|
2295 |
-
161:
|
2296 |
-
dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
|
2297 |
-
162:
|
2298 |
-
dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
|
2299 |
-
163:
|
2300 |
-
dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
|
2301 |
-
164:
|
2302 |
-
dict(
|
2303 |
-
link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128,
|
2304 |
-
128]),
|
2305 |
-
165:
|
2306 |
-
dict(
|
2307 |
-
link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128,
|
2308 |
-
128]),
|
2309 |
-
166:
|
2310 |
-
dict(
|
2311 |
-
link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128,
|
2312 |
-
128]),
|
2313 |
-
167:
|
2314 |
-
dict(
|
2315 |
-
link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128,
|
2316 |
-
128]),
|
2317 |
-
168:
|
2318 |
-
dict(
|
2319 |
-
link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128,
|
2320 |
-
128]),
|
2321 |
-
169:
|
2322 |
-
dict(
|
2323 |
-
link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128,
|
2324 |
-
128]),
|
2325 |
-
170:
|
2326 |
-
dict(
|
2327 |
-
link=('shorts_kpt9', 'shorts_kpt10'),
|
2328 |
-
id=170,
|
2329 |
-
color=[128, 128, 128]),
|
2330 |
-
171:
|
2331 |
-
dict(
|
2332 |
-
link=('shorts_kpt10', 'shorts_kpt3'),
|
2333 |
-
id=171,
|
2334 |
-
color=[128, 128, 128]),
|
2335 |
-
172:
|
2336 |
-
dict(
|
2337 |
-
link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128,
|
2338 |
-
128]),
|
2339 |
-
173:
|
2340 |
-
dict(
|
2341 |
-
link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128,
|
2342 |
-
128]),
|
2343 |
-
174:
|
2344 |
-
dict(
|
2345 |
-
link=('trousers_kpt1', 'trousers_kpt4'),
|
2346 |
-
id=174,
|
2347 |
-
color=[128, 0, 128]),
|
2348 |
-
175:
|
2349 |
-
dict(
|
2350 |
-
link=('trousers_kpt4', 'trousers_kpt5'),
|
2351 |
-
id=175,
|
2352 |
-
color=[128, 0, 128]),
|
2353 |
-
176:
|
2354 |
-
dict(
|
2355 |
-
link=('trousers_kpt5', 'trousers_kpt6'),
|
2356 |
-
id=176,
|
2357 |
-
color=[128, 0, 128]),
|
2358 |
-
177:
|
2359 |
-
dict(
|
2360 |
-
link=('trousers_kpt6', 'trousers_kpt7'),
|
2361 |
-
id=177,
|
2362 |
-
color=[128, 0, 128]),
|
2363 |
-
178:
|
2364 |
-
dict(
|
2365 |
-
link=('trousers_kpt7', 'trousers_kpt8'),
|
2366 |
-
id=178,
|
2367 |
-
color=[128, 0, 128]),
|
2368 |
-
179:
|
2369 |
-
dict(
|
2370 |
-
link=('trousers_kpt8', 'trousers_kpt9'),
|
2371 |
-
id=179,
|
2372 |
-
color=[128, 0, 128]),
|
2373 |
-
180:
|
2374 |
-
dict(
|
2375 |
-
link=('trousers_kpt9', 'trousers_kpt10'),
|
2376 |
-
id=180,
|
2377 |
-
color=[128, 0, 128]),
|
2378 |
-
181:
|
2379 |
-
dict(
|
2380 |
-
link=('trousers_kpt10', 'trousers_kpt11'),
|
2381 |
-
id=181,
|
2382 |
-
color=[128, 0, 128]),
|
2383 |
-
182:
|
2384 |
-
dict(
|
2385 |
-
link=('trousers_kpt11', 'trousers_kpt12'),
|
2386 |
-
id=182,
|
2387 |
-
color=[128, 0, 128]),
|
2388 |
-
183:
|
2389 |
-
dict(
|
2390 |
-
link=('trousers_kpt12', 'trousers_kpt13'),
|
2391 |
-
id=183,
|
2392 |
-
color=[128, 0, 128]),
|
2393 |
-
184:
|
2394 |
-
dict(
|
2395 |
-
link=('trousers_kpt13', 'trousers_kpt14'),
|
2396 |
-
id=184,
|
2397 |
-
color=[128, 0, 128]),
|
2398 |
-
185:
|
2399 |
-
dict(
|
2400 |
-
link=('trousers_kpt14', 'trousers_kpt3'),
|
2401 |
-
id=185,
|
2402 |
-
color=[128, 0, 128]),
|
2403 |
-
186:
|
2404 |
-
dict(
|
2405 |
-
link=('trousers_kpt3', 'trousers_kpt2'),
|
2406 |
-
id=186,
|
2407 |
-
color=[128, 0, 128]),
|
2408 |
-
187:
|
2409 |
-
dict(
|
2410 |
-
link=('trousers_kpt2', 'trousers_kpt1'),
|
2411 |
-
id=187,
|
2412 |
-
color=[128, 0, 128]),
|
2413 |
        188:
        dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
        189:
        dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
        190:
        dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
        191:
        dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
        192:
        dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
        193:
        dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
        194:
        dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
        195:
        dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
        196:
        dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
        197:
        dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
        198:
        dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
        199:
        dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
        200:
        dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
        201:
        dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
        202:
        dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
        203:
        dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
        204:
        dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
        205:
        dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
        206:
        dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
        207:
        dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
        208:
        dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
        209:
        dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
        210:
        dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
        211:
        dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
        212:
        dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
        213:
        dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
        214:
        dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
        215:
        dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
        216:
        dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
        217:
        dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
        218:
        dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
        219:
        dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
        220:
        dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
        221:
        dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
        222:
        dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
        223:
        dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
        224:
        dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
        225:
        dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
        226:
        dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
        227:
        dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
        228:
        dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
        229:
        dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
        230:
        dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
        231:
        dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
        232:
        dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
        233:
        dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
        234:
        dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
        235:
        dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
        236:
        dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
        237:
        dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
        238:
        dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
        239:
        dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
        240:
        dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
        241:
        dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
        242:
        dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
        243:
        dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
        244:
        dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
        245:
        dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
        246:
        dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
        247:
        dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
        248:
        dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
        249:
        dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
        250:
        dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
        251:
        dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
        252:
        dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
        253:
        dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
        254:
        dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
        255:
        dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
        256:
        dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
        257:
        dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
        258:
        dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
        259:
        dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
        260:
        dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
        261:
        dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
        262:
        dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
        263:
        dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
        264:
        dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
        265:
        dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
        266:
        dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
        267:
        dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
        268:
        dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
        269:
        dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
        270:
        dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
        271:
        dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
        272:
        dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
        273:
        dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
        274:
        dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
        275:
        dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
        276:
        dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
        277:
        dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
        278:
        dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
        279:
        dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
        280:
        dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
        281:
        dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
        282:
        dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
        283:
        dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
        284:
        dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
        285:
        dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
        286:
        dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
        287:
        dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
        288:
        dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
        289:
        dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
        290:
        dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
        291:
        dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
        292:
        dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
        293:
        dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
        294:
        dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
        295:
        dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
        296:
        dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
        297:
        dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
        298:
        dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
        299:
        dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
        300:
        dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
        301:
        dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
        302:
        dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
        303:
        dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
    }),
    joint_weights=[
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
    ],
    sigmas=[])
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
    dict(
        type='MultiStepLR',
        begin=0,
        end=210,
        milestones=[100, 160],
        gamma=0.1,
        by_epoch=True)
]
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
auto_scale_lr = dict(base_batch_size=512)
dataset_type = 'DeepFashion2Dataset'
data_mode = 'topdown'
data_root = 'data/deepfashion2/'
codec = dict(
    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(
        type='RandomBBoxTransform',
        shift_prob=0,
        rotate_factor=60,
        scale_factor=(0.75, 1.25)),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(
        type='GenerateTarget',
        encoder=dict(
            type='MSRAHeatmap',
            input_size=(192, 256),
            heatmap_size=(48, 64),
            sigma=2)),
    dict(type='PackPoseInputs')
]
val_pipeline = [
    dict(type='LoadImage', backend_args=dict(backend='local')),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(type='PackPoseInputs')
]
train_dataloader = dict(
    batch_size=64,
    num_workers=6,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='train/deepfashion2_shorts.json',
        data_prefix=dict(img='train/image/'),
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='RandomFlip', direction='horizontal'),
            dict(
                type='RandomBBoxTransform',
                shift_prob=0,
                rotate_factor=60,
                scale_factor=(0.75, 1.25)),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(
                type='GenerateTarget',
                encoder=dict(
                    type='MSRAHeatmap',
                    input_size=(192, 256),
                    heatmap_size=(48, 64),
                    sigma=2)),
            dict(type='PackPoseInputs')
        ]))
val_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_shorts.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
test_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_shorts.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
channel_cfg = dict(
    num_output_channels=294,
    dataset_joints=294,
    dataset_channel=[[
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
        56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
        74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
        92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
        108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
        122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
        136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
        150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
        164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
        178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
        192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
        206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
        220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
        234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
        248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
        262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
        276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
        290, 291, 292, 293
    ]],
    inference_channel=[
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
        56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
        74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
        92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
        108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
        122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
        136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
        150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
        164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
        178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
        192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
        206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
        220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
        234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
        248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
        262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
        276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
        290, 291, 292, 293
    ])
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    head=dict(
        type='HeatmapHead',
        in_channels=2048,
        out_channels=294,
        loss=dict(type='KeypointMSELoss', use_target_weight=True),
        decoder=dict(
            type='MSRAHeatmap',
            input_size=(192, 256),
            heatmap_size=(48, 64),
            sigma=2)),
    test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
val_evaluator = [
    dict(type='PCKAccuracy', thr=0.2),
    dict(type='AUC'),
    dict(type='EPE')
]
test_evaluator = [
    dict(type='PCKAccuracy', thr=0.2),
    dict(type='AUC'),
    dict(type='EPE')
]
launcher = 'pytorch'
work_dir = './work_dirs/td_hm_res50_4xb64-210e_deepfashion2_shorts_256x192'
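The `MSRAHeatmap` codec settings used throughout this config pair an input size of (192, 256) with a heatmap size of (48, 64), i.e. a fixed stride of 4 in each dimension. A minimal sanity-check sketch (not part of the config; the helper name is hypothetical):

```python
# Sanity check of the codec geometry assumed by this config:
# input_size=(192, 256) and heatmap_size=(48, 64) imply a stride of 4.
input_size = (192, 256)   # (width, height), as in the config
heatmap_size = (48, 64)

stride_w = input_size[0] // heatmap_size[0]
stride_h = input_size[1] // heatmap_size[1]

def to_heatmap_coords(x, y, sw=stride_w, sh=stride_h):
    """Map an image-space keypoint to heatmap-space (illustrative only)."""
    return x / sw, y / sh

print(stride_w, stride_h)          # 4 4
print(to_heatmap_coords(96, 128))  # (24.0, 32.0)
```

This is why `TopdownAffine` and `GenerateTarget` must agree on `input_size`: a mismatch would shift every decoded keypoint by the stride ratio.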
spaces/Abhilashvj/planogram-compliance/app_utils.py (deleted, 196 lines)
import glob
import json
import os
import xml.etree.ElementTree as ET

import cv2

# from sklearn.externals import joblib
import joblib
import numpy as np
import pandas as pd

# from .variables import old_ocr_req_cols
# from .skew_correction import PageSkewWraper

const_HW = 1.294117647
const_W = 600
# https://www.forbes.com/sites/forbestechcouncil/2020/06/02/leveraging-technologies-to-align-realograms-and-planograms-for-grocery/?sh=506b8b78e86c

# References on contour sorting and bounding boxes:
# https://stackoverflow.com/questions/39403183/python-opencv-sorting-contours
# http://devdoc.net/linux/OpenCV-3.2.0/da/d0c/tutorial_bounding_rects_circles.html
# https://stackoverflow.com/questions/10297713/find-contour-of-the-set-of-points-in-opencv
# https://stackoverflow.com/questions/16538774/dealing-with-contours-and-bounding-rectangle-in-opencv-2-4-python-2-7
# https://stackoverflow.com/questions/50308055/creating-bounding-boxes-for-contours
# https://stackoverflow.com/questions/57296398/how-can-i-get-better-results-of-bounding-box-using-find-contours-of-opencv
# http://amroamroamro.github.io/mexopencv/opencv/generalContours_demo1.html
# https://gist.github.com/bigsnarfdude/d811e31ee17495f82f10db12651ae82d
# http://man.hubwiz.com/docset/OpenCV.docset/Contents/Resources/Documents/da/d0c/tutorial_bounding_rects_circles.html
# https://www.analyticsvidhya.com/blog/2021/05/document-layout-detection-and-ocr-with-detectron2/
# https://colab.research.google.com/drive/1m6gaQF6Q4M0IaSjoo_4jWllKJjK-i6fw?usp=sharing#scrollTo=lEyl3wYKHAe1
# https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
# https://www.pyimagesearch.com/2016/03/21/ordering-coordinates-clockwise-with-python-and-opencv/


def bucket_sort(df, colmn, ymax_col="ymax", ymin_col="ymin"):
    """Assign a line number to each box, grouping boxes whose vertical
    extent overlaps the first box of the current line."""
    df["line_number"] = 0
    colmn.append("line_number")
    array_value = df[colmn].values
    start_index = Line_counter = counter = 0
    ymax, ymin, line_no = (
        colmn.index(ymax_col),
        colmn.index(ymin_col),
        colmn.index("line_number"),
    )
    while counter < len(array_value):
        current_ymax = array_value[start_index][ymax]
        for next_index in range(start_index, len(array_value)):
            counter += 1
            next_ymin = array_value[next_index][ymin]
            next_ymax = array_value[next_index][ymax]
            if current_ymax > next_ymin:
                array_value[next_index][line_no] = Line_counter + 1
                # if current_ymax < next_ymax:
                #     current_ymax = next_ymax
            else:
                counter -= 1
                break
        start_index = counter
        Line_counter += 1
    return pd.DataFrame(array_value, columns=colmn)

def do_sorting(df):
|
70 |
-
df.sort_values(["ymin", "xmin"], ascending=True, inplace=True)
|
71 |
-
df["idx"] = df.index
|
72 |
-
if "line_number" in df.columns:
|
73 |
-
print("line number removed")
|
74 |
-
df.drop("line_number", axis=1, inplace=True)
|
75 |
-
req_colns = ["xmin", "ymin", "xmax", "ymax", "idx"]
|
76 |
-
temp_df = df.copy()
|
77 |
-
temp = bucket_sort(temp_df.copy(), req_colns)
|
78 |
-
df = df.merge(temp[["idx", "line_number"]], on="idx")
|
79 |
-
df.sort_values(["line_number", "xmin"], ascending=True, inplace=True)
|
80 |
-
df = df.reset_index(drop=True)
|
81 |
-
df = df.reset_index(drop=True)
|
82 |
-
return df
|
83 |
-
|
84 |
-
|
85 |
-
def xml_to_csv(xml_file):
|
86 |
-
# https://gist.github.com/rotemtam/88d9a4efae243fc77ed4a0f9917c8f6c
|
87 |
-
xml_list = []
|
88 |
-
# for xml_file in glob.glob(path + '/*.xml'):
|
89 |
-
# https://discuss.streamlit.io/t/unable-to-read-files-using-standard-file-uploader/2258/2
|
90 |
-
tree = ET.parse(xml_file)
|
91 |
-
root = tree.getroot()
|
92 |
-
for member in root.findall("object"):
|
93 |
-
bbx = member.find("bndbox")
|
94 |
-
xmin = int(bbx.find("xmin").text)
|
95 |
-
ymin = int(bbx.find("ymin").text)
|
96 |
-
xmax = int(bbx.find("xmax").text)
|
97 |
-
ymax = int(bbx.find("ymax").text)
|
98 |
-
label = member.find("name").text
|
99 |
-
|
100 |
-
value = (
|
101 |
-
root.find("filename").text,
|
102 |
-
int(root.find("size")[0].text),
|
103 |
-
int(root.find("size")[1].text),
|
104 |
-
label,
|
105 |
-
xmin,
|
106 |
-
ymin,
|
107 |
-
xmax,
|
108 |
-
ymax,
|
109 |
-
)
|
110 |
-
xml_list.append(value)
|
111 |
-
column_name = [
|
112 |
-
"filename",
|
113 |
-
"width",
|
114 |
-
"height",
|
115 |
-
"cls",
|
116 |
-
"xmin",
|
117 |
-
"ymin",
|
118 |
-
"xmax",
|
119 |
-
"ymax",
|
120 |
-
]
|
121 |
-
xml_df = pd.DataFrame(xml_list, columns=column_name)
|
122 |
-
return xml_df
|
123 |
-
|
124 |
-
|
125 |
-
# def annotate_planogram_compliance(img0, sorted_xml_df, wrong_indexes, target_names):
|
126 |
-
# # annotator = Annotator(img0, line_width=3, pil=True)
|
127 |
-
# det = sorted_xml_df[['xmin', 'ymin', 'xmax', 'ymax','cls']].values
|
128 |
-
# # det[:, :4] = scale_coords((640, 640), det[:, :4], img0.shape).round()
|
129 |
-
# for i, (*xyxy, cls) in enumerate(det):
|
130 |
-
|
131 |
-
# c = int(cls) # integer class
|
132 |
-
|
133 |
-
# if i in wrong_indexes:
|
134 |
-
# # print(xyxy, "Wrong detection", (255, 0, 0))
|
135 |
-
# label = "Wrong detection"
|
136 |
-
# color = (0,0,255)
|
137 |
-
# else:
|
138 |
-
# # print(xyxy, label, (0, 255, 0))
|
139 |
-
# label = f'{target_names[c]}'
|
140 |
-
# color = (0,255, 0)
|
141 |
-
# org = (int(xyxy[0]), int(xyxy[1]) )
|
142 |
-
# top_left = org
|
143 |
-
# bottom_right = (int(xyxy[2]), int(xyxy[3]))
|
144 |
-
# # print("#"*50)
|
145 |
-
# # print(f"Anooatting cv2 rectangle with shape: { img0.shape}, top left: { top_left}, bottom right: { bottom_right} , color : { color }, thickness: {3}, cv2.LINE_8")
|
146 |
-
# # print("#"*50)
|
147 |
-
# cv2.rectangle(img0, top_left, bottom_right , color, 3, cv2.LINE_8)
|
148 |
-
|
149 |
-
# cv2.putText(img0, label, tuple(org), cv2. FONT_HERSHEY_SIMPLEX , 0.5, color)
|
150 |
-
|
151 |
-
# return img0
|
152 |
-
|
153 |
-
|
154 |
-
def annotate_planogram_compliance(
|
155 |
-
img0, sorted_df, correct_indexes, wrong_indexes, target_names
|
156 |
-
):
|
157 |
-
# annotator = Annotator(img0, line_width=3, pil=True)
|
158 |
-
det = sorted_df[["xmin", "ymin", "xmax", "ymax", "cls"]].values
|
159 |
-
# det[:, :4] = scale_coords((640, 640), det[:, :4], img0.shape).round()
|
160 |
-
for x, y in zip(*correct_indexes):
|
161 |
-
try:
|
162 |
-
row = sorted_df[sorted_df["line_number"] == x + 1].iloc[y]
|
163 |
-
xyxy = row[["xmin", "ymin", "xmax", "ymax"]].values
|
164 |
-
label = f'{target_names[row["cls"]]}'
|
165 |
-
color = (0, 255, 0)
|
166 |
-
# org = (int(xyxy[0]), int(xyxy[1]) )
|
167 |
-
top_left = (int(row["xmin"]), int(row["ymin"]))
|
168 |
-
bottom_right = (int(row["xmax"]), int(row["ymax"]))
|
169 |
-
cv2.rectangle(img0, top_left, bottom_right, color, 3, cv2.LINE_8)
|
170 |
-
|
171 |
-
cv2.putText(
|
172 |
-
img0, label, top_left, cv2.FONT_HERSHEY_SIMPLEX, 0.5, color
|
173 |
-
)
|
174 |
-
except Exception as e:
|
175 |
-
print("Error: " + str(e))
|
176 |
-
continue
|
177 |
-
|
178 |
-
for x, y in zip(*wrong_indexes):
|
179 |
-
try:
|
180 |
-
row = sorted_df[sorted_df["line_number"] == x + 1].iloc[y]
|
181 |
-
xyxy = row[["xmin", "ymin", "xmax", "ymax"]].values
|
182 |
-
label = f'{target_names[row["cls"]]}'
|
183 |
-
color = (0, 0, 255)
|
184 |
-
# org = (int(xyxy[0]), int(xyxy[1]) )
|
185 |
-
top_left = (row["xmin"], row["ymin"])
|
186 |
-
bottom_right = (row["xmax"], row["ymax"])
|
187 |
-
cv2.rectangle(img0, top_left, bottom_right, color, 3, cv2.LINE_8)
|
188 |
-
|
189 |
-
cv2.putText(
|
190 |
-
img0, label, top_left, cv2.FONT_HERSHEY_SIMPLEX, 0.5, color
|
191 |
-
)
|
192 |
-
except Exception as e:
|
193 |
-
print("Error: " + str(e))
|
194 |
-
continue
|
195 |
-
|
196 |
-
return img0
|
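The deleted `xml_to_csv` above flattens Pascal-VOC annotation XML into one row per bounding box. A minimal, self-contained sketch of the same parsing step, using a hypothetical sample annotation and no pandas dependency:

```python
import xml.etree.ElementTree as ET

# Hypothetical Pascal-VOC style annotation, mirroring what xml_to_csv expects.
SAMPLE_XML = """
<annotation>
  <filename>shelf.jpg</filename>
  <size><width>600</width><height>776</height></size>
  <object>
    <name>cereal</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>
"""

def voc_to_rows(xml_text):
    """Flatten one VOC annotation into (filename, width, height, cls, xmin, ymin, xmax, ymax) rows."""
    root = ET.fromstring(xml_text)
    rows = []
    for member in root.findall("object"):
        bbx = member.find("bndbox")
        rows.append((
            root.find("filename").text,
            int(root.find("size/width").text),
            int(root.find("size/height").text),
            member.find("name").text,
            int(bbx.find("xmin").text),
            int(bbx.find("ymin").text),
            int(bbx.find("xmax").text),
            int(bbx.find("ymax").text),
        ))
    return rows

print(voc_to_rows(SAMPLE_XML))
# → [('shelf.jpg', 600, 776, 'cereal', 10, 20, 110, 220)]
```

The original indexes `root.find("size")[0]` and `[1]` positionally; the sketch uses explicit `size/width` and `size/height` paths, which is equivalent for standard VOC files where width precedes height.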
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Chatgpt4Online.py
DELETED
@@ -1,39 +0,0 @@
-from __future__ import annotations
-
-import json
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-
-class Chatgpt4Online(AsyncGeneratorProvider):
-    url = "https://chatgpt4online.org"
-    supports_gpt_35_turbo = True
-    working = True
-
-    @classmethod
-    async def create_async_generator(
-        cls,
-        model: str,
-        messages: list[dict[str, str]],
-        **kwargs
-    ) -> AsyncGenerator:
-        async with ClientSession() as session:
-            data = {
-                "botId": "default",
-                "customId": None,
-                "session": "N/A",
-                "chatId": "",
-                "contextId": 58,
-                "messages": messages,
-                "newMessage": messages[-1]["content"],
-                "stream": True
-            }
-            async with session.post(cls.url + "/wp-json/mwai-ui/v1/chats/submit", json=data) as response:
-                response.raise_for_status()
-                async for line in response.content:
-                    if line.startswith(b"data: "):
-                        line = json.loads(line[6:])
-                        if line["type"] == "live":
-                            yield line["data"]
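The provider above reads the response stream line by line, keeps only `data:`-prefixed SSE lines, and yields the payload of events whose `type` is `"live"`. That filtering step can be sketched in isolation over a hypothetical byte stream, with no aiohttp dependency:

```python
import json

def extract_live_chunks(lines):
    """Yield text chunks from 'data: {...}' SSE lines, keeping only type == 'live' events."""
    for line in lines:
        if line.startswith(b"data: "):
            event = json.loads(line[6:])  # strip the 6-byte 'data: ' prefix
            if event["type"] == "live":
                yield event["data"]

# Hypothetical stream: two live chunks, one comment line, one terminal event.
stream = [
    b'data: {"type": "live", "data": "Hel"}',
    b': keep-alive comment, ignored',
    b'data: {"type": "live", "data": "lo"}',
    b'data: {"type": "end", "data": ""}',
]
print("".join(extract_live_chunks(stream)))  # → Hello
```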
spaces/AgentVerse/agentVerse/agentverse/__init__.py
DELETED
@@ -1,24 +0,0 @@
-from .output_parser import output_parser_registry
-from .environments import env_registry
-from .environments.simulation_env.rules.order import order_registry
-from .environments.simulation_env.rules.describer import describer_registry
-from .environments.simulation_env.rules.selector import selector_registry
-from .environments.simulation_env.rules.updater import updater_registry
-from .environments.simulation_env.rules.visibility import visibility_registry
-
-
-from .environments.tasksolving_env.rules.decision_maker import decision_maker_registry
-from .environments.tasksolving_env.rules.evaluator import evaluator_registry
-from .environments.tasksolving_env.rules.executor import executor_registry
-from .environments.tasksolving_env.rules.role_assigner import role_assigner_registry
-
-
-from .simulation import Simulation
-from .tasksolving import TaskSolving
-from .initialization import (
-    prepare_task_config,
-    load_agent,
-    load_environment,
-    load_llm,
-    load_memory,
-)
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/methods/ExpandSubMenu.js
DELETED
@@ -1,40 +0,0 @@
-var ExpandSubMenu = function (parentButton, items) {
-    var subMenu = this.childrenMap.subMenu;
-    // Submenu already expand
-    if (subMenu && subMenu.parentButton === parentButton) {
-        return this;
-    }
-
-    this.collapseSubMenu();
-
-    var orientation
-    if (this.root.toggleOrientation) {
-        orientation = (this.orientation === 0) ? 1 : 0;
-    } else {
-        orientation = this.orientation;
-    }
-
-    var subMenu = new this.constructor(this.scene, {
-        items: items,
-        orientation: orientation,
-        space: this.space,
-
-        createBackgroundCallback: this.root.createBackgroundCallback,
-        createBackgroundCallbackScope: this.root.createBackgroundCallbackScope,
-        createButtonCallback: this.root.createButtonCallback,
-        createButtonCallbackScope: this.root.createButtonCallbackScope,
-        easeIn: this.root.easeIn,
-        easeOut: this.root.easeOut,
-
-        _rootMenu: this.root,
-        _parentMenu: this,
-        _parentButton: parentButton
-    });
-
-    this.pin(subMenu);
-    this.childrenMap.subMenu = subMenu;
-    this.root.emit('expand', subMenu, parentButton, this);
-    return this;
-}
-
-export default ExpandSubMenu;
spaces/AlanMars/QYL-AI-Space/modules/models/configuration_moss.py
DELETED
@@ -1,118 +0,0 @@
-""" Moss model configuration"""
-
-from transformers.utils import logging
-from transformers.configuration_utils import PretrainedConfig
-
-
-logger = logging.get_logger(__name__)
-
-
-class MossConfig(PretrainedConfig):
-    r"""
-    This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a
-    Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration
-    with the defaults will yield a similar configuration to that of the Moss
-    [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects
-    inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from
-    [`PretrainedConfig`] for more information.
-
-    Args:
-        vocab_size (`int`, *optional*, defaults to 107008):
-            Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the
-            `inputs_ids` passed when calling [`MossModel`].
-        n_positions (`int`, *optional*, defaults to 2048):
-            The maximum sequence length that this model might ever be used with. Typically set this to something large
-            just in case (e.g., 512 or 1024 or 2048).
-        n_embd (`int`, *optional*, defaults to 4096):
-            Dimensionality of the embeddings and hidden states.
-        n_layer (`int`, *optional*, defaults to 28):
-            Number of hidden layers in the Transformer encoder.
-        n_head (`int`, *optional*, defaults to 16):
-            Number of attention heads for each attention layer in the Transformer encoder.
-        rotary_dim (`int`, *optional*, defaults to 64):
-            Number of dimensions in the embedding that Rotary Position Embedding is applied to.
-        n_inner (`int`, *optional*, defaults to None):
-            Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd
-        activation_function (`str`, *optional*, defaults to `"gelu_new"`):
-            Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
-        resid_pdrop (`float`, *optional*, defaults to 0.1):
-            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
-        embd_pdrop (`int`, *optional*, defaults to 0.1):
-            The dropout ratio for the embeddings.
-        attn_pdrop (`float`, *optional*, defaults to 0.1):
-            The dropout ratio for the attention.
-        layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
-            The epsilon to use in the layer normalization layers.
-        initializer_range (`float`, *optional*, defaults to 0.02):
-            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
-        use_cache (`bool`, *optional*, defaults to `True`):
-            Whether or not the model should return the last key/values attentions (not used by all models).
-
-    Example:
-
-    ```python
-    >>> from modeling_moss import MossModel
-    >>> from configuration_moss import MossConfig
-
-    >>> # Initializing a moss-moon-003-base configuration
-    >>> configuration = MossConfig()
-
-    >>> # Initializing a model (with random weights) from the configuration
-    >>> model = MossModel(configuration)
-
-    >>> # Accessing the model configuration
-    >>> configuration = model.config
-    ```"""
-
-    model_type = "moss"
-    attribute_map = {
-        "max_position_embeddings": "n_positions",
-        "hidden_size": "n_embd",
-        "num_attention_heads": "n_head",
-        "num_hidden_layers": "n_layer",
-    }
-
-    def __init__(
-        self,
-        vocab_size=107008,
-        n_positions=2048,
-        n_ctx=2048,
-        n_embd=4096,
-        n_layer=28,
-        n_head=16,
-        rotary_dim=64,
-        n_inner=None,
-        activation_function="gelu_new",
-        resid_pdrop=0.0,
-        embd_pdrop=0.0,
-        attn_pdrop=0.0,
-        layer_norm_epsilon=1e-5,
-        initializer_range=0.02,
-        use_cache=True,
-        bos_token_id=106028,
-        eos_token_id=106068,
-        tie_word_embeddings=False,
-        **kwargs,
-    ):
-        self.vocab_size = vocab_size
-        self.n_ctx = n_ctx
-        self.n_positions = n_positions
-        self.n_embd = n_embd
-        self.n_layer = n_layer
-        self.n_head = n_head
-        self.n_inner = n_inner
-        self.rotary_dim = rotary_dim
-        self.activation_function = activation_function
-        self.resid_pdrop = resid_pdrop
-        self.embd_pdrop = embd_pdrop
-        self.attn_pdrop = attn_pdrop
-        self.layer_norm_epsilon = layer_norm_epsilon
-        self.initializer_range = initializer_range
-        self.use_cache = use_cache
-
-        self.bos_token_id = bos_token_id
-        self.eos_token_id = eos_token_id
-
-        super().__init__(
-            bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs
-        )
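The `attribute_map` in the config above lets callers use generic names like `hidden_size` while the class stores GPT-J-style fields like `n_embd`. A minimal stand-in for that aliasing mechanism (a simplified sketch, not the transformers `PretrainedConfig` implementation):

```python
class AliasedConfig:
    """Resolve generic attribute names to model-specific ones via attribute_map."""

    attribute_map = {
        "max_position_embeddings": "n_positions",
        "hidden_size": "n_embd",
        "num_attention_heads": "n_head",
        "num_hidden_layers": "n_layer",
    }

    def __init__(self, n_positions=2048, n_embd=4096, n_head=16, n_layer=28):
        self.n_positions = n_positions
        self.n_embd = n_embd
        self.n_head = n_head
        self.n_layer = n_layer

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails:
        # translate the alias and retry on the real field.
        mapped = type(self).attribute_map.get(name)
        if mapped is not None:
            return getattr(self, mapped)
        raise AttributeError(name)

cfg = AliasedConfig()
print(cfg.hidden_size)        # → 4096
print(cfg.num_hidden_layers)  # → 28
```

Because `__getattr__` only fires on failed lookups, direct fields like `cfg.n_embd` take the fast path and only aliases pay the indirection cost.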
spaces/Alesmikes/elvire01/app.py
DELETED
@@ -1,85 +0,0 @@
-"""
-this model only supports english since text to speech is an english only model
-"""
-import os, time
-import openai
-import gradio as gr
-from dotenv import load_dotenv
-import pinecone
-
-"""
-Connecting to Open AI API
-"""
-load_dotenv()
-openai.organization = os.getenv("OPENAI_ORG")
-openai.api_key = os.getenv("OPENAI_API_KEY")
-EMBEDDING_MODEL = "text-embedding-ada-002"
-"""
-Connecting to pincone API and assign index
-"""
-index_name = "luludemo"
-pinecone.init(
-    api_key=os.getenv("Pinecone_KEY"),
-    environment=os.getenv("Pinecone_ENV")
-)
-
-"""
-run cosin similarity to find context
-"""
-def LLMSearch(question):
-    index = pinecone.Index(index_name)
-    query = openai.Embedding.create(input=question, model=EMBEDDING_MODEL)["data"][0]["embedding"]  # embed the user query into an embedding vector
-    res = index.query(query, top_k=3, include_metadata=True)  # run cosin similarity to search the most relavent embeded content; this is done in pinecone only
-    contexts = [
-        x['metadata']['text'] for x in res['matches']
-    ]
-    merged_context = "".join(contexts)
-    contextwithQuestion = "Context: " + "\n" + merged_context + "*End of the context*" + "\n\n" + "Question: " + question
-    print(contextwithQuestion)
-    """
-    pass the transcripted text to GPT
-    """
-    messages = [
-        {"role": "system",
-         "content":
-         "You are an assistant that answers questions only based on the context provided. Before each question, some context will be provided.\
-            Context starts with 'Context:' and end with '*End of the context*'. Once you receive all the context, you will consider all of them to answer the questions.\
-            It is very important to answer the question as honestly as possible.\
-            If you are not sure about the answer based on the context provided, you can still try to come up with an answer but you must also tell the user that you are not confident about the answer and that the user should look for a secondary source to confirm the answer.\
-            It is very important to answer the questions politely. It is very important to answer the question in great detail.\
-            Once you receive all the context, you will receive a question that starts with 'Question:'. Once you receive the question, you can answer the question.\
-            "}
-    ]
-    messages.append({"role": "user", "content": contextwithQuestion})  ## add user input to the list of message
-
-    response = openai.ChatCompletion.create(
-        model="gpt-3.5-turbo",
-        messages=messages
-    )  ## pass the list of message to GPT
-
-    return response["choices"][0]["message"]["content"]  ## return GPT response
-
-
-with gr.Blocks() as demo:
-    chatbot = gr.Chatbot()
-    msg = gr.Textbox()
-    clear = gr.Button("Clear")
-
-    def user(user_message, history):
-        return "", history + [[user_message, None]]
-
-
-    def bot(history):
-        bot_message = LLMSearch(history[-1][0])
-        history[-1][1] = bot_message
-        time.sleep(1)
-        return history
-
-    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
-        bot, chatbot, chatbot
-    )
-    clear.click(lambda: None, None, chatbot, queue=False)
-
-demo.launch(debug=True)
-
-
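`LLMSearch` above glues the top-k retrieved chunks into a delimited context block before the question, matching the delimiters its system prompt announces (`Context:` and `*End of the context*`). The prompt-assembly step in isolation, with hypothetical retrieved texts and no pinecone/openai calls:

```python
def build_prompt(contexts, question):
    """Join retrieved chunks and wrap them in the delimiters the system prompt expects."""
    merged_context = "".join(contexts)
    return ("Context: " + "\n" + merged_context + "*End of the context*"
            + "\n\n" + "Question: " + question)

prompt = build_prompt(
    ["Elvire is a chatbot. ", "It answers from retrieved context."],
    "What is Elvire?",
)
print(prompt)
```

Keeping the delimiters in one place like this avoids the prompt and the system message silently drifting apart when either is edited.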
spaces/AlgoveraAI/web3-wallet-streamlit/app.py
DELETED
@@ -1,59 +0,0 @@
-from ocean_lib.config import Config
-from ocean_lib.models.btoken import BToken  # BToken is ERC20
-from ocean_lib.ocean.ocean import Ocean
-from ocean_lib.web3_internal.wallet import Wallet
-from ocean_lib.web3_internal.currency import from_wei  # wei is the smallest denomination of ether e.g. like cents
-# from ocean_lib.web3_internal.currency import pretty_ether_and_wei
-import streamlit as st
-from web3 import Web3
-from wallet_connect import connect
-
-d = {
-    'network': 'https://rinkeby.infura.io/v3/d163c48816434b0bbb3ac3925d6c6c80',
-    'BLOCK_CONFIRMATIONS': 0,
-    'metadataCacheUri': 'https://aquarius.oceanprotocol.com',
-    'providerUri': 'https://provider.rinkeby.oceanprotocol.com',
-    'PROVIDER_ADDRESS': '0x00bd138abd70e2f00903268f3db08f2d25677c9e',
-    'downloads.path': 'consume-downloads',
-}
-
-ocean = Ocean(d)
-
-def wallet():
-
-    lower_case_address = connect("wallet")
-    address = ''
-    if len(lower_case_address[0]) > 3:
-        address = Web3.toChecksumAddress(lower_case_address[0])
-    if len(address) > 3:
-
-        OCEAN_token = BToken(ocean.web3, ocean.OCEAN_address)
-
-        eth_balance = from_wei(ocean.web3.eth.get_balance(address))
-        ocean_balance = from_wei(OCEAN_token.balanceOf(address))
-
-        st.write(f'Address: {address}')
-        st.write(f'ETH Balance: {eth_balance}')
-        st.write(f'OCEAN Balance: {ocean_balance}')
-
-st.header("Web3 Wallet")
-text = """
-This demo shows the balance of tokens in your Web3 wallet.
-If you do not have a Web3 wallet, see instructions on setting up a wallet in the links below.
-Initially, your wallet should have no ETH and OCEAN tokens in it.
-You can then request ETH and OCEAN test tokens by entering your public address into faucets
-(follow the links at the bottom of the page).
-Then wait about 15 seconds and re-run the app for the same private key.
-This demo uses the Ocean Protocol Python library in the backend.
-For more information on the advantages of combining Ocean and HuggingFace,
-check out the blog post link below.
-
-Setup MetaMask: [https://www.oceanacademy.io/ocean101/chapter-8](https://www.oceanacademy.io/ocean101/chapter-8)
-
-Get Rinkeby ETH: [https://faucet.rinkeby.io/](https://faucet.rinkeby.io/)
-
-Get Test Ocean: [https://faucet.rinkeby.oceanprotocol.com/](https://faucet.rinkeby.oceanprotocol.com/)
-"""
-st.write(text)
-
-wallet()
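The wallet app above calls `from_wei` to turn raw on-chain balances into readable amounts: both ETH and OCEAN use 10^18 wei per whole token. A minimal `Decimal`-based stand-in for that conversion (a sketch of the idea, not the `ocean_lib` implementation):

```python
from decimal import Decimal

WEI_PER_TOKEN = 10 ** 18  # both ETH and OCEAN use 18 decimals

def from_wei(amount_wei):
    """Convert an integer wei amount to an exact Decimal token value."""
    return Decimal(amount_wei) / WEI_PER_TOKEN

print(from_wei(1_500_000_000_000_000_000))  # → 1.5
```

Using `Decimal` rather than `float` keeps the conversion exact, which matters when balances are displayed or compared.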
spaces/AmazonScience/QA-NLU/README.md
DELETED
@@ -1,37 +0,0 @@
----
-title: QA NLU
-emoji: 👁
-colorFrom: yellow
-colorTo: green
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version`: _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
spaces/Amr453/Transcription/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Transcription
-emoji: ⚡
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/network.py
DELETED
@@ -1,658 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://nvlabs.github.io/stylegan2/license.html
-
-"""Helper for managing networks."""
-
-import types
-import inspect
-import re
-import uuid
-import sys
-import numpy as np
-import tensorflow as tf
-
-from collections import OrderedDict
-from typing import Any, List, Tuple, Union
-
-from . import tfutil
-from .. import util
-
-from .tfutil import TfExpression, TfExpressionEx
-
-# Custom import handlers for dealing with legacy data in pickle import.
-_import_handlers = []
-# Source code for temporary modules created during pickle import.
-_import_module_src = dict()
-
-
-def import_handler(handler_func):
-    """Function decorator for declaring custom import handlers."""
-    _import_handlers.append(handler_func)
-    return handler_func
-
-
-class Network:
-    """Generic network abstraction.
-
-    Acts as a convenience wrapper for a parameterized network construction
-    function, providing several utility methods and convenient access to
-    the inputs/outputs/weights.
-
-    Network objects can be safely pickled and unpickled for long-term
-    archival purposes. The pickling works reliably as long as the underlying
-    network construction function is defined in a standalone Python module
-    that has no side effects or application-specific imports.
-
-    Args:
-        name: Network name. Used to select TensorFlow name and variable scopes.
-        func_name: Fully qualified name of the underlying network construction function, or a top-level function object.
-        static_kwargs: Keyword arguments to be passed in to the network construction function.
-
-    Attributes:
-        name: User-specified name, defaults to build func name if None.
-        scope: Unique TensorFlow scope containing template graph and variables, derived from the user-specified name.
-        static_kwargs: Arguments passed to the user-supplied build func.
-        components: Container for sub-networks. Passed to the build func, and retained between calls.
-        num_inputs: Number of input tensors.
-        num_outputs: Number of output tensors.
-        input_shapes: Input tensor shapes (NC or NCHW), including minibatch dimension.
-        output_shapes: Output tensor shapes (NC or NCHW), including minibatch dimension.
-        input_shape: Short-hand for input_shapes[0].
-        output_shape: Short-hand for output_shapes[0].
-        input_templates: Input placeholders in the template graph.
-        output_templates: Output tensors in the template graph.
-        input_names: Name string for each input.
-        output_names: Name string for each output.
-        own_vars: Variables defined by this network (local_name => var), excluding sub-networks.
-        vars: All variables (local_name => var).
-        trainables: All trainable variables (local_name => var).
-        var_global_to_local: Mapping from variable global names to local names.
-    """
-
-    def __init__(self, name: str = None, func_name: Any = None, **static_kwargs):
-        tfutil.assert_tf_initialized()
-        assert isinstance(name, str) or name is None
-        assert func_name is not None
-        assert isinstance(
-            func_name, str) or util.is_top_level_function(func_name)
-        assert util.is_pickleable(static_kwargs)
-
-        self._init_fields()
-        self.name = name
-        self.static_kwargs = util.EasyDict(static_kwargs)
-
-        # Locate the user-specified network build function.
-        if util.is_top_level_function(func_name):
-            func_name = util.get_top_level_function_name(func_name)
-        module, self._build_func_name = util.get_module_from_obj_name(
-            func_name)
-        self._build_func = util.get_obj_from_module(
-            module, self._build_func_name)
-        assert callable(self._build_func)
-
-        # Dig up source code for the module containing the build function.
-        self._build_module_src = _import_module_src.get(module, None)
-        if self._build_module_src is None:
-            self._build_module_src = inspect.getsource(module)
-
-        # Init TensorFlow graph.
-        self._init_graph()
-        self.reset_own_vars()
-
-    def _init_fields(self) -> None:
-        self.name = None
-        self.scope = None
-        self.static_kwargs = util.EasyDict()
-        self.components = util.EasyDict()
-        self.num_inputs = 0
-        self.num_outputs = 0
-        self.input_shapes = [[]]
-        self.output_shapes = [[]]
-        self.input_shape = []
-        self.output_shape = []
-        self.input_templates = []
-        self.output_templates = []
-        self.input_names = []
-        self.output_names = []
-        self.own_vars = OrderedDict()
-        self.vars = OrderedDict()
-        self.trainables = OrderedDict()
-        self.var_global_to_local = OrderedDict()
-
-        # User-supplied build function that constructs the network.
-        self._build_func = None
-        self._build_func_name = None  # Name of the build function.
-        # Full source code of the module containing the build function.
-        self._build_module_src = None
-        self._run_cache = dict()  # Cached graph data for Network.run().
-
-    def _init_graph(self) -> None:
-        # Collect inputs.
-        self.input_names = []
-
-        for param in inspect.signature(self._build_func).parameters.values():
-            if param.kind == param.POSITIONAL_OR_KEYWORD and param.default is param.empty:
-                self.input_names.append(param.name)
-
-        self.num_inputs = len(self.input_names)
-        assert self.num_inputs >= 1
-
-        # Choose name and scope.
-        if self.name is None:
-            self.name = self._build_func_name
-        assert re.match("^[A-Za-z0-9_.\\-]*$", self.name)
-        with tf.name_scope(None):
-            self.scope = tf.get_default_graph().unique_name(self.name, mark_as_used=True)
-
-        # Finalize build func kwargs.
-        build_kwargs = dict(self.static_kwargs)
-        build_kwargs["is_template_graph"] = True
-        build_kwargs["components"] = self.components
|
156 |
-
|
157 |
-
# Build template graph.
|
158 |
-
# ignore surrounding scopes
|
159 |
-
with tfutil.absolute_variable_scope(self.scope, reuse=False), tfutil.absolute_name_scope(self.scope):
|
160 |
-
assert tf.get_variable_scope().name == self.scope
|
161 |
-
assert tf.get_default_graph().get_name_scope() == self.scope
|
162 |
-
# ignore surrounding control dependencies
|
163 |
-
with tf.control_dependencies(None):
|
164 |
-
self.input_templates = [tf.placeholder(
|
165 |
-
tf.float32, name=name) for name in self.input_names]
|
166 |
-
out_expr = self._build_func(
|
167 |
-
*self.input_templates, **build_kwargs)
|
168 |
-
|
169 |
-
# Collect outputs.
|
170 |
-
assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple)
|
171 |
-
self.output_templates = [out_expr] if tfutil.is_tf_expression(
|
172 |
-
out_expr) else list(out_expr)
|
173 |
-
self.num_outputs = len(self.output_templates)
|
174 |
-
assert self.num_outputs >= 1
|
175 |
-
assert all(tfutil.is_tf_expression(t) for t in self.output_templates)
|
176 |
-
|
177 |
-
# Perform sanity checks.
|
178 |
-
if any(t.shape.ndims is None for t in self.input_templates):
|
179 |
-
raise ValueError(
|
180 |
-
"Network input shapes not defined. Please call x.set_shape() for each input.")
|
181 |
-
if any(t.shape.ndims is None for t in self.output_templates):
|
182 |
-
raise ValueError(
|
183 |
-
"Network output shapes not defined. Please call x.set_shape() where applicable.")
|
184 |
-
if any(not isinstance(comp, Network) for comp in self.components.values()):
|
185 |
-
raise ValueError(
|
186 |
-
"Components of a Network must be Networks themselves.")
|
187 |
-
if len(self.components) != len(set(comp.name for comp in self.components.values())):
|
188 |
-
raise ValueError("Components of a Network must have unique names.")
|
189 |
-
|
190 |
-
# List inputs and outputs.
|
191 |
-
self.input_shapes = [t.shape.as_list() for t in self.input_templates]
|
192 |
-
self.output_shapes = [t.shape.as_list() for t in self.output_templates]
|
193 |
-
self.input_shape = self.input_shapes[0]
|
194 |
-
self.output_shape = self.output_shapes[0]
|
195 |
-
self.output_names = [t.name.split(
|
196 |
-
"/")[-1].split(":")[0] for t in self.output_templates]
|
197 |
-
|
198 |
-
# List variables.
|
199 |
-
self.own_vars = OrderedDict((var.name[len(
|
200 |
-
self.scope) + 1:].split(":")[0], var) for var in tf.global_variables(self.scope + "/"))
|
201 |
-
self.vars = OrderedDict(self.own_vars)
|
202 |
-
self.vars.update((comp.name + "/" + name, var)
|
203 |
-
for comp in self.components.values() for name, var in comp.vars.items())
|
204 |
-
self.trainables = OrderedDict(
|
205 |
-
(name, var) for name, var in self.vars.items() if var.trainable)
|
206 |
-
self.var_global_to_local = OrderedDict(
|
207 |
-
(var.name.split(":")[0], name) for name, var in self.vars.items())
|
208 |
-
|
209 |
-
def reset_own_vars(self) -> None:
|
210 |
-
"""Re-initialize all variables of this network, excluding sub-networks."""
|
211 |
-
tfutil.run([var.initializer for var in self.own_vars.values()])
|
212 |
-
|
213 |
-
def reset_vars(self) -> None:
|
214 |
-
"""Re-initialize all variables of this network, including sub-networks."""
|
215 |
-
tfutil.run([var.initializer for var in self.vars.values()])
|
216 |
-
|
217 |
-
def reset_trainables(self) -> None:
|
218 |
-
"""Re-initialize all trainable variables of this network, including sub-networks."""
|
219 |
-
tfutil.run([var.initializer for var in self.trainables.values()])
|
220 |
-
|
221 |
-
def get_output_for(self, *in_expr: TfExpression, return_as_list: bool = False, **dynamic_kwargs) -> Union[TfExpression, List[TfExpression]]:
|
222 |
-
"""Construct TensorFlow expression(s) for the output(s) of this network, given the input expression(s)."""
|
223 |
-
assert len(in_expr) == self.num_inputs
|
224 |
-
assert not all(expr is None for expr in in_expr)
|
225 |
-
|
226 |
-
# Finalize build func kwargs.
|
227 |
-
build_kwargs = dict(self.static_kwargs)
|
228 |
-
build_kwargs.update(dynamic_kwargs)
|
229 |
-
build_kwargs["is_template_graph"] = False
|
230 |
-
build_kwargs["components"] = self.components
|
231 |
-
|
232 |
-
# Build TensorFlow graph to evaluate the network.
|
233 |
-
with tfutil.absolute_variable_scope(self.scope, reuse=True), tf.name_scope(self.name):
|
234 |
-
assert tf.get_variable_scope().name == self.scope
|
235 |
-
valid_inputs = [expr for expr in in_expr if expr is not None]
|
236 |
-
final_inputs = []
|
237 |
-
for expr, name, shape in zip(in_expr, self.input_names, self.input_shapes):
|
238 |
-
if expr is not None:
|
239 |
-
expr = tf.identity(expr, name=name)
|
240 |
-
else:
|
241 |
-
expr = tf.zeros([tf.shape(valid_inputs[0])[
|
242 |
-
0]] + shape[1:], name=name)
|
243 |
-
final_inputs.append(expr)
|
244 |
-
out_expr = self._build_func(*final_inputs, **build_kwargs)
|
245 |
-
|
246 |
-
# Propagate input shapes back to the user-specified expressions.
|
247 |
-
for expr, final in zip(in_expr, final_inputs):
|
248 |
-
if isinstance(expr, tf.Tensor):
|
249 |
-
expr.set_shape(final.shape)
|
250 |
-
|
251 |
-
# Express outputs in the desired format.
|
252 |
-
assert tfutil.is_tf_expression(out_expr) or isinstance(out_expr, tuple)
|
253 |
-
if return_as_list:
|
254 |
-
out_expr = [out_expr] if tfutil.is_tf_expression(
|
255 |
-
out_expr) else list(out_expr)
|
256 |
-
return out_expr
|
257 |
-
|
258 |
-
def get_var_local_name(self, var_or_global_name: Union[TfExpression, str]) -> str:
|
259 |
-
"""Get the local name of a given variable, without any surrounding name scopes."""
|
260 |
-
assert tfutil.is_tf_expression(
|
261 |
-
var_or_global_name) or isinstance(var_or_global_name, str)
|
262 |
-
global_name = var_or_global_name if isinstance(
|
263 |
-
var_or_global_name, str) else var_or_global_name.name
|
264 |
-
return self.var_global_to_local[global_name]
|
265 |
-
|
266 |
-
def find_var(self, var_or_local_name: Union[TfExpression, str]) -> TfExpression:
|
267 |
-
"""Find variable by local or global name."""
|
268 |
-
assert tfutil.is_tf_expression(
|
269 |
-
var_or_local_name) or isinstance(var_or_local_name, str)
|
270 |
-
return self.vars[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name
|
271 |
-
|
272 |
-
def get_var(self, var_or_local_name: Union[TfExpression, str]) -> np.ndarray:
|
273 |
-
"""Get the value of a given variable as NumPy array.
|
274 |
-
Note: This method is very inefficient -- prefer to use tflib.run(list_of_vars) whenever possible."""
|
275 |
-
return self.find_var(var_or_local_name).eval()
|
276 |
-
|
277 |
-
def set_var(self, var_or_local_name: Union[TfExpression, str], new_value: Union[int, float, np.ndarray]) -> None:
|
278 |
-
"""Set the value of a given variable based on the given NumPy array.
|
279 |
-
Note: This method is very inefficient -- prefer to use tflib.set_vars() whenever possible."""
|
280 |
-
tfutil.set_vars({self.find_var(var_or_local_name): new_value})
|
281 |
-
|
282 |
-
def __getstate__(self) -> dict:
|
283 |
-
"""Pickle export."""
|
284 |
-
state = dict()
|
285 |
-
state["version"] = 4
|
286 |
-
state["name"] = self.name
|
287 |
-
state["static_kwargs"] = dict(self.static_kwargs)
|
288 |
-
state["components"] = dict(self.components)
|
289 |
-
state["build_module_src"] = self._build_module_src
|
290 |
-
state["build_func_name"] = self._build_func_name
|
291 |
-
state["variables"] = list(
|
292 |
-
zip(self.own_vars.keys(), tfutil.run(list(self.own_vars.values()))))
|
293 |
-
return state
|
294 |
-
|
295 |
-
def __setstate__(self, state: dict) -> None:
|
296 |
-
"""Pickle import."""
|
297 |
-
# pylint: disable=attribute-defined-outside-init
|
298 |
-
tfutil.assert_tf_initialized()
|
299 |
-
self._init_fields()
|
300 |
-
|
301 |
-
# Execute custom import handlers.
|
302 |
-
for handler in _import_handlers:
|
303 |
-
state = handler(state)
|
304 |
-
|
305 |
-
# Set basic fields.
|
306 |
-
assert state["version"] in [2, 3, 4]
|
307 |
-
self.name = state["name"]
|
308 |
-
self.static_kwargs = util.EasyDict(state["static_kwargs"])
|
309 |
-
self.components = util.EasyDict(state.get("components", {}))
|
310 |
-
self._build_module_src = state["build_module_src"]
|
311 |
-
self._build_func_name = state["build_func_name"]
|
312 |
-
|
313 |
-
# Create temporary module from the imported source code.
|
314 |
-
module_name = "_tflib_network_import_" + uuid.uuid4().hex
|
315 |
-
module = types.ModuleType(module_name)
|
316 |
-
sys.modules[module_name] = module
|
317 |
-
_import_module_src[module] = self._build_module_src
|
318 |
-
exec(self._build_module_src, module.__dict__) # pylint: disable=exec-used
|
319 |
-
|
320 |
-
# Locate network build function in the temporary module.
|
321 |
-
self._build_func = util.get_obj_from_module(
|
322 |
-
module, self._build_func_name)
|
323 |
-
assert callable(self._build_func)
|
324 |
-
|
325 |
-
# Init TensorFlow graph.
|
326 |
-
self._init_graph()
|
327 |
-
self.reset_own_vars()
|
328 |
-
tfutil.set_vars({self.find_var(name): value for name,
|
329 |
-
value in state["variables"]})
|
330 |
-
|
331 |
-
def clone(self, name: str = None, **new_static_kwargs) -> "Network":
|
332 |
-
"""Create a clone of this network with its own copy of the variables."""
|
333 |
-
# pylint: disable=protected-access
|
334 |
-
net = object.__new__(Network)
|
335 |
-
net._init_fields()
|
336 |
-
net.name = name if name is not None else self.name
|
337 |
-
net.static_kwargs = util.EasyDict(self.static_kwargs)
|
338 |
-
net.static_kwargs.update(new_static_kwargs)
|
339 |
-
net._build_module_src = self._build_module_src
|
340 |
-
net._build_func_name = self._build_func_name
|
341 |
-
net._build_func = self._build_func
|
342 |
-
net._init_graph()
|
343 |
-
net.copy_vars_from(self)
|
344 |
-
return net
|
345 |
-
|
346 |
-
def copy_own_vars_from(self, src_net: "Network") -> None:
|
347 |
-
"""Copy the values of all variables from the given network, excluding sub-networks."""
|
348 |
-
names = [name for name in self.own_vars.keys()
|
349 |
-
if name in src_net.own_vars]
|
350 |
-
tfutil.set_vars(tfutil.run(
|
351 |
-
{self.vars[name]: src_net.vars[name] for name in names}))
|
352 |
-
|
353 |
-
def copy_vars_from(self, src_net: "Network") -> None:
|
354 |
-
"""Copy the values of all variables from the given network, including sub-networks."""
|
355 |
-
names = [name for name in self.vars.keys() if name in src_net.vars]
|
356 |
-
tfutil.set_vars(tfutil.run(
|
357 |
-
{self.vars[name]: src_net.vars[name] for name in names}))
|
358 |
-
|
359 |
-
def copy_trainables_from(self, src_net: "Network") -> None:
|
360 |
-
"""Copy the values of all trainable variables from the given network, including sub-networks."""
|
361 |
-
names = [name for name in self.trainables.keys()
|
362 |
-
if name in src_net.trainables]
|
363 |
-
tfutil.set_vars(tfutil.run(
|
364 |
-
{self.vars[name]: src_net.vars[name] for name in names}))
|
365 |
-
|
366 |
-
def convert(self, new_func_name: str, new_name: str = None, **new_static_kwargs) -> "Network":
|
367 |
-
"""Create new network with the given parameters, and copy all variables from this network."""
|
368 |
-
if new_name is None:
|
369 |
-
new_name = self.name
|
370 |
-
static_kwargs = dict(self.static_kwargs)
|
371 |
-
static_kwargs.update(new_static_kwargs)
|
372 |
-
net = Network(name=new_name, func_name=new_func_name, **static_kwargs)
|
373 |
-
net.copy_vars_from(self)
|
374 |
-
return net
|
375 |
-
|
376 |
-
def setup_as_moving_average_of(self, src_net: "Network", beta: TfExpressionEx = 0.99, beta_nontrainable: TfExpressionEx = 0.0) -> tf.Operation:
|
377 |
-
"""Construct a TensorFlow op that updates the variables of this network
|
378 |
-
to be slightly closer to those of the given network."""
|
379 |
-
with tfutil.absolute_name_scope(self.scope + "/_MovingAvg"):
|
380 |
-
ops = []
|
381 |
-
for name, var in self.vars.items():
|
382 |
-
if name in src_net.vars:
|
383 |
-
cur_beta = beta if name in self.trainables else beta_nontrainable
|
384 |
-
new_value = tfutil.lerp(src_net.vars[name], var, cur_beta)
|
385 |
-
ops.append(var.assign(new_value))
|
386 |
-
return tf.group(*ops)
|
387 |
-
|
388 |
-
def run(self,
|
389 |
-
*in_arrays: Tuple[Union[np.ndarray, None], ...],
|
390 |
-
input_transform: dict = None,
|
391 |
-
output_transform: dict = None,
|
392 |
-
return_as_list: bool = False,
|
393 |
-
print_progress: bool = False,
|
394 |
-
minibatch_size: int = None,
|
395 |
-
num_gpus: int = 1,
|
396 |
-
assume_frozen: bool = False,
|
397 |
-
**dynamic_kwargs) -> Union[np.ndarray, Tuple[np.ndarray, ...], List[np.ndarray]]:
|
398 |
-
"""Run this network for the given NumPy array(s), and return the output(s) as NumPy array(s).
|
399 |
-
|
400 |
-
Args:
|
401 |
-
input_transform: A dict specifying a custom transformation to be applied to the input tensor(s) before evaluating the network.
|
402 |
-
The dict must contain a 'func' field that points to a top-level function. The function is called with the input
|
403 |
-
TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs.
|
404 |
-
output_transform: A dict specifying a custom transformation to be applied to the output tensor(s) after evaluating the network.
|
405 |
-
The dict must contain a 'func' field that points to a top-level function. The function is called with the output
|
406 |
-
TensorFlow expression(s) as positional arguments. Any remaining fields of the dict will be passed in as kwargs.
|
407 |
-
return_as_list: True = return a list of NumPy arrays, False = return a single NumPy array, or a tuple if there are multiple outputs.
|
408 |
-
print_progress: Print progress to the console? Useful for very large input arrays.
|
409 |
-
minibatch_size: Maximum minibatch size to use, None = disable batching.
|
410 |
-
num_gpus: Number of GPUs to use.
|
411 |
-
assume_frozen: Improve multi-GPU performance by assuming that the trainable parameters will remain changed between calls.
|
412 |
-
dynamic_kwargs: Additional keyword arguments to be passed into the network build function.
|
413 |
-
"""
|
414 |
-
assert len(in_arrays) == self.num_inputs
|
415 |
-
assert not all(arr is None for arr in in_arrays)
|
416 |
-
assert input_transform is None or util.is_top_level_function(
|
417 |
-
input_transform["func"])
|
418 |
-
assert output_transform is None or util.is_top_level_function(
|
419 |
-
output_transform["func"])
|
420 |
-
output_transform, dynamic_kwargs = _handle_legacy_output_transforms(
|
421 |
-
output_transform, dynamic_kwargs)
|
422 |
-
num_items = in_arrays[0].shape[0]
|
423 |
-
if minibatch_size is None:
|
424 |
-
minibatch_size = num_items
|
425 |
-
|
426 |
-
# Construct unique hash key from all arguments that affect the TensorFlow graph.
|
427 |
-
key = dict(input_transform=input_transform, output_transform=output_transform,
|
428 |
-
num_gpus=num_gpus, assume_frozen=assume_frozen, dynamic_kwargs=dynamic_kwargs)
|
429 |
-
|
430 |
-
def unwind_key(obj):
|
431 |
-
if isinstance(obj, dict):
|
432 |
-
return [(key, unwind_key(value)) for key, value in sorted(obj.items())]
|
433 |
-
if callable(obj):
|
434 |
-
return util.get_top_level_function_name(obj)
|
435 |
-
return obj
|
436 |
-
key = repr(unwind_key(key))
|
437 |
-
|
438 |
-
# Build graph.
|
439 |
-
if key not in self._run_cache:
|
440 |
-
with tfutil.absolute_name_scope(self.scope + "/_Run"), tf.control_dependencies(None):
|
441 |
-
with tf.device("/cpu:0"):
|
442 |
-
in_expr = [tf.placeholder(tf.float32, name=name)
|
443 |
-
for name in self.input_names]
|
444 |
-
in_split = list(
|
445 |
-
zip(*[tf.split(x, num_gpus) for x in in_expr]))
|
446 |
-
|
447 |
-
out_split = []
|
448 |
-
for gpu in range(num_gpus):
|
449 |
-
with tf.device("/gpu:%d" % gpu):
|
450 |
-
net_gpu = self.clone() if assume_frozen else self
|
451 |
-
in_gpu = in_split[gpu]
|
452 |
-
|
453 |
-
if input_transform is not None:
|
454 |
-
in_kwargs = dict(input_transform)
|
455 |
-
in_gpu = in_kwargs.pop("func")(
|
456 |
-
*in_gpu, **in_kwargs)
|
457 |
-
in_gpu = [in_gpu] if tfutil.is_tf_expression(
|
458 |
-
in_gpu) else list(in_gpu)
|
459 |
-
|
460 |
-
assert len(in_gpu) == self.num_inputs
|
461 |
-
out_gpu = net_gpu.get_output_for(
|
462 |
-
*in_gpu, return_as_list=True, **dynamic_kwargs)
|
463 |
-
|
464 |
-
if output_transform is not None:
|
465 |
-
out_kwargs = dict(output_transform)
|
466 |
-
out_gpu = out_kwargs.pop("func")(
|
467 |
-
*out_gpu, **out_kwargs)
|
468 |
-
out_gpu = [out_gpu] if tfutil.is_tf_expression(
|
469 |
-
out_gpu) else list(out_gpu)
|
470 |
-
|
471 |
-
assert len(out_gpu) == self.num_outputs
|
472 |
-
out_split.append(out_gpu)
|
473 |
-
|
474 |
-
with tf.device("/cpu:0"):
|
475 |
-
out_expr = [tf.concat(outputs, axis=0)
|
476 |
-
for outputs in zip(*out_split)]
|
477 |
-
self._run_cache[key] = in_expr, out_expr
|
478 |
-
|
479 |
-
# Run minibatches.
|
480 |
-
in_expr, out_expr = self._run_cache[key]
|
481 |
-
out_arrays = [np.empty(
|
482 |
-
[num_items] + expr.shape.as_list()[1:], expr.dtype.name) for expr in out_expr]
|
483 |
-
|
484 |
-
for mb_begin in range(0, num_items, minibatch_size):
|
485 |
-
if print_progress:
|
486 |
-
print("\r%d / %d" % (mb_begin, num_items), end="")
|
487 |
-
|
488 |
-
mb_end = min(mb_begin + minibatch_size, num_items)
|
489 |
-
mb_num = mb_end - mb_begin
|
490 |
-
mb_in = [src[mb_begin: mb_end] if src is not None else np.zeros(
|
491 |
-
[mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)]
|
492 |
-
mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in)))
|
493 |
-
|
494 |
-
for dst, src in zip(out_arrays, mb_out):
|
495 |
-
dst[mb_begin: mb_end] = src
|
496 |
-
|
497 |
-
# Done.
|
498 |
-
if print_progress:
|
499 |
-
print("\r%d / %d" % (num_items, num_items))
|
500 |
-
|
501 |
-
if not return_as_list:
|
502 |
-
out_arrays = out_arrays[0] if len(
|
503 |
-
out_arrays) == 1 else tuple(out_arrays)
|
504 |
-
return out_arrays
|
505 |
-
|
506 |
-
def list_ops(self) -> List[TfExpression]:
|
507 |
-
include_prefix = self.scope + "/"
|
508 |
-
exclude_prefix = include_prefix + "_"
|
509 |
-
ops = tf.get_default_graph().get_operations()
|
510 |
-
ops = [op for op in ops if op.name.startswith(include_prefix)]
|
511 |
-
ops = [op for op in ops if not op.name.startswith(exclude_prefix)]
|
512 |
-
return ops
|
513 |
-
|
514 |
-
def list_layers(self) -> List[Tuple[str, TfExpression, List[TfExpression]]]:
|
515 |
-
"""Returns a list of (layer_name, output_expr, trainable_vars) tuples corresponding to
|
516 |
-
individual layers of the network. Mainly intended to be used for reporting."""
|
517 |
-
layers = []
|
518 |
-
|
519 |
-
def recurse(scope, parent_ops, parent_vars, level):
|
520 |
-
# Ignore specific patterns.
|
521 |
-
if any(p in scope for p in ["/Shape", "/strided_slice", "/Cast", "/concat", "/Assign"]):
|
522 |
-
return
|
523 |
-
|
524 |
-
# Filter ops and vars by scope.
|
525 |
-
global_prefix = scope + "/"
|
526 |
-
local_prefix = global_prefix[len(self.scope) + 1:]
|
527 |
-
cur_ops = [op for op in parent_ops if op.name.startswith(
|
528 |
-
global_prefix) or op.name == global_prefix[:-1]]
|
529 |
-
cur_vars = [(name, var) for name, var in parent_vars if name.startswith(
|
530 |
-
local_prefix) or name == local_prefix[:-1]]
|
531 |
-
if not cur_ops and not cur_vars:
|
532 |
-
return
|
533 |
-
|
534 |
-
# Filter out all ops related to variables.
|
535 |
-
for var in [op for op in cur_ops if op.type.startswith("Variable")]:
|
536 |
-
var_prefix = var.name + "/"
|
537 |
-
cur_ops = [
|
538 |
-
op for op in cur_ops if not op.name.startswith(var_prefix)]
|
539 |
-
|
540 |
-
# Scope does not contain ops as immediate children => recurse deeper.
|
541 |
-
contains_direct_ops = any("/" not in op.name[len(global_prefix):] and op.type not in [
|
542 |
-
"Identity", "Cast", "Transpose"] for op in cur_ops)
|
543 |
-
if (level == 0 or not contains_direct_ops) and (len(cur_ops) + len(cur_vars)) > 1:
|
544 |
-
visited = set()
|
545 |
-
for rel_name in [op.name[len(global_prefix):] for op in cur_ops] + [name[len(local_prefix):] for name, _var in cur_vars]:
|
546 |
-
token = rel_name.split("/")[0]
|
547 |
-
if token not in visited:
|
548 |
-
recurse(global_prefix + token,
|
549 |
-
cur_ops, cur_vars, level + 1)
|
550 |
-
visited.add(token)
|
551 |
-
return
|
552 |
-
|
553 |
-
# Report layer.
|
554 |
-
layer_name = scope[len(self.scope) + 1:]
|
555 |
-
layer_output = cur_ops[-1].outputs[0] if cur_ops else cur_vars[-1][1]
|
556 |
-
layer_trainables = [var for _name,
|
557 |
-
var in cur_vars if var.trainable]
|
558 |
-
layers.append((layer_name, layer_output, layer_trainables))
|
559 |
-
|
560 |
-
recurse(self.scope, self.list_ops(), list(self.vars.items()), 0)
|
561 |
-
return layers
|
562 |
-
|
563 |
-
def print_layers(self, title: str = None, hide_layers_with_no_params: bool = False) -> None:
|
564 |
-
"""Print a summary table of the network structure."""
|
565 |
-
rows = [[title if title is not None else self.name,
|
566 |
-
"Params", "OutputShape", "WeightShape"]]
|
567 |
-
rows += [["---"] * 4]
|
568 |
-
total_params = 0
|
569 |
-
|
570 |
-
for layer_name, layer_output, layer_trainables in self.list_layers():
|
571 |
-
num_params = sum(int(np.prod(var.shape.as_list()))
|
572 |
-
for var in layer_trainables)
|
573 |
-
weights = [
|
574 |
-
var for var in layer_trainables if var.name.endswith("/weight:0")]
|
575 |
-
weights.sort(key=lambda x: len(x.name))
|
576 |
-
if len(weights) == 0 and len(layer_trainables) == 1:
|
577 |
-
weights = layer_trainables
|
578 |
-
total_params += num_params
|
579 |
-
|
580 |
-
if not hide_layers_with_no_params or num_params != 0:
|
581 |
-
num_params_str = str(num_params) if num_params > 0 else "-"
|
582 |
-
output_shape_str = str(layer_output.shape)
|
583 |
-
weight_shape_str = str(weights[0].shape) if len(
|
584 |
-
weights) >= 1 else "-"
|
585 |
-
rows += [[layer_name, num_params_str,
|
586 |
-
output_shape_str, weight_shape_str]]
|
587 |
-
|
588 |
-
rows += [["---"] * 4]
|
589 |
-
rows += [["Total", str(total_params), "", ""]]
|
590 |
-
|
591 |
-
widths = [max(len(cell) for cell in column) for column in zip(*rows)]
|
592 |
-
print()
|
593 |
-
for row in rows:
|
594 |
-
print(" ".join(cell + " " * (width - len(cell))
|
595 |
-
for cell, width in zip(row, widths)))
|
596 |
-
print()
|
597 |
-
|
598 |
-
def setup_weight_histograms(self, title: str = None) -> None:
|
599 |
-
"""Construct summary ops to include histograms of all trainable parameters in TensorBoard."""
|
600 |
-
if title is None:
|
601 |
-
title = self.name
|
602 |
-
|
603 |
-
with tf.name_scope(None), tf.device(None), tf.control_dependencies(None):
|
604 |
-
for local_name, var in self.trainables.items():
|
605 |
-
if "/" in local_name:
|
606 |
-
p = local_name.split("/")
|
607 |
-
name = title + "_" + p[-1] + "/" + "_".join(p[:-1])
|
608 |
-
else:
|
609 |
-
name = title + "_toplevel/" + local_name
|
610 |
-
|
611 |
-
tf.summary.histogram(name, var)
|
612 |
-
|
613 |
-
# ----------------------------------------------------------------------------
|
614 |
-
# Backwards-compatible emulation of legacy output transformation in Network.run().
|
615 |
-
|
616 |
-
|
617 |
-
_print_legacy_warning = True
|
618 |
-
|
619 |
-
|
620 |
-
def _handle_legacy_output_transforms(output_transform, dynamic_kwargs):
|
621 |
-
global _print_legacy_warning
|
622 |
-
legacy_kwargs = ["out_mul", "out_add", "out_shrink", "out_dtype"]
|
623 |
-
if not any(kwarg in dynamic_kwargs for kwarg in legacy_kwargs):
|
624 |
-
return output_transform, dynamic_kwargs
|
625 |
-
|
626 |
-
if _print_legacy_warning:
|
627 |
-
_print_legacy_warning = False
|
628 |
-
print()
|
629 |
-
print("WARNING: Old-style output transformations in Network.run() are deprecated.")
|
630 |
-
print("Consider using 'output_transform=dict(func=tflib.convert_images_to_uint8)'")
|
631 |
-
print("instead of 'out_mul=127.5, out_add=127.5, out_dtype=np.uint8'.")
|
632 |
-
print()
|
633 |
-
assert output_transform is None
|
634 |
-
|
635 |
-
new_kwargs = dict(dynamic_kwargs)
|
636 |
-
new_transform = {kwarg: new_kwargs.pop(
|
637 |
-
kwarg) for kwarg in legacy_kwargs if kwarg in dynamic_kwargs}
|
638 |
-
new_transform["func"] = _legacy_output_transform_func
|
639 |
-
return new_transform, new_kwargs
|
640 |
-
|
641 |
-
|
642 |
-
def _legacy_output_transform_func(*expr, out_mul=1.0, out_add=0.0, out_shrink=1, out_dtype=None):
|
643 |
-
if out_mul != 1.0:
|
644 |
-
expr = [x * out_mul for x in expr]
|
645 |
-
|
646 |
-
if out_add != 0.0:
|
647 |
-
expr = [x + out_add for x in expr]
|
648 |
-
|
649 |
-
if out_shrink > 1:
|
650 |
-
ksize = [1, 1, out_shrink, out_shrink]
|
651 |
-
expr = [tf.nn.avg_pool(x, ksize=ksize, strides=ksize,
|
652 |
-
padding="VALID", data_format="NCHW") for x in expr]
|
653 |
-
|
654 |
-
if out_dtype is not None:
|
655 |
-
if tf.as_dtype(out_dtype).is_integer:
|
656 |
-
expr = [tf.round(x) for x in expr]
|
657 |
-
expr = [tf.saturate_cast(x, out_dtype) for x in expr]
|
658 |
-
return expr
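Network.run() above caches one built evaluation graph per distinct argument combination, keyed by recursively unwinding the argument dict into sorted (key, value) pairs. The idea can be sketched standalone in plain Python; `func_name_of` here is an illustrative stand-in for `util.get_top_level_function_name`, which is defined elsewhere in the library.

```python
# Sketch of the cache-key construction used in Network.run(): nested dicts
# are unwound into sorted (key, value) lists so that two calls with the same
# arguments (in any keyword order) produce the same repr() string.
def func_name_of(fn):
    # Stand-in for util.get_top_level_function_name (hypothetical helper).
    return fn.__module__ + "." + fn.__qualname__

def unwind_key(obj):
    if isinstance(obj, dict):
        return [(k, unwind_key(v)) for k, v in sorted(obj.items())]
    if callable(obj):
        return func_name_of(obj)
    return obj

# Same arguments, different keyword order -> identical cache keys.
key_a = repr(unwind_key(dict(num_gpus=1, dynamic_kwargs=dict(truncation_psi=0.7))))
key_b = repr(unwind_key(dict(dynamic_kwargs=dict(truncation_psi=0.7), num_gpus=1)))
# Callables are keyed by their importable name, keeping the key pickle-friendly.
key_c = repr(unwind_key(dict(func=abs)))
```

Keying by `repr()` of a fully sorted structure is what lets the cache tolerate dicts, which are unhashable, and functions, which would otherwise hash by identity.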
spaces/Andy1621/uniformer_image_detection/mmdet/utils/__init__.py
DELETED
@@ -1,5 +0,0 @@
-from .collect_env import collect_env
-from .logger import get_root_logger
-from .optimizer import DistOptimizerHook
-
-__all__ = ['get_root_logger', 'collect_env', 'DistOptimizerHook']
spaces/Andy1621/uniformer_light/uniformer_light_image.py
DELETED
@@ -1,535 +0,0 @@
-# All rights reserved.
-from collections import OrderedDict
-import torch
-import torch.nn as nn
-from functools import partial
-import torch.nn.functional as F
-import math
-from timm.models.vision_transformer import _cfg
-from timm.models.registry import register_model
-from timm.models.layers import trunc_normal_, DropPath, to_2tuple
-
-
-layer_scale = False
-init_value = 1e-6
-global_attn = None
-token_indices = None
-
-
-# code is from https://github.com/YifanXu74/Evo-ViT
-def easy_gather(x, indices):
-    # x => B x N x C
-    # indices => B x N
-    B, N, C = x.shape
-    N_new = indices.shape[1]
-    offset = torch.arange(B, dtype=torch.long, device=x.device).view(B, 1) * N
-    indices = indices + offset
-    # only select the informative tokens
-    out = x.reshape(B * N, C)[indices.view(-1)].reshape(B, N_new, C)
-    return out
-
-
-# code is from https://github.com/YifanXu74/Evo-ViT
-def merge_tokens(x_drop, score):
-    # x_drop => B x N_drop
-    # score => B x N_drop
-    weight = score / torch.sum(score, dim=1, keepdim=True)
-    x_drop = weight.unsqueeze(-1) * x_drop
-    return torch.sum(x_drop, dim=1, keepdim=True)
-
-
-class Mlp(nn.Module):
-    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
-        super().__init__()
-        out_features = out_features or in_features
-        hidden_features = hidden_features or in_features
-        self.fc1 = nn.Linear(in_features, hidden_features)
-        self.act = act_layer()
-        self.fc2 = nn.Linear(hidden_features, out_features)
-        self.drop = nn.Dropout(drop)
-
-    def forward(self, x):
-        x = self.fc1(x)
-        x = self.act(x)
-        x = self.drop(x)
-        x = self.fc2(x)
-        x = self.drop(x)
-        return x
-
-
-class CMlp(nn.Module):
-    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
-        super().__init__()
-        out_features = out_features or in_features
-        hidden_features = hidden_features or in_features
-        self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
-        self.act = act_layer()
-        self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
-        self.drop = nn.Dropout(drop)
-
-    def forward(self, x):
-        x = self.fc1(x)
-        x = self.act(x)
-        x = self.drop(x)
-        x = self.fc2(x)
-        x = self.drop(x)
-        return x
-
-
-class Attention(nn.Module):
-    def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., trade_off=1):
-        super().__init__()
-        self.num_heads = num_heads
-        head_dim = dim // num_heads
-        # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
-        self.scale = qk_scale or head_dim ** -0.5
-
-        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
-        self.attn_drop = nn.Dropout(attn_drop)
-        self.proj = nn.Linear(dim, dim)
-        self.proj_drop = nn.Dropout(proj_drop)
-        # updating weight for global score
-        self.trade_off = trade_off
-
-    def forward(self, x):
-        B, N, C = x.shape
-        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
-        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)
-
-        attn = (q @ k.transpose(-2, -1)) * self.scale
-        attn = attn.softmax(dim=-1)
-
-        # update global score
-        global global_attn
-        tradeoff = self.trade_off
-        if isinstance(global_attn, int):
-            global_attn = torch.mean(attn[:, :, 0, 1:], dim=1)
|
107 |
-
elif global_attn.shape[1] == N - 1:
|
108 |
-
# no additional token and no pruning, update all global scores
|
109 |
-
cls_attn = torch.mean(attn[:, :, 0, 1:], dim=1)
|
110 |
-
global_attn = (1 - tradeoff) * global_attn + tradeoff * cls_attn
|
111 |
-
else:
|
112 |
-
# only update the informative tokens
|
113 |
-
# the first one is class token
|
114 |
-
# the last one is rrepresentative token
|
115 |
-
cls_attn = torch.mean(attn[:, :, 0, 1:-1], dim=1)
|
116 |
-
if self.training:
|
117 |
-
temp_attn = (1 - tradeoff) * global_attn[:, :(N - 2)] + tradeoff * cls_attn
|
118 |
-
global_attn = torch.cat((temp_attn, global_attn[:, (N - 2):]), dim=1)
|
119 |
-
else:
|
120 |
-
# no use torch.cat() for fast inference
|
121 |
-
global_attn[:, :(N - 2)] = (1 - tradeoff) * global_attn[:, :(N - 2)] + tradeoff * cls_attn
|
122 |
-
|
123 |
-
attn = self.attn_drop(attn)
|
124 |
-
|
125 |
-
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
|
126 |
-
x = self.proj(x)
|
127 |
-
x = self.proj_drop(x)
|
128 |
-
return x
|
129 |
-
|
130 |
-
|
131 |
-
class CBlock(nn.Module):
|
132 |
-
def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
|
133 |
-
drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
|
134 |
-
super().__init__()
|
135 |
-
self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
|
136 |
-
self.norm1 = nn.BatchNorm2d(dim)
|
137 |
-
self.conv1 = nn.Conv2d(dim, dim, 1)
|
138 |
-
self.conv2 = nn.Conv2d(dim, dim, 1)
|
139 |
-
self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
|
140 |
-
# NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
|
141 |
-
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
|
142 |
-
self.norm2 = nn.BatchNorm2d(dim)
|
143 |
-
mlp_hidden_dim = int(dim * mlp_ratio)
|
144 |
-
self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
|
145 |
-
global layer_scale
|
146 |
-
self.ls = layer_scale
|
147 |
-
if self.ls:
|
148 |
-
global init_value
|
149 |
-
print(f"Use layer_scale: {layer_scale}, init_values: {init_value}")
|
150 |
-
self.gamma_1 = nn.Parameter(init_value * torch.ones((1, dim, 1, 1)),requires_grad=True)
|
151 |
-
self.gamma_2 = nn.Parameter(init_value * torch.ones((1, dim, 1, 1)),requires_grad=True)
|
152 |
-
|
153 |
-
def forward(self, x):
|
154 |
-
x = x + self.pos_embed(x)
|
155 |
-
if self.ls:
|
156 |
-
x = x + self.drop_path(self.gamma_1 * self.conv2(self.attn(self.conv1(self.norm1(x)))))
|
157 |
-
x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
|
158 |
-
else:
|
159 |
-
x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x)))))
|
160 |
-
x = x + self.drop_path(self.mlp(self.norm2(x)))
|
161 |
-
return x
|
162 |
-
|
163 |
-
|
164 |
-
class EvoSABlock(nn.Module):
|
165 |
-
def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
|
166 |
-
drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, prune_ratio=1,
|
167 |
-
trade_off=0, downsample=False):
|
168 |
-
super().__init__()
|
169 |
-
self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
|
170 |
-
self.norm1 = norm_layer(dim)
|
171 |
-
self.attn = Attention(
|
172 |
-
dim,
|
173 |
-
num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
|
174 |
-
attn_drop=attn_drop, proj_drop=drop, trade_off=trade_off)
|
175 |
-
# NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
|
176 |
-
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
|
177 |
-
self.norm2 = norm_layer(dim)
|
178 |
-
mlp_hidden_dim = int(dim * mlp_ratio)
|
179 |
-
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
|
180 |
-
self.prune_ratio = prune_ratio
|
181 |
-
self.downsample = downsample
|
182 |
-
if downsample:
|
183 |
-
self.avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
|
184 |
-
global layer_scale
|
185 |
-
self.ls = layer_scale
|
186 |
-
if self.ls:
|
187 |
-
global init_value
|
188 |
-
print(f"Use layer_scale: {layer_scale}, init_values: {init_value}")
|
189 |
-
self.gamma_1 = nn.Parameter(init_value * torch.ones((dim)),requires_grad=True)
|
190 |
-
self.gamma_2 = nn.Parameter(init_value * torch.ones((dim)),requires_grad=True)
|
191 |
-
if self.prune_ratio != 1:
|
192 |
-
self.gamma_3 = nn.Parameter(init_value * torch.ones((dim)),requires_grad=True)
|
193 |
-
|
194 |
-
def forward(self, cls_token, x):
|
195 |
-
x = x + self.pos_embed(x)
|
196 |
-
B, C, H, W = x.shape
|
197 |
-
x = x.flatten(2).transpose(1, 2)
|
198 |
-
|
199 |
-
if self.prune_ratio == 1:
|
200 |
-
x = torch.cat([cls_token, x], dim=1)
|
201 |
-
if self.ls:
|
202 |
-
x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x)))
|
203 |
-
x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
|
204 |
-
else:
|
205 |
-
x = x + self.drop_path(self.attn(self.norm1(x)))
|
206 |
-
x = x + self.drop_path(self.mlp(self.norm2(x)))
|
207 |
-
cls_token, x = x[:, :1], x[:, 1:]
|
208 |
-
x = x.transpose(1, 2).reshape(B, C, H, W)
|
209 |
-
return cls_token, x
|
210 |
-
else:
|
211 |
-
global global_attn, token_indices
|
212 |
-
# calculate the number of informative tokens
|
213 |
-
N = x.shape[1]
|
214 |
-
N_ = int(N * self.prune_ratio)
|
215 |
-
# sort global attention
|
216 |
-
indices = torch.argsort(global_attn, dim=1, descending=True)
|
217 |
-
|
218 |
-
# concatenate x, global attention and token indices => x_ga_ti
|
219 |
-
# rearrange the tensor according to new indices
|
220 |
-
x_ga_ti = torch.cat((x, global_attn.unsqueeze(-1), token_indices.unsqueeze(-1)), dim=-1)
|
221 |
-
x_ga_ti = easy_gather(x_ga_ti, indices)
|
222 |
-
x_sorted, global_attn, token_indices = x_ga_ti[:, :, :-2], x_ga_ti[:, :, -2], x_ga_ti[:, :, -1]
|
223 |
-
|
224 |
-
# informative tokens
|
225 |
-
x_info = x_sorted[:, :N_]
|
226 |
-
# merge dropped tokens
|
227 |
-
x_drop = x_sorted[:, N_:]
|
228 |
-
score = global_attn[:, N_:]
|
229 |
-
# B x N_drop x C => B x 1 x C
|
230 |
-
rep_token = merge_tokens(x_drop, score)
|
231 |
-
# concatenate new tokens
|
232 |
-
x = torch.cat((cls_token, x_info, rep_token), dim=1)
|
233 |
-
|
234 |
-
if self.ls:
|
235 |
-
# slow update
|
236 |
-
fast_update = 0
|
237 |
-
tmp_x = self.attn(self.norm1(x))
|
238 |
-
fast_update = fast_update + tmp_x[:, -1:]
|
239 |
-
x = x + self.drop_path(self.gamma_1 * tmp_x)
|
240 |
-
tmp_x = self.mlp(self.norm2(x))
|
241 |
-
fast_update = fast_update + tmp_x[:, -1:]
|
242 |
-
x = x + self.drop_path(self.gamma_2 * tmp_x)
|
243 |
-
# fast update
|
244 |
-
x_drop = x_drop + self.gamma_3 * fast_update.expand(-1, N - N_, -1)
|
245 |
-
else:
|
246 |
-
# slow update
|
247 |
-
fast_update = 0
|
248 |
-
tmp_x = self.attn(self.norm1(x))
|
249 |
-
fast_update = fast_update + tmp_x[:, -1:]
|
250 |
-
x = x + self.drop_path(tmp_x)
|
251 |
-
tmp_x = self.mlp(self.norm2(x))
|
252 |
-
fast_update = fast_update + tmp_x[:, -1:]
|
253 |
-
x = x + self.drop_path(tmp_x)
|
254 |
-
# fast update
|
255 |
-
x_drop = x_drop + fast_update.expand(-1, N - N_, -1)
|
256 |
-
|
257 |
-
cls_token, x = x[:, :1, :], x[:, 1:-1, :]
|
258 |
-
if self.training:
|
259 |
-
x_sorted = torch.cat((x, x_drop), dim=1)
|
260 |
-
else:
|
261 |
-
x_sorted[:, N_:] = x_drop
|
262 |
-
x_sorted[:, :N_] = x
|
263 |
-
|
264 |
-
# recover token
|
265 |
-
# scale for normalization
|
266 |
-
old_global_scale = torch.sum(global_attn, dim=1, keepdim=True)
|
267 |
-
# recover order
|
268 |
-
indices = torch.argsort(token_indices, dim=1)
|
269 |
-
x_ga_ti = torch.cat((x_sorted, global_attn.unsqueeze(-1), token_indices.unsqueeze(-1)), dim=-1)
|
270 |
-
x_ga_ti = easy_gather(x_ga_ti, indices)
|
271 |
-
x_patch, global_attn, token_indices = x_ga_ti[:, :, :-2], x_ga_ti[:, :, -2], x_ga_ti[:, :, -1]
|
272 |
-
x_patch = x_patch.transpose(1, 2).reshape(B, C, H, W)
|
273 |
-
|
274 |
-
if self.downsample:
|
275 |
-
# downsample global attention
|
276 |
-
global_attn = global_attn.reshape(B, 1, H, W)
|
277 |
-
global_attn = self.avgpool(global_attn).view(B, -1)
|
278 |
-
# normalize global attention
|
279 |
-
new_global_scale = torch.sum(global_attn, dim=1, keepdim=True)
|
280 |
-
scale = old_global_scale / new_global_scale
|
281 |
-
global_attn = global_attn * scale
|
282 |
-
|
283 |
-
return cls_token, x_patch
|
284 |
-
|
285 |
-
|
286 |
-
class PatchEmbed(nn.Module):
|
287 |
-
""" Image to Patch Embedding
|
288 |
-
"""
|
289 |
-
def __init__(self, patch_size=16, in_chans=3, embed_dim=768):
|
290 |
-
super().__init__()
|
291 |
-
self.norm = nn.LayerNorm(embed_dim)
|
292 |
-
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
|
293 |
-
|
294 |
-
def forward(self, x):
|
295 |
-
x = self.proj(x)
|
296 |
-
B, C, H, W = x.shape
|
297 |
-
x = x.flatten(2).transpose(1, 2)
|
298 |
-
x = self.norm(x)
|
299 |
-
x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
|
300 |
-
return x
|
301 |
-
|
302 |
-
|
303 |
-
class head_embedding(nn.Module):
|
304 |
-
def __init__(self, in_channels, out_channels):
|
305 |
-
super(head_embedding, self).__init__()
|
306 |
-
self.proj = nn.Sequential(
|
307 |
-
nn.Conv2d(in_channels, out_channels // 2, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
|
308 |
-
nn.BatchNorm2d(out_channels // 2),
|
309 |
-
nn.GELU(),
|
310 |
-
nn.Conv2d(out_channels // 2, out_channels, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
|
311 |
-
nn.BatchNorm2d(out_channels),
|
312 |
-
)
|
313 |
-
|
314 |
-
def forward(self, x):
|
315 |
-
x = self.proj(x)
|
316 |
-
return x
|
317 |
-
|
318 |
-
|
319 |
-
class middle_embedding(nn.Module):
|
320 |
-
def __init__(self, in_channels, out_channels):
|
321 |
-
super(middle_embedding, self).__init__()
|
322 |
-
|
323 |
-
self.proj = nn.Sequential(
|
324 |
-
nn.Conv2d(in_channels, out_channels, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
|
325 |
-
nn.BatchNorm2d(out_channels),
|
326 |
-
)
|
327 |
-
|
328 |
-
def forward(self, x):
|
329 |
-
x = self.proj(x)
|
330 |
-
return x
|
331 |
-
|
332 |
-
|
333 |
-
class UniFormer_Light(nn.Module):
|
334 |
-
""" Vision Transformer
|
335 |
-
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
|
336 |
-
https://arxiv.org/abs/2010.11929
|
337 |
-
"""
|
338 |
-
def __init__(self, depth=[3, 4, 8, 3], in_chans=3, num_classes=1000, embed_dim=[64, 128, 320, 512],
|
339 |
-
head_dim=64, mlp_ratio=[4., 4., 4., 4.], qkv_bias=True, qk_scale=None, representation_size=None,
|
340 |
-
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=None, conv_stem=False,
|
341 |
-
prune_ratio=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5]],
|
342 |
-
trade_off=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]):
|
343 |
-
"""
|
344 |
-
Args:
|
345 |
-
img_size (int, tuple): input image size
|
346 |
-
patch_size (int, tuple): patch size
|
347 |
-
in_chans (int): number of input channels
|
348 |
-
num_classes (int): number of classes for classification head
|
349 |
-
embed_dim (int): embedding dimension
|
350 |
-
depth (int): depth of transformer
|
351 |
-
head_dim (int): head dimension
|
352 |
-
mlp_ratio (list): ratio of mlp hidden dim to embedding dim
|
353 |
-
qkv_bias (bool): enable bias for qkv if True
|
354 |
-
qk_scale (float): override default qk scale of head_dim ** -0.5 if set
|
355 |
-
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
|
356 |
-
drop_rate (float): dropout rate
|
357 |
-
attn_drop_rate (float): attention dropout rate
|
358 |
-
drop_path_rate (float): stochastic depth rate
|
359 |
-
norm_layer: (nn.Module): normalization layer
|
360 |
-
"""
|
361 |
-
super().__init__()
|
362 |
-
self.num_classes = num_classes
|
363 |
-
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
|
364 |
-
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
|
365 |
-
if conv_stem:
|
366 |
-
self.patch_embed1 = head_embedding(in_channels=in_chans, out_channels=embed_dim[0])
|
367 |
-
self.patch_embed2 = PatchEmbed(
|
368 |
-
patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
|
369 |
-
self.patch_embed3 = PatchEmbed(
|
370 |
-
patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2])
|
371 |
-
self.patch_embed4 = PatchEmbed(
|
372 |
-
patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3])
|
373 |
-
else:
|
374 |
-
self.patch_embed1 = PatchEmbed(
|
375 |
-
patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0])
|
376 |
-
self.patch_embed2 = PatchEmbed(
|
377 |
-
patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
|
378 |
-
self.patch_embed3 = PatchEmbed(
|
379 |
-
patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2])
|
380 |
-
self.patch_embed4 = PatchEmbed(
|
381 |
-
patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3])
|
382 |
-
|
383 |
-
# class token
|
384 |
-
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim[2]))
|
385 |
-
self.cls_upsample = nn.Linear(embed_dim[2], embed_dim[3])
|
386 |
-
|
387 |
-
self.pos_drop = nn.Dropout(p=drop_rate)
|
388 |
-
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depth))] # stochastic depth decay rule
|
389 |
-
num_heads = [dim // head_dim for dim in embed_dim]
|
390 |
-
self.blocks1 = nn.ModuleList([
|
391 |
-
CBlock(
|
392 |
-
dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio[0], qkv_bias=qkv_bias, qk_scale=qk_scale,
|
393 |
-
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
|
394 |
-
for i in range(depth[0])])
|
395 |
-
self.blocks2 = nn.ModuleList([
|
396 |
-
CBlock(
|
397 |
-
dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio[1], qkv_bias=qkv_bias, qk_scale=qk_scale,
|
398 |
-
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]], norm_layer=norm_layer)
|
399 |
-
for i in range(depth[1])])
|
400 |
-
self.blocks3 = nn.ModuleList([
|
401 |
-
EvoSABlock(
|
402 |
-
dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio[2], qkv_bias=qkv_bias, qk_scale=qk_scale,
|
403 |
-
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]+depth[1]], norm_layer=norm_layer,
|
404 |
-
prune_ratio=prune_ratio[2][i], trade_off=trade_off[2][i],
|
405 |
-
downsample=True if i == depth[2] - 1 else False)
|
406 |
-
for i in range(depth[2])])
|
407 |
-
self.blocks4 = nn.ModuleList([
|
408 |
-
EvoSABlock(
|
409 |
-
dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio[3], qkv_bias=qkv_bias, qk_scale=qk_scale,
|
410 |
-
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+depth[0]+depth[1]+depth[2]], norm_layer=norm_layer,
|
411 |
-
prune_ratio=prune_ratio[3][i], trade_off=trade_off[3][i])
|
412 |
-
for i in range(depth[3])])
|
413 |
-
self.norm = nn.BatchNorm2d(embed_dim[-1])
|
414 |
-
self.norm_cls = nn.LayerNorm(embed_dim[-1])
|
415 |
-
|
416 |
-
# Representation layer
|
417 |
-
if representation_size:
|
418 |
-
self.num_features = representation_size
|
419 |
-
self.pre_logits = nn.Sequential(OrderedDict([
|
420 |
-
('fc', nn.Linear(embed_dim, representation_size)),
|
421 |
-
('act', nn.Tanh())
|
422 |
-
]))
|
423 |
-
else:
|
424 |
-
self.pre_logits = nn.Identity()
|
425 |
-
|
426 |
-
# Classifier head
|
427 |
-
self.head = nn.Linear(embed_dim[-1], num_classes) if num_classes > 0 else nn.Identity()
|
428 |
-
self.head_cls = nn.Linear(embed_dim[-1], num_classes) if num_classes > 0 else nn.Identity()
|
429 |
-
|
430 |
-
self.apply(self._init_weights)
|
431 |
-
|
432 |
-
def _init_weights(self, m):
|
433 |
-
if isinstance(m, nn.Linear):
|
434 |
-
trunc_normal_(m.weight, std=.02)
|
435 |
-
if isinstance(m, nn.Linear) and m.bias is not None:
|
436 |
-
nn.init.constant_(m.bias, 0)
|
437 |
-
elif isinstance(m, nn.LayerNorm):
|
438 |
-
nn.init.constant_(m.bias, 0)
|
439 |
-
nn.init.constant_(m.weight, 1.0)
|
440 |
-
|
441 |
-
@torch.jit.ignore
|
442 |
-
def no_weight_decay(self):
|
443 |
-
return {'pos_embed', 'cls_token'}
|
444 |
-
|
445 |
-
def get_classifier(self):
|
446 |
-
return self.head
|
447 |
-
|
448 |
-
def reset_classifier(self, num_classes, global_pool=''):
|
449 |
-
self.num_classes = num_classes
|
450 |
-
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
|
451 |
-
|
452 |
-
def forward_features(self, x):
|
453 |
-
B = x.shape[0]
|
454 |
-
x = self.patch_embed1(x)
|
455 |
-
x = self.pos_drop(x)
|
456 |
-
for blk in self.blocks1:
|
457 |
-
x = blk(x)
|
458 |
-
x = self.patch_embed2(x)
|
459 |
-
for blk in self.blocks2:
|
460 |
-
x = blk(x)
|
461 |
-
x = self.patch_embed3(x)
|
462 |
-
# add cls_token in stage3
|
463 |
-
cls_token = self.cls_token.expand(x.shape[0], -1, -1)
|
464 |
-
global global_attn, token_indices
|
465 |
-
global_attn = 0
|
466 |
-
token_indices = torch.arange(x.shape[2] * x.shape[3], dtype=torch.long, device=x.device).unsqueeze(0)
|
467 |
-
token_indices = token_indices.expand(x.shape[0], -1)
|
468 |
-
for blk in self.blocks3:
|
469 |
-
cls_token, x = blk(cls_token, x)
|
470 |
-
# upsample cls_token before stage4
|
471 |
-
cls_token = self.cls_upsample(cls_token)
|
472 |
-
x = self.patch_embed4(x)
|
473 |
-
# whether reset global attention? Now simple avgpool
|
474 |
-
token_indices = torch.arange(x.shape[2] * x.shape[3], dtype=torch.long, device=x.device).unsqueeze(0)
|
475 |
-
token_indices = token_indices.expand(x.shape[0], -1)
|
476 |
-
for blk in self.blocks4:
|
477 |
-
cls_token, x = blk(cls_token, x)
|
478 |
-
if self.training:
|
479 |
-
# layer normalization for cls_token
|
480 |
-
cls_token = self.norm_cls(cls_token)
|
481 |
-
x = self.norm(x)
|
482 |
-
x = self.pre_logits(x)
|
483 |
-
return cls_token, x
|
484 |
-
|
485 |
-
def forward(self, x):
|
486 |
-
cls_token, x = self.forward_features(x)
|
487 |
-
x = x.flatten(2).mean(-1)
|
488 |
-
if self.training:
|
489 |
-
x = self.head(x), self.head_cls(cls_token.squeeze(1))
|
490 |
-
else:
|
491 |
-
x = self.head(x)
|
492 |
-
return x
|
493 |
-
|
494 |
-
|
495 |
-
def uniformer_xxs_image(**kwargs):
|
496 |
-
model = UniFormer_Light(
|
497 |
-
depth=[2, 5, 8, 2], conv_stem=True,
|
498 |
-
prune_ratio=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5]],
|
499 |
-
trade_off=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5]],
|
500 |
-
embed_dim=[56, 112, 224, 448], head_dim=28, mlp_ratio=[3, 3, 3, 3], qkv_bias=True,
|
501 |
-
**kwargs)
|
502 |
-
model.default_cfg = _cfg()
|
503 |
-
return model
|
504 |
-
|
505 |
-
|
506 |
-
def uniformer_xs_image(**kwargs):
|
507 |
-
model = UniFormer_Light(
|
508 |
-
depth=[3, 5, 9, 3], conv_stem=True,
|
509 |
-
prune_ratio=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5]],
|
510 |
-
trade_off=[[], [], [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5]],
|
511 |
-
embed_dim=[64, 128, 256, 512], head_dim=32, mlp_ratio=[3, 3, 3, 3], qkv_bias=True,
|
512 |
-
**kwargs)
|
513 |
-
model.default_cfg = _cfg()
|
514 |
-
return model
|
515 |
-
|
516 |
-
|
517 |
-
if __name__ == '__main__':
|
518 |
-
import time
|
519 |
-
from fvcore.nn import FlopCountAnalysis
|
520 |
-
from fvcore.nn import flop_count_table
|
521 |
-
import numpy as np
|
522 |
-
|
523 |
-
seed = 4217
|
524 |
-
np.random.seed(seed)
|
525 |
-
torch.manual_seed(seed)
|
526 |
-
torch.cuda.manual_seed(seed)
|
527 |
-
torch.cuda.manual_seed_all(seed)
|
528 |
-
|
529 |
-
model = uniformer_xxs_image()
|
530 |
-
# print(model)
|
531 |
-
|
532 |
-
flops = FlopCountAnalysis(model, torch.rand(1, 3, 160, 160))
|
533 |
-
s = time.time()
|
534 |
-
print(flop_count_table(flops, max_depth=1))
|
535 |
-
print(time.time()-s)
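The key indexing trick in `easy_gather` above is that per-batch token indices are shifted by `b * N` so a single flat lookup on a `(B*N, C)` view selects the right rows for every batch element at once. A minimal pure-Python sketch of that arithmetic (list-based, with hypothetical toy sizes; the real function operates on torch tensors):

```python
# Pure-Python sketch of the flattened batched gather used by easy_gather:
# per-batch indices are offset by b * N so one flat lookup over the
# (B*N, C) view picks rows for every batch element in a single pass.
def easy_gather_lists(x, indices):
    B = len(x)     # batch size
    N = len(x[0])  # tokens per batch element
    flat = [row for batch in x for row in batch]  # the (B*N, C) view
    return [[flat[b * N + i] for i in indices[b]] for b in range(B)]

x = [[[0], [1], [2]], [[10], [11], [12]]]  # B=2, N=3, C=1
out = easy_gather_lists(x, [[2, 0], [1, 2]])
print(out)  # [[[2], [0]], [[11], [12]]]
```

The tensor version does exactly this with `torch.arange(B).view(B, 1) * N` as the offset and fancy indexing instead of the list comprehension.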
spaces/AndySAnker/DeepStruc/tools/module.py
DELETED
@@ -1,364 +0,0 @@
import torch.nn as nn
import torch, sys
import torch.nn.functional as F
import torch.nn
from torch_geometric.nn import GATConv
import pytorch_lightning as pl
from collections import OrderedDict
from torch_geometric.nn.glob import global_add_pool, GlobalAttention
from torch.distributions import Normal, Independent
from torch.distributions.kl import kl_divergence as KLD


class Net(pl.LightningModule):
    def __init__(self, model_arch, lr=1e-4, beta=0, beta_inc=0.001, beta_max=1, rec_th=0.0001):
        super(Net, self).__init__()
        self.actFunc = nn.LeakyReLU()
        self.actFunc_ReLU = nn.ReLU()
        self.cluster_size = int(model_arch['decoder']['out_dim'])
        self.latent_space = model_arch['latent_space']
        self.beta = beta  # starting value
        self.beta_inc = beta_inc  # beta increase
        self.rec_th = rec_th  # update beta if loss_rec is <= this value
        self.last_beta_update = 0
        self.beta_max = beta_max
        self.lr = lr
        self.num_node_features = model_arch['node_features']
        self.encoder_layers = self.Encoder(model_arch['node_features'], model_arch['encoder'], model_arch['mlps']['m0'])
        self.decoder_layers = self.Decoder(model_arch['node_features'], model_arch['decoder'], model_arch['latent_space'])
        self.mlp_layers = self.MLPs(model_arch['mlps'], model_arch['latent_space'])

        self.prior_layers = self.conditioning_nw(model_arch['PDF_len'], model_arch['prior'], self.latent_space * 2)
        self.posterior_layers = self.conditioning_nw(model_arch['PDF_len'], model_arch['posterior'], model_arch['mlps']['m0'])  # Posterior
        self.glob_at = GlobalAttention(torch.nn.Linear(model_arch['mlps']['m0'], 1), torch.nn.Linear(model_arch['mlps']['m0'], model_arch['mlps']['m0']))

    def MLPs(self, model_arch, latent_dim):
        layers = OrderedDict()

        for idx, key in enumerate(model_arch.keys()):
            if idx == 0:
                layers[str(key)] = torch.nn.Linear(model_arch[key] * 2, model_arch[key])
            else:
                layers[str(key)] = torch.nn.Linear(former_nhid, model_arch[key])

            former_nhid = model_arch[key]

        layers['-1'] = torch.nn.Linear(former_nhid, latent_dim * 2)

        return nn.Sequential(layers)

    def Encoder(self, init_data, model_arch, out_dim):
        layers = OrderedDict()

        for idx, key in enumerate(model_arch.keys()):
            if idx == 0:
                layers[str(key)] = GATConv(init_data, model_arch[key])
            else:
                layers[str(key)] = GATConv(former_nhid, model_arch[key])

            former_nhid = model_arch[key]

        # layers['-1'] = GATConv(former_nhid, model_arch['m0'])
        layers[str('e{}'.format(idx + 1))] = GATConv(former_nhid, out_dim)

        return nn.Sequential(layers)

    def Decoder(self, init_data, model_arch, latent_dim):
        layers = OrderedDict()

        for idx, key in enumerate(model_arch.keys()):
            if idx == 0:
                layers[str(key)] = nn.Linear(latent_dim, model_arch[key])
            elif key == 'out_dim':
                continue
            else:
                layers[str(key)] = nn.Linear(former_nhid, model_arch[key])

            former_nhid = model_arch[key]

        layers[str('d{}'.format(idx + 1))] = nn.Linear(former_nhid, model_arch['out_dim'] * init_data)

        return nn.Sequential(layers)

    def conditioning_nw(self, pdf, model_arch, out):
        ### Conditioning network on prior for atom list
        ### Creates additional node features per node
        ### Assumes 1 x self.atomRange x 1 one-hot encoding vector as input
        ### Output: 1 x 2*latent_dim x 1
        """conditioning_layers = nn.Sequential(
            GatedConv1d(pdf, 48, kernel_size=1, stride=1), nn.ReLU(),
            GatedConv1d(48, 24, kernel_size=1, stride=1), nn.ReLU(),
            GatedConv1d(24, out, kernel_size=1, stride=1))"""

        conditioning_layers = torch.nn.Sequential()
        for idx, key in enumerate(model_arch.keys()):
            if idx == 0:
                conditioning_layers.add_module(str(key), GatedConv1d(pdf, model_arch[key], kernel_size=1, stride=1))
            else:
                conditioning_layers.add_module(str(key), GatedConv1d(former_nhid, model_arch[key], kernel_size=1, stride=1))

            former_nhid = model_arch[key]
        conditioning_layers.add_module('-1', GatedConv1d(former_nhid, out, kernel_size=1, stride=1))

        return conditioning_layers

    def forward(self, data, mode='posterior', sigma_scale=1):
        """
        Parameters
        ----------
        data :
        mode : str - posterior, prior or generate

        Returns
        -------
        """
        self.sigma_scale = sigma_scale
        if mode == 'posterior':
            pdf_cond = data[1].to(self.device)
            data = data[0].to(self.device)
            try:
                this_batch_size = len(data.batch.unique())
            except:
                this_batch_size = 1

            # Prior
            prior = self.get_prior_dist(pdf_cond)

            # Posterior
            posterior = self.get_posterior_dist(data, pdf_cond, this_batch_size)

            # Divergence between posterior and prior
            kl = KLD(posterior, prior) / this_batch_size

            # Draw z from posterior distribution
            z_sample = posterior.rsample()
            z = z_sample.clone()

        elif mode == 'prior':
            try:
                hej = data.clone()
                pdf_cond = data.to(self.device)
                this_batch_size = len(data)
            except:
                # print(data)
                pdf_cond = data[1].to(self.device)
                this_batch_size = 1

            # Prior
            prior = self.get_prior_dist(pdf_cond)

            # Draw z from prior distribution
            z_sample = prior.rsample()
            z = z_sample.clone()
            kl = torch.zeros(this_batch_size) - 1

        elif mode == 'generate':
            # z is given directly
            z = data.clone()
            z_sample = data.clone()
            this_batch_size = 1
            kl = torch.zeros(this_batch_size) - 1

        # Decoder
        for idx, layer in enumerate(self.decoder_layers):
            if idx == len(self.decoder_layers) - 1:
                z_sample = layer(z_sample)
            else:
                z_sample = self.actFunc(layer(z_sample))

        z_sample = z_sample.view(this_batch_size, self.cluster_size, self.num_node_features)  # Output

        return z_sample, z, kl, self.mu, self.sigma  # .mean()

    def get_prior_dist(self, pdf_cond):
        cond_prior = pdf_cond.clone()

        for idx, layer in enumerate(self.prior_layers):
            if idx == len(self.prior_layers) - 1:
                cond_prior = layer(cond_prior)
            else:
                cond_prior = self.actFunc(layer(cond_prior))

        cond_prior = cond_prior.squeeze(-1)
        prior = self.get_distribution(cond_prior)
        return prior

    def get_posterior_dist(self, data, pdf_cond, this_batch_size):
        cond_post = pdf_cond.clone()

        # Posterior
        for idx, layer in enumerate(self.posterior_layers):
            if idx == len(self.posterior_layers) - 1:
                cond_post = layer(cond_post)
            else:
                cond_post = self.actFunc(layer(cond_post))

        # Encoder
        z = data.x.clone()
        for idx, layer in enumerate(self.encoder_layers):
            if idx == len(self.encoder_layers) - 1:
                z = layer(z, data.edge_index)
            else:
                edge_index = data.edge_index
                z = self.actFunc(layer(z, edge_index))
        test = z.clone()

        # z = global_add_pool(z, data.batch, size=this_batch_size)  # Sum node features
        z = self.glob_at(test, data.batch, size=this_batch_size)

        cond_post = cond_post.squeeze(-1)

        z = torch.cat((z, cond_post), -1)

        for idx, layer in enumerate(self.mlp_layers):
            if idx == len(self.mlp_layers) - 1:
                z = layer(z)
            else:
                z = self.actFunc(layer(z))

        # Draw from distribution
        posterior = self.get_distribution(z)
        return posterior

    def get_distribution(self, z):
        mu, log_var = torch.chunk(z, 2, dim=-1)
        log_var = nn.functional.softplus(log_var)  # sigma can't be negative
        sigma = torch.exp(log_var / 2) * self.sigma_scale
        self.sigma = sigma
        self.mu = mu
        distribution = Independent(Normal(loc=mu, scale=sigma), 2)
        return distribution

    def training_step(self, batch, batch_nb):
        prediction, _, kl, _, _ = self.forward(batch)

        loss = weighted_mse_loss(prediction, batch[0]['y'], self.device)

        # loss = F.mse_loss(prediction, batch[0]['y'])
        log_loss = loss  # torch.log(loss)

        tot_loss = log_loss + (self.beta * kl)

        self.log('trn_tot', tot_loss, prog_bar=False, on_step=False, on_epoch=True)
        self.log('trn_rec', loss, prog_bar=False, on_step=False, on_epoch=True)
        self.log('trn_log_rec', log_loss, prog_bar=False, on_step=False, on_epoch=True)
        self.log('trn_kld', kl, prog_bar=False, on_step=False, on_epoch=True)

        return tot_loss

    def validation_step(self, batch, batch_nb):
        prediction, _, kl, _, _ = self.forward(batch)
        prediction_pdf, _, _, _, _ = self.forward(batch[1], mode='prior')

        # loss = weighted_mse_loss(prediction, batch[0]['y'], self.device, node_weight=5)
        # loss_pdf = weighted_mse_loss(prediction_pdf, batch[0]['y'], self.device, node_weight=5)
|
270 |
-
|
271 |
-
loss = F.mse_loss(prediction, batch[0]['y'])
|
272 |
-
loss_pdf = F.mse_loss(prediction_pdf, batch[0]['y'])
|
273 |
-
|
274 |
-
log_loss = loss#torch.log(loss)
|
275 |
-
|
276 |
-
tot_loss = log_loss + (self.beta * kl)
|
277 |
-
|
278 |
-
if (self.last_beta_update != self.current_epoch and self.beta < self.beta_max) and loss <= self.rec_th:
|
279 |
-
self.beta += self.beta_inc
|
280 |
-
self.last_beta_update = self.current_epoch
|
281 |
-
|
282 |
-
beta = self.beta
|
283 |
-
self.log('vld_tot', tot_loss, prog_bar=True, on_epoch=True)
|
284 |
-
self.log('vld_rec', loss, prog_bar=True, on_epoch=True)
|
285 |
-
self.log('vld_log_rec', log_loss, prog_bar=True, on_epoch=True)
|
286 |
-
self.log('vld_rec_pdf', loss_pdf, prog_bar=True, on_epoch=True)
|
287 |
-
self.log('vld_kld', kl, prog_bar=True, on_epoch=True)
|
288 |
-
self.log('beta', beta, prog_bar=True, on_step=False, on_epoch=True)
|
289 |
-
|
290 |
-
return tot_loss
|
291 |
-
|
292 |
-
|
293 |
-
def test_step(self, batch, batch_nb):
|
294 |
-
prediction, _, kl, _, _ = self.forward(batch)
|
295 |
-
prediction_pdf, _, _, _, _ = self.forward(batch[1], mode='prior')
|
296 |
-
|
297 |
-
#loss = weighted_mse_loss(prediction, batch[0]['y'], self.device, node_weight=5)
|
298 |
-
#loss_pdf = weighted_mse_loss(prediction_pdf, batch[0]['y'], self.device, node_weight=5)
|
299 |
-
|
300 |
-
loss = F.mse_loss(prediction, batch[0]['y'])
|
301 |
-
loss_pdf = F.mse_loss(prediction_pdf, batch[0]['y'])
|
302 |
-
|
303 |
-
log_loss = loss#torch.log(loss)
|
304 |
-
|
305 |
-
tot_loss = log_loss + (self.beta * kl)
|
306 |
-
|
307 |
-
self.log('tst_tot', tot_loss, prog_bar=False, on_epoch=True)
|
308 |
-
self.log('tst_rec', loss, prog_bar=False, on_epoch=True)
|
309 |
-
self.log('tst_log_rec', log_loss, prog_bar=False, on_epoch=True)
|
310 |
-
self.log('tst_rec_pdf', loss_pdf, prog_bar=False, on_epoch=True)
|
311 |
-
self.log('tst_kld', kl, prog_bar=False, on_epoch=True)
|
312 |
-
|
313 |
-
return tot_loss
|
314 |
-
|
315 |
-
|
316 |
-
def configure_optimizers(self):
|
317 |
-
return torch.optim.Adam(self.parameters(), lr=self.lr)
|
318 |
-
|
319 |
-
|
320 |
-
class GatedConv1d(nn.Module):
|
321 |
-
def __init__(self, input_channels, output_channels,
|
322 |
-
kernel_size, stride, padding=0, dilation=1, activation=None):
|
323 |
-
super(GatedConv1d, self).__init__()
|
324 |
-
|
325 |
-
self.activation = activation
|
326 |
-
self.sigmoid = nn.Sigmoid()
|
327 |
-
|
328 |
-
self.h = nn.Conv1d(input_channels, output_channels, kernel_size,
|
329 |
-
stride, padding, dilation)
|
330 |
-
self.g = nn.Conv1d(input_channels, output_channels, kernel_size,
|
331 |
-
stride, padding, dilation)
|
332 |
-
|
333 |
-
def forward(self, x):
|
334 |
-
if self.activation is None:
|
335 |
-
h = self.h(x)
|
336 |
-
else:
|
337 |
-
h = self.activation(self.h(x))
|
338 |
-
g = self.sigmoid(self.g(x))
|
339 |
-
|
340 |
-
return h * g
|
341 |
-
|
342 |
-
|
343 |
-
def weighted_mse_loss(pred, label,device, dummy_weight=0.1, node_weight=1):
|
344 |
-
"""
|
345 |
-
|
346 |
-
Parameters
|
347 |
-
----------
|
348 |
-
pred : Predictions. (tensor)
|
349 |
-
label : True labels. (tensor)
|
350 |
-
dummy_weight : Weight of dummy nodes, default is 0.1. (float)
|
351 |
-
|
352 |
-
Returns
|
353 |
-
-------
|
354 |
-
this_loss : Computed loss. (tensor)
|
355 |
-
"""
|
356 |
-
mask = torch.ones(label.shape).to(device)
|
357 |
-
mask[label == -1.] = dummy_weight
|
358 |
-
mask[label >= -0] = node_weight
|
359 |
-
|
360 |
-
loss_func = nn.MSELoss(reduction='none')
|
361 |
-
this_loss = loss_func(pred, label)
|
362 |
-
this_loss = this_loss*mask
|
363 |
-
|
364 |
-
return this_loss.mean()
|
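The `weighted_mse_loss` above down-weights the padded "dummy" nodes (labeled `-1`) relative to real nodes. A minimal standalone sketch of the same masking idea (CPU tensors only; the function name and values here are illustrative, not from the file):

```python
import torch
import torch.nn as nn

def weighted_mse(pred, label, dummy_weight=0.1, node_weight=1.0):
    # Per-element weights: dummy entries (label == -1) get dummy_weight,
    # real entries (label >= 0) get node_weight.
    mask = torch.full_like(label, dummy_weight)
    mask[label >= 0] = node_weight
    per_elem = nn.MSELoss(reduction="none")(pred, label)
    return (per_elem * mask).mean()

pred = torch.tensor([0.0, 0.0])
label = torch.tensor([-1.0, 1.0])  # first entry is a dummy node
loss = weighted_mse(pred, label)
# dummy: (0 - (-1))^2 * 0.1 = 0.1 ; real: (0 - 1)^2 * 1.0 = 1.0 ; mean = 0.55
```

Because the mask is applied before the mean, a batch dominated by dummy nodes no longer dominates the gradient.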
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/example/script.py
DELETED
@@ -1,139 +0,0 @@
"""
An example of extension. It does nothing, but you can add transformations
before the return statements to customize the webui behavior.

Starting from history_modifier and ending in output_modifier, the
functions are declared in the same order that they are called at
generation time.
"""

import gradio as gr
import torch
from transformers import LogitsProcessor

from modules import chat, shared
from modules.text_generation import (
    decode,
    encode,
    generate_reply,
)

params = {
    "display_name": "Example Extension",
    "is_tab": False,
}

class MyLogits(LogitsProcessor):
    """
    Manipulates the probabilities for the next token before it gets sampled.
    Used in the logits_processor_modifier function below.
    """
    def __init__(self):
        pass

    def __call__(self, input_ids, scores):
        # probs = torch.softmax(scores, dim=-1, dtype=torch.float)
        # probs[0] /= probs[0].sum()
        # scores = torch.log(probs / (1 - probs))
        return scores

def history_modifier(history):
    """
    Modifies the chat history.
    Only used in chat mode.
    """
    return history

def state_modifier(state):
    """
    Modifies the state variable, which is a dictionary containing the input
    values in the UI like sliders and checkboxes.
    """
    return state

def chat_input_modifier(text, visible_text, state):
    """
    Modifies the user input string in chat mode (visible_text).
    You can also modify the internal representation of the user
    input (text) to change how it will appear in the prompt.
    """
    return text, visible_text

def input_modifier(string, state, is_chat=False):
    """
    In default/notebook modes, modifies the whole prompt.

    In chat mode, it is the same as chat_input_modifier but only applied
    to "text", here called "string", and not to "visible_text".
    """
    return string

def bot_prefix_modifier(string, state):
    """
    Modifies the prefix for the next bot reply in chat mode.
    By default, the prefix will be something like "Bot Name:".
    """
    return string

def tokenizer_modifier(state, prompt, input_ids, input_embeds):
    """
    Modifies the input ids and embeds.
    Used by the multimodal extension to put image embeddings in the prompt.
    Only used by loaders that use the transformers library for sampling.
    """
    return prompt, input_ids, input_embeds

def logits_processor_modifier(processor_list, input_ids):
    """
    Adds logits processors to the list, allowing you to access and modify
    the next token probabilities.
    Only used by loaders that use the transformers library for sampling.
    """
    processor_list.append(MyLogits())
    return processor_list

def output_modifier(string, state, is_chat=False):
    """
    Modifies the LLM output before it gets presented.

    In chat mode, the modified version goes into history['visible'],
    and the original version goes into history['internal'].
    """
    return string

def custom_generate_chat_prompt(user_input, state, **kwargs):
    """
    Replaces the function that generates the prompt from the chat history.
    Only used in chat mode.
    """
    result = chat.generate_chat_prompt(user_input, state, **kwargs)
    return result

def custom_css():
    """
    Returns a CSS string that gets appended to the CSS for the webui.
    """
    return ''

def custom_js():
    """
    Returns a javascript string that gets appended to the javascript
    for the webui.
    """
    return ''

def setup():
    """
    Gets executed only once, when the extension is imported.
    """
    pass

def ui():
    """
    Gets executed when the UI is drawn. Custom gradio elements and
    their corresponding event handlers should be defined here.

    To learn about gradio components, check out the docs:
    https://gradio.app/docs/
    """
    pass
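Every stub above is a pass-through; a working extension only needs to change what a hook returns. A minimal standalone sketch of a real `output_modifier` (the hook signature matches the stub above; the appended text is just an example, and the webui imports are omitted so it runs on its own):

```python
def output_modifier(string, state, is_chat=False):
    # Append a visible signature to every reply. In chat mode the
    # modified text goes into history['visible'] while the original
    # stays in history['internal'], per the docstring above.
    return string + "\n\n-- example extension"

modified = output_modifier("Hello!", state={}, is_chat=True)
print(modified)
```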
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/__main__.py
DELETED
@@ -1,6 +0,0 @@
"""Entry point for cli, enables execution with `python -m dotenv`"""

from .cli import cli

if __name__ == "__main__":
    cli()
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/rcnn.py
DELETED
@@ -1,327 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import logging
import numpy as np
from typing import Dict, List, Optional, Tuple
import torch
from torch import nn

from detectron2.config import configurable
from detectron2.data.detection_utils import convert_image_to_rgb
from detectron2.structures import ImageList, Instances
from detectron2.utils.events import get_event_storage
from detectron2.utils.logger import log_first_n

from ..backbone import Backbone, build_backbone
from ..postprocessing import detector_postprocess
from ..proposal_generator import build_proposal_generator
from ..roi_heads import build_roi_heads
from .build import META_ARCH_REGISTRY

__all__ = ["GeneralizedRCNN", "ProposalNetwork"]


@META_ARCH_REGISTRY.register()
class GeneralizedRCNN(nn.Module):
    """
    Generalized R-CNN. Any models that contains the following three components:
    1. Per-image feature extraction (aka backbone)
    2. Region proposal generation
    3. Per-region feature extraction and prediction
    """

    @configurable
    def __init__(
        self,
        *,
        backbone: Backbone,
        proposal_generator: nn.Module,
        roi_heads: nn.Module,
        pixel_mean: Tuple[float],
        pixel_std: Tuple[float],
        input_format: Optional[str] = None,
        vis_period: int = 0,
    ):
        """
        Args:
            backbone: a backbone module, must follow detectron2's backbone interface
            proposal_generator: a module that generates proposals using backbone features
            roi_heads: a ROI head that performs per-region computation
            pixel_mean, pixel_std: list or tuple with #channels element, representing
                the per-channel mean and std to be used to normalize the input image
            input_format: describe the meaning of channels of input. Needed by visualization
            vis_period: the period to run visualization. Set to 0 to disable.
        """
        super().__init__()
        self.backbone = backbone
        self.proposal_generator = proposal_generator
        self.roi_heads = roi_heads

        self.input_format = input_format
        self.vis_period = vis_period
        if vis_period > 0:
            assert input_format is not None, "input_format is required for visualization!"

        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
        assert (
            self.pixel_mean.shape == self.pixel_std.shape
        ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!"

    @classmethod
    def from_config(cls, cfg):
        backbone = build_backbone(cfg)
        return {
            "backbone": backbone,
            "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
            "roi_heads": build_roi_heads(cfg, backbone.output_shape()),
            "input_format": cfg.INPUT.FORMAT,
            "vis_period": cfg.VIS_PERIOD,
            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
            "pixel_std": cfg.MODEL.PIXEL_STD,
        }

    @property
    def device(self):
        return self.pixel_mean.device

    def visualize_training(self, batched_inputs, proposals):
        """
        A function used to visualize images and proposals. It shows ground truth
        bounding boxes on the original image and up to 20 top-scoring predicted
        object proposals on the original image. Users can implement different
        visualization functions for different models.

        Args:
            batched_inputs (list): a list that contains input to the model.
            proposals (list): a list that contains predicted proposals. Both
                batched_inputs and proposals should have the same length.
        """
        from detectron2.utils.visualizer import Visualizer

        storage = get_event_storage()
        max_vis_prop = 20

        for input, prop in zip(batched_inputs, proposals):
            img = input["image"]
            img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
            v_gt = Visualizer(img, None)
            v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes)
            anno_img = v_gt.get_image()
            box_size = min(len(prop.proposal_boxes), max_vis_prop)
            v_pred = Visualizer(img, None)
            v_pred = v_pred.overlay_instances(
                boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy()
            )
            prop_img = v_pred.get_image()
            vis_img = np.concatenate((anno_img, prop_img), axis=1)
            vis_img = vis_img.transpose(2, 0, 1)
            vis_name = "Left: GT bounding boxes; Right: Predicted proposals"
            storage.put_image(vis_name, vis_img)
            break  # only visualize one image in a batch

    def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
                Each item in the list contains the inputs for one image.
                For now, each item in the list is a dict that contains:

                * image: Tensor, image in (C, H, W) format.
                * instances (optional): groundtruth :class:`Instances`
                * proposals (optional): :class:`Instances`, precomputed proposals.

                Other information that's included in the original dicts, such as:

                * "height", "width" (int): the output resolution of the model, used in inference.
                  See :meth:`postprocess` for details.

        Returns:
            list[dict]:
                Each dict is the output for one input image.
                The dict contains one key "instances" whose value is a :class:`Instances`.
                The :class:`Instances` object has the following keys:
                "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints"
        """
        if not self.training:
            return self.inference(batched_inputs)

        images = self.preprocess_image(batched_inputs)
        if "instances" in batched_inputs[0]:
            gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
        else:
            gt_instances = None

        features = self.backbone(images.tensor)

        if self.proposal_generator is not None:
            proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
        else:
            assert "proposals" in batched_inputs[0]
            proposals = [x["proposals"].to(self.device) for x in batched_inputs]
            proposal_losses = {}

        _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
        if self.vis_period > 0:
            storage = get_event_storage()
            if storage.iter % self.vis_period == 0:
                self.visualize_training(batched_inputs, proposals)

        losses = {}
        losses.update(detector_losses)
        losses.update(proposal_losses)
        return losses

    def inference(
        self,
        batched_inputs: List[Dict[str, torch.Tensor]],
        detected_instances: Optional[List[Instances]] = None,
        do_postprocess: bool = True,
    ):
        """
        Run inference on the given inputs.

        Args:
            batched_inputs (list[dict]): same as in :meth:`forward`
            detected_instances (None or list[Instances]): if not None, it
                contains an `Instances` object per image. The `Instances`
                object contains "pred_boxes" and "pred_classes" which are
                known boxes in the image.
                The inference will then skip the detection of bounding boxes,
                and only predict other per-ROI outputs.
            do_postprocess (bool): whether to apply post-processing on the outputs.

        Returns:
            When do_postprocess=True, same as in :meth:`forward`.
            Otherwise, a list[Instances] containing raw network outputs.
        """
        assert not self.training

        images = self.preprocess_image(batched_inputs)
        features = self.backbone(images.tensor)

        if detected_instances is None:
            if self.proposal_generator is not None:
                proposals, _ = self.proposal_generator(images, features, None)
            else:
                assert "proposals" in batched_inputs[0]
                proposals = [x["proposals"].to(self.device) for x in batched_inputs]

            results, _ = self.roi_heads(images, features, proposals, None)
        else:
            detected_instances = [x.to(self.device) for x in detected_instances]
            results = self.roi_heads.forward_with_given_boxes(features, detected_instances)

        if do_postprocess:
            assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess."
            return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes)
        else:
            return results

    def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]):
        """
        Normalize, pad and batch the input images.
        """
        images = [x["image"].to(self.device) for x in batched_inputs]
        images = [(x - self.pixel_mean) / self.pixel_std for x in images]
        images = ImageList.from_tensors(images, self.backbone.size_divisibility)
        return images

    @staticmethod
    def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes):
        """
        Rescale the output instances to the target size.
        """
        # note: private function; subject to changes
        processed_results = []
        for results_per_image, input_per_image, image_size in zip(
            instances, batched_inputs, image_sizes
        ):
            height = input_per_image.get("height", image_size[0])
            width = input_per_image.get("width", image_size[1])
            r = detector_postprocess(results_per_image, height, width)
            processed_results.append({"instances": r})
        return processed_results


@META_ARCH_REGISTRY.register()
class ProposalNetwork(nn.Module):
    """
    A meta architecture that only predicts object proposals.
    """

    @configurable
    def __init__(
        self,
        *,
        backbone: Backbone,
        proposal_generator: nn.Module,
        pixel_mean: Tuple[float],
        pixel_std: Tuple[float],
    ):
        """
        Args:
            backbone: a backbone module, must follow detectron2's backbone interface
            proposal_generator: a module that generates proposals using backbone features
            pixel_mean, pixel_std: list or tuple with #channels element, representing
                the per-channel mean and std to be used to normalize the input image
        """
        super().__init__()
        self.backbone = backbone
        self.proposal_generator = proposal_generator
        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)

    @classmethod
    def from_config(cls, cfg):
        backbone = build_backbone(cfg)
        return {
            "backbone": backbone,
            "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
            "pixel_std": cfg.MODEL.PIXEL_STD,
        }

    @property
    def device(self):
        return self.pixel_mean.device

    def forward(self, batched_inputs):
        """
        Args:
            Same as in :class:`GeneralizedRCNN.forward`

        Returns:
            list[dict]:
                Each dict is the output for one input image.
                The dict contains one key "proposals" whose value is a
                :class:`Instances` with keys "proposal_boxes" and "objectness_logits".
        """
        images = [x["image"].to(self.device) for x in batched_inputs]
        images = [(x - self.pixel_mean) / self.pixel_std for x in images]
        images = ImageList.from_tensors(images, self.backbone.size_divisibility)
        features = self.backbone(images.tensor)

        if "instances" in batched_inputs[0]:
            gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
        elif "targets" in batched_inputs[0]:
            log_first_n(
                logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10
            )
            gt_instances = [x["targets"].to(self.device) for x in batched_inputs]
        else:
            gt_instances = None
        proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
        # In training, the proposals are not useful at all but we generate them anyway.
        # This makes RPN-only models about 5% slower.
        if self.training:
            return proposal_losses

        processed_results = []
        for results_per_image, input_per_image, image_size in zip(
            proposals, batched_inputs, images.image_sizes
        ):
            height = input_per_image.get("height", image_size[0])
            width = input_per_image.get("width", image_size[1])
            r = detector_postprocess(results_per_image, height, width)
            processed_results.append({"proposals": r})
        return processed_results
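`preprocess_image` above delegates padding and batching to `ImageList.from_tensors`. A rough standalone sketch of what that padding step does (pad each image to the batch maximum, rounded up to a multiple of `size_divisibility` so the backbone's strided layers divide evenly); the function name is illustrative, not detectron2's API:

```python
import torch
import torch.nn.functional as F

def pad_and_batch(images, size_divisibility=32):
    # Pad each (C, H, W) image to the max H/W in the batch,
    # rounded up to a multiple of size_divisibility, then stack.
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    d = size_divisibility
    max_h = (max_h + d - 1) // d * d
    max_w = (max_w + d - 1) // d * d
    padded = [
        F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1]))
        for img in images
    ]
    return torch.stack(padded)

batch = pad_and_batch([torch.rand(3, 50, 60), torch.rand(3, 70, 40)])
# 70 rounds up to 96 and 60 rounds up to 64, so batch is (2, 3, 96, 64)
```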
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/dev/packaging/gen_wheel_index.sh
DELETED
@@ -1,46 +0,0 @@
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates.


root=$(readlink -f $1)
if [[ -z "$root" ]]; then
  echo "Usage: ./gen_wheel_index.sh /absolute/path/to/wheels"
  exit
fi

export LC_ALL=C  # reproducible sort
# NOTE: all sort in this script might not work when xx.10 is released

index=$root/index.html

cd "$root"
for cu in cpu cu92 cu100 cu101 cu102 cu110 cu111 cu113; do
  mkdir -p "$root/$cu"
  cd "$root/$cu"
  echo "Creating $PWD/index.html ..."
  # First sort by torch version, then stable sort by d2 version with unique.
  # As a result, the latest torch version for each d2 version is kept.
  for whl in $(find -type f -name '*.whl' -printf '%P\n' \
    | sort -k 1 -r | sort -t '/' -k 2 --stable -r --unique); do
    echo "<a href=\"${whl/+/%2B}\">$whl</a><br>"
  done > index.html


  for torch in torch*; do
    cd "$root/$cu/$torch"

    # list all whl for each cuda,torch version
    echo "Creating $PWD/index.html ..."
    for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do
      echo "<a href=\"${whl/+/%2B}\">$whl</a><br>"
    done > index.html
  done
done

cd "$root"
# Just list everything:
echo "Creating $index ..."
for whl in $(find . -type f -name '*.whl' -printf '%P\n' | sort -r); do
  echo "<a href=\"${whl/+/%2B}\">$whl</a><br>"
done > "$index"
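The double sort in the first loop above is the subtle part: the first `sort` orders wheel paths by torch version, and the second (stable, reverse, unique on the detectron2 field) then keeps only the newest torch build per detectron2 version. A small demonstration with made-up wheel names (GNU sort assumed, matching the script):

```shell
export LC_ALL=C  # reproducible sort, as in the script
printf '%s\n' \
  'torch1.9/detectron2-0.5.whl' \
  'torch1.8/detectron2-0.5.whl' \
  'torch1.9/detectron2-0.4.whl' \
  | sort -k 1 -r | sort -t '/' -k 2 --stable -r --unique
```

Only `torch1.9/detectron2-0.5.whl` and `torch1.9/detectron2-0.4.whl` survive: for each detectron2 version, the `torch1.8` duplicate is dropped by `--unique` because the stable sort keeps the first (highest-torch) line of each equal-key run.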
spaces/B1360976/waste-management-system/app.py
DELETED
@@ -1,125 +0,0 @@
from fastai.vision import *
from fastai.imports import *
from fastai.learner import *
from fastai.vision.all import *

import streamlit as st
import numpy as np
import matplotlib.image as mpimg
import os
import platform
import time
from PIL import Image
import requests
from io import BytesIO
import pathlib


# st.set_page_config(layout="wide")

# For Windows deployment
# temp = pathlib.PosixPath
# pathlib.PosixPath = pathlib.WindowsPath

# For Linux deployment: the model was pickled on Windows, so alias
# WindowsPath to PosixPath before unpickling it
plt = platform.system()
if plt == 'Linux': pathlib.WindowsPath = pathlib.PosixPath

path = Path('.')

with open('style.css') as f:
    st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)


# ------ Create/define all functions ------

def load_model():
    model = load_learner(path/'waste_model.pkl')
    return model

model = load_model()

def display_image(display_img):
    st.image(display_img, width=400)
    # use_column_width=True


def make_pred(model, img):
    # Temporarily display a message while classifying
    with st.spinner('Classifying, Please Wait...'):
        time.sleep(1)

    pred, pred_idx, prob = model.predict(img)
    pred_prob = f'{prob[pred_idx]*100:.0f}%'

    # Build the prediction message
    if pred == 'R':
        pred_state = 'The image is a recyclable waste'
    else:
        pred_state = 'The image is an organic waste'

    return pred_state, pred_prob


######## -------- Setup Diagnosis Page -------- ########

# if selected_nav == 'Diagnosis':

######## ------- Create Side Bar --------- ########

# st.sidebar.image('wms.jpg')
# For image upload
img_upload = st.sidebar.file_uploader(label='Upload a Waste Image for Classification',
                                      type=['png', 'jpg', 'jpeg'])

# For image selection
test_images = os.listdir(path/'sample')
img_selected = st.sidebar.selectbox(
    'Please Select a Waste Image:', test_images)


if img_selected:
    # Read the image
    file_path = path/'sample'/img_selected
    # Get the image to display
    display_img = Image.open(file_path)
    # display_img = display_img.resize((244,224))
    img = PILImage.create(file_path)


if img_upload:
    display_img = Image.open(img_upload)
    img = PILImage.create(img_upload)


st.markdown("""
<h3 style="text-align:center;color:#006ef5;">Waste Classification System (DEMO)</h3>
""", unsafe_allow_html=True)

st.markdown("##")

st.markdown("""
<p> <b>Instruction:</b> Please upload a waste image (using the sidebar) for classification or select a sample image</p>
""", unsafe_allow_html=True)

with st.container():

    display_image(display_img)

    waste_prediction_output = ""

    st.markdown("##")

    if st.button('Classify Waste'):
        waste_prediction, pred_prob = make_pred(model, img)
        waste_prediction_output = f"{waste_prediction}, With a {pred_prob} Confidence"

        st.success(waste_prediction_output)

    st.markdown("##")
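The `platform`/`pathlib` shim near the top of app.py works around a common fastai deployment pitfall: a `Learner` pickled on Windows contains `WindowsPath` objects, which cannot be instantiated on a POSIX system (and vice versa). A standalone sketch of that workaround follows; the function name and the symmetric Windows branch are my additions, not code from the app:

```python
import pathlib
import platform

def patch_pickled_paths() -> str:
    """Alias the foreign pathlib class to the native one so a model
    pickled on the other OS can be unpickled on this one."""
    system = platform.system()
    if system == "Linux":
        # Pickles made on Windows reference pathlib.WindowsPath
        pathlib.WindowsPath = pathlib.PosixPath
    elif system == "Windows":
        # Pickles made on Linux/macOS reference pathlib.PosixPath
        pathlib.PosixPath = pathlib.WindowsPath
    return system
```

After this runs, `load_learner(path/'waste_model.pkl')` can succeed regardless of which OS the model was trained on; the app applies only the Linux branch, unconditionally at import time.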
spaces/Bart92/RVC_HF/lib/infer_pack/modules/F0Predictor/__init__.py
DELETED
File without changes
spaces/Benson/text-generation/Examples/Buena Pizza Gran Pizza Descargar Mac.md
DELETED
@@ -1,101 +0,0 @@

<h1>How to Download and Play Good Pizza, Great Pizza on Your Mac</h1>
<p>Do you love pizza? Do you want to run your own pizzeria, and have fun while doing it? If you answered yes to any of these questions, you should try Good Pizza, Great Pizza, a simulation game that lets you make and serve pizzas to your customers. In this article we will show you what Good Pizza, Great Pizza is, how to download and install it on your Mac, and how to make delicious pizzas at home.</p>
<h2>buena pizza gran pizza descargar mac</h2><br /><p><b><b>DOWNLOAD</b> >> <a href="https://bltlly.com/2v6LYM">https://bltlly.com/2v6LYM</a></b></p><br /><br />
<h2>What Is Good Pizza, Great Pizza?</h2>
<h3>A fun and challenging pizza-making game</h3>
<p>Good Pizza, Great Pizza is a game developed by TapBlaze, a studio specializing in casual and simulation games. It was released in 2014 for Android and iOS devices and later, in 2020, for PC and Nintendo Switch. The game has been downloaded more than 50 million times and has received positive reviews from players and critics alike.</p>
<p>The game puts you in the shoes of a pizzeria owner who has to compete with a rival pizza shop across the street. You must fulfill pizza orders from customers with different preferences and personalities while earning enough money to keep your shop open. You can also upgrade your shop with new ingredients, equipment, and decor to attract more customers and improve the quality of your pizza.</p>
<h3>Features and gameplay</h3>
<p>Good Pizza, Great Pizza has many features that make it a fun and addictive game. Some of them are:</p>
<ul>
<li>Pizza News Network (PNN), the first news show about all things pizza.</li>
<li>More than 80 customers with unique pizza orders and personalities.</li>
<li>Pizza toppings including pepperoni, sausage, onions, cheese, mushrooms, and more.</li>
<li>Equipment upgrades to help you become the master ovenist.</li>
<li>Simple, fun, and challenging gameplay.</li>
<li>Created by pizza-making professionals; the game's designer worked in a pizza kitchen for four years.</li>
</ul>
<p>A typical round plays out like this:</p>
<ol>
<li>Open your shop and wait for customers to come in.</li>
<li>Listen to their pizza orders carefully and ask for clarification if needed.</li>
<li>Prepare the dough, sauce, and toppings according to their requests.</li>
<li>Bake the pizza in the oven until it is cooked.</li>
<li>Cut the pizza into the requested number of slices.</li>
<li>Box the pizza and hand it to the customer.</li>
<li>Collect your money and tips.</li>
</ol>
<p>You have to be quick and accurate when making pizzas, since customers will grow impatient or unhappy if you take too long or make mistakes. You also have to manage your inventory and budget wisely, because you must buy ingredients and pay rent every day. You can also take part in special events and challenges that test your pizza skills.</p>
<h3>Reviews and ratings</h3>
<h2>How to Download and Install Good Pizza, Great Pizza on Your Mac</h2>
<p>If you want to play Good Pizza, Great Pizza on your Mac, you have two options: use emulator software or use the App Store. We explain both options in detail below.</p>
<h3>Option 1: Using emulator software</h3>
<h4>What is emulator software?</h4>
<p>An emulator is a program that lets you run apps designed for a different operating system on your computer. For example, if you want to play an Android game on your Mac, you can use an emulator to simulate an Android device on your Mac. That way you can download and install Android apps on your Mac and play them as if you were using an Android device.</p>
<h4>How to use BlueStacks to play Good Pizza, Great Pizza on your Mac</h4>
<p>One of the most popular and reliable emulators for Mac is BlueStacks, which lets you run Android apps on your Mac. These are the steps to use BlueStacks to play Good Pizza, Great Pizza on your Mac:</p>
<ol>
<li>Go to the official BlueStacks website and download the latest version of the software for Mac.</li>
<li>Install BlueStacks on your Mac by following the on-screen instructions.</li>
<li>Launch BlueStacks and sign in with your Google account, or create a new one.</li>
<li>Open the Google Play Store app inside BlueStacks and search for Good Pizza, Great Pizza.</li>
<li>Click the Install button and wait for the game to download and install.</li>
<li>Click the Open button, or go to the My Apps tab in BlueStacks and find Good Pizza, Great Pizza.</li>
<li>Enjoy playing Good Pizza, Great Pizza on your Mac!</li>
</ol>
<p>Note: You can also use other emulator software such as NoxPlayer or MEmu to play Good Pizza, Great Pizza on your Mac, but the steps may vary slightly.</p>
<h3>Option 2: Using the App Store</h3>
<h4>How to access the App Store on your Mac</h4>
<ol>
<li>Click the Apple icon in the top-left corner of the screen and select System Preferences.</li>
<li>Click Apple ID and sign in with your Apple ID, or create a new one.</li>
<li>Click Media &amp; Purchases and make sure App Store is checked under Apps.</li>
<li>Close System Preferences and click the App Store icon in your Dock or Launchpad.</li>
</ol>
<h4>How to download and install Good Pizza, Great Pizza from the App Store</h4>
<p>Once you have access to the App Store on your Mac, you can download and install Good Pizza, Great Pizza by following these steps:</p>
<ol>
<li>Type Good Pizza, Great Pizza into the App Store's search bar and press Enter.</li>
<li>Click the Get button next to Good Pizza, Great Pizza and enter your Apple ID password if prompted.</li>
<li>Wait for the game to download and install on your Mac.</li>
<li>Click the Play button, or go to Launchpad and find Good Pizza, Great Pizza.</li>
<li>Enjoy playing Good Pizza, Great Pizza on your Mac!</li>
</ol>
<h2>How to Make Delicious Pizzas at Home</h2>
<p>If playing Good Pizza, Great Pizza has made you hungry for real pizza, why not try making one at home?</p>
<ol>
<li>Preheat the oven to 375°F (190°C) and bake the dough for about 10 minutes, or until lightly golden.</li>
<li>Spread the pizza sauce evenly over the dough, leaving a small border around the edges. You can use store-bought pizza sauce or make your own by simmering tomato sauce, garlic, oregano, basil, salt, and pepper in a small saucepan for about 15 minutes.</li>
<li>Sprinkle mozzarella cheese over the sauce and add your toppings of choice, such as pepperoni, sausage, ham, bacon, chicken, mushrooms, onions, peppers, olives, pineapple, or spinach.</li>
<li>Return the pizza to the oven and bake for another 10 to 15 minutes, or until the cheese is melted and bubbly and the crust is golden and crispy.</li>
<li>Enjoy your homemade pizza with family and friends!</li>
</ol>
<h3>Tips and tricks to improve your pizza</h3>
<p>Here are some tips and tricks to make your pizza better:</p>
<ul>
<li>Use high-quality ingredients, such as fresh mozzarella, organic tomato sauce, and fresh or dried herbs.</li>
<li>Experiment with different combinations of toppings and sauces to create your own pizzas.</li>
<li>Use a pizza stone or a preheated baking sheet for a crispier crust and more even heat distribution.</li>
<li>Brush a little olive oil or melted butter on the edges of the crust and sprinkle on some garlic powder or Parmesan cheese for extra flavor and crunch.</li>
<li>Cut your pizza into smaller slices to make it easier to eat and share.</li>
</ul>
<h2>Conclusion</h2>
<p>Good Pizza, Great Pizza is a fun and challenging game that lets you run your own pizzeria and serve pizzas to your customers. You can download and play it on your Mac using emulator software or the App Store, and you can also make delicious pizzas at home with simple ingredients and equipment. We hope you enjoyed this article and learned something new. Now go make some good pizzas, great pizzas!</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about Good Pizza, Great Pizza:</p>
<ol>
<li>How many chapters are there in Good Pizza, Great Pizza?</li>
<p>There are currently six chapters in Good Pizza, Great Pizza, each with its own themes and challenges. The developers are working on adding more chapters in the future.</p>
<li>How can I get more money and tips in Good Pizza, Great Pizza?</li>
<p>You can get more money and tips by making pizzas quickly and accurately, satisfying customer requests, upgrading your shop, and taking part in events and challenges.</p>
<li>How can I reset my progress in Good Pizza, Great Pizza?</li>
<li>Is Good Pizza, Great Pizza free to play?</li>
<p>Yes, Good Pizza, Great Pizza is free to play on all platforms. However, there are some optional in-app purchases that can enhance your gaming experience.</p>
<li>Can I play Good Pizza, Great Pizza offline?</li>
<p>Yes, you can play Good Pizza, Great Pizza without an internet connection. However, you will need an internet connection to access some features, such as events, challenges, leaderboards, and cloud saves.</p>
</ol>
spaces/Benson/text-generation/Examples/Carreras De Coches Juego De Descarga Apk.md
DELETED
@@ -1,77 +0,0 @@

<h1>Racing in Car Game APK Download: A Guide for Car Racing Enthusiasts</h1>
<p>If you like car racing games, you may want to try Racing in Car, a realistic and immersive driving simulator that lets you experience the thrill of racing on the road. In this game you can choose from different cars, customize them, and drive them on various tracks with different weather and traffic conditions. You can also compete with other players online and see how you rank on the leaderboards.</p>
<h2>carreras de coches juego de descarga apk</h2><br /><p><b><b>Download</b> ⇒ <a href="https://bltlly.com/2v6JHh">https://bltlly.com/2v6JHh</a></b></p><br /><br />
<p>But how do you download and install the Racing in Car game APK on your Android device? What are the features and benefits of this game? What are some tips and tricks for playing it? And what are some alternatives if you want to try something different?</p>
<p>In this article we will answer all of these questions and more. We will walk you through downloading and installing the APK, show you the game's features and benefits, give you some tips and tricks, and suggest some alternatives. By the end of this article you will be ready to enjoy the Racing in Car game APK on your Android device.</p>
<h2>How to Download and Install the Racing in Car Game APK on Your Android Device</h2>
<p>Downloading and installing the APK on your Android device is quick and easy. Just follow these simple steps:</p>
<h3>Step 1: Find a reliable source for the APK file</h3>
<p>An APK file is an Android application package that contains all the files and data needed to run an app on your device. You can find many sources of APK files online, but not all of them are trustworthy. Some may contain viruses or malware that can damage your device or steal your personal information. Therefore, always be careful when downloading APK files from unknown sources.</p>
<h3>Step 2: Enable Unknown Sources in your device settings</h3>
<p>Before you can install an APK file on your device, you need to enable Unknown Sources in your device settings. This is because APK files do not come from the official Google Play Store, and your device may block them by default. To enable Unknown Sources, follow these steps:</p>
<ul>
<li>Go to your device settings and tap Security or Privacy.</li>
<li>Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.</li>
<li>A warning message may appear, telling you that installing apps from unknown sources can harm your device. Tap OK or Allow to proceed.</li>
</ul>
<h3>Step 3: Download and install the APK file</h3>
<p>Now that you have enabled Unknown Sources, you can download and install the APK file. To do this, follow these steps:</p>
<ul>
<li>Go to the website where you found the APK file and tap it to start the download.</li>
<li>Once the download is complete, open the file manager app on your device and find the APK file in the Downloads folder.</li>
<li>Tap the APK file to start the installation. You may need to grant some permissions to the app, such as access to your storage, camera, or microphone. Tap Install or Next to continue.</li>
<li>Wait for the installation to finish. You may see a message that says App installed or Done. Tap Open or Launch to start the app.</li>
</ul>
<p>Congratulations! You have successfully downloaded and installed the Racing in Car game APK on your Android device. Now you can enjoy playing this game and feel the adrenaline of racing on the road.</p>
<h2>Features and Benefits of the Racing in Car Game APK</h2>
<p>Racing in Car is not just another car racing game. It offers many features and benefits that make it stand out from similar games. Here are some of them:</p>
<h3>Feature 1: Realistic 3D graphics and physics</h3>
<h3>Feature 2: Multiple cars and tracks to choose from</h3>
<p>The game gives you a variety of cars and tracks to choose from. You can pick from different types of cars, such as sports cars, muscle cars, and SUVs, and customize them with different colors, wheels, and stickers. You can also choose from different tracks, such as city streets, highways, deserts, and mountains, and experience different weather and traffic conditions: sunny, rainy, foggy, night, or day. New cars and tracks unlock as you progress through the game.</p>
<h3>Feature 3: Easy and intuitive controls</h3>
<p>The game has easy and intuitive controls that make it fun and simple to play. You can choose between two steering options: tilt or touch. With tilt, you tilt your device left or right to steer your car; with touch, you tap the left or right side of the screen. You can also use the on-screen throttle and brake pedals to speed up or slow down, and change the control sensitivity in the settings menu.</p>
<h3>Feature 4: Endless mode and leaderboards</h3>
<p>The game has an endless mode that lets you drive as long as possible without crashing into other cars or obstacles. The longer you drive, the more points you earn. You can also compete with other players online and see how you rank on the leaderboards, comparing your scores with friends or with players around the world. You can also challenge yourself to beat your own high score or achieve other in-game goals.</p>
<h2>Tips and Tricks for Playing the Racing in Car Game APK</h2>
<p>Racing in Car is a game that requires skill and strategy to play well. Here are some tips and tricks that can help you improve your performance and enjoy the game more:</p>
<h3>Tip 1: Use the tilt or touch option to steer your car</h3>
<h3>Tip 2: Overtake other cars as closely as possible to earn bonus points</h3>
<p>Overtaking other cars is not only fun but also rewarding. The closer you get to another car while passing it, the more bonus points you earn; the bonus is shown on screen as you overtake. However, be careful not to crash into other cars or obstacles, since that ends your run and costs you your points. You should also avoid driving in the opposite lane, because it increases the risk of collision and lowers your score.</p>
<h3>Tip 3: Avoid crashing into other cars or obstacles</h3>
<p>Crashing into other cars or obstacles is the worst thing that can happen in the game. It ends your run, costs you your points, damages your car, and reduces its performance. Always try to avoid hitting anything on the road, such as other cars, trucks, buses, barriers, or cones. Also pay attention to traffic signs and signals, such as speed limits, stop signs, and red lights, since they can indicate hazards or changes in road conditions.</p>
<h3>Tip 4: Upgrade your car to improve its performance</h3>
<p>Upgrading your car is one of the best ways to improve your performance and enjoy the game more. You can upgrade it with different parts and accessories, such as the engine, turbo, brakes, tires, and suspension, and change its appearance with different colors, wheels, and stickers. Upgrades increase your speed, acceleration, handling, and braking, making it easier to overtake other cars and avoid crashes. You can buy upgrades with the coins you earn from playing the game or from watching ads.</p>
<h2>Alternatives to the Racing in Car Game APK</h2>
<h3>Alternative 1: Asphalt 8 - Car Racing Game</h3>
<p>Asphalt 8 is one of the most popular and acclaimed car racing games on Android. It features more than 300 high-performance cars and bikes from top manufacturers such as Ferrari, Lamborghini, McLaren, and Bugatti, which you can drive on more than 50 stunning tracks around the world. You can also perform incredible stunts and aerial maneuvers with your vehicle, such as barrel rolls, flips, and jumps, and use nitro boosts for speed. You can play solo or multiplayer and compete with other players online or offline.</p>
<h3>Alternative 2: Real Racing 3</h3>
<p>Real Racing 3 is another realistic and immersive car racing game on Android. It features more than 250 authentic cars from top brands such as Ford, Aston Martin, and Porsche, which you can drive on more than 40 real tracks from famous locations around the world. You can customize your car with different paints, vinyls, and rims, and upgrade it with different parts and components. You can play solo or multiplayer modes and compete against real players or AI-controlled opponents.</p>
<h3>Alternative 3: CSR Racing 2</h3>
<p>CSR Racing 2 is a drag racing game on Android that lets you build and race your dream car. It features more than 200 licensed cars from top manufacturers such as Ferrari, Lamborghini, and Bugatti, which you can customize and tune with a range of options and features. You can also join a crew and compete with other players in different events and challenges, and race against real-life drag racing legends and celebrities such as Snoop Dogg and Lewis Hamilton.</p>
<h2>Conclusion and FAQs</h2>
<p>If you are a car racing enthusiast, you should definitely give the Racing in Car game APK a try. You won't regret it. And if you want to try something different, you can also check out the alternatives we have suggested in this article.</p>
<p>Here are some frequently asked questions about the Racing in Car game APK:</p>
<h3>FAQ 1: Is the Racing in Car game APK safe to download and install?</h3>
<p>Yes, the APK is safe to download and install as long as you get it from a reliable source such as APKPure. You should also enable Unknown Sources in your device settings before installing it, since this allows you to install apps that do not come from the official Google Play Store. However, always be careful when downloading APK files from unknown sources, since some of them may contain viruses or malware that can damage your device or steal your personal information.</p>
<h3>FAQ 2: How much space does the Racing in Car game APK require on your device?</h3>
<p>The game requires about 60 MB of space on your device, though this may vary depending on your device model and the version of the game. Make sure you have enough free space on your device before downloading and installing the game, since this prevents errors or failures during installation.</p>
<h3>FAQ 3: Can you play the Racing in Car game APK offline?</h3>
<p>Yes, you can play the game offline, since it does not require an internet connection to run. However, you will not be able to access some features, such as online leaderboards and multiplayer modes, while you are offline, nor update the game or get new cars and tracks. It is therefore recommended to connect to the internet from time to time to enjoy all of the game's features.</p>
<h3>FAQ 4: How can you contact the developer of the Racing in Car game APK?</h3>
<h3>FAQ 5: What are some other games similar to the Racing in Car game APK?</h3>
<p>Some other games similar to Racing in Car are:</p>
<ul>
<li>Racing Fever: Moto - a motorcycle racing game that lets you ride on different roads with different modes and challenges.</li>
<li>Traffic Racer - a car racing game that lets you drive through traffic with different cars and environments.</li>
<li>Need for Speed No Limits - a car racing game that lets you build and race your dream car, with various customization options and events.</li>
</ul>
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/factory.py
DELETED
@@ -1,730 +0,0 @@
import contextlib
import functools
import logging
from typing import (
    TYPE_CHECKING,
    Dict,
    FrozenSet,
    Iterable,
    Iterator,
    List,
    Mapping,
    NamedTuple,
    Optional,
    Sequence,
    Set,
    Tuple,
    TypeVar,
    cast,
)

from pip._vendor.packaging.requirements import InvalidRequirement
from pip._vendor.packaging.specifiers import SpecifierSet
from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
from pip._vendor.resolvelib import ResolutionImpossible

from pip._internal.cache import CacheEntry, WheelCache
from pip._internal.exceptions import (
    DistributionNotFound,
    InstallationError,
    MetadataInconsistent,
    UnsupportedPythonVersion,
    UnsupportedWheel,
)
from pip._internal.index.package_finder import PackageFinder
from pip._internal.metadata import BaseDistribution, get_default_environment
from pip._internal.models.link import Link
from pip._internal.models.wheel import Wheel
from pip._internal.operations.prepare import RequirementPreparer
from pip._internal.req.constructors import install_req_from_link_and_ireq
from pip._internal.req.req_install import (
    InstallRequirement,
    check_invalid_constraint_type,
)
from pip._internal.resolution.base import InstallRequirementProvider
from pip._internal.utils.compatibility_tags import get_supported
from pip._internal.utils.hashes import Hashes
from pip._internal.utils.packaging import get_requirement
from pip._internal.utils.virtualenv import running_under_virtualenv

from .base import Candidate, CandidateVersion, Constraint, Requirement
from .candidates import (
    AlreadyInstalledCandidate,
    BaseCandidate,
    EditableCandidate,
    ExtrasCandidate,
    LinkCandidate,
    RequiresPythonCandidate,
    as_base_candidate,
)
from .found_candidates import FoundCandidates, IndexCandidateInfo
from .requirements import (
    ExplicitRequirement,
    RequiresPythonRequirement,
    SpecifierRequirement,
    UnsatisfiableRequirement,
)

if TYPE_CHECKING:
    from typing import Protocol

    class ConflictCause(Protocol):
        requirement: RequiresPythonRequirement
        parent: Candidate


logger = logging.getLogger(__name__)

C = TypeVar("C")
Cache = Dict[Link, C]


class CollectedRootRequirements(NamedTuple):
    requirements: List[Requirement]
    constraints: Dict[str, Constraint]
    user_requested: Dict[str, int]


class Factory:
    def __init__(
        self,
        finder: PackageFinder,
        preparer: RequirementPreparer,
        make_install_req: InstallRequirementProvider,
        wheel_cache: Optional[WheelCache],
        use_user_site: bool,
        force_reinstall: bool,
        ignore_installed: bool,
        ignore_requires_python: bool,
        py_version_info: Optional[Tuple[int, ...]] = None,
    ) -> None:
        self._finder = finder
        self.preparer = preparer
        self._wheel_cache = wheel_cache
        self._python_candidate = RequiresPythonCandidate(py_version_info)
        self._make_install_req_from_spec = make_install_req
        self._use_user_site = use_user_site
        self._force_reinstall = force_reinstall
        self._ignore_requires_python = ignore_requires_python

        self._build_failures: Cache[InstallationError] = {}
        self._link_candidate_cache: Cache[LinkCandidate] = {}
        self._editable_candidate_cache: Cache[EditableCandidate] = {}
        self._installed_candidate_cache: Dict[str, AlreadyInstalledCandidate] = {}
        self._extras_candidate_cache: Dict[
            Tuple[int, FrozenSet[str]], ExtrasCandidate
        ] = {}

        if not ignore_installed:
            env = get_default_environment()
            self._installed_dists = {
                dist.canonical_name: dist
                for dist in env.iter_installed_distributions(local_only=False)
            }
        else:
            self._installed_dists = {}

    @property
    def force_reinstall(self) -> bool:
        return self._force_reinstall

    def _fail_if_link_is_unsupported_wheel(self, link: Link) -> None:
        if not link.is_wheel:
            return
        wheel = Wheel(link.filename)
        if wheel.supported(self._finder.target_python.get_tags()):
            return
        msg = f"{link.filename} is not a supported wheel on this platform."
        raise UnsupportedWheel(msg)

    def _make_extras_candidate(
|
141 |
-
self, base: BaseCandidate, extras: FrozenSet[str]
|
142 |
-
) -> ExtrasCandidate:
|
143 |
-
cache_key = (id(base), extras)
|
144 |
-
try:
|
145 |
-
candidate = self._extras_candidate_cache[cache_key]
|
146 |
-
except KeyError:
|
147 |
-
candidate = ExtrasCandidate(base, extras)
|
148 |
-
self._extras_candidate_cache[cache_key] = candidate
|
149 |
-
return candidate
|
150 |
-
|
151 |
-
def _make_candidate_from_dist(
|
152 |
-
self,
|
153 |
-
dist: BaseDistribution,
|
154 |
-
extras: FrozenSet[str],
|
155 |
-
template: InstallRequirement,
|
156 |
-
) -> Candidate:
|
157 |
-
try:
|
158 |
-
base = self._installed_candidate_cache[dist.canonical_name]
|
159 |
-
except KeyError:
|
160 |
-
base = AlreadyInstalledCandidate(dist, template, factory=self)
|
161 |
-
self._installed_candidate_cache[dist.canonical_name] = base
|
162 |
-
if not extras:
|
163 |
-
return base
|
164 |
-
return self._make_extras_candidate(base, extras)
|
165 |
-
|
166 |
-
def _make_candidate_from_link(
|
167 |
-
self,
|
168 |
-
link: Link,
|
169 |
-
extras: FrozenSet[str],
|
170 |
-
template: InstallRequirement,
|
171 |
-
name: Optional[NormalizedName],
|
172 |
-
version: Optional[CandidateVersion],
|
173 |
-
) -> Optional[Candidate]:
|
174 |
-
# TODO: Check already installed candidate, and use it if the link and
|
175 |
-
# editable flag match.
|
176 |
-
|
177 |
-
if link in self._build_failures:
|
178 |
-
# We already tried this candidate before, and it does not build.
|
179 |
-
# Don't bother trying again.
|
180 |
-
return None
|
181 |
-
|
182 |
-
if template.editable:
|
183 |
-
if link not in self._editable_candidate_cache:
|
184 |
-
try:
|
185 |
-
self._editable_candidate_cache[link] = EditableCandidate(
|
186 |
-
link,
|
187 |
-
template,
|
188 |
-
factory=self,
|
189 |
-
name=name,
|
190 |
-
version=version,
|
191 |
-
)
|
192 |
-
except MetadataInconsistent as e:
|
193 |
-
logger.info(
|
194 |
-
"Discarding [blue underline]%s[/]: [yellow]%s[reset]",
|
195 |
-
link,
|
196 |
-
e,
|
197 |
-
extra={"markup": True},
|
198 |
-
)
|
199 |
-
self._build_failures[link] = e
|
200 |
-
return None
|
201 |
-
|
202 |
-
base: BaseCandidate = self._editable_candidate_cache[link]
|
203 |
-
else:
|
204 |
-
if link not in self._link_candidate_cache:
|
205 |
-
try:
|
206 |
-
self._link_candidate_cache[link] = LinkCandidate(
|
207 |
-
link,
|
208 |
-
template,
|
209 |
-
factory=self,
|
210 |
-
name=name,
|
211 |
-
version=version,
|
212 |
-
)
|
213 |
-
except MetadataInconsistent as e:
|
214 |
-
logger.info(
|
215 |
-
"Discarding [blue underline]%s[/]: [yellow]%s[reset]",
|
216 |
-
link,
|
217 |
-
e,
|
218 |
-
extra={"markup": True},
|
219 |
-
)
|
220 |
-
self._build_failures[link] = e
|
221 |
-
return None
|
222 |
-
base = self._link_candidate_cache[link]
|
223 |
-
|
224 |
-
if not extras:
|
225 |
-
return base
|
226 |
-
return self._make_extras_candidate(base, extras)
|
227 |
-
|
228 |
-
def _iter_found_candidates(
|
229 |
-
self,
|
230 |
-
ireqs: Sequence[InstallRequirement],
|
231 |
-
specifier: SpecifierSet,
|
232 |
-
hashes: Hashes,
|
233 |
-
prefers_installed: bool,
|
234 |
-
incompatible_ids: Set[int],
|
235 |
-
) -> Iterable[Candidate]:
|
236 |
-
if not ireqs:
|
237 |
-
return ()
|
238 |
-
|
239 |
-
# The InstallRequirement implementation requires us to give it a
|
240 |
-
# "template". Here we just choose the first requirement to represent
|
241 |
-
# all of them.
|
242 |
-
# Hopefully the Project model can correct this mismatch in the future.
|
243 |
-
template = ireqs[0]
|
244 |
-
assert template.req, "Candidates found on index must be PEP 508"
|
245 |
-
name = canonicalize_name(template.req.name)
|
246 |
-
|
247 |
-
extras: FrozenSet[str] = frozenset()
|
248 |
-
for ireq in ireqs:
|
249 |
-
assert ireq.req, "Candidates found on index must be PEP 508"
|
250 |
-
specifier &= ireq.req.specifier
|
251 |
-
hashes &= ireq.hashes(trust_internet=False)
|
252 |
-
extras |= frozenset(ireq.extras)
|
253 |
-
|
254 |
-
def _get_installed_candidate() -> Optional[Candidate]:
|
255 |
-
"""Get the candidate for the currently-installed version."""
|
256 |
-
# If --force-reinstall is set, we want the version from the index
|
257 |
-
# instead, so we "pretend" there is nothing installed.
|
258 |
-
if self._force_reinstall:
|
259 |
-
return None
|
260 |
-
try:
|
261 |
-
installed_dist = self._installed_dists[name]
|
262 |
-
except KeyError:
|
263 |
-
return None
|
264 |
-
# Don't use the installed distribution if its version does not fit
|
265 |
-
# the current dependency graph.
|
266 |
-
if not specifier.contains(installed_dist.version, prereleases=True):
|
267 |
-
return None
|
268 |
-
candidate = self._make_candidate_from_dist(
|
269 |
-
dist=installed_dist,
|
270 |
-
extras=extras,
|
271 |
-
template=template,
|
272 |
-
)
|
273 |
-
# The candidate is a known incompatibility. Don't use it.
|
274 |
-
if id(candidate) in incompatible_ids:
|
275 |
-
return None
|
276 |
-
return candidate
|
277 |
-
|
278 |
-
def iter_index_candidate_infos() -> Iterator[IndexCandidateInfo]:
|
279 |
-
result = self._finder.find_best_candidate(
|
280 |
-
project_name=name,
|
281 |
-
specifier=specifier,
|
282 |
-
hashes=hashes,
|
283 |
-
)
|
284 |
-
icans = list(result.iter_applicable())
|
285 |
-
|
286 |
-
# PEP 592: Yanked releases are ignored unless the specifier
|
287 |
-
# explicitly pins a version (via '==' or '===') that can be
|
288 |
-
# solely satisfied by a yanked release.
|
289 |
-
all_yanked = all(ican.link.is_yanked for ican in icans)
|
290 |
-
|
291 |
-
def is_pinned(specifier: SpecifierSet) -> bool:
|
292 |
-
for sp in specifier:
|
293 |
-
if sp.operator == "===":
|
294 |
-
return True
|
295 |
-
if sp.operator != "==":
|
296 |
-
continue
|
297 |
-
if sp.version.endswith(".*"):
|
298 |
-
continue
|
299 |
-
return True
|
300 |
-
return False
|
301 |
-
|
302 |
-
pinned = is_pinned(specifier)
|
303 |
-
|
304 |
-
# PackageFinder returns earlier versions first, so we reverse.
|
305 |
-
for ican in reversed(icans):
|
306 |
-
if not (all_yanked and pinned) and ican.link.is_yanked:
|
307 |
-
continue
|
308 |
-
func = functools.partial(
|
309 |
-
self._make_candidate_from_link,
|
310 |
-
link=ican.link,
|
311 |
-
extras=extras,
|
312 |
-
template=template,
|
313 |
-
name=name,
|
314 |
-
version=ican.version,
|
315 |
-
)
|
316 |
-
yield ican.version, func
|
317 |
-
|
318 |
-
return FoundCandidates(
|
319 |
-
iter_index_candidate_infos,
|
320 |
-
_get_installed_candidate(),
|
321 |
-
prefers_installed,
|
322 |
-
incompatible_ids,
|
323 |
-
)
|
324 |
-
|
325 |
-
def _iter_explicit_candidates_from_base(
|
326 |
-
self,
|
327 |
-
base_requirements: Iterable[Requirement],
|
328 |
-
extras: FrozenSet[str],
|
329 |
-
) -> Iterator[Candidate]:
|
330 |
-
"""Produce explicit candidates from the base given an extra-ed package.
|
331 |
-
|
332 |
-
:param base_requirements: Requirements known to the resolver. The
|
333 |
-
requirements are guaranteed to not have extras.
|
334 |
-
:param extras: The extras to inject into the explicit requirements'
|
335 |
-
candidates.
|
336 |
-
"""
|
337 |
-
for req in base_requirements:
|
338 |
-
lookup_cand, _ = req.get_candidate_lookup()
|
339 |
-
if lookup_cand is None: # Not explicit.
|
340 |
-
continue
|
341 |
-
# We've stripped extras from the identifier, and should always
|
342 |
-
# get a BaseCandidate here, unless there's a bug elsewhere.
|
343 |
-
base_cand = as_base_candidate(lookup_cand)
|
344 |
-
assert base_cand is not None, "no extras here"
|
345 |
-
yield self._make_extras_candidate(base_cand, extras)
|
346 |
-
|
347 |
-
def _iter_candidates_from_constraints(
|
348 |
-
self,
|
349 |
-
identifier: str,
|
350 |
-
constraint: Constraint,
|
351 |
-
template: InstallRequirement,
|
352 |
-
) -> Iterator[Candidate]:
|
353 |
-
"""Produce explicit candidates from constraints.
|
354 |
-
|
355 |
-
This creates "fake" InstallRequirement objects that are basically clones
|
356 |
-
of what "should" be the template, but with original_link set to link.
|
357 |
-
"""
|
358 |
-
for link in constraint.links:
|
359 |
-
self._fail_if_link_is_unsupported_wheel(link)
|
360 |
-
candidate = self._make_candidate_from_link(
|
361 |
-
link,
|
362 |
-
extras=frozenset(),
|
363 |
-
template=install_req_from_link_and_ireq(link, template),
|
364 |
-
name=canonicalize_name(identifier),
|
365 |
-
version=None,
|
366 |
-
)
|
367 |
-
if candidate:
|
368 |
-
yield candidate
|
369 |
-
|
370 |
-
def find_candidates(
|
371 |
-
self,
|
372 |
-
identifier: str,
|
373 |
-
requirements: Mapping[str, Iterable[Requirement]],
|
374 |
-
incompatibilities: Mapping[str, Iterator[Candidate]],
|
375 |
-
constraint: Constraint,
|
376 |
-
prefers_installed: bool,
|
377 |
-
) -> Iterable[Candidate]:
|
378 |
-
# Collect basic lookup information from the requirements.
|
379 |
-
explicit_candidates: Set[Candidate] = set()
|
380 |
-
ireqs: List[InstallRequirement] = []
|
381 |
-
for req in requirements[identifier]:
|
382 |
-
cand, ireq = req.get_candidate_lookup()
|
383 |
-
if cand is not None:
|
384 |
-
explicit_candidates.add(cand)
|
385 |
-
if ireq is not None:
|
386 |
-
ireqs.append(ireq)
|
387 |
-
|
388 |
-
# If the current identifier contains extras, add explicit candidates
|
389 |
-
# from entries from extra-less identifier.
|
390 |
-
with contextlib.suppress(InvalidRequirement):
|
391 |
-
parsed_requirement = get_requirement(identifier)
|
392 |
-
explicit_candidates.update(
|
393 |
-
self._iter_explicit_candidates_from_base(
|
394 |
-
requirements.get(parsed_requirement.name, ()),
|
395 |
-
frozenset(parsed_requirement.extras),
|
396 |
-
),
|
397 |
-
)
|
398 |
-
|
399 |
-
# Add explicit candidates from constraints. We only do this if there are
|
400 |
-
# known ireqs, which represent requirements not already explicit. If
|
401 |
-
# there are no ireqs, we're constraining already-explicit requirements,
|
402 |
-
# which is handled later when we return the explicit candidates.
|
403 |
-
if ireqs:
|
404 |
-
try:
|
405 |
-
explicit_candidates.update(
|
406 |
-
self._iter_candidates_from_constraints(
|
407 |
-
identifier,
|
408 |
-
constraint,
|
409 |
-
template=ireqs[0],
|
410 |
-
),
|
411 |
-
)
|
412 |
-
except UnsupportedWheel:
|
413 |
-
# If we're constrained to install a wheel incompatible with the
|
414 |
-
# target architecture, no candidates will ever be valid.
|
415 |
-
return ()
|
416 |
-
|
417 |
-
# Since we cache all the candidates, incompatibility identification
|
418 |
-
# can be made quicker by comparing only the id() values.
|
419 |
-
incompat_ids = {id(c) for c in incompatibilities.get(identifier, ())}
|
420 |
-
|
421 |
-
# If none of the requirements want an explicit candidate, we can ask
|
422 |
-
# the finder for candidates.
|
423 |
-
if not explicit_candidates:
|
424 |
-
return self._iter_found_candidates(
|
425 |
-
ireqs,
|
426 |
-
constraint.specifier,
|
427 |
-
constraint.hashes,
|
428 |
-
prefers_installed,
|
429 |
-
incompat_ids,
|
430 |
-
)
|
431 |
-
|
432 |
-
return (
|
433 |
-
c
|
434 |
-
for c in explicit_candidates
|
435 |
-
if id(c) not in incompat_ids
|
436 |
-
and constraint.is_satisfied_by(c)
|
437 |
-
and all(req.is_satisfied_by(c) for req in requirements[identifier])
|
438 |
-
)
|
439 |
-
|
440 |
-
def _make_requirement_from_install_req(
|
441 |
-
self, ireq: InstallRequirement, requested_extras: Iterable[str]
|
442 |
-
) -> Optional[Requirement]:
|
443 |
-
if not ireq.match_markers(requested_extras):
|
444 |
-
logger.info(
|
445 |
-
"Ignoring %s: markers '%s' don't match your environment",
|
446 |
-
ireq.name,
|
447 |
-
ireq.markers,
|
448 |
-
)
|
449 |
-
return None
|
450 |
-
if not ireq.link:
|
451 |
-
return SpecifierRequirement(ireq)
|
452 |
-
self._fail_if_link_is_unsupported_wheel(ireq.link)
|
453 |
-
cand = self._make_candidate_from_link(
|
454 |
-
ireq.link,
|
455 |
-
extras=frozenset(ireq.extras),
|
456 |
-
template=ireq,
|
457 |
-
name=canonicalize_name(ireq.name) if ireq.name else None,
|
458 |
-
version=None,
|
459 |
-
)
|
460 |
-
if cand is None:
|
461 |
-
# There's no way we can satisfy a URL requirement if the underlying
|
462 |
-
# candidate fails to build. An unnamed URL must be user-supplied, so
|
463 |
-
# we fail eagerly. If the URL is named, an unsatisfiable requirement
|
464 |
-
# can make the resolver do the right thing, either backtrack (and
|
465 |
-
# maybe find some other requirement that's buildable) or raise a
|
466 |
-
# ResolutionImpossible eventually.
|
467 |
-
if not ireq.name:
|
468 |
-
raise self._build_failures[ireq.link]
|
469 |
-
return UnsatisfiableRequirement(canonicalize_name(ireq.name))
|
470 |
-
return self.make_requirement_from_candidate(cand)
|
471 |
-
|
472 |
-
def collect_root_requirements(
|
473 |
-
self, root_ireqs: List[InstallRequirement]
|
474 |
-
) -> CollectedRootRequirements:
|
475 |
-
collected = CollectedRootRequirements([], {}, {})
|
476 |
-
for i, ireq in enumerate(root_ireqs):
|
477 |
-
if ireq.constraint:
|
478 |
-
# Ensure we only accept valid constraints
|
479 |
-
problem = check_invalid_constraint_type(ireq)
|
480 |
-
if problem:
|
481 |
-
raise InstallationError(problem)
|
482 |
-
if not ireq.match_markers():
|
483 |
-
continue
|
484 |
-
assert ireq.name, "Constraint must be named"
|
485 |
-
name = canonicalize_name(ireq.name)
|
486 |
-
if name in collected.constraints:
|
487 |
-
collected.constraints[name] &= ireq
|
488 |
-
else:
|
489 |
-
collected.constraints[name] = Constraint.from_ireq(ireq)
|
490 |
-
else:
|
491 |
-
req = self._make_requirement_from_install_req(
|
492 |
-
ireq,
|
493 |
-
requested_extras=(),
|
494 |
-
)
|
495 |
-
if req is None:
|
496 |
-
continue
|
497 |
-
if ireq.user_supplied and req.name not in collected.user_requested:
|
498 |
-
collected.user_requested[req.name] = i
|
499 |
-
collected.requirements.append(req)
|
500 |
-
return collected
|
501 |
-
|
502 |
-
def make_requirement_from_candidate(
|
503 |
-
self, candidate: Candidate
|
504 |
-
) -> ExplicitRequirement:
|
505 |
-
return ExplicitRequirement(candidate)
|
506 |
-
|
507 |
-
def make_requirement_from_spec(
|
508 |
-
self,
|
509 |
-
specifier: str,
|
510 |
-
comes_from: Optional[InstallRequirement],
|
511 |
-
requested_extras: Iterable[str] = (),
|
512 |
-
) -> Optional[Requirement]:
|
513 |
-
ireq = self._make_install_req_from_spec(specifier, comes_from)
|
514 |
-
return self._make_requirement_from_install_req(ireq, requested_extras)
|
515 |
-
|
516 |
-
def make_requires_python_requirement(
|
517 |
-
self,
|
518 |
-
specifier: SpecifierSet,
|
519 |
-
) -> Optional[Requirement]:
|
520 |
-
if self._ignore_requires_python:
|
521 |
-
return None
|
522 |
-
# Don't bother creating a dependency for an empty Requires-Python.
|
523 |
-
if not str(specifier):
|
524 |
-
return None
|
525 |
-
return RequiresPythonRequirement(specifier, self._python_candidate)
|
526 |
-
|
527 |
-
def get_wheel_cache_entry(
|
528 |
-
self, link: Link, name: Optional[str]
|
529 |
-
) -> Optional[CacheEntry]:
|
530 |
-
"""Look up the link in the wheel cache.
|
531 |
-
|
532 |
-
If ``preparer.require_hashes`` is True, don't use the wheel cache,
|
533 |
-
because cached wheels, always built locally, have different hashes
|
534 |
-
than the files downloaded from the index server and thus throw false
|
535 |
-
hash mismatches. Furthermore, cached wheels at present have
|
536 |
-
nondeterministic contents due to file modification times.
|
537 |
-
"""
|
538 |
-
if self._wheel_cache is None:
|
539 |
-
return None
|
540 |
-
return self._wheel_cache.get_cache_entry(
|
541 |
-
link=link,
|
542 |
-
package_name=name,
|
543 |
-
supported_tags=get_supported(),
|
544 |
-
)
|
545 |
-
|
546 |
-
def get_dist_to_uninstall(self, candidate: Candidate) -> Optional[BaseDistribution]:
|
547 |
-
# TODO: Are there more cases this needs to return True? Editable?
|
548 |
-
dist = self._installed_dists.get(candidate.project_name)
|
549 |
-
if dist is None: # Not installed, no uninstallation required.
|
550 |
-
return None
|
551 |
-
|
552 |
-
# We're installing into global site. The current installation must
|
553 |
-
# be uninstalled, no matter it's in global or user site, because the
|
554 |
-
# user site installation has precedence over global.
|
555 |
-
if not self._use_user_site:
|
556 |
-
return dist
|
557 |
-
|
558 |
-
# We're installing into user site. Remove the user site installation.
|
559 |
-
if dist.in_usersite:
|
560 |
-
return dist
|
561 |
-
|
562 |
-
# We're installing into user site, but the installed incompatible
|
563 |
-
# package is in global site. We can't uninstall that, and would let
|
564 |
-
# the new user installation to "shadow" it. But shadowing won't work
|
565 |
-
# in virtual environments, so we error out.
|
566 |
-
if running_under_virtualenv() and dist.in_site_packages:
|
567 |
-
message = (
|
568 |
-
f"Will not install to the user site because it will lack "
|
569 |
-
f"sys.path precedence to {dist.raw_name} in {dist.location}"
|
570 |
-
)
|
571 |
-
raise InstallationError(message)
|
572 |
-
return None
|
573 |
-
|
574 |
-
def _report_requires_python_error(
|
575 |
-
self, causes: Sequence["ConflictCause"]
|
576 |
-
) -> UnsupportedPythonVersion:
|
577 |
-
assert causes, "Requires-Python error reported with no cause"
|
578 |
-
|
579 |
-
version = self._python_candidate.version
|
580 |
-
|
581 |
-
if len(causes) == 1:
|
582 |
-
specifier = str(causes[0].requirement.specifier)
|
583 |
-
message = (
|
584 |
-
f"Package {causes[0].parent.name!r} requires a different "
|
585 |
-
f"Python: {version} not in {specifier!r}"
|
586 |
-
)
|
587 |
-
return UnsupportedPythonVersion(message)
|
588 |
-
|
589 |
-
message = f"Packages require a different Python. {version} not in:"
|
590 |
-
for cause in causes:
|
591 |
-
package = cause.parent.format_for_error()
|
592 |
-
specifier = str(cause.requirement.specifier)
|
593 |
-
message += f"\n{specifier!r} (required by {package})"
|
594 |
-
return UnsupportedPythonVersion(message)
|
595 |
-
|
596 |
-
def _report_single_requirement_conflict(
|
597 |
-
self, req: Requirement, parent: Optional[Candidate]
|
598 |
-
) -> DistributionNotFound:
|
599 |
-
if parent is None:
|
600 |
-
req_disp = str(req)
|
601 |
-
else:
|
602 |
-
req_disp = f"{req} (from {parent.name})"
|
603 |
-
|
604 |
-
cands = self._finder.find_all_candidates(req.project_name)
|
605 |
-
skipped_by_requires_python = self._finder.requires_python_skipped_reasons()
|
606 |
-
versions = [str(v) for v in sorted({c.version for c in cands})]
|
607 |
-
|
608 |
-
if skipped_by_requires_python:
|
609 |
-
logger.critical(
|
610 |
-
"Ignored the following versions that require a different python "
|
611 |
-
"version: %s",
|
612 |
-
"; ".join(skipped_by_requires_python) or "none",
|
613 |
-
)
|
614 |
-
logger.critical(
|
615 |
-
"Could not find a version that satisfies the requirement %s "
|
616 |
-
"(from versions: %s)",
|
617 |
-
req_disp,
|
618 |
-
", ".join(versions) or "none",
|
619 |
-
)
|
620 |
-
if str(req) == "requirements.txt":
|
621 |
-
logger.info(
|
622 |
-
"HINT: You are attempting to install a package literally "
|
623 |
-
'named "requirements.txt" (which cannot exist). Consider '
|
624 |
-
"using the '-r' flag to install the packages listed in "
|
625 |
-
"requirements.txt"
|
626 |
-
)
|
627 |
-
|
628 |
-
return DistributionNotFound(f"No matching distribution found for {req}")
|
629 |
-
|
630 |
-
def get_installation_error(
|
631 |
-
self,
|
632 |
-
e: "ResolutionImpossible[Requirement, Candidate]",
|
633 |
-
constraints: Dict[str, Constraint],
|
634 |
-
) -> InstallationError:
|
635 |
-
assert e.causes, "Installation error reported with no cause"
|
636 |
-
|
637 |
-
# If one of the things we can't solve is "we need Python X.Y",
|
638 |
-
# that is what we report.
|
639 |
-
requires_python_causes = [
|
640 |
-
cause
|
641 |
-
for cause in e.causes
|
642 |
-
if isinstance(cause.requirement, RequiresPythonRequirement)
|
643 |
-
and not cause.requirement.is_satisfied_by(self._python_candidate)
|
644 |
-
]
|
645 |
-
if requires_python_causes:
|
646 |
-
# The comprehension above makes sure all Requirement instances are
|
647 |
-
# RequiresPythonRequirement, so let's cast for convenience.
|
648 |
-
return self._report_requires_python_error(
|
649 |
-
cast("Sequence[ConflictCause]", requires_python_causes),
|
650 |
-
)
|
651 |
-
|
652 |
-
# Otherwise, we have a set of causes which can't all be satisfied
|
653 |
-
# at once.
|
654 |
-
|
655 |
-
# The simplest case is when we have *one* cause that can't be
|
656 |
-
# satisfied. We just report that case.
|
657 |
-
if len(e.causes) == 1:
|
658 |
-
req, parent = e.causes[0]
|
659 |
-
if req.name not in constraints:
|
660 |
-
return self._report_single_requirement_conflict(req, parent)
|
661 |
-
|
662 |
-
# OK, we now have a list of requirements that can't all be
|
663 |
-
# satisfied at once.
|
664 |
-
|
665 |
-
# A couple of formatting helpers
|
666 |
-
def text_join(parts: List[str]) -> str:
|
667 |
-
if len(parts) == 1:
|
668 |
-
return parts[0]
|
669 |
-
|
670 |
-
return ", ".join(parts[:-1]) + " and " + parts[-1]
|
671 |
-
|
672 |
-
def describe_trigger(parent: Candidate) -> str:
|
673 |
-
ireq = parent.get_install_requirement()
|
674 |
-
if not ireq or not ireq.comes_from:
|
675 |
-
return f"{parent.name}=={parent.version}"
|
676 |
-
if isinstance(ireq.comes_from, InstallRequirement):
|
677 |
-
return str(ireq.comes_from.name)
|
678 |
-
return str(ireq.comes_from)
|
679 |
-
|
680 |
-
triggers = set()
|
681 |
-
for req, parent in e.causes:
|
682 |
-
if parent is None:
|
683 |
-
# This is a root requirement, so we can report it directly
|
684 |
-
trigger = req.format_for_error()
|
685 |
-
else:
|
686 |
-
trigger = describe_trigger(parent)
|
687 |
-
triggers.add(trigger)
|
688 |
-
|
689 |
-
if triggers:
|
690 |
-
info = text_join(sorted(triggers))
|
691 |
-
else:
|
692 |
-
info = "the requested packages"
|
693 |
-
|
694 |
-
msg = (
|
695 |
-
"Cannot install {} because these package versions "
|
696 |
-
"have conflicting dependencies.".format(info)
|
697 |
-
)
|
698 |
-
logger.critical(msg)
|
699 |
-
msg = "\nThe conflict is caused by:"
|
700 |
-
|
701 |
-
relevant_constraints = set()
|
702 |
-
for req, parent in e.causes:
|
703 |
-
if req.name in constraints:
|
704 |
-
relevant_constraints.add(req.name)
|
705 |
-
msg = msg + "\n "
|
706 |
-
if parent:
|
707 |
-
msg = msg + f"{parent.name} {parent.version} depends on "
|
708 |
-
else:
|
709 |
-
msg = msg + "The user requested "
|
710 |
-
msg = msg + req.format_for_error()
|
711 |
-
for key in relevant_constraints:
|
712 |
-
spec = constraints[key].specifier
|
713 |
-
msg += f"\n The user requested (constraint) {key}{spec}"
|
714 |
-
|
715 |
-
msg = (
|
716 |
-
msg
|
717 |
-
+ "\n\n"
|
718 |
-
+ "To fix this you could try to:\n"
|
719 |
-
+ "1. loosen the range of package versions you've specified\n"
|
720 |
-
+ "2. remove package versions to allow pip attempt to solve "
|
721 |
-
+ "the dependency conflict\n"
|
722 |
-
)
|
723 |
-
|
724 |
-
logger.info(msg)
|
725 |
-
|
726 |
-
return DistributionNotFound(
|
727 |
-
"ResolutionImpossible: for help visit "
|
728 |
-
"https://pip.pypa.io/en/latest/topics/dependency-resolution/"
|
729 |
-
"#dealing-with-dependency-conflicts"
|
730 |
-
)
|
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/metadata.py
DELETED
@@ -1,1076 +0,0 @@
|
|
1 |
-
# -*- coding: utf-8 -*-
|
2 |
-
#
|
3 |
-
# Copyright (C) 2012 The Python Software Foundation.
|
4 |
-
# See LICENSE.txt and CONTRIBUTORS.txt.
|
5 |
-
#
|
6 |
-
"""Implementation of the Metadata for Python packages PEPs.
|
7 |
-
|
8 |
-
Supports all metadata formats (1.0, 1.1, 1.2, 1.3/2.1 and 2.2).
|
9 |
-
"""
|
10 |
-
from __future__ import unicode_literals
|
11 |
-
|
12 |
-
import codecs
|
13 |
-
from email import message_from_file
|
14 |
-
import json
|
15 |
-
import logging
|
16 |
-
import re
|
17 |
-
|
18 |
-
|
19 |
-
from . import DistlibException, __version__
|
20 |
-
from .compat import StringIO, string_types, text_type
|
21 |
-
from .markers import interpret
|
22 |
-
from .util import extract_by_key, get_extras
|
23 |
-
from .version import get_scheme, PEP440_VERSION_RE
|
24 |
-
|
25 |
-
logger = logging.getLogger(__name__)
|
26 |
-
|
27 |
-
|
28 |
-
class MetadataMissingError(DistlibException):
|
29 |
-
"""A required metadata is missing"""
|
30 |
-
|
31 |
-
|
32 |
-
class MetadataConflictError(DistlibException):
|
33 |
-
"""Attempt to read or write metadata fields that are conflictual."""
|
34 |
-
|
35 |
-
|
36 |
-
class MetadataUnrecognizedVersionError(DistlibException):
|
37 |
-
"""Unknown metadata version number."""
|
38 |
-
|
39 |
-
|
40 |
-
class MetadataInvalidError(DistlibException):
|
41 |
-
"""A metadata value is invalid"""
|
42 |
-
|
43 |
-
# public API of this module
|
44 |
-
__all__ = ['Metadata', 'PKG_INFO_ENCODING', 'PKG_INFO_PREFERRED_VERSION']
|
45 |
-
|
46 |
-
# Encoding used for the PKG-INFO files
|
47 |
-
PKG_INFO_ENCODING = 'utf-8'
|
48 |
-
|
49 |
-
# preferred version. Hopefully will be changed
|
50 |
-
# to 1.2 once PEP 345 is supported everywhere
|
51 |
-
PKG_INFO_PREFERRED_VERSION = '1.1'
|
52 |
-
|
53 |
-
_LINE_PREFIX_1_2 = re.compile('\n \\|')
|
54 |
-
_LINE_PREFIX_PRE_1_2 = re.compile('\n ')
|
55 |
-
_241_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'License')

_314_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'License', 'Classifier', 'Download-URL', 'Obsoletes',
               'Provides', 'Requires')

_314_MARKERS = ('Obsoletes', 'Provides', 'Requires', 'Classifier',
                'Download-URL')

_345_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'Maintainer', 'Maintainer-email', 'License',
               'Classifier', 'Download-URL', 'Obsoletes-Dist',
               'Project-URL', 'Provides-Dist', 'Requires-Dist',
               'Requires-Python', 'Requires-External')

_345_MARKERS = ('Provides-Dist', 'Requires-Dist', 'Requires-Python',
                'Obsoletes-Dist', 'Requires-External', 'Maintainer',
                'Maintainer-email', 'Project-URL')

_426_FIELDS = ('Metadata-Version', 'Name', 'Version', 'Platform',
               'Supported-Platform', 'Summary', 'Description',
               'Keywords', 'Home-page', 'Author', 'Author-email',
               'Maintainer', 'Maintainer-email', 'License',
               'Classifier', 'Download-URL', 'Obsoletes-Dist',
               'Project-URL', 'Provides-Dist', 'Requires-Dist',
               'Requires-Python', 'Requires-External', 'Private-Version',
               'Obsoleted-By', 'Setup-Requires-Dist', 'Extension',
               'Provides-Extra')

_426_MARKERS = ('Private-Version', 'Provides-Extra', 'Obsoleted-By',
                'Setup-Requires-Dist', 'Extension')

# See issue #106: Sometimes 'Requires' and 'Provides' occur wrongly in
# the metadata. Include them in the tuple literal below to allow them
# (for now).
# Ditto for Obsoletes - see issue #140.
_566_FIELDS = _426_FIELDS + ('Description-Content-Type',
                             'Requires', 'Provides', 'Obsoletes')

_566_MARKERS = ('Description-Content-Type',)

_643_MARKERS = ('Dynamic', 'License-File')

_643_FIELDS = _566_FIELDS + _643_MARKERS

_ALL_FIELDS = set()
_ALL_FIELDS.update(_241_FIELDS)
_ALL_FIELDS.update(_314_FIELDS)
_ALL_FIELDS.update(_345_FIELDS)
_ALL_FIELDS.update(_426_FIELDS)
_ALL_FIELDS.update(_566_FIELDS)
_ALL_FIELDS.update(_643_FIELDS)

EXTRA_RE = re.compile(r'''extra\s*==\s*("([^"]+)"|'([^']+)')''')

def _version2fieldlist(version):
    if version == '1.0':
        return _241_FIELDS
    elif version == '1.1':
        return _314_FIELDS
    elif version == '1.2':
        return _345_FIELDS
    elif version in ('1.3', '2.1'):
        # avoid adding field names if already there
        return _345_FIELDS + tuple(f for f in _566_FIELDS if f not in _345_FIELDS)
    elif version == '2.0':
        raise ValueError('Metadata 2.0 is withdrawn and not supported')
        # return _426_FIELDS
    elif version == '2.2':
        return _643_FIELDS
    raise MetadataUnrecognizedVersionError(version)

def _best_version(fields):
    """Detect the best version depending on the fields used."""
    def _has_marker(keys, markers):
        for marker in markers:
            if marker in keys:
                return True
        return False

    keys = []
    for key, value in fields.items():
        if value in ([], 'UNKNOWN', None):
            continue
        keys.append(key)

    possible_versions = ['1.0', '1.1', '1.2', '1.3', '2.1', '2.2']  # 2.0 removed

    # first let's try to see if a field is not part of one of the version
    for key in keys:
        if key not in _241_FIELDS and '1.0' in possible_versions:
            possible_versions.remove('1.0')
            logger.debug('Removed 1.0 due to %s', key)
        if key not in _314_FIELDS and '1.1' in possible_versions:
            possible_versions.remove('1.1')
            logger.debug('Removed 1.1 due to %s', key)
        if key not in _345_FIELDS and '1.2' in possible_versions:
            possible_versions.remove('1.2')
            logger.debug('Removed 1.2 due to %s', key)
        if key not in _566_FIELDS and '1.3' in possible_versions:
            possible_versions.remove('1.3')
            logger.debug('Removed 1.3 due to %s', key)
        if key not in _566_FIELDS and '2.1' in possible_versions:
            if key != 'Description':  # In 2.1, description allowed after headers
                possible_versions.remove('2.1')
                logger.debug('Removed 2.1 due to %s', key)
        if key not in _643_FIELDS and '2.2' in possible_versions:
            possible_versions.remove('2.2')
            logger.debug('Removed 2.2 due to %s', key)
        # if key not in _426_FIELDS and '2.0' in possible_versions:
        #     possible_versions.remove('2.0')
        #     logger.debug('Removed 2.0 due to %s', key)

    # possible_version contains qualified versions
    if len(possible_versions) == 1:
        return possible_versions[0]   # found !
    elif len(possible_versions) == 0:
        logger.debug('Out of options - unknown metadata set: %s', fields)
        raise MetadataConflictError('Unknown metadata set')

    # let's see if one unique marker is found
    is_1_1 = '1.1' in possible_versions and _has_marker(keys, _314_MARKERS)
    is_1_2 = '1.2' in possible_versions and _has_marker(keys, _345_MARKERS)
    is_2_1 = '2.1' in possible_versions and _has_marker(keys, _566_MARKERS)
    # is_2_0 = '2.0' in possible_versions and _has_marker(keys, _426_MARKERS)
    is_2_2 = '2.2' in possible_versions and _has_marker(keys, _643_MARKERS)
    if int(is_1_1) + int(is_1_2) + int(is_2_1) + int(is_2_2) > 1:
        raise MetadataConflictError('You used incompatible 1.1/1.2/2.1/2.2 fields')

    # we have the choice, 1.0, or 1.2, 2.1 or 2.2
    #   - 1.0 has a broken Summary field but works with all tools
    #   - 1.1 is to avoid
    #   - 1.2 fixes Summary but has little adoption
    #   - 2.1 adds more features
    #   - 2.2 is the latest
    if not is_1_1 and not is_1_2 and not is_2_1 and not is_2_2:
        # we couldn't find any specific marker
        if PKG_INFO_PREFERRED_VERSION in possible_versions:
            return PKG_INFO_PREFERRED_VERSION
    if is_1_1:
        return '1.1'
    if is_1_2:
        return '1.2'
    if is_2_1:
        return '2.1'
    # if is_2_2:
    #     return '2.2'

    return '2.2'

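The elimination strategy in `_best_version` can be exercised on its own: start from every candidate metadata version, drop those whose field set does not cover the keys actually present, then use version-specific "marker" fields to break ties. A minimal, self-contained sketch of that idea (the tiny field tables and `detect_version` helper here are illustrative, not the module's real ones):

```python
# Toy field tables standing in for _241_FIELDS/_643_FIELDS etc.
V1_FIELDS = {'Name', 'Version', 'Summary'}
V2_FIELDS = V1_FIELDS | {'Requires-Dist', 'Provides-Extra'}
V2_MARKERS = {'Requires-Dist', 'Provides-Extra'}  # fields unique to "2.0"

def detect_version(fields):
    # keep only keys with a meaningful value, as _best_version does
    keys = {k for k, v in fields.items() if v not in ([], None, 'UNKNOWN')}
    candidates = []
    if keys <= V1_FIELDS:
        candidates.append('1.0')
    if keys <= V2_FIELDS:
        candidates.append('2.0')
    if not candidates:
        raise ValueError('unknown metadata set: %r' % sorted(keys))
    # a marker field unique to the newer version forces the newer version
    if '2.0' in candidates and keys & V2_MARKERS:
        return '2.0'
    return candidates[0]

print(detect_version({'Name': 'demo', 'Version': '1.0'}))  # -> 1.0
print(detect_version({'Name': 'demo', 'Version': '1.0',
                      'Requires-Dist': ['x']}))            # -> 2.0
```

The real function carries the same shape, just with six candidate versions and logging of each elimination.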
# This follows the rules about transforming keys as described in
# https://www.python.org/dev/peps/pep-0566/#id17
_ATTR2FIELD = {
    name.lower().replace("-", "_"): name for name in _ALL_FIELDS
}
_FIELD2ATTR = {field: attr for attr, field in _ATTR2FIELD.items()}

_PREDICATE_FIELDS = ('Requires-Dist', 'Obsoletes-Dist', 'Provides-Dist')
_VERSIONS_FIELDS = ('Requires-Python',)
_VERSION_FIELDS = ('Version',)
_LISTFIELDS = ('Platform', 'Classifier', 'Obsoletes',
               'Requires', 'Provides', 'Obsoletes-Dist',
               'Provides-Dist', 'Requires-Dist', 'Requires-External',
               'Project-URL', 'Supported-Platform', 'Setup-Requires-Dist',
               'Provides-Extra', 'Extension', 'License-File')
_LISTTUPLEFIELDS = ('Project-URL',)

_ELEMENTSFIELD = ('Keywords',)

_UNICODEFIELDS = ('Author', 'Maintainer', 'Summary', 'Description')

_MISSING = object()

_FILESAFE = re.compile('[^A-Za-z0-9.]+')


def _get_name_and_version(name, version, for_filename=False):
    """Return the distribution name with version.

    If for_filename is true, return a filename-escaped form."""
    if for_filename:
        # For both name and version any runs of non-alphanumeric or '.'
        # characters are replaced with a single '-'.  Additionally any
        # spaces in the version string become '.'
        name = _FILESAFE.sub('-', name)
        version = _FILESAFE.sub('-', version.replace(' ', '.'))
    return '%s-%s' % (name, version)

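The two transforms defined above are small enough to demonstrate in isolation: the PEP 566 key mapping (`lowercase`, `-` to `_`) that builds `_ATTR2FIELD`, and the filename-escaping regex used by `_get_name_and_version`. A quick standalone round-trip (helper names here are for illustration only):

```python
import re

# same pattern as the module's _FILESAFE
_FILESAFE = re.compile('[^A-Za-z0-9.]+')

def attr_name(field):
    # 'Home-page' -> 'home_page', the transform behind _ATTR2FIELD
    return field.lower().replace('-', '_')

def filesafe(name, version):
    # runs of non-alphanumeric/non-'.' characters collapse to '-';
    # spaces in the version become '.'
    name = _FILESAFE.sub('-', name)
    version = _FILESAFE.sub('-', version.replace(' ', '.'))
    return '%s-%s' % (name, version)

print(attr_name('Home-page'))              # -> home_page
print(filesafe('my project', '1.0 beta'))  # -> my-project-1.0.beta
```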
class LegacyMetadata(object):
    """The legacy metadata of a release.

    Supports versions 1.0, 1.1, 1.2, 2.0 and 1.3/2.1 (auto-detected). You can
    instantiate the class with one of these arguments (or none):
    - *path*, the path to a metadata file
    - *fileobj* give a file-like object with metadata as content
    - *mapping* is a dict-like object
    - *scheme* is a version scheme name
    """
    # TODO document the mapping API and UNKNOWN default key

    def __init__(self, path=None, fileobj=None, mapping=None,
                 scheme='default'):
        if [path, fileobj, mapping].count(None) < 2:
            raise TypeError('path, fileobj and mapping are exclusive')
        self._fields = {}
        self.requires_files = []
        self._dependencies = None
        self.scheme = scheme
        if path is not None:
            self.read(path)
        elif fileobj is not None:
            self.read_file(fileobj)
        elif mapping is not None:
            self.update(mapping)
            self.set_metadata_version()

    def set_metadata_version(self):
        self._fields['Metadata-Version'] = _best_version(self._fields)

    def _write_field(self, fileobj, name, value):
        fileobj.write('%s: %s\n' % (name, value))

    def __getitem__(self, name):
        return self.get(name)

    def __setitem__(self, name, value):
        return self.set(name, value)

    def __delitem__(self, name):
        field_name = self._convert_name(name)
        try:
            del self._fields[field_name]
        except KeyError:
            raise KeyError(name)

    def __contains__(self, name):
        return (name in self._fields or
                self._convert_name(name) in self._fields)

    def _convert_name(self, name):
        if name in _ALL_FIELDS:
            return name
        name = name.replace('-', '_').lower()
        return _ATTR2FIELD.get(name, name)

    def _default_value(self, name):
        if name in _LISTFIELDS or name in _ELEMENTSFIELD:
            return []
        return 'UNKNOWN'

    def _remove_line_prefix(self, value):
        if self.metadata_version in ('1.0', '1.1'):
            return _LINE_PREFIX_PRE_1_2.sub('\n', value)
        else:
            return _LINE_PREFIX_1_2.sub('\n', value)

    def __getattr__(self, name):
        if name in _ATTR2FIELD:
            return self[name]
        raise AttributeError(name)

    #
    # Public API
    #

    # dependencies = property(_get_dependencies, _set_dependencies)

    def get_fullname(self, filesafe=False):
        """Return the distribution name with version.

        If filesafe is true, return a filename-escaped form."""
        return _get_name_and_version(self['Name'], self['Version'], filesafe)

    def is_field(self, name):
        """return True if name is a valid metadata key"""
        name = self._convert_name(name)
        return name in _ALL_FIELDS

    def is_multi_field(self, name):
        name = self._convert_name(name)
        return name in _LISTFIELDS

    def read(self, filepath):
        """Read the metadata values from a file path."""
        fp = codecs.open(filepath, 'r', encoding='utf-8')
        try:
            self.read_file(fp)
        finally:
            fp.close()

    def read_file(self, fileob):
        """Read the metadata values from a file object."""
        msg = message_from_file(fileob)
        self._fields['Metadata-Version'] = msg['metadata-version']

        # When reading, get all the fields we can
        for field in _ALL_FIELDS:
            if field not in msg:
                continue
            if field in _LISTFIELDS:
                # we can have multiple lines
                values = msg.get_all(field)
                if field in _LISTTUPLEFIELDS and values is not None:
                    values = [tuple(value.split(',')) for value in values]
                self.set(field, values)
            else:
                # single line
                value = msg[field]
                if value is not None and value != 'UNKNOWN':
                    self.set(field, value)

        # PEP 566 specifies that the body be used for the description, if
        # available
        body = msg.get_payload()
        self["Description"] = body if body else self["Description"]
        # logger.debug('Attempting to set metadata for %s', self)
        # self.set_metadata_version()

    def write(self, filepath, skip_unknown=False):
        """Write the metadata fields to filepath."""
        fp = codecs.open(filepath, 'w', encoding='utf-8')
        try:
            self.write_file(fp, skip_unknown)
        finally:
            fp.close()

    def write_file(self, fileobject, skip_unknown=False):
        """Write the PKG-INFO format data to a file object."""
        self.set_metadata_version()

        for field in _version2fieldlist(self['Metadata-Version']):
            values = self.get(field)
            if skip_unknown and values in ('UNKNOWN', [], ['UNKNOWN']):
                continue
            if field in _ELEMENTSFIELD:
                self._write_field(fileobject, field, ','.join(values))
                continue
            if field not in _LISTFIELDS:
                if field == 'Description':
                    if self.metadata_version in ('1.0', '1.1'):
                        values = values.replace('\n', '\n        ')
                    else:
                        values = values.replace('\n', '\n       |')
                values = [values]

            if field in _LISTTUPLEFIELDS:
                values = [','.join(value) for value in values]

            for value in values:
                self._write_field(fileobject, field, value)

    def update(self, other=None, **kwargs):
        """Set metadata values from the given iterable `other` and kwargs.

        Behavior is like `dict.update`: If `other` has a ``keys`` method,
        they are looped over and ``self[key]`` is assigned ``other[key]``.
        Else, ``other`` is an iterable of ``(key, value)`` iterables.

        Keys that don't match a metadata field or that have an empty value are
        dropped.
        """
        def _set(key, value):
            if key in _ATTR2FIELD and value:
                self.set(self._convert_name(key), value)

        if not other:
            # other is None or empty container
            pass
        elif hasattr(other, 'keys'):
            for k in other.keys():
                _set(k, other[k])
        else:
            for k, v in other:
                _set(k, v)

        if kwargs:
            for k, v in kwargs.items():
                _set(k, v)

    def set(self, name, value):
        """Control then set a metadata field."""
        name = self._convert_name(name)

        if ((name in _ELEMENTSFIELD or name == 'Platform') and
                not isinstance(value, (list, tuple))):
            if isinstance(value, string_types):
                value = [v.strip() for v in value.split(',')]
            else:
                value = []
        elif (name in _LISTFIELDS and
                not isinstance(value, (list, tuple))):
            if isinstance(value, string_types):
                value = [value]
            else:
                value = []

        if logger.isEnabledFor(logging.WARNING):
            project_name = self['Name']

            scheme = get_scheme(self.scheme)
            if name in _PREDICATE_FIELDS and value is not None:
                for v in value:
                    # check that the values are valid
                    if not scheme.is_valid_matcher(v.split(';')[0]):
                        logger.warning(
                            "'%s': '%s' is not valid (field '%s')",
                            project_name, v, name)
            # FIXME this rejects UNKNOWN, is that right?
            elif name in _VERSIONS_FIELDS and value is not None:
                if not scheme.is_valid_constraint_list(value):
                    logger.warning("'%s': '%s' is not a valid version (field '%s')",
                                   project_name, value, name)
            elif name in _VERSION_FIELDS and value is not None:
                if not scheme.is_valid_version(value):
                    logger.warning("'%s': '%s' is not a valid version (field '%s')",
                                   project_name, value, name)

        if name in _UNICODEFIELDS:
            if name == 'Description':
                value = self._remove_line_prefix(value)

        self._fields[name] = value

    def get(self, name, default=_MISSING):
        """Get a metadata field."""
        name = self._convert_name(name)
        if name not in self._fields:
            if default is _MISSING:
                default = self._default_value(name)
            return default
        if name in _UNICODEFIELDS:
            value = self._fields[name]
            return value
        elif name in _LISTFIELDS:
            value = self._fields[name]
            if value is None:
                return []
            res = []
            for val in value:
                if name not in _LISTTUPLEFIELDS:
                    res.append(val)
                else:
                    # That's for Project-URL
                    res.append((val[0], val[1]))
            return res

        elif name in _ELEMENTSFIELD:
            value = self._fields[name]
            if isinstance(value, string_types):
                return value.split(',')
        return self._fields[name]

    def check(self, strict=False):
        """Check if the metadata is compliant. If strict is True then raise if
        no Name or Version are provided"""
        self.set_metadata_version()

        # XXX should check the versions (if the file was loaded)
        missing, warnings = [], []

        for attr in ('Name', 'Version'):  # required by PEP 345
            if attr not in self:
                missing.append(attr)

        if strict and missing != []:
            msg = 'missing required metadata: %s' % ', '.join(missing)
            raise MetadataMissingError(msg)

        for attr in ('Home-page', 'Author'):
            if attr not in self:
                missing.append(attr)

        # checking metadata 1.2 (XXX needs to check 1.1, 1.0)
        if self['Metadata-Version'] != '1.2':
            return missing, warnings

        scheme = get_scheme(self.scheme)

        def are_valid_constraints(value):
            for v in value:
                if not scheme.is_valid_matcher(v.split(';')[0]):
                    return False
            return True

        for fields, controller in ((_PREDICATE_FIELDS, are_valid_constraints),
                                   (_VERSIONS_FIELDS,
                                    scheme.is_valid_constraint_list),
                                   (_VERSION_FIELDS,
                                    scheme.is_valid_version)):
            for field in fields:
                value = self.get(field, None)
                if value is not None and not controller(value):
                    warnings.append("Wrong value for '%s': %s" % (field, value))

        return missing, warnings

    def todict(self, skip_missing=False):
        """Return fields as a dict.

        Field names will be converted to use the underscore-lowercase style
        instead of hyphen-mixed case (i.e. home_page instead of Home-page).
        This is as per https://www.python.org/dev/peps/pep-0566/#id17.
        """
        self.set_metadata_version()

        fields = _version2fieldlist(self['Metadata-Version'])

        data = {}

        for field_name in fields:
            if not skip_missing or field_name in self._fields:
                key = _FIELD2ATTR[field_name]
                if key != 'project_url':
                    data[key] = self[field_name]
                else:
                    data[key] = [','.join(u) for u in self[field_name]]

        return data

    def add_requirements(self, requirements):
        if self['Metadata-Version'] == '1.1':
            # we can't have 1.1 metadata *and* Setuptools requires
            for field in ('Obsoletes', 'Requires', 'Provides'):
                if field in self:
                    del self[field]
        self['Requires-Dist'] += requirements

    # Mapping API
    # TODO could add iter* variants

    def keys(self):
        return list(_version2fieldlist(self['Metadata-Version']))

    def __iter__(self):
        for key in self.keys():
            yield key

    def values(self):
        return [self[key] for key in self.keys()]

    def items(self):
        return [(key, self[key]) for key in self.keys()]

    def __repr__(self):
        return '<%s %s %s>' % (self.__class__.__name__, self.name,
                               self.version)

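`LegacyMetadata.read_file` above leans on the stdlib email parser, since PKG-INFO is an RFC 822-style key/value format in which some fields (the `_LISTFIELDS`) may repeat. That parsing step can be sketched standalone, without distlib (the sample PKG-INFO text here is hypothetical):

```python
from email import message_from_string

# A minimal PKG-INFO document with one repeated field.
PKG_INFO = """\
Metadata-Version: 1.1
Name: demo
Version: 0.1
Classifier: Programming Language :: Python
Classifier: License :: OSI Approved :: MIT License
"""

msg = message_from_string(PKG_INFO)
# single-valued fields come back as plain strings...
print(msg['Name'], msg['Version'])       # -> demo 0.1
# ...while get_all() collects every occurrence of a repeated field,
# which is how read_file populates list-valued metadata
print(len(msg.get_all('Classifier')))    # -> 2
```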
METADATA_FILENAME = 'pydist.json'
WHEEL_METADATA_FILENAME = 'metadata.json'
LEGACY_METADATA_FILENAME = 'METADATA'


class Metadata(object):
    """
    The metadata of a release. This implementation uses 2.1
    metadata where possible. If not possible, it wraps a LegacyMetadata
    instance which handles the key-value metadata format.
    """

    METADATA_VERSION_MATCHER = re.compile(r'^\d+(\.\d+)*$')

    NAME_MATCHER = re.compile('^[0-9A-Z]([0-9A-Z_.-]*[0-9A-Z])?$', re.I)

    FIELDNAME_MATCHER = re.compile('^[A-Z]([0-9A-Z-]*[0-9A-Z])?$', re.I)

    VERSION_MATCHER = PEP440_VERSION_RE

    SUMMARY_MATCHER = re.compile('.{1,2047}')

    METADATA_VERSION = '2.0'

    GENERATOR = 'distlib (%s)' % __version__

    MANDATORY_KEYS = {
        'name': (),
        'version': (),
        'summary': ('legacy',),
    }

    INDEX_KEYS = ('name version license summary description author '
                  'author_email keywords platform home_page classifiers '
                  'download_url')

    DEPENDENCY_KEYS = ('extras run_requires test_requires build_requires '
                       'dev_requires provides meta_requires obsoleted_by '
                       'supports_environments')

    SYNTAX_VALIDATORS = {
        'metadata_version': (METADATA_VERSION_MATCHER, ()),
        'name': (NAME_MATCHER, ('legacy',)),
        'version': (VERSION_MATCHER, ('legacy',)),
        'summary': (SUMMARY_MATCHER, ('legacy',)),
        'dynamic': (FIELDNAME_MATCHER, ('legacy',)),
    }

    __slots__ = ('_legacy', '_data', 'scheme')

    def __init__(self, path=None, fileobj=None, mapping=None,
                 scheme='default'):
        if [path, fileobj, mapping].count(None) < 2:
            raise TypeError('path, fileobj and mapping are exclusive')
        self._legacy = None
        self._data = None
        self.scheme = scheme
        #import pdb; pdb.set_trace()
        if mapping is not None:
            try:
                self._validate_mapping(mapping, scheme)
                self._data = mapping
            except MetadataUnrecognizedVersionError:
                self._legacy = LegacyMetadata(mapping=mapping, scheme=scheme)
                self.validate()
        else:
            data = None
            if path:
                with open(path, 'rb') as f:
                    data = f.read()
            elif fileobj:
                data = fileobj.read()
            if data is None:
                # Initialised with no args - to be added
                self._data = {
                    'metadata_version': self.METADATA_VERSION,
                    'generator': self.GENERATOR,
                }
            else:
                if not isinstance(data, text_type):
                    data = data.decode('utf-8')
                try:
                    self._data = json.loads(data)
                    self._validate_mapping(self._data, scheme)
                except ValueError:
                    # Note: MetadataUnrecognizedVersionError does not
                    # inherit from ValueError (it's a DistlibException,
                    # which should not inherit from ValueError).
                    # The ValueError comes from the json.load - if that
                    # succeeds and we get a validation error, we want
                    # that to propagate
                    self._legacy = LegacyMetadata(fileobj=StringIO(data),
                                                  scheme=scheme)
                    self.validate()

    common_keys = set(('name', 'version', 'license', 'keywords', 'summary'))

    none_list = (None, list)
    none_dict = (None, dict)

    mapped_keys = {
        'run_requires': ('Requires-Dist', list),
        'build_requires': ('Setup-Requires-Dist', list),
        'dev_requires': none_list,
        'test_requires': none_list,
        'meta_requires': none_list,
        'extras': ('Provides-Extra', list),
        'modules': none_list,
        'namespaces': none_list,
        'exports': none_dict,
        'commands': none_dict,
        'classifiers': ('Classifier', list),
        'source_url': ('Download-URL', None),
        'metadata_version': ('Metadata-Version', None),
    }

    del none_list, none_dict

    def __getattribute__(self, key):
        common = object.__getattribute__(self, 'common_keys')
        mapped = object.__getattribute__(self, 'mapped_keys')
        if key in mapped:
            lk, maker = mapped[key]
            if self._legacy:
                if lk is None:
                    result = None if maker is None else maker()
                else:
                    result = self._legacy.get(lk)
            else:
                value = None if maker is None else maker()
                if key not in ('commands', 'exports', 'modules', 'namespaces',
                               'classifiers'):
                    result = self._data.get(key, value)
                else:
                    # special cases for PEP 459
                    sentinel = object()
                    result = sentinel
                    d = self._data.get('extensions')
                    if d:
                        if key == 'commands':
                            result = d.get('python.commands', value)
                        elif key == 'classifiers':
                            d = d.get('python.details')
                            if d:
                                result = d.get(key, value)
                        else:
                            d = d.get('python.exports')
                            if not d:
                                d = self._data.get('python.exports')
                            if d:
                                result = d.get(key, value)
                    if result is sentinel:
                        result = value
        elif key not in common:
            result = object.__getattribute__(self, key)
        elif self._legacy:
            result = self._legacy.get(key)
        else:
            result = self._data.get(key)
        return result

    def _validate_value(self, key, value, scheme=None):
        if key in self.SYNTAX_VALIDATORS:
            pattern, exclusions = self.SYNTAX_VALIDATORS[key]
            if (scheme or self.scheme) not in exclusions:
                m = pattern.match(value)
                if not m:
                    raise MetadataInvalidError("'%s' is an invalid value for "
                                               "the '%s' property" % (value,
                                                                      key))

    def __setattr__(self, key, value):
        self._validate_value(key, value)
        common = object.__getattribute__(self, 'common_keys')
        mapped = object.__getattribute__(self, 'mapped_keys')
        if key in mapped:
            lk, _ = mapped[key]
            if self._legacy:
                if lk is None:
                    raise NotImplementedError
                self._legacy[lk] = value
            elif key not in ('commands', 'exports', 'modules', 'namespaces',
                             'classifiers'):
                self._data[key] = value
            else:
                # special cases for PEP 459
                d = self._data.setdefault('extensions', {})
                if key == 'commands':
                    d['python.commands'] = value
                elif key == 'classifiers':
                    d = d.setdefault('python.details', {})
                    d[key] = value
                else:
                    d = d.setdefault('python.exports', {})
                    d[key] = value
        elif key not in common:
            object.__setattr__(self, key, value)
        else:
            if key == 'keywords':
                if isinstance(value, string_types):
                    value = value.strip()
                    if value:
                        value = value.split()
                    else:
                        value = []
            if self._legacy:
                self._legacy[key] = value
            else:
                self._data[key] = value

    @property
    def name_and_version(self):
        return _get_name_and_version(self.name, self.version, True)

    @property
    def provides(self):
        if self._legacy:
            result = self._legacy['Provides-Dist']
        else:
            result = self._data.setdefault('provides', [])
        s = '%s (%s)' % (self.name, self.version)
        if s not in result:
            result.append(s)
        return result

    @provides.setter
    def provides(self, value):
        if self._legacy:
            self._legacy['Provides-Dist'] = value
        else:
            self._data['provides'] = value

    def get_requirements(self, reqts, extras=None, env=None):
        """
        Base method to get dependencies, given a set of extras
        to satisfy and an optional environment context.
        :param reqts: A list of sometimes-wanted dependencies,
                      perhaps dependent on extras and environment.
        :param extras: A list of optional components being requested.
        :param env: An optional environment for marker evaluation.
        """
        if self._legacy:
            result = reqts
        else:
            result = []
            extras = get_extras(extras or [], self.extras)
            for d in reqts:
                if 'extra' not in d and 'environment' not in d:
                    # unconditional
                    include = True
                else:
                    if 'extra' not in d:
                        # Not extra-dependent - only environment-dependent
                        include = True
                    else:
                        include = d.get('extra') in extras
                    if include:
                        # Not excluded because of extras, check environment
                        marker = d.get('environment')
                        if marker:
                            include = interpret(marker, env)
                if include:
                    result.extend(d['requires'])
            for key in ('build', 'dev', 'test'):
                e = ':%s:' % key
                if e in extras:
                    extras.remove(e)
                    # A recursive call, but it should terminate since 'test'
                    # has been removed from the extras
                    reqts = self._data.get('%s_requires' % key, [])
                    result.extend(self.get_requirements(reqts, extras=extras,
                                                        env=env))
        return result

    @property
    def dictionary(self):
        if self._legacy:
            return self._from_legacy()
        return self._data

    @property
    def dependencies(self):
        if self._legacy:
            raise NotImplementedError
        else:
            return extract_by_key(self._data, self.DEPENDENCY_KEYS)

    @dependencies.setter
    def dependencies(self, value):
        if self._legacy:
            raise NotImplementedError
        else:
            self._data.update(value)

    def _validate_mapping(self, mapping, scheme):
        if mapping.get('metadata_version') != self.METADATA_VERSION:
            raise MetadataUnrecognizedVersionError()
        missing = []
        for key, exclusions in self.MANDATORY_KEYS.items():
            if key not in mapping:
                if scheme not in exclusions:
                    missing.append(key)
        if missing:
            msg = 'Missing metadata items: %s' % ', '.join(missing)
            raise MetadataMissingError(msg)
        for k, v in mapping.items():
            self._validate_value(k, v, scheme)

    def validate(self):
        if self._legacy:
            missing, warnings = self._legacy.check(True)
            if missing or warnings:
                logger.warning('Metadata: missing: %s, warnings: %s',
                               missing, warnings)
        else:
            self._validate_mapping(self._data, self.scheme)

    def todict(self):
        if self._legacy:
            return self._legacy.todict(True)
        else:
            result = extract_by_key(self._data, self.INDEX_KEYS)
            return result

    def _from_legacy(self):
        assert self._legacy and not self._data
        result = {
            'metadata_version': self.METADATA_VERSION,
            'generator': self.GENERATOR,
        }
        lmd = self._legacy.todict(True)     # skip missing ones
        for k in ('name', 'version', 'license', 'summary', 'description',
                  'classifier'):
            if k in lmd:
                if k == 'classifier':
                    nk = 'classifiers'
                else:
                    nk = k
                result[nk] = lmd[k]
        kw = lmd.get('Keywords', [])
        if kw == ['']:
            kw = []
        result['keywords'] = kw
        keys = (('requires_dist', 'run_requires'),
                ('setup_requires_dist', 'build_requires'))
        for ok, nk in keys:
            if ok in lmd and lmd[ok]:
                result[nk] = [{'requires': lmd[ok]}]
        result['provides'] = self.provides
        author = {}
        maintainer = {}
        return result

    LEGACY_MAPPING = {
        'name': 'Name',
        'version': 'Version',
        ('extensions', 'python.details', 'license'): 'License',
        'summary': 'Summary',
        'description': 'Description',
        ('extensions', 'python.project', 'project_urls', 'Home'): 'Home-page',
        ('extensions', 'python.project', 'contacts', 0, 'name'): 'Author',
        ('extensions', 'python.project', 'contacts', 0, 'email'): 'Author-email',
        'source_url': 'Download-URL',
        ('extensions', 'python.details', 'classifiers'): 'Classifier',
    }

    def _to_legacy(self):
        def process_entries(entries):
            reqts = set()
            for e in entries:
                extra = e.get('extra')
                env = e.get('environment')
                rlist = e['requires']
                for r in rlist:
                    if not env and not extra:
                        reqts.add(r)
                    else:
                        marker = ''
                        if extra:
                            marker = 'extra == "%s"' % extra
                        if env:
                            if marker:
                                marker = '(%s) and %s' % (env, marker)
                            else:
                                marker = env
                        reqts.add(';'.join((r, marker)))
            return reqts

        assert self._data and not self._legacy
        result = LegacyMetadata()
        nmd = self._data
        # import pdb; pdb.set_trace()
        for nk, ok in self.LEGACY_MAPPING.items():
            if not isinstance(nk, tuple):
                if nk in nmd:
                    result[ok] = nmd[nk]
            else:
                d = nmd
                found = True
                for k in nk:
                    try:
                        d = d[k]
                    except (KeyError, IndexError):
                        found = False
|
1017 |
-
break
|
1018 |
-
if found:
|
1019 |
-
result[ok] = d
|
1020 |
-
r1 = process_entries(self.run_requires + self.meta_requires)
|
1021 |
-
r2 = process_entries(self.build_requires + self.dev_requires)
|
1022 |
-
if self.extras:
|
1023 |
-
result['Provides-Extra'] = sorted(self.extras)
|
1024 |
-
result['Requires-Dist'] = sorted(r1)
|
1025 |
-
result['Setup-Requires-Dist'] = sorted(r2)
|
1026 |
-
# TODO: any other fields wanted
|
1027 |
-
return result
|
1028 |
-
|
1029 |
-
def write(self, path=None, fileobj=None, legacy=False, skip_unknown=True):
|
1030 |
-
if [path, fileobj].count(None) != 1:
|
1031 |
-
raise ValueError('Exactly one of path and fileobj is needed')
|
1032 |
-
self.validate()
|
1033 |
-
if legacy:
|
1034 |
-
if self._legacy:
|
1035 |
-
legacy_md = self._legacy
|
1036 |
-
else:
|
1037 |
-
legacy_md = self._to_legacy()
|
1038 |
-
if path:
|
1039 |
-
legacy_md.write(path, skip_unknown=skip_unknown)
|
1040 |
-
else:
|
1041 |
-
legacy_md.write_file(fileobj, skip_unknown=skip_unknown)
|
1042 |
-
else:
|
1043 |
-
if self._legacy:
|
1044 |
-
d = self._from_legacy()
|
1045 |
-
else:
|
1046 |
-
d = self._data
|
1047 |
-
if fileobj:
|
1048 |
-
json.dump(d, fileobj, ensure_ascii=True, indent=2,
|
1049 |
-
sort_keys=True)
|
1050 |
-
else:
|
1051 |
-
with codecs.open(path, 'w', 'utf-8') as f:
|
1052 |
-
json.dump(d, f, ensure_ascii=True, indent=2,
|
1053 |
-
sort_keys=True)
|
1054 |
-
|
1055 |
-
def add_requirements(self, requirements):
|
1056 |
-
if self._legacy:
|
1057 |
-
self._legacy.add_requirements(requirements)
|
1058 |
-
else:
|
1059 |
-
run_requires = self._data.setdefault('run_requires', [])
|
1060 |
-
always = None
|
1061 |
-
for entry in run_requires:
|
1062 |
-
if 'environment' not in entry and 'extra' not in entry:
|
1063 |
-
always = entry
|
1064 |
-
break
|
1065 |
-
if always is None:
|
1066 |
-
always = { 'requires': requirements }
|
1067 |
-
run_requires.insert(0, always)
|
1068 |
-
else:
|
1069 |
-
rset = set(always['requires']) | set(requirements)
|
1070 |
-
always['requires'] = sorted(rset)
|
1071 |
-
|
1072 |
-
def __repr__(self):
|
1073 |
-
name = self.name or '(no name)'
|
1074 |
-
version = self.version or 'no version'
|
1075 |
-
return '<%s %s %s (%s)>' % (self.__class__.__name__,
|
1076 |
-
self.metadata_version, name, version)
|
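The `process_entries` helper inside `_to_legacy` folds an extra name and an environment marker back into a single legacy requirement string, joining the requirement and the combined marker with `;`. The sketch below isolates that marker-joining logic as a standalone function; the name `combine_marker` is hypothetical, introduced here for illustration.

```python
def combine_marker(requirement, extra=None, environment=None):
    """Mirror the marker-joining logic of _to_legacy's process_entries.

    An extra becomes `extra == "name"`; an environment marker, when both
    are present, is parenthesised and ANDed in front of it.
    """
    if not extra and not environment:
        return requirement
    marker = ""
    if extra:
        marker = 'extra == "%s"' % extra
    if environment:
        if marker:
            marker = "(%s) and %s" % (environment, marker)
        else:
            marker = environment
    return ";".join((requirement, marker))


print(combine_marker("colorama (>=0.3)",
                     extra="color",
                     environment='sys_platform == "win32"'))
# → colorama (>=0.3);(sys_platform == "win32") and extra == "color"
```

Note the asymmetry this reproduces: a bare requirement passes through unchanged, while any extra or environment produces a `requirement;marker` pair in the legacy `Requires-Dist` format.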
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_win32_console.py
DELETED
@@ -1,662 +0,0 @@
"""Light wrapper around the Win32 Console API - this module should only be imported on Windows

The API that this module wraps is documented at https://docs.microsoft.com/en-us/windows/console/console-functions
"""
import ctypes
import sys
from typing import Any

windll: Any = None
if sys.platform == "win32":
    windll = ctypes.LibraryLoader(ctypes.WinDLL)
else:
    raise ImportError(f"{__name__} can only be imported on Windows")

import time
from ctypes import Structure, byref, wintypes
from typing import IO, NamedTuple, Type, cast

from pip._vendor.rich.color import ColorSystem
from pip._vendor.rich.style import Style

STDOUT = -11
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4

COORD = wintypes._COORD


class LegacyWindowsError(Exception):
    pass


class WindowsCoordinates(NamedTuple):
    """Coordinates in the Windows Console API are (y, x), not (x, y).
    This class is intended to prevent that confusion.
    Rows and columns are indexed from 0.
    This class can be used in place of wintypes._COORD in arguments and argtypes.
    """

    row: int
    col: int

    @classmethod
    def from_param(cls, value: "WindowsCoordinates") -> COORD:
        """Converts a WindowsCoordinates into a wintypes _COORD structure.
        This classmethod is internally called by ctypes to perform the conversion.

        Args:
            value (WindowsCoordinates): The input coordinates to convert.

        Returns:
            wintypes._COORD: The converted coordinates struct.
        """
        return COORD(value.col, value.row)


class CONSOLE_SCREEN_BUFFER_INFO(Structure):
    _fields_ = [
        ("dwSize", COORD),
        ("dwCursorPosition", COORD),
        ("wAttributes", wintypes.WORD),
        ("srWindow", wintypes.SMALL_RECT),
        ("dwMaximumWindowSize", COORD),
    ]


class CONSOLE_CURSOR_INFO(ctypes.Structure):
    _fields_ = [("dwSize", wintypes.DWORD), ("bVisible", wintypes.BOOL)]


_GetStdHandle = windll.kernel32.GetStdHandle
_GetStdHandle.argtypes = [
    wintypes.DWORD,
]
_GetStdHandle.restype = wintypes.HANDLE


def GetStdHandle(handle: int = STDOUT) -> wintypes.HANDLE:
    """Retrieves a handle to the specified standard device (standard input, standard output, or standard error).

    Args:
        handle (int): Integer identifier for the handle. Defaults to -11 (stdout).

    Returns:
        wintypes.HANDLE: The handle
    """
    return cast(wintypes.HANDLE, _GetStdHandle(handle))


_GetConsoleMode = windll.kernel32.GetConsoleMode
_GetConsoleMode.argtypes = [wintypes.HANDLE, wintypes.LPDWORD]
_GetConsoleMode.restype = wintypes.BOOL


def GetConsoleMode(std_handle: wintypes.HANDLE) -> int:
    """Retrieves the current input mode of a console's input buffer
    or the current output mode of a console screen buffer.

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.

    Raises:
        LegacyWindowsError: If any error occurs while calling the Windows console API.

    Returns:
        int: Value representing the current console mode as documented at
            https://docs.microsoft.com/en-us/windows/console/getconsolemode#parameters
    """

    console_mode = wintypes.DWORD()
    success = bool(_GetConsoleMode(std_handle, console_mode))
    if not success:
        raise LegacyWindowsError("Unable to get legacy Windows Console Mode")
    return console_mode.value


_FillConsoleOutputCharacterW = windll.kernel32.FillConsoleOutputCharacterW
_FillConsoleOutputCharacterW.argtypes = [
    wintypes.HANDLE,
    ctypes.c_char,
    wintypes.DWORD,
    cast(Type[COORD], WindowsCoordinates),
    ctypes.POINTER(wintypes.DWORD),
]
_FillConsoleOutputCharacterW.restype = wintypes.BOOL


def FillConsoleOutputCharacter(
    std_handle: wintypes.HANDLE,
    char: str,
    length: int,
    start: WindowsCoordinates,
) -> int:
    """Writes a character to the console screen buffer a specified number of times, beginning at the specified coordinates.

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        char (str): The character to write. Must be a string of length 1.
        length (int): The number of times to write the character.
        start (WindowsCoordinates): The coordinates to start writing at.

    Returns:
        int: The number of characters written.
    """
    character = ctypes.c_char(char.encode())
    num_characters = wintypes.DWORD(length)
    num_written = wintypes.DWORD(0)
    _FillConsoleOutputCharacterW(
        std_handle,
        character,
        num_characters,
        start,
        byref(num_written),
    )
    return num_written.value


_FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute
_FillConsoleOutputAttribute.argtypes = [
    wintypes.HANDLE,
    wintypes.WORD,
    wintypes.DWORD,
    cast(Type[COORD], WindowsCoordinates),
    ctypes.POINTER(wintypes.DWORD),
]
_FillConsoleOutputAttribute.restype = wintypes.BOOL


def FillConsoleOutputAttribute(
    std_handle: wintypes.HANDLE,
    attributes: int,
    length: int,
    start: WindowsCoordinates,
) -> int:
    """Sets the character attributes for a specified number of character cells,
    beginning at the specified coordinates in a screen buffer.

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        attributes (int): Integer value representing the foreground and background colours of the cells.
        length (int): The number of cells to set the output attribute of.
        start (WindowsCoordinates): The coordinates of the first cell whose attributes are to be set.

    Returns:
        int: The number of cells whose attributes were actually set.
    """
    num_cells = wintypes.DWORD(length)
    style_attrs = wintypes.WORD(attributes)
    num_written = wintypes.DWORD(0)
    _FillConsoleOutputAttribute(
        std_handle, style_attrs, num_cells, start, byref(num_written)
    )
    return num_written.value


_SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute
_SetConsoleTextAttribute.argtypes = [
    wintypes.HANDLE,
    wintypes.WORD,
]
_SetConsoleTextAttribute.restype = wintypes.BOOL


def SetConsoleTextAttribute(
    std_handle: wintypes.HANDLE, attributes: wintypes.WORD
) -> bool:
    """Set the colour attributes for all text written after this function is called.

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        attributes (int): Integer value representing the foreground and background colours.

    Returns:
        bool: True if the attribute was set successfully, otherwise False.
    """
    return bool(_SetConsoleTextAttribute(std_handle, attributes))


_GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo
_GetConsoleScreenBufferInfo.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO),
]
_GetConsoleScreenBufferInfo.restype = wintypes.BOOL


def GetConsoleScreenBufferInfo(
    std_handle: wintypes.HANDLE,
) -> CONSOLE_SCREEN_BUFFER_INFO:
    """Retrieves information about the specified console screen buffer.

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.

    Returns:
        CONSOLE_SCREEN_BUFFER_INFO: A CONSOLE_SCREEN_BUFFER_INFO ctype struct containing information about
            screen size, cursor position, colour attributes, and more."""
    console_screen_buffer_info = CONSOLE_SCREEN_BUFFER_INFO()
    _GetConsoleScreenBufferInfo(std_handle, byref(console_screen_buffer_info))
    return console_screen_buffer_info


_SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition
_SetConsoleCursorPosition.argtypes = [
    wintypes.HANDLE,
    cast(Type[COORD], WindowsCoordinates),
]
_SetConsoleCursorPosition.restype = wintypes.BOOL


def SetConsoleCursorPosition(
    std_handle: wintypes.HANDLE, coords: WindowsCoordinates
) -> bool:
    """Set the position of the cursor in the console screen

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        coords (WindowsCoordinates): The coordinates to move the cursor to.

    Returns:
        bool: True if the function succeeds, otherwise False.
    """
    return bool(_SetConsoleCursorPosition(std_handle, coords))


_GetConsoleCursorInfo = windll.kernel32.GetConsoleCursorInfo
_GetConsoleCursorInfo.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(CONSOLE_CURSOR_INFO),
]
_GetConsoleCursorInfo.restype = wintypes.BOOL


def GetConsoleCursorInfo(
    std_handle: wintypes.HANDLE, cursor_info: CONSOLE_CURSOR_INFO
) -> bool:
    """Get the cursor info - used to get cursor visibility and width

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        cursor_info (CONSOLE_CURSOR_INFO): CONSOLE_CURSOR_INFO ctype struct that receives information
            about the console's cursor.

    Returns:
        bool: True if the function succeeds, otherwise False.
    """
    return bool(_GetConsoleCursorInfo(std_handle, byref(cursor_info)))


_SetConsoleCursorInfo = windll.kernel32.SetConsoleCursorInfo
_SetConsoleCursorInfo.argtypes = [
    wintypes.HANDLE,
    ctypes.POINTER(CONSOLE_CURSOR_INFO),
]
_SetConsoleCursorInfo.restype = wintypes.BOOL


def SetConsoleCursorInfo(
    std_handle: wintypes.HANDLE, cursor_info: CONSOLE_CURSOR_INFO
) -> bool:
    """Set the cursor info - used for adjusting cursor visibility and width

    Args:
        std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer.
        cursor_info (CONSOLE_CURSOR_INFO): CONSOLE_CURSOR_INFO ctype struct containing the new cursor info.

    Returns:
        bool: True if the function succeeds, otherwise False.
    """
    return bool(_SetConsoleCursorInfo(std_handle, byref(cursor_info)))


_SetConsoleTitle = windll.kernel32.SetConsoleTitleW
_SetConsoleTitle.argtypes = [wintypes.LPCWSTR]
_SetConsoleTitle.restype = wintypes.BOOL


def SetConsoleTitle(title: str) -> bool:
    """Sets the title of the current console window

    Args:
        title (str): The new title of the console window.

    Returns:
        bool: True if the function succeeds, otherwise False.
    """
    return bool(_SetConsoleTitle(title))
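The `WindowsCoordinates.from_param` classmethod above exists because the Win32 `_COORD` struct stores `(X, Y)` — column first — while the rest of the module reasons in `(row, col)`. The sketch below demonstrates that swap in pure Python so it runs off Windows; `FakeCoord` is a hypothetical stand-in for `wintypes._COORD`.

```python
from typing import NamedTuple


class FakeCoord(NamedTuple):
    # Stand-in for wintypes._COORD: X is the column, Y is the row.
    X: int
    Y: int


class Coordinates(NamedTuple):
    # Same (row, col) ordering the module's WindowsCoordinates uses.
    row: int
    col: int

    def to_coord(self) -> FakeCoord:
        # The swap performed by WindowsCoordinates.from_param:
        # COORD(value.col, value.row)
        return FakeCoord(self.col, self.row)


pos = Coordinates(row=5, col=20)
print(pos.to_coord())
# → FakeCoord(X=20, Y=5)
```

In the real module this conversion is invoked implicitly: ctypes calls `from_param` whenever a `WindowsCoordinates` is passed where the argtypes list names the class, which is why `WindowsCoordinates` can appear directly in `argtypes`.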
class LegacyWindowsTerm:
    """This class allows interaction with the legacy Windows Console API. It should only be used in the context
    of environments where virtual terminal processing is not available. However, if it is used in a Windows environment,
    the entire API should work.

    Args:
        file (IO[str]): The file which the Windows Console API HANDLE is retrieved from, defaults to sys.stdout.
    """

    BRIGHT_BIT = 8

    # Indices are ANSI color numbers, values are the corresponding Windows Console API color numbers
    ANSI_TO_WINDOWS = [
        0,   # black               The Windows colours are defined in wincon.h as follows:
        4,   # red                 define FOREGROUND_BLUE      0x0001 -- 0000 0001
        2,   # green               define FOREGROUND_GREEN     0x0002 -- 0000 0010
        6,   # yellow              define FOREGROUND_RED       0x0004 -- 0000 0100
        1,   # blue                define FOREGROUND_INTENSITY 0x0008 -- 0000 1000
        5,   # magenta             define BACKGROUND_BLUE      0x0010 -- 0001 0000
        3,   # cyan                define BACKGROUND_GREEN     0x0020 -- 0010 0000
        7,   # white               define BACKGROUND_RED       0x0040 -- 0100 0000
        8,   # bright black (grey) define BACKGROUND_INTENSITY 0x0080 -- 1000 0000
        12,  # bright red
        10,  # bright green
        14,  # bright yellow
        9,   # bright blue
        13,  # bright magenta
        11,  # bright cyan
        15,  # bright white
    ]

    def __init__(self, file: "IO[str]") -> None:
        handle = GetStdHandle(STDOUT)
        self._handle = handle
        default_text = GetConsoleScreenBufferInfo(handle).wAttributes
        self._default_text = default_text

        self._default_fore = default_text & 7
        self._default_back = (default_text >> 4) & 7
        self._default_attrs = self._default_fore | (self._default_back << 4)

        self._file = file
        self.write = file.write
        self.flush = file.flush

    @property
    def cursor_position(self) -> WindowsCoordinates:
        """Returns the current position of the cursor (0-based)

        Returns:
            WindowsCoordinates: The current cursor position.
        """
        coord: COORD = GetConsoleScreenBufferInfo(self._handle).dwCursorPosition
        return WindowsCoordinates(row=cast(int, coord.Y), col=cast(int, coord.X))

    @property
    def screen_size(self) -> WindowsCoordinates:
        """Returns the current size of the console screen buffer, in character columns and rows

        Returns:
            WindowsCoordinates: The width and height of the screen as WindowsCoordinates.
        """
        screen_size: COORD = GetConsoleScreenBufferInfo(self._handle).dwSize
        return WindowsCoordinates(
            row=cast(int, screen_size.Y), col=cast(int, screen_size.X)
        )

    def write_text(self, text: str) -> None:
        """Write text directly to the terminal without any modification of styles

        Args:
            text (str): The text to write to the console
        """
        self.write(text)
        self.flush()

    def write_styled(self, text: str, style: Style) -> None:
        """Write styled text to the terminal.

        Args:
            text (str): The text to write
            style (Style): The style of the text
        """
        color = style.color
        bgcolor = style.bgcolor
        if style.reverse:
            color, bgcolor = bgcolor, color

        if color:
            fore = color.downgrade(ColorSystem.WINDOWS).number
            fore = fore if fore is not None else 7  # Default to ANSI 7: White
            if style.bold:
                fore = fore | self.BRIGHT_BIT
            if style.dim:
                fore = fore & ~self.BRIGHT_BIT
            fore = self.ANSI_TO_WINDOWS[fore]
        else:
            fore = self._default_fore

        if bgcolor:
            back = bgcolor.downgrade(ColorSystem.WINDOWS).number
            back = back if back is not None else 0  # Default to ANSI 0: Black
            back = self.ANSI_TO_WINDOWS[back]
        else:
            back = self._default_back

        assert fore is not None
        assert back is not None

        SetConsoleTextAttribute(
            self._handle, attributes=ctypes.c_ushort(fore | (back << 4))
        )
        self.write_text(text)
        SetConsoleTextAttribute(self._handle, attributes=self._default_text)

    def move_cursor_to(self, new_position: WindowsCoordinates) -> None:
        """Set the position of the cursor

        Args:
            new_position (WindowsCoordinates): The WindowsCoordinates representing the new position of the cursor.
        """
        if new_position.col < 0 or new_position.row < 0:
            return
        SetConsoleCursorPosition(self._handle, coords=new_position)

    def erase_line(self) -> None:
        """Erase all content on the line the cursor is currently located at"""
        screen_size = self.screen_size
        cursor_position = self.cursor_position
        cells_to_erase = screen_size.col
        start_coordinates = WindowsCoordinates(row=cursor_position.row, col=0)
        FillConsoleOutputCharacter(
            self._handle, " ", length=cells_to_erase, start=start_coordinates
        )
        FillConsoleOutputAttribute(
            self._handle,
            self._default_attrs,
            length=cells_to_erase,
            start=start_coordinates,
        )

    def erase_end_of_line(self) -> None:
        """Erase all content from the cursor position to the end of that line"""
        cursor_position = self.cursor_position
        cells_to_erase = self.screen_size.col - cursor_position.col
        FillConsoleOutputCharacter(
            self._handle, " ", length=cells_to_erase, start=cursor_position
        )
        FillConsoleOutputAttribute(
            self._handle,
            self._default_attrs,
            length=cells_to_erase,
            start=cursor_position,
        )

    def erase_start_of_line(self) -> None:
        """Erase all content from the cursor position to the start of that line"""
        row, col = self.cursor_position
        start = WindowsCoordinates(row, 0)
        FillConsoleOutputCharacter(self._handle, " ", length=col, start=start)
        FillConsoleOutputAttribute(
            self._handle, self._default_attrs, length=col, start=start
        )

    def move_cursor_up(self) -> None:
        """Move the cursor up a single cell"""
        cursor_position = self.cursor_position
        SetConsoleCursorPosition(
            self._handle,
            coords=WindowsCoordinates(
                row=cursor_position.row - 1, col=cursor_position.col
            ),
        )

    def move_cursor_down(self) -> None:
        """Move the cursor down a single cell"""
        cursor_position = self.cursor_position
        SetConsoleCursorPosition(
            self._handle,
            coords=WindowsCoordinates(
                row=cursor_position.row + 1,
                col=cursor_position.col,
            ),
        )

    def move_cursor_forward(self) -> None:
        """Move the cursor forward a single cell. Wrap to the next line if required."""
        row, col = self.cursor_position
        if col == self.screen_size.col - 1:
            row += 1
            col = 0
        else:
            col += 1
        SetConsoleCursorPosition(
            self._handle, coords=WindowsCoordinates(row=row, col=col)
        )

    def move_cursor_to_column(self, column: int) -> None:
        """Move cursor to the column specified by the zero-based column index, staying on the same row

        Args:
            column (int): The zero-based column index to move the cursor to.
        """
        row, _ = self.cursor_position
        SetConsoleCursorPosition(self._handle, coords=WindowsCoordinates(row, column))

    def move_cursor_backward(self) -> None:
        """Move the cursor backward a single cell. Wrap to the previous line if required."""
        row, col = self.cursor_position
        if col == 0:
            row -= 1
            col = self.screen_size.col - 1
        else:
            col -= 1
        SetConsoleCursorPosition(
            self._handle, coords=WindowsCoordinates(row=row, col=col)
        )

    def hide_cursor(self) -> None:
        """Hide the cursor"""
        current_cursor_size = self._get_cursor_size()
        invisible_cursor = CONSOLE_CURSOR_INFO(dwSize=current_cursor_size, bVisible=0)
        SetConsoleCursorInfo(self._handle, cursor_info=invisible_cursor)

    def show_cursor(self) -> None:
        """Show the cursor"""
        current_cursor_size = self._get_cursor_size()
        visible_cursor = CONSOLE_CURSOR_INFO(dwSize=current_cursor_size, bVisible=1)
        SetConsoleCursorInfo(self._handle, cursor_info=visible_cursor)

    def set_title(self, title: str) -> None:
        """Set the title of the terminal window

        Args:
            title (str): The new title of the console window
        """
        assert len(title) < 255, "Console title must be less than 255 characters"
        SetConsoleTitle(title)

    def _get_cursor_size(self) -> int:
        """Get the percentage of the character cell that is filled by the cursor"""
        cursor_info = CONSOLE_CURSOR_INFO()
        GetConsoleCursorInfo(self._handle, cursor_info=cursor_info)
        return int(cursor_info.dwSize)


if __name__ == "__main__":
    handle = GetStdHandle()

    from pip._vendor.rich.console import Console

    console = Console()

    term = LegacyWindowsTerm(sys.stdout)
    term.set_title("Win32 Console Examples")

    style = Style(color="black", bgcolor="red")

    heading = Style.parse("black on green")

    # Check colour output
    console.rule("Checking colour output")
    console.print("[on red]on red!")
    console.print("[blue]blue!")
    console.print("[yellow]yellow!")
    console.print("[bold yellow]bold yellow!")
    console.print("[bright_yellow]bright_yellow!")
    console.print("[dim bright_yellow]dim bright_yellow!")
    console.print("[italic cyan]italic cyan!")
    console.print("[bold white on blue]bold white on blue!")
    console.print("[reverse bold white on blue]reverse bold white on blue!")
    console.print("[bold black on cyan]bold black on cyan!")
    console.print("[black on green]black on green!")
    console.print("[blue on green]blue on green!")
    console.print("[white on black]white on black!")
    console.print("[black on white]black on white!")
    console.print("[#1BB152 on #DA812D]#1BB152 on #DA812D!")

    # Check cursor movement
    console.rule("Checking cursor movement")
    console.print()
    term.move_cursor_backward()
    term.move_cursor_backward()
    term.write_text("went back and wrapped to prev line")
    time.sleep(1)
    term.move_cursor_up()
    term.write_text("we go up")
    time.sleep(1)
    term.move_cursor_down()
    term.write_text("and down")
    time.sleep(1)
    term.move_cursor_up()
    term.move_cursor_backward()
    term.move_cursor_backward()
    term.write_text("we went up and back 2")
    time.sleep(1)
    term.move_cursor_down()
    term.move_cursor_backward()
    term.move_cursor_backward()
    term.write_text("we went down and back 2")
    time.sleep(1)

    # Check erasing of lines
    term.hide_cursor()
    console.print()
    console.rule("Checking line erasing")
    console.print("\n...Deleting to the start of the line...")
    term.write_text("The red arrow shows the cursor location, and direction of erase")
    time.sleep(1)
    term.move_cursor_to_column(16)
    term.write_styled("<", Style.parse("black on red"))
    term.move_cursor_backward()
    time.sleep(1)
    term.erase_start_of_line()
    time.sleep(1)

    console.print("\n\n...And to the end of the line...")
|
647 |
-
term.write_text("The red arrow shows the cursor location, and direction of erase")
|
648 |
-
time.sleep(1)
|
649 |
-
|
650 |
-
term.move_cursor_to_column(16)
|
651 |
-
term.write_styled(">", Style.parse("black on red"))
|
652 |
-
time.sleep(1)
|
653 |
-
term.erase_end_of_line()
|
654 |
-
time.sleep(1)
|
655 |
-
|
656 |
-
console.print("\n\n...Now the whole line will be erased...")
|
657 |
-
term.write_styled("I'm going to disappear!", style=Style.parse("black on cyan"))
|
658 |
-
time.sleep(1)
|
659 |
-
term.erase_line()
|
660 |
-
|
661 |
-
term.show_cursor()
|
662 |
-
print("\n")
|
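The deleted demo above drives the console through Win32 calls (`SetConsoleCursorInfo`, `SetConsoleTitle`). For illustration only, here is a minimal portable analogue of the same hide-cursor/set-title/show-cursor pattern using standard ANSI/VT escape sequences instead of the Win32 API; this sketch is not part of the deleted module, and `demo` is a hypothetical helper.

```python
import sys

# ANSI/VT analogues of the Win32 calls in the deleted module:
# SetConsoleCursorInfo(bVisible=0/1) -> DECTCEM hide/show,
# SetConsoleTitle -> OSC 0 title sequence.
HIDE_CURSOR = "\x1b[?25l"  # DECTCEM: hide cursor
SHOW_CURSOR = "\x1b[?25h"  # DECTCEM: show cursor


def set_title(title: str) -> str:
    """Return the OSC escape sequence that sets the terminal window title."""
    assert len(title) < 255, "Console title must be less than 255 characters"
    return f"\x1b]0;{title}\x07"


def demo(out=sys.stdout) -> None:
    """Hide the cursor, retitle the window, then restore the cursor."""
    out.write(HIDE_CURSOR)
    out.write(set_title("Win32 Console Examples"))
    out.write(SHOW_CURSOR)
```

On modern Windows terminals these sequences work once virtual terminal processing is enabled; the deleted module exists precisely for legacy consoles where they do not.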
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/protocol.py
DELETED
@@ -1,42 +0,0 @@
-from typing import Any, cast, Set, TYPE_CHECKING
-from inspect import isclass
-
-if TYPE_CHECKING:
-    from pip._vendor.rich.console import RenderableType
-
-_GIBBERISH = """aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf"""
-
-
-def is_renderable(check_object: Any) -> bool:
-    """Check if an object may be rendered by Rich."""
-    return (
-        isinstance(check_object, str)
-        or hasattr(check_object, "__rich__")
-        or hasattr(check_object, "__rich_console__")
-    )
-
-
-def rich_cast(renderable: object) -> "RenderableType":
-    """Cast an object to a renderable by calling __rich__ if present.
-
-    Args:
-        renderable (object): A potentially renderable object
-
-    Returns:
-        object: The result of recursively calling __rich__.
-    """
-    from pip._vendor.rich.console import RenderableType
-
-    rich_visited_set: Set[type] = set()  # Prevent potential infinite loop
-    while hasattr(renderable, "__rich__") and not isclass(renderable):
-        # Detect object which claim to have all the attributes
-        if hasattr(renderable, _GIBBERISH):
-            return repr(renderable)
-        cast_method = getattr(renderable, "__rich__")
-        renderable = cast_method()
-        renderable_type = type(renderable)
-        if renderable_type in rich_visited_set:
-            break
-        rich_visited_set.add(renderable_type)
-
-    return cast(RenderableType, renderable)
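The `rich_cast` function in the deleted `protocol.py` keeps calling `__rich__` until it reaches a value without one, breaking out if a result type repeats so that cyclic `__rich__` chains cannot loop forever. A self-contained sketch of that cycle-guarded loop (toy `Wrapper` class and `cast_to_renderable` name are illustrative, not Rich's API):

```python
from typing import Any, Set


def cast_to_renderable(obj: Any) -> Any:
    """Follow __rich__ until a plain value or a repeated result type."""
    visited: Set[type] = set()
    while hasattr(obj, "__rich__") and not isinstance(obj, type):
        obj = obj.__rich__()
        if type(obj) in visited:
            break  # guard against __rich__ cycles
        visited.add(type(obj))
    return obj


class Wrapper:
    """Toy renderable that defers to whatever it wraps."""

    def __init__(self, inner: Any) -> None:
        self.inner = inner

    def __rich__(self) -> Any:
        return self.inner


# Nested wrappers unwrap all the way down to the plain string.
assert cast_to_renderable(Wrapper(Wrapper("hello"))) == "hello"
```

The `isinstance(obj, type)` check mirrors Rich's `isclass` guard: a class that merely defines `__rich__` for its instances should not itself be unwrapped.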
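The `compute_distance.h` diff that follows repeatedly uses the same primitive for its straight-line and rectangle branches: project the query point onto a segment via `t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0)` and clamp `t` to [0, 1]. A minimal Python sketch of that rule (illustrative only, not part of the repository):

```python
def closest_point_on_segment(p0, p1, pt):
    """Project pt onto the segment p0-p1 and clamp the parameter to [0, 1]."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # Parameter of the orthogonal projection onto the infinite line.
    t = ((pt[0] - p0[0]) * dx + (pt[1] - p0[1]) * dy) / (dx * dx + dy * dy)
    # Clamping maps points past either endpoint to that endpoint.
    t = max(0.0, min(1.0, t))
    return (p0[0] + t * dx, p0[1] + t * dy)


# A point beside the middle of a horizontal segment projects straight down.
assert closest_point_on_segment((0, 0), (10, 0), (5, 3)) == (5.0, 0.0)
# A point past the far end clamps to that endpoint.
assert closest_point_on_segment((0, 0), (10, 0), (12, 1)) == (10.0, 0.0)
```

The Bezier branches below generalize this to curves: instead of a closed-form projection, they solve `(q(t) - pt) . q'(t) = 0` for the curve `q`, which is a cubic for quadratic Beziers and a quintic for cubic Beziers.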
spaces/CVPR/LIVE/compute_distance.h
DELETED
@@ -1,949 +0,0 @@
-#pragma once
-
-#include "diffvg.h"
-#include "edge_query.h"
-#include "scene.h"
-#include "shape.h"
-#include "solve.h"
-#include "vector.h"
-
-#include <cassert>
-
-struct ClosestPointPathInfo {
-    int base_point_id;
-    int point_id;
-    float t_root;
-};
-
-DEVICE
-inline
-bool closest_point(const Circle &circle, const Vector2f &pt,
-                   Vector2f *result) {
-    *result = circle.center + circle.radius * normalize(pt - circle.center);
-    return false;
-}
-
-DEVICE
-inline
-bool closest_point(const Path &path, const BVHNode *bvh_nodes, const Vector2f &pt, float max_radius,
-                   ClosestPointPathInfo *path_info,
-                   Vector2f *result) {
-    auto min_dist = max_radius;
-    auto ret_pt = Vector2f{0, 0};
-    auto found = false;
-    auto num_segments = path.num_base_points;
-    constexpr auto max_bvh_size = 128;
-    int bvh_stack[max_bvh_size];
-    auto stack_size = 0;
-    bvh_stack[stack_size++] = 2 * num_segments - 2;
-    while (stack_size > 0) {
-        const BVHNode &node = bvh_nodes[bvh_stack[--stack_size]];
-        if (node.child1 < 0) {
-            // leaf
-            auto base_point_id = node.child0;
-            auto point_id = - node.child1 - 1;
-            assert(base_point_id < num_segments);
-            assert(point_id < path.num_points);
-            auto dist = 0.f;
-            auto closest_pt = Vector2f{0, 0};
-            auto t_root = 0.f;
-            if (path.num_control_points[base_point_id] == 0) {
-                // Straight line
-                auto i0 = point_id;
-                auto i1 = (point_id + 1) % path.num_points;
-                auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-                auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-                // project pt to line
-                auto t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0);
-                if (t < 0) {
-                    dist = distance(p0, pt);
-                    closest_pt = p0;
-                    t_root = 0;
-                } else if (t > 1) {
-                    dist = distance(p1, pt);
-                    closest_pt = p1;
-                    t_root = 1;
-                } else {
-                    dist = distance(p0 + t * (p1 - p0), pt);
-                    closest_pt = p0 + t * (p1 - p0);
-                    t_root = t;
-                }
-            } else if (path.num_control_points[base_point_id] == 1) {
-                // Quadratic Bezier curve
-                auto i0 = point_id;
-                auto i1 = point_id + 1;
-                auto i2 = (point_id + 2) % path.num_points;
-                auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-                auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-                auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]};
-                if (path.use_distance_approx) {
-                    closest_pt = quadratic_closest_pt_approx(p0, p1, p2, pt, &t_root);
-                    dist = distance(closest_pt, pt);
-                } else {
-                    auto eval = [&](float t) -> Vector2f {
-                        auto tt = 1 - t;
-                        return (tt*tt)*p0 + (2*tt*t)*p1 + (t*t)*p2;
-                    };
-                    auto pt0 = eval(0);
-                    auto pt1 = eval(1);
-                    auto dist0 = distance(pt0, pt);
-                    auto dist1 = distance(pt1, pt);
-                    {
-                        dist = dist0;
-                        closest_pt = pt0;
-                        t_root = 0;
-                    }
-                    if (dist1 < dist) {
-                        dist = dist1;
-                        closest_pt = pt1;
-                        t_root = 1;
-                    }
-                    // The curve is (1-t)^2p0 + 2(1-t)tp1 + t^2p2
-                    // = (p0-2p1+p2)t^2+(-2p0+2p1)t+p0 = q
-                    // Want to solve (q - pt) dot q' = 0
-                    // q' = (p0-2p1+p2)t + (-p0+p1)
-                    // Expanding (p0-2p1+p2)^2 t^3 +
-                    //           3(p0-2p1+p2)(-p0+p1) t^2 +
-                    //           (2(-p0+p1)^2+(p0-2p1+p2)(p0-pt))t +
-                    //           (-p0+p1)(p0-pt) = 0
-                    auto A = sum((p0-2*p1+p2)*(p0-2*p1+p2));
-                    auto B = sum(3*(p0-2*p1+p2)*(-p0+p1));
-                    auto C = sum(2*(-p0+p1)*(-p0+p1)+(p0-2*p1+p2)*(p0-pt));
-                    auto D = sum((-p0+p1)*(p0-pt));
-                    float t[3];
-                    int num_sol = solve_cubic(A, B, C, D, t);
-                    for (int j = 0; j < num_sol; j++) {
-                        if (t[j] >= 0 && t[j] <= 1) {
-                            auto p = eval(t[j]);
-                            auto distp = distance(p, pt);
-                            if (distp < dist) {
-                                dist = distp;
-                                closest_pt = p;
-                                t_root = t[j];
-                            }
-                        }
-                    }
-                }
-            } else if (path.num_control_points[base_point_id] == 2) {
-                // Cubic Bezier curve
-                auto i0 = point_id;
-                auto i1 = point_id + 1;
-                auto i2 = point_id + 2;
-                auto i3 = (point_id + 3) % path.num_points;
-                auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-                auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-                auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]};
-                auto p3 = Vector2f{path.points[2 * i3], path.points[2 * i3 + 1]};
-                auto eval = [&](float t) -> Vector2f {
-                    auto tt = 1 - t;
-                    return (tt*tt*tt)*p0 + (3*tt*tt*t)*p1 + (3*tt*t*t)*p2 + (t*t*t)*p3;
-                };
-                auto pt0 = eval(0);
-                auto pt1 = eval(1);
-                auto dist0 = distance(pt0, pt);
-                auto dist1 = distance(pt1, pt);
-                {
-                    dist = dist0;
-                    closest_pt = pt0;
-                    t_root = 0;
-                }
-                if (dist1 < dist) {
-                    dist = dist1;
-                    closest_pt = pt1;
-                    t_root = 1;
-                }
-                // The curve is (1 - t)^3 p0 + 3 * (1 - t)^2 t p1 + 3 * (1 - t) t^2 p2 + t^3 p3
-                // = (-p0+3p1-3p2+p3) t^3 + (3p0-6p1+3p2) t^2 + (-3p0+3p1) t + p0
-                // Want to solve (q - pt) dot q' = 0
-                // q' = 3*(-p0+3p1-3p2+p3)t^2 + 2*(3p0-6p1+3p2)t + (-3p0+3p1)
-                // Expanding
-                // 3*(-p0+3p1-3p2+p3)^2 t^5
-                // 5*(-p0+3p1-3p2+p3)(3p0-6p1+3p2) t^4
-                // 4*(-p0+3p1-3p2+p3)(-3p0+3p1) + 2*(3p0-6p1+3p2)^2 t^3
-                // 3*(3p0-6p1+3p2)(-3p0+3p1) + 3*(-p0+3p1-3p2+p3)(p0-pt) t^2
-                // (-3p0+3p1)^2+2(p0-pt)(3p0-6p1+3p2) t
-                // (p0-pt)(-3p0+3p1)
-                double A = 3*sum((-p0+3*p1-3*p2+p3)*(-p0+3*p1-3*p2+p3));
-                double B = 5*sum((-p0+3*p1-3*p2+p3)*(3*p0-6*p1+3*p2));
-                double C = 4*sum((-p0+3*p1-3*p2+p3)*(-3*p0+3*p1)) + 2*sum((3*p0-6*p1+3*p2)*(3*p0-6*p1+3*p2));
-                double D = 3*(sum((3*p0-6*p1+3*p2)*(-3*p0+3*p1)) + sum((-p0+3*p1-3*p2+p3)*(p0-pt)));
-                double E = sum((-3*p0+3*p1)*(-3*p0+3*p1)) + 2*sum((p0-pt)*(3*p0-6*p1+3*p2));
-                double F = sum((p0-pt)*(-3*p0+3*p1));
-                // normalize the polynomial
-                B /= A;
-                C /= A;
-                D /= A;
-                E /= A;
-                F /= A;
-                // Isolator Polynomials:
-                // https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.2233&rep=rep1&type=pdf
-                //      x/5 + B/25
-                //    /-----------------------------------------------------
-                // 5x^4 + 4B x^3 + 3C x^2 + 2D x + E / x^5 + B x^4 + C x^3 + D x^2 + E x + F
-                //                                     x^5 + 4B/5 x^4 + 3C/5 x^3 + 2D/5 x^2 + E/5 x
-                //                                     ----------------------------------------------------
-                //                                     B/5 x^4 + 2C/5 x^3 + 3D/5 x^2 + 4E/5 x + F
-                //                                     B/5 x^4 + 4B^2/25 x^3 + 3BC/25 x^2 + 2BD/25 x + BE/25
-                //                                     ----------------------------------------------------
-                //                                     (2C/5 - 4B^2/25)x^3 + (3D/5-3BC/25)x^2 + (4E/5-2BD/25) + (F-BE/25)
-                auto p1A = ((2 / 5.f) * C - (4 / 25.f) * B * B);
-                auto p1B = ((3 / 5.f) * D - (3 / 25.f) * B * C);
-                auto p1C = ((4 / 5.f) * E - (2 / 25.f) * B * D);
-                auto p1D = F - B * E / 25.f;
-                // auto q1A = 1 / 5.f;
-                // auto q1B = B / 25.f;
-                // x/5 + B/25 = 0
-                // x = -B/5
-                auto q_root = -B/5.f;
-                double p_roots[3];
-                int num_sol = solve_cubic(p1A, p1B, p1C, p1D, p_roots);
-                float intervals[4];
-                if (q_root >= 0 && q_root <= 1) {
-                    intervals[0] = q_root;
-                }
-                for (int j = 0; j < num_sol; j++) {
-                    intervals[j + 1] = p_roots[j];
-                }
-                auto num_intervals = 1 + num_sol;
-                // sort intervals
-                for (int j = 1; j < num_intervals; j++) {
-                    for (int k = j; k > 0 && intervals[k - 1] > intervals[k]; k--) {
-                        auto tmp = intervals[k];
-                        intervals[k] = intervals[k - 1];
-                        intervals[k - 1] = tmp;
-                    }
-                }
-                auto eval_polynomial = [&] (double t) {
-                    return t*t*t*t*t+
-                           B*t*t*t*t+
-                           C*t*t*t+
-                           D*t*t+
-                           E*t+
-                           F;
-                };
-                auto eval_polynomial_deriv = [&] (double t) {
-                    return 5*t*t*t*t+
-                           4*B*t*t*t+
-                           3*C*t*t+
-                           2*D*t+
-                           E;
-                };
-                auto lower_bound = 0.f;
-                for (int j = 0; j < num_intervals + 1; j++) {
-                    if (j < num_intervals && intervals[j] < 0.f) {
-                        continue;
-                    }
-                    auto upper_bound = j < num_intervals ?
-                        min(intervals[j], 1.f) : 1.f;
-                    auto lb = lower_bound;
-                    auto ub = upper_bound;
-                    auto lb_eval = eval_polynomial(lb);
-                    auto ub_eval = eval_polynomial(ub);
-                    if (lb_eval * ub_eval > 0) {
-                        // Doesn't have root
-                        continue;
-                    }
-                    if (lb_eval > ub_eval) {
-                        swap_(lb, ub);
-                    }
-                    auto t = 0.5f * (lb + ub);
-                    auto num_iter = 20;
-                    for (int it = 0; it < num_iter; it++) {
-                        if (!(t >= lb && t <= ub)) {
-                            t = 0.5f * (lb + ub);
-                        }
-                        auto value = eval_polynomial(t);
-                        if (fabs(value) < 1e-5f || it == num_iter - 1) {
-                            break;
-                        }
-                        // The derivative may not be entirely accurate,
-                        // but the bisection is going to handle this
-                        if (value > 0.f) {
-                            ub = t;
-                        } else {
-                            lb = t;
-                        }
-                        auto derivative = eval_polynomial_deriv(t);
-                        t -= value / derivative;
-                    }
-                    auto p = eval(t);
-                    auto distp = distance(p, pt);
-                    if (distp < dist) {
-                        dist = distp;
-                        closest_pt = p;
-                        t_root = t;
-                    }
-                    if (upper_bound >= 1.f) {
-                        break;
-                    }
-                    lower_bound = upper_bound;
-                }
-            } else {
-                assert(false);
-            }
-            if (dist < min_dist) {
-                min_dist = dist;
-                ret_pt = closest_pt;
-                path_info->base_point_id = base_point_id;
-                path_info->point_id = point_id;
-                path_info->t_root = t_root;
-                found = true;
-            }
-        } else {
-            assert(node.child0 >= 0 && node.child1 >= 0);
-            const AABB &b0 = bvh_nodes[node.child0].box;
-            if (within_distance(b0, pt, min_dist)) {
-                bvh_stack[stack_size++] = node.child0;
-            }
-            const AABB &b1 = bvh_nodes[node.child1].box;
-            if (within_distance(b1, pt, min_dist)) {
-                bvh_stack[stack_size++] = node.child1;
-            }
-            assert(stack_size <= max_bvh_size);
-        }
-    }
-    if (found) {
-        assert(path_info->base_point_id < num_segments);
-    }
-    *result = ret_pt;
-    return found;
-}
-
-DEVICE
-inline
-bool closest_point(const Rect &rect, const Vector2f &pt,
-                   Vector2f *result) {
-    auto min_dist = 0.f;
-    auto closest_pt = Vector2f{0, 0};
-    auto update = [&](const Vector2f &p0, const Vector2f &p1, bool first) {
-        // project pt to line
-        auto t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0);
-        if (t < 0) {
-            auto d = distance(p0, pt);
-            if (first || d < min_dist) {
-                min_dist = d;
-                closest_pt = p0;
-            }
-        } else if (t > 1) {
-            auto d = distance(p1, pt);
-            if (first || d < min_dist) {
-                min_dist = d;
-                closest_pt = p1;
-            }
-        } else {
-            auto p = p0 + t * (p1 - p0);
-            auto d = distance(p, pt);
-            if (first || d < min_dist) {
-                min_dist = d;
-                closest_pt = p0;
-            }
-        }
-    };
-    auto left_top = rect.p_min;
-    auto right_top = Vector2f{rect.p_max.x, rect.p_min.y};
-    auto left_bottom = Vector2f{rect.p_min.x, rect.p_max.y};
-    auto right_bottom = rect.p_max;
-    update(left_top, left_bottom, true);
-    update(left_top, right_top, false);
-    update(right_top, right_bottom, false);
-    update(left_bottom, right_bottom, false);
-    *result = closest_pt;
-    return true;
-}
-
-DEVICE
-inline
-bool closest_point(const Shape &shape, const BVHNode *bvh_nodes, const Vector2f &pt, float max_radius,
-                   ClosestPointPathInfo *path_info,
-                   Vector2f *result) {
-    switch (shape.type) {
-        case ShapeType::Circle:
-            return closest_point(*(const Circle *)shape.ptr, pt, result);
-        case ShapeType::Ellipse:
-            // https://www.geometrictools.com/Documentation/DistancePointEllipseEllipsoid.pdf
-            assert(false);
-            return false;
-        case ShapeType::Path:
-            return closest_point(*(const Path *)shape.ptr, bvh_nodes, pt, max_radius, path_info, result);
-        case ShapeType::Rect:
-            return closest_point(*(const Rect *)shape.ptr, pt, result);
-    }
-    assert(false);
-    return false;
-}
-
-DEVICE
-inline
-bool compute_distance(const SceneData &scene,
-                      int shape_group_id,
-                      const Vector2f &pt,
-                      float max_radius,
-                      int *min_shape_id,
-                      Vector2f *closest_pt_,
-                      ClosestPointPathInfo *path_info,
-                      float *result) {
-    const ShapeGroup &shape_group = scene.shape_groups[shape_group_id];
-    // pt is in canvas space, transform it to shape's local space
-    auto local_pt = xform_pt(shape_group.canvas_to_shape, pt);
-
-    constexpr auto max_bvh_stack_size = 64;
-    int bvh_stack[max_bvh_stack_size];
-    auto stack_size = 0;
-    bvh_stack[stack_size++] = 2 * shape_group.num_shapes - 2;
-    const auto &bvh_nodes = scene.shape_groups_bvh_nodes[shape_group_id];
-
-    auto min_dist = max_radius;
-    auto found = false;
-
-    while (stack_size > 0) {
-        const BVHNode &node = bvh_nodes[bvh_stack[--stack_size]];
-        if (node.child1 < 0) {
-            // leaf
-            auto shape_id = node.child0;
-            const auto &shape = scene.shapes[shape_id];
-            ClosestPointPathInfo local_path_info{-1, -1};
-            auto local_closest_pt = Vector2f{0, 0};
-            if (closest_point(shape, scene.path_bvhs[shape_id], local_pt, max_radius, &local_path_info, &local_closest_pt)) {
-                auto closest_pt = xform_pt(shape_group.shape_to_canvas, local_closest_pt);
-                auto dist = distance(closest_pt, pt);
-                if (!found || dist < min_dist) {
-                    found = true;
-                    min_dist = dist;
-                    if (min_shape_id != nullptr) {
-                        *min_shape_id = shape_id;
-                    }
-                    if (closest_pt_ != nullptr) {
-                        *closest_pt_ = closest_pt;
-                    }
-                    if (path_info != nullptr) {
-                        *path_info = local_path_info;
-                    }
-                }
-            }
-        } else {
-            assert(node.child0 >= 0 && node.child1 >= 0);
-            const AABB &b0 = bvh_nodes[node.child0].box;
-            if (inside(b0, local_pt, max_radius)) {
-                bvh_stack[stack_size++] = node.child0;
-            }
-            const AABB &b1 = bvh_nodes[node.child1].box;
-            if (inside(b1, local_pt, max_radius)) {
-                bvh_stack[stack_size++] = node.child1;
-            }
-            assert(stack_size <= max_bvh_stack_size);
-        }
-    }
-
-    *result = min_dist;
-    return found;
-}
-
-
-DEVICE
-inline
-void d_closest_point(const Circle &circle,
-                     const Vector2f &pt,
-                     const Vector2f &d_closest_pt,
-                     Circle &d_circle,
-                     Vector2f &d_pt) {
-    // return circle.center + circle.radius * normalize(pt - circle.center);
-    auto d_center = d_closest_pt *
-        (1 + d_normalize(pt - circle.center, circle.radius * d_closest_pt));
-    atomic_add(&d_circle.center.x, d_center);
-    atomic_add(&d_circle.radius, dot(d_closest_pt, normalize(pt - circle.center)));
-}
-
-DEVICE
-inline
-void d_closest_point(const Path &path,
-                     const Vector2f &pt,
-                     const Vector2f &d_closest_pt,
-                     const ClosestPointPathInfo &path_info,
-                     Path &d_path,
-                     Vector2f &d_pt) {
-    auto base_point_id = path_info.base_point_id;
-    auto point_id = path_info.point_id;
-    auto min_t_root = path_info.t_root;
-
-    if (path.num_control_points[base_point_id] == 0) {
-        // Straight line
-        auto i0 = point_id;
-        auto i1 = (point_id + 1) % path.num_points;
-        auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-        auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-        // project pt to line
-        auto t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0);
-        auto d_p0 = Vector2f{0, 0};
-        auto d_p1 = Vector2f{0, 0};
-        if (t < 0) {
-            d_p0 += d_closest_pt;
-        } else if (t > 1) {
-            d_p1 += d_closest_pt;
-        } else {
-            auto d_p = d_closest_pt;
-            // p = p0 + t * (p1 - p0)
-            d_p0 += d_p * (1 - t);
-            d_p1 += d_p * t;
-        }
-        atomic_add(d_path.points + 2 * i0, d_p0);
-        atomic_add(d_path.points + 2 * i1, d_p1);
-    } else if (path.num_control_points[base_point_id] == 1) {
-        // Quadratic Bezier curve
-        auto i0 = point_id;
-        auto i1 = point_id + 1;
-        auto i2 = (point_id + 2) % path.num_points;
-        auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-        auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-        auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]};
-        // auto eval = [&](float t) -> Vector2f {
-        //     auto tt = 1 - t;
-        //     return (tt*tt)*p0 + (2*tt*t)*p1 + (t*t)*p2;
-        // };
-        // auto dist0 = distance(eval(0), pt);
-        // auto dist1 = distance(eval(1), pt);
-        auto d_p0 = Vector2f{0, 0};
-        auto d_p1 = Vector2f{0, 0};
-        auto d_p2 = Vector2f{0, 0};
-        auto t = min_t_root;
-        if (t == 0) {
-            d_p0 += d_closest_pt;
-        } else if (t == 1) {
-            d_p2 += d_closest_pt;
-        } else {
-            // The curve is (1-t)^2p0 + 2(1-t)tp1 + t^2p2
-            // = (p0-2p1+p2)t^2+(-2p0+2p1)t+p0 = q
-            // Want to solve (q - pt) dot q' = 0
-            // q' = (p0-2p1+p2)t + (-p0+p1)
-            // Expanding (p0-2p1+p2)^2 t^3 +
-            //           3(p0-2p1+p2)(-p0+p1) t^2 +
-            //           (2(-p0+p1)^2+(p0-2p1+p2)(p0-pt))t +
-            //           (-p0+p1)(p0-pt) = 0
-            auto A = sum((p0-2*p1+p2)*(p0-2*p1+p2));
-            auto B = sum(3*(p0-2*p1+p2)*(-p0+p1));
-            auto C = sum(2*(-p0+p1)*(-p0+p1)+(p0-2*p1+p2)*(p0-pt));
-            // auto D = sum((-p0+p1)*(p0-pt));
-            auto d_p = d_closest_pt;
-            // p = eval(t)
-            auto tt = 1 - t;
-            // (tt*tt)*p0 + (2*tt*t)*p1 + (t*t)*p2
-            auto d_tt = 2 * tt * dot(d_p, p0) + 2 * t * dot(d_p, p1);
-            auto d_t = -d_tt + 2 * tt * dot(d_p, p1) + 2 * t * dot(d_p, p2);
-            auto d_p0 = d_p * tt * tt;
-            auto d_p1 = 2 * d_p * tt * t;
-            auto d_p2 = d_p * t * t;
-            // implicit function theorem: dt/dA = -1/(p'(t)) * dp/dA
-            auto poly_deriv_t = 3 * A * t * t + 2 * B * t + C;
-            if (fabs(poly_deriv_t) > 1e-6f) {
-                auto d_A = - (d_t / poly_deriv_t) * t * t * t;
-                auto d_B = - (d_t / poly_deriv_t) * t * t;
-                auto d_C = - (d_t / poly_deriv_t) * t;
-                auto d_D = - (d_t / poly_deriv_t);
-                // A = sum((p0-2*p1+p2)*(p0-2*p1+p2))
-                // B = sum(3*(p0-2*p1+p2)*(-p0+p1))
-                // C = sum(2*(-p0+p1)*(-p0+p1)+(p0-2*p1+p2)*(p0-pt))
-                // D = sum((-p0+p1)*(p0-pt))
-                d_p0 += 2*d_A*(p0-2*p1+p2)+
-                        3*d_B*((-p0+p1)-(p0-2*p1+p2))+
-                        2*d_C*(-2*(-p0+p1))+
-                        d_C*((p0-pt)+(p0-2*p1+p2))+
-                        2*d_D*(-(p0-pt)+(-p0+p1));
-                d_p1 += (-2)*2*d_A*(p0-2*p1+p2)+
-                        3*d_B*(-2*(-p0+p1)+(p0-2*p1+p2))+
-                        2*d_C*(2*(-p0+p1))+
-                        d_C*((-2)*(p0-pt))+
-                        d_D*(p0-pt);
-                d_p2 += 2*d_A*(p0-2*p1+p2)+
-                        3*d_B*(-p0+p1)+
-                        d_C*(p0-pt);
-                d_pt += d_C*(-(p0-2*p1+p2))+
-                        d_D*(-(-p0+p1));
-            }
-        }
-        atomic_add(d_path.points + 2 * i0, d_p0);
-        atomic_add(d_path.points + 2 * i1, d_p1);
-        atomic_add(d_path.points + 2 * i2, d_p2);
-    } else if (path.num_control_points[base_point_id] == 2) {
-        // Cubic Bezier curve
-        auto i0 = point_id;
-        auto i1 = point_id + 1;
-        auto i2 = point_id + 2;
-        auto i3 = (point_id + 3) % path.num_points;
-        auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]};
-        auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]};
-        auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]};
-        auto p3 = Vector2f{path.points[2 * i3], path.points[2 * i3 + 1]};
-        // auto eval = [&](float t) -> Vector2f {
-        //     auto tt = 1 - t;
-        //     return (tt*tt*tt)*p0 + (3*tt*tt*t)*p1 + (3*tt*t*t)*p2 + (t*t*t)*p3;
-        // };
-        auto d_p0 = Vector2f{0, 0};
-        auto d_p1 = Vector2f{0, 0};
-        auto d_p2 = Vector2f{0, 0};
-        auto d_p3 = Vector2f{0, 0};
-        auto t = min_t_root;
-        if (t == 0) {
-            // closest_pt = p0
-            d_p0 += d_closest_pt;
-        } else if (t == 1) {
-            // closest_pt = p1
-            d_p3 += d_closest_pt;
-        } else {
-            // The curve is (1 - t)^3 p0 + 3 * (1 - t)^2 t p1 + 3 * (1 - t) t^2 p2 + t^3 p3
-            // = (-p0+3p1-3p2+p3) t^3 + (3p0-6p1+3p2) t^2 + (-3p0+3p1) t + p0
-            // Want to solve (q - pt) dot q' = 0
-            // q' = 3*(-p0+3p1-3p2+p3)t^2 + 2*(3p0-6p1+3p2)t + (-3p0+3p1)
-            // Expanding
-            // 3*(-p0+3p1-3p2+p3)^2 t^5
-            // 5*(-p0+3p1-3p2+p3)(3p0-6p1+3p2) t^4
-            // 4*(-p0+3p1-3p2+p3)(-3p0+3p1) + 2*(3p0-6p1+3p2)^2 t^3
-            // 3*(3p0-6p1+3p2)(-3p0+3p1) + 3*(-p0+3p1-3p2+p3)(p0-pt) t^2
-            // (-3p0+3p1)^2+2(p0-pt)(3p0-6p1+3p2) t
-            // (p0-pt)(-3p0+3p1)
-            double A = 3*sum((-p0+3*p1-3*p2+p3)*(-p0+3*p1-3*p2+p3));
-            double B = 5*sum((-p0+3*p1-3*p2+p3)*(3*p0-6*p1+3*p2));
-            double C = 4*sum((-p0+3*p1-3*p2+p3)*(-3*p0+3*p1)) + 2*sum((3*p0-6*p1+3*p2)*(3*p0-6*p1+3*p2));
-            double D = 3*(sum((3*p0-6*p1+3*p2)*(-3*p0+3*p1)) + sum((-p0+3*p1-3*p2+p3)*(p0-pt)));
-            double E = sum((-3*p0+3*p1)*(-3*p0+3*p1)) + 2*sum((p0-pt)*(3*p0-6*p1+3*p2));
-            double F = sum((p0-pt)*(-3*p0+3*p1));
-            B /= A;
-            C /= A;
-            D /= A;
-            E /= A;
-            F /= A;
-            // auto eval_polynomial = [&] (double t) {
-            //     return t*t*t*t*t+
-            //            B*t*t*t*t+
-            //            C*t*t*t+
-            //            D*t*t+
-            //            E*t+
-            //            F;
-            // };
-            auto eval_polynomial_deriv = [&] (double t) {
-                return 5*t*t*t*t+
-                       4*B*t*t*t+
-                       3*C*t*t+
-                       2*D*t+
-                       E;
-            };
-
-            // auto p = eval(t);
-            auto d_p = d_closest_pt;
-            // (tt*tt*tt)*p0 + (3*tt*tt*t)*p1 + (3*tt*t*t)*p2 + (t*t*t)*p3
-            auto tt = 1 - t;
-            auto d_tt = 3 * tt * tt * dot(d_p, p0) +
-                        6 * tt * t * dot(d_p, p1) +
-                        3 * t * t * dot(d_p, p2);
-            auto d_t = -d_tt +
-                       3 * tt * tt * dot(d_p, p1) +
-                       6 * tt * t * dot(d_p, p2) +
-                       3 * t * t * dot(d_p, p3);
-            d_p0 += d_p * (tt * tt * tt);
-            d_p1 += d_p * (3 * tt * tt * t);
-            d_p2 += d_p * (3 * tt * t * t);
-            d_p3 += d_p * (t * t * t);
-            // implicit function theorem: dt/dA = -1/(p'(t)) * dp/dA
-            auto poly_deriv_t = eval_polynomial_deriv(t);
-            if (fabs(poly_deriv_t) > 1e-10f) {
-                auto d_B = -(d_t / poly_deriv_t) * t * t * t * t;
-                auto d_C = -(d_t / poly_deriv_t) * t * t * t;
-                auto d_D = -(d_t / poly_deriv_t) * t * t;
-                auto d_E = -(d_t / poly_deriv_t) * t;
-                auto d_F = -(d_t / poly_deriv_t);
-                // B = B' / A
-                // C = C' / A
-                // D = D' / A
-                // E = E' / A
-                // F = F' / A
-                auto d_A = -d_B * B / A
-                           -d_C * C / A
-                           -d_D * D / A
-                           -d_E * E / A
-                           -d_F * F / A;
-                d_B /= A;
-                d_C /= A;
-                d_D /= A;
-                d_E /= A;
-                d_F /= A;
-                {
-                    double A = 3*sum((-p0+3*p1-3*p2+p3)*(-p0+3*p1-3*p2+p3)) + 1e-3;
-                    double B = 5*sum((-p0+3*p1-3*p2+p3)*(3*p0-6*p1+3*p2));
-                    double C = 4*sum((-p0+3*p1-3*p2+p3)*(-3*p0+3*p1)) + 2*sum((3*p0-6*p1+3*p2)*(3*p0-6*p1+3*p2));
-                    double D = 3*(sum((3*p0-6*p1+3*p2)*(-3*p0+3*p1)) + sum((-p0+3*p1-3*p2+p3)*(p0-pt)));
-                    double E = sum((-3*p0+3*p1)*(-3*p0+3*p1)) + 2*sum((p0-pt)*(3*p0-6*p1+3*p2));
-                    double F = sum((p0-pt)*(-3*p0+3*p1));
-                    B /= A;
-                    C /= A;
-                    D /= A;
-                    E /= A;
-                    F /= A;
-                    auto eval_polynomial = [&] (double t) {
-                        return t*t*t*t*t+
-                               B*t*t*t*t+
-                               C*t*t*t+
-                               D*t*t+
-                               E*t+
-                               F;
-                    };
-                    auto eval_polynomial_deriv = [&] (double t) {
-                        return 5*t*t*t*t+
-                               4*B*t*t*t+
-                               3*C*t*t+
-                               2*D*t+
-                               E;
-                    };
-                    auto lb = t - 1e-2f;
-                    auto ub = t + 1e-2f;
-                    auto lb_eval = eval_polynomial(lb);
-                    auto ub_eval = eval_polynomial(ub);
-                    if (lb_eval > ub_eval) {
-                        swap_(lb, ub);
-                    }
-                    auto t_ = 0.5f * (lb + ub);
-                    auto num_iter = 20;
-                    for (int it = 0; it < num_iter; it++) {
-                        if (!(t_ >= lb && t_ <= ub)) {
-                            t_ = 0.5f * (lb + ub);
-                        }
-                        auto value = eval_polynomial(t_);
-                        if (fabs(value) < 1e-5f || it == num_iter - 1) {
-                            break;
-                        }
-                        // The derivative may not be entirely accurate,
-                        // but the bisection is going to handle this
-                        if (value > 0.f) {
-                            ub = t_;
-                        } else {
716 |
-
lb = t_;
|
717 |
-
}
|
718 |
-
auto derivative = eval_polynomial_deriv(t);
|
719 |
-
t_ -= value / derivative;
|
720 |
-
}
|
721 |
-
}
|
722 |
-
// A = 3*sum((-p0+3*p1-3*p2+p3)*(-p0+3*p1-3*p2+p3))
|
723 |
-
d_p0 += d_A * 3 * (-1) * 2 * (-p0+3*p1-3*p2+p3);
|
724 |
-
d_p1 += d_A * 3 * 3 * 2 * (-p0+3*p1-3*p2+p3);
|
725 |
-
d_p2 += d_A * 3 * (-3) * 2 * (-p0+3*p1-3*p2+p3);
|
726 |
-
d_p3 += d_A * 3 * 1 * 2 * (-p0+3*p1-3*p2+p3);
|
727 |
-
// B = 5*sum((-p0+3*p1-3*p2+p3)*(3*p0-6*p1+3*p2))
|
728 |
-
d_p0 += d_B * 5 * ((-1) * (3*p0-6*p1+3*p2) + 3 * (-p0+3*p1-3*p2+p3));
|
729 |
-
d_p1 += d_B * 5 * (3 * (3*p0-6*p1+3*p2) + (-6) * (-p0+3*p1-3*p2+p3));
|
730 |
-
d_p2 += d_B * 5 * ((-3) * (3*p0-6*p1+3*p2) + 3 * (-p0+3*p1-3*p2+p3));
|
731 |
-
d_p3 += d_B * 5 * (3*p0-6*p1+3*p2);
|
732 |
-
// C = 4*sum((-p0+3*p1-3*p2+p3)*(-3*p0+3*p1)) + 2*sum((3*p0-6*p1+3*p2)*(3*p0-6*p1+3*p2))
|
733 |
-
d_p0 += d_C * 4 * ((-1) * (-3*p0+3*p1) + (-3) * (-p0+3*p1-3*p2+p3)) +
|
734 |
-
d_C * 2 * (3 * 2 * (3*p0-6*p1+3*p2));
|
735 |
-
d_p1 += d_C * 4 * (3 * (-3*p0+3*p1) + 3 * (-p0+3*p1-3*p2+p3)) +
|
736 |
-
d_C * 2 * ((-6) * 2 * (3*p0-6*p1+3*p2));
|
737 |
-
d_p2 += d_C * 4 * ((-3) * (-3*p0+3*p1)) +
|
738 |
-
d_C * 2 * (3 * 2 * (3*p0-6*p1+3*p2));
|
739 |
-
d_p3 += d_C * 4 * (-3*p0+3*p1);
|
740 |
-
// D = 3*(sum((3*p0-6*p1+3*p2)*(-3*p0+3*p1)) + sum((-p0+3*p1-3*p2+p3)*(p0-pt)))
|
741 |
-
d_p0 += d_D * 3 * (3 * (-3*p0+3*p1) + (-3) * (3*p0-6*p1+3*p2)) +
|
742 |
-
d_D * 3 * ((-1) * (p0-pt) + 1 * (-p0+3*p1-3*p2+p3));
|
743 |
-
d_p1 += d_D * 3 * ((-6) * (-3*p0+3*p1) + (3) * (3*p0-6*p1+3*p2)) +
|
744 |
-
d_D * 3 * (3 * (p0-pt));
|
745 |
-
d_p2 += d_D * 3 * (3 * (-3*p0+3*p1)) +
|
746 |
-
d_D * 3 * ((-3) * (p0-pt));
|
747 |
-
d_pt += d_D * 3 * ((-1) * (-p0+3*p1-3*p2+p3));
|
748 |
-
// E = sum((-3*p0+3*p1)*(-3*p0+3*p1)) + 2*sum((p0-pt)*(3*p0-6*p1+3*p2))
|
749 |
-
d_p0 += d_E * ((-3) * 2 * (-3*p0+3*p1)) +
|
750 |
-
d_E * 2 * (1 * (3*p0-6*p1+3*p2) + 3 * (p0-pt));
|
751 |
-
d_p1 += d_E * ( 3 * 2 * (-3*p0+3*p1)) +
|
752 |
-
d_E * 2 * ((-6) * (p0-pt));
|
753 |
-
d_p2 += d_E * 2 * ( 3 * (p0-pt));
|
754 |
-
d_pt += d_E * 2 * ((-1) * (3*p0-6*p1+3*p2));
|
755 |
-
// F = sum((p0-pt)*(-3*p0+3*p1))
|
756 |
-
d_p0 += d_F * (1 * (-3*p0+3*p1)) +
|
757 |
-
d_F * ((-3) * (p0-pt));
|
758 |
-
d_p1 += d_F * (3 * (p0-pt));
|
759 |
-
d_pt += d_F * ((-1) * (-3*p0+3*p1));
|
760 |
-
}
|
761 |
-
}
|
762 |
-
atomic_add(d_path.points + 2 * i0, d_p0);
|
763 |
-
atomic_add(d_path.points + 2 * i1, d_p1);
|
764 |
-
atomic_add(d_path.points + 2 * i2, d_p2);
|
765 |
-
atomic_add(d_path.points + 2 * i3, d_p3);
|
766 |
-
} else {
|
767 |
-
assert(false);
|
768 |
-
}
|
769 |
-
}
|
770 |
-
|
771 |
-
DEVICE
|
772 |
-
inline
|
773 |
-
void d_closest_point(const Rect &rect,
|
774 |
-
const Vector2f &pt,
|
775 |
-
const Vector2f &d_closest_pt,
|
776 |
-
Rect &d_rect,
|
777 |
-
Vector2f &d_pt) {
|
778 |
-
auto dist = [&](const Vector2f &p0, const Vector2f &p1) -> float {
|
779 |
-
// project pt to line
|
780 |
-
auto t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0);
|
781 |
-
if (t < 0) {
|
782 |
-
return distance(p0, pt);
|
783 |
-
} else if (t > 1) {
|
784 |
-
return distance(p1, pt);
|
785 |
-
} else {
|
786 |
-
return distance(p0 + t * (p1 - p0), pt);
|
787 |
-
}
|
788 |
-
// return 0;
|
789 |
-
};
|
790 |
-
auto left_top = rect.p_min;
|
791 |
-
auto right_top = Vector2f{rect.p_max.x, rect.p_min.y};
|
792 |
-
auto left_bottom = Vector2f{rect.p_min.x, rect.p_max.y};
|
793 |
-
auto right_bottom = rect.p_max;
|
794 |
-
auto left_dist = dist(left_top, left_bottom);
|
795 |
-
auto top_dist = dist(left_top, right_top);
|
796 |
-
auto right_dist = dist(right_top, right_bottom);
|
797 |
-
auto bottom_dist = dist(left_bottom, right_bottom);
|
798 |
-
int min_id = 0;
|
799 |
-
auto min_dist = left_dist;
|
800 |
-
if (top_dist < min_dist) { min_dist = top_dist; min_id = 1; }
|
801 |
-
if (right_dist < min_dist) { min_dist = right_dist; min_id = 2; }
|
802 |
-
if (bottom_dist < min_dist) { min_dist = bottom_dist; min_id = 3; }
|
803 |
-
|
804 |
-
auto d_update = [&](const Vector2f &p0, const Vector2f &p1,
|
805 |
-
const Vector2f &d_closest_pt,
|
806 |
-
Vector2f &d_p0, Vector2f &d_p1) {
|
807 |
-
// project pt to line
|
808 |
-
auto t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0);
|
809 |
-
if (t < 0) {
|
810 |
-
d_p0 += d_closest_pt;
|
811 |
-
} else if (t > 1) {
|
812 |
-
d_p1 += d_closest_pt;
|
813 |
-
} else {
|
814 |
-
// p = p0 + t * (p1 - p0)
|
815 |
-
auto d_p = d_closest_pt;
|
816 |
-
d_p0 += d_p * (1 - t);
|
817 |
-
d_p1 += d_p * t;
|
818 |
-
auto d_t = sum(d_p * (p1 - p0));
|
819 |
-
// t = dot(pt - p0, p1 - p0) / dot(p1 - p0, p1 - p0)
|
820 |
-
auto d_numerator = d_t / dot(p1 - p0, p1 - p0);
|
821 |
-
auto d_denominator = d_t * (-t) / dot(p1 - p0, p1 - p0);
|
822 |
-
// numerator = dot(pt - p0, p1 - p0)
|
823 |
-
d_pt += (p1 - p0) * d_numerator;
|
824 |
-
d_p1 += (pt - p0) * d_numerator;
|
825 |
-
d_p0 += ((p0 - p1) + (p0 - pt)) * d_numerator;
|
826 |
-
// denominator = dot(p1 - p0, p1 - p0)
|
827 |
-
d_p1 += 2 * (p1 - p0) * d_denominator;
|
828 |
-
d_p0 += 2 * (p0 - p1) * d_denominator;
|
829 |
-
}
|
830 |
-
};
|
831 |
-
auto d_left_top = Vector2f{0, 0};
|
832 |
-
auto d_right_top = Vector2f{0, 0};
|
833 |
-
auto d_left_bottom = Vector2f{0, 0};
|
834 |
-
auto d_right_bottom = Vector2f{0, 0};
|
835 |
-
if (min_id == 0) {
|
836 |
-
d_update(left_top, left_bottom, d_closest_pt, d_left_top, d_left_bottom);
|
837 |
-
} else if (min_id == 1) {
|
838 |
-
d_update(left_top, right_top, d_closest_pt, d_left_top, d_right_top);
|
839 |
-
} else if (min_id == 2) {
|
840 |
-
d_update(right_top, right_bottom, d_closest_pt, d_right_top, d_right_bottom);
|
841 |
-
} else {
|
842 |
-
assert(min_id == 3);
|
843 |
-
d_update(left_bottom, right_bottom, d_closest_pt, d_left_bottom, d_right_bottom);
|
844 |
-
}
|
845 |
-
auto d_p_min = Vector2f{0, 0};
|
846 |
-
auto d_p_max = Vector2f{0, 0};
|
847 |
-
// left_top = rect.p_min
|
848 |
-
// right_top = Vector2f{rect.p_max.x, rect.p_min.y}
|
849 |
-
// left_bottom = Vector2f{rect.p_min.x, rect.p_max.y}
|
850 |
-
// right_bottom = rect.p_max
|
851 |
-
d_p_min += d_left_top;
|
852 |
-
d_p_max.x += d_right_top.x;
|
853 |
-
d_p_min.y += d_right_top.y;
|
854 |
-
d_p_min.x += d_left_bottom.x;
|
855 |
-
d_p_max.y += d_left_bottom.y;
|
856 |
-
d_p_max += d_right_bottom;
|
857 |
-
atomic_add(d_rect.p_min, d_p_min);
|
858 |
-
atomic_add(d_rect.p_max, d_p_max);
|
859 |
-
}
|
860 |
-
|
861 |
-
DEVICE
|
862 |
-
inline
|
863 |
-
void d_closest_point(const Shape &shape,
|
864 |
-
const Vector2f &pt,
|
865 |
-
const Vector2f &d_closest_pt,
|
866 |
-
const ClosestPointPathInfo &path_info,
|
867 |
-
Shape &d_shape,
|
868 |
-
Vector2f &d_pt) {
|
869 |
-
switch (shape.type) {
|
870 |
-
case ShapeType::Circle:
|
871 |
-
d_closest_point(*(const Circle *)shape.ptr,
|
872 |
-
pt,
|
873 |
-
d_closest_pt,
|
874 |
-
*(Circle *)d_shape.ptr,
|
875 |
-
d_pt);
|
876 |
-
break;
|
877 |
-
case ShapeType::Ellipse:
|
878 |
-
// https://www.geometrictools.com/Documentation/DistancePointEllipseEllipsoid.pdf
|
879 |
-
assert(false);
|
880 |
-
break;
|
881 |
-
case ShapeType::Path:
|
882 |
-
d_closest_point(*(const Path *)shape.ptr,
|
883 |
-
pt,
|
884 |
-
d_closest_pt,
|
885 |
-
path_info,
|
886 |
-
*(Path *)d_shape.ptr,
|
887 |
-
d_pt);
|
888 |
-
break;
|
889 |
-
case ShapeType::Rect:
|
890 |
-
d_closest_point(*(const Rect *)shape.ptr,
|
891 |
-
pt,
|
892 |
-
d_closest_pt,
|
893 |
-
*(Rect *)d_shape.ptr,
|
894 |
-
d_pt);
|
895 |
-
break;
|
896 |
-
}
|
897 |
-
}
|
898 |
-
|
899 |
-
DEVICE
|
900 |
-
inline
|
901 |
-
void d_compute_distance(const Matrix3x3f &canvas_to_shape,
|
902 |
-
const Matrix3x3f &shape_to_canvas,
|
903 |
-
const Shape &shape,
|
904 |
-
const Vector2f &pt,
|
905 |
-
const Vector2f &closest_pt,
|
906 |
-
const ClosestPointPathInfo &path_info,
|
907 |
-
float d_dist,
|
908 |
-
Matrix3x3f &d_shape_to_canvas,
|
909 |
-
Shape &d_shape,
|
910 |
-
float *d_translation) {
|
911 |
-
if (distance_squared(pt, closest_pt) < 1e-10f) {
|
912 |
-
// The derivative at distance=0 is undefined
|
913 |
-
return;
|
914 |
-
}
|
915 |
-
assert(isfinite(d_dist));
|
916 |
-
// pt is in canvas space, transform it to shape's local space
|
917 |
-
auto local_pt = xform_pt(canvas_to_shape, pt);
|
918 |
-
auto local_closest_pt = xform_pt(canvas_to_shape, closest_pt);
|
919 |
-
// auto local_closest_pt = closest_point(shape, local_pt);
|
920 |
-
// auto closest_pt = xform_pt(shape_group.shape_to_canvas, local_closest_pt);
|
921 |
-
// auto dist = distance(closest_pt, pt);
|
922 |
-
auto d_pt = Vector2f{0, 0};
|
923 |
-
auto d_closest_pt = Vector2f{0, 0};
|
924 |
-
d_distance(closest_pt, pt, d_dist, d_closest_pt, d_pt);
|
925 |
-
assert(isfinite(d_pt));
|
926 |
-
assert(isfinite(d_closest_pt));
|
927 |
-
// auto closest_pt = xform_pt(shape_group.shape_to_canvas, local_closest_pt);
|
928 |
-
auto d_local_closest_pt = Vector2f{0, 0};
|
929 |
-
auto d_shape_to_canvas_ = Matrix3x3f();
|
930 |
-
d_xform_pt(shape_to_canvas, local_closest_pt, d_closest_pt,
|
931 |
-
d_shape_to_canvas_, d_local_closest_pt);
|
932 |
-
assert(isfinite(d_local_closest_pt));
|
933 |
-
auto d_local_pt = Vector2f{0, 0};
|
934 |
-
d_closest_point(shape, local_pt, d_local_closest_pt, path_info, d_shape, d_local_pt);
|
935 |
-
assert(isfinite(d_local_pt));
|
936 |
-
auto d_canvas_to_shape = Matrix3x3f();
|
937 |
-
d_xform_pt(canvas_to_shape,
|
938 |
-
pt,
|
939 |
-
d_local_pt,
|
940 |
-
d_canvas_to_shape,
|
941 |
-
d_pt);
|
942 |
-
// http://jack.valmadre.net/notes/2016/09/04/back-prop-differentials/#back-propagation-using-differentials
|
943 |
-
auto tc2s = transpose(canvas_to_shape);
|
944 |
-
d_shape_to_canvas_ += -tc2s * d_canvas_to_shape * tc2s;
|
945 |
-
atomic_add(&d_shape_to_canvas(0, 0), d_shape_to_canvas_);
|
946 |
-
if (d_translation != nullptr) {
|
947 |
-
atomic_add(d_translation, -d_pt);
|
948 |
-
}
|
949 |
-
}
|
|
|
|
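The diagnostic block in the code above polishes the closest-point parameter with a safeguarded Newton iteration: it brackets the root, takes a Newton step each iteration, and falls back to bisecting the bracket whenever the iterate escapes it. A minimal standalone sketch of that root-finding scheme follows; `newton_bisection` and the cubic test function are illustrative names, not part of the original code.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <utility>

// Safeguarded Newton-bisection: keep a sign-change bracket [lb, ub] so that
// progress is guaranteed even when a Newton step overshoots.
double newton_bisection(const std::function<double(double)> &f,
                        const std::function<double(double)> &df,
                        double lb, double ub, int num_iter = 20) {
    // Orient the bracket so f(lb) <= 0 <= f(ub); the sign test below relies on it.
    if (f(lb) > f(ub)) {
        std::swap(lb, ub);
    }
    double t = 0.5 * (lb + ub);
    for (int it = 0; it < num_iter; it++) {
        // If the previous Newton step left the bracket, recover by bisecting.
        if (!(t >= std::fmin(lb, ub) && t <= std::fmax(lb, ub))) {
            t = 0.5 * (lb + ub);
        }
        double value = f(t);
        if (std::fabs(value) < 1e-12) {
            break;
        }
        // Shrink the bracket around the sign change, then attempt a Newton step.
        if (value > 0) {
            ub = t;
        } else {
            lb = t;
        }
        t -= value / df(t);
    }
    return t;
}
```

The same structure appears in the original loop, except there the polynomial is the quintic whose root is the closest-point parameter and the initial bracket is a small window around the forward-pass solution.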
spaces/CVPR/LIVE/pybind11/tests/pybind11_tests.cpp
DELETED
@@ -1,91 +0,0 @@
/*
    tests/pybind11_tests.cpp -- pybind example plugin

    Copyright (c) 2016 Wenzel Jakob <[email protected]>

    All rights reserved. Use of this source code is governed by a
    BSD-style license that can be found in the LICENSE file.
*/

#include "pybind11_tests.h"
#include "constructor_stats.h"

#include <functional>
#include <list>

/*
For testing purposes, we define a static global variable here in a function that each individual
test .cpp calls with its initialization lambda. It's convenient here because we can just not
compile some test files to disable/ignore some of the test code.

It is NOT recommended as a way to use pybind11 in practice, however: the initialization order will
be essentially random, which is okay for our test scripts (there are no dependencies between the
individual pybind11 test .cpp files), but most likely not what you want when using pybind11
productively.

Instead, see the "How can I reduce the build time?" question in the "Frequently asked questions"
section of the documentation for good practice on splitting binding code over multiple files.
*/
std::list<std::function<void(py::module &)>> &initializers() {
    static std::list<std::function<void(py::module &)>> inits;
    return inits;
}

test_initializer::test_initializer(Initializer init) {
    initializers().push_back(init);
}

test_initializer::test_initializer(const char *submodule_name, Initializer init) {
    initializers().push_back([=](py::module &parent) {
        auto m = parent.def_submodule(submodule_name);
        init(m);
    });
}

void bind_ConstructorStats(py::module &m) {
    py::class_<ConstructorStats>(m, "ConstructorStats")
        .def("alive", &ConstructorStats::alive)
        .def("values", &ConstructorStats::values)
        .def_readwrite("default_constructions", &ConstructorStats::default_constructions)
        .def_readwrite("copy_assignments", &ConstructorStats::copy_assignments)
        .def_readwrite("move_assignments", &ConstructorStats::move_assignments)
        .def_readwrite("copy_constructions", &ConstructorStats::copy_constructions)
        .def_readwrite("move_constructions", &ConstructorStats::move_constructions)
        .def_static("get", (ConstructorStats &(*)(py::object)) &ConstructorStats::get, py::return_value_policy::reference_internal)

        // Not exactly ConstructorStats, but related: expose the internal pybind number of registered instances
        // to allow instance cleanup checks (invokes a GC first)
        .def_static("detail_reg_inst", []() {
            ConstructorStats::gc();
            return py::detail::get_internals().registered_instances.size();
        })
        ;
}

PYBIND11_MODULE(pybind11_tests, m) {
    m.doc() = "pybind11 test module";

    bind_ConstructorStats(m);

#if !defined(NDEBUG)
    m.attr("debug_enabled") = true;
#else
    m.attr("debug_enabled") = false;
#endif

    py::class_<UserType>(m, "UserType", "A `py::class_` type for testing")
        .def(py::init<>())
        .def(py::init<int>())
        .def("get_value", &UserType::value, "Get value using a method")
        .def("set_value", &UserType::set, "Set value using a method")
        .def_property("value", &UserType::value, &UserType::set, "Get/set value using a property")
        .def("__repr__", [](const UserType& u) { return "UserType({})"_s.format(u.value()); });

    py::class_<IncType, UserType>(m, "IncType")
        .def(py::init<>())
        .def(py::init<int>())
        .def("__repr__", [](const IncType& u) { return "IncType({})"_s.format(u.value()); });

    for (const auto &initializer : initializers())
        initializer(m);
}
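The registration pattern in pybind11_tests.cpp (a function-local static list that each translation unit appends to during static initialization, drained later by the module entry point) can be sketched without pybind11 at all. The names below (`build_module`, the string "module") are hypothetical stand-ins for the real `py::module` plumbing:

```cpp
#include <cassert>
#include <functional>
#include <list>
#include <string>
#include <utility>

// A function-local static sidesteps the static-initialization-order fiasco:
// the list is constructed on first use, regardless of which TU registers first.
std::list<std::function<void(std::string &)>> &initializers() {
    static std::list<std::function<void(std::string &)>> inits;
    return inits;
}

// Each "test file" constructs one of these at namespace scope, enqueuing its
// setup lambda before the module entry point ever runs.
struct test_initializer {
    explicit test_initializer(std::function<void(std::string &)> init) {
        initializers().push_back(std::move(init));
    }
};

// Stand-in for PYBIND11_MODULE: drains the queue in registration order.
std::string build_module() {
    std::string m;
    for (const auto &init : initializers()) {
        init(m);
    }
    return m;
}

// Two simulated test translation units registering themselves.
static test_initializer reg_a([](std::string &m) { m += "[a]"; });
static test_initializer reg_b([](std::string &m) { m += "[b]"; });
```

Within a single translation unit the two registrations run in declaration order; across multiple files the order is unspecified, which is exactly the caveat the original comment warns about.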
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/shuffle.h
DELETED
@@ -1,54 +0,0 @@
/*
 *  Copyright 2008-2020 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file shuffle.h
 *  \brief Generic implementations of shuffle functions.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/detail/cpp11_required.h>

#if THRUST_CPP_DIALECT >= 2011

#include <thrust/system/detail/generic/tag.h>

namespace thrust {
namespace system {
namespace detail {
namespace generic {

template <typename ExecutionPolicy, typename RandomIterator, typename URBG>
__host__ __device__ void shuffle(
    thrust::execution_policy<ExecutionPolicy>& exec, RandomIterator first,
    RandomIterator last, URBG&& g);

template <typename ExecutionPolicy, typename RandomIterator,
          typename OutputIterator, typename URBG>
__host__ __device__ void shuffle_copy(
    thrust::execution_policy<ExecutionPolicy>& exec, RandomIterator first,
    RandomIterator last, OutputIterator result, URBG&& g);

}  // end namespace generic
}  // end namespace detail
}  // end namespace system
}  // end namespace thrust

#include <thrust/system/detail/generic/shuffle.inl>

#endif
spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/malloc_and_free.h
DELETED
@@ -1,23 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>

// this system inherits malloc and free
#include <thrust/system/cpp/detail/malloc_and_free.h>
spaces/CVPR/WALT/mmdet/models/necks/nasfcos_fpn.py
DELETED
@@ -1,161 +0,0 @@
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule, caffe2_xavier_init
from mmcv.ops.merge_cells import ConcatCell

from ..builder import NECKS


@NECKS.register_module()
class NASFCOS_FPN(nn.Module):
    """FPN structure in NASFPN.

    Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for
    Object Detection <https://arxiv.org/abs/1906.04423>`_

    Args:
        in_channels (List[int]): Number of input channels per scale.
        out_channels (int): Number of output channels (used at each scale).
        num_outs (int): Number of output scales.
        start_level (int): Index of the start input backbone level used to
            build the feature pyramid. Default: 0.
        end_level (int): Index of the end input backbone level (exclusive) to
            build the feature pyramid. Default: -1, which means the last level.
        add_extra_convs (bool): It decides whether to add conv
            layers on top of the original feature maps. Defaults to False.
            If True, its actual mode is specified by `extra_convs_on_inputs`.
        conv_cfg (dict): dictionary to construct and config conv layer.
        norm_cfg (dict): dictionary to construct and config norm layer.
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 num_outs,
                 start_level=1,
                 end_level=-1,
                 add_extra_convs=False,
                 conv_cfg=None,
                 norm_cfg=None):
        super(NASFCOS_FPN, self).__init__()
        assert isinstance(in_channels, list)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.num_ins = len(in_channels)
        self.num_outs = num_outs
        self.norm_cfg = norm_cfg
        self.conv_cfg = conv_cfg

        if end_level == -1:
            self.backbone_end_level = self.num_ins
            assert num_outs >= self.num_ins - start_level
        else:
            self.backbone_end_level = end_level
            assert end_level <= len(in_channels)
            assert num_outs == end_level - start_level
        self.start_level = start_level
        self.end_level = end_level
        self.add_extra_convs = add_extra_convs

        self.adapt_convs = nn.ModuleList()
        for i in range(self.start_level, self.backbone_end_level):
            adapt_conv = ConvModule(
                in_channels[i],
                out_channels,
                1,
                stride=1,
                padding=0,
                bias=False,
                norm_cfg=dict(type='BN'),
                act_cfg=dict(type='ReLU', inplace=False))
            self.adapt_convs.append(adapt_conv)

        # C2 is omitted according to the paper
        extra_levels = num_outs - self.backbone_end_level + self.start_level

        def build_concat_cell(with_input1_conv, with_input2_conv):
            cell_conv_cfg = dict(
                kernel_size=1, padding=0, bias=False, groups=out_channels)
            return ConcatCell(
                in_channels=out_channels,
                out_channels=out_channels,
                with_out_conv=True,
                out_conv_cfg=cell_conv_cfg,
                out_norm_cfg=dict(type='BN'),
                out_conv_order=('norm', 'act', 'conv'),
                with_input1_conv=with_input1_conv,
                with_input2_conv=with_input2_conv,
                input_conv_cfg=conv_cfg,
                input_norm_cfg=norm_cfg,
                upsample_mode='nearest')

        # Denote c3=f0, c4=f1, c5=f2 for convenience
        self.fpn = nn.ModuleDict()
        self.fpn['c22_1'] = build_concat_cell(True, True)
        self.fpn['c22_2'] = build_concat_cell(True, True)
        self.fpn['c32'] = build_concat_cell(True, False)
        self.fpn['c02'] = build_concat_cell(True, False)
        self.fpn['c42'] = build_concat_cell(True, True)
        self.fpn['c36'] = build_concat_cell(True, True)
        self.fpn['c61'] = build_concat_cell(True, True)  # f9
        self.extra_downsamples = nn.ModuleList()
        for i in range(extra_levels):
            extra_act_cfg = None if i == 0 \
                else dict(type='ReLU', inplace=False)
            self.extra_downsamples.append(
                ConvModule(
                    out_channels,
                    out_channels,
                    3,
                    stride=2,
                    padding=1,
                    act_cfg=extra_act_cfg,
                    order=('act', 'norm', 'conv')))

    def forward(self, inputs):
        """Forward function."""
        feats = [
            adapt_conv(inputs[i + self.start_level])
            for i, adapt_conv in enumerate(self.adapt_convs)
        ]

        for (i, module_name) in enumerate(self.fpn):
            idx_1, idx_2 = int(module_name[1]), int(module_name[2])
            res = self.fpn[module_name](feats[idx_1], feats[idx_2])
            feats.append(res)

        ret = []
        for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]):  # add P3, P4, P5
            feats1, feats2 = feats[idx], feats[5]
            feats2_resize = F.interpolate(
                feats2,
                size=feats1.size()[2:],
                mode='bilinear',
                align_corners=False)

            feats_sum = feats1 + feats2_resize
            ret.append(
                F.interpolate(
                    feats_sum,
                    size=inputs[input_idx].size()[2:],
                    mode='bilinear',
                    align_corners=False))

        for submodule in self.extra_downsamples:
            ret.append(submodule(ret[-1]))

        return tuple(ret)

    def init_weights(self):
        """Initialize the weights of module."""
        for module in self.fpn.values():
            if hasattr(module, 'conv_out'):
                caffe2_xavier_init(module.out_conv.conv)

        for modules in [
                self.adapt_convs.modules(),
                self.extra_downsamples.modules()
        ]:
            for module in modules:
                if isinstance(module, nn.Conv2d):
                    caffe2_xavier_init(module)
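A subtle point in `NASFCOS_FPN.forward` above is the wiring convention: each cell's two input indices are decoded from its dict key (`module_name[1]` and `module_name[2]`), and every cell's output is appended to `feats`, so later cells can consume earlier cells' results by index. A stripped-down sketch of that mechanism, with hypothetical toy "cells" standing in for `ConcatCell`:

```python
# Sketch of the name-indexed wiring in NASFCOS_FPN.forward: a key 'cXY...'
# means "combine feats[X] and feats[Y]", and each result is appended so
# that later cells can reference it by index.
def run_fpn(feats, fpn):
    feats = list(feats)
    for module_name, cell in fpn.items():   # dicts preserve insertion order
        idx_1, idx_2 = int(module_name[1]), int(module_name[2])
        feats.append(cell(feats[idx_1], feats[idx_2]))
    return feats

# Toy cells that just record which inputs were combined.
fpn = {
    'c01': lambda a, b: f'({a}+{b})',
    'c23': lambda a, b: f'({a}+{b})',   # index 3 is the output of 'c01'
}
out = run_fpn(['f0', 'f1', 'f2'], fpn)
```

Note that only the two characters after `c` are parsed, so a key like `c22_1` in the real module combines `feats[2]` with itself; the suffix merely keeps the dict keys unique.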
spaces/ChandraMohanNayal/AutoGPT/CODE_OF_CONDUCT.md
DELETED
@@ -1,40 +0,0 @@
-# Code of Conduct for auto-gpt
-
-## 1. Purpose
-
-The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct.
-
-## 2. Scope
-
-This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project.
-
-## 3. Our Standards
-
-We encourage the following behavior:
-
-* Being respectful and considerate to others
-* Actively seeking diverse perspectives
-* Providing constructive feedback and assistance
-* Demonstrating empathy and understanding
-
-We discourage the following behavior:
-
-* Harassment or discrimination of any kind
-* Disrespectful, offensive, or inappropriate language or content
-* Personal attacks or insults
-* Unwarranted criticism or negativity
-
-## 4. Reporting and Enforcement
-
-If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary.
-
-Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations.
-
-## 5. Acknowledgements
-
-This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
-
-## 6. Contact
-
-If you have any questions or concerns, please contact the project maintainers.
-
spaces/CofAI/chat.b4/g4f/Provider/Providers/ChatgptAi.py
DELETED
@@ -1,51 +0,0 @@
-import os
-import requests, re
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chatgpt.ai/gpt-4/'
-model = ['gpt-4']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-    chat = ''
-    for message in messages:
-        chat += '%s: %s\n' % (message['role'], message['content'])
-    chat += 'assistant: '
-
-    response = requests.get('https://chatgpt.ai/')
-    nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0]
-
-    headers = {
-        'authority': 'chatgpt.ai',
-        'accept': '*/*',
-        'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-        'cache-control': 'no-cache',
-        'origin': 'https://chatgpt.ai',
-        'pragma': 'no-cache',
-        'referer': 'https://chatgpt.ai/gpt-4/',
-        'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
-        'sec-ch-ua-mobile': '?0',
-        'sec-ch-ua-platform': '"Windows"',
-        'sec-fetch-dest': 'empty',
-        'sec-fetch-mode': 'cors',
-        'sec-fetch-site': 'same-origin',
-        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
-    }
-    data = {
-        '_wpnonce': nonce,
-        'post_id': post_id,
-        'url': 'https://chatgpt.ai/gpt-4',
-        'action': 'wpaicg_chat_shortcode_message',
-        'message': chat,
-        'bot_id': bot_id
-    }
-
-    response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php',
-                             headers=headers, data=data)
-
-    yield (response.json()['data'])
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/Cong723/gpt-academic-public/docs/waifu_plugin/jquery-ui.min.js
DELETED
The diff for this file is too large to render.
See raw diff
spaces/Cropinky/hana_hanak_houses/realesrgan/version.py
DELETED
@@ -1,5 +0,0 @@
-# GENERATED VERSION FILE
-# TIME: Fri Jun 2 00:17:29 2023
-__version__ = '0.3.0'
-__gitsha__ = '5ca1078'
-version_info = (0, 3, 0)
spaces/DEBO-PROJECT/DEBO-V1/app.py
DELETED
@@ -1,868 +0,0 @@
-import streamlit as st
-import numpy as np
-import pprint
-import time
-import openai
-
-from decimal import Decimal
-from gtts import gTTS
-from collections import Counter
-from streamlit_chat import message
-from audiorecorder import audiorecorder
-
-# internal modules
-from bots.judgement_bot import debate_judgement
-# from modules.db_modules import get_db, put_item, get_lastest_item
-from modules.gpt_modules import gpt_call, gpt_call_context
-from modules.whisper_modules import whisper_transcribe
-from modules.setting_modules import blockPrint
-
-#########################################################
-# Disabled Console print
-#########################################################
-blockPrint()
-
-#########################################################
-# Page Configurations
-#########################################################
-st.set_page_config(page_title="Debate With GPT : DEBO")
-
-#########################################################
-# GET DB
-#########################################################
-# dynamodb = get_db()
-
-#########################################################
-# Time Stamp
-#########################################################
-tm = time.localtime()
-time_stamp = time.strftime('%Y-%m-%d %I:%M:%S %p', tm)
-
-#########################################################
-# Initialize session state variables
-#########################################################
-if "page" not in st.session_state:
-    st.session_state.page = "Page 1"
-
-if "topic" not in st.session_state:
-    st.session_state.topic = "None"
-
-if "user_id" not in st.session_state:
-    st.session_state.user_id = ""
-
-if "case1" not in st.session_state:
-    st.session_state.case1 = ""
-
-if "case2" not in st.session_state:
-    st.session_state.case2 = ""
-
-if "case3" not in st.session_state:
-    st.session_state.case3 = ""
-
-if "page2_tab" not in st.session_state:
-    st.session_state.page2_tab = "tab1"
-
-if "ask_gpt_prev_response" not in st.session_state:
-    st.session_state.ask_gpt_prev_response = ""
-
-if "total_debate_history" not in st.session_state:
-    st.session_state.total_debate_history = []
-
-if "user_debate_history" not in st.session_state:
-    st.session_state.user_debate_history = []
-
-if "bot_debate_history" not in st.session_state:
-    st.session_state.bot_debate_history = []
-
-if "pros_and_cons" not in st.session_state:
-    st.session_state.pros_and_cons = ""
-
-if "start_time" not in st.session_state:
-    st.session_state.start_time = time.time()
-
-if "end_time" not in st.session_state:
-    st.session_state.end_time = time.time()
-
-if "debate_time" not in st.session_state:
-    st.session_state.debate_time = 0
-
-if "judgement_result" not in st.session_state:
-    st.session_state.judgement_result = ""
-
-if "pre_audio" not in st.session_state:
-    st.session_state.pre_audio = np.array([])
-
-if "disabled" not in st.session_state:
-    st.session_state.disabled = True
-
-# for db session number
-if "session_num" not in st.session_state:
-    st.session_state.session_num = 0
-
-# OpenAI API Key
-if "OPENAI_API_KEY" not in st.session_state:
-    st.session_state.OPENAI_API_KEY = ""
-
-#########################################################
-# Page Controller
-#########################################################
-def page_1_2_controller():
-    st.session_state.page = "Page 2"
-
-def page_2_4_controller():
-    st.session_state.page = "Page 4"
-
-def page_4_5_controller():
-    st.session_state.page = "Page 5"
-
-def page_5_6_controller():
-    st.session_state.page = "Page 6"
-
-def page_n_1_controller():
-    st.session_state.page = "Page 1"
-
-def page2_tab_controller():
-    st.session_state.page2_tab = "tab2"
-
-#########################################################
-# Page 1
-#########################################################
-# def validate_user_id(id_input):
-#     table = dynamodb.Table('DEBO_user')
-#     users_set = get_all_items(table, 'user_id')
-#     if id_input in users_set:
-#         return False
-#     else:
-#         return True
-
-def validate_openai_api_key(api_key):
-    openai.api_key = api_key
-    try:
-        response = openai.Completion.create(
-            engine="davinci",
-            prompt="This is a test.",
-            max_tokens=5
-        )
-    except:
-        return False
-    else:
-        return True
-
-def save_info(user_id):
-    # You can add the code to save the submitted info (e.g., to a database)
-    st.session_state.user_id = user_id
-
-#########################################################
-# Session Update
-#########################################################
-# debate_setting = get_lastest_item(
-#     table=dynamodb.Table('DEBO_debate_setting'),
-#     name_of_partition_key="user_id",
-#     value_of_partition_key=st.session_state.user_id,
-#     limit_num=1
-# )
-# if not debate_setting:
-#     st.session_state.session_num = 0
-# else:
-#     st.session_state.session_num = debate_setting[0]['session_num']
-st.session_state.session_num = 0
-
-
-def page1():
-    val_id = False
-    val_api_key = False
-
-    st.header('User Info')
-    st.caption('Please enter User ID and OpenAI API Key both:)')
-    user_id = st.text_input(
-        label='User ID',
-        max_chars=20,
-        placeholder="Enter user ID (anything you want)",
-    )
-    # message_id = st.empty()
-    openai_api_key = st.text_input(
-        label='OpenAI API Key',
-        placeholder="Paste your OpenAI API key (sk-...)",
-        help='You can get your API key from https://platform.openai.com/account/api-keys.',
-        type="password",
-    )
-    message_api_key = st.empty()
-
-    if user_id:
-        save_info(user_id)
-        val_id = True
-        # if validate_user_id(user_id):
-        #     message_id.success('User ID successfully verified!', icon="✅")
-        #     save_info(user_id)
-        #     val_id = True
-        # else:
-        #     message_id.error('Please fill in correct User ID.', icon="🚨")
-        #     st.session_state.disabled = True
-    else:
-        # message_id.error('Please fill in User ID.', icon="🚨")
-        st.session_state.disabled = True
-
-    if openai_api_key:
-        if validate_openai_api_key(openai_api_key):
-            message_api_key.success('OpenAI API Key successfully verified!', icon="✅")
-            st.session_state["OPENAI_API_KEY"] = openai_api_key
-            val_api_key = True
-        else:
-            message_api_key.error(
-                f'AuthenticationError: Incorrect API key provided: "{openai_api_key}".'
-                '\nYou can find your API key at https://platform.openai.com/account/api-keys.', icon="🚨"
-            )
-            st.session_state.disabled = True
-    else:
-        st.session_state.disabled = True
-
-    if val_id and val_api_key:
-        st.session_state.disabled = False
-
-    st.button(
-        label='Next',
-        type='primary',
-        disabled=st.session_state.disabled,
-        on_click=page_1_2_controller
-    )
-#########################################################
-# Page 2
-#########################################################
-def page2():
-    _, _, pre, home = st.columns([5, 5, 1, 1])
-    with pre:
-        st.button("🔙", on_click=page_n_1_controller, use_container_width=True)
-    with home:
-        st.button("🔝", on_click=page_n_1_controller, use_container_width=True)
-
-    st.header("Choose Option")
-    option_result = st.selectbox("Choose your option", ["Total Debate", "Evaluation Only & Analyzing Utterances"])
-
-    # add controller
-    if option_result == "Total Debate":
-        page_control_func = page_2_4_controller
-        st.session_state.disabled = False
-    elif option_result == "Evaluation Only & Analyzing Utterances":
-        st.info('Sorry:( This function will be developed soon.', icon="ℹ️")
-        page_control_func = page_1_2_controller
-        st.session_state.disabled = True
-
-    st.button(
-        label='Next',
-        type='primary',
-        disabled=st.session_state.disabled,
-        on_click=page_control_func,
-    )
-
-
-#########################################################
-# Page 4
-#########################################################
-def store_debate_data(checked, case1, case2, case3):
-    if checked:
-        st.session_state.case1, st.session_state.case2, st.session_state.case3 = "", "", ""
-    if not checked:
-        st.session_state.case1, st.session_state.case2, st.session_state.case3 = case1, case2, case3
-
-    # put_item(
-    #     table=dynamodb.Table('DEBO_debate_setting'),
-    #     item={
-    #         'user_id': st.session_state.user_id,
-    #         'time_stamp': time_stamp,
-    #         'debate_theme': st.session_state.debate_theme,
-    #         'debate_topic': st.session_state.topic,
-    #         'case1': st.session_state.case1,
-    #         'case2': st.session_state.case2,
-    #         'case3': st.session_state.case3,
-    #         'session_num': st.session_state.session_num,
-    #     }
-    # )
-
-def page4():
-    #########################################################
-    # Tab 1 - Total Debate (debate preparation -> practice -> evaluation)
-    #########################################################
-
-    _, _, pre, home = st.columns([5, 5, 1, 1])
-    with pre:
-        st.button("🔙", on_click=page_1_2_controller, use_container_width=True)
-    with home:
-        st.button("🔝", on_click=page_n_1_controller, use_container_width=True)
-
-    st.header("Total Debate")
-    debate_themes = ['Education','Sports','Religion','Justice','Pandemic','Politics','Minority','etc']
-
-    st.subheader("1. Theme")
-    st.session_state.debate_theme = st.selectbox("Choose your debate theme", debate_themes)
-
-    if st.session_state.debate_theme == 'Education':
-        topic_list = [
-            "THBT college entrance examinations should accept students only on the basis of their academic performance in secondary education.",
-            "THS a world where the government gives cash that individuals can use to freely select their academic preference (including but not limited to school of choice, private academies, and tutoring) instead of funding for public education.",
-            "THW abolish all requirements and evaluation criteria in higher education (i.e., attendance, exams, assignments)."
-        ]
-    elif st.session_state.debate_theme == 'Sports':
-        topic_list = [
-            "THBT having star players for team sports do more harm than good to the team.",
-            "THR the emphasis on winning a medal in the Olympics as a core symbol of success.",
-            "THP a world where sports serves purely entertainment purposes even at the expense of fair play."
-        ]
-    elif st.session_state.debate_theme == 'Religion':
-        topic_list = [
-            "THW, as a religious group/leader, cease attempts at increasing the number of believers and instead prioritize boosting loyalty amongst adherents to the religion.",
-            "Assuming feasibility, TH prefers a world where a panel of church leaders would create a universally accepted interpretation of the Bible that the believers would abide by.",
-            "THW aggressively crackdown on megachurches."
-        ]
-    elif st.session_state.debate_theme == 'Justice':
-        topic_list = [
-            "In 2050, AI robots are able to replicate the appearance, conversation, and reaction to emotions of human beings. However, their intelligence still does not allow them to sense emotions and feelings such as pain, happiness, joy, and etc.",
-            "In the case a human destroys the robot beyond repair, THW charge murder instead of property damage.",
-            "THP a world where the criminal justice system’s role is mainly for victim’s vengeance. THW allow prosecutors and victims to veto assigned judges."
-        ]
-    elif st.session_state.debate_theme == 'Pandemic':
-        topic_list = [
-            "During a pandemic, THBT businesses that benefit from the pandemic should be additionally taxed.",
-            "THW nullify the effect of medical patents in cases of medical emergencies.",
-            "THW ban media content that denies the efficacy of the COVID-19 without substantial evidence."
-        ]
-    elif st.session_state.debate_theme == 'Politics':
-        topic_list = [
-            "Info: The Candle Light Will (촛불민심) is a term derived from the symbolic candle-light protests for the impeachment of the late president Park Geun Hye, commonly used to mean the people’s will to fight against corrupt governments. The Moon administration has frequently referred to the Candle Light Will as the driving force behind its election that grants legitimacy to its policies. THR the ‘candle light will’ narrative in the political discourse of South Korea.",
-            "THW impose a cap on the property and income of politicians.",
-            "THW give the youth extra votes."
-        ]
-    elif st.session_state.debate_theme == 'Minority':
-        topic_list = [
-            "Context: A prominent member of the LGBT movement has discovered that a very influential politician helping the LGBT movement has been lying about their sexual orientation as being gay when they are straight. THW disclose this information.",
-            "THBT the LGBTQIA+ movement should denounce the existence of marriage as opposed to fighting for equal marriage rights.",
-            "THBT the LGBTQIA+ movement should condemn the consumption of movies and TV shows that cast straight actors/actresses in non-heterosexual identified roles."
-        ]
-    else:
-        topic_list = [
-            "THW remove all laws that relate to filial responsibilities.",
-            "THW require parents to receive approval from experts in relevant fields before making crucial decisions for their children.",
-            "Assuming it is possible to measure the ‘societal danger’ of the fetus in the future, THBT the state should raise infants that pose high levels of threat.",
-            "THBT any upper limits on prison sentences for particularly heinous crimes should be abolished.",
-            "THW require dating apps to anonymize profile pictures.",
-            "THW adopt a Pass/Fail grading system for students who suffer from mental health problems (e.g. depression, bipolar disorder, etc.).",
-            "THBT South Korean feminist movements should reject feminist icons that are adversarial and embody violence.",
-            "THBT freedom of speech should be considered obsolete.",
-            "THR the narrative that eccentric personalities are essential to create art.",
-            "THW allow parents of severely mentally disabled children to medically impede their children's physical growth.",
-            "THR the emphasis on longevity in relationships.",
-            "Assuming feasibility, THW choose to continuously relive the happiest moment of one’s life."
-        ]
-
-    st.subheader("2. Topic")
-    topic = st.session_state.topic = st.selectbox(
-        label="Choose your topic",
-        options=topic_list,
-        format_func=lambda x: x[:35] + "...",
-        # help="This is help message",
-    )
-    st.write("> Topic : ", topic)
-
-    st.subheader("3. Side")
-    st.session_state.pros_and_cons = st.selectbox("Choose your Side (Pros and Cons)", ["Pros", "Cons"])
-
-    st.subheader("4. Cases")
-    st.caption('📢 These are just a tool to help you structure your thoughts on the content and does not reflect the actual discussion.')
-    checked = st.checkbox(
-        label="If you Don't need to write this 3 cases, Please check",
-        key="disabled",
-    )
-    #########################################################
-    # Save case in session
-    #########################################################
-    case1 = st.text_area(
-        label="Write a Case 1",
-        placeholder="Each case should be consisted of opinion, reasoning, and example.",
-        height=150,
-        disabled=st.session_state.disabled
-    )
-    case2 = st.text_area(
-        label="Write a Case 2",
-        placeholder="Each case should be consisted of opinion, reasoning, and example.",
-        height=150,
-        disabled=st.session_state.disabled
-    )
-    case3 = st.text_area(
-        label="Write a Case 3",
-        placeholder="Each case should be consisted of opinion, reasoning, and example.",
-        height=150,
-        disabled=st.session_state.disabled
-    )
-    case_error_message = st.empty()
-
-    st.write("*" * 50)
-
-    # Save the data to database
-    start = st.button(
-        label="Start Debate",
-        type='primary',
-        on_click=store_debate_data,
-        args=(checked, case1, case2, case3)
-    )
-
-    def validate_case(error_message):
-        if not st.session_state.case1 or not st.session_state.case2 or not st.session_state.case3:
-            error_message.error("Please fill out above all", icon="🚨")
-            return False
-        else:
-            return True
-
-    if start:
-        if checked:
-            page_4_5_controller()
-            st.experimental_rerun()
-        else:
-            if validate_case(case_error_message):
-                page_4_5_controller()
-                st.experimental_rerun()
-
-    #########################################################
-    # Ask to GPT
-    #########################################################
-    with st.sidebar:
-        st.sidebar.title('Ask to GPT')
-        user_input = st.sidebar.text_area(
-            label="Question",
-            placeholder="Input text here",
-            height=100)
-        output = st.sidebar.button("Ask")
-        error_message = st.empty()
-        if output:
-            if not user_input:
-                error_message.error("Please enter your question")
-                result = st.session_state.ask_gpt_prev_response
-            else:
-                try:
-                    result = gpt_call(st.session_state['OPENAI_API_KEY'], user_input)
-                    st.session_state.ask_gpt_prev_response = result
-                except:
-                    st.warning('Chat-GPT Error : The engine is currently overloaded. Please click "Rerun" button below.', icon="⚠️")
-                    time.sleep(1)
-                    rerun = st.button(label="Rerun", type="primary")
-                    if rerun:
-                        st.experimental_rerun()
-                    st.stop()
-
-                # Save user_prompt and bot_response to database
-                # put_item(
-                #     table=dynamodb.Table('DEBO_gpt_ask'),
-                #     item={
-                #         'user_id': st.session_state.user_id,
-                #         'time_stamp': time_stamp,
-                #         'user_prompt': user_input,
-                #         'bot_response': result,
-                #         'session_num': st.session_state.session_num,
-                #     }
-                # )
-
-        else:
-            result = st.session_state.ask_gpt_prev_response
-
-        st.sidebar.text_area(
-            label="Answer",
-            placeholder="(Answer will be shown here)",
-            value=result,
-            height=400)
-
-#########################################################
-# Page5
-#########################################################
-
-def generate_response(prompt):
-    if len(prompt.split()) < 5:
-        response = "Please speak longer!"
-    else:
-        try:
-            response = gpt_call_context(st.session_state['OPENAI_API_KEY'], st.session_state['total_debate_history'])
-        except:
-            raise RuntimeError("ChatGPT API Error")
-
-    st.session_state['user_debate_history'].append(prompt)
-    st.session_state['total_debate_history'].append({"role": "user", "content": prompt})
-    st.session_state['bot_debate_history'].append(response)
-    st.session_state['total_debate_history'].append({"role": "assistant", "content": response})
-    return response
-
-def execute_stt(audio):
-    # accumulate the audio recordings
-    #user_audio_path = "audio/" + str(st.session_state.user_id) + "_" + str(st.session_state.session_num) + "_" + str(time.time()) + ".wav"
-    # if you don't want to accumulate the audio recordings
-    user_audio_path = "audio/audio.wav"
-    wav_file = open(user_audio_path, "wb")
-    wav_file.write(audio.tobytes())
-
-    try:
-        user_input = whisper_transcribe(st.session_state['OPENAI_API_KEY'], wav_file)
-        wav_file.close()
-        return user_input
-    except:
-        raise RuntimeError("Whisper API Error")
-
-def page5():
-
-    # time
-    st.session_state.start_time = time.time()
-
-    #########################################################
-    # Ask to GPT
-    #########################################################
-    with st.sidebar:
-        st.sidebar.title('Ask to GPT')
-        user_input = st.sidebar.text_area(
-            label="Question",
-            placeholder="Input text here",
-            height=100)
-        output = st.sidebar.button("Ask")
-        error_message = st.empty()
-        if output:
-            if not user_input:
-                error_message.error("Please enter your question")
-                result = st.session_state.ask_gpt_prev_response
-            else:
-                try:
-                    result = gpt_call(st.session_state['OPENAI_API_KEY'], user_input)
-                    st.session_state.ask_gpt_prev_response = result
-                except:
-                    st.warning('Chat-GPT Error : The engine is currently overloaded. Please click "Rerun" button below.', icon="⚠️")
-                    time.sleep(1)
-                    rerun = st.button(label="Rerun", type="primary")
-                    if rerun:
-                        st.experimental_rerun()
-                    st.stop()
-
-                # put_item(
-                #     table=dynamodb.Table('DEBO_gpt_ask'),
-                #     item={
-                #         'user_id': st.session_state.user_id,
-                #         'time_stamp': time_stamp,
-                #         'user_prompt': user_input,
-                #         'bot_response': result,
-                #         'session_num': st.session_state.session_num,
-                #     }
-                # )
-        else:
-            result = st.session_state.ask_gpt_prev_response
-
-        st.sidebar.text_area(
-            label="Answer",
-            placeholder="(Answer will be shown here)",
-            value=result,
-            height=400)
-
-    # default system prompt settings
-    if not st.session_state['total_debate_history']:
-
-        # bot role, pros and cons
-        if st.session_state.pros_and_cons == "Pros":
-            bot_role = "Cons"
-        elif st.session_state.pros_and_cons == "Cons":
-            bot_role = "Pros"
-        else:
-            bot_role = "(Not yet Determined)"
-
-        debate_preset = "\n".join([
-            "Debate Rules: ",
-            "1) This debate will be divided into two teams, pro and con, with two debates on each team.",
-            "2) The order of speaking is: first debater for the pro side, first debater for the con side, second debater for the pro side, second debater for the con side.",
-            "3) Answer logically with an introduction, body, and conclusion.",
-            "4) Your role : " + bot_role + " side debator",
-            "5) Debate subject: " + st.session_state['topic'],
-        ])
-        first_prompt = "Now we're going to start. Summarize the subject and your role. And ask user ready to begin."
-
-        try:
-            response = gpt_call(st.session_state['OPENAI_API_KEY'], debate_preset + "\n" + first_prompt, role="system")
-        except:
-            st.warning('Chat-GPT Error : The engine is currently overloaded. Please click "Rerun" button below.', icon="⚠️")
-            time.sleep(1)
-            rerun = st.button(label="Rerun", type="primary")
-            if rerun:
-                st.experimental_rerun()
-            st.stop()
-
-        st.session_state['total_debate_history'].append({"role": "system", "content": debate_preset})
-        st.session_state['total_debate_history'].append({"role": "assistant", "content": response})
-        st.session_state['bot_debate_history'].append(response)
-
-    _, _, pre, home = st.columns([5, 5, 1, 1])
-    with pre:
-        st.button("🔙", on_click=page_2_4_controller, use_container_width=True)
-    with home:
-        st.button("🔝", on_click=page_n_1_controller, use_container_width=True)
-
-    # container for chat history
-    response_container = st.container()
-    # Chat-GPT & Whisper api error handling
-    openai_error_bottom = st.empty()
-    # container for text box
-    container = st.container()
-    reload = False
-
-    with container:
-        with st.form(key='my_form', clear_on_submit=True):
-            st.caption("1. Click '⏺️ Record' button and it turn into '⏹️ Recording...' and say something.")
-            st.caption("2. After finish your utterance, click '⏹️ Recording...' button again and it turn off.")
-            st.caption("3. Click '💬 Send' button and DEBO process your input in short time and give you response.")
-
-            user_input = None
-            # record voice
-            audio = audiorecorder("⏺️ Record", "⏹️ Recording...")
-            if np.array_equal(st.session_state['pre_audio'], audio):
-                audio = np.array([])
-
-            submit_button = st.form_submit_button(label='💬 Send')
-            send_error_message = st.empty()
-
-            #if submit_button and user_input:
-            if submit_button:
-                if audio.any():
-                    try:
-                        user_input = execute_stt(audio)
-                    except:
-                        openai_error_bottom.warning('Whisper Error : The engine is currently overloaded. Please click "Rerun" button below.', icon="⚠️")
-                        time.sleep(1)
-                        rerun = st.button(label="Rerun", type="primary")
-                        reload = True
-                        if rerun:
-                            st.experimental_rerun()
-                        st.stop()
-                    try :
-                        response = generate_response(user_input)
-                    except:
-                        openai_error_bottom.warning('Chat-GPT Error : The engine is currently overloaded. Please click "Rerun" button below.', icon="⚠️")
-                        time.sleep(1)
-                        rerun = st.button(label="Rerun", type="primary")
-                        reload = True
-                        if rerun:
-                            st.experimental_rerun()
-                        st.stop()
-                    st.session_state['pre_audio'] = audio
-
-                    # debate_main_latest_data = get_lastest_item(
-                    #     table=dynamodb.Table('DEBO_debate_main'),
-                    #     name_of_partition_key="user_id",
-                    #     value_of_partition_key=st.session_state.user_id,
-                    #     limit_num=1
-                    # )
-                    # if not debate_main_latest_data:
-                    #     turn_num = 0
-                    # else:
-                    #     turn_num = debate_main_latest_data[0]['turn_num']
-
-                    # put_item(
-                    #     table=dynamodb.Table('DEBO_debate_main'),
-                    #     item={
-                    #         'user_id': st.session_state.user_id,
-                    #         'time_stamp': time_stamp,
-                    #         'session_num': st.session_state.session_num,
-                    #         'bot_response': response,
-                    #         'user_prompt': user_input,
-                    #         'turn_num': turn_num,
-                    #     }
-                    # )
-                else:
-                    send_error_message.error("Please record your voice first", icon="🚨")
-                    reload = True
-
-    with response_container:
-        try:
-            message(st.session_state['bot_debate_history'][0], key='0_bot')
-        except:
-            st.warning('Server Error : Unexpected Server error occur. Please click "Rerun" button below.', icon="⚠️")
-            time.sleep(1)
-            reload = True
-            st.session_state['total_debate_history'] = []
-            rerun = st.button(label="Rerun", type="primary")
-            if rerun:
-                st.experimental_rerun()
-            st.stop()
-        if len(st.session_state['bot_debate_history']) == 1:
-            text_to_speech = gTTS(text=st.session_state['bot_debate_history'][0], lang='en', slow=False)
-            text_to_speech.save(f"audio/ses_{st.session_state['session_num']}_bot_res_0.mp3")
-
-            audio_file = open(f"audio/ses_{st.session_state['session_num']}_bot_res_0.mp3", 'rb')
-            audio_bytes = audio_file.read()
-            st.audio(audio_bytes, format='audio/ogg')
-
-        message_pairs = zip(
-            st.session_state['bot_debate_history'][1:],
-            st.session_state['user_debate_history'],
-        )
-        for i, (bot_hist, user_hist) in enumerate(message_pairs):
-            message(user_hist, is_user=True, key=str(i)+'_user')
-            message(bot_hist, key=str(i + 1)+'_bot')
-
-            if i == len(st.session_state['bot_debate_history']) - 2 and not reload:
|
700 |
-
text_to_speech = gTTS(text=bot_hist, lang='en', slow=False)
|
701 |
-
text_to_speech.save(f"audio/ses_{st.session_state['session_num']}_bot_res_{str(i + 1)}.mp3")
|
702 |
-
audio_file = open(f"audio/ses_{st.session_state['session_num']}_bot_res_{str(i + 1)}.mp3", 'rb')
|
703 |
-
audio_bytes = audio_file.read()
|
704 |
-
st.audio(audio_bytes, format='audio/ogg')
|
705 |
-
reload = False
|
706 |
-
|
707 |
-
st.button(
|
708 |
-
label="Next",
|
709 |
-
type="primary",
|
710 |
-
on_click=page_5_6_controller
|
711 |
-
)
|
712 |
-
|
713 |
-
print("#"*80)
|
714 |
-
pprint.pprint(st.session_state.to_dict())
|
715 |
-
print("#"*80)
|
716 |
-
|
717 |
-
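The history-pairing logic above can be sketched in isolation. The sample histories below are hypothetical: the bot speaks first, so its history has one extra opening entry and is zipped against the user turns with an offset of one, exactly as in the `message_pairs` loop.

```python
# Hypothetical sample data mirroring the session-state histories.
bot_debate_history = ["Opening statement", "Bot rebuttal 1", "Bot rebuttal 2"]
user_debate_history = ["User argument 1", "User argument 2"]

# Skip the bot's opening entry so each user turn lines up with its reply.
pairs = list(zip(bot_debate_history[1:], user_debate_history))
for i, (bot_hist, user_hist) in enumerate(pairs):
    print(f"{i}_user: {user_hist}")
    print(f"{i + 1}_bot: {bot_hist}")
```

Because `zip` stops at the shorter sequence, a dangling bot opening statement never produces an unpaired user turn.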
#########################################################
# Page6 - Total Debate Evaluation
#########################################################
@st.cache_data
def preprocess_words(user_history):
    # Join the utterances, lowercase them, strip punctuation, and split on whitespace.
    res = " ".join(user_history)
    res = res.lower()
    res = res.translate(dict.fromkeys(map(ord, '!"#&\(),./:;<=>@[\\]^_`{|}~')))
    return res.split()

@st.cache_data
def get_stop_words():
    # The stop-word file is a single comma-separated list.
    with open("text/stop_words.txt", "r") as file:
        content = file.read()
    return set(content.split(","))

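A self-contained sketch of the analysis pipeline these helpers feed: clean the utterances, drop stop words, and report the most frequent terms with `collections.Counter`. The utterances and the stop-word set here are made up for illustration (the app reads its stop words from `text/stop_words.txt`).

```python
from collections import Counter

def clean_words(utterances):
    # Same idea as preprocess_words: join, lowercase, strip punctuation, split.
    text = " ".join(utterances).lower()
    text = text.translate(dict.fromkeys(map(ord, '!"#&(),./:;<=>@[\\]^_`{|}~')))
    return text.split()

stop_words = {"the", "a", "is", "i"}  # stand-in for text/stop_words.txt
words = clean_words(["The climate is changing.", "I think climate policy helps."])
kept = [w for w in words if w not in stop_words]
top = Counter(kept).most_common(2)  # e.g. [('climate', 2), ...]
```

`str.translate` with a mapping of code points to `None` deletes those characters in one pass, which is why the helper builds the table with `dict.fromkeys(map(ord, ...))`.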
def page6():

    # Record the end time and compute the total debate duration.
    st.session_state.end_time = time.time()
    st.session_state.debate_time = st.session_state.end_time - st.session_state.start_time

    _, _, pre, home = st.columns([5, 5, 1, 1])
    with pre:
        st.button("🔙", on_click=page_4_5_controller, use_container_width=True)
    with home:
        st.button("🔝", on_click=page_n_1_controller, use_container_width=True)

    st.header('Total Debate Evaluation')
    st.caption('📢 Note that evaluation using GPT is an experimental feature. Please check it out and give us your feedback.')

    tab1, tab2 = st.tabs(['Debate Evaluation', 'Debate Analysis'])  ## Delete 'Perfect Case'

    with tab1:
        st.header("Debate Evaluation")

        if st.session_state.judgement_result == "":
            with st.spinner('Wait for result...'):
                judgement_result = ""

                user_debate_history = "".join(
                    st.session_state.user_debate_history
                )
                bot_debate_history = "".join(
                    st.session_state.bot_debate_history
                )

                judgement_result = debate_judgement(
                    user_debate_history,
                    bot_debate_history
                )

            st.write("Debate Judgement Result")
            st.write(judgement_result)

            # if judgement_result != "":
            #     put_item(
            #         table=dynamodb.Table('DEBO_evaluation'),
            #         item={
            #             'user_id': st.session_state.user_id,
            #             'time_stamp': time_stamp,
            #             'judgement_text': judgement_result,
            #             'session_num': st.session_state.session_num,
            #         }
            #     )
            st.success('Done!')
        else:
            st.write(st.session_state.judgement_result)

    with tab2:
        st.header('Debate Analysis')

        # Analyze three aspects of the user's history: speech volume,
        # frequently used words, and speech habits (filler words).
        user_history = st.session_state.user_debate_history

        # 1. Speech volume: total word count and average speaking rate,
        #    compared against the debate duration.
        total_word_list = preprocess_words(user_history)
        total_word_count = len(total_word_list)
        st.write("Total Word Count: ", total_word_count)

        # Average rate in words per second of debate time.
        average_word_per_time = total_word_count / st.session_state.debate_time
        st.write("Average Word Per Time: ", average_word_per_time)

        # 2. Frequent words: drop stop words, then report the five most common words.
        total_word_list = [word for word in total_word_list if word not in get_stop_words()]
        frequency = Counter(total_word_list)
        most_common_data = frequency.most_common(5)

        st.write("Most Common Words: ")
        for word, count in most_common_data:
            st.write(" - ", word, ":", count)

        # 3. Speech habits: unnecessary filler words ("ah", "umm", ...),
        #    as passed through by the Whisper preprocessor.
        disfluency_word_list = ['eh', 'umm', 'ah', 'uh', 'er', 'erm', 'err']
        # Count fillers over the preprocessed word list; iterating over
        # user_history directly would compare whole utterances, not words.
        disfluency_counts = sum(word in disfluency_word_list for word in preprocess_words(user_history))
        st.write("Disfluency Counts: ", disfluency_counts)

        # if total_word_count != "" and average_word_per_time != "" and disfluency_counts != "":
        #     put_item(
        #         table=dynamodb.Table('DEBO_debate_analysis'),
        #         item={
        #             'user_id': st.session_state.user_id,
        #             'time_stamp': time_stamp,
        #             'total_word_count': total_word_count,
        #             'average_word_per_time': Decimal(str(average_word_per_time)),
        #             'disfluency_counts': disfluency_counts,
        #             'session_num': int(st.session_state.session_num),
        #         }
        #     )

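The filler-word count only works if membership is tested per word: checking `user_word in disfluency_word_list` while iterating a list of utterance strings compares whole sentences against single fillers and yields zero. A minimal sketch with a hypothetical transcript:

```python
# Fillers must be counted per word, not per utterance.
fillers = {'eh', 'umm', 'ah', 'uh', 'er', 'erm', 'err'}
utterances = ["umm I think uh the policy works", "ah yes definitely"]

# Flatten the utterances into words before testing membership.
words = " ".join(utterances).lower().split()
filler_count = sum(w in fillers for w in words)
```

`sum` over a generator of booleans counts the `True` values, so no explicit loop counter is needed.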
#########################################################
# Page Routing
#########################################################
pages = {
    "Page 1": page1,  # page that asks for the user_id
    "Page 2": page2,  # page for choosing a feature
    "Page 4": page4,  # debate detail settings
    "Page 5": page5,  # Total Debate
    "Page 6": page6,  # Evaluation Only
}

selection = st.session_state.page
print("selection:", selection)

page = pages[selection]
# Execute the selected page function
page()
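The routing above is plain dict-based dispatch: page names map to functions, and the current selection picks which one runs. A stand-alone sketch with hypothetical page functions (in the app, `selection` comes from `st.session_state.page`):

```python
# Stand-in page renderers; the real app's pages draw Streamlit widgets instead.
def page_home():
    return "home rendered"

def page_settings():
    return "settings rendered"

pages = {"Home": page_home, "Settings": page_settings}
selection = "Home"  # in the app: st.session_state.page
result = pages[selection]()  # look up the function, then call it
```

Keeping the functions unreferenced until `pages[selection]()` means only the selected page's code runs on each rerun, which suits Streamlit's top-to-bottom execution model.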