Commit · f7766fb
1 Parent(s): b657399
Update parquet files (step 40 of 296)
This view is limited to 50 files because it contains too many changes.
See raw diff
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Burp Suite Professional Crack Linux HOT.md +0 -107
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/HDTracks Downloader Not Working? Try These Simple Solutions.md +0 -27
- spaces/1gistliPinn/ChatGPT4/Examples/Archaeological Laboratory Methods An Introduction Downloadzip.md +0 -7
- spaces/1gistliPinn/ChatGPT4/Examples/Dharam Sankat Mein Hd 1080p Blu-ray Fixed Download Torrent.md +0 -6
- spaces/1phancelerku/anime-remove-background/Experience the Thrill of Tongits Offline - The No.1 Free Card Game in the Philippines.md +0 -106
- spaces/1phancelerku/anime-remove-background/FR Legends APK 0.3.3.1 The Ultimate Drift Racing Game for Android - Customize Your Car and Compete Online.md +0 -161
- spaces/1toTree/lora_test/ppdiffusers/download_utils.py +0 -44
- spaces/3bdo7ss/Neutron_Chatbot/app.py +0 -29
- spaces/801artistry/RVC801/infer/lib/csvutil.py +0 -41
- spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +0 -90
- spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/__init__.py +0 -0
- spaces/AIConsultant/MusicGen/scripts/templates/base.html +0 -16
- spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/align_trans.py +0 -304
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/model.py +0 -835
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-150e_deepfashion2_long_sleeved_dress_256x192.py +0 -172
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnest50.py +0 -24
- spaces/AUST001/True-GPT4/app.py +0 -99
- spaces/Acapellas/vocalinstrumentalremover/README.md +0 -39
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.js +0 -2
- spaces/Aishwini/myfirstaigen/README.md +0 -12
- spaces/Aki004/herta-so-vits/onnxexport/model_onnx.py +0 -335
- spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/shanghainese.py +0 -64
- spaces/Aloento/9Nine-PITS/text/paddle_zh.py +0 -115
- spaces/Alycer/VITS-Umamusume-voice-synthesizer/data_utils.py +0 -393
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py +0 -290
- spaces/Andy1621/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py +0 -201
- spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/chase_db1.py +0 -59
- spaces/Annotation-AI/segment-similarthings/README.md +0 -12
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/table.py +0 -1002
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py +0 -14
- spaces/BBrother/Pandora/README.md +0 -11
- spaces/Benson/text-generation/Examples/Descargar Counter Strike 1.3.md +0 -81
- spaces/BongoCaat/ArtGenerator/app.py +0 -611
- spaces/BradAllgood/fastai_chapter2_new/README.md +0 -13
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h +0 -35
- spaces/CVPR/LIVE/thrust/thrust/cmake/thrust-config.cmake +0 -652
- spaces/CVPR/LIVE/thrust/thrust/mr/pool.h +0 -505
- spaces/CVPR/WALT/mmdet/datasets/__init__.py +0 -24
- spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/__init__.py +0 -18
- spaces/CVPR/drawings-to-human/static/_app/immutable/chunks/index-bcf2726a.js +0 -1
- spaces/CVPR/monoscene_lite/helpers.py +0 -336
- spaces/CVPR/regionclip-demo/detectron2/checkpoint/catalog.py +0 -115
- spaces/Cam-Brazy/BearTest/app.py +0 -20
- spaces/Chaitanya01/InvestingPlatform/alerts.py +0 -85
- spaces/ChandraMohanNayal/AutoGPT/run_continuous.bat +0 -3
- spaces/ChillyFaze/runwayml-stable-diffusion-v1-5/README.md +0 -13
- spaces/ClearLove443/Robby-chatbot/modules/history.py +0 -58
- spaces/Cletrason/Cletrason-toad-mario-movie/app_text_to_video.py +0 -97
- spaces/CofAI/chat/g4f/Provider/Providers/helpers/theb.py +0 -48
- spaces/Cong723/gpt-academic-public/docs/WithFastapi.md +0 -43
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Burp Suite Professional Crack Linux HOT.md
DELETED
@@ -1,107 +0,0 @@
-
-<h1>Burp Suite Professional Crack Linux: What You Need to Know</h1>
-<p>If you are a web security tester, you might have heard of Burp Suite Professional, a powerful tool that helps you find and exploit vulnerabilities in web applications. But what if you want to use Burp Suite Professional without paying for it? Is there a way to crack it on Linux and use it for free? In this article, we will answer these questions and more. We will explain what Burp Suite Professional is, what Burp Suite Professional crack linux is, what are the risks of using it, and what are the alternatives to it. By the end of this article, you will have a better understanding of Burp Suite Professional crack linux and why you should avoid it.</p>
-<h2>Burp suite professional crack linux</h2><br /><p><b><b>Download</b> 🗸 <a href="https://byltly.com/2uKw83">https://byltly.com/2uKw83</a></b></p><br /><br />
-<h2>What is Burp Suite Professional?</h2>
-<p>Burp Suite Professional is a web security testing tool developed by PortSwigger, a company that specializes in web security research and software development. Burp Suite Professional is designed to help web security testers perform various tasks such as:</p>
-<ul>
-<li>Spidering and crawling web applications to discover their structure and functionality.</li>
-<li>Intercepting and modifying HTTP requests and responses between the browser and the server.</li>
-<li>Scanning web applications for common and complex vulnerabilities such as SQL injection, cross-site scripting, broken authentication, and more.</li>
-<li>Exploiting web vulnerabilities using various techniques such as intruder, repeater, sequencer, comparer, decoder, and more.</li>
-<li>Automating and customizing web security testing using extensions, macros, scripts, and APIs.</li>
-</ul>
-<p>Burp Suite Professional is widely used by web security testers around the world because of its features, usability, reliability, and support. It is considered one of the best web security testing tools in the market.</p>
-<h3>Why use Burp Suite Professional?</h3>
-<p>There are many reasons why you might want to use Burp Suite Professional for your web security testing projects. Some of them are:</p>
-<ul>
-<li>Burp Suite Professional has a user-friendly interface that allows you to easily navigate and control its features.</li>
-<li>Burp Suite Professional has a comprehensive set of features that cover all aspects of web security testing from reconnaissance to exploitation.</li>
-<li>Burp Suite Professional has a high accuracy and speed in finding and exploiting web vulnerabilities.</li>
-<li>Burp Suite Professional has a large and active community of users and developers who share their knowledge, experience, and feedback.</li>
-<li>Burp Suite Professional has a regular update cycle that ensures its compatibility with the latest web technologies and standards.</li>
-<li>Burp Suite Professional has a professional support team that provides technical assistance and guidance to its users.</li>
-</ul>
-<h3>How much does Burp Suite Professional cost?</h3>
-<p>Burp Suite Professional is not a free tool. It requires a license to use it legally and fully. The license can be purchased from PortSwigger's website for either an individual or an enterprise user. The license can be either annual or perpetual. The annual license costs $399 per user per year, while the perpetual license costs $999 per user for the first year and $299 per user for each subsequent year. The license includes all updates and support for the duration of the license period. PortSwigger also offers discounts for academic institutions, non-profit organizations, and bulk purchases.</p>
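Taking the quoted prices at face value (they come from the deleted article and may not reflect current PortSwigger pricing), the annual-versus-perpetual trade-off is simple arithmetic; a minimal Python check:

def annual_cost(years: int) -> int:
    # Annual license: $399 per user per year (price as quoted above).
    return 399 * years

def perpetual_cost(years: int) -> int:
    # Perpetual license: $999 the first year, then $299 per year (as quoted above).
    return 999 + 299 * (years - 1)

# Both plans cost the same after 7 years; the perpetual license is cheaper from year 8 on.
assert annual_cost(7) == perpetual_cost(7) == 2793
assert perpetual_cost(8) < annual_cost(8)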
-<h2>What is Burp Suite Professional crack linux?</h2>
-<p>Burp Suite Professional crack linux is an illegal way of using Burp Suite Professional without paying for it. It involves downloading a cracked version of Burp Suite Professional or using a loader or a keygen to bypass the license verification process. Burp Suite Professional crack linux is usually available on various websites or forums that offer pirated software or hacking tools. Some examples of these websites or forums are:</p>
-<p></p>
-<ul>
-<li>CrackWatch</li>
-<li>Nulled</li>
-<li>BlackHatWorld</li>
-<li>Hacker News</li>
-</ul>
-<p>Burp Suite Professional crack linux is often advertised as a free or cheap alternative to buying Burp Suite Professional from PortSwigger's website. However, using Burp Suite Professional crack linux is not only illegal but also risky.</p>
-<h3>How does Burp Suite Professional crack linux work?</h3>
-<p>Burp Suite Professional crack linux works by modifying or replacing some of the files or components of Burp Suite Professional that are responsible for checking the license validity. There are two main methods of cracking Burp Suite Professional on Linux:</p>
-<ul>
-<li>Using a loader: A loader is a program that runs before Burp Suite Professional and injects some code into its memory to disable or bypass the license verification process. The loader can be either a shell script or a binary file that is executed before running Burp Suite Professional.</li>
-<li>Using a keygen: A keygen is a program that generates a fake license key that can be used to activate Burp Suite Professional. The keygen can be either a standalone program or an online service that provides the license key upon request.</li>
-</ul>
-<p>Both methods of cracking Burp Suite Professional on Linux are based on reverse engineering or exploiting the vulnerabilities of Burp Suite Professional's license verification mechanism. However, these methods are not reliable or secure, as they can be easily detected or blocked by PortSwigger or by the antivirus software on the user's system.</p>
-<h3>Where can you find Burp Suite Professional crack linux?</h3>
-<p>As mentioned earlier, Burp Suite Professional crack linux can be found on various websites or forums that offer pirated software or hacking tools. However, finding a working and safe version of Burp Suite Professional crack linux is not easy, as most of the links or files are either broken, outdated, infected, or fake. Moreover, these websites or forums are often full of ads, pop-ups, redirects, and other annoying or malicious elements that can harm the user's system or browser. Therefore, it is not advisable to visit or download anything from these websites or forums.</p>
-<h2>What are the risks of using Burp Suite Professional crack linux?</h2>
-<p>Using Burp Suite Professional crack linux is not worth the trouble, as it comes with many risks and consequences that can outweigh any perceived benefits. Some of the risks of using Burp Suite Professional crack linux are:</p>
-<h3>Legal risks</h3>
-<p>Using Burp Suite Professional crack linux is illegal, as it violates the terms of service and intellectual property rights of PortSwigger, the developer of Burp Suite Professional. PortSwigger has the right to take legal action against anyone who uses Burp Suite Professional crack linux for web security testing or any other purpose. PortSwigger can also revoke or blacklist the license keys that are generated by the keygens or used by the loaders. This means that users who use Burp Suite Professional crack linux can face lawsuits, fines, penalties, or even jail time for their actions.</p>
-<h3>Security risks</h3>
-<p>Using Burp Suite Professional crack linux is risky, as it exposes users to malware infections, data breaches, identity theft, and other cyberattacks. The cracked versions of Burp Suite Professional or the loaders or keygens that are used to crack it can contain viruses, trojans, worms, ransomware, spyware, adware, rootkits, backdoors, or other malicious code that can compromise the user's system or network. These malware can steal, delete, encrypt, modify, or leak the user's personal or professional data, such as passwords, credit card numbers, bank accounts, emails, files, documents, photos, videos, etc. They can also hijack the user's browser, webcam, microphone , or keyboard, and perform malicious actions on the user's behalf, such as sending spam, making fraudulent transactions, accessing restricted websites, etc. They can also damage or disable the user's system or network, and prevent the user from accessing or recovering their data.</p>
-<h3>Ethical risks</h3>
-<p>Using Burp Suite Professional crack linux is unethical, as it undermines the professionalism and credibility of web security testers and harms the web security community. Web security testers are expected to follow certain ethical principles and standards when performing their work, such as respecting the privacy and property of others, obtaining proper authorization and consent, reporting and disclosing vulnerabilities responsibly, and using legitimate and authorized tools and methods. Using Burp Suite Professional crack linux violates these ethical principles and standards, as it shows a lack of respect and integrity towards PortSwigger, the developer of Burp Suite Professional, and towards the web application owners and users, who trust web security testers to protect their web applications from cyber threats. Using Burp Suite Professional crack linux also harms the web security community, as it creates a negative image and reputation for web security testers, and reduces the trust and cooperation between them and the web application owners and developers.</p>
-<h2>What are the alternatives to Burp Suite Professional crack linux?</h2>
-<p>Using Burp Suite Professional crack linux is not the only way to perform web security testing. There are some legitimate and safe alternatives to Burp Suite Professional crack linux that can provide similar or better results without the risks and consequences. Some of these alternatives are:</p>
-<h3>Burp Suite Community Edition</h3>
-<p>Burp Suite Community Edition is a free version of Burp Suite that has limited features but still useful for web security testing. Burp Suite Community Edition allows users to perform basic tasks such as spidering, intercepting, scanning, and exploiting web applications. However, it does not have some of the advanced features of Burp Suite Professional, such as intruder, repeater, sequencer, comparer, decoder, extensions, macros, scripts, APIs, etc. Burp Suite Community Edition also has some limitations in terms of functionality and performance, such as a lower scanning speed, a smaller number of concurrent requests, a shorter session duration, etc. Burp Suite Community Edition can be downloaded from PortSwigger's website for free without any license or registration required.</p>
-<h3>Other web security testing tools</h3>
-<p>There are many other web security testing tools that can compete with or complement Burp Suite Professional in terms of features, pricing, and performance. Some of these tools are:</p>
-<table>
-<tr>
-<th>Tool</th>
-<th>Features</th>
-<th>Pricing</th>
-</tr>
-<tr>
-<td>ZAP (Zed Attack Proxy)</td>
-<td>A free and open source web security testing tool that has similar features to Burp Suite Professional, such as spidering, intercepting, scanning, exploiting, and automating web applications. It also has some unique features, such as active and passive scanning modes, dynamic SSL certificates, AJAX spider, etc.</td>
-<td>Free</td>
-</tr>
-<tr>
-<td>Netsparker</td>
-<td>A commercial web security testing tool that has similar features to Burp Suite Professional, such as spidering, intercepting, scanning, exploiting, and automating web applications. It also has some unique features, such as proof-based scanning, vulnerability management, compliance reporting, etc.</td>
-<td>$1,950 per user per year for the standard edition, $4,950 per user per year for the enterprise edition.</td>
-</tr>
-<tr>
-<td>Acunetix</td>
-<td>A commercial web security testing tool that has similar features to Burp Suite Professional, such as spidering, intercepting, scanning, exploiting, and automating web applications. It also has some unique features, such as interactive application security testing (IAST), network security scanning, malware detection, etc.</td>
-<td>$4,495 per user per year for the standard edition, $5,995 per user per year for the enterprise edition.</td>
-</tr>
-<tr>
-<td>OWASP Web Testing Environment (WTE)</td>
-<td>A free and open source collection of web security testing tools that can be used together with Burp Suite Professional or separately. Some of the tools included in WTE are Nmap, Nikto, SQLmap, Metasploit, etc.</td>
-<td>Free</td>
-</tr>
-</table>
-<p>These are just some examples of the many web security testing tools that are available in the market. Users can choose the best tool for their needs and preferences based on their own research and comparison.</p>
-<h3>Official trial or subscription of Burp Suite Professional</h3>
-<p>The best alternative to Burp Suite Professional crack linux is to use the official trial or subscription of Burp Suite Professional from PortSwigger's website. This way, users can enjoy the full functionality and support of Burp Suite Professional without any legal, security, or ethical risks. PortSwigger offers a 30-day free trial of Burp Suite Professional for users who want to test its features and performance before buying it. Users can also buy an annual or perpetual license of Burp Suite Professional from PortSwigger's website with various payment options and discounts. Users who buy Burp Suite Professional from PortSwigger's website can also access the latest updates and support from PortSwigger's team and community.</p>
-<h2>Conclusion</h2>
-<p>Burp Suite Professional is a web security testing tool that helps users find and exploit vulnerabilities in web applications. However, using Burp Suite Professional crack linux is not a good idea, as it is illegal, risky, and unethical. Users who use Burp Suite Professional crack linux can face legal action from PortSwigger, malware infections from the cracked versions or loaders or keygens, and loss of professionalism and credibility in the web security community. Therefore, users should avoid using Burp Suite Professional crack linux and use one of the alternatives instead. Users can use Burp Suite Community Edition for free with limited features, other web security testing tools with similar or better features and pricing, or the official trial or subscription of Burp Suite Professional with full functionality and support. By doing so, users can perform web security testing legally and safely with Burp Suite Professional.</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about Burp Suite Professional crack linux and its alternatives:</p>
-<ol>
-<li><b>Is Burp Suite Professional crack linux safe?</b></li>
-<p>No, Burp Suite Professional crack linux is not safe. It can contain malware that can infect your system or network, steal or leak your data, hijack your browser or devices, or damage or disable your system or network. It can also be detected or blocked by PortSwigger or by the antivirus software on your system.</p>
-<li><b>Is Burp Suite Professional crack linux legal?</b></li>
-<p>No, Burp Suite Professional crack linux is illegal. It violates the terms of service and intellectual property rights of PortSwigger, the developer of Burp Suite Professional. PortSwigger has the right to take legal action against anyone who uses Burp Suite Professional crack linux for web security testing or any other purpose. PortSwigger can also revoke or blacklist the license keys that are generated by the keygens or used by the loaders.</p>
-<li><b>Is Burp Suite Professional crack linux ethical?</b></li>
-<p>No, Burp Suite Professional crack linux is unethical. It undermines the professionalism and credibility of web security testers and harms the web security community. Web security testers are expected to follow certain ethical principles and standards when performing their work, such as respecting the privacy and property of others, obtaining proper authorization and consent, reporting and disclosing vulnerabilities responsibly, and using legitimate and authorized tools and methods. Using Burp Suite Professional crack linux violates these ethical principles and standards, as it shows a lack of respect and integrity towards PortSwigger, the developer of Burp Suite Professional, and towards the web application owners and users, who trust web security testers to protect their web applications from cyber threats. Using Burp Suite Professional crack linux also harms the web security community, as it creates a negative image and reputation for web security testers, and reduces the trust and cooperation between them and the web application owners and developers.</p>
-<li><b>What is the difference between Burp Suite Professional and Burp Suite Community Edition?</b></li>
-<p>Burp Suite Professional and Burp Suite Community Edition are two versions of Burp Suite, a web security testing tool developed by PortSwigger. Burp Suite Professional is a paid version that has a comprehensive set of features that cover all aspects of web security testing from reconnaissance to exploitation. Burp Suite Community Edition is a free version that has limited features but still useful for web security testing. Burp Suite Community Edition allows users to perform basic tasks such as spidering, intercepting, scanning, and exploiting web applications. However, it does not have some of the advanced features of Burp Suite Professional, such as intruder, repeater, sequencer, comparer, decoder, extensions, macros, scripts, APIs, etc. Burp Suite Community Edition also has some limitations in terms of functionality and performance, such as a lower scanning speed, a smaller number of concurrent requests, a shorter session duration, etc.</p>
-<li><b>How can I get a free trial or a discount for Burp Suite Professional?</b></li>
-<p>You can get a free trial or a discount for Burp Suite Professional from PortSwigger's website. PortSwigger offers a 30-day free trial of Burp Suite Professional for users who want to test its features and performance before buying it. You can sign up for the free trial on PortSwigger's website with your email address and download the latest version of Burp Suite Professional. You can also buy an annual or perpetual license of Burp Suite Professional from PortSwigger's website with various payment options and discounts. PortSwigger offers discounts for academic institutions, non-profit organizations, and bulk purchases. You can contact PortSwigger's sales team to request a quote or a discount code.</p>
-<h2></h2></p> b2dd77e56b<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/HDTracks Downloader Not Working? Try These Simple Solutions.md
DELETED
@@ -1,27 +0,0 @@
-
-<h1>How to Fix HDTracks Downloader Not Working</h1>
-<p>If you are a music lover who enjoys high-resolution audio downloads from <a href="https://www.hdtracks.com/">HDTracks</a>, you may have encountered some issues with their download manager. Some users have reported that the download manager does not work, fails to parse the link, or stops midway through the download. This can be frustrating and disappointing, especially when you have paid for the music and want to enjoy it as soon as possible.</p>
-<h2>hdtracks downloader not working</h2><br /><p><b><b>DOWNLOAD</b> ⚙ <a href="https://byltly.com/2uKvBa">https://byltly.com/2uKvBa</a></b></p><br /><br />
-<p>Fortunately, there are some possible solutions to fix HDTracks downloader not working. In this article, we will share some tips and tricks that may help you resolve the problem and get your music downloaded smoothly.</p>
-<h2>Check the Link</h2>
-<p>As this error is usually a result of the downloader not being able to parse the link provided, you should check the link to confirm it's accessible. Please access the link with a working browser to ensure it is working before proceeding with your attempt. If parsing error persists, please try the following:</p>
-<ul>
-<li>Copy and paste the link directly from your order confirmation email or your HDTracks account page.</li>
-<li>Make sure there are no spaces or special characters in the link.</li>
-<li>Try using a different browser or device to access the link.</li>
-</ul>
-<h2>Restart the Downloader</h2>
-<p>Sometimes, the downloader may encounter a glitch or a network interruption that causes it to stop working. In this case, you can try restarting the downloader and resuming the download. Here are the steps:</p>
-<ol>
-<li>Exit the downloader program. You can delete the failed download files, but it will work even if you don't.</li>
-<li>Restart the downloader from your applications or start folder. It will only download the failed tracks.</li>
-<li>If the download still fails, try deleting the entire album folder and starting over.</li>
-</ol>
-<h2>Update the Downloader</h2>
-<p>Another possible reason for HDTracks downloader not working is that you are using an outdated version of the downloader. HDTracks may release new versions of their downloader to fix bugs, improve performance, or add features. Therefore, you should always check if there is a newer version available and update your downloader accordingly. You can find the latest version of HDTracks downloader <a href="https://www.hdtracks.com/downloader/channels/v18/stable/HDtracksDownloader200032.exe">here</a>.</p>
-<p></p>
-<h2>Contact HDTracks Support</h2>
-<p>If none of the above solutions work for you, you may need to contact HDTracks support team for further assistance. They are usually responsive and helpful, and they can provide you with alternative download methods or refund options if necessary. You can contact them via email at <a href="mailto:[email protected]">[email protected]</a> or via phone at 1-888-HDTRACKS (1-888-438-7225).</p>
-<p>We hope this article has helped you fix HDTracks downloader not working and enjoy your high-resolution music downloads. If you have any other questions or suggestions, please feel free to leave a comment below.</p> ddb901b051<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Archaeological Laboratory Methods An Introduction Downloadzip.md
DELETED
@@ -1,7 +0,0 @@
-<br />
-<p>archaeology as a discipline studies the interactions between humans and the physical environment. how do we make sense of our surroundings and our worlds? archaeology is the study of that interaction. this course introduces the nature of the archaeology discipline and the major tools of archaeological data analysis and interpretation. </p>
-<p>in this course we will study the archaeological processes of ceramics, stone tools, and other materials that comprise the archaeological record. we will examine the archaeological materials in the context of their archaeological context. for example, we will study both the physical characteristics of the archaeological materials and the methods archaeologists use to date them. we will examine the archaeology of the human body, and how the physical remains of the body (bones, teeth, fingernails, etc.) can be used to reconstruct the life of the individual that they belonged to. </p>
-<h2>Archaeological Laboratory Methods An Introduction Downloadzip</h2><br /><p><b><b>DOWNLOAD</b> ✔ <a href="https://imgfil.com/2uxZcy">https://imgfil.com/2uxZcy</a></b></p><br /><br />
-<p>this course will introduce students to the scientific method, which underlies all scientific inquiry, and is one of the critical skills for all students to learn in a science, technology, engineering, and math (stem) course. we will cover the purpose of the scientific method, introduce the scientific method in its historical and current forms, and explore its relationship to the various disciplines (biology, geology, physics, archaeology, etc.). then, we will examine the scientific method through a series of activities. first, students will examine the purpose of the scientific method. second, students will design experiments, which they will conduct and analyze. we will also examine the methods of observation, correlation, and experimentation. finally, students will examine how the scientific method relates to other fields of study, including anthropology and archaeology. </p> 899543212b<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Dharam Sankat Mein Hd 1080p Blu-ray Fixed Download Torrent.md
DELETED
@@ -1,6 +0,0 @@
-<h2>Dharam Sankat Mein Hd 1080p Blu-ray Download Torrent</h2><br /><p><b><b>Download</b> >> <a href="https://imgfil.com/2uxYzi">https://imgfil.com/2uxYzi</a></b></p><br /><br />
-<br />
-Dharam Sankat Mein Man 2 Full Movie In Hindi Hd 1080p . ... NH-8 ... Torrent Download free BluRay 720p HD, Free Full Movie Tera Intezaar . 1fdad05405<br />
-<br />
-<br />
-<p></p>
spaces/1phancelerku/anime-remove-background/Experience the Thrill of Tongits Offline - The No.1 Free Card Game in the Philippines.md
DELETED
@@ -1,106 +0,0 @@
-<br />
-<h1>Tongits Go Offline Download: How to Play the Popular Filipino Card Game Anytime, Anywhere</h1>
-<p>Tongits Go is a popular free card game in the Philippines that has more than 50 million Filipino players online. It is a rummy-type game that is similar to Gin Rummy, Tonk, and Mahjong. The objective of the game is to be the first to play all of your cards or to have the lowest score by forming sets and runs of cards. You can also lay off cards on other players' melds or call "Tongit" or "Draw" to end the game.</p>
-<p>Tongits Go is not only a fun and exciting game, but also a social and cultural phenomenon in the Philippines. It is a way for Filipinos to connect with their friends, family, and fellow countrymen, as well as to showcase their skills and strategies. Tongits Go also offers various local games, such as Sabong, Mahjong, Pusoy, Slots, Color Game, and more, that cater to different tastes and preferences.</p>
-<h2>tongits go offline download</h2><br /><p><b><b>Download File</b> <a href="https://jinyurl.com/2uNNah">https://jinyurl.com/2uNNah</a></b></p><br /><br />
-<p>But what if you want to play Tongits Go without an internet connection? What if you want to enjoy the game without ads or in-app purchases? What if you want to practice your skills or have fun with artificial intelligence opponents? Well, there is a solution for that: Tongits Go offline mode. In this article, we will show you how to download Tongits Go offline mode, how to play it, and what are the benefits of playing it. Let's get started!</p>
-<h2>How to Download Tongits Go Offline Mode</h2>
-<p>Downloading Tongits Go offline mode is easy and simple. Just follow these steps:</p>
-<ol>
-<li>Go to the official website of Tongits Go ([11](https://tongitsgo.com)) or the app store (Google Play Store for Android devices or App Store for iOS devices) and download the game. You can also scan the QR code on the website or click on this link ([10](https://play.google.com/store/apps/details?id=com.tongitsgo.play)) for Android devices or this link ([9](https://apps.apple.com/ph/app/tongits-go/id1443568670)) for iOS devices.</li>
-<li>Install the game and open it. You will see a welcome screen with two options: online mode and offline mode. Choose the offline mode option.</li>
-<li>Start playing! You can choose from different game modes, such as Gold Table, Family Table, Ranking Game, or Tournament. You can also customize your avatar, table, theme, and settings.</li>
-</ol>
-<h2>How to Play Tongits Go Offline Mode</h2>
-<p>Playing Tongits Go offline mode is similar to playing online mode, except that you don't need an internet connection and you don't have ads or in-app purchases. Here are some of the basic rules and features of Tongits Go offline mode:</p>
-<ul>
-<li>The game is - The game is played with a standard 52-card deck. Each player is dealt 12 cards, except for the dealer who gets 13 cards. The remaining cards are placed face down on the table as the draw pile. The top card of the draw pile is turned face up and placed next to it as the discard pile.</li>
-<li>The dealer starts the game by either drawing a card from the draw pile or picking up the discard pile. The dealer then has to either form a meld (a set of three or four cards of the same rank or a run of three or more cards of the same suit) and lay it down on the table, or discard a card to the discard pile.</li>
-<li>The next player can either draw a card from the draw pile, pick up the discard pile, or lay off cards on the existing melds on the table. Laying off cards means adding cards to your own or other players' melds to make them longer or bigger.</li>
-<li>The game continues in a clockwise direction until one of the following happens: <ul>
-<li>A player plays all of his or her cards and calls "Tongit". This means that the player wins the game and gets all the points in the pot, plus a bonus of 20 points.</li>
-<li>A player has only one card left and calls "Draw". This means that the game ends in a draw and the points in the pot are divided equally among all players.</li>
-<li>The draw pile runs out of cards and no one can play any more cards. This means that the game ends and the player with the lowest score wins the game and gets all the points in the pot.</li>
-</ul>
-</li>
-<li>The score of each player is calculated by adding up the values of the cards in his or her hand, minus the values of the cards in his or her melds. The values of the cards are as follows: Aces are worth 1 point, 2 to 10 are worth their face value, and J, Q, and K are worth 10 points each.</li>
-</ul>
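The scoring rule in the last list item reduces to simple arithmetic; a minimal Python sketch (hypothetical helper names, assuming only the card values stated above: Aces 1, pip cards face value, J, Q, and K 10 each):

def card_value(rank: str) -> int:
    # "A" scores 1; "J"/"Q"/"K" score 10; "2".."10" score face value.
    return {"A": 1, "J": 10, "Q": 10, "K": 10}.get(rank) or int(rank)

def hand_score(hand: list[str], melds: list[list[str]]) -> int:
    # Hand total minus the total already laid down in melds, per the rule above.
    return sum(card_value(r) for r in hand) - sum(card_value(r) for m in melds for r in m)

# Example: A + 7 + K in hand (18) against a laid-down 4-5-6 run (15) scores 3.
assert hand_score(["A", "7", "K"], [["4", "5", "6"]]) == 3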
-<p>Tongits Go offline mode also offers different game modes and features that you can enjoy, such as:</p>
-<ul>
-<li>Gold Table: This is where you can play with gold coins that you can earn by playing games or completing tasks. You can use gold coins to buy items or enter tournaments.</li>
-<li>Family Table: This is where you can create your own private room and invite your friends or family to play with you. You can set your own rules and chat with your loved ones.</li>
-<li>Ranking Game: This is where you can compete with other players and climb up the leaderboard. You can earn rewards and badges based on your performance.</li>
-<li>Tournament: This is where you can join various events and challenges and win prizes and trophies. You can also watch live streams of other players and learn from their strategies.</li>
-</ul>
-<h2>The Benefits of Playing Tongits Go Offline Mode</h2>
-<p>Playing Tongits Go offline mode has many benefits that you can enjoy, such as:</p>
-<ul>
-<li>No internet connection required: You can play Tongits Go offline mode anytime, anywhere, without worrying about your data usage or network connection. You can also save your battery life and avoid interruptions from calls or messages.</li>
-<li>No ads or in-app purchases: You can play Tongits Go offline mode without seeing any ads or being tempted to buy anything. You can focus on the game and have fun without spending any money.</li>
-<li>More practice and fun with artificial intelligence opponents: You can play Tongits Go offline mode with smart and challenging computer opponents that will test your skills and strategies. You can also adjust the difficulty level and speed of the game according to your preference. You can have fun and learn from your mistakes without feeling embarrassed or frustrated.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Tongits Go is a popular Filipino card game that you can play online or offline. Offline mode allows you to play without an internet connection, without ads or in-app purchases, and with artificial intelligence opponents. You can also choose from different game modes, such as Gold Table, Family Table, Ranking Game, or Tournament. You can download Tongits Go offline mode from the official website or app store and start playing right away. Tongits Go offline mode is a great way to have fun, practice your skills, and connect with your culture. Download it now and enjoy!</p>
-<p>tongits go offline download apk<br />
-tongits go offline download for pc<br />
-tongits go offline download ios<br />
-tongits go offline download free<br />
-tongits go offline download latest version<br />
-tongits go offline download app store<br />
-tongits go offline download google play<br />
-tongits go offline download mac<br />
-tongits go offline download windows 10<br />
-tongits go offline download laptop<br />
-tongits go offline download mod apk<br />
-tongits go offline download no internet<br />
-tongits go offline download without wifi<br />
-tongits go offline download update<br />
-tongits go offline download 2023<br />
-tongits go offline download hack<br />
-tongits go offline download cheat<br />
-tongits go offline download unlimited gold<br />
-tongits go offline download review<br />
-tongits go offline download rating<br />
-tongits go offline download game play<br />
-tongits go offline download tutorial<br />
-tongits go offline download tips and tricks<br />
-tongits go offline download rules and regulations<br />
-tongits go offline download how to play<br />
-tongits go offline download best strategy<br />
-tongits go offline download leaderboard<br />
-tongits go offline download jackpot<br />
-tongits go offline download hitpot<br />
-tongits go offline download table collection<br />
-tongits go offline download themes and tables<br />
-tongits go offline download rummy game<br />
-tongits go offline download filipino card game<br />
-tongits go offline download pinoy card game<br />
-tongits go offline download tung it game<br />
-tongits go offline download tonk game<br />
-tongits go offline download mahjong game<br />
-tongits go offline download poker game<br />
-tongits go offline download fun card game<br />
-tongits go offline download simulated card game<br />
-tongits go offline vs online mode <br />
-tongits go online vs. other players <br />
-tongits go online vs. computer <br />
-tongits go online vs. friends <br />
-tongits go online vs. real money <br />
-tongits go online vs. free gold <br />
-tongue its online vs. other card games</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about Tongits Go offline mode:</p>
-<ol>
-<li><b>What are the minimum requirements to play Tongits Go offline mode?</b></li>
-<p>To play Tongits Go offline mode, you need to have a device that runs on Android 4.4 or higher or iOS 9.0 or higher. You also need to have at least 100 MB of free storage space on your device.</p>
-<li><b>How can I get free gold and rewards in Tongits Go offline mode?</b></li>
-<p>You can get free gold and rewards in Tongits Go offline mode by playing games, completing tasks, logging in daily, inviting friends, watching videos, or joining events. You can also get free gold and rewards by playing online mode and syncing your account.</p>
-<li><b>How can I play with my friends and family in Tongits Go offline mode?</b></li>
-<p>You can play with your friends and family in Tongits Go offline mode by creating a Family Table and inviting them to join. You can also play with them online by adding them as friends and joining their tables.</p>
-<li><b>How can I contact the developers of Tongits Go if I have any issues or feedback?</b></li>
-<p>You can contact the developers of Tongits Go by sending an email to [8](mailto:[email protected]) or by visiting their Facebook page ([7](https://www.facebook.com/TongitsGoOfficial)). You can also leave a review or rating on the app store or website.</p>
-<li><b>What are some other popular card games in the Philippines that I can play offline?</b></li>
-<p>Some other popular card games in the Philippines that you can play offline are Pusoy, Pusoy Dos, Lucky 9, Blackjack, Poker, and Baccarat. You can also play these games online on Tongits Go or other platforms.</p>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/FR Legends APK 0.3.3.1 The Ultimate Drift Racing Game for Android - Customize Your Car and Compete Online.md
DELETED
@@ -1,161 +0,0 @@
-<br />
-<table>
-<tr>
-<td>
-<h1>FR Legends APK 0.3.3.1: The Ultimate Drifting Game for Android</h1>
-<p>If you are a fan of drifting and racing games, you might have heard of FR Legends, one of the most popular and realistic drifting games for mobile devices. FR Legends is a game that lets you experience the thrill of drifting with customizable cars, realistic physics, various game modes, online multiplayer, stunning graphics and sound, and more.</p>
-<p>In this article, we will tell you everything you need to know about FR Legends APK 0.3.3.1, the latest version of this amazing game that you can download and install on your Android device for free.</p>
-<h2>fr legends apk 0.3.3.1</h2><br /><p><b><b>Download</b> — <a href="https://jinyurl.com/2uNM2c">https://jinyurl.com/2uNM2c</a></b></p><br /><br />
-<h2>What is FR Legends?</h2>
-<p>FR Legends is a game developed by TWIN TURBO TECH CO., LTD, a company based in China that specializes in creating high-quality racing games for mobile platforms.</p>
-<p>FR Legends stands for Front-engine, Rear-wheel drive Legend cars, which are the type of cars that you can drive and customize in this game.</p>
-<p>FR Legends is a game that focuses on drifting, which is a driving technique where the driver intentionally oversteers the car to make it slide sideways while maintaining control and speed.</p>
-<p>Drifting is not only fun and challenging, but also a form of art and expression that requires skill and creativity.</p>
-<p>FR Legends is a game that lets you unleash your inner drifter and show off your style and skills to other players around the world.</p>
-<h2>What are the features of FR Legends APK 0.3.3.1?</h2>
-<h3>Customizable cars</h3>
-<p>One of the best features of FR Legends is that you can customize your car to suit your preferences and personality.</p>
-<p>You can choose from a variety of cars that are inspired by real-life models such as Toyota AE86, Nissan S13, BMW E30, Mazda RX <p>-7, and more.</p>
-<p>You can also modify your car's engine, suspension, tires, brakes, exhaust, turbo, transmission, and more to improve its performance and handling.</p>
-<p>fr legends apk 0.3.3.1 download<br />
-fr legends apk 0.3.3.1 mod<br />
-fr legends apk 0.3.3.1 unlimited money<br />
-fr legends apk 0.3.3.1 latest version<br />
-fr legends apk 0.3.3.1 free download<br />
-fr legends apk 0.3.3.1 android<br />
-fr legends apk 0.3.3.1 update<br />
-fr legends apk 0.3.3.1 hack<br />
-fr legends apk 0.3.3.1 gameplay<br />
-fr legends apk 0.3.3.1 review<br />
-fr legends apk 0.3.3.1 offline<br />
-fr legends apk 0.3.3.1 obb<br />
-fr legends apk 0.3.3.1 no root<br />
-fr legends apk 0.3.3.1 cheats<br />
-fr legends apk 0.3.3.1 features<br />
-fr legends apk 0.3.3.1 installation<br />
-fr legends apk 0.3.3.1 requirements<br />
-fr legends apk 0.3.3.1 tips and tricks<br />
-fr legends apk 0.3.3.1 best cars<br />
-fr legends apk 0.3.3.1 customizations<br />
-fr legends apk 0.3.3.1 graphics settings<br />
-fr legends apk 0.3.3.1 multiplayer mode<br />
-fr legends apk 0.3.3.1 new maps<br />
-fr legends apk 0.3.3.1 sound effects<br />
-fr legends apk 0.3.3.1 controls<br />
-fr legends apk 0.3</p>
-<p>Moreover, you can change your car's appearance by changing its color, paint, decals, stickers, body kits, spoilers, wheels, lights, and more to make it look unique and cool.</p>
-<h3>Realistic physics</h3>
-<p>Another great feature of FR Legends is that it has realistic physics that simulate the behavior of the car and the environment.</p>
-<p>You can feel the weight, speed, traction, inertia, and gravity of your car as you drift and race on different tracks and terrains.</p>
-<p>You can also see the smoke, sparks, dust, and skid marks that your car leaves behind as you slide and burn rubber.</p>
-<p>The game also has dynamic weather and lighting effects that change the atmosphere and visibility of the tracks.</p>
-<h3>Various game modes</h3>
-<p>FR Legends has various game modes that you can choose from depending on your mood and preference.</p>
-<p>You can play the Career mode, where you can start from the bottom and work your way up to become a legendary drifter. You can compete in different events and challenges that test your skills and earn coins and reputation.</p>
-<p>You can play the Free mode, where you can practice your drifting skills and explore the tracks without any pressure or limitations. You can also customize your car and settings to your liking.</p>
-<p>You can play the Arcade mode, where you can enjoy a casual and fun drifting experience with simple controls and objectives. You can also unlock new cars and tracks as you progress.</p>
-<h3>Online multiplayer</h3>
-<p>FR Legends also has an online multiplayer feature that lets you connect with other players around the world and challenge them to drift battles.</p>
-<p>You can join or create a room with up to 8 players and choose a track and a game mode to play. You can also chat with other players and make friends or rivals.</p>
-<p>You can compete in two types of drift battles: Tandem and Solo. In Tandem battles, you have to follow or lead another player's car as closely as possible while drifting. In Solo battles, you have to score higher than your opponent by drifting better and faster.</p>
-<h3>Stunning graphics and sound</h3>
-<p>FR Legends also has stunning graphics and sound that make the game more immersive and realistic.</p>
-<p>The game has 3D graphics that are detailed and smooth. The cars look authentic and have different animations and effects. The tracks look diverse and have different scenery and landmarks. The game also has a dynamic camera that follows your car from different angles.</p>
-<p>The game also has realistic sound effects that match the action on the screen. You can hear the engine roar, the tires screech, the turbo whistle, the exhaust pop, and more. You can also hear the crowd cheer, the announcer commentate, and the music play in the background.</p>
-<h2>How to download and install FR Legends APK 0.3.3.1?</h2>
-<h3>Step 1: Download the APK file from a trusted source</h3>
-<p>To download FR Legends APK 0.3.3.1, you need to find a trusted source that provides the latest version of the file. You can use Google or any other search engine to find such sources.</p>
-<p>One of the sources that we recommend is [FR Legends APK 0.3.3.1 Download], which is a website that offers safe and fast downloads of various APK files for Android devices.</p>
-<h3>Step 2: Enable unknown sources on your device</h3>
-<p>To install FR Legends APK 0.3.3.1, you need to enable unknown sources on your device. This is because FR Legends APK 0.3.3.1 is not available on the Google Play Store, so you need to allow your device to install apps from other sources.</p>
-<p>To enable unknown sources on your device, follow these steps:</p>
-<ul>
-<li>Go to Settings > Security > Unknown sources</li>
-<li>Toggle on the switch or check the box to enable unknown sources</li>
-<li>A warning message will appear. Tap OK or Confirm to proceed</li>
-</ul>
-<h3>Step 3: Locate and install the APK file</h3>
-<p>To locate and install FR Legends APK 0.3.3.1, follow these steps:</p>
-<ul>
-<li>Go to your device's file manager or download manager</li>
-<li>Find the FR Legends APK 0.3.3.1 file that you downloaded in step <li>1. Tap on the file to open it</li>
-<li>A pop-up message will appear. Tap Install to start the installation process</li>
-<li>Wait for the installation to finish. It may take a few seconds or minutes depending on your device and internet speed</li>
-</ul>
-<h3>Step 4: Launch the game and enjoy</h3>
-<p>To launch the game and enjoy, follow these steps:</p>
-<ul>
-<li>Go to your device's app drawer or home screen</li>
-<li>Find the FR Legends icon and tap on it to open the game</li>
-<li>Grant the necessary permissions and accept the terms and conditions</li>
-<li>Choose your language and region</li>
-<li>Create your profile and customize your settings</li>
-<li>Start playing and drifting with FR Legends APK 0.3.3.1</li>
-</ul>
-<h2>How to play FR Legends APK 0.3.3.1?</h2>
-<h3>Choose your car and customize it</h3>
-<p>To choose your car and customize it, follow these steps:</p>
-<ul>
-<li>Tap on the Garage icon on the main menu</li>
-<li>Swipe left or right to browse through the available cars</li>
-<li>Tap on the car that you want to drive and customize</li>
-<li>Tap on the Customize icon on the bottom right corner</li>
-<li>Swipe left or right to access different categories of customization such as Engine, Suspension, Body, Paint, Decals, etc.</li>
-<li>Tap on the category that you want to modify and select the option that you want to apply</li>
-<li>You can preview the changes on your car by tapping on the Preview icon on the top right corner</li>
-<li>You can also rotate, zoom, and move your car by using your fingers on the screen</li>
-<li>When you are satisfied with your customization, tap on the Save icon on the top left corner</li>
-<li>You can also buy new cars or parts with coins that you earn by playing the game</li>
-</ul>
-<h3>Select a game mode and a track</h3>
-<p>To select a game mode and a track, follow these steps:</p>
-<ul>
-<li>Tap on the Play icon on the main menu</li>
-<li>Swipe left or right to choose between Career mode, Free mode, or Arcade mode</li>
-<li>Tap on the mode that you want to play</li>
-<li>Swipe left or right to choose between different tracks such as Ebisu, Meihan, Sekia Hills, etc.</li>
-<li>Tap on the track that you want to play</li>
-<li>You can also change the weather, time of day, and difficulty level by tapping on the icons on the bottom of the screen</li>
-<li>When you are ready, tap on the Start icon on the top right corner</li>
-</ul>
-<h3>Control your car with simple gestures</h3>
-<p>To control your car with simple gestures, follow these steps:</p>
-<ul>
-<li>To accelerate, press and hold the gas pedal on the right side of the screen</li>
-<li>To brake, press and hold the brake pedal on the left side of the screen</li>
-<li>To steer, swipe left or right on the steering wheel on the bottom center of the screen</li>
-<li>To drift, swipe up or down on the handbrake lever on the right side of the screen</li>
-<li>To change the camera angle, tap on the camera icon on the top left corner of the screen</li>
-<li>To pause the game, tap on the pause icon on the top right corner of the screen</li>
-</ul>
-<h3>Earn coins and reputation by drifting and racing</h3>
-<p>To earn coins and reputation by drifting and racing, follow these steps:</p>
-<ul>
-<li>When you are playing a game mode, you will see a score meter on the top center of the screen that shows your current score and combo</li>
-<li>You can increase your score and combo by drifting, overtaking, following, or leading other cars</li>
-<li>You can also perform tricks such as donuts, 360s, wall taps, etc. to earn extra points</li>
-<li>The longer and better you drift, the higher your score and combo will be</li>
-<li>However, if you crash, spin out, or stop drifting, your score and combo will reset</li>
-<li>At the end of each game mode, you will see a summary screen that shows your total score, coins earned, reputation earned, and rank achieved</li>
-<li>You can use coins to buy new cars or parts in the garage</li>
-<li>You can use reputation to unlock new tracks and events in the career mode</li>
-<li>You can also compare your scores and ranks with other players on the leaderboard</li>
-</ul>
-<h2>Conclusion</h2>
-<p>FR Legends APK 0.3.3.1 is a game that lets you experience the thrill of drifting with customizable cars, realistic physics, various game modes, online multiplayer, stunning graphics and sound, and more.</p>
-<p>If you are a fan of drifting and racing games, you should definitely download and install FR Legends APK 0.3.3.1 on your Android device for free.</p>
-<p>You will not regret it as you will have hours of fun and excitement with this amazing game.</p>
-<p>So what are you waiting for? Download FR Legends APK 0.3.3.1 now and start drifting like a legend!</p>
-<h2>FAQs</h2>
-<h4>Q: Is FR Legends APK 0.3.3.1 safe to download and install?</h4>
-<p>A: Yes, FR Legends APK 0.3.3.1 is safe to download and install as long as you get it from a trusted source such as [FR Legends APK 0.3.3.1 Download]. However, you should always scan any APK file with an antivirus software before installing it on your device.</p>
-<h4>Q: Is FR Legends APK 0.3.3.1 compatible with my device?</h4>
-<p>A: FR Legends APK 0.3.3.1 is compatible with most Android devices that have Android 4.1 or higher version installed. However, some devices may have performance issues or bugs due to different hardware specifications or software versions.</p>
-<h4>Q: How can I update FR Legends APK 0.3.3.1?</h4>
-<p>A: To update FR Legends APK 0.3.3.1, you need to download and install the latest version of the file from a trusted source such as [FR Legends APK 0.3.3.1 Download]. You may also need to uninstall the previous version of the game before installing the new one.</p>
-<h4>Q: How can I contact the developer of FR Legends?</h4>
-<p>A: You can contact the developer of FR Legends by sending an email to [email protected] or by visiting their official website at https://www.twinturbo.co/.</p>
-<h4>Q: How can I support the development of FR Legends?</h4>
-<p>A: You can support the development of FR Legends by rating and reviewing the game on Google Play Store or any other app store that you downloaded it from.</p> 401be4b1e0<br />
-<br />
-<br />
spaces/1toTree/lora_test/ppdiffusers/download_utils.py
DELETED
@@ -1,44 +0,0 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from paddlenlp.utils.downloader import get_path_from_url_with_filelock
from paddlenlp.utils.log import logger

from .utils import DOWNLOAD_SERVER, PPDIFFUSERS_CACHE


def ppdiffusers_bos_download(pretrained_model_name_or_path, filename=None, subfolder=None, cache_dir=None):
    if cache_dir is None:
        cache_dir = PPDIFFUSERS_CACHE
    cache_dir = (
        pretrained_model_name_or_path
        if os.path.isdir(pretrained_model_name_or_path)
        else os.path.join(cache_dir, pretrained_model_name_or_path)
    )
    url = DOWNLOAD_SERVER + "/" + pretrained_model_name_or_path
    if subfolder is not None:
        url = url + "/" + subfolder
        cache_dir = os.path.join(cache_dir, subfolder)
    if filename is not None:
        url = url + "/" + filename

    file_path = os.path.join(cache_dir, filename)
    if os.path.exists(file_path):
        logger.info("Already cached %s" % file_path)
    else:
        file_path = get_path_from_url_with_filelock(url, cache_dir)
    return file_path
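A minimal usage sketch for the helper above. The repo id and filename are hypothetical, and the call assumes DOWNLOAD_SERVER and PPDIFFUSERS_CACHE are configured by ppdiffusers' utils module:

# Hypothetical example: fetch one weight file from the BOS mirror into the local cache.
# The model id and filename below are illustrative, not guaranteed to exist on the server.
from ppdiffusers.download_utils import ppdiffusers_bos_download

local_path = ppdiffusers_bos_download(
    "runwayml/stable-diffusion-v1-5",   # assumed model id
    filename="model_state.pdparams",    # assumed file name
    subfolder="unet",
)
print(local_path)  # cached path under PPDIFFUSERS_CACHE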
spaces/3bdo7ss/Neutron_Chatbot/app.py
DELETED
@@ -1,29 +0,0 @@
import gradio as gr
from sentence_transformers import SentenceTransformer, util

ts_model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

def similarity(*data):
    question = data[0]
    q = data[1::2]
    a = data[2::2]
    similarities = []
    for i in q:
        embedding_1 = ts_model.encode(i, convert_to_tensor=True)
        embedding_2 = ts_model.encode(question, convert_to_tensor=True)
        similarities.append(float(util.pytorch_cos_sim(embedding_1, embedding_2)))
    max_similarity = max(similarities)
    max_similarity_index = similarities.index(max_similarity)

    if max_similarity <= 0.5:
        return "It seems that I don't have a specific answer for that question"
    else:
        return a[max_similarity_index]

gr.Interface(
    fn=similarity,
    inputs=[gr.Textbox(label="Main Q"), gr.Textbox(label="Q1"), gr.Textbox(label="A1"),
            gr.Textbox(label="Q2"), gr.Textbox(label="A2")],
    outputs="text"
).launch()
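The app above picks the stored answer whose question embeds closest to the user's question, with a 0.5 cosine-similarity cutoff. A sketch of that retrieval step in isolation (the FAQ strings are made up):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
faq = [("What are your opening hours?", "We are open 9am to 5pm."),
       ("Where are you located?", "We are located in Cairo.")]

question = "When do you open?"
q_emb = model.encode(question, convert_to_tensor=True)
scores = [float(util.pytorch_cos_sim(model.encode(q, convert_to_tensor=True), q_emb)) for q, _ in faq]
best = max(range(len(scores)), key=scores.__getitem__)
# same threshold as the app: below 0.5, admit there is no specific answer
print(faq[best][1] if scores[best] > 0.5 else "No specific answer found")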
spaces/801artistry/RVC801/infer/lib/csvutil.py
DELETED
@@ -1,41 +0,0 @@
import numpy as np

# import praatio
# import praatio.praat_scripts
import os
import sys

import random

import csv

# praatEXE = join('.', os.path.abspath(os.getcwd()) + r"\Praat.exe")


def CSVutil(file, rw, type, *args):
    if type == "formanting":
        if rw == "r":
            with open(file) as fileCSVread:
                csv_reader = list(csv.reader(fileCSVread))
            if csv_reader is None:
                raise ValueError("No data")
            return csv_reader[0][0], csv_reader[0][1], csv_reader[0][2]
        else:
            doformnt = args[0] if args else False
            qfr = args[1] if len(args) > 1 else 1.0
            tmb = args[2] if len(args) > 2 else 1.0
            with open(file, rw, newline="") as fileCSVwrite:
                csv_writer = csv.writer(fileCSVwrite, delimiter=",")
                csv_writer.writerow([doformnt, qfr, tmb])
    elif type == "stop":
        stop = args[0] if args else False
        with open(file, rw, newline="") as fileCSVwrite:
            csv_writer = csv.writer(fileCSVwrite, delimiter=",")
            csv_writer.writerow([stop])
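A short usage sketch for CSVutil; the file name is illustrative, and note that values read back from CSV come back as strings:

# Write the formanting settings, then read them back.
CSVutil("formanting.csv", "w", "formanting", True, 1.0, 1.0)
doformnt, qfr, tmb = CSVutil("formanting.csv", "r", "formanting")
print(doformnt, qfr, tmb)  # 'True' '1.0' '1.0' -- strings, so cast before use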
spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py
DELETED
@@ -1,90 +0,0 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import pyworld
import numpy as np


class DioF0Predictor(F0Predictor):
    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
        self.hop_length = hop_length
        self.f0_min = f0_min
        self.f0_max = f0_max
        self.sampling_rate = sampling_rate

    def interpolate_f0(self, f0):
        """Interpolate the F0 contour, filling unvoiced (zero) frames."""
        data = np.reshape(f0, (f0.size, 1))

        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
        vuv_vector[data > 0.0] = 1.0
        vuv_vector[data <= 0.0] = 0.0

        ip_data = data

        frame_number = data.size
        last_value = 0.0
        for i in range(frame_number):
            if data[i] <= 0.0:
                j = i + 1
                for j in range(i + 1, frame_number):
                    if data[j] > 0.0:
                        break
                if j < frame_number - 1:
                    if last_value > 0.0:
                        step = (data[j] - data[i - 1]) / float(j - i)
                        for k in range(i, j):
                            ip_data[k] = data[i - 1] + step * (k - i + 1)
                    else:
                        for k in range(i, j):
                            ip_data[k] = data[j]
                else:
                    for k in range(i, frame_number):
                        ip_data[k] = last_value
            else:
                ip_data[i] = data[i]  # possibly an unnecessary copy
                last_value = data[i]

        return ip_data[:, 0], vuv_vector[:, 0]

    def resize_f0(self, x, target_len):
        source = np.array(x)
        source[source < 0.001] = np.nan
        target = np.interp(
            np.arange(0, len(source) * target_len, len(source)) / target_len,
            np.arange(0, len(source)),
            source,
        )
        res = np.nan_to_num(target)
        return res

    def compute_f0(self, wav, p_len=None):
        if p_len is None:
            p_len = wav.shape[0] // self.hop_length
        f0, t = pyworld.dio(
            wav.astype(np.double),
            fs=self.sampling_rate,
            f0_floor=self.f0_min,
            f0_ceil=self.f0_max,
            frame_period=1000 * self.hop_length / self.sampling_rate,
        )
        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
        for index, pitch in enumerate(f0):
            f0[index] = round(pitch, 1)
        return self.interpolate_f0(self.resize_f0(f0, p_len))[0]

    def compute_f0_uv(self, wav, p_len=None):
        if p_len is None:
            p_len = wav.shape[0] // self.hop_length
        f0, t = pyworld.dio(
            wav.astype(np.double),
            fs=self.sampling_rate,
            f0_floor=self.f0_min,
            f0_ceil=self.f0_max,
            frame_period=1000 * self.hop_length / self.sampling_rate,
        )
        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
        for index, pitch in enumerate(f0):
            f0[index] = round(pitch, 1)
        return self.interpolate_f0(self.resize_f0(f0, p_len))
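A quick sanity check for the predictor above, using a synthetic 220 Hz sine wave (the sample rate and hop length are illustrative; real inputs come from loaded audio):

import numpy as np

sr = 44100
t = np.arange(sr) / sr                      # one second of samples
wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)   # pure 220 Hz tone

predictor = DioF0Predictor(hop_length=512, sampling_rate=sr)
f0 = predictor.compute_f0(wav)
print(f0.shape, float(np.median(f0)))       # median F0 should land near 220 Hz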
spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/__init__.py
DELETED
File without changes
spaces/AIConsultant/MusicGen/scripts/templates/base.html
DELETED
@@ -1,16 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
    {% block head %}
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="{{url_for('static', filename='style.css')}}" />
    <title>AudioCraft — MOS</title>
    {% endblock %}
</head>
<body>
    <div class="content">
        <h1>AudioCraft — MOS </h1>
        {% block content %}{% endblock %}
    </div>
</body>
</html>
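The url_for call in the template suggests it is rendered by a Flask app; a minimal sketch of a server plus child template (the route and file layout are assumptions, not taken from the repository):

# app.py -- assumes templates/base.html (above) and templates/index.html exist,
# where index.html is e.g.:
#   {% extends "base.html" %}{% block content %}<p>Rate the samples here.</p>{% endblock %}
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)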
spaces/AIFILMS/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/align_trans.py
DELETED
@@ -1,304 +0,0 @@
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 24 15:43:29 2017
@author: zhaoy
"""
import numpy as np
import cv2

# from scipy.linalg import lstsq
# from scipy.ndimage import geometric_transform  # , map_coordinates

from models.mtcnn.mtcnn_pytorch.src.matlab_cp2tform import get_similarity_transform_for_cv2

# reference facial points, a list of coordinates (x, y)
REFERENCE_FACIAL_POINTS = [
    [30.29459953, 51.69630051],
    [65.53179932, 51.50139999],
    [48.02519989, 71.73660278],
    [33.54930115, 92.3655014],
    [62.72990036, 92.20410156]
]

DEFAULT_CROP_SIZE = (96, 112)


class FaceWarpException(Exception):
    def __str__(self):
        return 'In File {}:{}'.format(__file__, super().__str__())


def get_reference_facial_points(output_size=None,
                                inner_padding_factor=0.0,
                                outer_padding=(0, 0),
                                default_square=False):
    """
    Function:
    ----------
    get reference 5 key points according to crop settings:
    0. Set default crop_size:
        if default_square:
            crop_size = (112, 112)
        else:
            crop_size = (96, 112)
    1. Pad the crop_size by inner_padding_factor in each side;
    2. Resize crop_size into (output_size - outer_padding*2),
       pad into output_size with outer_padding;
    3. Output reference_5point;
    Parameters:
    ----------
    @output_size: (w, h) or None
        size of aligned face image
    @inner_padding_factor: (w_factor, h_factor)
        padding factor for inner (w, h)
    @outer_padding: (w_pad, h_pad)
        each row is a pair of coordinates (x, y)
    @default_square: True or False
        if True:
            default crop_size = (112, 112)
        else:
            default crop_size = (96, 112);
    !!! make sure, if output_size is not None:
            (output_size - outer_padding)
            = some_scale * (default crop_size * (1.0 + inner_padding_factor))
    Returns:
    ----------
    @reference_5point: 5x2 np.array
        each row is a pair of transformed coordinates (x, y)
    """
    tmp_5pts = np.array(REFERENCE_FACIAL_POINTS)
    tmp_crop_size = np.array(DEFAULT_CROP_SIZE)

    # 0) make the inner region a square
    if default_square:
        size_diff = max(tmp_crop_size) - tmp_crop_size
        tmp_5pts += size_diff / 2
        tmp_crop_size += size_diff

    if (output_size and
            output_size[0] == tmp_crop_size[0] and
            output_size[1] == tmp_crop_size[1]):
        return tmp_5pts

    if (inner_padding_factor == 0 and
            outer_padding == (0, 0)):
        if output_size is None:
            return tmp_5pts
        else:
            raise FaceWarpException(
                'No paddings to do, output_size must be None or {}'.format(tmp_crop_size))

    # check output size
    if not (0 <= inner_padding_factor <= 1.0):
        raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)')

    if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0)
            and output_size is None):
        output_size = (tmp_crop_size * (1 + inner_padding_factor * 2)).astype(np.int32)
        output_size += np.array(outer_padding)

    if not (outer_padding[0] < output_size[0]
            and outer_padding[1] < output_size[1]):
        raise FaceWarpException('Not (outer_padding[0] < output_size[0] '
                                'and outer_padding[1] < output_size[1])')

    # 1) pad the inner region according to inner_padding_factor
    if inner_padding_factor > 0:
        size_diff = tmp_crop_size * inner_padding_factor * 2
        tmp_5pts += size_diff / 2
        tmp_crop_size += np.round(size_diff).astype(np.int32)

    # 2) resize the padded inner region
    size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2

    if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]:
        raise FaceWarpException('Must have (output_size - outer_padding) '
                                '= some_scale * (crop_size * (1.0 + inner_padding_factor))')

    scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0]
    tmp_5pts = tmp_5pts * scale_factor
    tmp_crop_size = size_bf_outer_pad

    # 3) add outer_padding to make output_size
    reference_5point = tmp_5pts + np.array(outer_padding)
    tmp_crop_size = output_size

    return reference_5point


def get_affine_transform_matrix(src_pts, dst_pts):
    """
    Function:
    ----------
    get affine transform matrix 'tfm' from src_pts to dst_pts
    Parameters:
    ----------
    @src_pts: Kx2 np.array
        source points matrix, each row is a pair of coordinates (x, y)
    @dst_pts: Kx2 np.array
        destination points matrix, each row is a pair of coordinates (x, y)
    Returns:
    ----------
    @tfm: 2x3 np.array
        transform matrix from src_pts to dst_pts
    """
    tfm = np.float32([[1, 0, 0], [0, 1, 0]])
    n_pts = src_pts.shape[0]
    ones = np.ones((n_pts, 1), src_pts.dtype)
    src_pts_ = np.hstack([src_pts, ones])
    dst_pts_ = np.hstack([dst_pts, ones])

    A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_, rcond=None)

    if rank == 3:
        tfm = np.float32([
            [A[0, 0], A[1, 0], A[2, 0]],
            [A[0, 1], A[1, 1], A[2, 1]]
        ])
    elif rank == 2:
        tfm = np.float32([
            [A[0, 0], A[1, 0], 0],
            [A[0, 1], A[1, 1], 0]
        ])

    return tfm


def warp_and_crop_face(src_img,
                       facial_pts,
                       reference_pts=None,
                       crop_size=(96, 112),
                       align_type='similarity'):
    """
    Function:
    ----------
    apply affine transform 'trans' to uv
    Parameters:
    ----------
    @src_img: 3x3 np.array
        input image
    @facial_pts: could be
        1) a list of K coordinates (x, y)
        or
        2) Kx2 or 2xK np.array
            each row or col is a pair of coordinates (x, y)
    @reference_pts: could be
        1) a list of K coordinates (x, y)
        or
        2) Kx2 or 2xK np.array
            each row or col is a pair of coordinates (x, y)
        or
        3) None
            if None, use default reference facial points
    @crop_size: (w, h)
        output face image size
    @align_type: transform type, could be one of
        1) 'similarity': use similarity transform
        2) 'cv2_affine': use the first 3 points to do affine transform,
           by calling cv2.getAffineTransform()
        3) 'affine': use all points to do affine transform
    Returns:
    ----------
    @face_img: output face image with size (w, h) = @crop_size
    """
    if reference_pts is None:
        if crop_size[0] == 96 and crop_size[1] == 112:
            reference_pts = REFERENCE_FACIAL_POINTS
        else:
            default_square = False
            inner_padding_factor = 0
            outer_padding = (0, 0)
            output_size = crop_size

            reference_pts = get_reference_facial_points(output_size,
                                                        inner_padding_factor,
                                                        outer_padding,
                                                        default_square)

    ref_pts = np.float32(reference_pts)
    ref_pts_shp = ref_pts.shape
    if max(ref_pts_shp) < 3 or min(ref_pts_shp) != 2:
        raise FaceWarpException(
            'reference_pts.shape must be (K,2) or (2,K) and K>2')

    if ref_pts_shp[0] == 2:
        ref_pts = ref_pts.T

    src_pts = np.float32(facial_pts)
    src_pts_shp = src_pts.shape
    if max(src_pts_shp) < 3 or min(src_pts_shp) != 2:
        raise FaceWarpException(
            'facial_pts.shape must be (K,2) or (2,K) and K>2')

    if src_pts_shp[0] == 2:
        src_pts = src_pts.T

    if src_pts.shape != ref_pts.shape:
        raise FaceWarpException(
            'facial_pts and reference_pts must have the same shape')

    if align_type == 'cv2_affine':
        tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3])
    elif align_type == 'affine':
        tfm = get_affine_transform_matrix(src_pts, ref_pts)
    else:
        tfm = get_similarity_transform_for_cv2(src_pts, ref_pts)

    face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1]))

    return face_img, tfm
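A minimal usage sketch for warp_and_crop_face; the image path and the five (x, y) landmarks are made up, since in practice they come from an MTCNN detector:

import cv2
import numpy as np

img = cv2.imread("face.jpg")  # hypothetical input image
# left eye, right eye, nose tip, left mouth corner, right mouth corner
landmarks = np.array([[171.0, 146.0], [244.0, 145.0], [208.0, 190.0],
                      [178.0, 232.0], [238.0, 231.0]])

# default 96x112 crop with the built-in reference points and a similarity transform
aligned, tfm = warp_and_crop_face(img, landmarks)
cv2.imwrite("face_aligned.jpg", aligned)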
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/model.py
DELETED
@@ -1,835 +0,0 @@
# pytorch_diffusion + derived encoder decoder
import math
import torch
import torch.nn as nn
import numpy as np
from einops import rearrange

from ldm.util import instantiate_from_config
from ldm.modules.attention import LinearAttention
# needed by FirstStagePostProcessor.encode_with_pretrained; this is the standard
# location of the class in the latent-diffusion codebase (it was missing here)
from ldm.modules.distributions.distributions import DiagonalGaussianDistribution


def get_timestep_embedding(timesteps, embedding_dim):
    """
    This matches the implementation in Denoising Diffusion Probabilistic Models:
    From Fairseq.
    Build sinusoidal embeddings.
    This matches the implementation in tensor2tensor, but differs slightly
    from the description in Section 3.5 of "Attention Is All You Need".
    """
    assert len(timesteps.shape) == 1

    half_dim = embedding_dim // 2
    emb = math.log(10000) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb)
    emb = emb.to(device=timesteps.device)
    emb = timesteps.float()[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:  # zero pad
        emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))
    return emb


def nonlinearity(x):
    # swish
    return x * torch.sigmoid(x)


def Normalize(in_channels, num_groups=32):
    return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)


class Upsample(nn.Module):
    def __init__(self, in_channels, with_conv):
        super().__init__()
        self.with_conv = with_conv
        if self.with_conv:
            self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
        if self.with_conv:
            x = self.conv(x)
        return x


class Downsample(nn.Module):
    def __init__(self, in_channels, with_conv):
        super().__init__()
        self.with_conv = with_conv
        if self.with_conv:
            # no asymmetric padding in torch conv, must do it ourselves
            self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0)

    def forward(self, x):
        if self.with_conv:
            pad = (0, 1, 0, 1)
            x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
            x = self.conv(x)
        else:
            x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
        return x


class ResnetBlock(nn.Module):
    def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False,
                 dropout, temb_channels=512):
        super().__init__()
        self.in_channels = in_channels
        out_channels = in_channels if out_channels is None else out_channels
        self.out_channels = out_channels
        self.use_conv_shortcut = conv_shortcut

        self.norm1 = Normalize(in_channels)
        self.conv1 = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
        if temb_channels > 0:
            self.temb_proj = torch.nn.Linear(temb_channels, out_channels)
        self.norm2 = Normalize(out_channels)
        self.dropout = torch.nn.Dropout(dropout)
        self.conv2 = torch.nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
        if self.in_channels != self.out_channels:
            if self.use_conv_shortcut:
                self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
            else:
                self.nin_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)

    def forward(self, x, temb):
        h = x
        h = self.norm1(h)
        h = nonlinearity(h)
        h = self.conv1(h)

        if temb is not None:
            h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None]

        h = self.norm2(h)
        h = nonlinearity(h)
        h = self.dropout(h)
        h = self.conv2(h)

        if self.in_channels != self.out_channels:
            if self.use_conv_shortcut:
                x = self.conv_shortcut(x)
            else:
                x = self.nin_shortcut(x)

        return x + h


class LinAttnBlock(LinearAttention):
    """to match AttnBlock usage"""
    def __init__(self, in_channels):
        super().__init__(dim=in_channels, heads=1, dim_head=in_channels)


class AttnBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.in_channels = in_channels

        self.norm = Normalize(in_channels)
        self.q = torch.nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
        self.k = torch.nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
        self.v = torch.nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
        self.proj_out = torch.nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        h_ = x
        h_ = self.norm(h_)
        q = self.q(h_)
        k = self.k(h_)
        v = self.v(h_)

        # compute attention
        b, c, h, w = q.shape
        q = q.reshape(b, c, h * w)
        q = q.permute(0, 2, 1)      # b,hw,c
        k = k.reshape(b, c, h * w)  # b,c,hw
        w_ = torch.bmm(q, k)        # b,hw,hw    w[b,i,j] = sum_c q[b,i,c] k[b,c,j]
        w_ = w_ * (int(c) ** (-0.5))
        w_ = torch.nn.functional.softmax(w_, dim=2)

        # attend to values
        v = v.reshape(b, c, h * w)
        w_ = w_.permute(0, 2, 1)    # b,hw,hw (first hw of k, second of q)
        h_ = torch.bmm(v, w_)       # b,c,hw (hw of q)   h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j]
        h_ = h_.reshape(b, c, h, w)

        h_ = self.proj_out(h_)

        return x + h_


def make_attn(in_channels, attn_type="vanilla"):
    assert attn_type in ["vanilla", "linear", "none"], f'attn_type {attn_type} unknown'
    print(f"making attention of type '{attn_type}' with {in_channels} in_channels")
    if attn_type == "vanilla":
        return AttnBlock(in_channels)
    elif attn_type == "none":
        return nn.Identity(in_channels)
    else:
        return LinAttnBlock(in_channels)


class Model(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"):
        super().__init__()
        if use_linear_attn: attn_type = "linear"
        self.ch = ch
        self.temb_ch = self.ch * 4
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels

        self.use_timestep = use_timestep
        if self.use_timestep:
            # timestep embedding
            self.temb = nn.Module()
            self.temb.dense = nn.ModuleList([
                torch.nn.Linear(self.ch, self.temb_ch),
                torch.nn.Linear(self.temb_ch, self.temb_ch),
            ])

        # downsampling
        self.conv_in = torch.nn.Conv2d(in_channels, self.ch, kernel_size=3, stride=1, padding=1)

        curr_res = resolution
        in_ch_mult = (1,) + tuple(ch_mult)
        self.down = nn.ModuleList()
        for i_level in range(self.num_resolutions):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_in = ch * in_ch_mult[i_level]
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks):
                block.append(ResnetBlock(in_channels=block_in, out_channels=block_out,
                                         temb_channels=self.temb_ch, dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            down = nn.Module()
            down.block = block
            down.attn = attn
            if i_level != self.num_resolutions - 1:
                down.downsample = Downsample(block_in, resamp_with_conv)
                curr_res = curr_res // 2
            self.down.append(down)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)

        # upsampling
        self.up = nn.ModuleList()
        for i_level in reversed(range(self.num_resolutions)):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_out = ch * ch_mult[i_level]
            skip_in = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks + 1):
                if i_block == self.num_res_blocks:
                    skip_in = ch * in_ch_mult[i_level]
                block.append(ResnetBlock(in_channels=block_in + skip_in, out_channels=block_out,
                                         temb_channels=self.temb_ch, dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            up = nn.Module()
            up.block = block
            up.attn = attn
            if i_level != 0:
                up.upsample = Upsample(block_in, resamp_with_conv)
                curr_res = curr_res * 2
            self.up.insert(0, up)  # prepend to get consistent order

        # end
        self.norm_out = Normalize(block_in)
        self.conv_out = torch.nn.Conv2d(block_in, out_ch, kernel_size=3, stride=1, padding=1)

    def forward(self, x, t=None, context=None):
        # assert x.shape[2] == x.shape[3] == self.resolution
        if context is not None:
            # assume aligned context, cat along channel axis
            x = torch.cat((x, context), dim=1)
        if self.use_timestep:
            # timestep embedding
            assert t is not None
            temb = get_timestep_embedding(t, self.ch)
            temb = self.temb.dense[0](temb)
            temb = nonlinearity(temb)
            temb = self.temb.dense[1](temb)
        else:
            temb = None

        # downsampling
        hs = [self.conv_in(x)]
        for i_level in range(self.num_resolutions):
            for i_block in range(self.num_res_blocks):
                h = self.down[i_level].block[i_block](hs[-1], temb)
                if len(self.down[i_level].attn) > 0:
                    h = self.down[i_level].attn[i_block](h)
                hs.append(h)
            if i_level != self.num_resolutions - 1:
                hs.append(self.down[i_level].downsample(hs[-1]))

        # middle
        h = hs[-1]
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # upsampling
        for i_level in reversed(range(self.num_resolutions)):
            for i_block in range(self.num_res_blocks + 1):
                h = self.up[i_level].block[i_block](
                    torch.cat([h, hs.pop()], dim=1), temb)
                if len(self.up[i_level].attn) > 0:
                    h = self.up[i_level].attn[i_block](h)
            if i_level != 0:
                h = self.up[i_level].upsample(h)

        # end
        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        return h

    def get_last_layer(self):
        return self.conv_out.weight


class Encoder(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla",
                 **ignore_kwargs):
        super().__init__()
        if use_linear_attn: attn_type = "linear"
        self.ch = ch
        self.temb_ch = 0
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels

        # downsampling
        self.conv_in = torch.nn.Conv2d(in_channels, self.ch, kernel_size=3, stride=1, padding=1)

        curr_res = resolution
        in_ch_mult = (1,) + tuple(ch_mult)
        self.in_ch_mult = in_ch_mult
        self.down = nn.ModuleList()
        for i_level in range(self.num_resolutions):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_in = ch * in_ch_mult[i_level]
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks):
                block.append(ResnetBlock(in_channels=block_in, out_channels=block_out,
                                         temb_channels=self.temb_ch, dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))  # vanilla attention
            down = nn.Module()
            down.block = block
            down.attn = attn
            if i_level != self.num_resolutions - 1:
                down.downsample = Downsample(block_in, resamp_with_conv)
                curr_res = curr_res // 2
            self.down.append(down)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)

        # end
        self.norm_out = Normalize(block_in)  # GroupNorm
        self.conv_out = torch.nn.Conv2d(block_in, 2 * z_channels if double_z else z_channels,
                                        kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # timestep embedding
        temb = None

        # downsampling
        hs = [self.conv_in(x)]
        for i_level in range(self.num_resolutions):
            for i_block in range(self.num_res_blocks):
                h = self.down[i_level].block[i_block](hs[-1], temb)
                if len(self.down[i_level].attn) > 0:
                    h = self.down[i_level].attn[i_block](h)
                hs.append(h)
            if i_level != self.num_resolutions - 1:
                hs.append(self.down[i_level].downsample(hs[-1]))

        # middle
        h = hs[-1]
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # end
        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        return h


class Decoder(nn.Module):
    def __init__(self, *, ch, out_ch, ch_mult=(1, 2, 4, 8), num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels,
                 resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,
                 attn_type="vanilla", **ignorekwargs):
        super().__init__()
        if use_linear_attn: attn_type = "linear"
        self.ch = ch
        self.temb_ch = 0
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        self.resolution = resolution
        self.in_channels = in_channels
        self.give_pre_end = give_pre_end
        self.tanh_out = tanh_out

        # compute in_ch_mult, block_in and curr_res at lowest res
        in_ch_mult = (1,) + tuple(ch_mult)
        block_in = ch * ch_mult[self.num_resolutions - 1]
        curr_res = resolution // 2 ** (self.num_resolutions - 1)
        self.z_shape = (1, z_channels, curr_res, curr_res)
        print("Working with z of shape {} = {} dimensions.".format(
            self.z_shape, np.prod(self.z_shape)))

        # z to block_in
        self.conv_in = torch.nn.Conv2d(z_channels, block_in, kernel_size=3, stride=1, padding=1)

        # middle
        self.mid = nn.Module()
        self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)
        self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
        self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in,
                                       temb_channels=self.temb_ch, dropout=dropout)

        # upsampling
        self.up = nn.ModuleList()
        for i_level in reversed(range(self.num_resolutions)):
            block = nn.ModuleList()
            attn = nn.ModuleList()
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks + 1):
                block.append(ResnetBlock(in_channels=block_in, out_channels=block_out,
                                         temb_channels=self.temb_ch, dropout=dropout))
                block_in = block_out
                if curr_res in attn_resolutions:
                    attn.append(make_attn(block_in, attn_type=attn_type))
            up = nn.Module()
            up.block = block
            up.attn = attn
            if i_level != 0:
                up.upsample = Upsample(block_in, resamp_with_conv)
                curr_res = curr_res * 2
            self.up.insert(0, up)  # prepend to get consistent order

        # end
        self.norm_out = Normalize(block_in)
        self.conv_out = torch.nn.Conv2d(block_in, out_ch, kernel_size=3, stride=1, padding=1)

    def forward(self, z):
        # assert z.shape[1:] == self.z_shape[1:]
        self.last_z_shape = z.shape

        # timestep embedding
        temb = None

        # z to block_in
        h = self.conv_in(z)

        # middle
        h = self.mid.block_1(h, temb)
        h = self.mid.attn_1(h)
        h = self.mid.block_2(h, temb)

        # upsampling
        for i_level in reversed(range(self.num_resolutions)):
            for i_block in range(self.num_res_blocks + 1):
                h = self.up[i_level].block[i_block](h, temb)
                if len(self.up[i_level].attn) > 0:
                    h = self.up[i_level].attn[i_block](h)
            if i_level != 0:
                h = self.up[i_level].upsample(h)

        # end
        if self.give_pre_end:
            return h

        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        if self.tanh_out:
            h = torch.tanh(h)
        return h


class SimpleDecoder(nn.Module):
    def __init__(self, in_channels, out_channels, *args, **kwargs):
        super().__init__()
        self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1),
                                    ResnetBlock(in_channels=in_channels,
                                                out_channels=2 * in_channels,
                                                temb_channels=0, dropout=0.0),
                                    ResnetBlock(in_channels=2 * in_channels,
                                                out_channels=4 * in_channels,
                                                temb_channels=0, dropout=0.0),
                                    ResnetBlock(in_channels=4 * in_channels,
                                                out_channels=2 * in_channels,
                                                temb_channels=0, dropout=0.0),
                                    nn.Conv2d(2 * in_channels, in_channels, 1),
                                    Upsample(in_channels, with_conv=True)])
        # end
        self.norm_out = Normalize(in_channels)
        self.conv_out = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        for i, layer in enumerate(self.model):
            if i in [1, 2, 3]:
                x = layer(x, None)
            else:
                x = layer(x)

        h = self.norm_out(x)
        h = nonlinearity(h)
        x = self.conv_out(h)
        return x


class UpsampleDecoder(nn.Module):
    def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution,
                 ch_mult=(2, 2), dropout=0.0):
        super().__init__()
        # upsampling
        self.temb_ch = 0
        self.num_resolutions = len(ch_mult)
        self.num_res_blocks = num_res_blocks
        block_in = in_channels
        curr_res = resolution // 2 ** (self.num_resolutions - 1)
        self.res_blocks = nn.ModuleList()
        self.upsample_blocks = nn.ModuleList()
        for i_level in range(self.num_resolutions):
            res_block = []
            block_out = ch * ch_mult[i_level]
            for i_block in range(self.num_res_blocks + 1):
                res_block.append(ResnetBlock(in_channels=block_in, out_channels=block_out,
                                             temb_channels=self.temb_ch, dropout=dropout))
                block_in = block_out
            self.res_blocks.append(nn.ModuleList(res_block))
            if i_level != self.num_resolutions - 1:
                self.upsample_blocks.append(Upsample(block_in, True))
                curr_res = curr_res * 2

        # end
        self.norm_out = Normalize(block_in)
        self.conv_out = torch.nn.Conv2d(block_in, out_channels, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # upsampling
        h = x
        for k, i_level in enumerate(range(self.num_resolutions)):
            for i_block in range(self.num_res_blocks + 1):
                h = self.res_blocks[i_level][i_block](h, None)
            if i_level != self.num_resolutions - 1:
                h = self.upsample_blocks[k](h)
        h = self.norm_out(h)
        h = nonlinearity(h)
        h = self.conv_out(h)
        return h


class LatentRescaler(nn.Module):
    def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2):
        super().__init__()
        # residual block, interpolate, residual block
        self.factor = factor
        self.conv_in = nn.Conv2d(in_channels, mid_channels, kernel_size=3, stride=1, padding=1)
        self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
                                                     out_channels=mid_channels,
                                                     temb_channels=0,
                                                     dropout=0.0) for _ in range(depth)])
        self.attn = AttnBlock(mid_channels)
        self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels,
                                                     out_channels=mid_channels,
                                                     temb_channels=0,
                                                     dropout=0.0) for _ in range(depth)])

        self.conv_out = nn.Conv2d(mid_channels, out_channels, kernel_size=1)

    def forward(self, x):
        x = self.conv_in(x)
        for block in self.res_block1:
            x = block(x, None)
        x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2] * self.factor)),
                                                     int(round(x.shape[3] * self.factor))))
        x = self.attn(x)
        for block in self.res_block2:
            x = block(x, None)
        x = self.conv_out(x)
        return x


class MergedRescaleEncoder(nn.Module):
    def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks,
                 attn_resolutions, dropout=0.0, resamp_with_conv=True,
                 ch_mult=(1, 2, 4, 8), rescale_factor=1.0, rescale_module_depth=1):
        super().__init__()
        intermediate_chn = ch * ch_mult[-1]
        self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult,
                               z_channels=intermediate_chn, double_z=False, resolution=resolution,
                               attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv,
                               out_ch=None)
        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn,
                                       mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth)

    def forward(self, x):
        x = self.encoder(x)
        x = self.rescaler(x)
        return x


class MergedRescaleDecoder(nn.Module):
    def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1, 2, 4, 8),
                 dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1):
        super().__init__()
        tmp_chn = z_channels * ch_mult[-1]
        self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout,
                               resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks,
                               ch_mult=ch_mult, resolution=resolution, ch=ch)
        self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn,
                                       out_channels=tmp_chn, depth=rescale_module_depth)

    def forward(self, x):
        x = self.rescaler(x)
        x = self.decoder(x)
        return x


class Upsampler(nn.Module):
    def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2):
        super().__init__()
        assert out_size >= in_size
        num_blocks = int(np.log2(out_size // in_size)) + 1
        factor_up = 1. + (out_size % in_size)
        print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}")
        self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2 * in_channels,
                                       out_channels=in_channels)
        self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2,
                               attn_resolutions=[], in_channels=None, ch=in_channels,
                               ch_mult=[ch_mult for _ in range(num_blocks)])

    def forward(self, x):
        x = self.rescaler(x)
        x = self.decoder(x)
        return x


class Resize(nn.Module):
    def __init__(self, in_channels=None, learned=False, mode="bilinear"):
        super().__init__()
        self.with_conv = learned
        self.mode = mode
        if self.with_conv:
            print(f"Note: {self.__class__.__name__} uses learned downsampling and will ignore the fixed {mode} mode")
            raise NotImplementedError()
            assert in_channels is not None
            # no asymmetric padding in torch conv, must do it ourselves
            self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=4, stride=2, padding=1)

    def forward(self, x, scale_factor=1.0):
        if scale_factor == 1.0:
            return x
        else:
            x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor)
        return x


class FirstStagePostProcessor(nn.Module):

    def __init__(self, ch_mult: list, in_channels,
                 pretrained_model: nn.Module = None,
                 reshape=False,
                 n_channels=None,
                 dropout=0.,
                 pretrained_config=None):
        super().__init__()
        if pretrained_config is None:
            assert pretrained_model is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
            self.pretrained_model = pretrained_model
        else:
            assert pretrained_config is not None, 'Either "pretrained_model" or "pretrained_config" must not be None'
            self.instantiate_pretrained(pretrained_config)

        self.do_reshape = reshape

        if n_channels is None:
            n_channels = self.pretrained_model.encoder.ch

        self.proj_norm = Normalize(in_channels, num_groups=in_channels // 2)
        self.proj = nn.Conv2d(in_channels, n_channels, kernel_size=3, stride=1, padding=1)

        blocks = []
        downs = []
        ch_in = n_channels
        for m in ch_mult:
            blocks.append(ResnetBlock(in_channels=ch_in, out_channels=m * n_channels, dropout=dropout))
            ch_in = m * n_channels
            downs.append(Downsample(ch_in, with_conv=False))

        self.model = nn.ModuleList(blocks)
        self.downsampler = nn.ModuleList(downs)

    def instantiate_pretrained(self, config):
        model = instantiate_from_config(config)
        self.pretrained_model = model.eval()
        # self.pretrained_model.train = False
        for param in self.pretrained_model.parameters():
            param.requires_grad = False

    @torch.no_grad()
    def encode_with_pretrained(self, x):
        c = self.pretrained_model.encode(x)
        if isinstance(c, DiagonalGaussianDistribution):
            c = c.mode()
        return c

    def forward(self, x):
        z_fs = self.encode_with_pretrained(x)
        z = self.proj_norm(z_fs)
        z = self.proj(z)
        z = nonlinearity(z)

        for submodel, downmodel in zip(self.model, self.downsampler):
            z = submodel(z, temb=None)
            z = downmodel(z)

        if self.do_reshape:
            z = rearrange(z, 'b c h w -> b (h w) c')
        return z
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-150e_deepfashion2_long_sleeved_dress_256x192.py
DELETED
@@ -1,172 +0,0 @@
_base_ = [
    '../../../_base_/default_runtime.py',
    '../../../_base_/datasets/deepfashion2.py'
]

default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater'))

resume = False  # resume from a checkpoint
load_from = None  # model weights to load
train_cfg = dict(by_epoch=True, max_epochs=150, val_interval=10)  # training epochs and validation interval
param_scheduler = [
    dict(  # warmup strategy
        type='LinearLR',
        begin=0,
        end=500,
        start_factor=0.001,
        by_epoch=False),
    dict(  # scheduler
        type='MultiStepLR',
        begin=0,
        end=150,
        milestones=[100, 130],
        gamma=0.1,
        by_epoch=True)
]
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))  # optimizer and learning rate
auto_scale_lr = dict(base_batch_size=512)  # scale the learning rate automatically with the batch size

backend_args = dict(backend='local')  # data-loading backend; reads from the local disk by default
dataset_type = 'DeepFashion2Dataset'  # dataset class name
data_mode = 'topdown'  # algorithm type, which decides how annotations are loaded
data_root = 'data/deepfashion2/'  # data location
# Codec that generates targets and decodes predictions; it also carries the
# input image size and the output heatmap size.
codec = dict(
    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)

train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(
        type='RandomBBoxTransform',
        shift_prob=0,
        rotate_factor=60,
        scale_factor=(0.75, 1.25)),
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='GenerateTarget', encoder=codec),
    dict(type='PackPoseInputs')
]
val_pipeline = [  # test-time data transforms
    dict(type='LoadImage', backend_args=backend_args),  # load the image
    dict(type='GetBBoxCenterScale'),  # derive center and scale from the bbox
    dict(type='TopdownAffine', input_size=codec['input_size']),  # update targets with the affine transform
    dict(type='PackPoseInputs')  # pack targets for the model
]
train_dataloader = dict(  # training data loading
    batch_size=16,  # batch size
    num_workers=6,  # number of data-loading workers
    persistent_workers=True,  # keep workers alive between epochs to avoid restart overhead
    sampler=dict(type='DefaultSampler', shuffle=True),  # shuffle the data
    dataset=dict(
        type=dataset_type,  # dataset class name
        data_root=data_root,  # dataset path
        data_mode=data_mode,  # algorithm type
        ann_file='train/deepfashion2_long_sleeved_dress.json',  # annotation file
        data_prefix=dict(img='train/image/'),  # image path
        pipeline=train_pipeline  # data pipeline
    ))
val_dataloader = dict(
    batch_size=16,
    num_workers=6,
    persistent_workers=True,  # keep workers alive between epochs to avoid restart overhead
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),  # no shuffling for validation
    dataset=dict(
        type=dataset_type,  # dataset class name
        data_root=data_root,  # dataset path
        data_mode=data_mode,  # algorithm type
        ann_file='validation/deepfashion2_long_sleeved_dress.json',  # annotation file
        data_prefix=dict(img='validation/image/'),  # image path
        test_mode=True,  # test-mode switch
        pipeline=val_pipeline  # data pipeline
    ))
test_dataloader = val_dataloader  # validation and test sets are not distinguished by default

channel_cfg = dict(
    num_output_channels=294,
    dataset_joints=294,
    dataset_channel=[
        list(range(294)),  # all 294 keypoints, i.e. [0, 1, ..., 293]
    ],
    inference_channel=list(range(294)))  # likewise [0, 1, ..., 293]

model = dict(
    type='TopdownPoseEstimator',  # the model type defines the algorithm flow
    data_preprocessor=dict(  # normalization and channel reordering, part of the model
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
        init_cfg=dict(
            type='Pretrained',  # load only the backbone weights, for transfer learning
            checkpoint='torchvision://resnet50')),
    head=dict(  # model head
        type='HeatmapHead',
        in_channels=2048,
        out_channels=channel_cfg['num_output_channels'],
        # deconv_out_channels=None,
        loss=dict(type='KeypointMSELoss', use_target_weight=True),  # loss function
        decoder=codec),  # decoder that turns heatmaps back into coordinates
    test_cfg=dict(
        flip_test=True,  # enable horizontal-flip test-time augmentation
        flip_mode='heatmap',  # flip the heatmap
        shift_heatmap=True,  # shift the flipped result for better accuracy
    ))

val_evaluator = [
    dict(type='PCKAccuracy', thr=0.2),
    dict(type='AUC'),
    dict(type='EPE'),
]
test_evaluator = val_evaluator  # validation and test sets are not distinguished by default

visualizer = dict(
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='WandbVisBackend')])
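The schedule above pairs a 500-iteration linear warmup with 10x decays at epochs 100 and 130. A plain-Python sketch of the resulting learning-rate curve; the 1000 iterations per epoch is an illustrative assumption, not a value from the config:

base_lr = 0.0005  # matches optim_wrapper above

def lr_at(epoch: int, it_in_epoch: int, iters_per_epoch: int = 1000) -> float:
    global_it = epoch * iters_per_epoch + it_in_epoch
    # LinearLR: ramp from start_factor=0.001 to 1.0 over the first 500 iterations
    factor = 0.001 + (1.0 - 0.001) * min(global_it, 500) / 500
    # MultiStepLR: multiply by gamma=0.1 at each passed milestone
    for milestone in (100, 130):
        if epoch >= milestone:
            factor *= 0.1
    return base_lr * factor

for epoch, it in [(0, 0), (0, 250), (1, 0), (100, 0), (130, 0)]:
    print(f"epoch {epoch:>3}, iter {it:>3}: lr = {lr_at(epoch, it):.2e}")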
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnest50.py
DELETED
@@ -1,24 +0,0 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNeSt',
        depth=50,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2048,
        loss=dict(
            type='LabelSmoothLoss',
            label_smooth_val=0.1,
            num_classes=1000,
            reduction='mean',
            loss_weight=1.0),
        topk=(1, 5),
        cal_acc=False),
    train_cfg=dict(augments=dict(type='Mixup', alpha=0.2)),
)
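As a numeric aside on the head's loss: with label_smooth_val=0.1 over 1000 classes, and assuming the common one_hot * (1 - eps) + eps / K smoothing convention (the library's exact mode may differ), the smoothed target mass works out to:

num_classes, eps = 1000, 0.1
on_value = (1.0 - eps) + eps / num_classes   # weight on the true class
off_value = eps / num_classes                # weight on every other class
print(on_value, off_value)                   # 0.9001 0.0001
print(on_value + (num_classes - 1) * off_value)  # sums to 1.0 (up to float error)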
spaces/AUST001/True-GPT4/app.py
DELETED
@@ -1,99 +0,0 @@
from pickle import NONE
import numpy as np
import cv2
import urllib.request
import openai
import gradio as gr
import random
import poe

client = None
user_contexts = {}

def get_assistant_response(user_question, context):
    global client
    context.append({"role": "user", "content": user_question})
    for chunk in client.send_message("beaver", context):  # capybara
        pass
        # print(chunk["text"])
    assistant_response = chunk["text"]
    context.append({"role": "assistant", "content": assistant_response})
    client.send_chat_break("beaver")  # capybara
    return assistant_response

def generate_image_url(prompt):
    response = openai.Image.create(
        prompt=prompt,
        n=1,  # generate a single image
        size="512x512",  # image size
    )
    image_url = response["data"][0]["url"]
    return image_url

def greet(user_id, api_key, user_question, clear_history):
    global client
    if len(api_key) > 5:
        client = poe.Client(api_key)
    global user_contexts
    if user_id not in user_contexts:
        user_contexts[user_id] = [
            {"role": "system", "content": "你是一个聪明的AI助手。请参考对话记录,回答用户的最后一个问题,无需做多余的解释,更不要强调对话历史的事情"},
            {"role": "user", "content": "你会说中文吗?"},
            {"role": "assistant", "content": "是的,我可以说中文。"}
        ]

    context = user_contexts[user_id]

    if clear_history:
        context = [
            {"role": "system", "content": "你是一个聪明的AI助手。请参考对话记录,回答用户的最后一个问题,无需做多余的解释,更不要强调对话历史的事情"},
            {"role": "user", "content": "你会说中文吗?"},
            {"role": "assistant", "content": "是的,我可以说中文。"}
        ]
        user_contexts[user_id] = context
        return '清空成功', '保持聊天记录', np.ones((5, 5))
    else:
        # If the question starts with the image-generation command prefix
        # ("生成图片:", i.e. "generate image:"; both colon widths are accepted)
        if user_question.startswith("生成图片:") or user_question.startswith("生成图片:"):
            image_prompt = user_question[5:]  # extract the text used as the image prompt
            image_url = generate_image_url(image_prompt)
            resp = urllib.request.urlopen(image_url)
            image = np.asarray(bytearray(resp.read()), dtype="uint8")
            image = cv2.imdecode(image, cv2.IMREAD_COLOR)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            # return image
            return '', '图片已生成', image
        get_assistant_response(user_question, context)
        prompt = ""

        for item in context[3:]:
            prompt += item["role"] + ": " + item["content"] + "\n"
        return '', prompt, np.ones((5, 5))

demo = gr.Interface(
    fn=greet,
    inputs=[
        gr.Textbox(lines=1, label='请输入用户ID', placeholder='请输入用户ID'),
        gr.Textbox(lines=1, label='请输入你的专属密钥', placeholder='请输入你的专属密钥'),
        gr.Textbox(lines=15, label='请输入问题', placeholder='请输入您的问题'),
        gr.Checkbox(label='清空聊天记录', default=False)
    ],
    outputs=[
        gr.Textbox(lines=1, label='聊天记录状态', placeholder='等待清空聊天记录'),
        gr.Textbox(lines=23, label='AI回答', placeholder='等待AI回答'),
        gr.Image(label='生成图片')  # greet returns three values, so a third
                                    # output component is required; this label
                                    # is an editor-added assumption
    ],
    title="True GPT4",
    description="""
    1.使用说明:
        请输入您的问题,AI助手会给出回答。
        支持连续对话,可以记录对话历史。
        重新开始对话勾选清空聊天记录,输出清空成功表示重新开启对话。
    2.特别警告:
        为了防止用户数据混乱,请自定义用户ID。
        理论上如果被别人知道自己的ID,那么别人可以查看自己的历史对话,对此你可以选择在对话结束后清除对话记录。
    3.作者的GPT4网页导航网站链接如下:http://aust001.pythonanywhere.com/ -> 专属密钥进群获取
    """
)

if __name__ == "__main__":
    demo.launch()
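The app above keeps one message list per caller-supplied user ID and rebuilds it on the clear-history path. A self-contained sketch of that per-user store, with the system prompt translated to English and the helper name (get_context) purely illustrative:

SYSTEM_PROMPT = {"role": "system",
                 "content": "You are a clever AI assistant. Answer the user's last question."}

user_contexts: dict = {}

def get_context(user_id: str, clear_history: bool = False) -> list:
    if clear_history or user_id not in user_contexts:
        user_contexts[user_id] = [dict(SYSTEM_PROMPT)]  # fresh copy per user
    return user_contexts[user_id]

ctx = get_context("alice")
ctx.append({"role": "user", "content": "hello"})
print(len(get_context("alice")))                      # 2 -- history persists
print(len(get_context("alice", clear_history=True)))  # 1 -- history wiped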
spaces/Acapellas/vocalinstrumentalremover/README.md
DELETED
@@ -1,39 +0,0 @@
---
title: null
emoji: ⚡
colorFrom: red
colorTo: gray
sdk: gradio
app_file: app.py
pinned: true
duplicated_from: null
python_version: 3.9.13
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio` or `streamlit`

`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toggleswitch.js
DELETED
@@ -1,2 +0,0 @@
import ToggleSwitch from './gameobjects/shape/toggleswitch/ToggleSwitch.js';
export default ToggleSwitch;
spaces/Aishwini/myfirstaigen/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Myfirstaigen
emoji: ⚡
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Aki004/herta-so-vits/onnxexport/model_onnx.py
DELETED
@@ -1,335 +0,0 @@
import torch
from torch import nn
from torch.nn import functional as F

import modules.attentions as attentions
import modules.commons as commons
import modules.modules as modules

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm

import utils
from modules.commons import init_weights, get_padding
from vdecoder.hifigan.models import Generator
from utils import f0_to_coarse


class ResidualCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 n_flows=4,
                 gin_channels=0):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(
                modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
                                              gin_channels=gin_channels, mean_only=True))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class Encoder(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 gin_channels=0):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        # print(x.shape, x_lengths.shape)
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class TextEncoder(nn.Module):
    def __init__(self,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 n_layers,
                 gin_channels=0,
                 filter_channels=None,
                 n_heads=None,
                 p_dropout=None):
        super().__init__()
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.gin_channels = gin_channels
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
        self.f0_emb = nn.Embedding(256, hidden_channels)

        self.enc_ = attentions.Encoder(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout)

    def forward(self, x, x_mask, f0=None, z=None):
        x = x + self.f0_emb(f0).transpose(1, 2)
        x = self.enc_(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + z * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
        ])
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv1d(1, 16, 15, 1, padding=7)),
            norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
            norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
            norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
        ])
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class F0Decoder(nn.Module):
    def __init__(self,
                 out_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 spk_channels=0):
        super().__init__()
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.spk_channels = spk_channels

        self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1)
        self.decoder = attentions.FFT(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout)
        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
        self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1)
        self.cond = nn.Conv1d(spk_channels, hidden_channels, 1)

    def forward(self, x, norm_f0, x_mask, spk_emb=None):
        x = torch.detach(x)
        if spk_emb is not None:
            x = x + self.cond(spk_emb)
        x += self.f0_prenet(norm_f0)
        x = self.prenet(x) * x_mask
        x = self.decoder(x * x_mask, x_mask)
        x = self.proj(x) * x_mask
        return x


class SynthesizerTrn(nn.Module):
    """
    Synthesizer for Training
    """

    def __init__(self,
                 spec_channels,
                 segment_size,
                 inter_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 resblock,
                 resblock_kernel_sizes,
                 resblock_dilation_sizes,
                 upsample_rates,
                 upsample_initial_channel,
                 upsample_kernel_sizes,
                 gin_channels,
                 ssl_dim,
                 n_speakers,
                 sampling_rate=44100,
                 **kwargs):
        super().__init__()
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
        self.upsample_initial_channel = upsample_initial_channel
        self.upsample_kernel_sizes = upsample_kernel_sizes
        self.segment_size = segment_size
        self.gin_channels = gin_channels
        self.ssl_dim = ssl_dim
        self.emb_g = nn.Embedding(n_speakers, gin_channels)

        self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2)

        self.enc_p = TextEncoder(
            inter_channels,
            hidden_channels,
            filter_channels=filter_channels,
            n_heads=n_heads,
            n_layers=n_layers,
            kernel_size=kernel_size,
            p_dropout=p_dropout
        )
        hps = {
            "sampling_rate": sampling_rate,
            "inter_channels": inter_channels,
            "resblock": resblock,
            "resblock_kernel_sizes": resblock_kernel_sizes,
            "resblock_dilation_sizes": resblock_dilation_sizes,
            "upsample_rates": upsample_rates,
            "upsample_initial_channel": upsample_initial_channel,
            "upsample_kernel_sizes": upsample_kernel_sizes,
            "gin_channels": gin_channels,
        }
        self.dec = Generator(h=hps)
        self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
        self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
        self.f0_decoder = F0Decoder(
            1,
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout,
            spk_channels=gin_channels
        )
        self.emb_uv = nn.Embedding(2, hidden_channels)
        self.predict_f0 = False

    def forward(self, c, f0, mel2ph, uv, noise=None, g=None):

        decoder_inp = F.pad(c, [0, 0, 1, 0])
        mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]])
        c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2)  # [B, T, H]

        c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
        g = g.unsqueeze(0)
        g = self.emb_g(g).transpose(1, 2)
        x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
        x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2)

        if self.predict_f0:
            lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
            norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False)
            pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)
            f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1)

        z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise)
        z = self.flow(z_p, c_mask, g=g, reverse=True)
        o = self.dec(z * c_mask, g=g, f0=f0)
        return o
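SynthesizerTrn.forward above maps f0 through a scaled mel-style log before the F0 decoder and inverts it afterwards. A round-trip sketch of exactly that formula pair:

import torch

def f0_to_lf0(f0: torch.Tensor) -> torch.Tensor:
    # same expression as in forward(): 2595 * log10(1 + f0/700) / 500
    return 2595.0 * torch.log10(1.0 + f0 / 700.0) / 500.0

def lf0_to_f0(lf0: torch.Tensor) -> torch.Tensor:
    # the inverse applied to the decoder's prediction
    return 700.0 * (torch.pow(10.0, lf0 * 500.0 / 2595.0) - 1.0)

f0 = torch.tensor([110.0, 220.0, 440.0])
print(lf0_to_f0(f0_to_lf0(f0)))  # tensor([110., 220., 440.]) up to float error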
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/shanghainese.py
DELETED
@@ -1,64 +0,0 @@
import re
import cn2an
import opencc


converter = opencc.OpenCC('zaonhe')

# List of (Latin alphabet, ipa) pairs:
_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
    ('A', 'ᴇ'),
    ('B', 'bi'),
    ('C', 'si'),
    ('D', 'di'),
    ('E', 'i'),
    ('F', 'ᴇf'),
    ('G', 'dʑi'),
    ('H', 'ᴇtɕʰ'),
    ('I', 'ᴀi'),
    ('J', 'dʑᴇ'),
    ('K', 'kʰᴇ'),
    ('L', 'ᴇl'),
    ('M', 'ᴇm'),
    ('N', 'ᴇn'),
    ('O', 'o'),
    ('P', 'pʰi'),
    ('Q', 'kʰiu'),
    ('R', 'ᴀl'),
    ('S', 'ᴇs'),
    ('T', 'tʰi'),
    ('U', 'ɦiu'),
    ('V', 'vi'),
    ('W', 'dᴀbɤliu'),
    ('X', 'ᴇks'),
    ('Y', 'uᴀi'),
    ('Z', 'zᴇ')
]]


def _number_to_shanghainese(num):
    num = cn2an.an2cn(num).replace('一十', '十').replace('二十', '廿').replace('二', '两')
    return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num)


def number_to_shanghainese(text):
    return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text)


def latin_to_ipa(text):
    for regex, replacement in _latin_to_ipa:
        text = re.sub(regex, replacement, text)
    return text


def shanghainese_to_ipa(text):
    text = number_to_shanghainese(text.upper())
    text = converter.convert(text).replace('-', '').replace('$', ' ')
    text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group()) + ' ', text)
    text = re.sub(r'[、;:]', ',', text)
    text = re.sub(r'\s*,\s*', ', ', text)
    text = re.sub(r'\s*。\s*', '. ', text)
    text = re.sub(r'\s*?\s*', '? ', text)
    text = re.sub(r'\s*!\s*', '! ', text)
    text = re.sub(r'\s*$', '', text)
    return text
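latin_to_ipa above is a plain ordered regex substitution over the pair list. A dependency-free sketch with just the first three pairs, enough to see the mechanism:

import re

_latin_to_ipa = [(re.compile(pattern), ipa) for pattern, ipa in [
    ('A', 'ᴇ'), ('B', 'bi'), ('C', 'si'),
]]

def latin_to_ipa(text: str) -> str:
    for regex, replacement in _latin_to_ipa:
        text = regex.sub(replacement, text)
    return text

print(latin_to_ipa('ABC'))  # ᴇbisi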
spaces/Aloento/9Nine-PITS/text/paddle_zh.py
DELETED
@@ -1,115 +0,0 @@
from text.frontend.zh_frontend import Frontend

frontend = Frontend()

pu_symbols = ['!', '?', '…', ",", "."]
replacements = [
    (u"yu", u"u:"), (u"ü", u"u:"), (u"v", u"u:"),
    (u"yi", u"i"), (u"you", u"ㄧㄡ"), (u"y", u"i"),
    (u"wu", u"u"), (u"wong", u"ㄨㄥ"), (u"w", u"u"),
]

table = [
    # special cases
    (u"ju", u"ㄐㄩ"), (u"qu", u"ㄑㄩ"), (u"xu", u"ㄒㄩ"),
    (u"zhi", u"ㄓ"), (u"chi", u"ㄔ"), (u"shi", u"ㄕ"), (u"ri", u"ㄖ"),
    (u"zi", u"ㄗ"), (u"ci", u"ㄘ"), (u"si", u"ㄙ"),
    (u"r5", u"ㄦ"),

    # initials
    (u"b", u"ㄅ"), (u"p", u"ㄆ"), (u"m", u"ㄇ"), (u"f", u"ㄈ"),
    (u"d", u"ㄉ"), (u"t", u"ㄊ"), (u"n", u"ㄋ"), (u"l", u"ㄌ"),
    (u"g", u"ㄍ"), (u"k", u"ㄎ"), (u"h", u"ㄏ"),
    (u"j", u"ㄐ"), (u"q", u"ㄑ"), (u"x", u"ㄒ"),
    (u"zh", u"ㄓ"), (u"ch", u"ㄔ"), (u"sh", u"ㄕ"), (u"r", u"ㄖ"),
    (u"z", u"ㄗ"), (u"c", u"ㄘ"), (u"s", u"ㄙ"),

    # finals
    (u"i", u"ㄧ"), (u"u", u"ㄨ"), (u"u:", u"ㄩ"),
    (u"a", u"ㄚ"), (u"o", u"ㄛ"), (u"e", u"ㄜ"), (u"ê", u"ㄝ"),
    (u"ai", u"ㄞ"), (u"ei", u"ㄟ"), (u"ao", u"ㄠ"), (u"ou", u"ㄡ"),
    (u"an", u"ㄢ"), (u"en", u"ㄣ"), (u"ang", u"ㄤ"), (u"eng", u"ㄥ"),
    (u"er", u"ㄦ"),
    (u"ia", u"ㄧㄚ"), (u"io", u"ㄧㄛ"), (u"ie", u"ㄧㄝ"), (u"iai", u"ㄧㄞ"),
    (u"iao", u"ㄧㄠ"), (u"iu", u"ㄧㄡ"), (u"ian", u"ㄧㄢ"),
    (u"in", u"ㄧㄣ"), (u"iang", u"ㄧㄤ"), (u"ing", u"ㄧㄥ"),
    (u"ua", u"ㄨㄚ"), (u"uo", u"ㄨㄛ"), (u"uai", u"ㄨㄞ"),
    (u"ui", u"ㄨㄟ"), (u"uan", u"ㄨㄢ"), (u"un", u"ㄨㄣ"),
    (u"uang", u"ㄨㄤ"), (u"ong", u"ㄨㄥ"),
    (u"u:e", u"ㄩㄝ"), (u"u:an", u"ㄩㄢ"), (u"u:n", u"ㄩㄣ"), (u"iong", u"ㄩㄥ"),

    # tones
    (u"1", u"ˉ"), (u"2", u"ˊ"),
    (u"3", u"ˇ"), (u"4", u"ˋ"),
    (u"5", u"˙"),
]

table.sort(key=lambda pair: len(pair[0]), reverse=True)
replacements.extend(table)

zh_dict = [i.strip() for i in open("text/zh_dict.dict").readlines()]
zh_dict = {i.split("\t")[0]: i.split("\t")[1] for i in zh_dict}

reversed_zh_dict = {}
all_zh_phones = set()
for k, v in zh_dict.items():
    reversed_zh_dict[v] = k
    [all_zh_phones.add(i) for i in v.split(" ")]


def bopomofo(pinyin):
    """
    Convert a pinyin string to Bopomofo
    The optional tone info must be given as a number suffix, eg: 'ni3'
    """

    pinyin = pinyin.lower()
    for pair in replacements:
        pinyin = pinyin.replace(pair[0], pair[1])

    return pinyin


def phones_to_pinyins(phones):
    pinyins = ''
    accu_ph = []
    for ph in phones:
        accu_ph.append(ph)
        if ph not in all_zh_phones:
            assert len(accu_ph) == 1
            pinyins += ph
            accu_ph = []
        elif " ".join(accu_ph) in reversed_zh_dict.keys():
            pinyins += " " + reversed_zh_dict[" ".join(accu_ph)]
            accu_ph = []
    if not accu_ph == []:
        print(accu_ph)
    return pinyins.strip()


def pu_symbol_replace(data):
    chinaTab = ['!', '?', "…", ",", "。", '、', "..."]
    englishTab = ['!', '?', "…", ",", ".", ",", "…"]
    for index in range(len(chinaTab)):
        if chinaTab[index] in data:
            data = data.replace(chinaTab[index], englishTab[index])
    return data


def zh_to_bopomofo(text):
    phones = zh_to_phonemes(text)
    pinyins = phones_to_pinyins(phones)
    bopomofos = bopomofo(pinyins)
    return bopomofos.replace(" ", "").replace("#", " ")


def pinyin_to_bopomofo(pinyin):
    bopomofos = bopomofo(pinyin)
    return bopomofos.replace(" ", "").replace("#", " ").replace("%", "% ")


def zh_to_phonemes(text):
    # replace Chinese punctuation with its English counterpart
    text = pu_symbol_replace(text)
    phones = frontend.get_phonemes(text)[0]
    return phones
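The table above is sorted by pattern length, descending, before being merged into the replacement list, so multi-letter initials like 'zh' are consumed before their one-letter prefixes. A four-pair sketch of why that sort matters; the pairs are taken from the table, the function name is illustrative:

table = [('z', 'ㄗ'), ('zh', 'ㄓ'), ('u', 'ㄨ'), ('1', 'ˉ')]
table.sort(key=lambda pair: len(pair[0]), reverse=True)  # 'zh' now precedes 'z'

def to_bopomofo(pinyin: str) -> str:
    for src, dst in table:
        pinyin = pinyin.replace(src, dst)
    return pinyin

print(to_bopomofo('zhu1'))  # ㄓㄨˉ -- without the sort, 'z' would fire first
                            # and leave the wrong 'ㄗhㄨˉ'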
spaces/Alycer/VITS-Umamusume-voice-synthesizer/data_utils.py
DELETED
@@ -1,393 +0,0 @@
import time
import os
import random
import numpy as np
import torch
import torch.utils.data

import commons
from mel_processing import spectrogram_torch
from utils import load_wav_to_torch, load_filepaths_and_text
from text import text_to_sequence, cleaned_text_to_sequence


class TextAudioLoader(torch.utils.data.Dataset):
    """
    1) loads audio, text pairs
    2) normalizes text and converts them to sequences of integers
    3) computes spectrograms from audio files.
    """
    def __init__(self, audiopaths_and_text, hparams):
        self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
        self.text_cleaners = hparams.text_cleaners
        self.max_wav_value = hparams.max_wav_value
        self.sampling_rate = hparams.sampling_rate
        self.filter_length = hparams.filter_length
        self.hop_length = hparams.hop_length
        self.win_length = hparams.win_length
        self.sampling_rate = hparams.sampling_rate

        self.cleaned_text = getattr(hparams, "cleaned_text", False)

        self.add_blank = hparams.add_blank
        self.min_text_len = getattr(hparams, "min_text_len", 1)
        self.max_text_len = getattr(hparams, "max_text_len", 190)

        random.seed(1234)
        random.shuffle(self.audiopaths_and_text)
        self._filter()

    def _filter(self):
        """
        Filter text & store spec lengths
        """
        # Store spectrogram lengths for Bucketing
        # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
        # spec_length = wav_length // hop_length

        audiopaths_and_text_new = []
        lengths = []
        for audiopath, text in self.audiopaths_and_text:
            if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
                audiopaths_and_text_new.append([audiopath, text])
                lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
        self.audiopaths_and_text = audiopaths_and_text_new
        self.lengths = lengths

    def get_audio_text_pair(self, audiopath_and_text):
        # separate filename and text
        audiopath, text = audiopath_and_text[0], audiopath_and_text[1]
        text = self.get_text(text)
        spec, wav = self.get_audio(audiopath)
        return (text, spec, wav)

    def get_audio(self, filename):
        audio, sampling_rate = load_wav_to_torch(filename)
        if sampling_rate != self.sampling_rate:
            raise ValueError("{} SR doesn't match target {} SR".format(
                sampling_rate, self.sampling_rate))
        audio_norm = audio / self.max_wav_value
        audio_norm = audio_norm.unsqueeze(0)
        spec_filename = filename.replace(".wav", ".spec.pt")
        if os.path.exists(spec_filename):
            spec = torch.load(spec_filename)
        else:
            spec = spectrogram_torch(audio_norm, self.filter_length,
                                     self.sampling_rate, self.hop_length, self.win_length,
                                     center=False)
            spec = torch.squeeze(spec, 0)
            torch.save(spec, spec_filename)
        return spec, audio_norm

    def get_text(self, text):
        if self.cleaned_text:
            text_norm = cleaned_text_to_sequence(text)
        else:
            text_norm = text_to_sequence(text, self.text_cleaners)
        if self.add_blank:
            text_norm = commons.intersperse(text_norm, 0)
        text_norm = torch.LongTensor(text_norm)
        return text_norm

    def __getitem__(self, index):
        return self.get_audio_text_pair(self.audiopaths_and_text[index])

    def __len__(self):
        return len(self.audiopaths_and_text)


class TextAudioCollate():
    """ Zero-pads model inputs and targets
    """
    def __init__(self, return_ids=False):
        self.return_ids = return_ids

    def __call__(self, batch):
        """Collates the training batch from normalized text and audio
        PARAMS
        ------
        batch: [text_normalized, spec_normalized, wav_normalized]
        """
        # Right zero-pad all one-hot text sequences to max input length
        _, ids_sorted_decreasing = torch.sort(
            torch.LongTensor([x[1].size(1) for x in batch]),
            dim=0, descending=True)

        max_text_len = max([len(x[0]) for x in batch])
        max_spec_len = max([x[1].size(1) for x in batch])
        max_wav_len = max([x[2].size(1) for x in batch])

        text_lengths = torch.LongTensor(len(batch))
        spec_lengths = torch.LongTensor(len(batch))
        wav_lengths = torch.LongTensor(len(batch))

        text_padded = torch.LongTensor(len(batch), max_text_len)
        spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
        wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
        text_padded.zero_()
        spec_padded.zero_()
        wav_padded.zero_()
        for i in range(len(ids_sorted_decreasing)):
            row = batch[ids_sorted_decreasing[i]]

            text = row[0]
            text_padded[i, :text.size(0)] = text
            text_lengths[i] = text.size(0)

            spec = row[1]
            spec_padded[i, :, :spec.size(1)] = spec
            spec_lengths[i] = spec.size(1)

            wav = row[2]
            wav_padded[i, :, :wav.size(1)] = wav
            wav_lengths[i] = wav.size(1)

        if self.return_ids:
            return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing
        return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths


"""Multi speaker version"""
class TextAudioSpeakerLoader(torch.utils.data.Dataset):
    """
    1) loads audio, speaker_id, text pairs
    2) normalizes text and converts them to sequences of integers
    3) computes spectrograms from audio files.
    """
    def __init__(self, audiopaths_sid_text, hparams):
        self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
        self.text_cleaners = hparams.text_cleaners
        self.max_wav_value = hparams.max_wav_value
        self.sampling_rate = hparams.sampling_rate
        self.filter_length = hparams.filter_length
        self.hop_length = hparams.hop_length
        self.win_length = hparams.win_length
        self.sampling_rate = hparams.sampling_rate

        self.cleaned_text = getattr(hparams, "cleaned_text", False)

        self.add_blank = hparams.add_blank
        self.min_text_len = getattr(hparams, "min_text_len", 1)
        self.max_text_len = getattr(hparams, "max_text_len", 190)

        random.seed(1234)
        random.shuffle(self.audiopaths_sid_text)
        self._filter()

    def _filter(self):
        """
        Filter text & store spec lengths
        """
        # Store spectrogram lengths for Bucketing
        # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
        # spec_length = wav_length // hop_length

        audiopaths_sid_text_new = []
        lengths = []
        for audiopath, sid, text in self.audiopaths_sid_text:
            audiopath = "E:/uma_voice/" + audiopath  # hardcoded dataset root from the original training setup
            if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
                audiopaths_sid_text_new.append([audiopath, sid, text])
                lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
        self.audiopaths_sid_text = audiopaths_sid_text_new
        self.lengths = lengths

    def get_audio_text_speaker_pair(self, audiopath_sid_text):
        # separate filename, speaker_id and text
        audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
        text = self.get_text(text)
        spec, wav = self.get_audio(audiopath)
        sid = self.get_sid(sid)
        return (text, spec, wav, sid)

    def get_audio(self, filename):
        audio, sampling_rate = load_wav_to_torch(filename)
        if sampling_rate != self.sampling_rate:
            raise ValueError("{} SR doesn't match target {} SR".format(
                sampling_rate, self.sampling_rate))
        audio_norm = audio / self.max_wav_value
        audio_norm = audio_norm.unsqueeze(0)
        spec_filename = filename.replace(".wav", ".spec.pt")
        if os.path.exists(spec_filename):
            spec = torch.load(spec_filename)
        else:
            spec = spectrogram_torch(audio_norm, self.filter_length,
                                     self.sampling_rate, self.hop_length, self.win_length,
                                     center=False)
            spec = torch.squeeze(spec, 0)
            torch.save(spec, spec_filename)
        return spec, audio_norm

    def get_text(self, text):
        if self.cleaned_text:
            text_norm = cleaned_text_to_sequence(text)
        else:
            text_norm = text_to_sequence(text, self.text_cleaners)
        if self.add_blank:
            text_norm = commons.intersperse(text_norm, 0)
        text_norm = torch.LongTensor(text_norm)
        return text_norm

    def get_sid(self, sid):
        sid = torch.LongTensor([int(sid)])
        return sid

    def __getitem__(self, index):
        return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])

    def __len__(self):
        return len(self.audiopaths_sid_text)


class TextAudioSpeakerCollate():
    """ Zero-pads model inputs and targets
    """
    def __init__(self, return_ids=False):
        self.return_ids = return_ids

    def __call__(self, batch):
        """Collates the training batch from normalized text, audio and speaker identities
        PARAMS
        ------
        batch: [text_normalized, spec_normalized, wav_normalized, sid]
        """
        # Right zero-pad all one-hot text sequences to max input length
        _, ids_sorted_decreasing = torch.sort(
            torch.LongTensor([x[1].size(1) for x in batch]),
            dim=0, descending=True)

        max_text_len = max([len(x[0]) for x in batch])
        max_spec_len = max([x[1].size(1) for x in batch])
        max_wav_len = max([x[2].size(1) for x in batch])

        text_lengths = torch.LongTensor(len(batch))
        spec_lengths = torch.LongTensor(len(batch))
        wav_lengths = torch.LongTensor(len(batch))
        sid = torch.LongTensor(len(batch))

        text_padded = torch.LongTensor(len(batch), max_text_len)
        spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
        wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
        text_padded.zero_()
        spec_padded.zero_()
        wav_padded.zero_()
        for i in range(len(ids_sorted_decreasing)):
            row = batch[ids_sorted_decreasing[i]]

            text = row[0]
            text_padded[i, :text.size(0)] = text
            text_lengths[i] = text.size(0)

            spec = row[1]
            spec_padded[i, :, :spec.size(1)] = spec
            spec_lengths[i] = spec.size(1)

            wav = row[2]
            wav_padded[i, :, :wav.size(1)] = wav
            wav_lengths[i] = wav.size(1)

            sid[i] = row[3]

        if self.return_ids:
            return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
        return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid


class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
    """
    Maintain similar input lengths in a batch.
    Length groups are specified by boundaries.
    Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.

    It removes samples which are not included in the boundaries.
    Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
    """
    def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
        super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
        self.lengths = dataset.lengths
        self.batch_size = batch_size
        self.boundaries = boundaries

        self.buckets, self.num_samples_per_bucket = self._create_buckets()
        self.total_size = sum(self.num_samples_per_bucket)
        self.num_samples = self.total_size // self.num_replicas

    def _create_buckets(self):
        buckets = [[] for _ in range(len(self.boundaries) - 1)]
        for i in range(len(self.lengths)):
            length = self.lengths[i]
            idx_bucket = self._bisect(length)
            if idx_bucket != -1:
                buckets[idx_bucket].append(i)

        for i in range(len(buckets) - 1, 0, -1):
            if len(buckets[i]) == 0:
                buckets.pop(i)
                self.boundaries.pop(i + 1)

        num_samples_per_bucket = []
        for i in range(len(buckets)):
            len_bucket = len(buckets[i])
            total_batch_size = self.num_replicas * self.batch_size
            rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
            num_samples_per_bucket.append(len_bucket + rem)
        return buckets, num_samples_per_bucket

    def __iter__(self):
        # deterministically shuffle based on epoch
        g = torch.Generator()
        g.manual_seed(self.epoch)

        indices = []
        if self.shuffle:
            for bucket in self.buckets:
                indices.append(torch.randperm(len(bucket), generator=g).tolist())
        else:
            for bucket in self.buckets:
                indices.append(list(range(len(bucket))))

        batches = []
        for i in range(len(self.buckets)):
            bucket = self.buckets[i]
            len_bucket = len(bucket)
            ids_bucket = indices[i]
            num_samples_bucket = self.num_samples_per_bucket[i]

            # add extra samples to make it evenly divisible
            rem = num_samples_bucket - len_bucket
            ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]

            # subsample
            ids_bucket = ids_bucket[self.rank::self.num_replicas]

            # batching
            for j in range(len(ids_bucket) // self.batch_size):
                batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]]
                batches.append(batch)

        if self.shuffle:
            batch_ids = torch.randperm(len(batches), generator=g).tolist()
            batches = [batches[i] for i in batch_ids]
        self.batches = batches

        assert len(self.batches) * self.batch_size == self.num_samples
        return iter(self.batches)

    def _bisect(self, x, lo=0, hi=None):
        if hi is None:
            hi = len(self.boundaries) - 1

        if hi > lo:
            mid = (hi + lo) // 2
            if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
                return mid
            elif x <= self.boundaries[mid]:
                return self._bisect(x, lo, mid)
            else:
                return self._bisect(x, mid + 1, hi)
        else:
            return -1

    def __len__(self):
        return self.num_samples // self.batch_size
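The bucket assignment above is a recursive bisection over sorted length boundaries. A standalone sketch of the same logic in iterative form; the boundary values are illustrative, not from the training config:

def bisect_bucket(x: int, boundaries: list) -> int:
    lo, hi = 0, len(boundaries) - 1
    while hi > lo:
        mid = (hi + lo) // 2
        if boundaries[mid] < x <= boundaries[mid + 1]:
            return mid
        if x <= boundaries[mid]:
            hi = mid
        else:
            lo = mid + 1
    return -1  # outside every bucket; the sampler drops such samples

boundaries = [32, 300, 500, 1000]
print(bisect_bucket(400, boundaries))  # 1  (300 < 400 <= 500)
print(bisect_bucket(10, boundaries))   # -1 (shorter than the first boundary)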
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
DELETED
@@ -1,290 +0,0 @@
from typing import Callable, List, Optional, Union

import torch

from ...models import UNet2DModel
from ...schedulers import CMStochasticIterativeScheduler
from ...utils import (
    is_accelerate_available,
    is_accelerate_version,
    logging,
    randn_tensor,
    replace_example_docstring,
)
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput


logger = logging.get_logger(__name__)  # pylint: disable=invalid-name


EXAMPLE_DOC_STRING = """
    Examples:
        ```py
        >>> import torch

        >>> from diffusers import ConsistencyModelPipeline

        >>> device = "cuda"
        >>> # Load the cd_imagenet64_l2 checkpoint.
        >>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2"
        >>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
        >>> pipe.to(device)

        >>> # Onestep Sampling
        >>> image = pipe(num_inference_steps=1).images[0]
        >>> image.save("cd_imagenet64_l2_onestep_sample.png")

        >>> # Onestep sampling, class-conditional image generation
        >>> # ImageNet-64 class label 145 corresponds to king penguins
        >>> image = pipe(num_inference_steps=1, class_labels=145).images[0]
        >>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png")

        >>> # Multistep sampling, class-conditional image generation
        >>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
        >>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77
        >>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
        >>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png")
        ```
"""


class ConsistencyModelPipeline(DiffusionPipeline):
    r"""
    Pipeline for unconditional or class-conditional image generation.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
    implemented for all pipelines (downloading, saving, running on a particular device, etc.).

    Args:
        unet ([`UNet2DModel`]):
            A `UNet2DModel` to denoise the encoded image latents.
        scheduler ([`SchedulerMixin`]):
            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Currently only
            compatible with [`CMStochasticIterativeScheduler`].
    """

    def __init__(self, unet: UNet2DModel, scheduler: CMStochasticIterativeScheduler) -> None:
        super().__init__()

        self.register_modules(
            unet=unet,
            scheduler=scheduler,
        )

        self.safety_checker = None

    def enable_model_cpu_offload(self, gpu_id=0):
        r"""
        Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
        time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
        Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
        iterative execution of the `unet`.
        """
        if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
            from accelerate import cpu_offload_with_hook
        else:
            raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

        device = torch.device(f"cuda:{gpu_id}")

        if self.device.type != "cpu":
            self.to("cpu", silence_dtype_warnings=True)
            torch.cuda.empty_cache()  # otherwise we don't see the memory savings (but they probably exist)

        hook = None
        for cpu_offloaded_model in [self.unet]:
            _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)

        if self.safety_checker is not None:
            _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)

        # We'll offload the last model manually.
        self.final_offload_hook = hook

    def prepare_latents(self, batch_size, num_channels, height, width, dtype, device, generator, latents=None):
        shape = (batch_size, num_channels, height, width)
        if isinstance(generator, list) and len(generator) != batch_size:
            raise ValueError(
                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
            )

        if latents is None:
            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
        else:
            latents = latents.to(device=device, dtype=dtype)

        # scale the initial noise by the standard deviation required by the scheduler
        latents = latents * self.scheduler.init_noise_sigma
        return latents

    # Follows diffusers.VaeImageProcessor.postprocess
    def postprocess_image(self, sample: torch.FloatTensor, output_type: str = "pil"):
        if output_type not in ["pt", "np", "pil"]:
            raise ValueError(
                f"output_type={output_type} is not supported. Make sure to choose one of ['pt', 'np', or 'pil']"
            )

        # Equivalent to diffusers.VaeImageProcessor.denormalize
        sample = (sample / 2 + 0.5).clamp(0, 1)
        if output_type == "pt":
            return sample

        # Equivalent to diffusers.VaeImageProcessor.pt_to_numpy
        sample = sample.cpu().permute(0, 2, 3, 1).numpy()
        if output_type == "np":
            return sample

        # Output_type must be 'pil'
        sample = self.numpy_to_pil(sample)
        return sample

    def prepare_class_labels(self, batch_size, device, class_labels=None):
        if self.unet.config.num_class_embeds is not None:
            if isinstance(class_labels, list):
                class_labels = torch.tensor(class_labels, dtype=torch.int)
            elif isinstance(class_labels, int):
                assert batch_size == 1, "Batch size must be 1 if classes is an int"
                class_labels = torch.tensor([class_labels], dtype=torch.int)
            elif class_labels is None:
                # Randomly generate batch_size class labels
                # TODO: should use generator here? int analogue of randn_tensor is not exposed in ...utils
                class_labels = torch.randint(0, self.unet.config.num_class_embeds, size=(batch_size,))
            class_labels = class_labels.to(device)
        else:
            class_labels = None
        return class_labels

    def check_inputs(self, num_inference_steps, timesteps, latents, batch_size, img_size, callback_steps):
        if num_inference_steps is None and timesteps is None:
            raise ValueError("Exactly one of `num_inference_steps` or `timesteps` must be supplied.")

        if num_inference_steps is not None and timesteps is not None:
            logger.warning(
                f"Both `num_inference_steps`: {num_inference_steps} and `timesteps`: {timesteps} are supplied;"
                " `timesteps` will be used over `num_inference_steps`."
            )

        if latents is not None:
            expected_shape = (batch_size, 3, img_size, img_size)
            if latents.shape != expected_shape:
                raise ValueError(f"The shape of latents is {latents.shape} but is expected to be {expected_shape}.")

        if (callback_steps is None) or (
            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
        ):
            raise ValueError(
                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
                f" {type(callback_steps)}."
            )

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_DOC_STRING)
    def __call__(
        self,
        batch_size: int = 1,
        class_labels: Optional[Union[torch.Tensor, List[int], int]] = None,
        num_inference_steps: int = 1,
        timesteps: List[int] = None,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
        callback_steps: int = 1,
    ):
        r"""
        Args:
            batch_size (`int`, *optional*, defaults to 1):
                The number of images to generate.
            class_labels (`torch.Tensor` or `List[int]` or `int`, *optional*):
                Optional class labels for conditioning class-conditional consistency models. Not used if the model is
                not class-conditional.
            num_inference_steps (`int`, *optional*, defaults to 1):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            timesteps (`List[int]`, *optional*):
                Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
                timesteps are used. Must be in descending order.
            generator (`torch.Generator`, *optional*):
                A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
                generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor is generated by sampling using the supplied random `generator`.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between `PIL.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
            callback (`Callable`, *optional*):
                A function that calls every `callback_steps` steps during inference. The function is called with the
                following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function is called. If not specified, the callback is called at
                every step.

        Examples:

        Returns:
            [`~pipelines.ImagePipelineOutput`] or `tuple`:
                If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is
                returned where the first element is a list with the generated images.
        """
        # 0. Prepare call parameters
        img_size = self.unet.config.sample_size
        device = self._execution_device

        # 1. Check inputs
        self.check_inputs(num_inference_steps, timesteps, latents, batch_size, img_size, callback_steps)

        # 2. Prepare image latents
        # Sample image latents x_0 ~ N(0, sigma_0^2 * I)
        sample = self.prepare_latents(
            batch_size=batch_size,
            num_channels=self.unet.config.in_channels,
            height=img_size,
            width=img_size,
            dtype=self.unet.dtype,
            device=device,
            generator=generator,
            latents=latents,
        )

        # 3. Handle class_labels for class-conditional models
        class_labels = self.prepare_class_labels(batch_size, device, class_labels=class_labels)

        # 4. Prepare timesteps
        if timesteps is not None:
            self.scheduler.set_timesteps(timesteps=timesteps, device=device)
            timesteps = self.scheduler.timesteps
            num_inference_steps = len(timesteps)
        else:
            self.scheduler.set_timesteps(num_inference_steps)
            timesteps = self.scheduler.timesteps

        # 5. Denoising loop
        # Multistep sampling: implements Algorithm 1 in the paper
        with self.progress_bar(total=num_inference_steps) as progress_bar:
            for i, t in enumerate(timesteps):
                scaled_sample = self.scheduler.scale_model_input(sample, t)
                model_output = self.unet(scaled_sample, t, class_labels=class_labels, return_dict=False)[0]

                sample = self.scheduler.step(model_output, t, sample, generator=generator)[0]

                # call the callback, if provided
                progress_bar.update()
                if callback is not None and i % callback_steps == 0:
                    callback(i, t, sample)

        # 6. Post-process image sample
        image = self.postprocess_image(sample, output_type=output_type)

        # Offload last model to CPU
        if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
            self.final_offload_hook.offload()

        if not return_dict:
            return (image,)

        return ImagePipelineOutput(images=image)
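A standalone check of the denormalize/NHWC steps used by postprocess_image above (toy tensor in place of a real model output):

import torch

sample = torch.randn(2, 3, 64, 64)               # stand-in for a decoded sample in roughly [-1, 1]
sample = (sample / 2 + 0.5).clamp(0, 1)          # denormalize to [0, 1], as in postprocess_image
arr = sample.cpu().permute(0, 2, 3, 1).numpy()   # NCHW -> NHWC, ready for numpy/PIL
print(arr.shape)                                 # (2, 64, 64, 3)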
spaces/Andy1621/uniformer_image_detection/configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py
DELETED
@@ -1,201 +0,0 @@
# model settings
model = dict(
    type='CascadeRCNN',
    pretrained=None,
    backbone=dict(
        type='UniFormer',
        embed_dim=[64, 128, 320, 512],
        layers=[3, 4, 8, 3],
        head_dim=64,
        mlp_ratio=4.,
        qkv_bias=True,
        drop_rate=0.,
        attn_drop_rate=0.,
        drop_path_rate=0.2),
    neck=dict(
        type='FPN',
        in_channels=[64, 128, 320, 512],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    roi_head=dict(
        type='CascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
        ],
        mask_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        mask_head=dict(
            type='FCNMaskHead',
            num_convs=4,
            in_channels=256,
            conv_out_channels=256,
            num_classes=80,
            loss_mask=dict(
                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
    # model training and testing settings
    train_cfg = dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_across_levels=False,
            nms_pre=2000,
            nms_post=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.6,
                    min_pos_iou=0.6,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.7,
                    min_pos_iou=0.7,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False)
        ]),
    test_cfg = dict(
        rpn=dict(
            nms_across_levels=False,
            nms_pre=1000,
            nms_post=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100,
            mask_thr_binary=0.5)))
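A minimal sketch of loading this config, assuming an mmcv 1.x / mmdet 2.x environment and that the file sits at the relative path shown above:

from mmcv import Config

# Config.fromfile executes the config as Python and returns an attribute-style dict.
cfg = Config.fromfile('configs/_base_/models/cascade_mask_rcnn_uniformer_fpn.py')
print(cfg.model.backbone.type)        # 'UniFormer'
print(cfg.model.roi_head.num_stages)  # 3 cascade stages, with IoU thresholds 0.5/0.6/0.7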
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/chase_db1.py
DELETED
@@ -1,59 +0,0 @@
# dataset settings
dataset_type = 'ChaseDB1Dataset'
data_root = 'data/CHASE_DB1'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (960, 999)
crop_size = (128, 128)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=img_scale,
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]

data = dict(
    samples_per_gpu=4,
    workers_per_gpu=4,
    train=dict(
        type='RepeatDataset',
        times=40000,
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            img_dir='images/training',
            ann_dir='annotations/training',
            pipeline=train_pipeline)),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline))
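The RepeatDataset wrapper above repeats the training split so an iteration-based runner never exhausts an epoch. A simplified re-implementation of the idea, using a toy list as a stand-in dataset (this sketches the behavior, not the mmseg class itself):

class RepeatDataset:
    def __init__(self, dataset, times):
        self.dataset = dataset
        self.times = times

    def __len__(self):
        # the wrapped dataset appears `times` times longer
        return self.times * len(self.dataset)

    def __getitem__(self, idx):
        # indices wrap around the underlying dataset
        return self.dataset[idx % len(self.dataset)]

print(len(RepeatDataset(list(range(20)), 40000)))  # 800000 (toy size, not the real split)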
spaces/Annotation-AI/segment-similarthings/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Segment Similarthings
emoji: 📈
colorFrom: yellow
colorTo: indigo
sdk: gradio
sdk_version: 3.32.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/table.py
DELETED
@@ -1,1002 +0,0 @@
from dataclasses import dataclass, field, replace
from typing import (
    TYPE_CHECKING,
    Dict,
    Iterable,
    List,
    NamedTuple,
    Optional,
    Sequence,
    Tuple,
    Union,
)

from . import box, errors
from ._loop import loop_first_last, loop_last
from ._pick import pick_bool
from ._ratio import ratio_distribute, ratio_reduce
from .align import VerticalAlignMethod
from .jupyter import JupyterMixin
from .measure import Measurement
from .padding import Padding, PaddingDimensions
from .protocol import is_renderable
from .segment import Segment
from .style import Style, StyleType
from .text import Text, TextType

if TYPE_CHECKING:
    from .console import (
        Console,
        ConsoleOptions,
        JustifyMethod,
        OverflowMethod,
        RenderableType,
        RenderResult,
    )


@dataclass
class Column:
    """Defines a column within a ~Table.

    Args:
        title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None.
        caption (Union[str, Text], optional): The table caption rendered below. Defaults to None.
        width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None.
        min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None.
        box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD.
        safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
        padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1).
        collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False.
        pad_edge (bool, optional): Enable padding of edge cells. Defaults to True.
        expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
        show_header (bool, optional): Show a header row. Defaults to True.
        show_footer (bool, optional): Show a footer row. Defaults to False.
        show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True.
        show_lines (bool, optional): Draw lines between every row. Defaults to False.
        leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0.
        style (Union[str, Style], optional): Default style for the table. Defaults to "none".
        row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None.
        header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header".
        footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer".
        border_style (Union[str, Style], optional): Style of the border. Defaults to None.
        title_style (Union[str, Style], optional): Style of the title. Defaults to None.
        caption_style (Union[str, Style], optional): Style of the caption. Defaults to None.
        title_justify (str, optional): Justify method for title. Defaults to "center".
        caption_justify (str, optional): Justify method for caption. Defaults to "center".
        highlight (bool, optional): Highlight cell contents (if str). Defaults to False.
    """

    header: "RenderableType" = ""
    """RenderableType: Renderable for the header (typically a string)"""

    footer: "RenderableType" = ""
    """RenderableType: Renderable for the footer (typically a string)"""

    header_style: StyleType = ""
    """StyleType: The style of the header."""

    footer_style: StyleType = ""
    """StyleType: The style of the footer."""

    style: StyleType = ""
    """StyleType: The style of the column."""

    justify: "JustifyMethod" = "left"
    """str: How to justify text within the column ("left", "center", "right", or "full")"""

    vertical: "VerticalAlignMethod" = "top"
    """str: How to vertically align content ("top", "middle", or "bottom")"""

    overflow: "OverflowMethod" = "ellipsis"
    """str: Overflow method."""

    width: Optional[int] = None
    """Optional[int]: Width of the column, or ``None`` (default) to auto calculate width."""

    min_width: Optional[int] = None
    """Optional[int]: Minimum width of column, or ``None`` for no minimum. Defaults to None."""

    max_width: Optional[int] = None
    """Optional[int]: Maximum width of column, or ``None`` for no maximum. Defaults to None."""

    ratio: Optional[int] = None
    """Optional[int]: Ratio to use when calculating column width, or ``None`` (default) to adapt to column contents."""

    no_wrap: bool = False
    """bool: Prevent wrapping of text within the column. Defaults to ``False``."""

    _index: int = 0
    """Index of column."""

    _cells: List["RenderableType"] = field(default_factory=list)

    def copy(self) -> "Column":
        """Return a copy of this Column."""
        return replace(self, _cells=[])

    @property
    def cells(self) -> Iterable["RenderableType"]:
        """Get all cells in the column, not including header."""
        yield from self._cells

    @property
    def flexible(self) -> bool:
        """Check if this column is flexible."""
        return self.ratio is not None


@dataclass
class Row:
    """Information regarding a row."""

    style: Optional[StyleType] = None
    """Style to apply to row."""

    end_section: bool = False
    """Indicated end of section, which will force a line beneath the row."""


class _Cell(NamedTuple):
    """A single cell in a table."""

    style: StyleType
    """Style to apply to cell."""
    renderable: "RenderableType"
    """Cell renderable."""
    vertical: VerticalAlignMethod
    """Cell vertical alignment."""


class Table(JupyterMixin):
    """A console renderable to draw a table.

    Args:
        *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
        title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None.
        caption (Union[str, Text], optional): The table caption rendered below. Defaults to None.
        width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None.
        min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None.
        box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD.
        safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
        padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1).
        collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False.
        pad_edge (bool, optional): Enable padding of edge cells. Defaults to True.
        expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
        show_header (bool, optional): Show a header row. Defaults to True.
        show_footer (bool, optional): Show a footer row. Defaults to False.
        show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True.
        show_lines (bool, optional): Draw lines between every row. Defaults to False.
        leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0.
        style (Union[str, Style], optional): Default style for the table. Defaults to "none".
        row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None.
        header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header".
        footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer".
        border_style (Union[str, Style], optional): Style of the border. Defaults to None.
        title_style (Union[str, Style], optional): Style of the title. Defaults to None.
        caption_style (Union[str, Style], optional): Style of the caption. Defaults to None.
        title_justify (str, optional): Justify method for title. Defaults to "center".
        caption_justify (str, optional): Justify method for caption. Defaults to "center".
        highlight (bool, optional): Highlight cell contents (if str). Defaults to False.
    """

    columns: List[Column]
    rows: List[Row]

    def __init__(
        self,
        *headers: Union[Column, str],
        title: Optional[TextType] = None,
        caption: Optional[TextType] = None,
        width: Optional[int] = None,
        min_width: Optional[int] = None,
        box: Optional[box.Box] = box.HEAVY_HEAD,
        safe_box: Optional[bool] = None,
        padding: PaddingDimensions = (0, 1),
        collapse_padding: bool = False,
        pad_edge: bool = True,
        expand: bool = False,
        show_header: bool = True,
        show_footer: bool = False,
        show_edge: bool = True,
        show_lines: bool = False,
        leading: int = 0,
        style: StyleType = "none",
        row_styles: Optional[Iterable[StyleType]] = None,
        header_style: Optional[StyleType] = "table.header",
        footer_style: Optional[StyleType] = "table.footer",
        border_style: Optional[StyleType] = None,
        title_style: Optional[StyleType] = None,
        caption_style: Optional[StyleType] = None,
        title_justify: "JustifyMethod" = "center",
        caption_justify: "JustifyMethod" = "center",
        highlight: bool = False,
    ) -> None:

        self.columns: List[Column] = []
        self.rows: List[Row] = []
        self.title = title
        self.caption = caption
        self.width = width
        self.min_width = min_width
        self.box = box
        self.safe_box = safe_box
        self._padding = Padding.unpack(padding)
        self.pad_edge = pad_edge
        self._expand = expand
        self.show_header = show_header
        self.show_footer = show_footer
        self.show_edge = show_edge
        self.show_lines = show_lines
        self.leading = leading
        self.collapse_padding = collapse_padding
        self.style = style
        self.header_style = header_style or ""
        self.footer_style = footer_style or ""
        self.border_style = border_style
        self.title_style = title_style
        self.caption_style = caption_style
        self.title_justify: "JustifyMethod" = title_justify
        self.caption_justify: "JustifyMethod" = caption_justify
        self.highlight = highlight
        self.row_styles: Sequence[StyleType] = list(row_styles or [])
        append_column = self.columns.append
        for header in headers:
            if isinstance(header, str):
                self.add_column(header=header)
            else:
                header._index = len(self.columns)
                append_column(header)

    @classmethod
    def grid(
        cls,
        *headers: Union[Column, str],
        padding: PaddingDimensions = 0,
        collapse_padding: bool = True,
        pad_edge: bool = False,
        expand: bool = False,
    ) -> "Table":
        """Get a table with no lines, headers, or footer.

        Args:
            *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
            padding (PaddingDimensions, optional): Get padding around cells. Defaults to 0.
            collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to True.
            pad_edge (bool, optional): Enable padding around edges of table. Defaults to False.
            expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.

        Returns:
            Table: A table instance.
        """
        return cls(
            *headers,
            box=None,
            padding=padding,
            collapse_padding=collapse_padding,
            show_header=False,
            show_footer=False,
            show_edge=False,
            pad_edge=pad_edge,
            expand=expand,
        )

    @property
    def expand(self) -> bool:
        """Setting a non-None self.width implies expand."""
        return self._expand or self.width is not None

    @expand.setter
    def expand(self, expand: bool) -> None:
        """Set expand."""
        self._expand = expand

    @property
    def _extra_width(self) -> int:
        """Get extra width to add to cell content."""
        width = 0
        if self.box and self.show_edge:
            width += 2
        if self.box:
            width += len(self.columns) - 1
        return width

    @property
    def row_count(self) -> int:
        """Get the current number of rows."""
        return len(self.rows)

    def get_row_style(self, console: "Console", index: int) -> StyleType:
        """Get the current row style."""
        style = Style.null()
        if self.row_styles:
            style += console.get_style(self.row_styles[index % len(self.row_styles)])
        row_style = self.rows[index].style
        if row_style is not None:
            style += console.get_style(row_style)
        return style

    def __rich_measure__(
        self, console: "Console", options: "ConsoleOptions"
    ) -> Measurement:
        max_width = options.max_width
        if self.width is not None:
            max_width = self.width
        if max_width < 0:
            return Measurement(0, 0)

        extra_width = self._extra_width
        max_width = sum(
            self._calculate_column_widths(
                console, options.update_width(max_width - extra_width)
            )
        )
        _measure_column = self._measure_column

        measurements = [
            _measure_column(console, options.update_width(max_width), column)
            for column in self.columns
        ]
        minimum_width = (
            sum(measurement.minimum for measurement in measurements) + extra_width
        )
        maximum_width = (
            sum(measurement.maximum for measurement in measurements) + extra_width
            if (self.width is None)
            else self.width
        )
        measurement = Measurement(minimum_width, maximum_width)
        measurement = measurement.clamp(self.min_width)
        return measurement

    @property
    def padding(self) -> Tuple[int, int, int, int]:
        """Get cell padding."""
        return self._padding

    @padding.setter
    def padding(self, padding: PaddingDimensions) -> "Table":
        """Set cell padding."""
        self._padding = Padding.unpack(padding)
        return self

    def add_column(
        self,
        header: "RenderableType" = "",
        footer: "RenderableType" = "",
        *,
        header_style: Optional[StyleType] = None,
        footer_style: Optional[StyleType] = None,
        style: Optional[StyleType] = None,
        justify: "JustifyMethod" = "left",
        vertical: "VerticalAlignMethod" = "top",
        overflow: "OverflowMethod" = "ellipsis",
        width: Optional[int] = None,
        min_width: Optional[int] = None,
        max_width: Optional[int] = None,
        ratio: Optional[int] = None,
        no_wrap: bool = False,
    ) -> None:
        """Add a column to the table.

        Args:
            header (RenderableType, optional): Text or renderable for the header.
                Defaults to "".
            footer (RenderableType, optional): Text or renderable for the footer.
                Defaults to "".
            header_style (Union[str, Style], optional): Style for the header, or None for default. Defaults to None.
            footer_style (Union[str, Style], optional): Style for the footer, or None for default. Defaults to None.
            style (Union[str, Style], optional): Style for the column cells, or None for default. Defaults to None.
            justify (JustifyMethod, optional): Alignment for cells. Defaults to "left".
            vertical (VerticalAlignMethod, optional): Vertical alignment, one of "top", "middle", or "bottom". Defaults to "top".
            overflow (OverflowMethod): Overflow method: "crop", "fold", "ellipsis". Defaults to "ellipsis".
            width (int, optional): Desired width of column in characters, or None to fit to contents. Defaults to None.
            min_width (Optional[int], optional): Minimum width of column, or ``None`` for no minimum. Defaults to None.
            max_width (Optional[int], optional): Maximum width of column, or ``None`` for no maximum. Defaults to None.
            ratio (int, optional): Flexible ratio for the column (requires ``Table.expand`` or ``Table.width``). Defaults to None.
            no_wrap (bool, optional): Set to ``True`` to disable wrapping of this column.
        """

        column = Column(
            _index=len(self.columns),
            header=header,
            footer=footer,
            header_style=header_style or "",
            footer_style=footer_style or "",
            style=style or "",
            justify=justify,
            vertical=vertical,
            overflow=overflow,
            width=width,
            min_width=min_width,
            max_width=max_width,
            ratio=ratio,
            no_wrap=no_wrap,
        )
        self.columns.append(column)

    def add_row(
        self,
        *renderables: Optional["RenderableType"],
        style: Optional[StyleType] = None,
        end_section: bool = False,
    ) -> None:
        """Add a row of renderables.

        Args:
            *renderables (None or renderable): Each cell in a row must be a renderable object (including str),
                or ``None`` for a blank cell.
            style (StyleType, optional): An optional style to apply to the entire row. Defaults to None.
            end_section (bool, optional): End a section and draw a line. Defaults to False.

        Raises:
            errors.NotRenderableError: If you add something that can't be rendered.
        """

        def add_cell(column: Column, renderable: "RenderableType") -> None:
            column._cells.append(renderable)

        cell_renderables: List[Optional["RenderableType"]] = list(renderables)

        columns = self.columns
        if len(cell_renderables) < len(columns):
            cell_renderables = [
                *cell_renderables,
                *[None] * (len(columns) - len(cell_renderables)),
            ]
        for index, renderable in enumerate(cell_renderables):
            if index == len(columns):
                column = Column(_index=index)
                for _ in self.rows:
                    add_cell(column, Text(""))
                self.columns.append(column)
            else:
                column = columns[index]
            if renderable is None:
                add_cell(column, "")
            elif is_renderable(renderable):
                add_cell(column, renderable)
            else:
                raise errors.NotRenderableError(
                    f"unable to render {type(renderable).__name__}; a string or other renderable object is required"
                )
        self.rows.append(Row(style=style, end_section=end_section))

    def add_section(self) -> None:
        """Add a new section (draw a line after current row)."""

        if self.rows:
            self.rows[-1].end_section = True

    def __rich_console__(
        self, console: "Console", options: "ConsoleOptions"
    ) -> "RenderResult":

        if not self.columns:
            yield Segment("\n")
            return

        max_width = options.max_width
        if self.width is not None:
            max_width = self.width

        extra_width = self._extra_width
        widths = self._calculate_column_widths(
            console, options.update_width(max_width - extra_width)
        )
        table_width = sum(widths) + extra_width

        render_options = options.update(
            width=table_width, highlight=self.highlight, height=None
        )

        def render_annotation(
            text: TextType, style: StyleType, justify: "JustifyMethod" = "center"
        ) -> "RenderResult":
            render_text = (
                console.render_str(text, style=style, highlight=False)
                if isinstance(text, str)
                else text
            )
            return console.render(
                render_text, options=render_options.update(justify=justify)
            )

        if self.title:
            yield from render_annotation(
                self.title,
                style=Style.pick_first(self.title_style, "table.title"),
                justify=self.title_justify,
            )
        yield from self._render(console, render_options, widths)
        if self.caption:
            yield from render_annotation(
                self.caption,
                style=Style.pick_first(self.caption_style, "table.caption"),
                justify=self.caption_justify,
            )

    def _calculate_column_widths(
        self, console: "Console", options: "ConsoleOptions"
    ) -> List[int]:
        """Calculate the widths of each column, including padding, not including borders."""
        max_width = options.max_width
        columns = self.columns
        width_ranges = [
            self._measure_column(console, options, column) for column in columns
        ]
        widths = [_range.maximum or 1 for _range in width_ranges]
        get_padding_width = self._get_padding_width
        extra_width = self._extra_width
        if self.expand:
            ratios = [col.ratio or 0 for col in columns if col.flexible]
            if any(ratios):
                fixed_widths = [
                    0 if column.flexible else _range.maximum
                    for _range, column in zip(width_ranges, columns)
                ]
                flex_minimum = [
                    (column.width or 1) + get_padding_width(column._index)
                    for column in columns
                    if column.flexible
                ]
                flexible_width = max_width - sum(fixed_widths)
                flex_widths = ratio_distribute(flexible_width, ratios, flex_minimum)
                iter_flex_widths = iter(flex_widths)
                for index, column in enumerate(columns):
                    if column.flexible:
                        widths[index] = fixed_widths[index] + next(iter_flex_widths)
        table_width = sum(widths)

        if table_width > max_width:
            widths = self._collapse_widths(
                widths,
                [(column.width is None and not column.no_wrap) for column in columns],
                max_width,
            )
            table_width = sum(widths)
            # last resort, reduce columns evenly
            if table_width > max_width:
                excess_width = table_width - max_width
                widths = ratio_reduce(excess_width, [1] * len(widths), widths, widths)
                table_width = sum(widths)

            width_ranges = [
                self._measure_column(console, options.update_width(width), column)
                for width, column in zip(widths, columns)
            ]
            widths = [_range.maximum or 0 for _range in width_ranges]

        if (table_width < max_width and self.expand) or (
            self.min_width is not None and table_width < (self.min_width - extra_width)
        ):
            _max_width = (
                max_width
                if self.min_width is None
                else min(self.min_width - extra_width, max_width)
            )
            pad_widths = ratio_distribute(_max_width - table_width, widths)
            widths = [_width + pad for _width, pad in zip(widths, pad_widths)]

        return widths

    @classmethod
    def _collapse_widths(
        cls, widths: List[int], wrapable: List[bool], max_width: int
    ) -> List[int]:
        """Reduce widths so that the total is under max_width.

        Args:
            widths (List[int]): List of widths.
            wrapable (List[bool]): List of booleans that indicate if a column may shrink.
            max_width (int): Maximum width to reduce to.

        Returns:
            List[int]: A new list of widths.
        """
        total_width = sum(widths)
        excess_width = total_width - max_width
        if any(wrapable):
            while total_width and excess_width > 0:
                max_column = max(
                    width for width, allow_wrap in zip(widths, wrapable) if allow_wrap
                )
                second_max_column = max(
                    width if allow_wrap and width != max_column else 0
                    for width, allow_wrap in zip(widths, wrapable)
                )
                column_difference = max_column - second_max_column
                ratios = [
                    (1 if (width == max_column and allow_wrap) else 0)
                    for width, allow_wrap in zip(widths, wrapable)
                ]
                if not any(ratios) or not column_difference:
                    break
                max_reduce = [min(excess_width, column_difference)] * len(widths)
                widths = ratio_reduce(excess_width, ratios, max_reduce, widths)

                total_width = sum(widths)
                excess_width = total_width - max_width
        return widths

    def _get_cells(
        self, console: "Console", column_index: int, column: Column
    ) -> Iterable[_Cell]:
        """Get all the cells with padding and optional header."""

        collapse_padding = self.collapse_padding
        pad_edge = self.pad_edge
        padding = self.padding
        any_padding = any(padding)

        first_column = column_index == 0
        last_column = column_index == len(self.columns) - 1

        _padding_cache: Dict[Tuple[bool, bool], Tuple[int, int, int, int]] = {}

        def get_padding(first_row: bool, last_row: bool) -> Tuple[int, int, int, int]:
            cached = _padding_cache.get((first_row, last_row))
            if cached:
                return cached
            top, right, bottom, left = padding

            if collapse_padding:
                if not first_column:
                    left = max(0, left - right)
                if not last_row:
                    bottom = max(0, top - bottom)

            if not pad_edge:
                if first_column:
                    left = 0
                if last_column:
                    right = 0
                if first_row:
                    top = 0
                if last_row:
                    bottom = 0
            _padding = (top, right, bottom, left)
            _padding_cache[(first_row, last_row)] = _padding
            return _padding

        raw_cells: List[Tuple[StyleType, "RenderableType"]] = []
        _append = raw_cells.append
        get_style = console.get_style
        if self.show_header:
            header_style = get_style(self.header_style or "") + get_style(
                column.header_style
            )
            _append((header_style, column.header))
        cell_style = get_style(column.style or "")
        for cell in column.cells:
            _append((cell_style, cell))
        if self.show_footer:
            footer_style = get_style(self.footer_style or "") + get_style(
                column.footer_style
            )
            _append((footer_style, column.footer))

        if any_padding:
            _Padding = Padding
            for first, last, (style, renderable) in loop_first_last(raw_cells):
                yield _Cell(
                    style,
                    _Padding(renderable, get_padding(first, last)),
                    getattr(renderable, "vertical", None) or column.vertical,
                )
        else:
            for (style, renderable) in raw_cells:
                yield _Cell(
                    style,
                    renderable,
                    getattr(renderable, "vertical", None) or column.vertical,
                )

    def _get_padding_width(self, column_index: int) -> int:
        """Get extra width from padding."""
        _, pad_right, _, pad_left = self.padding
        if self.collapse_padding:
            if column_index > 0:
                pad_left = max(0, pad_left - pad_right)
        return pad_left + pad_right

    def _measure_column(
        self,
        console: "Console",
        options: "ConsoleOptions",
        column: Column,
    ) -> Measurement:
        """Get the minimum and maximum width of the column."""

        max_width = options.max_width
        if max_width < 1:
            return Measurement(0, 0)

        padding_width = self._get_padding_width(column._index)

        if column.width is not None:
            # Fixed width column
            return Measurement(
                column.width + padding_width, column.width + padding_width
            ).with_maximum(max_width)
        # Flexible column, we need to measure contents
        min_widths: List[int] = []
        max_widths: List[int] = []
        append_min = min_widths.append
        append_max = max_widths.append
        get_render_width = Measurement.get
        for cell in self._get_cells(console, column._index, column):
            _min, _max = get_render_width(console, options, cell.renderable)
            append_min(_min)
            append_max(_max)

        measurement = Measurement(
            max(min_widths) if min_widths else 1,
            max(max_widths) if max_widths else max_width,
        ).with_maximum(max_width)
        measurement = measurement.clamp(
            None if column.min_width is None else column.min_width + padding_width,
            None if column.max_width is None else column.max_width + padding_width,
        )
        return measurement

    def _render(
        self, console: "Console", options: "ConsoleOptions", widths: List[int]
    ) -> "RenderResult":
        table_style = console.get_style(self.style or "")

        border_style = table_style + console.get_style(self.border_style or "")
        _column_cells = (
            self._get_cells(console, column_index, column)
            for column_index, column in enumerate(self.columns)
        )
        row_cells: List[Tuple[_Cell, ...]] = list(zip(*_column_cells))
        _box = (
            self.box.substitute(
                options, safe=pick_bool(self.safe_box, console.safe_box)
            )
            if self.box
            else None
        )
        _box = _box.get_plain_headed_box() if _box and not self.show_header else _box

        new_line = Segment.line()

        columns = self.columns
        show_header = self.show_header
        show_footer = self.show_footer
        show_edge = self.show_edge
        show_lines = self.show_lines
        leading = self.leading

        _Segment = Segment
        if _box:
            box_segments = [
                (
                    _Segment(_box.head_left, border_style),
                    _Segment(_box.head_right, border_style),
                    _Segment(_box.head_vertical, border_style),
                ),
                (
                    _Segment(_box.foot_left, border_style),
                    _Segment(_box.foot_right, border_style),
                    _Segment(_box.foot_vertical, border_style),
                ),
                (
                    _Segment(_box.mid_left, border_style),
                    _Segment(_box.mid_right, border_style),
                    _Segment(_box.mid_vertical, border_style),
                ),
            ]
            if show_edge:
                yield _Segment(_box.get_top(widths), border_style)
                yield new_line
        else:
            box_segments = []

        get_row_style = self.get_row_style
        get_style = console.get_style

        for index, (first, last, row_cell) in enumerate(loop_first_last(row_cells)):
            header_row = first and show_header
            footer_row = last and show_footer
            row = (
                self.rows[index - show_header]
                if (not header_row and not footer_row)
                else None
            )
            max_height = 1
            cells: List[List[List[Segment]]] = []
            if header_row or footer_row:
                row_style = Style.null()
            else:
                row_style = get_style(
                    get_row_style(console, index - 1 if show_header else index)
                )
            for width, cell, column in zip(widths, row_cell, columns):
                render_options = options.update(
                    width=width,
                    justify=column.justify,
                    no_wrap=column.no_wrap,
                    overflow=column.overflow,
                    height=None,
                )
                lines = console.render_lines(
                    cell.renderable,
                    render_options,
                    style=get_style(cell.style) + row_style,
                )
                max_height = max(max_height, len(lines))
                cells.append(lines)

            row_height = max(len(cell) for cell in cells)

            def align_cell(
                cell: List[List[Segment]],
                vertical: "VerticalAlignMethod",
                width: int,
                style: Style,
            ) -> List[List[Segment]]:
                if header_row:
                    vertical = "bottom"
                elif footer_row:
                    vertical = "top"

                if vertical == "top":
                    return _Segment.align_top(cell, width, row_height, style)
                elif vertical == "middle":
                    return _Segment.align_middle(cell, width, row_height, style)
                return _Segment.align_bottom(cell, width, row_height, style)

            cells[:] = [
                _Segment.set_shape(
                    align_cell(
                        cell,
                        _cell.vertical,
                        width,
                        get_style(_cell.style) + row_style,
                    ),
                    width,
                    max_height,
                )
                for width, _cell, cell, column in zip(widths, row_cell, cells, columns)
            ]

            if _box:
                if last and show_footer:
                    yield _Segment(
                        _box.get_row(widths, "foot", edge=show_edge), border_style
                    )
                    yield new_line
                left, right, _divider = box_segments[0 if first else (2 if last else 1)]

                # If the column divider is whitespace also style it with the row background
                divider = (
                    _divider
                    if _divider.text.strip()
|
877 |
-
else _Segment(
|
878 |
-
_divider.text, row_style.background_style + _divider.style
|
879 |
-
)
|
880 |
-
)
|
881 |
-
for line_no in range(max_height):
|
882 |
-
if show_edge:
|
883 |
-
yield left
|
884 |
-
for last_cell, rendered_cell in loop_last(cells):
|
885 |
-
yield from rendered_cell[line_no]
|
886 |
-
if not last_cell:
|
887 |
-
yield divider
|
888 |
-
if show_edge:
|
889 |
-
yield right
|
890 |
-
yield new_line
|
891 |
-
else:
|
892 |
-
for line_no in range(max_height):
|
893 |
-
for rendered_cell in cells:
|
894 |
-
yield from rendered_cell[line_no]
|
895 |
-
yield new_line
|
896 |
-
if _box and first and show_header:
|
897 |
-
yield _Segment(
|
898 |
-
_box.get_row(widths, "head", edge=show_edge), border_style
|
899 |
-
)
|
900 |
-
yield new_line
|
901 |
-
end_section = row and row.end_section
|
902 |
-
if _box and (show_lines or leading or end_section):
|
903 |
-
if (
|
904 |
-
not last
|
905 |
-
and not (show_footer and index >= len(row_cells) - 2)
|
906 |
-
and not (show_header and header_row)
|
907 |
-
):
|
908 |
-
if leading:
|
909 |
-
yield _Segment(
|
910 |
-
_box.get_row(widths, "mid", edge=show_edge) * leading,
|
911 |
-
border_style,
|
912 |
-
)
|
913 |
-
else:
|
914 |
-
yield _Segment(
|
915 |
-
_box.get_row(widths, "row", edge=show_edge), border_style
|
916 |
-
)
|
917 |
-
yield new_line
|
918 |
-
|
919 |
-
if _box and show_edge:
|
920 |
-
yield _Segment(_box.get_bottom(widths), border_style)
|
921 |
-
yield new_line
|
922 |
-
|
923 |
-
|
924 |
-
if __name__ == "__main__": # pragma: no cover
|
925 |
-
from pip._vendor.rich.console import Console
|
926 |
-
from pip._vendor.rich.highlighter import ReprHighlighter
|
927 |
-
from pip._vendor.rich.table import Table as Table
|
928 |
-
|
929 |
-
from ._timer import timer
|
930 |
-
|
931 |
-
with timer("Table render"):
|
932 |
-
table = Table(
|
933 |
-
title="Star Wars Movies",
|
934 |
-
caption="Rich example table",
|
935 |
-
caption_justify="right",
|
936 |
-
)
|
937 |
-
|
938 |
-
table.add_column(
|
939 |
-
"Released", header_style="bright_cyan", style="cyan", no_wrap=True
|
940 |
-
)
|
941 |
-
table.add_column("Title", style="magenta")
|
942 |
-
table.add_column("Box Office", justify="right", style="green")
|
943 |
-
|
944 |
-
table.add_row(
|
945 |
-
"Dec 20, 2019",
|
946 |
-
"Star Wars: The Rise of Skywalker",
|
947 |
-
"$952,110,690",
|
948 |
-
)
|
949 |
-
table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347")
|
950 |
-
table.add_row(
|
951 |
-
"Dec 15, 2017",
|
952 |
-
"Star Wars Ep. V111: The Last Jedi",
|
953 |
-
"$1,332,539,889",
|
954 |
-
style="on black",
|
955 |
-
end_section=True,
|
956 |
-
)
|
957 |
-
table.add_row(
|
958 |
-
"Dec 16, 2016",
|
959 |
-
"Rogue One: A Star Wars Story",
|
960 |
-
"$1,332,439,889",
|
961 |
-
)
|
962 |
-
|
963 |
-
def header(text: str) -> None:
|
964 |
-
console.print()
|
965 |
-
console.rule(highlight(text))
|
966 |
-
console.print()
|
967 |
-
|
968 |
-
console = Console()
|
969 |
-
highlight = ReprHighlighter()
|
970 |
-
header("Example Table")
|
971 |
-
console.print(table, justify="center")
|
972 |
-
|
973 |
-
table.expand = True
|
974 |
-
header("expand=True")
|
975 |
-
console.print(table)
|
976 |
-
|
977 |
-
table.width = 50
|
978 |
-
header("width=50")
|
979 |
-
|
980 |
-
console.print(table, justify="center")
|
981 |
-
|
982 |
-
table.width = None
|
983 |
-
table.expand = False
|
984 |
-
table.row_styles = ["dim", "none"]
|
985 |
-
header("row_styles=['dim', 'none']")
|
986 |
-
|
987 |
-
console.print(table, justify="center")
|
988 |
-
|
989 |
-
table.width = None
|
990 |
-
table.expand = False
|
991 |
-
table.row_styles = ["dim", "none"]
|
992 |
-
table.leading = 1
|
993 |
-
header("leading=1, row_styles=['dim', 'none']")
|
994 |
-
console.print(table, justify="center")
|
995 |
-
|
996 |
-
table.width = None
|
997 |
-
table.expand = False
|
998 |
-
table.row_styles = ["dim", "none"]
|
999 |
-
table.show_lines = True
|
1000 |
-
table.leading = 0
|
1001 |
-
header("show_lines=True, row_styles=['dim', 'none']")
|
1002 |
-
console.print(table, justify="center")
|
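The collapse_padding arithmetic in _get_padding_width above is easy to sanity-check in isolation. The sketch below is illustrative only: the standalone padding_width helper is not part of the vendored file, it merely mirrors the method's logic so the numbers can be verified by hand.

# Minimal sketch: how collapse_padding merges adjacent cell padding.
# Mirrors the logic of Table._get_padding_width shown above.
def padding_width(padding, column_index, collapse_padding):
    _, pad_right, _, pad_left = padding  # (top, right, bottom, left)
    if collapse_padding and column_index > 0:
        # Only the excess left padding beyond the neighbour's right padding counts.
        pad_left = max(0, pad_left - pad_right)
    return pad_left + pad_right

# With a padding of (0, 1, 0, 1):
assert padding_width((0, 1, 0, 1), 0, True) == 2   # first column keeps both sides
assert padding_width((0, 1, 0, 1), 1, True) == 1   # later columns share the gap
assert padding_width((0, 1, 0, 1), 1, False) == 2  # no collapsing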
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py
DELETED
@@ -1,14 +0,0 @@
from .mask_rcnn_R_101_FPN_100ep_LSJ import (
    dataloader,
    lr_multiplier,
    model,
    optimizer,
    train,
)

train.max_iter *= 2  # 100ep -> 200ep

lr_multiplier.scheduler.milestones = [
    milestone * 2 for milestone in lr_multiplier.scheduler.milestones
]
lr_multiplier.scheduler.num_updates = train.max_iter
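The config above shows the general recipe for stretching a detectron2 LazyConfig training schedule: scale the iteration budget and every LR milestone by the same factor, then keep the scheduler's update count in sync with max_iter. A hedged sketch of that recipe; the scale_schedule helper and the factor-of-4 example are illustrative, not part of the original file or of detectron2:

# Hypothetical helper: scale a LazyConfig training schedule by `factor`,
# following the same recipe the 200ep config above applies with factor=2.
def scale_schedule(train, lr_multiplier, factor):
    train.max_iter *= factor
    lr_multiplier.scheduler.milestones = [
        m * factor for m in lr_multiplier.scheduler.milestones
    ]
    # Keep the scheduler's notion of total updates in sync with max_iter.
    lr_multiplier.scheduler.num_updates = train.max_iter

# e.g. a hypothetical 400-epoch variant of the 100ep baseline:
# scale_schedule(train, lr_multiplier, factor=4)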
spaces/BBrother/Pandora/README.md
DELETED
@@ -1,11 +0,0 @@
---
title: Pandora
emoji: 🐢
colorFrom: green
colorTo: yellow
sdk: docker
pinned: false
app_port: 8018
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Benson/text-generation/Examples/Descargar Counter Strike 1.3.md
DELETED
@@ -1,81 +0,0 @@

<h1>Download Counter Strike 1.3: A Classic FPS Game</h1>
<p>If you are a fan of first-person shooter (FPS) games, you may have heard of Counter Strike, one of the most popular and influential FPS games ever created. But did you know there is an older version of Counter Strike that is still played by many gamers around the world? It is called Counter Strike 1.3, and it is a classic game you can download and enjoy on your PC.</p>
<h2>download counter strike 1.3</h2><br /><p><b><b>Download</b> → <a href="https://bltlly.com/2v6KHW">https://bltlly.com/2v6KHW</a></b></p><br /><br />
<p>In this article, we will tell you everything you need to know about Counter Strike 1.3, including what it is, why you should download it, and how to download it. We will also answer some frequently asked questions about this game at the end of the article.</p>
<h2>What is Counter Strike 1.3?</h2>
<p>Counter Strike 1.3 is a multiplayer FPS game that was released in 2001 as a mod for Half-Life, another popular FPS game by Valve Corporation. Counter Strike 1.3 is the third major update of the original Counter Strike mod, which was first released in 1999.</p>
<h3>The history and features of Counter Strike 1.3</h3>
<p>Counter Strike was created by two independent developers, Minh Le and Jess Cliffe, who wanted to make a realistic, team-based FPS game that simulated modern warfare scenarios. They used the Half-Life engine to create their own maps, weapons, characters, and gameplay mechanics.</p>
<p>Counter Strike quickly became a hit among FPS fans, especially those who enjoyed online multiplayer matches. Valve Corporation noticed the mod's potential and hired Le and Cliffe to work on the official version of Counter Strike, which was released in 2000.</p>
<p></p>
<p>Counter Strike 1.3 was one of the most significant updates to the game, as it introduced many new features and improvements, such as:</p>
<ul>
<li>New maps, such as de_dust2, cs_italy, cs_office, de_train, and de_inferno</li>
<li>New weapons, such as the Galil, FAMAS, USP, Glock-18, and the Tactical Shield</li>

<li>New commands and options, such as voice communication, spectator mode, auto-reload, auto-buy, and automatic team balancing</li>
<li>New graphics and sounds, such as improved textures, models, animations, effects, and music</li>
<li>A new anti-cheat system, Valve Anti-Cheat (VAC)</li>
</ul>
<h3>The gameplay and modes of Counter Strike 1.3</h3>
<p>The gameplay of Counter Strike 1.3 is based on two opposing teams: the Terrorists and the Counter-Terrorists. Each team has different objectives depending on the map and game mode being played.</p>
<p>The most common game mode is Bomb Defusal, where the Terrorists have to plant a bomb at one of two designated sites and defend it until it explodes, while the Counter-Terrorists have to stop them or defuse the bomb once it is planted.</p>
<p>Another popular game mode is Hostage Rescue, where the Terrorists have to guard a group of hostages at their base, while the Counter-Terrorists have to rescue them and bring them to a safe zone.</p>
<p>Other game modes include VIP Escort, where the Counter-Terrorists have to escort a VIP player to an extraction point while the Terrorists try to assassinate him; Assassination, where the Terrorists have to kill a specific target while the Counter-Terrorists protect him; and Deathmatch, where players can choose any weapon and respawn after dying, and the team with the most kills wins.</p>
<p>The gameplay of Counter Strike 1.3 is fast-paced, tactical, and skill-based. Players have to use various weapons, equipment, and strategies to achieve their objectives and eliminate their enemies. Players also have to manage their money, health, armor, and ammunition, as these are limited and affect their performance.</p>

<h2>Why download Counter Strike 1.3?</h2>
<p>Counter Strike 1.3 is a classic FPS game that has many benefits and advantages for players who love this genre. Here are some of the reasons why you should download Counter Strike 1.3:</p>
<h3>The benefits and advantages of Counter Strike 1.3</h3>
<ul>
<li>Counter Strike 1.3 is a fun and addictive game that can provide hours of entertainment and challenge. You can play with your friends or with other players online and enjoy the thrill of competing across different scenarios and modes.</li>
<li>Counter Strike 1.3 is a game that can improve your skills and reflexes. You can learn to aim, shoot, move, communicate, and cooperate with your teammates, and develop your strategic thinking and problem-solving abilities.</li>
<li>Counter Strike 1.3 is a game that can satisfy your nostalgia and curiosity. You can experience the original version of Counter Strike that started it all and see how it evolved over the years. You can also compare it with the newer versions of Counter Strike, such as Counter Strike: Source and Counter Strike: Global Offensive.</li>
<li>Counter Strike 1.3 is a game that is easy to download and install. You do not need a powerful PC or a high-speed Internet connection to play it, as it has low system requirements and a small file size. You can also find many sources and guides on how to download Counter Strike 1.3 online.</li>
</ul>
<h3>The challenges and drawbacks of Counter Strike 1.3</h3>
<p>However, Counter Strike 1.3 is not a perfect game, and it also has some challenges and drawbacks you should keep in mind before downloading it. Here are some of the problems you may encounter with Counter Strike 1.3:</p>
<ul>
<li>Counter Strike 1.3 is an old game with dated graphics and sounds. You may find it visually unappealing or dull compared to modern FPS games with more realistic and immersive graphics and sound.</li>

<li>Counter Strike 1.3 is a game with a steep learning curve and a competitive community. You may find it difficult or frustrating to play, especially if you are new to the game or if you face more experienced or skilled players who can easily dominate you.</li>
<li>Counter Strike 1.3 is a game that requires constant updates and maintenance. You may need to download patches or mods to fix some of its issues or improve some of its features, or to keep up with the latest versions or trends of the game.</li>
</ul>
<h2>How to download Counter Strike 1.3?</h2>
<p>If you are interested in downloading Counter Strike 1.3, you will need to follow some steps and requirements to ensure a smooth and successful installation. Here are some of the things you need to know about downloading Counter Strike 1.3:</p>
<h3>The requirements and compatibility of Counter Strike 1.3</h3>
<p>Before downloading Counter Strike 1.3, you should make sure your PC meets the minimum system requirements to run the game. Here are the specifications you need:</p>
<table>
<tr><th>Operating system</th><td>Windows XP/Vista/7/8/10</td></tr>
<tr><th>Processor</th><td>Pentium III 500 MHz or equivalent</td></tr>
<tr><th>Memory</th><td>96 MB of RAM</td></tr>
<tr><th>Graphics</th><td>16 MB video card</td></tr>
<tr><th>Storage</th><td>500 MB available space</td></tr>
<tr><th>Sound card</th><td>DirectX-compatible sound card</td></tr>
<tr><th>Network</th><td>Broadband Internet connection</td></tr>
</table>
<p>You should also make sure you have Half-Life installed on your PC, since Counter Strike 1.3 is a mod for Half-Life and requires it to run. You can buy Half-Life on Steam or other online platforms, or use your own CD-ROM if you have one.</p>
<h3>The sources and steps for downloading Counter Strike 1.3</h3>

<ul>
<li>You can download Counter Strike 1.3 from Steam, the official platform for games by Valve Corporation. You will need to create a Steam account and install the Steam client on your PC, then search for Counter Strike 1.3 in the Steam store and click the download button. Steam will install and update the game automatically.</li>
<li>You can download Counter Strike 1.3 from CS-Download, a website that provides free, safe downloads of Counter Strike versions and mods. You will need to visit the website and click the download link for Counter Strike 1.3, then follow the instructions and prompts to install the game on your PC.</li>
<li>You can download Counter Strike 1.3 from FilePlanet, a website that hosts various files and downloads for games and software. You will need to visit the website and search for Counter Strike 1.3, then choose the file that matches your system and preferences and click the download button. You will then need to extract and run the file to install the game on your PC.</li>
</ul>
<p>After you have downloaded Counter Strike 1.3, you can launch the game from your desktop or start menu and enjoy playing with your friends or other players online.</p>
<h2>Conclusion</h2>
<h4>Summary and recommendations</h4>
<p>Counter Strike 1.3 is a classic FPS game that you can download and play on your PC. It is a multiplayer game that pits two teams of Terrorists and Counter-Terrorists against each other across various maps and modes. It is a game with many features and benefits, such as fun and addictive gameplay, skill improvement, nostalgic appeal, and easy installation. It is also a game with some challenges and drawbacks, such as dated graphics, bugs and glitches, a steep learning curve, and constant updates.</p>

<h4>Frequently asked questions</h4>
<p>Here are some of the most frequently asked questions about Counter Strike 1.3:</p>
<ol>
<li><b>Is Counter Strike 1.3 free?</b><br>Yes, Counter Strike 1.3 is free to download and play, as long as you have Half-Life installed on your PC.</li>
<li><b>Is Counter Strike 1.3 safe?</b><br>Yes, Counter Strike 1.3 is safe to download and play, as long as you use trusted sources and websites, such as Steam, CS-Download, or FilePlanet.</li>
<li><b>Is Counter Strike 1.3 still popular?</b><br>Yes, Counter Strike 1.3 is still popular among many players who love this classic version of the game. You can find many servers and communities that host Counter Strike 1.3 matches and tournaments online.</li>
<li><b>How can I improve my skills in Counter Strike 1.3?</b><br>You can improve your skills in Counter Strike 1.3 by practicing regularly, learning from other players, watching tutorials and guides online, and joining clans or teams that can help you improve.</li>
<li><b>What are some of the best maps in Counter Strike 1.3?</b><br>Some of the best maps in Counter Strike 1.3 are de_dust2, cs_italy, cs_office, de_train, de_inferno, cs_assault, de_aztec, cs_militia, de_nuke, and cs_siege.</li>
</ol></p>
spaces/BongoCaat/ArtGenerator/app.py
DELETED
@@ -1,611 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "view-in-github",
    "colab_type": "text"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/qunash/stable-diffusion-2-gui/blob/main/stable_diffusion_2_0.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "620o1BxdNbgq"
   },
   "source": [
    "# **Stable Diffusion 2.1**\n",
    "Gradio app for [Stable Diffusion 2](https://huggingface.co/stabilityai/stable-diffusion-2) by [Stability AI](https://stability.ai/) (v2-1_768-ema-pruned.ckpt).\n",
    "It uses [Hugging Face](https://huggingface.co/) Diffusers🧨 implementation.\n",
    "\n",
    "Currently supported pipelines are `text-to-image`, `image-to-image`, `inpainting`, `4x upscaling` and `depth-to-image`.\n",
    "\n",
    "<br>\n",
    "\n",
    "Colab by [anzorq](https://twitter.com/hahahahohohe). If you like it, please consider supporting me:\n",
    "\n",
    "[<a href=\"https://www.buymeacoffee.com/anzorq\" target=\"_blank\"><img src=\"https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png\" height=\"32px\" width=\"108px\" alt=\"Buy Me A Coffee\"></a>](https://www.buymeacoffee.com/anzorq)\n",
    "<br>\n",
    "[](https://github.com/qunash/stable-diffusion-2-gui)\n",
    "\n",
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KQI4RX20DW_8"
   },
   "source": [
    "# Install dependencies (~1.5 mins)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "id": "78HoqRAB-cES",
    "cellView": "form"
   },
   "outputs": [],
   "source": [
    "!pip install --upgrade git+https://github.com/huggingface/diffusers.git\n",
    "# !pip install diffusers\n",
    "!pip install --upgrade git+https://github.com/huggingface/transformers/\n",
    "# !pip install transformers\n",
    "!pip install accelerate==0.12.0\n",
    "!pip install scipy\n",
    "!pip install ftfy\n",
    "!pip install gradio -q\n",
    "\n",
    "#@markdown ### ⬅️ Run this cell\n",
    "#@markdown ---\n",
    "#@markdown ### Install **xformers**?\n",
    "#@markdown This will take an additional ~3.5 mins.<br>But images will generate 25-40% faster.\n",
    "install_xformers = False #@param {type:\"boolean\"}\n",
    "\n",
    "if install_xformers:\n",
    "  import os\n",
    "  from subprocess import getoutput\n",
    "\n",
    "  os.system(\"pip install --extra-index-url https://download.pytorch.org/whl/cu113 torch torchvision==0.13.1+cu113\")\n",
    "  os.system(\"pip install triton==2.0.0.dev20220701\")\n",
    "  gpu_info = getoutput('nvidia-smi')\n",
    "  if(\"A10G\" in gpu_info):\n",
    "    os.system(f\"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl\")\n",
    "  elif(\"T4\" in gpu_info):\n",
    "    os.system(f\"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl\")\n",
    "\n",
    "\n",
    "# ### install xformers\n",
    "# from IPython.utils import capture\n",
    "# from subprocess import getoutput\n",
    "# from re import search\n",
    "\n",
    "# with capture.capture_output() as cap:\n",
    " \n",
    "#   smi_out = getoutput('nvidia-smi')\n",
    "#   supported = search('(T4|P100|V100|A100|K80)', smi_out)\n",
    "\n",
    "# if not supported:\n",
    "#   while True:\n",
    "#     print(\"\\x1b[1;31mThe current GPU is not supported, try starting a new session.\\x1b[0m\")\n",
    "# else:\n",
    "#   supported = supported.group(0)\n",
    "\n",
    "# !pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/{supported}/xformers-0.0.13.dev0-py3-none-any.whl\n",
    "# !pip install -q https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OOPHNsFYDbc0"
   },
   "source": [
    "# Run the app"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "cellView": "form",
    "id": "gId0-asCBVwL"
   },
   "outputs": [],
   "source": [
    "#@title ⬇️🖼️\n",
    "from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionUpscalePipeline, DiffusionPipeline, StableDiffusionDepth2ImgPipeline, DPMSolverMultistepScheduler\n",
    "import gradio as gr\n",
    "import torch\n",
    "from PIL import Image\n",
    "import random\n",
    "\n",
    "state = None\n",
    "current_steps = 25\n",
    "attn_slicing_enabled = True\n",
    "mem_eff_attn_enabled = install_xformers\n",
    "\n",
    "# model_id = 'stabilityai/stable-diffusion-2'\n",
    "model_id = 'stabilityai/stable-diffusion-2-1'\n",
    "\n",
    "scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder=\"scheduler\")\n",
    "\n",
    "pipe = StableDiffusionPipeline.from_pretrained(\n",
    "    model_id,\n",
    "    revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
    "    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
    "    scheduler=scheduler\n",
    "  ).to(\"cuda\")\n",
    "pipe.enable_attention_slicing()\n",
    "if mem_eff_attn_enabled:\n",
    "  pipe.enable_xformers_memory_efficient_attention()\n",
    "\n",
    "pipe_i2i = None\n",
    "pipe_upscale = None\n",
    "pipe_inpaint = None\n",
    "pipe_depth2img = None\n",
    "\n",
    "\n",
    "modes = {\n",
    "    'txt2img': 'Text to Image',\n",
    "    'img2img': 'Image to Image',\n",
    "    'inpaint': 'Inpainting',\n",
    "    'upscale4x': 'Upscale 4x',\n",
    "    'depth2img': 'Depth to Image'\n",
    "}\n",
    "current_mode = modes['txt2img']\n",
    "\n",
    "def error_str(error, title=\"Error\"):\n",
    "    return f\"\"\"#### {title}\n",
    "            {error}\"\"\" if error else \"\"\n",
    "\n",
    "def update_state(new_state):\n",
    "    global state\n",
    "    state = new_state\n",
    "\n",
    "def update_state_info(old_state):\n",
    "    if state and state != old_state:\n",
    "        return gr.update(value=state)\n",
    "\n",
    "def set_mem_optimizations(pipe):\n",
    "    if attn_slicing_enabled:\n",
    "        pipe.enable_attention_slicing()\n",
    "    else:\n",
    "        pipe.disable_attention_slicing()\n",
    "    \n",
    "    if mem_eff_attn_enabled:\n",
    "        pipe.enable_xformers_memory_efficient_attention()\n",
    "    else:\n",
    "        pipe.disable_xformers_memory_efficient_attention()\n",
    "\n",
    "def get_i2i_pipe(scheduler):\n",
    "    \n",
    "    update_state(\"Loading image to image model...\")\n",
    "\n",
    "    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(\n",
    "        model_id,\n",
    "        revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
    "        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
    "        scheduler=scheduler\n",
    "    )\n",
    "    set_mem_optimizations(pipe)\n",
    "    pipe.to(\"cuda\")\n",
    "    return pipe\n",
    "\n",
    "def get_inpaint_pipe():\n",
    "    \n",
    "    update_state(\"Loading inpainting model...\")\n",
    "\n",
    "    pipe = DiffusionPipeline.from_pretrained(\n",
    "        \"stabilityai/stable-diffusion-2-inpainting\",\n",
    "        revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
    "        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
    "        # scheduler=scheduler # TODO currently setting scheduler here messes up the end result. A bug in Diffusers🧨\n",
    "    ).to(\"cuda\")\n",
    "    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
    "    pipe.enable_attention_slicing()\n",
    "    pipe.enable_xformers_memory_efficient_attention()\n",
    "    return pipe\n",
    "\n",
    "def get_upscale_pipe(scheduler):\n",
    "    \n",
    "    update_state(\"Loading upscale model...\")\n",
    "\n",
    "    pipe = StableDiffusionUpscalePipeline.from_pretrained(\n",
    "        \"stabilityai/stable-diffusion-x4-upscaler\",\n",
    "        revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
    "        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
    "        # scheduler=scheduler\n",
    "    )\n",
    "    # pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
    "    set_mem_optimizations(pipe)\n",
    "    pipe.to(\"cuda\")\n",
    "    return pipe\n",
    "    \n",
    "def get_depth2img_pipe():\n",
    "    \n",
    "    update_state(\"Loading depth to image model...\")\n",
    "\n",
    "    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(\n",
    "        \"stabilityai/stable-diffusion-2-depth\",\n",
    "        revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
    "        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
    "        # scheduler=scheduler\n",
    "    )\n",
    "    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
    "    set_mem_optimizations(pipe)\n",
    "    pipe.to(\"cuda\")\n",
    "    return pipe\n",
    "\n",
    "def switch_attention_slicing(attn_slicing):\n",
    "    global attn_slicing_enabled\n",
    "    attn_slicing_enabled = attn_slicing\n",
    "\n",
    "def switch_mem_eff_attn(mem_eff_attn):\n",
    "    global mem_eff_attn_enabled\n",
    "    mem_eff_attn_enabled = mem_eff_attn\n",
    "\n",
    "def pipe_callback(step: int, timestep: int, latents: torch.FloatTensor):\n",
    "    update_state(f\"{step}/{current_steps} steps\")#\\nTime left, sec: {timestep/100:.0f}\")\n",
    "\n",
    "def inference(inf_mode, prompt, n_images, guidance, steps, width=768, height=768, seed=0, img=None, strength=0.5, neg_prompt=\"\"):\n",
    "\n",
    "    update_state(\" \")\n",
    "\n",
    "    global current_mode\n",
    "    if inf_mode != current_mode:\n",
    "        pipe.to(\"cuda\" if inf_mode == modes['txt2img'] else \"cpu\")\n",
    "\n",
    "        if pipe_i2i is not None:\n",
    "            pipe_i2i.to(\"cuda\" if inf_mode == modes['img2img'] else \"cpu\")\n",
    "\n",
    "        if pipe_inpaint is not None:\n",
    "            pipe_inpaint.to(\"cuda\" if inf_mode == modes['inpaint'] else \"cpu\")\n",
    "\n",
    "        if pipe_upscale is not None:\n",
    "            pipe_upscale.to(\"cuda\" if inf_mode == modes['upscale4x'] else \"cpu\")\n",
    "        \n",
    "        if pipe_depth2img is not None:\n",
    "            pipe_depth2img.to(\"cuda\" if inf_mode == modes['depth2img'] else \"cpu\")\n",
    "\n",
    "        current_mode = inf_mode\n",
    "    \n",
    "    if seed == 0:\n",
    "        seed = random.randint(0, 2147483647)\n",
    "\n",
    "    generator = torch.Generator('cuda').manual_seed(seed)\n",
    "    prompt = prompt\n",
    "\n",
    "    try:\n",
    "        \n",
    "        if inf_mode == modes['txt2img']:\n",
    "            return txt_to_img(prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
    "        \n",
    "        elif inf_mode == modes['img2img']:\n",
    "            if img is None:\n",
    "                return None, gr.update(visible=True, value=error_str(\"Image is required for Image to Image mode\"))\n",
    "\n",
    "            return img_to_img(prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
    "        \n",
    "        elif inf_mode == modes['inpaint']:\n",
    "            if img is None:\n",
    "                return None, gr.update(visible=True, value=error_str(\"Image is required for Inpainting mode\"))\n",
    "\n",
    "            return inpaint(prompt, n_images, neg_prompt, img, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
    "\n",
    "        elif inf_mode == modes['upscale4x']:\n",
    "            if img is None:\n",
    "                return None, gr.update(visible=True, value=error_str(\"Image is required for Upscale mode\"))\n",
    "\n",
    "            return upscale(prompt, n_images, neg_prompt, img, guidance, steps, generator), gr.update(visible=False, value=None)\n",
    "\n",
    "        elif inf_mode == modes['depth2img']:\n",
    "            if img is None:\n",
    "                return None, gr.update(visible=True, value=error_str(\"Image is required for Depth to Image mode\"))\n",
    "\n",
    "            return depth2img(prompt, n_images, neg_prompt, img, guidance, steps, generator, seed), gr.update(visible=False, value=None)\n",
    "\n",
    "    except Exception as e:\n",
    "        return None, gr.update(visible=True, value=error_str(e))\n",
    "\n",
    "def txt_to_img(prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed):\n",
    "\n",
    "    result = pipe(\n",
    "      prompt,\n",
    "      num_images_per_prompt = n_images,\n",
    "      negative_prompt = neg_prompt,\n",
    "      num_inference_steps = int(steps),\n",
    "      guidance_scale = guidance,\n",
    "      width = width,\n",
    "      height = height,\n",
    "      generator = generator,\n",
    "      callback=pipe_callback).images\n",
    "\n",
    "    update_state(f\"Done. Seed: {seed}\")\n",
    "\n",
    "    return result\n",
    "\n",
    "def img_to_img(prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed):\n",
    "\n",
    "    global pipe_i2i\n",
    "    if pipe_i2i is None:\n",
    "        pipe_i2i = get_i2i_pipe(scheduler)\n",
    "\n",
    "    img = img['image']\n",
    "    ratio = min(height / img.height, width / img.width)\n",
    "    img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)\n",
    "    result = pipe_i2i(\n",
    "        prompt,\n",
    "        num_images_per_prompt = n_images,\n",
    "        negative_prompt = neg_prompt,\n",
    "        image = img,\n",
    "        num_inference_steps = int(steps),\n",
    "        strength = strength,\n",
    "        guidance_scale = guidance,\n",
    "        # width = width,\n",
    "        # height = height,\n",
    "        generator = generator,\n",
    "        callback=pipe_callback).images\n",
    "\n",
    "    update_state(f\"Done. Seed: {seed}\")\n",
    "    \n",
    "    return result\n",
    "\n",
    "# TODO Currently supports only 512x512 images\n",
    "def inpaint(prompt, n_images, neg_prompt, img, guidance, steps, width, height, generator, seed):\n",
    "\n",
    "    global pipe_inpaint\n",
    "    if pipe_inpaint is None:\n",
    "        pipe_inpaint = get_inpaint_pipe()\n",
    "\n",
    "    inp_img = img['image']\n",
    "    mask = img['mask']\n",
    "    inp_img = square_padding(inp_img)\n",
    "    mask = square_padding(mask)\n",
    "\n",
    "    # # ratio = min(height / inp_img.height, width / inp_img.width)\n",
    "    # ratio = min(512 / inp_img.height, 512 / inp_img.width)\n",
    "    # inp_img = inp_img.resize((int(inp_img.width * ratio), int(inp_img.height * ratio)), Image.LANCZOS)\n",
    "    # mask = mask.resize((int(mask.width * ratio), int(mask.height * ratio)), Image.LANCZOS)\n",
    "\n",
    "    inp_img = inp_img.resize((512, 512))\n",
    "    mask = mask.resize((512, 512))\n",
    "\n",
    "    result = pipe_inpaint(\n",
    "        prompt,\n",
    "        image = inp_img,\n",
    "        mask_image = mask,\n",
    "        num_images_per_prompt = n_images,\n",
    "        negative_prompt = neg_prompt,\n",
    "        num_inference_steps = int(steps),\n",
    "        guidance_scale = guidance,\n",
    "        # width = width,\n",
    "        # height = height,\n",
    "        generator = generator,\n",
    "        callback=pipe_callback).images\n",
    "    \n",
    "    update_state(f\"Done. Seed: {seed}\")\n",
    "\n",
    "    return result\n",
    "\n",
    "def depth2img(prompt, n_images, neg_prompt, img, guidance, steps, generator, seed):\n",
    "\n",
    "    global pipe_depth2img\n",
    "    if pipe_depth2img is None:\n",
    "        pipe_depth2img = get_depth2img_pipe()\n",
    "\n",
    "    img = img['image']\n",
    "    result = pipe_depth2img(\n",
    "        prompt,\n",
    "        num_images_per_prompt = n_images,\n",
    "        negative_prompt = neg_prompt,\n",
    "        image = img,\n",
    "        num_inference_steps = int(steps),\n",
    "        guidance_scale = guidance,\n",
    "        # width = width,\n",
    "        # height = height,\n",
    "        generator = generator,\n",
    "        callback=pipe_callback).images\n",
    "\n",
    "    update_state(f\"Done. Seed: {seed}\")\n",
    "    \n",
    "    return result\n",
    "\n",
    "def square_padding(img):\n",
    "    width, height = img.size\n",
    "    if width == height:\n",
    "        return img\n",
    "    new_size = max(width, height)\n",
    "    new_img = Image.new('RGB', (new_size, new_size), (0, 0, 0, 255))\n",
    "    new_img.paste(img, ((new_size - width) // 2, (new_size - height) // 2))\n",
    "    return new_img\n",
    "\n",
    "def upscale(prompt, n_images, neg_prompt, img, guidance, steps, generator):\n",
    "\n",
    "    global pipe_upscale\n",
    "    if pipe_upscale is None:\n",
    "        pipe_upscale = get_upscale_pipe(scheduler)\n",
    "\n",
    "    img = img['image']\n",
    "    return upscale_tiling(prompt, neg_prompt, img, guidance, steps, generator)\n",
    "\n",
    "    # result = pipe_upscale(\n",
    "    #     prompt,\n",
    "    #     image = img,\n",
    "    #     num_inference_steps = int(steps),\n",
    "    #     guidance_scale = guidance,\n",
    "    #     negative_prompt = neg_prompt,\n",
    "    #     num_images_per_prompt = n_images,\n",
    "    #     generator = generator).images[0]\n",
    "\n",
    "    # return result\n",
    "\n",
    "def upscale_tiling(prompt, neg_prompt, img, guidance, steps, generator):\n",
    "\n",
    "    width, height = img.size\n",
    "\n",
    "    # calculate the padding needed to make the image dimensions a multiple of 128\n",
    "    padding_x = 128 - (width % 128) if width % 128 != 0 else 0\n",
    "    padding_y = 128 - (height % 128) if height % 128 != 0 else 0\n",
    "\n",
    "    # create a white image of the right size to be used as padding\n",
    "    padding_img = Image.new('RGB', (padding_x, padding_y), color=(255, 255, 255, 0))\n",
    "\n",
    "    # paste the padding image onto the original image to add the padding\n",
    "    img.paste(padding_img, (width, height))\n",
    "\n",
    "    # update the image dimensions to include the padding\n",
    "    width += padding_x\n",
    "    height += padding_y\n",
    "\n",
    "    if width > 128 or height > 128:\n",
    "\n",
    "        num_tiles_x = int(width / 128)\n",
    "        num_tiles_y = int(height / 128)\n",
    "\n",
    "        upscaled_img = Image.new('RGB', (img.size[0] * 4, img.size[1] * 4))\n",
    "        for x in range(num_tiles_x):\n",
    "            for y in range(num_tiles_y):\n",
    "                update_state(f\"Upscaling tile {x * num_tiles_y + y + 1}/{num_tiles_x * num_tiles_y}\")\n",
    "                tile = img.crop((x * 128, y * 128, (x + 1) * 128, (y + 1) * 128))\n",
    "\n",
    "                upscaled_tile = pipe_upscale(\n",
    "                    prompt=\"\",\n",
    "                    image=tile,\n",
    "                    num_inference_steps=steps,\n",
    "                    guidance_scale=guidance,\n",
    "                    # negative_prompt = neg_prompt,\n",
    "                    generator=generator,\n",
    "                ).images[0]\n",
    "\n",
    "                upscaled_img.paste(upscaled_tile, (x * upscaled_tile.size[0], y * upscaled_tile.size[1]))\n",
    "\n",
    "        return [upscaled_img]\n",
    "    else:\n",
    "        return pipe_upscale(\n",
    "            prompt=prompt,\n",
    "            image=img,\n",
    "            num_inference_steps=steps,\n",
    "            guidance_scale=guidance,\n",
    "            negative_prompt = neg_prompt,\n",
    "            generator=generator,\n",
    "        ).images\n",
    "\n",
    "\n",
    "\n",
    "def on_mode_change(mode):\n",
    "    return gr.update(visible = mode in (modes['img2img'], modes['inpaint'], modes['upscale4x'], modes['depth2img'])), \\\n",
    "        gr.update(visible = mode == modes['inpaint']), \\\n",
    "        gr.update(visible = mode == modes['upscale4x']), \\\n",
    "        gr.update(visible = mode == modes['img2img'])\n",
    "\n",
    "def on_steps_change(steps):\n",
    "    global current_steps\n",
    "    current_steps = steps\n",
    "\n",
    "css = \"\"\".main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}\n",
    "\"\"\"\n",
    "with gr.Blocks(css=css) as demo:\n",
    "    gr.HTML(\n",
    "        f\"\"\"\n",
    "            <div class=\"main-div\">\n",
    "              <div>\n",
    "                <h1>Stable Diffusion 2.1</h1>\n",
    "              </div><br>\n",
    "              <p> Model used: <a href=\"https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt\" target=\"_blank\">v2-1_768-ema-pruned.ckpt</a></p>\n",
    "              Running on <b>{\"GPU 🔥\" if torch.cuda.is_available() else \"CPU 🥶\"}</b>\n",
    "            </div>\n",
    "        \"\"\"\n",
    "    )\n",
    "    with gr.Row():\n",
    "        \n",
    "        with gr.Column(scale=70):\n",
    "            with gr.Group():\n",
    "                with gr.Row():\n",
    "                    prompt = gr.Textbox(label=\"Prompt\", show_label=False, max_lines=2,placeholder=f\"Enter prompt\").style(container=False)\n",
    "                    generate = gr.Button(value=\"Generate\").style(rounded=(False, True, True, False))\n",
    "\n",
    "                gallery = gr.Gallery(label=\"Generated images\", show_label=False).style(grid=[2], height=\"auto\")\n",
    "            state_info = gr.Textbox(label=\"State\", show_label=False, max_lines=2).style(container=False)\n",
    "            error_output = gr.Markdown(visible=False)\n",
    "\n",
    "        with gr.Column(scale=30):\n",
    "            inf_mode = gr.Radio(label=\"Inference Mode\", choices=list(modes.values()), value=modes['txt2img'])\n",
    "            \n",
    "            with gr.Group(visible=False) as i2i_options:\n",
    "                image = gr.Image(label=\"Image\", height=128, type=\"pil\", tool='sketch')\n",
    "                inpaint_info = gr.Markdown(\"Inpainting resizes and pads images to 512x512\", visible=False)\n",
    "                upscale_info = gr.Markdown(\"\"\"Best for small images (128x128 or smaller).<br>\n",
    "                Bigger images will be sliced into 128x128 tiles which will be upscaled individually.<br>\n",
    "                This is done to avoid running out of GPU memory.\"\"\", visible=False)\n",
    "                strength = gr.Slider(label=\"Transformation strength\", minimum=0, maximum=1, step=0.01, value=0.5)\n",
    "\n",
    "            with gr.Group():\n",
    "                neg_prompt = gr.Textbox(label=\"Negative prompt\", placeholder=\"What to exclude from the image\")\n",
    "\n",
    "                n_images = gr.Slider(label=\"Number of images\", value=1, minimum=1, maximum=4, step=1)\n",
    "                with gr.Row():\n",
    "                    guidance = gr.Slider(label=\"Guidance scale\", value=7.5, maximum=15)\n",
    "                    steps = gr.Slider(label=\"Steps\", value=current_steps, minimum=2, maximum=100, step=1)\n",
    "\n",
    "                with gr.Row():\n",
    "                    width = gr.Slider(label=\"Width\", value=768, minimum=64, maximum=1024, step=8)\n",
    "                    height = gr.Slider(label=\"Height\", value=768, minimum=64, maximum=1024, step=8)\n",
    "\n",
    "                seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)\n",
    "            with gr.Accordion(\"Memory optimization\"):\n",
    "                attn_slicing = gr.Checkbox(label=\"Attention slicing (a bit slower, but uses less memory)\", value=attn_slicing_enabled)\n",
    "                # mem_eff_attn = gr.Checkbox(label=\"Memory efficient attention (xformers)\", value=mem_eff_attn_enabled)\n",
    "\n",
    "    inf_mode.change(on_mode_change, inputs=[inf_mode], outputs=[i2i_options, inpaint_info, upscale_info, strength], queue=False)\n",
    "    steps.change(on_steps_change, inputs=[steps], outputs=[], queue=False)\n",
    "    attn_slicing.change(lambda x: switch_attention_slicing(x), inputs=[attn_slicing], queue=False)\n",
    "    # mem_eff_attn.change(lambda x: switch_mem_eff_attn(x), inputs=[mem_eff_attn], queue=False)\n",
    "\n",
    "    inputs = [inf_mode, prompt, n_images, guidance, steps, width, height, seed, image, strength, neg_prompt]\n",
    "    outputs = [gallery, error_output]\n",
    "    prompt.submit(inference, inputs=inputs, outputs=outputs)\n",
    "    generate.click(inference, inputs=inputs, outputs=outputs)\n",
    "\n",
    "    demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False)\n",
    "\n",
    "    gr.HTML(\"\"\"\n",
    "    <div style=\"border-top: 1px solid #303030;\">\n",
    "      <br>\n",
    "      <p>Space by: <a href=\"https://twitter.com/hahahahohohe\"><img src=\"https://img.shields.io/twitter/follow/hahahahohohe?label=%40anzorq&style=social\" alt=\"Twitter Follow\"></a></p><br>\n",
    "      <p>Enjoying this app? Please consider <a href=\"https://www.buymeacoffee.com/anzorq\">supporting me</a></p>\n",
    "      <a href=\"https://www.buymeacoffee.com/anzorq\" target=\"_blank\"><img src=\"https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png\" alt=\"Buy Me A Coffee\" style=\"height: 45px !important;width: 162px !important;\" ></a><br><br>\n",
    "      <a href=\"https://github.com/qunash/stable-diffusion-2-gui\" target=\"_blank\"><img alt=\"GitHub Repo stars\" src=\"https://img.shields.io/github/stars/qunash/stable-diffusion-2-gui?style=social\"></a>\n",
    "      <p><img src=\"https://visitor-badge.glitch.me/badge?page_id=anzorq.sd-2-colab\" alt=\"visitors\"></p>\n",
    "    </div>\n",
    "    \"\"\")\n",
    "\n",
    "demo.queue()\n",
    "demo.launch(debug=True, share=True, height=768)\n"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "private_outputs": True,
   "provenance": [],
   "toc_visible": True,
   "include_colab_link": False
  },
  "gpuClass": "standard",
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
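The upscale_tiling cell in the notebook above slices large inputs into 128x128 tiles so the x4 upscaler only ever sees one tile at a time, then pastes each result at four times the tile's origin. The index bookkeeping can be checked standalone; this sketch is illustrative only and merely mirrors the notebook's crop/paste arithmetic:

# Minimal sketch of the tile bookkeeping used by upscale_tiling above:
# crop 128x128 tiles, upscale each by 4x, paste at 4x the tile origin.
TILE, SCALE = 128, 4

def tile_boxes(width, height):
    """Yield (src_box, dst_origin) pairs covering a width x height image."""
    for x in range(width // TILE):
        for y in range(height // TILE):
            src = (x * TILE, y * TILE, (x + 1) * TILE, (y + 1) * TILE)
            dst = (x * TILE * SCALE, y * TILE * SCALE)
            yield src, dst

# A 256x128 input produces two tiles, pasted at x offsets 0 and 512.
boxes = list(tile_boxes(256, 128))
assert boxes[0] == ((0, 0, 128, 128), (0, 0))
assert boxes[1] == ((128, 0, 256, 128), (512, 0))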
spaces/BradAllgood/fastai_chapter2_new/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Fastai Chapter2 New
emoji: 🦀
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 3.29.0
app_file: app.py
pinned: false
license: apache-2.0
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h
DELETED
@@ -1,35 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-#pragma once
-#include <torch/types.h>
-
-namespace detectron2 {
-
-at::Tensor box_iou_rotated_cpu(
-    const at::Tensor& boxes1,
-    const at::Tensor& boxes2);
-
-#ifdef WITH_CUDA
-at::Tensor box_iou_rotated_cuda(
-    const at::Tensor& boxes1,
-    const at::Tensor& boxes2);
-#endif
-
-// Interface for Python
-// inline is needed to prevent multiple function definitions when this header is
-// included by different cpps
-inline at::Tensor box_iou_rotated(
-    const at::Tensor& boxes1,
-    const at::Tensor& boxes2) {
-  assert(boxes1.device().is_cuda() == boxes2.device().is_cuda());
-  if (boxes1.device().is_cuda()) {
-#ifdef WITH_CUDA
-    return box_iou_rotated_cuda(boxes1, boxes2);
-#else
-    AT_ERROR("Not compiled with GPU support");
-#endif
-  }
-
-  return box_iou_rotated_cpu(boxes1, boxes2);
-}
-
-} // namespace detectron2
spaces/CVPR/LIVE/thrust/thrust/cmake/thrust-config.cmake
DELETED
@@ -1,652 +0,0 @@
-#
-# find_package(Thrust) config file.
-#
-# Provided by NVIDIA under the same license as the associated Thrust library.
-#
-# Reply-To: Allison Vacanti <[email protected]>
-#
-# *****************************************************************************
-# ** The following is a short reference to using Thrust from CMake.          **
-# ** For more details, see the README.md in the same directory as this file. **
-# *****************************************************************************
-#
-# # General Usage:
-# find_package(Thrust REQUIRED CONFIG)
-# thrust_create_target(Thrust [options])
-# target_link_libraries(some_project_lib Thrust)
-#
-# # Create default target with: HOST=CPP DEVICE=CUDA
-# thrust_create_target(TargetName)
-#
-# # Create target with: HOST=CPP DEVICE=TBB
-# thrust_create_target(TargetName DEVICE TBB)
-#
-# # Create target with: HOST=TBB DEVICE=OMP
-# thrust_create_target(TargetName HOST TBB DEVICE OMP)
-#
-# # Create CMake cache options THRUST_[HOST|DEVICE]_SYSTEM and configure a
-# # target from them. This allows these systems to be changed by developers at
-# # configure time, per build.
-# thrust_create_target(TargetName FROM_OPTIONS
-#   [HOST_OPTION <option_name>]      # Optionally rename the host system option
-#   [DEVICE_OPTION <option_name>]    # Optionally rename the device system option
-#   [HOST_OPTION_DOC <doc_string>]   # Optionally change the cache label
-#   [DEVICE_OPTION_DOC <doc_string>] # Optionally change the cache label
-#   [HOST <default system>]          # Optionally change the default backend
-#   [DEVICE <default system>]        # Optionally change the default backend
-#   [ADVANCED]                       # Optionally mark options as advanced
-# )
-#
-# # Use a custom TBB, CUB, and/or OMP
-# # (Note that once set, these cannot be changed. This includes COMPONENT
-# # preloading and lazy lookups in thrust_create_target)
-# find_package(Thrust REQUIRED)
-# thrust_set_CUB_target(MyCUBTarget) # MyXXXTarget contains an existing
-# thrust_set_TBB_target(MyTBBTarget) # interface to XXX for Thrust to use.
-# thrust_set_OMP_target(MyOMPTarget)
-# thrust_create_target(ThrustWithMyCUB DEVICE CUDA)
-# thrust_create_target(ThrustWithMyTBB DEVICE TBB)
-# thrust_create_target(ThrustWithMyOMP DEVICE OMP)
-#
-# # Create target with HOST=CPP DEVICE=CUDA and some advanced flags set
-# thrust_create_target(TargetName
-#   IGNORE_DEPRECATED_CPP_DIALECT # Silence build warnings about deprecated compilers and C++ standards
-#   IGNORE_DEPRECATED_CPP_11      # Only silence deprecation warnings for C++11
-#   IGNORE_DEPRECATED_COMPILER    # Only silence deprecation warnings for old compilers
-#   IGNORE_CUB_VERSION            # Skip configure-time and compile-time CUB version checks
-# )
-#
-# # Test if a particular system has been loaded. ${var_name} is set to TRUE or
-# # FALSE to indicate if "system" is found.
-# thrust_is_system_found(<system> <var_name>)
-# thrust_is_cuda_system_found(<var_name>)
-# thrust_is_tbb_system_found(<var_name>)
-# thrust_is_omp_system_found(<var_name>)
-# thrust_is_cpp_system_found(<var_name>)
-#
-# # Define / update THRUST_${system}_FOUND flags in current scope
-# thrust_update_system_found_flags()
-#
-# # View verbose log with target and dependency information:
-# $ cmake . --log-level=VERBOSE (CMake 3.15.7 and above)
-#
-# # Print debugging output to status channel:
-# thrust_debug_internal_targets()
-# thrust_debug_target(TargetName "${THRUST_VERSION}")
-
-cmake_minimum_required(VERSION 3.15)
-
-################################################################################
-# User variables and APIs. Users can rely on these:
-#
-
-# Advertise system options:
-set(THRUST_HOST_SYSTEM_OPTIONS
-  CPP OMP TBB
-  CACHE INTERNAL "Valid Thrust host systems."
-)
-set(THRUST_DEVICE_SYSTEM_OPTIONS
-  CUDA CPP OMP TBB
-  CACHE INTERNAL "Valid Thrust device systems"
-)
-
-# Workaround cmake issue #20670 https://gitlab.kitware.com/cmake/cmake/-/issues/20670
-set(THRUST_VERSION ${${CMAKE_FIND_PACKAGE_NAME}_VERSION} CACHE INTERNAL "")
-set(THRUST_VERSION_MAJOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MAJOR} CACHE INTERNAL "")
-set(THRUST_VERSION_MINOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MINOR} CACHE INTERNAL "")
-set(THRUST_VERSION_PATCH ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_PATCH} CACHE INTERNAL "")
-set(THRUST_VERSION_TWEAK ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_TWEAK} CACHE INTERNAL "")
-set(THRUST_VERSION_COUNT ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_COUNT} CACHE INTERNAL "")
-
-function(thrust_create_target target_name)
-  thrust_debug("Assembling target ${target_name}. Options: ${ARGN}" internal)
-  set(options
-    ADVANCED
-    FROM_OPTIONS
-    IGNORE_CUB_VERSION_CHECK
-    IGNORE_DEPRECATED_COMPILER
-    IGNORE_DEPRECATED_CPP_11
-    IGNORE_DEPRECATED_CPP_DIALECT
-  )
-  set(keys
-    DEVICE
-    DEVICE_OPTION
-    DEVICE_OPTION_DOC
-    HOST
-    HOST_OPTION
-    HOST_OPTION_DOC
-  )
-  cmake_parse_arguments(TCT "${options}" "${keys}" "" ${ARGN})
-  if (TCT_UNPARSED_ARGUMENTS)
-    message(AUTHOR_WARNING
-      "Unrecognized arguments passed to thrust_create_target: "
-      ${TCT_UNPARSED_ARGUMENTS}
-    )
-  endif()
-
-  # Check that the main Thrust internal target is available
-  # (functions have global scope, targets have directory scope, so this
-  # might happen)
-  if (NOT TARGET Thrust::Thrust)
-    message(AUTHOR_WARNING
-      "The `thrust_create_target` function was called outside the scope of the "
-      "thrust targets. Call find_package again to recreate targets."
-    )
-  endif()
-
-  _thrust_set_if_undefined(TCT_HOST CPP)
-  _thrust_set_if_undefined(TCT_DEVICE CUDA)
-  _thrust_set_if_undefined(TCT_HOST_OPTION THRUST_HOST_SYSTEM)
-  _thrust_set_if_undefined(TCT_DEVICE_OPTION THRUST_DEVICE_SYSTEM)
-  _thrust_set_if_undefined(TCT_HOST_OPTION_DOC "Thrust host system.")
-  _thrust_set_if_undefined(TCT_DEVICE_OPTION_DOC "Thrust device system.")
-
-  if (NOT TCT_HOST IN_LIST THRUST_HOST_SYSTEM_OPTIONS)
-    message(FATAL_ERROR
-      "Requested HOST=${TCT_HOST}; must be one of ${THRUST_HOST_SYSTEM_OPTIONS}")
-  endif()
-
-  if (NOT TCT_DEVICE IN_LIST THRUST_DEVICE_SYSTEM_OPTIONS)
-    message(FATAL_ERROR
-      "Requested DEVICE=${TCT_DEVICE}; must be one of ${THRUST_DEVICE_SYSTEM_OPTIONS}")
-  endif()
-
-  if (TCT_FROM_OPTIONS)
-    _thrust_create_cache_options(
-      ${TCT_HOST} ${TCT_DEVICE}
-      ${TCT_HOST_OPTION} ${TCT_DEVICE_OPTION}
-      ${TCT_HOST_OPTION_DOC} ${TCT_DEVICE_OPTION_DOC}
-      ${TCT_ADVANCED}
-    )
-    set(TCT_HOST ${${TCT_HOST_OPTION}})
-    set(TCT_DEVICE ${${TCT_DEVICE_OPTION}})
-    thrust_debug("Current option settings:" internal)
-    thrust_debug("  - ${TCT_HOST_OPTION}=${TCT_HOST}" internal)
-    thrust_debug("  - ${TCT_DEVICE_OPTION}=${TCT_DEVICE}" internal)
-  endif()
-
-  _thrust_find_backend(${TCT_HOST} REQUIRED)
-  _thrust_find_backend(${TCT_DEVICE} REQUIRED)
-
-  # We can just create an INTERFACE IMPORTED target here instead of going
-  # through _thrust_declare_interface_alias as long as we aren't hanging any
-  # Thrust/CUB include paths on ${target_name}.
-  add_library(${target_name} INTERFACE IMPORTED)
-  target_link_libraries(${target_name}
-    INTERFACE
-    Thrust::${TCT_HOST}::Host
-    Thrust::${TCT_DEVICE}::Device
-  )
-
-  # This would be nice to enforce, but breaks when using old cmake + new
-  # compiler, since cmake doesn't know what features the new compiler version
-  # supports.
-  # Leaving this here as a reminder not to add it back. Just let the
-  # compile-time checks in thrust/detail/config/cpp_dialect.h handle it.
-  #
-  # if (NOT TCT_IGNORE_DEPRECATED_CPP_DIALECT)
-  #   if (TCT_IGNORE_DEPRECATED_CPP_11)
-  #     target_compile_features(${target_name} INTERFACE cxx_std_11)
-  #   else()
-  #     target_compile_features(${target_name} INTERFACE cxx_std_14)
-  #   endif()
-  # endif()
-
-  if (TCT_IGNORE_DEPRECATED_CPP_DIALECT)
-    target_compile_definitions(${target_name} INTERFACE "THRUST_IGNORE_DEPRECATED_CPP_DIALECT")
-  endif()
-
-  if (TCT_IGNORE_DEPRECATED_CPP_11)
-    target_compile_definitions(${target_name} INTERFACE "THRUST_IGNORE_DEPRECATED_CPP_11")
-  endif()
-
-  if (TCT_IGNORE_DEPRECATED_COMPILER)
-    target_compile_definitions(${target_name} INTERFACE "THRUST_IGNORE_DEPRECATED_COMPILER")
-  endif()
-
-  if (TCT_IGNORE_CUB_VERSION_CHECK)
-    target_compile_definitions(${target_name} INTERFACE "THRUST_IGNORE_CUB_VERSION_CHECK")
-  else()
-    if (("${TCT_HOST}" STREQUAL "CUDA" OR "${TCT_DEVICE}" STREQUAL "CUDA") AND
-        (NOT THRUST_VERSION VERSION_EQUAL THRUST_CUB_VERSION))
-      message(FATAL_ERROR
-        "The version of CUB found by CMake is not compatible with this release of Thrust. "
-        "CUB is now included in the CUDA Toolkit, so you no longer need to use your own checkout of CUB. "
-        "Pass IGNORE_CUB_VERSION_CHECK to thrust_create_target to ignore. "
-        "(CUB ${THRUST_CUB_VERSION}, Thrust ${THRUST_VERSION})."
-      )
-    endif()
-  endif()
-
-  thrust_debug_target(${target_name} "Thrust ${THRUST_VERSION}" internal)
-endfunction()
-
-function(thrust_is_system_found system var_name)
-  if (TARGET Thrust::${system})
-    set(${var_name} TRUE PARENT_SCOPE)
-  else()
-    set(${var_name} FALSE PARENT_SCOPE)
-  endif()
-endfunction()
-
-function(thrust_is_cpp_system_found var_name)
-  thrust_is_system_found(CPP ${var_name})
-  set(${var_name} ${${var_name}} PARENT_SCOPE)
-endfunction()
-
-function(thrust_is_cuda_system_found var_name)
-  thrust_is_system_found(CUDA ${var_name})
-  set(${var_name} ${${var_name}} PARENT_SCOPE)
-endfunction()
-
-function(thrust_is_tbb_system_found var_name)
-  thrust_is_system_found(TBB ${var_name})
-  set(${var_name} ${${var_name}} PARENT_SCOPE)
-endfunction()
-
-function(thrust_is_omp_system_found var_name)
-  thrust_is_system_found(OMP ${var_name})
-  set(${var_name} ${${var_name}} PARENT_SCOPE)
-endfunction()
-
-# Since components are loaded lazily, this will refresh the
-# THRUST_${component}_FOUND flags in the current scope.
-# Alternatively, check system states individually using the
-# thrust_is_system_found functions.
-macro(thrust_update_system_found_flags)
-  set(THRUST_FOUND TRUE)
-  thrust_is_system_found(CPP THRUST_CPP_FOUND)
-  thrust_is_system_found(CUDA THRUST_CUDA_FOUND)
-  thrust_is_system_found(TBB THRUST_TBB_FOUND)
-  thrust_is_system_found(OMP THRUST_OMP_FOUND)
-endmacro()
-
-function(thrust_debug msg)
-  # Use the VERBOSE channel when called internally
-  # Run `cmake . --log-level=VERBOSE` to view.
-  if ("${ARGN}" STREQUAL "internal")
-    # If CMake is too old to know about the VERBOSE channel, just be silent.
-    # Users reproduce much the same output on the STATUS channel by using:
-    #   thrust_create_target(Thrust [...])
-    #   thrust_debug_internal_targets()
-    #   thrust_debug_target(Thrust)
-    if (CMAKE_VERSION VERSION_GREATER_EQUAL "3.15.7")
-      set(channel VERBOSE)
-    else()
-      return()
-    endif()
-  else()
-    set(channel STATUS)
-  endif()
-
-  message(${channel} "Thrust: ${msg}")
-endfunction()
-
-# Print details of the specified target.
-function(thrust_debug_target target_name version)
-  if (NOT TARGET ${target_name})
-    return()
-  endif()
-
-  set(is_internal "${ARGN}")
-
-  if (version)
-    set(version "(${version})")
-  endif()
-
-  thrust_debug("TargetInfo: ${target_name}: ${version}" ${is_internal})
-
-  function(_thrust_print_prop_if_set target_name prop)
-    get_target_property(value ${target_name} ${prop})
-    if (value)
-      thrust_debug("TargetInfo: ${target_name} > ${prop}: ${value}" ${is_internal})
-    endif()
-  endfunction()
-
-  function(_thrust_print_imported_prop_if_set target_name prop)
-    get_target_property(imported ${target_name} IMPORTED)
-    get_target_property(type ${target_name} TYPE)
-    if (imported AND NOT ${type} STREQUAL "INTERFACE_LIBRARY")
-      _thrust_print_prop_if_set(${target_name} ${prop})
-    endif()
-  endfunction()
-
-  _thrust_print_prop_if_set(${target_name} ALIASED_TARGET)
-  _thrust_print_prop_if_set(${target_name} IMPORTED)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_COMPILE_DEFINITIONS)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_COMPILE_FEATURES)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_COMPILE_OPTIONS)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_INCLUDE_DIRECTORIES)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_LINK_DEPENDS)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_LINK_DIRECTORIES)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_LINK_LIBRARIES)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_LINK_OPTIONS)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_SYSTEM_INCLUDE_DIRECTORIES)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_THRUST_HOST)
-  _thrust_print_prop_if_set(${target_name} INTERFACE_THRUST_DEVICE)
-  _thrust_print_imported_prop_if_set(${target_name} IMPORTED_LOCATION)
-  _thrust_print_imported_prop_if_set(${target_name} IMPORTED_LOCATION_DEBUG)
-  _thrust_print_imported_prop_if_set(${target_name} IMPORTED_LOCATION_RELEASE)
-endfunction()
-
-function(thrust_debug_internal_targets)
-  function(_thrust_debug_backend_targets backend version)
-    thrust_debug_target(Thrust::${backend} "${version}")
-    thrust_debug_target(Thrust::${backend}::Host "${version}")
-    thrust_debug_target(Thrust::${backend}::Device "${version}")
-  endfunction()
-
-  thrust_debug_target(Thrust::Thrust "${THRUST_VERSION}")
-
-  _thrust_debug_backend_targets(CPP "Thrust ${THRUST_VERSION}")
-
-  _thrust_debug_backend_targets(CUDA "CUB ${THRUST_CUB_VERSION}")
-  thrust_debug_target(CUB::CUB "${THRUST_CUB_VERSION}")
-
-  _thrust_debug_backend_targets(TBB "${THRUST_TBB_VERSION}")
-  thrust_debug_target(TBB:tbb "${THRUST_TBB_VERSION}")
-
-  _thrust_debug_backend_targets(OMP "${THRUST_OMP_VERSION}")
-  thrust_debug_target(OpenMP::OpenMP_CXX "${THRUST_OMP_VERSION}")
-endfunction()
-
-################################################################################
-# Internal utilities. Subject to change.
-#
-
-function(_thrust_set_if_undefined var)
-  if (NOT DEFINED ${var})
-    set(${var} ${ARGN} PARENT_SCOPE)
-  endif()
-endfunction()
-
-function(_thrust_declare_interface_alias alias_name ugly_name)
-  # 1) Only IMPORTED and ALIAS targets can be placed in a namespace.
-  # 2) When an IMPORTED library is linked to another target, its include
-  #    directories are treated as SYSTEM includes.
-  # 3) nvcc will automatically check the CUDA Toolkit include path *before* the
-  #    system includes. This means that the Toolkit Thrust will *always* be used
-  #    during compilation, and the include paths of an IMPORTED Thrust::Thrust
-  #    target will never have any effect.
-  # 4) This behavior can be fixed by setting the property NO_SYSTEM_FROM_IMPORTED
-  #    on EVERY target that links to Thrust::Thrust. This would be a burden and a
-  #    footgun for our users. Forgetting this would silently pull in the wrong thrust!
-  # 5) A workaround is to make a non-IMPORTED library outside of the namespace,
-  #    configure it, and then ALIAS it into the namespace (or ALIAS and then
-  #    configure, that seems to work too).
-  add_library(${ugly_name} INTERFACE)
-  add_library(${alias_name} ALIAS ${ugly_name})
-endfunction()
-
-# Create cache options for selecting the user/device systems with ccmake/cmake-gui.
-function(_thrust_create_cache_options host device host_option device_option host_doc device_doc advanced)
-  thrust_debug("Creating system cache options: (advanced=${advanced})" internal)
-  thrust_debug("  - Host Option=${host_option} Default=${host} Doc='${host_doc}'" internal)
-  thrust_debug("  - Device Option=${device_option} Default=${device} Doc='${device_doc}'" internal)
-  set(${host_option} ${host} CACHE STRING "${host_doc}")
-  set_property(CACHE ${host_option} PROPERTY STRINGS ${THRUST_HOST_SYSTEM_OPTIONS})
-  set(${device_option} ${device} CACHE STRING "${device_doc}")
-  set_property(CACHE ${device_option} PROPERTY STRINGS ${THRUST_DEVICE_SYSTEM_OPTIONS})
-  if (advanced)
-    mark_as_advanced(${host_option} ${device_option})
-  endif()
-endfunction()
-
-# Create Thrust::${backend}::Host and Thrust::${backend}::Device targets.
-# Assumes that `Thrust::${backend}` and `_Thrust_${backend}` have been created
-# by _thrust_declare_interface_alias and configured to bring in system
-# dependency interfaces (including Thrust::Thrust).
-function(_thrust_setup_system backend)
-  set(backend_target_alias "Thrust::${backend}")
-
-  if (backend IN_LIST THRUST_HOST_SYSTEM_OPTIONS)
-    set(host_target "_Thrust_${backend}_Host")
-    set(host_target_alias "Thrust::${backend}::Host")
-    if (NOT TARGET ${host_target_alias})
-      _thrust_declare_interface_alias(${host_target_alias} ${host_target})
-      target_compile_definitions(${host_target} INTERFACE
-        "THRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_${backend}")
-      target_link_libraries(${host_target} INTERFACE ${backend_target_alias})
-      set_property(TARGET ${host_target} PROPERTY INTERFACE_THRUST_HOST ${backend})
-      set_property(TARGET ${host_target} APPEND PROPERTY COMPATIBLE_INTERFACE_STRING THRUST_HOST)
-      thrust_debug_target(${host_target_alias} "" internal)
-    endif()
-  endif()
-
-  if (backend IN_LIST THRUST_DEVICE_SYSTEM_OPTIONS)
-    set(device_target "_Thrust_${backend}_Device")
-    set(device_target_alias "Thrust::${backend}::Device")
-    if (NOT TARGET ${device_target_alias})
-      _thrust_declare_interface_alias(${device_target_alias} ${device_target})
-      target_compile_definitions(${device_target} INTERFACE
-        "THRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_${backend}")
-      target_link_libraries(${device_target} INTERFACE ${backend_target_alias})
-      set_property(TARGET ${device_target} PROPERTY INTERFACE_THRUST_DEVICE ${backend})
-      set_property(TARGET ${device_target} APPEND PROPERTY COMPATIBLE_INTERFACE_STRING THRUST_DEVICE)
-      thrust_debug_target(${device_target_alias} "" internal)
-    endif()
-  endif()
-endfunction()
-
-# Use the provided cub_target for the CUDA backend. If Thrust::CUDA already
-# exists, this call has no effect.
-function(thrust_set_CUB_target cub_target)
-  if (NOT TARGET Thrust::CUDA)
-    thrust_debug("Setting CUB target to ${cub_target}" internal)
-    # Workaround cmake issue #20670 https://gitlab.kitware.com/cmake/cmake/-/issues/20670
-    set(THRUST_CUB_VERSION ${CUB_VERSION} CACHE INTERNAL "CUB version used by Thrust")
-    _thrust_declare_interface_alias(Thrust::CUDA _Thrust_CUDA)
-    target_link_libraries(_Thrust_CUDA INTERFACE Thrust::Thrust ${cub_target})
-    thrust_debug_target(${cub_target} "${THRUST_CUB_VERSION}" internal)
-    thrust_debug_target(Thrust::CUDA "CUB ${THRUST_CUB_VERSION}" internal)
-    _thrust_setup_system(CUDA)
-  endif()
-endfunction()
-
-# Use the provided tbb_target for the TBB backend. If Thrust::TBB already
-# exists, this call has no effect.
-function(thrust_set_TBB_target tbb_target)
-  if (NOT TARGET Thrust::TBB)
-    thrust_debug("Setting TBB target to ${tbb_target}" internal)
-    # Workaround cmake issue #20670 https://gitlab.kitware.com/cmake/cmake/-/issues/20670
-    set(THRUST_TBB_VERSION ${TBB_VERSION} CACHE INTERNAL "TBB version used by Thrust")
-    _thrust_declare_interface_alias(Thrust::TBB _Thrust_TBB)
-    target_link_libraries(_Thrust_TBB INTERFACE Thrust::Thrust ${tbb_target})
-    thrust_debug_target(${tbb_target} "${THRUST_TBB_VERSION}" internal)
-    thrust_debug_target(Thrust::TBB "${THRUST_TBB_VERSION}" internal)
-    _thrust_setup_system(TBB)
-  endif()
-endfunction()
-
-# Use the provided omp_target for the OMP backend. If Thrust::OMP already
-# exists, this call has no effect.
-function(thrust_set_OMP_target omp_target)
-  if (NOT TARGET Thrust::OMP)
-    thrust_debug("Setting OMP target to ${omp_target}" internal)
-    # Workaround cmake issue #20670 https://gitlab.kitware.com/cmake/cmake/-/issues/20670
-    set(THRUST_OMP_VERSION ${OpenMP_CXX_VERSION} CACHE INTERNAL "OpenMP version used by Thrust")
-    _thrust_declare_interface_alias(Thrust::OMP _Thrust_OMP)
-    target_link_libraries(_Thrust_OMP INTERFACE Thrust::Thrust ${omp_target})
-    thrust_debug_target(${omp_target} "${THRUST_OMP_VERSION}" internal)
-    thrust_debug_target(Thrust::OMP "${THRUST_OMP_VERSION}" internal)
-    _thrust_setup_system(OMP)
-  endif()
-endfunction()
-
-function(_thrust_find_CPP required)
-  if (NOT TARGET Thrust::CPP)
-    thrust_debug("Generating CPP targets." internal)
-    _thrust_declare_interface_alias(Thrust::CPP _Thrust_CPP)
-    target_link_libraries(_Thrust_CPP INTERFACE Thrust::Thrust)
-    thrust_debug_target(Thrust::CPP "Thrust ${THRUST_VERSION}" internal)
-    _thrust_setup_system(CPP)
-  endif()
-endfunction()
-
-# This must be a macro instead of a function to ensure that backends passed to
-# find_package(Thrust COMPONENTS [...]) have their full configuration loaded
-# into the current scope. This provides at least some remedy for CMake issue
-# #20670 -- otherwise variables like CUB_VERSION, etc won't be in the caller's
-# scope.
-macro(_thrust_find_CUDA required)
-  if (NOT TARGET Thrust::CUDA)
-    thrust_debug("Searching for CUB ${required}" internal)
-    find_package(CUB CONFIG
-      ${_THRUST_QUIET_FLAG}
-      ${required}
-      NO_DEFAULT_PATH # Only check the explicit HINTS below:
-      HINTS
-        "${_THRUST_INCLUDE_DIR}/dependencies/cub" # Source layout
-        "${_THRUST_INCLUDE_DIR}" # Install layout
-    )
-
-    if (TARGET CUB::CUB)
-      thrust_set_CUB_target(CUB::CUB)
-    else()
-      thrust_debug("CUB not found!" internal)
-    endif()
-  endif()
-endmacro()
-
-# This must be a macro instead of a function to ensure that backends passed to
-# find_package(Thrust COMPONENTS [...]) have their full configuration loaded
-# into the current scope. This provides at least some remedy for CMake issue
-# #20670 -- otherwise variables like TBB_VERSION, etc won't be in the caller's
-# scope.
-macro(_thrust_find_TBB required)
-  if(NOT TARGET Thrust::TBB)
-    thrust_debug("Searching for TBB ${required}" internal)
-    # Swap in a temporary module path to make sure we use our FindTBB.cmake
-    set(_THRUST_STASH_MODULE_PATH "${CMAKE_MODULE_PATH}")
-    set(CMAKE_MODULE_PATH "${_THRUST_CMAKE_DIR}")
-
-    # Push policy CMP0074 to silence warnings about TBB_ROOT being set. This
-    # var is used unconventionally in this FindTBB.cmake module.
-    # Someday we'll have a suitable TBB cmake configuration and can avoid this.
-    cmake_policy(PUSH)
-    cmake_policy(SET CMP0074 OLD)
-    set(THRUST_TBB_ROOT "" CACHE PATH "Path to the root of the TBB installation.")
-    if (TBB_ROOT AND NOT THRUST_TBB_ROOT)
-      message(
-        "Warning: TBB_ROOT is set. "
-        "Thrust uses THRUST_TBB_ROOT to avoid issues with CMake Policy CMP0074. "
-        "Please set this variable instead when using Thrust with TBB."
-      )
-    endif()
-    set(TBB_ROOT "${THRUST_TBB_ROOT}")
-    set(_THRUST_STASH_TBB_ROOT "${TBB_ROOT}")
-
-    find_package(TBB
-      ${_THRUST_QUIET_FLAG}
-      ${required}
-    )
-
-    cmake_policy(POP)
-    set(TBB_ROOT "${_THRUST_STASH_TBB_ROOT}")
-    set(CMAKE_MODULE_PATH "${_THRUST_STASH_MODULE_PATH}")
-
-    if (TARGET TBB::tbb)
-      thrust_set_TBB_target(TBB::tbb)
-    else()
-      thrust_debug("TBB not found!" internal)
-    endif()
-  endif()
-endmacro()
-
-# Wrap the OpenMP flags for CUDA targets
-function(thrust_fixup_omp_target omp_target)
-  get_target_property(opts ${omp_target} INTERFACE_COMPILE_OPTIONS)
-  if (opts MATCHES "\\$<\\$<COMPILE_LANGUAGE:CXX>:([^>]*)>")
-    target_compile_options(${omp_target} INTERFACE
-      $<$<AND:$<COMPILE_LANGUAGE:CUDA>,$<CUDA_COMPILER_ID:NVIDIA>>:-Xcompiler=${CMAKE_MATCH_1}>
-    )
-  endif()
-endfunction()
-
-# This must be a macro instead of a function to ensure that backends passed to
-# find_package(Thrust COMPONENTS [...]) have their full configuration loaded
-# into the current scope. This provides at least some remedy for CMake issue
-# #20670 -- otherwise variables like OpenMP_CXX_VERSION, etc won't be in the caller's
-# scope.
-macro(_thrust_find_OMP required)
-  if (NOT TARGET Thrust::OMP)
-    thrust_debug("Searching for OMP ${required}" internal)
-    find_package(OpenMP
-      ${_THRUST_QUIET_FLAG}
-      ${_THRUST_REQUIRED_FLAG_OMP}
-      COMPONENTS CXX
-    )
-
-    if (TARGET OpenMP::OpenMP_CXX)
-      thrust_fixup_omp_target(OpenMP::OpenMP_CXX)
-      thrust_set_OMP_target(OpenMP::OpenMP_CXX)
-    else()
-      thrust_debug("OpenMP::OpenMP_CXX not found!" internal)
-    endif()
-  endif()
-endmacro()
-
-# This must be a macro instead of a function to ensure that backends passed to
-# find_package(Thrust COMPONENTS [...]) have their full configuration loaded
-# into the current scope. This provides at least some remedy for CMake issue
-# #20670 -- otherwise variables like CUB_VERSION, etc won't be in the caller's
-# scope.
-macro(_thrust_find_backend backend required)
-  # Unfortunately, _thrust_find_${backend}(req) is not valid CMake syntax. Hence
-  # why this function exists.
-  if ("${backend}" STREQUAL "CPP")
-    _thrust_find_CPP("${required}")
-  elseif ("${backend}" STREQUAL "CUDA")
-    _thrust_find_CUDA("${required}")
-  elseif ("${backend}" STREQUAL "TBB")
-    _thrust_find_TBB("${required}")
-  elseif ("${backend}" STREQUAL "OMP")
-    _thrust_find_OMP("${required}")
-  else()
-    message(FATAL_ERROR "_thrust_find_backend: Invalid system: ${backend}")
-  endif()
-endmacro()
-
-################################################################################
-# Initialization. Executed inside find_package(Thrust) call.
-#
-
-if (${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
-  set(_THRUST_QUIET ON CACHE INTERNAL "Quiet mode enabled for Thrust find_package calls.")
-  set(_THRUST_QUIET_FLAG "QUIET" CACHE INTERNAL "")
-else()
-  unset(_THRUST_QUIET CACHE)
-  unset(_THRUST_QUIET_FLAG CACHE)
-endif()
-
-set(_THRUST_CMAKE_DIR "${CMAKE_CURRENT_LIST_DIR}" CACHE INTERNAL "Location of thrust-config.cmake")
-
-# Internal target that actually holds the Thrust interface. Used by all other Thrust targets.
-if (NOT TARGET Thrust::Thrust)
-  _thrust_declare_interface_alias(Thrust::Thrust _Thrust_Thrust)
-  # Strip out the 'thrust/cmake/' from '[thrust_include_path]/thrust/cmake/':
-  get_filename_component(_THRUST_INCLUDE_DIR "../.." ABSOLUTE BASE_DIR "${_THRUST_CMAKE_DIR}")
-  set(_THRUST_INCLUDE_DIR "${_THRUST_INCLUDE_DIR}"
-    CACHE INTERNAL "Location of thrust headers."
-  )
-  target_include_directories(_Thrust_Thrust INTERFACE "${_THRUST_INCLUDE_DIR}")
-  thrust_debug_target(Thrust::Thrust "${THRUST_VERSION}" internal)
-endif()
-
-# Handle find_package COMPONENT requests:
-foreach(component ${${CMAKE_FIND_PACKAGE_NAME}_FIND_COMPONENTS})
-  if (NOT component IN_LIST THRUST_HOST_SYSTEM_OPTIONS AND
-      NOT component IN_LIST THRUST_DEVICE_SYSTEM_OPTIONS)
-    message(FATAL_ERROR "Invalid component requested: '${component}'")
-  endif()
-
-  unset(req)
-  if (${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED_${component})
-    set(req "REQUIRED")
-  endif()
-
-  thrust_debug("Preloading COMPONENT '${component}' ${req}" internal)
-  _thrust_find_backend(${component} "${req}")
-endforeach()
-
-thrust_update_system_found_flags()
spaces/CVPR/LIVE/thrust/thrust/mr/pool.h
DELETED
@@ -1,505 +0,0 @@
-/*
- *  Copyright 2018 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-/*! \file pool.h
- *  \brief A caching and pooling memory resource adaptor which uses a single upstream resource for memory allocation,
- *      and embeds bookkeeping information in allocated blocks.
- */
-
-#pragma once
-
-#include <thrust/detail/algorithm_wrapper.h>
-
-#include <thrust/host_vector.h>
-
-#include <thrust/mr/memory_resource.h>
-#include <thrust/mr/allocator.h>
-#include <thrust/mr/pool_options.h>
-
-#include <cassert>
-
-namespace thrust
-{
-namespace mr
-{
-
-/** \addtogroup memory_resources Memory Resources
- *  \ingroup memory_management_classes
- *  \{
- */
-
-/*! A memory resource adaptor allowing for pooling and caching allocations from \p Upstream, using memory allocated
- *      from it for both blocks then allocated to the user and for internal bookkeeping of the cached memory.
- *
- *  On a typical memory resource, calls to \p allocate and \p deallocate actually allocate and deallocate memory. Pooling
- *      memory resources only allocate and deallocate memory from an external resource (the upstream memory resource) when
- *      there's no suitable memory currently cached; otherwise, they use memory they have acquired beforehand, to make
- *      memory allocation faster and more efficient.
- *
- *  The non-disjoint version of the pool resource uses a single upstream memory resource. Every allocation is larger than
- *      strictly necessary to fulfill the end-user's request, because it needs to account for the memory overhead of tracking
- *      the memory blocks and chunks inside those same memory regions. Nevertheless, this version should be more memory-efficient
- *      than the \p disjoint_unsynchronized_pool_resource, because it doesn't need to allocate additional blocks of memory
- *      from a separate resource, which in turn would necessitate the bookkeeping overhead in the upstream resource.
- *
- *  This version requires that memory allocated from Upstream is accessible from device. It supports smart references,
- *      meaning that the non-managed CUDA resource, returning a device-tagged pointer, will work, but will be much less
- *      efficient than the disjoint version, which wouldn't need to touch device memory at all, and therefore wouldn't need
- *      to transfer it back and forth between the host and the device whenever an allocation or a deallocation happens.
- *
- *  \tparam Upstream the type of memory resources that will be used for allocating memory blocks
- */
-template<typename Upstream>
-class unsynchronized_pool_resource THRUST_FINAL
-    : public memory_resource<typename Upstream::pointer>,
-      private validator<Upstream>
-{
-public:
-    /*! Get the default options for a pool. These are meant to be a sensible set of values for many use cases,
-     *      and as such, may be tuned in the future. This function is exposed so that creating a set of options that are
-     *      just a slight departure from the defaults is easy.
-     */
-    static pool_options get_default_options()
-    {
-        pool_options ret;
-
-        ret.min_blocks_per_chunk = 16;
-        ret.min_bytes_per_chunk = 1024;
-        ret.max_blocks_per_chunk = static_cast<std::size_t>(1) << 20;
-        ret.max_bytes_per_chunk = static_cast<std::size_t>(1) << 30;
-
-        ret.smallest_block_size = THRUST_MR_DEFAULT_ALIGNMENT;
-        ret.largest_block_size = static_cast<std::size_t>(1) << 20;
-
-        ret.alignment = THRUST_MR_DEFAULT_ALIGNMENT;
-
-        ret.cache_oversized = true;
-
-        ret.cached_size_cutoff_factor = 16;
-        ret.cached_alignment_cutoff_factor = 16;
-
-        return ret;
-    }
-
-    /*! Constructor.
-     *
-     *  \param upstream the upstream memory resource for allocations
-     *  \param options pool options to use
-     */
-    unsynchronized_pool_resource(Upstream * upstream, pool_options options = get_default_options())
-        : m_upstream(upstream),
-        m_options(options),
-        m_smallest_block_log2(detail::log2_ri(m_options.smallest_block_size)),
-        m_pools(upstream),
-        m_allocated(),
-        m_oversized(),
-        m_cached_oversized()
-    {
-        assert(m_options.validate());
-
-        pool p = { block_descriptor_ptr(), 0 };
-        m_pools.resize(detail::log2_ri(m_options.largest_block_size) - m_smallest_block_log2 + 1, p);
-    }
-
-    // TODO: C++11: use delegating constructors
-
-    /*! Constructor. The upstream resource is obtained by calling \p get_global_resource<Upstream>.
-     *
-     *  \param options pool options to use
-     */
-    unsynchronized_pool_resource(pool_options options = get_default_options())
-        : m_upstream(get_global_resource<Upstream>()),
-        m_options(options),
-        m_smallest_block_log2(detail::log2_ri(m_options.smallest_block_size)),
-        m_pools(get_global_resource<Upstream>()),
-        m_allocated(),
-        m_oversized(),
-        m_cached_oversized()
-    {
-        assert(m_options.validate());
-
-        pool p = { block_descriptor_ptr(), 0 };
-        m_pools.resize(detail::log2_ri(m_options.largest_block_size) - m_smallest_block_log2 + 1, p);
-    }
-
-    /*! Destructor. Releases all held memory to upstream.
-     */
-    ~unsynchronized_pool_resource()
-    {
-        release();
-    }
-
-private:
-    typedef typename Upstream::pointer void_ptr;
-    typedef typename thrust::detail::pointer_traits<void_ptr>::template rebind<char>::other char_ptr;
-
-    struct block_descriptor;
-    struct chunk_descriptor;
-    struct oversized_block_descriptor;
-
-    typedef typename thrust::detail::pointer_traits<void_ptr>::template rebind<block_descriptor>::other block_descriptor_ptr;
-    typedef typename thrust::detail::pointer_traits<void_ptr>::template rebind<chunk_descriptor>::other chunk_descriptor_ptr;
-    typedef typename thrust::detail::pointer_traits<void_ptr>::template rebind<oversized_block_descriptor>::other oversized_block_descriptor_ptr;
-
-    struct block_descriptor
-    {
-        block_descriptor_ptr next;
-    };
-
-    struct chunk_descriptor
-    {
-        std::size_t size;
-        chunk_descriptor_ptr next;
-    };
-
-    // this was originally a forward list, but I made it a doubly linked list
-    // because that way deallocation when not caching is faster and doesn't require
-    // traversal of a linked list (it's still a forward list for the cached list,
-    // because allocation from that list already traverses)
-    //
-    // TODO: investigate whether it's better to have this be a doubly-linked list
-    // with fast do_deallocate when !m_options.cache_oversized, or to have this be
-    // a forward list and require traversal in do_deallocate
-    //
-    // I assume that it is better this way, but the additional pointer could
-    // potentially hurt? these are supposed to be oversized and/or overaligned,
-    // so they are kinda memory intensive already
-    struct oversized_block_descriptor
-    {
-        std::size_t size;
-        std::size_t alignment;
-        oversized_block_descriptor_ptr prev;
-        oversized_block_descriptor_ptr next;
-        oversized_block_descriptor_ptr next_cached;
-    };
-
-    struct pool
-    {
-        block_descriptor_ptr free_list;
-        std::size_t previous_allocated_count;
-    };
-
-    typedef thrust::host_vector<
-        pool,
-        allocator<pool, Upstream>
-    > pool_vector;
-
-    Upstream * m_upstream;
-
-    pool_options m_options;
-    std::size_t m_smallest_block_log2;
-
-    pool_vector m_pools;
-    chunk_descriptor_ptr m_allocated;
-    oversized_block_descriptor_ptr m_oversized;
-    oversized_block_descriptor_ptr m_cached_oversized;
-
-public:
-    /*! Releases all held memory to upstream.
-     */
-    void release()
-    {
-        // reset the buckets
-        for (std::size_t i = 0; i < m_pools.size(); ++i)
-        {
-            thrust::raw_reference_cast(m_pools[i]).free_list = block_descriptor_ptr();
-            thrust::raw_reference_cast(m_pools[i]).previous_allocated_count = 0;
-        }
-
-        // deallocate memory allocated for the buckets
-        while (detail::pointer_traits<chunk_descriptor_ptr>::get(m_allocated))
-        {
-            chunk_descriptor_ptr alloc = m_allocated;
-            m_allocated = thrust::raw_reference_cast(*m_allocated).next;
-
-            void_ptr p = static_cast<void_ptr>(
-                static_cast<char_ptr>(
-                    static_cast<void_ptr>(alloc)
-                ) - thrust::raw_reference_cast(*alloc).size
-            );
-            m_upstream->do_deallocate(p, thrust::raw_reference_cast(*alloc).size + sizeof(chunk_descriptor), m_options.alignment);
-        }
-
-        // deallocate cached oversized/overaligned memory
-        while (detail::pointer_traits<oversized_block_descriptor_ptr>::get(m_oversized))
-        {
-            oversized_block_descriptor_ptr alloc = m_oversized;
-            m_oversized = thrust::raw_reference_cast(*m_oversized).next;
-
-            void_ptr p = static_cast<void_ptr>(
-                static_cast<char_ptr>(
-                    static_cast<void_ptr>(alloc)
-                ) - thrust::raw_reference_cast(*alloc).size
-            );
-            m_upstream->do_deallocate(p, thrust::raw_reference_cast(*alloc).size + sizeof(oversized_block_descriptor), thrust::raw_reference_cast(*alloc).alignment);
-        }
-
-        m_cached_oversized = oversized_block_descriptor_ptr();
-    }
-
-    THRUST_NODISCARD virtual void_ptr do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
-    {
-        bytes = (std::max)(bytes, m_options.smallest_block_size);
-        assert(detail::is_power_of_2(alignment));
-
-        // an oversized and/or overaligned allocation requested; needs to be allocated separately
-        if (bytes > m_options.largest_block_size || alignment > m_options.alignment)
-        {
-            if (m_options.cache_oversized)
-            {
-                oversized_block_descriptor_ptr ptr = m_cached_oversized;
-                oversized_block_descriptor_ptr * previous = &m_cached_oversized;
-                while (detail::pointer_traits<oversized_block_descriptor_ptr>::get(ptr))
-                {
-                    oversized_block_descriptor desc = *ptr;
-                    bool is_good = desc.size >= bytes && desc.alignment >= alignment;
-
-                    // if the size is bigger than the requested size by a factor
-                    // bigger than or equal to the specified cutoff for size,
-                    // allocate a new block
-                    if (is_good)
-                    {
-                        std::size_t size_factor = desc.size / bytes;
-                        if (size_factor >= m_options.cached_size_cutoff_factor)
-                        {
-                            is_good = false;
-                        }
-                    }
-
-                    // if the alignment is bigger than the requested one by a factor
-                    // bigger than or equal to the specified cutoff for alignment,
-                    // allocate a new block
-                    if (is_good)
-                    {
-                        std::size_t alignment_factor = desc.alignment / alignment;
-                        if (alignment_factor >= m_options.cached_alignment_cutoff_factor)
-                        {
-                            is_good = false;
-                        }
-                    }
-
-                    if (is_good)
-                    {
-                        if (previous != &m_cached_oversized)
-                        {
-                            oversized_block_descriptor previous_desc = **previous;
-                            previous_desc.next_cached = desc.next_cached;
-                            **previous = previous_desc;
-                        }
-                        else
-                        {
-                            m_cached_oversized = desc.next_cached;
-                        }
-
-                        desc.next_cached = oversized_block_descriptor_ptr();
-                        *ptr = desc;
-
-                        return static_cast<void_ptr>(
-                            static_cast<char_ptr>(
-                                static_cast<void_ptr>(ptr)
-                            ) - desc.size
-                        );
-                    }
-
-                    previous = &thrust::raw_reference_cast(*ptr).next_cached;
-                    ptr = *previous;
-                }
-            }
-
-            // no fitting cached block found; allocate a new one that's just up to the specs
-            void_ptr allocated = m_upstream->do_allocate(bytes + sizeof(oversized_block_descriptor), alignment);
-            oversized_block_descriptor_ptr block = static_cast<oversized_block_descriptor_ptr>(
-                static_cast<void_ptr>(
-                    static_cast<char_ptr>(allocated) + bytes
-                )
-            );
-
-            oversized_block_descriptor desc;
-            desc.size = bytes;
-            desc.alignment = alignment;
-            desc.prev = oversized_block_descriptor_ptr();
-            desc.next = m_oversized;
-            desc.next_cached = oversized_block_descriptor_ptr();
-            *block = desc;
-            m_oversized = block;
-
-            if (detail::pointer_traits<oversized_block_descriptor_ptr>::get(desc.next))
-            {
-                oversized_block_descriptor next = *desc.next;
-                next.prev = block;
-                *desc.next = next;
-            }
-
-            return allocated;
-        }
-
-        // the request is NOT for oversized and/or overaligned memory
-        // allocate a block from an appropriate bucket
-        std::size_t bytes_log2 = thrust::detail::log2_ri(bytes);
-        std::size_t bucket_idx = bytes_log2 - m_smallest_block_log2;
-        pool & bucket = thrust::raw_reference_cast(m_pools[bucket_idx]);
-
-        bytes = static_cast<std::size_t>(1) << bytes_log2;
-
-        // if the free list of the bucket has no elements, allocate a new chunk
-        // and split it into blocks pushed to the free list
-        if (!detail::pointer_traits<block_descriptor_ptr>::get(bucket.free_list))
-        {
-            std::size_t n = bucket.previous_allocated_count;
-            if (n == 0)
-            {
-                n = m_options.min_blocks_per_chunk;
-                if (n < (m_options.min_bytes_per_chunk >> bytes_log2))
-                {
-                    n = m_options.min_bytes_per_chunk >> bytes_log2;
-                }
-            }
-            else
-            {
-                n = n * 3 / 2;
-                if (n > (m_options.max_bytes_per_chunk >> bytes_log2))
-                {
-                    n = m_options.max_bytes_per_chunk >> bytes_log2;
-                }
-                if (n > m_options.max_blocks_per_chunk)
-                {
-                    n = m_options.max_blocks_per_chunk;
-                }
-            }
-
-            std::size_t descriptor_size = (std::max)(sizeof(block_descriptor), m_options.alignment);
-            std::size_t block_size = bytes + descriptor_size;
-            block_size += m_options.alignment - block_size % m_options.alignment;
-            std::size_t chunk_size = block_size * n;
-
-            void_ptr allocated = m_upstream->do_allocate(chunk_size + sizeof(chunk_descriptor), m_options.alignment);
-            chunk_descriptor_ptr chunk = static_cast<chunk_descriptor_ptr>(
-                static_cast<void_ptr>(
-                    static_cast<char_ptr>(allocated) + chunk_size
-                )
-            );
-
-            chunk_descriptor desc;
-            desc.size = chunk_size;
-            desc.next = m_allocated;
-            *chunk = desc;
-            m_allocated = chunk;
-
-            for (std::size_t i = 0; i < n; ++i)
-            {
-                block_descriptor_ptr block = static_cast<block_descriptor_ptr>(
-                    static_cast<void_ptr>(
-                        static_cast<char_ptr>(allocated) + block_size * i + bytes
-                    )
-                );
-
-                block_descriptor desc;
-                desc.next = bucket.free_list;
-                *block = desc;
-                bucket.free_list = block;
-            }
-        }
-
-        // allocate a block from the front of the bucket's free list
-        block_descriptor_ptr block = bucket.free_list;
-        bucket.free_list = thrust::raw_reference_cast(*block).next;
-        return static_cast<void_ptr>(
-            static_cast<char_ptr>(
-                static_cast<void_ptr>(block)
-            ) - bytes
-        );
-    }
-
-    virtual void do_deallocate(void_ptr p, std::size_t n, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
-    {
-        n = (std::max)(n, m_options.smallest_block_size);
-        assert(detail::is_power_of_2(alignment));
-
-        // verify that the pointer is at least as aligned as claimed
-        assert(reinterpret_cast<detail::intmax_t>(detail::pointer_traits<void_ptr>::get(p)) % alignment == 0);
-
-        // the deallocated block is oversized and/or overaligned
-        if (n > m_options.largest_block_size || alignment > m_options.alignment)
-        {
-            oversized_block_descriptor_ptr block = static_cast<oversized_block_descriptor_ptr>(
-                static_cast<void_ptr>(
-                    static_cast<char_ptr>(p) + n
-                )
-            );
-
-            oversized_block_descriptor desc = *block;
-
-            if (m_options.cache_oversized)
-            {
-                desc.next_cached = m_cached_oversized;
-                *block = desc;
-                m_cached_oversized = block;
-
-                return;
-            }
-
-            if (!detail::pointer_traits<oversized_block_descriptor_ptr>::get(desc.prev))
-            {
-                assert(m_oversized == block);
-                m_oversized = desc.next;
-            }
-            else
-            {
-                oversized_block_descriptor prev = *desc.prev;
-                assert(prev.next == block);
-                prev.next = desc.next;
-                *desc.prev = prev;
-            }
-
-            if (detail::pointer_traits<oversized_block_descriptor_ptr>::get(desc.next))
-            {
-                oversized_block_descriptor next = *desc.next;
-                assert(next.prev == block);
-                next.prev = desc.prev;
-                *desc.next = next;
-            }
-
-            m_upstream->do_deallocate(p, desc.size + sizeof(oversized_block_descriptor), desc.alignment);
-
-            return;
-        }
-
-        // push the block to the front of the appropriate bucket's free list
-        std::size_t n_log2 = thrust::detail::log2_ri(n);
-        std::size_t bucket_idx = n_log2 - m_smallest_block_log2;
-        pool & bucket = thrust::raw_reference_cast(m_pools[bucket_idx]);
-
-        n = static_cast<std::size_t>(1) << n_log2;
-
-        block_descriptor_ptr block = static_cast<block_descriptor_ptr>(
-            static_cast<void_ptr>(
-                static_cast<char_ptr>(p) + n
-            )
-        );
-
-        block_descriptor desc;
-        desc.next = bucket.free_list;
-        *block = desc;
-        bucket.free_list = block;
-    }
-};
-
-/*! \}
- */
-
-} // end mr
-} // end thrust
-
spaces/CVPR/WALT/mmdet/datasets/__init__.py
DELETED
@@ -1,24 +0,0 @@
-from .builder import DATASETS, PIPELINES, build_dataloader, build_dataset
-from .cityscapes import CityscapesDataset
-from .coco import CocoDataset
-from .custom import CustomDataset
-from .dataset_wrappers import (ClassBalancedDataset, ConcatDataset,
-                               RepeatDataset)
-from .deepfashion import DeepFashionDataset
-from .lvis import LVISDataset, LVISV1Dataset, LVISV05Dataset
-from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler
-from .utils import (NumClassCheckHook, get_loading_pipeline,
-                    replace_ImageToTensor)
-from .voc import VOCDataset
-from .wider_face import WIDERFaceDataset
-from .xml_style import XMLDataset
-
-__all__ = [
-    'CustomDataset', 'XMLDataset', 'CocoDataset', 'DeepFashionDataset',
-    'VOCDataset', 'CityscapesDataset', 'LVISDataset', 'LVISV05Dataset',
-    'LVISV1Dataset', 'GroupSampler', 'DistributedGroupSampler',
-    'DistributedSampler', 'build_dataloader', 'ConcatDataset', 'RepeatDataset',
-    'ClassBalancedDataset', 'WIDERFaceDataset', 'DATASETS', 'PIPELINES',
-    'build_dataset', 'replace_ImageToTensor', 'get_loading_pipeline',
-    'NumClassCheckHook'
-]
spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/__init__.py
DELETED
@@ -1,18 +0,0 @@
-from .coarse_mask_head import CoarseMaskHead
-from .fcn_mask_head import FCNMaskHead
-from .fcn_occmask_head import FCNOccMaskHead
-from .feature_relay_head import FeatureRelayHead
-from .fused_semantic_head import FusedSemanticHead
-from .global_context_head import GlobalContextHead
-from .grid_head import GridHead
-from .htc_mask_head import HTCMaskHead
-from .mask_point_head import MaskPointHead
-from .maskiou_head import MaskIoUHead
-from .scnet_mask_head import SCNetMaskHead
-from .scnet_semantic_head import SCNetSemanticHead
-
-__all__ = [
-    'FCNMaskHead', 'FCNOccMaskHead', 'HTCMaskHead', 'FusedSemanticHead', 'GridHead',
-    'MaskIoUHead', 'CoarseMaskHead', 'MaskPointHead', 'SCNetMaskHead',
-    'SCNetSemanticHead', 'GlobalContextHead', 'FeatureRelayHead'
-]
spaces/CVPR/drawings-to-human/static/_app/immutable/chunks/index-bcf2726a.js
DELETED
@@ -1 +0,0 @@
function N(){}function H(t,n){for(const e in n)t[e]=n[e];return t}function B(t){return t()}function M(){return Object.create(null)}function p(t){t.forEach(B)}function I(t){return typeof t=="function"}function lt(t,n){return t!=t?n==n:t!==n||t&&typeof t=="object"||typeof t=="function"}let g;function ot(t,n){return g||(g=document.createElement("a")),g.href=n,t===g.href}function W(t){return Object.keys(t).length===0}function G(t,...n){if(t==null)return N;const e=t.subscribe(...n);return e.unsubscribe?()=>e.unsubscribe():e}function st(t,n,e){t.$$.on_destroy.push(G(n,e))}function at(t,n,e,i){if(t){const c=L(t,n,e,i);return t[0](c)}}function L(t,n,e,i){return t[1]&&i?H(e.ctx.slice(),t[1](i(n))):e.ctx}function ft(t,n,e,i){if(t[2]&&i){const c=t[2](i(e));if(n.dirty===void 0)return c;if(typeof c=="object"){const s=[],u=Math.max(n.dirty.length,c.length);for(let l=0;l<u;l+=1)s[l]=n.dirty[l]|c[l];return s}return n.dirty|c}return n.dirty}function _t(t,n,e,i,c,s){if(c){const u=L(n,e,i,s);t.p(u,c)}}function dt(t){if(t.ctx.length>32){const n=[],e=t.ctx.length/32;for(let i=0;i<e;i++)n[i]=-1;return n}return-1}function ht(t,n,e){return t.set(e),n}let w=!1;function J(){w=!0}function K(){w=!1}function Q(t,n,e,i){for(;t<n;){const c=t+(n-t>>1);e(c)<=i?t=c+1:n=c}return t}function R(t){if(t.hydrate_init)return;t.hydrate_init=!0;let n=t.childNodes;if(t.nodeName==="HEAD"){const r=[];for(let o=0;o<n.length;o++){const f=n[o];f.claim_order!==void 0&&r.push(f)}n=r}const e=new Int32Array(n.length+1),i=new Int32Array(n.length);e[0]=-1;let c=0;for(let r=0;r<n.length;r++){const o=n[r].claim_order,f=(c>0&&n[e[c]].claim_order<=o?c+1:Q(1,c,y=>n[e[y]].claim_order,o))-1;i[r]=e[f]+1;const a=f+1;e[a]=r,c=Math.max(a,c)}const s=[],u=[];let l=n.length-1;for(let r=e[c]+1;r!=0;r=i[r-1]){for(s.push(n[r-1]);l>=r;l--)u.push(n[l]);l--}for(;l>=0;l--)u.push(n[l]);s.reverse(),u.sort((r,o)=>r.claim_order-o.claim_order);for(let r=0,o=0;r<u.length;r++){for(;o<s.length&&u[r].claim_order>=s[o].claim_order;)o++;const f=o<s.length?s[o]:null;t.insertBefore(u[r],f)}}function U(t,n){if(w){for(R(t),(t.actual_end_child===void 0||t.actual_end_child!==null&&t.actual_end_child.parentElement!==t)&&(t.actual_end_child=t.firstChild);t.actual_end_child!==null&&t.actual_end_child.claim_order===void 0;)t.actual_end_child=t.actual_end_child.nextSibling;n!==t.actual_end_child?(n.claim_order!==void 0||n.parentNode!==t)&&t.insertBefore(n,t.actual_end_child):t.actual_end_child=n.nextSibling}else(n.parentNode!==t||n.nextSibling!==null)&&t.appendChild(n)}function mt(t,n,e){w&&!e?U(t,n):(n.parentNode!==t||n.nextSibling!=e)&&t.insertBefore(n,e||null)}function V(t){t.parentNode.removeChild(t)}function pt(t,n){for(let e=0;e<t.length;e+=1)t[e]&&t[e].d(n)}function X(t){return document.createElement(t)}function Y(t){return document.createElementNS("http://www.w3.org/2000/svg",t)}function j(t){return document.createTextNode(t)}function yt(){return j(" ")}function gt(){return j("")}function bt(t,n,e,i){return t.addEventListener(n,e,i),()=>t.removeEventListener(n,e,i)}function xt(t){return function(n){return n.preventDefault(),t.call(this,n)}}function $t(t,n,e){e==null?t.removeAttribute(n):t.getAttribute(n)!==e&&t.setAttribute(n,e)}function wt(t){return t===""?null:+t}function Z(t){return Array.from(t.childNodes)}function tt(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function O(t,n,e,i,c=!1){tt(t);const s=(()=>{for(let u=t.claim_info.last_index;u<t.length;u++){const l=t[u];if(n(l)){const r=e(l);return r===void 
0?t.splice(u,1):t[u]=r,c||(t.claim_info.last_index=u),l}}for(let u=t.claim_info.last_index-1;u>=0;u--){const l=t[u];if(n(l)){const r=e(l);return r===void 0?t.splice(u,1):t[u]=r,c?r===void 0&&t.claim_info.last_index--:t.claim_info.last_index=u,l}}return i()})();return s.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,s}function P(t,n,e,i){return O(t,c=>c.nodeName===n,c=>{const s=[];for(let u=0;u<c.attributes.length;u++){const l=c.attributes[u];e[l.name]||s.push(l.name)}s.forEach(u=>c.removeAttribute(u))},()=>i(n))}function vt(t,n,e){return P(t,n,e,X)}function Et(t,n,e){return P(t,n,e,Y)}function nt(t,n){return O(t,e=>e.nodeType===3,e=>{const i=""+n;if(e.data.startsWith(i)){if(e.data.length!==i.length)return e.splitText(i.length)}else e.data=i},()=>j(n),!0)}function kt(t){return nt(t," ")}function Nt(t,n){n=""+n,t.wholeText!==n&&(t.data=n)}function jt(t,n){t.value=n==null?"":n}function St(t,n,e,i){e===null?t.style.removeProperty(n):t.style.setProperty(n,e,i?"important":"")}let m;function h(t){m=t}function S(){if(!m)throw new Error("Function called outside component initialization");return m}function At(t){S().$$.on_mount.push(t)}function Ct(t){S().$$.after_update.push(t)}function Mt(t,n){return S().$$.context.set(t,n),n}const d=[],T=[],x=[],q=[],D=Promise.resolve();let E=!1;function z(){E||(E=!0,D.then(F))}function Tt(){return z(),D}function k(t){x.push(t)}const v=new Set;let b=0;function F(){const t=m;do{for(;b<d.length;){const n=d[b];b++,h(n),et(n.$$)}for(h(null),d.length=0,b=0;T.length;)T.pop()();for(let n=0;n<x.length;n+=1){const e=x[n];v.has(e)||(v.add(e),e())}x.length=0}while(d.length);for(;q.length;)q.pop()();E=!1,v.clear(),h(t)}function et(t){if(t.fragment!==null){t.update(),p(t.before_update);const n=t.dirty;t.dirty=[-1],t.fragment&&t.fragment.p(t.ctx,n),t.after_update.forEach(k)}}const $=new Set;let _;function qt(){_={r:0,c:[],p:_}}function Bt(){_.r||p(_.c),_=_.p}function it(t,n){t&&t.i&&($.delete(t),t.i(n))}function Lt(t,n,e,i){if(t&&t.o){if($.has(t))return;$.add(t),_.c.push(()=>{$.delete(t),i&&(e&&t.d(1),i())}),t.o(n)}}function Ot(t,n){const e={},i={},c={$$scope:1};let s=t.length;for(;s--;){const u=t[s],l=n[s];if(l){for(const r in u)r in l||(i[r]=1);for(const r in l)c[r]||(e[r]=l[r],c[r]=1);t[s]=l}else for(const r in u)c[r]=1}for(const u in i)u in e||(e[u]=void 0);return e}function Pt(t){return typeof t=="object"&&t!==null?t:{}}function Dt(t){t&&t.c()}function zt(t,n){t&&t.l(n)}function rt(t,n,e,i){const{fragment:c,on_mount:s,on_destroy:u,after_update:l}=t.$$;c&&c.m(n,e),i||k(()=>{const r=s.map(B).filter(I);u?u.push(...r):p(r),t.$$.on_mount=[]}),l.forEach(k)}function ct(t,n){const e=t.$$;e.fragment!==null&&(p(e.on_destroy),e.fragment&&e.fragment.d(n),e.on_destroy=e.fragment=null,e.ctx=[])}function ut(t,n){t.$$.dirty[0]===-1&&(d.push(t),z(),t.$$.dirty.fill(0)),t.$$.dirty[n/31|0]|=1<<n%31}function Ft(t,n,e,i,c,s,u,l=[-1]){const r=m;h(t);const o=t.$$={fragment:null,ctx:null,props:s,update:N,not_equal:c,bound:M(),on_mount:[],on_destroy:[],on_disconnect:[],before_update:[],after_update:[],context:new Map(n.context||(r?r.$$.context:[])),callbacks:M(),dirty:l,skip_bound:!1,root:n.target||r.$$.root};u&&u(o.root);let f=!1;if(o.ctx=e?e(t,n.props||{},(a,y,...A)=>{const C=A.length?A[0]:y;return o.ctx&&c(o.ctx[a],o.ctx[a]=C)&&(!o.skip_bound&&o.bound[a]&&o.bound[a](C),f&&ut(t,a)),y}):[],o.update(),f=!0,p(o.before_update),o.fragment=i?i(o.ctx):!1,n.target){if(n.hydrate){J();const a=Z(n.target);o.fragment&&o.fragment.l(a),a.forEach(V)}else 
o.fragment&&o.fragment.c();n.intro&&it(t.$$.fragment),rt(t,n.target,n.anchor,n.customElement),K(),F()}h(r)}class Ht{$destroy(){ct(this,1),this.$destroy=N}$on(n,e){const i=this.$$.callbacks[n]||(this.$$.callbacks[n]=[]);return i.push(e),()=>{const c=i.indexOf(e);c!==-1&&i.splice(c,1)}}$set(n){this.$$set&&!W(n)&&(this.$$.skip_bound=!0,this.$$set(n),this.$$.skip_bound=!1)}}export{Pt as A,ct as B,H as C,Tt as D,N as E,at as F,_t as G,dt as H,ft as I,U as J,ot as K,bt as L,pt as M,st as N,ht as O,Y as P,Et as Q,jt as R,Ht as S,xt as T,p as U,wt as V,T as W,Z as a,$t as b,vt as c,V as d,X as e,St as f,mt as g,nt as h,Ft as i,Nt as j,yt as k,gt as l,kt as m,qt as n,Lt as o,Bt as p,it as q,Mt as r,lt as s,j as t,Ct as u,At as v,Dt as w,zt as x,rt as y,Ot as z};
|
|
|
spaces/CVPR/monoscene_lite/helpers.py
DELETED
@@ -1,336 +0,0 @@
-import numpy as np
-import torch
-import fusion
-import pandas as pd
-import plotly.express as px
-import plotly.graph_objects as go
-
-def read_calib(calib_path):
-    """
-    Modify from https://github.com/utiasSTARS/pykitti/blob/d3e1bb81676e831886726cc5ed79ce1f049aef2c/pykitti/utils.py#L68
-    :param calib_path: Path to a calibration text file.
-    :return: dict with calibration matrices.
-    """
-    calib_all = {}
-    with open(calib_path, "r") as f:
-        for line in f.readlines():
-            if line == "\n":
-                break
-            key, value = line.split(":", 1)
-            calib_all[key] = np.array([float(x) for x in value.split()])
-
-    # reshape matrices
-    calib_out = {}
-    # 3x4 projection matrix for left camera
-    calib_out["P2"] = calib_all["P2"].reshape(3, 4)
-    calib_out["Tr"] = np.identity(4)  # 4x4 matrix
-    calib_out["Tr"][:3, :4] = calib_all["Tr"].reshape(3, 4)
-    return calib_out
-
-
-def vox2pix(cam_E, cam_k,
-            vox_origin, voxel_size,
-            img_W, img_H,
-            scene_size):
-    """
-    compute the 2D projection of voxels centroids
-
-    Parameters:
-    ----------
-    cam_E: 4x4
-       =camera pose in case of NYUv2 dataset
-       =Transformation from camera to lidar coordinate in case of SemKITTI
-    cam_k: 3x3
-        camera intrinsics
-    vox_origin: (3,)
-        world(NYU)/lidar(SemKITTI) cooridnates of the voxel at index (0, 0, 0)
-    img_W: int
-        image width
-    img_H: int
-        image height
-    scene_size: (3,)
-        scene size in meter: (51.2, 51.2, 6.4) for SemKITTI and (4.8, 4.8, 2.88) for NYUv2
-
-    Returns
-    -------
-    projected_pix: (N, 2)
-        Projected 2D positions of voxels
-    fov_mask: (N,)
-        Voxels mask indice voxels inside image's FOV
-    pix_z: (N,)
-        Voxels'distance to the sensor in meter
-    """
-    # Compute the x, y, z bounding of the scene in meter
-    vol_bnds = np.zeros((3,2))
-    vol_bnds[:,0] = vox_origin
-    vol_bnds[:,1] = vox_origin + np.array(scene_size)
-
-    # Compute the voxels centroids in lidar cooridnates
-    vol_dim = np.ceil((vol_bnds[:,1]- vol_bnds[:,0])/ voxel_size).copy(order='C').astype(int)
-    xv, yv, zv = np.meshgrid(
-        range(vol_dim[0]),
-        range(vol_dim[1]),
-        range(vol_dim[2]),
-        indexing='ij'
-    )
-    vox_coords = np.concatenate([
-        xv.reshape(1,-1),
-        yv.reshape(1,-1),
-        zv.reshape(1,-1)
-    ], axis=0).astype(int).T
-
-    # Project voxels'centroid from lidar coordinates to camera coordinates
-    cam_pts = fusion.TSDFVolume.vox2world(vox_origin, vox_coords, voxel_size)
-    cam_pts = fusion.rigid_transform(cam_pts, cam_E)
-
-    # Project camera coordinates to pixel positions
-    projected_pix = fusion.TSDFVolume.cam2pix(cam_pts, cam_k)
-    pix_x, pix_y = projected_pix[:, 0], projected_pix[:, 1]
-
-    # Eliminate pixels outside view frustum
-    pix_z = cam_pts[:, 2]
-    fov_mask = np.logical_and(pix_x >= 0,
-                np.logical_and(pix_x < img_W,
-                np.logical_and(pix_y >= 0,
-                np.logical_and(pix_y < img_H,
-                pix_z > 0))))
-
-
-    return torch.from_numpy(projected_pix), torch.from_numpy(fov_mask), torch.from_numpy(pix_z)
-
-
-
-def get_grid_coords(dims, resolution):
-    """
-    :param dims: the dimensions of the grid [x, y, z] (i.e. [256, 256, 32])
-    :return coords_grid: is the center coords of voxels in the grid
-    """
-
-    g_xx = np.arange(0, dims[0] + 1)
-    g_yy = np.arange(0, dims[1] + 1)
-    sensor_pose = 10
-    g_zz = np.arange(0, dims[2] + 1)
-
-    # Obtaining the grid with coords...
-    xx, yy, zz = np.meshgrid(g_xx[:-1], g_yy[:-1], g_zz[:-1])
-    coords_grid = np.array([xx.flatten(), yy.flatten(), zz.flatten()]).T
-    coords_grid = coords_grid.astype(np.float)
-
-    coords_grid = (coords_grid * resolution) + resolution / 2
-
-    temp = np.copy(coords_grid)
-    temp[:, 0] = coords_grid[:, 1]
-    temp[:, 1] = coords_grid[:, 0]
-    coords_grid = np.copy(temp)
-
-    return coords_grid
-
-def get_projections(img_W, img_H):
-    scale_3ds = [2, 4]
-    data = {}
-    for scale_3d in scale_3ds:
-        scene_size = (51.2, 51.2, 6.4)
-        vox_origin = np.array([0, -25.6, -2])
-        voxel_size = 0.2
-
-        calib = read_calib("calib.txt")
-        cam_k = calib["P2"][:3, :3]
-        T_velo_2_cam = calib["Tr"]
-
-        # compute the 3D-2D mapping
-        projected_pix, fov_mask, pix_z = vox2pix(
-            T_velo_2_cam,
-            cam_k,
-            vox_origin,
-            voxel_size * scale_3d,
-            img_W,
-            img_H,
-            scene_size,
-        )
-
-        data["projected_pix_{}".format(scale_3d)] = projected_pix
-        data["pix_z_{}".format(scale_3d)] = pix_z
-        data["fov_mask_{}".format(scale_3d)] = fov_mask
-    return data
-
-
-def majority_pooling(grid, k_size=2):
-    result = np.zeros(
-        (grid.shape[0] // k_size, grid.shape[1] // k_size, grid.shape[2] // k_size)
-    )
-    for xx in range(0, int(np.floor(grid.shape[0] / k_size))):
-        for yy in range(0, int(np.floor(grid.shape[1] / k_size))):
-            for zz in range(0, int(np.floor(grid.shape[2] / k_size))):
-
-                sub_m = grid[
-                    (xx * k_size) : (xx * k_size) + k_size,
-                    (yy * k_size) : (yy * k_size) + k_size,
-                    (zz * k_size) : (zz * k_size) + k_size,
-                ]
-                unique, counts = np.unique(sub_m, return_counts=True)
-                if True in ((unique != 0) & (unique != 255)):
-                    # Remove counts with 0 and 255
-                    counts = counts[((unique != 0) & (unique != 255))]
-                    unique = unique[((unique != 0) & (unique != 255))]
-                else:
-                    if True in (unique == 0):
-                        counts = counts[(unique != 255)]
-                        unique = unique[(unique != 255)]
-                value = unique[np.argmax(counts)]
-                result[xx, yy, zz] = value
-    return result
-
-
-def draw(
-    voxels,
-    # T_velo_2_cam,
-    # vox_origin,
-    fov_mask,
-    # img_size,
-    # f,
-    voxel_size=0.4,
-    # d=7, # 7m - determine the size of the mesh representing the camera
-):
-
-    fov_mask = fov_mask.reshape(-1)
-    # Compute the voxels coordinates
-    grid_coords = get_grid_coords(
-        [voxels.shape[0], voxels.shape[1], voxels.shape[2]], voxel_size
-    )
-
-
-    # Attach the predicted class to every voxel
-    grid_coords = np.vstack([grid_coords.T, voxels.reshape(-1)]).T
-
-    # Get the voxels inside FOV
-    fov_grid_coords = grid_coords[fov_mask, :]
-
-    # Get the voxels outside FOV
-    outfov_grid_coords = grid_coords[~fov_mask, :]
-
-    # Remove empty and unknown voxels
-    fov_voxels = fov_grid_coords[
-        (fov_grid_coords[:, 3] > 0) & (fov_grid_coords[:, 3] < 255), :
-    ]
-    # print(np.unique(fov_voxels[:, 3], return_counts=True))
-    outfov_voxels = outfov_grid_coords[
-        (outfov_grid_coords[:, 3] > 0) & (outfov_grid_coords[:, 3] < 255), :
-    ]
-
-    # figure = mlab.figure(size=(1400, 1400), bgcolor=(1, 1, 1))
-    colors = np.array(
-        [
-            [0,0,0],
-            [100, 150, 245],
-            [100, 230, 245],
-            [30, 60, 150],
-            [80, 30, 180],
-            [100, 80, 250],
-            [255, 30, 30],
-            [255, 40, 200],
-            [150, 30, 90],
-            [255, 0, 255],
-            [255, 150, 255],
-            [75, 0, 75],
-            [175, 0, 75],
-            [255, 200, 0],
-            [255, 120, 50],
-            [0, 175, 0],
-            [135, 60, 0],
-            [150, 240, 80],
-            [255, 240, 150],
-            [255, 0, 0],
-        ]
-    ).astype(np.uint8)
-
-    pts_colors = [f'rgb({colors[int(i)][0]}, {colors[int(i)][1]}, {colors[int(i)][2]})' for i in fov_voxels[:, 3]]
-    out_fov_colors = [f'rgb({colors[int(i)][0]//3*2}, {colors[int(i)][1]//3*2}, {colors[int(i)][2]//3*2})' for i in outfov_voxels[:, 3]]
-    pts_colors = pts_colors + out_fov_colors
-
-    fov_voxels = np.concatenate([fov_voxels, outfov_voxels], axis=0)
-    x = fov_voxels[:, 0].flatten()
-    y = fov_voxels[:, 1].flatten()
-    z = fov_voxels[:, 2].flatten()
-    # label = fov_voxels[:, 3].flatten()
-    fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z,mode='markers',
-        marker=dict(
-            size=2,
-            color=pts_colors,  # set color to an array/list of desired values
-            # colorscale='Viridis',  # choose a colorscale
-            opacity=1.0,
-            symbol='square'
-        ))])
-    fig.update_layout(
-        scene = dict(
-            aspectmode='data',
-            xaxis = dict(
-                backgroundcolor="rgb(255, 255, 255)",
-                gridcolor="black",
-                showbackground=True,
-                zerolinecolor="black",
-                nticks=4,
-                visible=False,
-                range=[-1,55],),
-            yaxis = dict(
-                backgroundcolor="rgb(255, 255, 255)",
-                gridcolor="black",
-                showbackground=True,
-                zerolinecolor="black",
-                visible=False,
-                nticks=4, range=[-1,55],),
-            zaxis = dict(
-                backgroundcolor="rgb(255, 255, 255)",
-                gridcolor="black",
-                showbackground=True,
-                zerolinecolor="black",
-                visible=False,
-                nticks=4, range=[-1,7],),
-            bgcolor="black",
-        ),
-
-    )
-
-    # fig = px.scatter_3d(
-    #     fov_voxels,
-    #     x=fov_voxels[:, 0], y="y", z="z", color="label")
-    # Draw occupied inside FOV voxels
-    # plt_plot_fov = mlab.points3d(
-    #     fov_voxels[:, 0],
-    #     fov_voxels[:, 1],
-    #     fov_voxels[:, 2],
-    #     fov_voxels[:, 3],
-    #     colormap="viridis",
-    #     scale_factor=voxel_size - 0.05 * voxel_size,
-    #     mode="cube",
-    #     opacity=1.0,
-    #     vmin=1,
-    #     vmax=19,
-    # )
-
-    # # Draw occupied outside FOV voxels
-    # plt_plot_outfov = mlab.points3d(
-    #     outfov_voxels[:, 0],
-    #     outfov_voxels[:, 1],
-    #     outfov_voxels[:, 2],
-    #     outfov_voxels[:, 3],
-    #     colormap="viridis",
-    #     scale_factor=voxel_size - 0.05 * voxel_size,
-    #     mode="cube",
-    #     opacity=1.0,
-    #     vmin=1,
-    #     vmax=19,
-    # )
-
-
-
-    # plt_plot_fov.glyph.scale_mode = "scale_by_vector"
-    # plt_plot_outfov.glyph.scale_mode = "scale_by_vector"
-
-    # plt_plot_fov.module_manager.scalar_lut_manager.lut.table = colors
-
-    # outfov_colors = colors
-    # outfov_colors[:, :3] = outfov_colors[:, :3] // 3 * 2
-    # plt_plot_outfov.module_manager.scalar_lut_manager.lut.table = outfov_colors
-
-    # mlab.show()
-    return fig
spaces/CVPR/regionclip-demo/detectron2/checkpoint/catalog.py
DELETED
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-
-from detectron2.utils.file_io import PathHandler, PathManager
-
-
-class ModelCatalog(object):
-    """
-    Store mappings from names to third-party models.
-    """
-
-    S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron"
-
-    # MSRA models have STRIDE_IN_1X1=True. False otherwise.
-    # NOTE: all BN models here have fused BN into an affine layer.
-    # As a result, you should only load them to a model with "FrozenBN".
-    # Loading them to a model with regular BN or SyncBN is wrong.
-    # Even when loaded to FrozenBN, it is still different from affine by an epsilon,
-    # which should be negligible for training.
-    # NOTE: all models here uses PIXEL_STD=[1,1,1]
-    # NOTE: Most of the BN models here are no longer used. We use the
-    # re-converted pre-trained models under detectron2 model zoo instead.
-    C2_IMAGENET_MODELS = {
-        "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl",
-        "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl",
-        "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl",
-        "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl",
-        "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl",
-        "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl",
-        "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl",
-    }
-
-    C2_DETECTRON_PATH_FORMAT = (
-        "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl"  # noqa B950
-    )
-
-    C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival"
-    C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival"
-
-    # format: {model_name} -> part of the url
-    C2_DETECTRON_MODELS = {
-        "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW",  # noqa B950
-        "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I",  # noqa B950
-        "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7",  # noqa B950
-        "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ",  # noqa B950
-        "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB",  # noqa B950
-        "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC",  # noqa B950
-        "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT",  # noqa B950
-        "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI",  # noqa B950
-        "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q",  # noqa B950
-        "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao",  # noqa B950
-        "35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L",  # noqa B950
-        "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179",  # noqa B950
-        "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2",  # noqa B950
-    }
-
-    @staticmethod
-    def get(name):
-        if name.startswith("Caffe2Detectron/COCO"):
-            return ModelCatalog._get_c2_detectron_baseline(name)
-        if name.startswith("ImageNetPretrained/"):
-            return ModelCatalog._get_c2_imagenet_pretrained(name)
-        raise RuntimeError("model not present in the catalog: {}".format(name))
-
-    @staticmethod
-    def _get_c2_imagenet_pretrained(name):
-        prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX
-        name = name[len("ImageNetPretrained/") :]
-        name = ModelCatalog.C2_IMAGENET_MODELS[name]
-        url = "/".join([prefix, name])
-        return url
-
-    @staticmethod
-    def _get_c2_detectron_baseline(name):
-        name = name[len("Caffe2Detectron/COCO/") :]
-        url = ModelCatalog.C2_DETECTRON_MODELS[name]
-        if "keypoint_rcnn" in name:
-            dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS
-        else:
-            dataset = ModelCatalog.C2_DATASET_COCO
-
-        if "35998355/rpn_R-50-C4_1x" in name:
-            # this one model is somehow different from others ..
-            type = "rpn"
-        else:
-            type = "generalized_rcnn"
-
-        # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`.
-        url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format(
-            prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset
-        )
-        return url
-
-
-class ModelCatalogHandler(PathHandler):
-    """
-    Resolve URL like catalog://.
-    """
-
-    PREFIX = "catalog://"
-
-    def _get_supported_prefixes(self):
-        return [self.PREFIX]
-
-    def _get_local_path(self, path, **kwargs):
-        logger = logging.getLogger(__name__)
-        catalog_path = ModelCatalog.get(path[len(self.PREFIX) :])
-        logger.info("Catalog entry {} points to {}".format(path, catalog_path))
-        return PathManager.get_local_path(catalog_path, **kwargs)
-
-    def _open(self, path, mode="r", **kwargs):
-        return PathManager.open(self._get_local_path(path), mode, **kwargs)
-
-
-PathManager.register_handler(ModelCatalogHandler())
spaces/Cam-Brazy/BearTest/app.py
DELETED
@@ -1,20 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-
-__all__ = ["learn", "classify_image", "categories", "image", "label", "examples", "intf"]
-
-learn = load_learner('export.pkl')
-
-categories = ('Black', 'Grizzly', 'Teddy')
-
-def classify_image(inp):
-    pred,idx,probs = learn.predict(inp)
-    return dict(zip(categories, map(float, probs)))
-
-
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-examples = ["grizzly.jpg", "teddy.jpg"]
-
-iface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-iface.launch(inline=False)
spaces/Chaitanya01/InvestingPlatform/alerts.py
DELETED
@@ -1,85 +0,0 @@
-from distutils.command.sdist import sdist
-from numpy import tri
-import pandas as pd
-import json, requests
-import slack, time
-from datetime import datetime
-# from bs4 import BeautifulSoup
-from config import *
-def get_yahoo_finance_quote(symbol):
-    # Get the symbol quote from yahoo finance, we are using Beautiful Soup for scraping
-    URL = f"https://finance.yahoo.com/quote/{symbol}"
-    headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
-    page = requests.get(URL, headers = headers)
-    soup = BeautifulSoup(page.text, "html.parser")
-    price = soup.find('div',{'class':'D(ib) Mend(20px)'}).find_all('fin-streamer')[0].text
-    return float(price.replace(",",""))
-def get_cnbc_data(symbol):
-    ticker = symbol.replace(" ","")
-    if ticker == "NASDAQ":
-        ticker = "NDX"
-    elif ticker == "NIFTY50":
-        ticker = ".NSEI"
-    # Get the symbol quote from yahoo finance, we are using Beautiful Soup for scraping
-    df = pd.DataFrame(requests.get(f"https://ts-api.cnbc.com/harmony/app/charts/1Y.json?symbol={ticker}").json()["barData"]["priceBars"])
-    # df_1D = pd.DataFrame(requests.get(f"https://ts-api.cnbc.com/harmony/app/charts/1D.json?symbol={ticker}").json()["barData"]["priceBars"])
-    df["datetime"] = pd.to_datetime(df['tradeTimeinMills'],unit='ms')
-    df["close"] = df["close"].astype(float)
-    # df_1D["close"] = df_1D["close"].astype(float)
-    df.set_index("datetime",inplace = True)
-    dma200 = (df["close"].rolling(200).mean()).iloc[-1]
-    close = (df["close"].iloc[-1])
-    return dma200, close
-
-client = slack.WebClient(token = SLACK_TOKEN)
-
-while True:
-    df = pd.read_csv('watchlist.csv')
-    df.set_index("Symbol",inplace = True)
-    # df_crypto = pd.DataFrame(json.loads(requests.get("https://ftx.com/api/markets").text)["result"])
-    # df_crypto = df_crypto[df_crypto["quoteCurrency"].isin(["USD","USDT"])]
-    # df_crypto.set_index("name",inplace = True)
-
-    if len(df)>0:
-        req_df_price = df[df["status"] == "Pending"]
-        req_df_dma = df[df["dma_status"] == "Pending"]
-        for symbol in req_df_price.index:
-            if symbol in ["SPX","US 2Y","US 5Y","US 10Y","US 30Y","HYG","LQD","NASDAQ","VIX","NIFTY50"]:
-                dma200, ltp= get_cnbc_data(symbol)
-            # else:
-            #     ltp = df_crypto.loc[symbol]["last"]
-            trigger_level = req_df_price.loc[symbol]["Trigger"]
-            triggered = 0
-
-            if req_df_price.loc[symbol]["view_type"] == "Above":
-                if trigger_level<=ltp:
-                    triggered = 1
-            elif req_df_price.loc[symbol]["view_type"] == "Below":
-                if trigger_level>=ltp:
-                    triggered = 1
-
-            if triggered == 1:
-                df.at[symbol,"status"] = "Triggered"
-                client.chat_postMessage(channel = f"#{df.loc[symbol]['alert_type'].lower()}_signal",
-                                        text = f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} {symbol} is {df.loc[symbol]['view_type']} {trigger_level} at {ltp}")
-        for symbol in req_df_dma.index:
-            dma_check = req_df_dma.loc[symbol]["dma200"]
-            if dma_check == False:
-                continue
-            triggered_dma200 = 0
-            dma200, ltp= get_cnbc_data(symbol)
-            print(dma200)
-            if req_df_dma.loc[symbol]["dma200_view_type"] == "Above":
-                if dma200<=ltp:
-                    triggered_dma200 = 1
-            elif req_df_dma.loc[symbol]["dma200_view_type"] == "Below":
-                if dma200>=ltp:
-                    triggered_dma200 = 1
-
-            if triggered_dma200 == 1:
-                df.at[symbol,"dma_status"] = "Triggered"
-                client.chat_postMessage(channel = f"#{df.loc[symbol]['alert_type'].lower()}_signal",
-                                        text = f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} {symbol} is {df.loc[symbol]['dma200_view_type']} DMA200 at {ltp}")
-    df.to_csv("watchlist.csv")
-    # Recheck again after 60 minutes
-    time.sleep(60*60)
spaces/ChandraMohanNayal/AutoGPT/run_continuous.bat
DELETED
@@ -1,3 +0,0 @@
-@echo off
-set argument=--continuous
-call run.bat %argument%
spaces/ChillyFaze/runwayml-stable-diffusion-v1-5/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Runwayml Stable Diffusion V1 5
-emoji: 🌖
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ClearLove443/Robby-chatbot/modules/history.py
DELETED
@@ -1,58 +0,0 @@
-import os
-import streamlit as st
-from streamlit_chat import message
-
-class ChatHistory:
-
-    def __init__(self):
-        self.history = st.session_state.get("history", [])
-        st.session_state["history"] = self.history
-
-    def default_greeting(self):
-        return "Hey Robby ! 👋"
-
-    def default_prompt(self, topic):
-        return f"Hello ! Ask me anything about {topic} 🤗"
-
-    def initialize_user_history(self):
-        st.session_state["user"] = [self.default_greeting()]
-
-    def initialize_assistant_history(self, uploaded_file):
-        st.session_state["assistant"] = [self.default_prompt(uploaded_file.name)]
-
-    def initialize(self, uploaded_file):
-        if "assistant" not in st.session_state:
-            self.initialize_assistant_history(uploaded_file)
-        if "user" not in st.session_state:
-            self.initialize_user_history()
-
-    def reset(self, uploaded_file):
-        st.session_state["history"] = []
-
-        self.initialize_user_history()
-        self.initialize_assistant_history(uploaded_file)
-        st.session_state["reset_chat"] = False
-
-    def append(self, mode, message):
-        st.session_state[mode].append(message)
-
-    def generate_messages(self, container):
-        if st.session_state["assistant"]:
-            with container:
-                for i in range(len(st.session_state["assistant"])):
-                    message(
-                        st.session_state["user"][i],
-                        is_user=True,
-                        key=f"history_{i}_user",
-                        avatar_style="big-smile",
-                    )
-                    message(st.session_state["assistant"][i], key=str(i), avatar_style="thumbs")
-
-    def load(self):
-        if os.path.exists(self.history_file):
-            with open(self.history_file, "r") as f:
-                self.history = f.read().splitlines()
-
-    def save(self):
-        with open(self.history_file, "w") as f:
-            f.write("\n".join(self.history))
spaces/Cletrason/Cletrason-toad-mario-movie/app_text_to_video.py
DELETED
@@ -1,97 +0,0 @@
-import gradio as gr
-from model import Model
-import os
-from hf_utils import get_model_list
-
-on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR"
-
-examples = [
-    ["an astronaut waving the arm on the moon"],
-    ["a sloth surfing on a wakeboard"],
-    ["an astronaut walking on a street"],
-    ["a cute cat walking on grass"],
-    ["a horse is galloping on a street"],
-    ["an astronaut is skiing down the hill"],
-    ["a gorilla walking alone down the street"],
-    ["a gorilla dancing on times square"],
-    ["A panda dancing dancing like crazy on Times Square"],
-]
-
-
-def create_demo(model: Model):
-
-    with gr.Blocks() as demo:
-        with gr.Row():
-            gr.Markdown('## Text2Video-Zero: Video Generation')
-        with gr.Row():
-            gr.HTML(
-                """
-                <div style="text-align: left; auto;">
-                <h2 style="font-weight: 450; font-size: 1rem; margin: 0rem">
-                    Description: Simply input <b>any textual prompt</b> to generate videos right away and unleash your creativity and imagination! You can also select from the examples below. For performance purposes, our current preview release allows to generate up to 16 frames, which can be configured in the Advanced Options.
-                </h3>
-                </div>
-                """)
-
-        with gr.Row():
-            with gr.Column():
-                model_name = gr.Dropdown(
-                    label="Model",
-                    choices=get_model_list(),
-                    value="dreamlike-art/dreamlike-photoreal-2.0",
-                )
-                prompt = gr.Textbox(label='Prompt')
-                run_button = gr.Button(label='Run')
-                with gr.Accordion('Advanced options', open=False):
-                    watermark = gr.Radio(["Picsart AI Research", "Text2Video-Zero",
-                                          "None"], label="Watermark", value='Picsart AI Research')
-
-                    if on_huggingspace:
-                        video_length = gr.Slider(
-                            label="Video length", minimum=8, maximum=16, step=1)
-                    else:
-                        video_length = gr.Number(
-                            label="Video length", value=8, precision=0)
-                    chunk_size = gr.Slider(
-                        label="Chunk size", minimum=2, maximum=16, value=12 if on_huggingspace else 8, step=1, visible=not on_huggingspace)
-
-                    motion_field_strength_x = gr.Slider(
-                        label='Global Translation $\delta_{x}$', minimum=-20, maximum=20, value=12, step=1)
-                    motion_field_strength_y = gr.Slider(
-                        label='Global Translation $\delta_{y}$', minimum=-20, maximum=20, value=12, step=1)
-
-                    t0 = gr.Slider(label="Timestep t0", minimum=0,
-                                   maximum=49, value=44, step=1)
-                    t1 = gr.Slider(label="Timestep t1", minimum=0,
-                                   maximum=49, value=47, step=1)
-
-                    n_prompt = gr.Textbox(
-                        label="Optional Negative Prompt", value='')
-            with gr.Column():
-                result = gr.Video(label="Generated Video")
-
-        inputs = [
-            prompt,
-            model_name,
-            motion_field_strength_x,
-            motion_field_strength_y,
-            t0,
-            t1,
-            n_prompt,
-            chunk_size,
-            video_length,
-            watermark,
-        ]
-
-        gr.Examples(examples=examples,
-                    inputs=inputs,
-                    outputs=result,
-                    fn=model.process_text2video,
-                    run_on_click=False,
-                    cache_examples=on_huggingspace,
-                    )
-
-        run_button.click(fn=model.process_text2video,
-                         inputs=inputs,
-                         outputs=result,)
-    return demo
spaces/CofAI/chat/g4f/Provider/Providers/helpers/theb.py
DELETED
@@ -1,48 +0,0 @@
-import json
-import sys
-from re import findall
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-headers = {
-    'authority': 'chatbot.theb.ai',
-    'accept': 'application/json, text/plain, */*',
-    'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-    'content-type': 'application/json',
-    'origin': 'https://chatbot.theb.ai',
-    'referer': 'https://chatbot.theb.ai/',
-    'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
-    'sec-ch-ua-mobile': '?0',
-    'sec-ch-ua-platform': '"macOS"',
-    'sec-fetch-dest': 'empty',
-    'sec-fetch-mode': 'cors',
-    'sec-fetch-site': 'same-origin',
-    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
-}
-
-json_data = {
-    'prompt': prompt,
-    'options': {}
-}
-
-def format(chunk):
-    try:
-        completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0]
-        print(completion_chunk, flush=True, end='')
-
-    except Exception as e:
-        print(f'[ERROR] an error occured, retrying... | [[{chunk.decode()}]]', flush=True)
-        return
-
-while True:
-    try:
-        response = requests.post('https://chatbot.theb.ai/api/chat-process',
-                                 headers=headers, json=json_data, content_callback=format, impersonate='chrome110')
-
-        exit(0)
-
-    except Exception as e:
-        print('[ERROR] an error occured, retrying... |', e, flush=True)
-        continue
spaces/Cong723/gpt-academic-public/docs/WithFastapi.md
DELETED
@@ -1,43 +0,0 @@
-# Running with fastapi
-
-We currently support fastapi in order to solve sub-path deploy issue.
-
-1. change CUSTOM_PATH setting in `config.py`
-
-``` sh
-nano config.py
-```
-
-2. Edit main.py
-
-```diff
-auto_opentab_delay()
-- demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
-+ demo.queue(concurrency_count=CONCURRENT_COUNT)
-
-- # If you need to run under a sub-path
-- # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
-- # if CUSTOM_PATH != "/":
-- #     from toolbox import run_gradio_in_subpath
-- #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
-- # else:
-- #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
-
-+ If you need to run under a sub-path
-+ CUSTOM_PATH, = get_conf('CUSTOM_PATH')
-+ if CUSTOM_PATH != "/":
-+     from toolbox import run_gradio_in_subpath
-+     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
-+ else:
-+     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
-
-if __name__ == "__main__":
-    main()
-```
-
-
-3. Go!
-
-``` sh
-python main.py
-```