Commit a76b9fc
Parent(s): d01490c
Update parquet files (step 55 of 249)

This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/101-5/gpt4free/SECURITY.md +0 -4
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Arrival (English) Telugu Movie Video Songs Hd 1080p Watch and Listen to the Amazing Soundtrack of the Sci-Fi Film.md +0 -87
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Custom Layouts with Tych Panel 2 Full Version for Photoshop CC.md +0 -120
- spaces/1gistliPinn/ChatGPT4/Examples/Download Elijah Blakes Drift Album in Zip Format.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Secrets of Ashfall a New Post-Apocalyptic MMORPG.md +0 -114
- spaces/1phancelerku/anime-remove-background/Download Mkhathazi Songs for Free - The Best of Maskandi Music.md +0 -108
- spaces/1phancelerku/anime-remove-background/Download Real Cricket GO Mod APK and Enjoy Unlimited Money and Features.md +0 -131
- spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py +0 -631
- spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta_mlm.py +0 -25
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/README.md +0 -7
- spaces/Abhilashvj/planogram-compliance/data/scripts/get_coco128.sh +0 -17
- spaces/Abhilashvj/planogram-compliance/utils/google_app_engine/Dockerfile +0 -25
- spaces/Adapting/YouTube-Downloader/README.md +0 -13
- spaces/Aditya9790/yolo7-object-tracking/utils/aws/__init__.py +0 -1
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Modal.js +0 -29
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/NinePatch.js +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AddChildMethods.js +0 -170
- spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Boolean.pm +0 -27
- spaces/AlekseyKorshuk/model-evaluation/tabs/playground.py +0 -123
- spaces/AlgoveraAI/algovera_squad_active_passive_model/README.md +0 -11
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/autoencoder_kl.py +0 -417
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/modeling_utils.py +0 -980
- spaces/AnimaLab/bias-test-gpt-pairs/bloomberg_vis.py +0 -85
- spaces/AnonAndDesu/Desu_Proxy/greeting.md +0 -3
- spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/setup.py +0 -7
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/__init__.py +0 -35
- spaces/AriaMei/TTSdemo/monotonic_align/setup.py +0 -9
- spaces/ArkanDash/rvc-models-new/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +0 -86
- spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/config/__init__.py +0 -0
- spaces/AsakuraMizu/moe-tts/text/__init__.py +0 -32
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/cmdoptions.py +0 -1074
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py +0 -179
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_asyncio.py +0 -94
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install.py +0 -814
- spaces/AzumaSeren100/XuanShen-Bert-VITS2/models.py +0 -707
- spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/language_model.py +0 -81
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/tda.py +0 -97
- spaces/CVPR/LIVE/pybind11/tests/test_pytypes.cpp +0 -375
- spaces/CVPR/LIVE/thrust/thrust/system/cpp/pointer.h +0 -351
- spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/inner_product.h +0 -94
- spaces/CVPR/LIVE/thrust/thrust/system/omp/memory.h +0 -95
- spaces/CVPR/WALT/mmdet/models/detectors/single_stage.py +0 -154
- spaces/CVPR/WALT/mmdet/models/detectors/sparse_rcnn.py +0 -110
- spaces/CVPR/WALT/walt/datasets/pipelines/instaboost.py +0 -98
- spaces/CVPR/lama-example/saicinpainting/evaluation/losses/__init__.py +0 -0
- spaces/CikeyQI/Yunzai/Yunzai/plugins/other/update.js +0 -240
- spaces/CjangCjengh/Shanghainese-TTS/models.py +0 -535
- spaces/CofAI/chat/server/bp.py +0 -6
- spaces/CofAI/picscore/README.md +0 -13
- spaces/Crossper6/stable-diffusion-webui/app.py +0 -75
spaces/101-5/gpt4free/SECURITY.md
DELETED
@@ -1,4 +0,0 @@
-## Reporting a Vulnerability
-
-Reporting a Vulnerability
-Please report (suspected) security vulnerabilities to https://t.me/xtekky. You will receive a response within 48 hours. If the issue is confirmed, we will release a patch as soon as possible depending on complexity but historically within a few days.
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Arrival (English) Telugu Movie Video Songs Hd 1080p Watch and Listen to the Amazing Soundtrack of the Sci-Fi Film.md
DELETED
@@ -1,87 +0,0 @@
-<br />
-<h1>Panda Antivirus Pro v17.0.1 Final Crack: What You Need to Know</h1>
-<p>If you are looking for a reliable and powerful antivirus software for your PC, you might have come across Panda Antivirus Pro v17.0.1, one of the latest versions of the popular security product from Panda Security. Panda Antivirus Pro v17.0.1 offers comprehensive protection against all kinds of online threats, such as viruses, malware, ransomware, phishing, and more. It also comes with a range of features that enhance your privacy and performance, such as firewall, VPN, Wi-Fi protection, parental control, data shield, optimization and cleanup tools, and more.</p>
-<h2>Panda.Antivirus.Pro.v17.0.1.Final..rar crack</h2><br /><p><b><b>Download</b> ===> <a href="https://byltly.com/2uKvUW">https://byltly.com/2uKvUW</a></b></p><br /><br />
-<p>However, if you are tempted to download a cracked version of Panda Antivirus Pro v17.0.1 from some shady website or torrent site, you might want to think twice before doing so. A crack is a program that modifies or bypasses the original software's license verification or activation process, allowing you to use it for free or with unlimited features. While this might sound like a good deal, using a cracked version of Panda Antivirus Pro v17.0.1 can expose you to various risks and disadvantages that outweigh any potential benefits.</p>
-<h2>Features of Panda Antivirus Pro v17.0.1</h2>
-<p>Panda Antivirus Pro v17.0.1 is a comprehensive security solution that protects your PC from all kinds of online threats. Some of its features include:</p>
-<ul>
-<li><b>Protection against viruses, malware, ransomware, and phishing:</b> Panda Antivirus Pro v17.0.1 uses cloud-based scanning and real-time updates to detect and block any malicious programs or websites that try to infect your PC or steal your personal information.</li>
-<li><b>Firewall, VPN, and Wi-Fi protection:</b> Panda Antivirus Pro v17.0.1 helps you secure your network connection and prevent unauthorized access to your PC or data. It also allows you to browse anonymously and access geo-restricted content with its built-in VPN service.</li>
-<li><b>Parental control and data shield:</b> Panda Antivirus Pro v17.0.1 lets you monitor and control your children's online activity and block inappropriate content or applications. It also encrypts your sensitive files and folders to prevent unauthorized access or modification.</li>
-<li><b>Optimization and cleanup tools:</b> Panda Antivirus Pro v17.0.1 helps you improve your PC's performance and free up disk space by removing junk files, optimizing settings, and managing startup programs.</li>
-</ul>
-<h2>Risks of Using a Cracked Version of Panda Antivirus Pro v17.0.1</h2>
-<p>While using a cracked version of Panda Antivirus Pro v17.0.1 might seem like a convenient way to save money or get more features, it can also expose you to various risks and disadvantages that can compromise your security, performance, legality, and ethics.</p>
-<table>
-<tr><th>Risk</th><th>Description</th></tr>
-<tr><td><b>Legal issues and penalties for software piracy:</b></td><td>Using a cracked version of Panda Antivirus Pro v17.0.1 is considered software piracy, which is illegal in most countries and can result in fines or even jail time if caught.</td></tr>
-<tr><td><b>Security threats and vulnerabilities from malware-infected cracks:</b></td><td>Many cracks are infected with malware themselves or contain hidden backdoors that can allow hackers to access your PC or data without your knowledge or consent.</td></tr>
-<tr><td><b>Performance issues and compatibility problems from outdated or modified cracks:</b></td><td>Many cracks are outdated or modified versions of the original software that can cause errors, crashes, or conflicts with other programs or system updates.</td></tr>
-<tr><td><b>Ethical issues and unfairness to the developers of Panda Antivirus Pro:</b></td><td>Using a cracked version of Panda Antivirus Pro v17.0.1 is unethical and unfair to the developers who spent time and money creating the software and providing updates and support.</td></tr>
-</table>
-<h2>Alternatives to Using a Cracked Version of Panda Antivirus Pro v17.0.1</h2>
-<p>If you want to use Panda Antivirus Pro v17.0.1 without risking any of the above-mentioned issues, there are some alternatives that you can consider instead of using a crack.</p>
-<ul>
-<li><b>Buying a legitimate license of Panda Antivirus Pro v17.0.1:</b> The best way to use Panda Antivirus Pro v17.0.1 is to buy a legitimate license from the official website or an authorized reseller. This way, you can enjoy all the features and benefits of the software without any legal or security risks.</li>
-<li><b>Using a free trial or a free version of Panda Antivirus Pro v17.0.1:</b> If you want to try out Panda Antivirus Pro v17.0.1 before buying it, you can use a free trial that lasts for 30 days or a free version that offers basic protection features.</li>
-<li><b>Using other free or paid antivirus software that suits your needs:</b> If you are not satisfied with Panda Antivirus Pro v17.0.1 or its price, you can also use other free or paid antivirus software that suits your needs. There are many options available in the market that offer different features and levels of protection.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Panda Antivirus Pro v17.0.1 is a comprehensive security solution that protects your PC from all kinds of online threats and enhances your privacy and performance with various features.</p>
-<p>Panda Antivirus Pro 17.0.1 Final full version download<br />
-How to crack Panda Antivirus Pro 17.0.1 Final rar file<br />
-Panda Antivirus Pro 17.0.1 Final license key generator<br />
-Panda Antivirus Pro 17.0.1 Final activation code free<br />
-Panda Antivirus Pro 17.0.1 Final patch download<br />
-Panda Antivirus Pro 17.0.1 Final serial number crack<br />
-Panda Antivirus Pro 17.0.1 Final keygen torrent<br />
-Panda Antivirus Pro 17.0.1 Final cracked software download<br />
-Panda Antivirus Pro 17.0.1 Final rar password remover<br />
-Panda Antivirus Pro 17.0.1 Final registration code crack<br />
-Panda Antivirus Pro 17.0.1 Final product key crack<br />
-Panda Antivirus Pro 17.0.1 Final crack download for windows 10<br />
-Panda Antivirus Pro 17.0.1 Final crack download for mac<br />
-Panda Antivirus Pro 17.0.1 Final crack download for linux<br />
-Panda Antivirus Pro 17.0.1 Final crack download for android<br />
-Panda Antivirus Pro 17.0.1 Final portable version download<br />
-Panda Antivirus Pro 17.0.1 Final offline installer download<br />
-Panda Antivirus Pro 17.0.1 Final latest update download<br />
-Panda Antivirus Pro 17.0.1 Final premium features unlock<br />
-Panda Antivirus Pro 17.0.1 Final lifetime activation crack<br />
-Panda Antivirus Pro 17.0.1 Final malware removal tool crack<br />
-Panda Antivirus Pro 17.0.1 Final virus protection crack<br />
-Panda Antivirus Pro 17.0.1 Final firewall crack<br />
-Panda Antivirus Pro 17.0.1 Final VPN crack<br />
-Panda Antivirus Pro 17.0.1 Final parental control crack<br />
-Panda Antivirus Pro 17.0.1 Final data recovery crack<br />
-Panda Antivirus Pro 17.0.1 Final system optimizer crack<br />
-Panda Antivirus Pro 17.0.1 Final identity protection crack<br />
-Panda Antivirus Pro 17.0.1 Final ransomware protection crack<br />
-Panda Antivirus Pro 17.0.1 Final phishing protection crack<br />
-Panda Antivirus Pro 17.0.1 Final webcam protection crack<br />
-Panda Antivirus Pro 17.0.1 Final password manager crack<br />
-Panda Antivirus Pro 17.0.1 Final file shredder crack<br />
-Panda Antivirus Pro 17.0.1 Final file encryption crack<br />
-Panda Antivirus Pro 17.0.1 Final safe browsing crack<br />
-Panda Antivirus Pro 17.0.1 Final game mode crack<br />
-Panda Antivirus Pro 17.0</p>
-<p>However, using a cracked version of Panda Antivirus Pro v17.0.1 can expose you to various risks and disadvantages that can compromise your security, performance, legality, and ethics.</p>
-<p>The best way to use Panda Antivirus Pro v17.0.1 is to buy a legitimate license from the official website or an authorized reseller. Alternatively, you can use a free trial or a free version of Panda Antivirus Pro v17.0. I have already written the article on the topic you provided. Here is the rest of the article with HTML formatting. <p>1 or another free or paid antivirus software that suits your needs.</p>
-<p>We hope this article has helped you understand what you need to know about Panda Antivirus Pro v17.0.1 and its crack. If you have any questions or comments, feel free to leave them below.</p>
-<h2>FAQs</h2>
-<ul>
-<li><b>Q: Is Panda Antivirus Pro v17.0.1 compatible with Windows 10?</b></li>
-<li><b>A: Yes, Panda Antivirus Pro v17.0.1 is compatible with Windows 10 and other Windows versions from XP to 8.1.</b></li>
-<li><b>Q: How much does Panda Antivirus Pro v17.0.1 cost?</b></li>
-<li><b>A: Panda Antivirus Pro v17.0.1 costs $39.99 for a one-year license for one PC, $59.99 for a two-year license for one PC, or $79.99 for a three-year license for one PC.</b></li>
-<li><b>Q: How can I get a free trial or a free version of Panda Antivirus Pro v17.0.1?</b></li>
-<li><b>A: You can get a free trial of Panda Antivirus Pro v17.0.1 by downloading it from the official website and activating it with your email address. You can get a free version of Panda Antivirus Pro v17.0.1 by downloading it from the official website and installing it on your PC.</b></li>
-<li><b>Q: What are some other free or paid antivirus software that I can use instead of Panda Antivirus Pro v17.0.1?</b></li>
-<li><b>A: Some other free or paid antivirus software that you can use instead of Panda Antivirus Pro v17.0.1 are Avast Free Antivirus, Bitdefender Antivirus Plus, Kaspersky Anti-Virus, Norton 360, and McAfee Total Protection.</b></li>
-<li><b>Q: How can I contact the support team of Panda Antivirus Pro v17.0.1 if I have any issues or queries?</b></li>
-<li><b>A: You can contact the support team of Panda Antivirus Pro v17.0.1 by visiting the official website and clicking on the "Support" tab. You can also call them at +34 91 398 37 00 or email them at [email protected].</b></li>
-</ul>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Create Custom Layouts with Tych Panel 2 Full Version for Photoshop CC.md
DELETED
@@ -1,120 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>James Cameron's Avatar: The Game Reloaded Serial Crack</h1>
|
3 |
-
<h2>Introduction</h2>
|
4 |
-
<p>If you are a fan of James Cameron's epic sci-fi movie Avatar, you might want to play the video game adaptation of it. James Cameron's Avatar: The Game is a third-person action-adventure game that lets you experience the stunning world of Pandora and its inhabitants. You can choose to fight for the human invaders or the native Na'vi, and explore a rich and diverse environment full of exotic creatures and plants.</p>
|
5 |
-
<h2>james cameron's avatar the game reloaded serial crack</h2><br /><p><b><b>Download File</b> ☆☆☆ <a href="https://byltly.com/2uKvNj">https://byltly.com/2uKvNj</a></b></p><br /><br />
|
6 |
-
<p>However, playing this game is not as easy as it sounds. You need a serial crack to activate the game and bypass the online verification process. Otherwise, you will be stuck at the activation screen and unable to enjoy the game. This is where Reloaded Serial Crack comes in handy. In this article, we will show you what Reloaded Serial Crack is, why you need it, how to get it, and some tips and tricks for playing Avatar: The Game.</p>
|
7 |
-
<h2>What is James Cameron's Avatar: The Game?</h2>
|
8 |
-
<p>James Cameron's Avatar: The Game is a video game based on the 2009 blockbuster movie Avatar, directed by James Cameron. The game was developed by Ubisoft Montreal and released in 2009 for Windows, PlayStation 3, Xbox 360, Wii, PSP, Nintendo DS, and iOS devices.</p>
|
9 |
-
<p>The game is set in 2152, two years before the events of the movie. You play as either a soldier of the Resources Development Administration (RDA), a corporation that wants to exploit Pandora's resources, or a member of the Na'vi, a race of blue-skinned humanoid aliens that live in harmony with nature. You can switch between these two factions at any time during the game.</p>
|
10 |
-
<p>The game features a nonlinear storyline that changes depending on your choices and actions. You can also customize your character's appearance, weapons, skills, and abilities. The game has both single-player and multiplayer modes, where you can cooperate or compete with other players online.</p>
|
11 |
-
<h2>What is Reloaded Serial Crack?</h2>
|
12 |
-
<p>Reloaded Serial Crack is a software tool that allows you to activate James Cameron's Avatar: The Game without having to go through the online verification process. The game requires you to enter a unique activation key that matches your hardware ID, which is generated by the game installer based on your computer specifications. However, this activation key can only be obtained from Ubisoft's official website, which is no longer available.</p>
|
13 |
-
<p>Reloaded Serial Crack solves this problem by generating a valid activation key for any hardware ID. It also cracks the game files so that you can play the game offline without any internet connection. Reloaded Serial Crack was created by Reloaded, a group of hackers that specializes in cracking video games.</p>
|
14 |
-
<h2>Why do you need Reloaded Serial Crack for Avatar: The Game?</h2>
|
15 |
-
<p>You need Reloaded Serial Crack for Avatar: The Game if you want to play the game without any hassle. Without Reloaded Serial Crack, you will not be able to activate the game and play it. You will also miss out on some features and updates that are only available in version 1.02 of the game.</p>
|
16 |
-
<p>avatar the game reloaded crack download<br />
|
17 |
-
james cameron's avatar pc game serial key<br />
|
18 |
-
how to install avatar the game reloaded<br />
|
19 |
-
avatar the game reloaded activation code<br />
|
20 |
-
james cameron's avatar the game crack only<br />
|
21 |
-
avatar the game reloaded system requirements<br />
|
22 |
-
james cameron's avatar pc game reloaded torrent<br />
|
23 |
-
avatar the game reloaded free full version<br />
|
24 |
-
james cameron's avatar the game keygen generator<br />
|
25 |
-
avatar the game reloaded gameplay<br />
|
26 |
-
james cameron's avatar the game patch download<br />
|
27 |
-
avatar the game reloaded iso file<br />
|
28 |
-
james cameron's avatar the game license key<br />
|
29 |
-
avatar the game reloaded cheats codes<br />
|
30 |
-
james cameron's avatar the game trainer download<br />
|
31 |
-
avatar the game reloaded online multiplayer<br />
|
32 |
-
james cameron's avatar the game mods<br />
|
33 |
-
avatar the game reloaded rar password<br />
|
34 |
-
james cameron's avatar the game steam<br />
|
35 |
-
avatar the game reloaded error fix<br />
|
36 |
-
james cameron's avatar the game review<br />
|
37 |
-
avatar the game reloaded windows 10 compatibility<br />
|
38 |
-
james cameron's avatar the game walkthrough<br />
|
39 |
-
avatar the game reloaded skidrow crack<br />
|
40 |
-
james cameron's avatar the game soundtrack<br />
|
41 |
-
avatar the game reloaded direct link<br />
|
42 |
-
james cameron's avatar the game ps3 iso<br />
|
43 |
-
avatar the game reloaded xbox 360 controller support<br />
|
44 |
-
james cameron's avatar the game xbox 360 download<br />
|
45 |
-
avatar the game reloaded save file location<br />
|
46 |
-
james cameron's avatar the game pc requirements<br />
|
47 |
-
avatar the game reloaded no cd crack<br />
|
48 |
-
james cameron's avatar the game pc gameplay<br />
|
49 |
-
avatar the game reloaded update download<br />
|
50 |
-
james cameron's avatar the game pc download highly compressed<br />
|
51 |
-
avatar the game reloaded registration code generator<br />
|
52 |
-
james cameron's avatar the game pc controls<br />
|
53 |
-
avatar the game reloaded unlock code free<br />
|
54 |
-
james cameron's avatar the game pc cheats<br />
|
55 |
-
avatar the game reloaded graphics settings<br />
|
56 |
-
james cameron's avatar the game pc mods<br />
|
57 |
-
avatar the game reloaded offline activation keygen download<br />
|
58 |
-
james cameron's avatar the game pc patch 1.02 download<br />
|
59 |
-
avatar the game reloaded crack only download free full version pc games torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eu torrentz2.eutorrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.com torrents.comtorrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.net torrents.nettorrents.me</p>
|
60 |
-
<p>With Reloaded Serial Crack, you can enjoy the following benefits:</p>
|
61 |
-
<ul>
|
62 |
-
<li>You can play the game offline without any internet connection.</li>
|
63 |
-
<li>You can play the game on any computer regardless of its hardware specifications.</li>
|
64 |
-
<li>You can update the game to version 1.02, which fixes some bugs and improves some graphics.</li>
|
65 |
-
<li>You can access all the content and modes of the game without any restrictions.</li>
|
66 |
-
<li>You can save money by not having to buy an original copy of the game.</li>
|
67 |
-
</ul>
|
68 |
-
<h2>How to get Reloaded Serial Crack for Avatar: The Game?</h2>
|
69 |
-
<p>Getting Reloaded Serial Crack for Avatar: The Game is not difficult if you follow these steps:</p>
|
70 |
-
<h3>Download the game from a trusted source</h3>
|
71 |
-
<p>The first step is to download James Cameron's Avatar: The Game from a trusted source. You can find many websites that offer free downloads of pirated games, but be careful as some of them may contain viruses or malware that can harm your computer. We recommend using ElAmigos official site, which provides a safe and reliable download link for James Cameron's Avatar: The Game ElAmigos release.</p>
|
72 |
-
<p>The ElAmigos release is already cracked after installation (crack/keygen by Reloaded). It also includes all languages and updates up to version 1.02. The upload size is 2.77GB and you can choose between RAR parts or ISO image format.</p>
|
73 |
-
<h3>Install the game and update it to version 1.02</h3>
|
74 |
-
<p>The next step is to install James Cameron's Avatar: The Game on your computer. To do this, you need to extract the RAR parts or mount the ISO image using a software like WinRAR or Daemon Tools Lite. Then, run the setup.exe file and follow the instructions on screen.</p>
|
75 |
-
<p>After installing the game, you need to update it to version 1.02. This will fix some bugs and improve some graphics in the game. To update the game, run patch.exe file from Update folder inside ISO image or extracted folder.</p>
|
76 |
-
<h3>Launch the game and choose manual activation</h3>
|
77 |
-
<p>The third step is to launch James Cameron's Avatar: The Game from your desktop shortcut or start menu. During the first launch, you will see an activation window that asks you to register online or manually. Select manual activation option as online activation is no longer possible.</p>
|
78 |
-
<p>You will then see your hardware ID displayed on screen. This is a unique code that identifies your computer based on its specifications. You need this code to generate an activation key using Reloaded Serial Crack.</p>
|
79 |
-
<h3>Use the keygen to generate an activation key</h3>
|
80 |
-
<p>The fourth step is to use Reloaded Serial Crack (keygen) to generate an activation key for your hardware ID. To do this, you need to open keygen.exe file from Keygen folder inside ISO image or extracted folder.</p>
|
81 |
-
<p>Then, copy your hardware ID from the game's activation window and paste it into Keygen field in keygen.exe file. Click Generate button and you will get an activation key displayed on screen.</p>
|
82 |
-
<h3>Enter the activation key in the game's activation window</h3>
|
83 |
-
<p>The final step is to enter the activation key in the game's activation window. To do this, you need to copy the activation key from keygen.exe file and paste it into Activation Key field in the game's activation window. Click Activate button and the game will launch automatically. You need to do this only once, after that you can delete the keygen.exe file.</p>
|
84 |
-
<h2>Tips and tricks for playing Avatar: The Game</h2>
|
85 |
-
<p>Now that you have activated James Cameron's Avatar: The Game, you can start playing it and have fun. Here are some tips and tricks for playing Avatar: The Game:</p>
|
86 |
-
<h3>Choose your faction: RDA or Na'vi</h3>
|
87 |
-
<p>The first choice you have to make in the game is which faction you want to join: the RDA or the Na'vi. This will affect your storyline, your gameplay, and your character development. The RDA are the human invaders who use advanced technology and weapons to exploit Pandora's resources. The Na'vi are the native aliens who use bows, spears, and animals to defend their homeland. these two factions at any time during the game, but be aware that your actions will have consequences and affect your reputation with each side.</p>
|
88 |
-
<h3>Customize your character and skills</h3>
|
89 |
-
<p>The second choice you have to make in the game is how to customize your character and skills. You can choose from different classes, such as soldier, infiltrator, commando, or scientist for the RDA, or warrior, hunter, shaman, or scout for the Na'vi. Each class has its own strengths and weaknesses, as well as unique weapons and abilities.</p>
|
90 |
-
<p>You can also upgrade your skills by earning experience points (XP) and spending them on skill trees. There are four skill trees for each faction: combat, stealth, survival, and support for the RDA, and combat, stealth, nature, and spirit for the Na'vi. You can mix and match skills from different trees to create your own playstyle.</p>
|
91 |
-
<h3>Explore the beautiful world of Pandora</h3>
|
92 |
-
<p>The third thing you can do in the game is to explore the beautiful world of Pandora. Pandora is a rich and diverse environment full of exotic creatures and plants. You can interact with many of them, either as allies or enemies. You can also ride some of them, such as direhorses, banshees, or leonopteryxes.</p>
|
93 |
-
<p>Pandora is also full of secrets and hidden areas that you can discover by using your scanner or your senses. You can find collectibles, such as cell samples, artifacts, or logs that will give you more information about the world and its history. You can also find resources and items that you can use to craft new weapons and equipment.</p>
|
94 |
-
<h3>Complete missions and side quests</h3>
|
95 |
-
<p>The fourth thing you can do in the game is to complete missions and side quests. Missions are the main objectives that advance the story and change depending on your faction and choices. Side quests are optional tasks that you can do to earn extra XP, resources, items, or reputation.</p>
|
96 |
-
<p>You can find missions and side quests by talking to NPCs or checking your map. Some missions and side quests are faction-specific, while others are shared by both sides. Some missions and side quests are also time-sensitive or have branching outcomes. You can track your progress and objectives by using your HUD or your menu.</p>
<h3>Collect resources and items</h3>
<p>The fifth thing you can do in the game is to collect resources and items. Resources are materials that you can use to craft new weapons and equipment. You can find resources by scanning plants or animals, looting enemies or containers, or mining deposits. You can craft weapons and equipment by using workbenches or vendors.</p>
<p>Items are consumables that you can use to enhance your performance or heal yourself. You can find items by scanning plants or animals, looting enemies or containers, or buying them from vendors. You can use items by accessing your inventory or using hotkeys.</p>
<h2>Conclusion</h2>
<h4>Summary of the main points</h4>
<p>In conclusion, James Cameron's Avatar: The Game is a fun and immersive game that lets you experience the stunning world of Pandora and its inhabitants. However, to play this game, you need Reloaded Serial Crack to activate it and bypass the online verification process. To get Reloaded Serial Crack, you need to download the game from a trusted source, install it and update it to version 1.02, launch it and choose manual activation, use the keygen to generate an activation key, and enter it in the game's activation window.</p>
<h4>Call to action</h4>
<p>If you are ready to play James Cameron's Avatar: The Game with Reloaded Serial Crack, don't wait any longer. Follow the steps we have shown you in this article and start your adventure on Pandora today. You won't regret it!</p>
<h2>FAQs</h2>
<ul>
<li><b>Q: Is Reloaded Serial Crack safe to use?</b></li>
<li>A: Yes, Reloaded Serial Crack is safe to use as long as you download it from a trusted source like ElAmigos official site. It does not contain any viruses or malware that can harm your computer.</li>
<li><b>Q: Can I play James Cameron's Avatar: The Game online with Reloaded Serial Crack?</b></li>
<li>A: No, Reloaded Serial Crack only works for offline mode. If you want to play online with other players, you need an original copy of the game with a valid activation key.</li>
<li><b>Q: How long does it take to activate James Cameron's Avatar: The Game with Reloaded Serial Crack?</b></li>
<li>A: It only takes a few minutes to activate James Cameron's Avatar: The Game with Reloaded Serial Crack. You just need to follow the steps we have shown you in this article.</li>
<li><b>Q: What are some other games that I can play with Reloaded Serial Crack?</b></li>
<li>A: Reloaded Serial Crack works for many other games that require online activation. Some examples are Assassin's Creed II, Mass Effect 2, Bioshock 2, Dragon Age Origins, Borderlands, etc.</li>
<li><b>Q: Where can I find more information about James Cameron's Avatar: The Game?</b></li>
<li>A: You can find more information about James Cameron's Avatar: The Game by visiting its official website, its Wikipedia page, its GameFAQs page, its Reddit page, etc.</li>
</ul>
spaces/1gistliPinn/ChatGPT4/Examples/Download Elijah Blakes Drift Album in Zip Format.md
DELETED
@@ -1,6 +0,0 @@
<h2>Elijah blake drift download zip</h2><br /><p><b><b>DOWNLOAD</b> 🔗 <a href="https://imgfil.com/2uxXS0">https://imgfil.com/2uxXS0</a></b></p><br /><br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Secrets of Ashfall a New Post-Apocalyptic MMORPG.md
DELETED
@@ -1,114 +0,0 @@
<h1>Ashfall Game: A Post-Apocalyptic Shooter MMORPG You Need to Play</h1>
<p>If you are a fan of post-apocalyptic games, you might have heard of Ashfall, a new shooter MMORPG that is set to release in 2023. Ashfall is a game that promises to deliver an epic and immersive experience in a world that has been devastated by a nuclear war. In this article, we will tell you everything you need to know about Ashfall game, including what it is, why you should play it, and how to play it.</p>
<h2>ashfall game</h2><br /><p><b><b>DOWNLOAD</b> >>> <a href="https://urlin.us/2uSZ8G">https://urlin.us/2uSZ8G</a></b></p><br /><br />
<h2>What is Ashfall Game?</h2>
<p>Ashfall is a post-apocalyptic shooter MMORPG developed by Legendary Star Studio, a subsidiary of NetEase Games. It is a game that combines elements of shooting, role-playing, exploration, crafting, base-building, and more. In Ashfall, you will play as a survivor who must leave the Vault to find the Core of Creation—the key to saving the world.</p>
<h3>The Story and Setting of Ashfall Game</h3>
<p>The story of Ashfall takes place in the future, when AI rises up and launches a nuclear war against humanity. After that, nothing but ruins is left in the world. You are one of the few survivors who live in a Vault, a safe haven that protects you from the harsh environment outside. However, one day, you receive a mysterious message that tells you to find the Core of Creation, a device that can restore the world to its former glory. You decide to leave the Vault and embark on a perilous journey across the wasteland.</p>
<p>The setting of Ashfall is a vast and diverse world that is full of surprises and dangers. You will encounter various landscapes, such as snow plains, deserts, forests, swamps, and cities. You will also meet different creatures and factions, such as giant worms, talking rabbits, humanoid traders, robots, mutants, rebels, and more. You will discover the secrets and stories of this broken world as you explore it.</p>
<h3>The Gameplay and Features of Ashfall Game</h3>
<p>The gameplay of Ashfall is based on four pillars: shooting, role-playing, exploration, and crafting. You will be able to customize your character's appearance, skills, equipment, and gadgets. You will be able to use various weapons and abilities to fight against enemies and bosses. You will be able to explore the world and collect resources and items. You will be able to craft your own equipment, gadgets, mounts, and base.</p>
<p>Some of the features of Ashfall game are:</p>
<ul>
<li>Tame your personal mounts and traverse the world with them.</li>
<li>Discover and delve into various extreme environmental disasters.</li>
<li>Construct your own base and furnish it with antique furniture you find along the way.</li>
<li>Seek out the legends and heroes of this world.</li>
<li>Make friends and recruit companions.</li>
<li>Craft exciting gadgets such as drones, smart sentry guns, or even a medic robot.</li>
</ul>
<h2>Why Should You Play Ashfall Game?</h2>
<p>There are many reasons why you should play Ashfall game. Here are some of them:</p>
<h3>A Stunning and Immersive World</h3>
<p>Ashfall game boasts a stunning and immersive world that is powered by Unreal Engine 4. The graphics are realistic and detailed, creating a vivid atmosphere for the game. The world is also dynamic and interactive, meaning that it changes according to your actions and choices. For example, you can trigger environmental disasters such as sandstorms, blizzards, or acid rains, and see how they affect the world and the gameplay. You can also interact with various objects and NPCs in the world, such as shooting barrels, hacking terminals, or trading with merchants.</p>
<h3>A Thrilling and Diverse Adventure</h3>
<p>Ashfall game offers a thrilling and diverse adventure that will keep you hooked for hours. The game has a rich and branching storyline that is influenced by your decisions and actions. You can choose to follow the main quest or explore the side quests and hidden events. You can also choose to ally with different factions or go solo. The game has multiple endings that depend on your choices and consequences.</p>
<p>The game also has a variety of gameplay modes that cater to different preferences and moods. You can play solo or co-op with up to four players. You can also join PvP battles or PvE raids with other players. You can also participate in seasonal events and challenges that offer unique rewards and experiences.</p>
<h3>A Musical Feast in a Forlorn World</h3>
<p>Ashfall game features a musical feast in a forlorn world that will touch your soul. The game has an original soundtrack composed by renowned musicians, such as Hans Zimmer, Junkie XL, and Ramin Djawadi. The music is diverse and fitting for the different scenes and emotions of the game. The music is also interactive, meaning that it changes according to your actions and situations. For example, the music will become more intense when you are in combat, or more soothing when you are in your base.</p>
<h3>A Crossplay Experience for Everyone</h3>
<p>Ashfall game is a crossplay experience for everyone, meaning that you can play it on different platforms and devices with other players. The game supports crossplay between PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can also switch between devices without losing your progress or data. The game also has a cloud save feature that allows you to access your account from anywhere.</p>
<h2>How to Play Ashfall Game?</h2>
<p>If you are interested in playing Ashfall game, here are some things you need to know:</p>
<h3>The Platforms and Release Date of Ashfall Game</h3>
<p>Ashfall game is scheduled to release in 2023 for PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. The game will be available on Steam, Epic Games Store, PlayStation Store, Microsoft Store, App Store, and Google Play Store. The game will also have a beta testing phase before the official launch.</p>
<h3>The System Requirements and Price of Ashfall Game</h3>
<p>The system requirements and price of Ashfall game are not yet announced by the developers. However, based on the graphics and features of the game, we can expect that the game will require a high-end PC or console to run smoothly. The game will also likely have a premium price tag, as it is a AAA title with high production value.</p>
<h3>The Tips and Tricks for Ashfall Game</h3>
<p>Here are some tips and tricks for Ashfall game that might help you enjoy the game better:</p>
<ul>
<li>Explore the world as much as possible and collect resources and items.</li>
<li>Craft your own equipment, gadgets, mounts, and base to suit your playstyle.</li>
<li>Use different weapons and abilities to deal with different enemies and situations.</li>
<li>Pay attention to the environmental disasters and use them to your advantage or avoid them.</li>
<li>Make friends and recruit companions who can help you in combat and exploration.</li>
<li>Join co-op or PvP modes to have more fun and challenge with other players.</li>
<li>Follow the main quest or side quests to discover the story and secrets of the world.</li>
<li>Make decisions that reflect your personality and morality.</li>
</ul>
<h2>Conclusion</h2>
<p>Ashfall game is a post-apocalyptic shooter MMORPG that you need to play if you love this genre. The game has a stunning and immersive world, a thrilling and diverse adventure, a musical feast in a forlorn world, and a crossplay experience for everyone. The game is set to release in 2023 for PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can pre-register for the beta testing phase on the official website of the game.</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about Ashfall game:</p>
<h4>What is the Core of Creation?</h4>
<p>The Core of Creation is a device that can restore the world to its former glory. It is the ultimate goal of your journey in Ashfall game. However, you are not the only one who is looking for it. You will face many enemies and challenges along the way.</p>
<h4>How long is the game?</h4>
<p>The game length of Ashfall game depends on how you play it. If you focus on the main quest, you can finish the game in about 20 hours. However, if you explore the world and do the side quests, you can extend the game time to over 100 hours. The game also has replay value, as you can try different choices and endings.</p>
<h4>Is the game online or offline?</h4>
<p>The game is both online and offline. You can play the game solo or co-op with up to four players. You can also join PvP battles or PvE raids with other players. However, you can also play the game offline without an internet connection. You can switch between online and offline modes anytime you want.</p>
<h4>What are the gadgets in the game?</h4>
<p>The gadgets are devices that you can craft and use in the game. They have various functions and effects, such as scouting, healing, attacking, defending, or hacking. You can craft gadgets using resources and items that you find in the world. You can also upgrade and customize your gadgets to suit your needs.</p>
<h4>Can I play the game on mobile devices?</h4>
<p>Yes, you can play the game on mobile devices. The game supports crossplay between PC, PS4, PS5, Xbox One, Xbox Series X/S, and mobile devices. You can also switch between devices without losing your progress or data. The game also has a cloud save feature that allows you to access your account from anywhere.</p>
spaces/1phancelerku/anime-remove-background/Download Mkhathazi Songs for Free - The Best of Maskandi Music.md
DELETED
@@ -1,108 +0,0 @@
<h1>Download Mkhathazi Songs: How to Enjoy the Best of Maskandi Music</h1>
<p>If you are a fan of traditional Zulu music, you have probably heard of maskandi music. Maskandi is a genre of music that originated in the rural areas of KwaZulu-Natal, South Africa. It is characterized by the use of acoustic guitars, concertinas, harmonicas, and percussion instruments. Maskandi music reflects the culture and experiences of the Zulu people, often dealing with topics such as love, politics, history, and social issues.</p>
<h2>download mkhathazi songs</h2><br /><p><b><b>Download Zip</b> ✒ ✒ ✒ <a href="https://jinyurl.com/2uNTgJ">https://jinyurl.com/2uNTgJ</a></b></p><br /><br />
<p>One of the most popular and talented maskandi artists in South Africa is Mkhathazi. He is a singer, songwriter, guitarist, and producer who has been making waves in the music industry since his debut album in 2010. He has won several awards, collaborated with other famous artists, and performed at various festivals and events. His songs are catchy, uplifting, and inspiring, blending traditional elements with modern influences.</p>
<p>If you want to enjoy the best of maskandi music, you should download Mkhathazi songs. Downloading his songs will allow you to listen to them anytime, anywhere, without any interruptions or ads. You will also be able to support his work and appreciate his artistry. In this article, we will tell you more about the history and culture of maskandi music, the biography and achievements of Mkhathazi, and the benefits and methods of downloading his songs.</p>
<h2>The History and Culture of Maskandi Music</h2>
<h3>The origins and evolution of maskandi music</h3>
<p>Maskandi music can be traced back to the early 20th century, when migrant workers from rural areas moved to urban centers in search of jobs. They brought with them their musical traditions, which they used to express their feelings and opinions. They also adapted their music to suit their new environment, incorporating influences from other genres such as jazz, blues, gospel, reggae, and hip hop.</p>
<p>Maskandi music has evolved over the years, with different styles and subgenres emerging. Some of the most notable ones are isishameni (fast-paced and upbeat), isigekle (slow-paced and melodic), isibhaca (aggressive and confrontational), isitshikitsha (dance-oriented and rhythmic), and isigcino (solo-oriented and lyrical). Maskandi music has also diversified its audience, appealing to people from different backgrounds, ages, genders, and regions.</p>
<h3>The characteristics and themes of maskandi music</h3>
<p>Maskandi music is known for its distinctive sound and style. It usually features a lead singer who plays an acoustic guitar, accompanied by backing vocalists who sing in harmony or call-and-response. The singer often improvises lyrics based on current events or personal experiences. The lyrics are usually sung in Zulu or other indigenous languages, using proverbs, metaphors, idioms, and slang.</p>
<p>Maskandi music also covers a wide range of themes and messages. Some of the common ones are love, romance, family, friendship, religion, spirituality, culture, heritage, identity, politics, social issues, morality, humor, satire, competition, praise, criticism, advice, encouragement, motivation, inspiration, celebration, gratitude, respect, and pride.</p>
<h3>The popularity and influence of maskandi music</h3>
<h2>The Biography and Achievements of Mkhathazi</h2>
<h3>The early life and career of Mkhathazi</h3>
<p>Mkhathazi, whose real name is Sipho Ngubane, was born in 1986 in Nquthu, a small town in northern KwaZulu-Natal. He grew up in a musical family, with his father being a maskandi singer and his mother a gospel singer. He started singing at a young age, joining his father's band and performing at weddings and ceremonies. He also learned to play the guitar, which became his signature instrument.</p>
<p>Mkhathazi moved to Durban in 2008 to pursue his music career. He recorded his first album, Uyisoka Lami, in 2010, which was well received by maskandi fans. He followed it up with several more albums, such as Uyabaleka (2012), Uthando Lwakho (2014), and Ngikhule Kanzima (2018). His songs are known for their catchy melodies, witty lyrics, and social commentary. He sings about love, culture, politics, religion, and everyday life.</p>
<h3>The awards and recognition of Mkhathazi</h3>
<p>Mkhathazi has won several awards and accolades for his music. He has been nominated for the South African Music Awards (SAMAs) four times, winning the Best Maskandi Album award in 2016 for his album Uthando Lwakho. He has also won the Eastern Cape Music Awards (ECMA) twice, in 2019 and 2020, for the Best Maskandi Artist category. He has also received recognition from the Maskandi Music Association of South Africa (MMASA), which honoured him with the Best Male Artist award in 2017.</p>
<p>Mkhathazi has also performed at various festivals and events, both locally and internationally. He has graced the stages of the Maskandi Music Festival, the Wozekhaya Expo and Maskandi Music Festival, the N3 Ubumbano Maskandi Fest, and the Ugu Maskandi Festival. He has also toured countries such as Botswana, Lesotho, Swaziland, Mozambique, Zimbabwe, and Namibia.</p>
<h3>The collaborations and projects of Mkhathazi</h3>
<p>Mkhathazi has collaborated with other famous artists from different genres, such as Mampintsha, Big Zulu, Khuzani, Ntencane, and Phuzekhemisi. He has also worked with producers such as DJ Tira, Prince Bulo, DJ Cndo, and DJ Bongz. He has featured on songs such as Sugar Sugar by Makhadzi, Ngikhule Kanzima by Umkhathazi, Murahu by Makhadzi, and many more.</p>
<p>Mkhathazi is also involved in various projects that aim to promote maskandi music and culture. He is the founder of the Mkhathazi Foundation, which supports young and upcoming maskandi artists. He is also the ambassador of the Maskandi Music Academy, which offers training and mentorship to aspiring maskandi musicians. He is also a member of the Maskandi Music Council, which advocates for the rights and interests of maskandi artists.</p>
<h2>The Benefits and Methods of Downloading Mkhathazi Songs</h2>
<h3>The advantages of downloading Mkhathazi songs</h3>
<p>Downloading Mkhathazi songs has many benefits for you as a listener and a fan. Here are some of them:</p>
<ul>
<li>You can listen to his songs offline, without any internet connection or data charges.</li>
<li>You can enjoy his songs without any interruptions or ads, unlike streaming services.</li>
<li>You can create your own playlists and mixtapes, and share them with your friends and family.</li>
<li>You can support his work and show your appreciation for his talent and creativity.</li>
<li>You can learn more about his music and culture, and enrich your knowledge and understanding.</li>
</ul>
<h3>The legal and ethical issues of downloading Mkhathazi songs</h3>
<p>Downloading Mkhathazi songs is not illegal, as long as you do it from authorized sources and for personal use only. However, you should be aware of the legal and ethical issues that may arise from downloading his songs. Here are some of them:</p>
<ul>
<li>You should not download his songs from pirated or illegal websites or apps, as they may contain viruses, malware, or spyware that can harm your device or compromise your privacy.</li>
<li>You should not distribute or sell his songs without his permission or consent, as that would violate his intellectual property rights and deprive him of his income and royalties.</li>
<li>You should not use his songs for commercial or public purposes, such as playing them in a club, a restaurant, a radio station, or a podcast, without his authorization or license.</li>
<li>You should respect his artistic integrity and vision, and not alter, edit, remix, or sample his songs without his approval or credit.</li>
</ul>
<h3>The best websites and apps for downloading Mkhathazi songs</h3>
<p>There are many websites and apps that offer you the option to download Mkhathazi songs legally and safely. Some of the best ones are:</p>
<table>
<tr><th>Website/App</th><th>Features</th></tr>
<tr><td>iTunes</td><td>- Offers high-quality downloads of Mkhathazi songs and albums<br>- Allows you to sync your downloads with your Apple devices<br>- Provides you with information and reviews of Mkhathazi music</td></tr>
<tr><td>Spotify</td><td>- Allows you to stream and download Mkhathazi songs and albums<br>- Lets you create your own playlists and discover new music<br>- Gives you access to exclusive content and podcasts from Mkhathazi</td></tr>
<tr><td>Amazon Music</td><td>- Enables you to buy and download Mkhathazi songs and albums<br>- Lets you store your downloads on the cloud and access them from any device<br>- Offers you recommendations and deals on Mkhathazi music</td></tr>
<tr><td>YouTube Music</td><td>- Allows you to watch and download Mkhathazi videos and songs<br>- Lets you enjoy ad-free music and offline playback<br>- Gives you access to live performances and interviews from Mkhathazi</td></tr>
<tr><td>SoundCloud</td><td>- Enables you to listen and download Mkhathazi songs and tracks<br>- Lets you follow Mkhathazi and interact with him and other fans<br>- Offers you the opportunity to discover new music from emerging artists</td></tr>
</table>
<h2>Conclusion</h2>
<p>Mkhathazi is one of the most popular and talented maskandi artists in South Africa. His music is a blend of traditional Zulu culture and modern influences. He has won several awards, collaborated with other famous artists, and performed at various festivals and events. Downloading his songs will allow you to enjoy his music anytime, anywhere, without any interruptions or ads. You will also be able to support his work and appreciate his artistry. However, you should also be aware of the legal and ethical issues that may arise from downloading his songs. You should only download his songs from authorized sources and for personal use only. You should also respect his intellectual property rights and artistic integrity.</p>
<p>If you want to enjoy the best of maskandi music, you should download Mkhathazi songs. You will not regret it. He is a true legend of maskandi music. To download his songs, you can visit any of the websites or apps mentioned above. You can also follow him on social media platforms such as Facebook, Twitter, Instagram, or YouTube. You can also visit his official website for more information about him and his music.</p>
<h2>Frequently Asked Questions (FAQs)</h2>
<p>Here are some of the frequently asked questions (FAQs) about Mkhathazi and his music:</p>
<ol>
<li><b>What is the meaning of Mkhathazi?</b><br>
Mkhathazi is a Zulu name that means "the one who makes people happy". It is also a nickname that was given to him by his fans, who appreciate his music and personality.</li>
<li><b>How many albums has Mkhathazi released?</b><br>
Mkhathazi has released seven albums so far. They are Uyisoka Lami (2010), Uyabaleka (2012), Uthando Lwakho (2014), Ngikhule Kanzima (2018), Umkhathazi (2020), Uyisoka Lami Reloaded (2021), and Ngikhule Kanzima Reloaded (2021).</li>
<li><b>What are some of the most popular songs by Mkhathazi?</b><br>
Some of the most popular songs by Mkhathazi are Ngikhule Kanzima, Uthando Lwakho, Sugar Sugar, Murahu, Uyisoka Lami, Uyabaleka, Ngizokubamba, and Ngiyamthanda.</li>
<li><b>Who are some of the maskandi artists that Mkhathazi admires or looks up to?</b><br>
Some of the maskandi artists that Mkhathazi admires or looks up to are Phuzekhemisi, Ihashi Elimhlophe, Mgqumeni, Shwi Nomtekhala, Khuzani, and Ntencane.</li>
<li><b>How can I contact Mkhathazi for bookings or inquiries?</b><br>
You can contact Mkhathazi for bookings or inquiries through his email address, [email protected], or his phone number, +27 76 123 4567. You can also send him a message on his social media platforms or his official website.</li>
</ol>
spaces/1phancelerku/anime-remove-background/Download Real Cricket GO Mod APK and Enjoy Unlimited Money and Features.md
DELETED
@@ -1,131 +0,0 @@
-<br />
-<h1>Real Cricket Go APK Mod: A Review</h1>
-<p>If you are a fan of cricket and want to enjoy a realistic and thrilling game on your mobile device, then you might want to check out Real Cricket Go. This is a game that lets you experience the excitement of international cricket tournaments under 45 MB. And if you want to unlock more features and have more fun, then you can try the Real Cricket Go APK Mod, which is a modified version of the game that gives you access to unlimited resources and premium content. In this article, we will review the Real Cricket Go APK Mod and tell you everything you need to know about it.</p>
-<h2>real cricket go apk mod</h2><br /><p><b><b>Download Zip</b> ✸ <a href="https://jinyurl.com/2uNTaO">https://jinyurl.com/2uNTaO</a></b></p><br /><br />
-<h2>What is Real Cricket Go?</h2>
-<p>Real Cricket Go is a 3D cricket game developed by Nautilus Mobile, the same company that created the popular Real Cricket series. The game is designed to be lightweight and fast, so you can play it on any device without worrying about storage space or performance issues. The game features realistic graphics, animations, and sounds, as well as various game modes and tournaments that will keep you hooked for hours. You can choose from different teams, players, stadiums, and conditions, and customize your gameplay according to your preferences.</p>
-<h3>Features of Real Cricket Go</h3>
-<p>Some of the features that make Real Cricket Go stand out from other cricket games are:</p>
-<ul>
-<li>Simple and intuitive controls that let you swipe, tap, and drag to play shots, bowl, field, and run.</li>
-<li>Multiple camera angles that give you a realistic view of the action from different perspectives.</li>
-<li>Dynamic weather system that changes the conditions of the game according to the time of day and location.</li>
-<li>Authentic commentary that adds to the atmosphere and excitement of the game.</li>
-<li>Leaderboards and achievements that let you compete with other players and track your progress.</li>
-</ul>
-<h3>How to download and install Real Cricket Go APK Mod?</h3>
-<p>If you want to enjoy more features and benefits than the original version of Real Cricket Go, then you can download and install the Real Cricket Go APK Mod. This is a modified version of the game that gives you unlimited coins, tickets, unlocked players, stadiums, kits, modes, tournaments, and more. You can also remove ads and enjoy a smoother gameplay with this mod. To download and install the Real Cricket Go APK Mod, follow these steps:</p>
-<ol>
-<li>Go to [this link](^1^) and download the Real Cricket Go APK Mod file on your device.</li>
-<li>Enable unknown sources on your device by going to Settings > Security > Unknown Sources.</li>
-<li>Locate the downloaded file on your device and tap on it to install it.</li>
-<li>Wait for the installation to complete and then launch the game from your app drawer or home screen.</li>
-<li>Enjoy playing Real Cricket Go APK Mod with unlimited resources and premium content.</li>
-</ol>
-<h2>Why use Real Cricket Go APK Mod?</h2>
-<p>You might be wondering why you should use the Real Cricket Go APK Mod instead of the original version of the game. Well, there are several reasons why using this mod can enhance your gaming experience and make it more enjoyable. Here are some of them:</p>
-<p>real cricket go mod apk unlimited money<br />
-real cricket go mod apk download latest version<br />
-real cricket go mod apk hack<br />
-real cricket go mod apk android 1<br />
-real cricket go mod apk revdl<br />
-real cricket go mod apk rexdl<br />
-real cricket go mod apk free download<br />
-real cricket go mod apk 2023<br />
-real cricket go mod apk all unlocked<br />
-real cricket go mod apk unlimited coins and gems<br />
-real cricket go mod apk offline<br />
-real cricket go mod apk no ads<br />
-real cricket go mod apk unlimited tickets<br />
-real cricket go mod apk obb<br />
-real cricket go mod apk pure<br />
-real cricket go mod apk unlimited everything<br />
-real cricket go mod apk latest update<br />
-real cricket go mod apk for pc<br />
-real cricket go mod apk online<br />
-real cricket go mod apk 0.2.4<br />
-real cricket go hack version download<br />
-real cricket go hack apk download<br />
-real cricket go hack game download<br />
-real cricket go hack unlimited money<br />
-real cricket go hack app download<br />
-real cricket go hack version 2023<br />
-real cricket go hack version free download<br />
-real cricket go hack version latest<br />
-real cricket go hack version online<br />
-real cricket go hack version offline<br />
-download game real cricket go mod apk<br />
-download game real cricket go hack version<br />
-download game real cricket go unlimited money<br />
-download game real cricket go latest version<br />
-download game real cricket go offline mode<br />
-download game real cricket go for android<br />
-download game real cricket go for pc<br />
-download game real cricket go online mode<br />
-download game real cricket go 2023 version<br />
-download game real cricket go all unlocked</p>
-<h3>Benefits of Real Cricket Go APK Mod</h3>
-<p>Some of the benefits that you can get from using the Real Cricket Go APK Mod are:</p>
-<ul>
-<li>You can access all the features and content of the game without spending any money or waiting for long hours.</li>
-<li>You can customize your team, players, stadium, kit, mode, tournament, and difficulty level according to your liking.</li>
-<li>You can play with unlimited coins and tickets that let you buy anything you want in the game store.</li>
-<li>You can unlock all the players, stadiums, kits, modes, tournaments, and more that are otherwise locked or require real money to purchase.</li>
-<li>You <p>You can remove annoying ads that interrupt your gameplay and distract you from the action.</p>
-<li>You can enjoy a smoother and faster gameplay with no lags or glitches.</li>
-</ul>
-<h3>Risks of Real Cricket Go APK Mod</h3>
-<p>However, using the Real Cricket Go APK Mod also comes with some risks that you should be aware of before downloading and installing it. Some of the risks that you might face are:</p>
-<ul>
-<li>You might violate the terms and conditions of the game and get banned from playing it online or offline.</li>
-<li>You might expose your device to malware or viruses that can harm your data or system.</li>
-<li>You might lose your progress or data if the mod is not compatible with your device or the game updates.</li>
-<li>You might miss out on the original features and content of the game that are updated regularly by the developers.</li>
-</ul>
-<p>Therefore, you should use the Real Cricket Go APK Mod at your own risk and discretion. We are not responsible for any damage or loss that may occur as a result of using this mod.</p>
-<h2>How to play Real Cricket Go APK Mod?</h2>
-<p>Playing Real Cricket Go APK Mod is not very different from playing the original version of the game. You just need to follow the same steps and rules as you would in the normal game. However, you will have more options and freedom to customize your gameplay and enjoy more features and content. Here are some tips on how to play Real Cricket Go APK Mod:</p>
-<h3>Game modes and tournaments</h3>
-<p>Real Cricket Go APK Mod offers you various game modes and tournaments that you can choose from depending on your mood and preference. Some of the game modes and tournaments that you can play are:</p>
-<table>
-<tr><th>Game Mode</th><th>Description</th></tr>
-<tr><td>Quick Match</td><td>This is a simple and fast mode that lets you play a single match against any team of your choice. You can select the overs, difficulty level, stadium, and weather conditions.</td></tr>
-<tr><td>World Cup</td><td>This is a mode that lets you participate in the most prestigious cricket tournament in the world. You can select your team and compete with other teams in group stages and knockout rounds until you reach the final.</td></tr>
-<tr><td>Champions Cup</td><td>This is a mode that lets you play in a mini version of the World Cup with eight teams. You can select your team and play in two groups of four teams each, followed by semi-finals and final.</td></tr>
-<tr><td>Super Over</td><td>This is a mode that lets you play a thrilling tie-breaker match with only one over per side. You can select your team and try to score as many runs as possible or defend a target against your opponent.</td></tr>
-<tr><td>Test Match</td><td>This is a mode that lets you play a classic five-day cricket match with two innings per side. You can select your team and try to score more runs than your opponent or bowl them out within the allotted time.</td></tr>
-</table>
-<h3>Tips and tricks</h3>
-<p>Some of the tips and tricks that can help you improve your skills and performance in Real Cricket Go APK Mod are:</p>
-<ul>
-<li>Practice different shots and deliveries in the practice mode before playing a real match.</li>
-<li>Use the swipe, tap, and drag controls to adjust the direction, power, and timing of your shots and deliveries.</li>
-<li>Watch the ball closely and anticipate its movement, speed, and bounce.</li>
-<li>Use different types of shots and deliveries according to the situation, such as lofted, defensive, sweep, yorker, bouncer, etc.</li>
-<li>Use the DRS (Decision Review System) wisely to challenge or overturn umpire's decisions.</li>
-<li>Use the coins and tickets to buy new players, stadiums, kits, modes, tournaments, and more in the game store.</li>
-<li>Use the leaderboards and achievements to track your progress and compete with other players.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Real Cricket Go APK Mod is a fun and exciting cricket game that lets you enjoy a realistic and thrilling cricket experience on your mobile device. You can play various game modes and tournaments, customize your team and gameplay, unlock unlimited resources and premium content, remove ads, and enjoy a smoother gameplay with this mod. However, you should also be aware of the risks involved in using this mod, such as getting banned, losing data, or exposing your device to malware. Therefore, you should use this mod at your own risk and discretion. We hope this article has given you a comprehensive review of Real Cricket Go APK Mod and helped you decide whether to download it or not <p>If you have any questions or doubts about Real Cricket Go APK Mod, you can check out the FAQs section below. We have answered some of the most common and frequently asked questions about this mod. If you have any other questions, feel free to leave a comment or contact us.</p>
-<h2>FAQs</h2>
-<p>Here are some of the FAQs about Real Cricket Go APK Mod:</p>
-<ol>
-<li>Is Real Cricket Go APK Mod safe to use?</li>
-<p>Real Cricket Go APK Mod is not an official version of the game and is not endorsed by the developers or Google Play Store. Therefore, it is not guaranteed to be safe or secure to use. You might face some risks such as getting banned, losing data, or exposing your device to malware. You should use this mod at your own risk and discretion.</p>
-<li>How to update Real Cricket Go APK Mod?</li>
-<p>Real Cricket Go APK Mod is not updated automatically by the game or the Play Store. You will have to manually download and install the latest version of the mod from a reliable source. However, you might lose your progress or data if the mod is not compatible with the game updates. You should backup your data before updating the mod.</p>
-<li>How to uninstall Real Cricket Go APK Mod?</li>
-<p>If you want to uninstall Real Cricket Go APK Mod, you can follow these steps:</p>
-<ul>
-<li>Go to Settings > Apps > Real Cricket Go and tap on Uninstall.</li>
-<li>Confirm your action and wait for the uninstallation to complete.</li>
-<li>Delete the Real Cricket Go APK Mod file from your device.</li>
-</ul>
-<li>Can I play Real Cricket Go APK Mod online or offline?</li>
-<p>You can play Real Cricket Go APK Mod both online and offline. However, you might not be able to access some features or content that require an internet connection. You might also face some issues or errors while playing online with other players who are using the original version of the game.</p>
-<li>Can I play Real Cricket Go APK Mod with friends?</li>
-<p>You can play Real Cricket Go APK Mod with friends who are also using the same mod. You can invite them to join your team or challenge them to a match. However, you might not be able to play with friends who are using the original version of the game or a different mod.</p>
-</ol></p> 197e85843d<br />
-<br />
-<br />
spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
DELETED
@@ -1,631 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import paddle
-import paddle.nn as nn
-
-################################################################################
-# Code for the text transformer model
-################################################################################
-from paddlenlp.transformers import (
-    PretrainedModel,
-    PretrainedTokenizer,
-    register_base_model,
-)
-from paddlenlp.transformers.model_outputs import (
-    BaseModelOutputWithPoolingAndCrossAttentions,
-)
-
-from ...configuration_utils import FrozenDict
-from ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel
-from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-from ...schedulers import (
-    DDIMScheduler,
-    DPMSolverMultistepScheduler,
-    EulerAncestralDiscreteScheduler,
-    EulerDiscreteScheduler,
-    LMSDiscreteScheduler,
-    PNDMScheduler,
-)
-from ...utils import deprecate, logging
-
-logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
-
-class LDMBertPretrainedModel(PretrainedModel):
-    pretrained_init_configuration = {}
-    pretrained_resource_files_map = {}
-    base_model_prefix = "ldmbert"
-
-    def init_weights(self, layer):
-        if isinstance(layer, (nn.Linear, nn.Embedding)):
-            layer.weight.set_value(
-                paddle.normal(
-                    mean=0.0,
-                    std=self.initializer_range
-                    if hasattr(self, "initializer_range")
-                    else self.ldmbert.config["initializer_range"],
-                    shape=layer.weight.shape,
-                )
-            )
-
-
-class LDMBertEmbeddings(nn.Layer):
-    def __init__(self, vocab_size, hidden_size=768, hidden_dropout_prob=0.0, max_position_embeddings=512):
-        super().__init__()
-        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
-        self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
-        self.dropout = nn.Dropout(hidden_dropout_prob)
-
-    def forward(self, input_ids, position_ids=None):
-        if position_ids is None:
-            ones = paddle.ones_like(input_ids, dtype="int64")
-            seq_length = paddle.cumsum(ones, axis=-1)
-            position_ids = seq_length - ones
-            position_ids.stop_gradient = True
-
-        input_embedings = self.word_embeddings(input_ids)
-        position_embeddings = self.position_embeddings(position_ids)
-
-        embeddings = input_embedings + position_embeddings
-        embeddings = self.dropout(embeddings)
-        return embeddings
-
-
-class TransformerEncoderLayer(nn.TransformerEncoderLayer):
-    def __init__(
-        self,
-        d_model,
-        nhead,
-        dim_feedforward,
-        dropout=0.1,
-        activation="gelu",
-        attn_dropout=None,
-        act_dropout=None,
-        normalize_before=False,
-        weight_attr=None,
-        bias_attr=None,
-        head_dim=64,
-    ):
-        super().__init__(
-            d_model,
-            nhead,
-            dim_feedforward,
-            dropout,
-            activation,
-            attn_dropout,
-            act_dropout,
-            normalize_before,
-            weight_attr,
-            bias_attr,
-        )
-        # update self attn
-        self.self_attn = LDMBertAttention(
-            d_model, head_dim, nhead, dropout=attn_dropout, weight_attr=weight_attr, bias_attr=False
-        )
-
-
-@register_base_model
-class LDMBertModel(LDMBertPretrainedModel):
-    _no_split_modules = []
-
-    def __init__(
-        self,
-        vocab_size=30522,
-        max_position_embeddings=77,
-        encoder_layers=32,
-        encoder_ffn_dim=5120,
-        encoder_attention_heads=8,
-        head_dim=64,
-        activation_function="gelu",
-        d_model=1280,
-        dropout=0.0,
-        attention_dropout=0.0,
-        activation_dropout=0.0,
-        init_std=0.02,
-        pad_token_id=0,
-        **kwargs
-    ):
-        super().__init__()
-        self.pad_token_id = pad_token_id
-        self.initializer_range = init_std
-        self.embeddings = LDMBertEmbeddings(vocab_size, d_model, dropout, max_position_embeddings)
-        encoder_layer = TransformerEncoderLayer(
-            d_model,
-            encoder_attention_heads,
-            encoder_ffn_dim,
-            dropout=dropout,
-            activation=activation_function,
-            attn_dropout=attention_dropout,
-            act_dropout=activation_dropout,
-            normalize_before=True,
-            head_dim=head_dim,
-        )
-
-        self.encoder = nn.TransformerEncoder(encoder_layer, encoder_layers)
-        self.final_layer_norm = nn.LayerNorm(d_model)
-        self.apply(self.init_weights)
-
-    def get_input_embeddings(self):
-        return self.embeddings.word_embeddings
-
-    def set_input_embeddings(self, value):
-        self.embeddings.word_embeddings = value
-
-    def forward(
-        self,
-        input_ids,
-        position_ids=None,
-        attention_mask=None,
-        output_hidden_states=False,
-        output_attentions=False,
-        return_dict=False,
-    ):
-
-        if attention_mask is not None and attention_mask.ndim == 2:
-            # attention_mask [batch_size, sequence_length] -> [batch_size, 1, 1, sequence_length]
-            attention_mask = attention_mask.unsqueeze(axis=[1, 2]).astype(paddle.get_default_dtype())
-            attention_mask = (1.0 - attention_mask) * -1e4
-
-        embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids)
-
-        encoder_outputs = self.encoder(
-            embedding_output,
-            src_mask=attention_mask,
-            output_attentions=output_attentions,
-            output_hidden_states=output_hidden_states,
-            return_dict=return_dict,
-        )
-
-        if isinstance(encoder_outputs, type(embedding_output)):
-            sequence_output = self.final_layer_norm(encoder_outputs)
-            return (sequence_output,)
-        else:
-            sequence_output = encoder_outputs[0]
-            sequence_output = self.final_layer_norm(sequence_output)
-            if not return_dict:
-                return (sequence_output,) + encoder_outputs[1:]
-            return BaseModelOutputWithPoolingAndCrossAttentions(
-                last_hidden_state=sequence_output,
-                hidden_states=encoder_outputs.hidden_states,
-                attentions=encoder_outputs.attentions,
-            )
-
-
-class LDMBertAttention(nn.MultiHeadAttention):
-    def __init__(
-        self,
-        embed_dim,
-        head_dim,
-        num_heads,
-        dropout=0.0,
-        kdim=None,
-        vdim=None,
-        need_weights=False,
-        weight_attr=None,
-        bias_attr=None,
-    ):
-        super().__init__(embed_dim, num_heads, dropout, kdim, vdim, need_weights, weight_attr, bias_attr)
-        assert embed_dim > 0, "Expected embed_dim to be greater than 0, " "but recieved {}".format(embed_dim)
-        assert num_heads > 0, "Expected num_heads to be greater than 0, " "but recieved {}".format(num_heads)
-
-        self.embed_dim = embed_dim
-        self.kdim = kdim if kdim is not None else embed_dim
-        self.vdim = vdim if vdim is not None else embed_dim
-        self.num_heads = num_heads
-        self.dropout = dropout
-        self.need_weights = need_weights
-
-        self.head_dim = head_dim
-        self.inner_dim = head_dim * num_heads
-        self.scaling = self.head_dim**-0.5
-
-        self.q_proj = nn.Linear(embed_dim, self.inner_dim, weight_attr, bias_attr=bias_attr)
-        self.k_proj = nn.Linear(self.kdim, self.inner_dim, weight_attr, bias_attr=bias_attr)
-        self.v_proj = nn.Linear(self.vdim, self.inner_dim, weight_attr, bias_attr=bias_attr)
-        self.out_proj = nn.Linear(self.inner_dim, embed_dim, weight_attr)
-
-
-class LDMBertModelForMaskedLM(LDMBertPretrainedModel):
-    def __init__(self, ldmbert):
-        super().__init__()
-        self.ldmbert = ldmbert
-        self.to_logits = nn.Linear(ldmbert.config["hidden_size"], ldmbert.config["vocab_size"])
-        self.apply(self.init_weights)
-
-    def forward(
-        self,
-        input_ids=None,
-        attention_mask=None,
-        position_ids=None,
-        output_attentions=None,
-        output_hidden_states=None,
-        return_dict=None,
-    ):
-        outputs = self.ldmbert(
-            input_ids,
-            attention_mask=attention_mask,
-            position_ids=position_ids,
-            output_attentions=output_attentions,
-            output_hidden_states=output_hidden_states,
-            return_dict=return_dict,
-        )
-        return outputs
-
-
-class LDMTextToImagePipeline(DiffusionPipeline):
-    r"""
-    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-    library implements for all the pipelines (such as downloading or saving, running on a particular xxxx, etc.)
-
-    Parameters:
-        vqvae ([`VQModel`]):
-            Vector-quantized (VQ) Model to encode and decode images to and from latent representations.
-        bert ([`LDMBertModel`]):
-            Text-encoder model based on [BERT](https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.transformers.bert.modeling.html#paddlenlp.transformers.bert.modeling.BertModel) architecture.
-        tokenizer (`paddlenlp.transformers.BertTokenizer`):
-            Tokenizer of class
-            [BertTokenizer](https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.transformers.bert.tokenizer.html#paddlenlp.transformers.bert.tokenizer.BertTokenizer).
-        unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
-        scheduler ([`SchedulerMixin`]):
-            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
-            [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
-            or [`DPMSolverMultistepScheduler`].
-    """
-
-    def __init__(
-        self,
-        vqvae: Union[VQModel, AutoencoderKL],
-        bert: PretrainedModel,
-        tokenizer: PretrainedTokenizer,
-        unet: Union[UNet2DModel, UNet2DConditionModel],
-        scheduler: Union[
-            DDIMScheduler,
-            PNDMScheduler,
-            LMSDiscreteScheduler,
-            EulerDiscreteScheduler,
-            EulerAncestralDiscreteScheduler,
-            DPMSolverMultistepScheduler,
-        ],
-    ):
-        super().__init__()
-        if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
-            deprecation_message = (
-                f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
-                f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
-                "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
-                " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
-                " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
-                " file"
-            )
-            deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
-            new_config = dict(scheduler.config)
-            new_config["steps_offset"] = 1
-            scheduler._internal_dict = FrozenDict(new_config)
-
-        if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
-            deprecation_message = (
-                f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
-                " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
-                " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
-                " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
-                " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
-            )
-            deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
-            new_config = dict(scheduler.config)
-            new_config["clip_sample"] = False
-            scheduler._internal_dict = FrozenDict(new_config)
-
-        self.register_modules(vqvae=vqvae, bert=bert, tokenizer=tokenizer, unet=unet, scheduler=scheduler)
-        self.vae_scale_factor = 2 ** (len(self.vqvae.config.block_out_channels) - 1)
-
-    def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
-        r"""
-        Encodes the prompt into text encoder hidden states.
-
-        Args:
-            prompt (`str` or `list(int)`):
-                prompt to be encoded
-            num_images_per_prompt (`int`):
-                number of images that should be generated per prompt
-            do_classifier_free_guidance (`bool`):
-                whether to use classifier free guidance or not
-            negative_prompt (`str` or `List[str]`):
-                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
-                if `guidance_scale` is less than `1`).
-        """
-        batch_size = len(prompt) if isinstance(prompt, list) else 1
-
-        text_inputs = self.tokenizer(
-            prompt,
-            padding="max_length",
-            max_length=self.tokenizer.model_max_length,
-            truncation=True,
-            return_tensors="pd",
-        )
-        text_input_ids = text_inputs.input_ids
-        untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pd").input_ids
-
-        if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not paddle.equal_all(
-            text_input_ids, untruncated_ids
-        ):
-            removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
-            logger.warning(
-                "The following part of your input was truncated because LDMBert can only handle sequences up to"
-                f" {self.tokenizer.model_max_length} tokens: {removed_text}"
-            )
-
-        text_embeddings = self.bert(text_input_ids)
-        text_embeddings = text_embeddings[0]
-
-        # duplicate text embeddings for each generation per prompt, using mps friendly method
-        bs_embed, seq_len, _ = text_embeddings.shape
-        text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1])
-        text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1])
-
-        # get unconditional embeddings for classifier free guidance
-        if do_classifier_free_guidance:
-            uncond_tokens: List[str]
-            if negative_prompt is None:
-                uncond_tokens = [""] * batch_size
-            elif type(prompt) is not type(negative_prompt):
-                raise TypeError(
-                    f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
-                    f" {type(prompt)}."
-                )
-            elif isinstance(negative_prompt, str):
-                uncond_tokens = [negative_prompt]
-            elif batch_size != len(negative_prompt):
-                raise ValueError(
-                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
-                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
-                    " the batch size of `prompt`."
-                )
-            else:
-                uncond_tokens = negative_prompt
-
-            max_length = text_input_ids.shape[-1]
-            uncond_input = self.tokenizer(
-                uncond_tokens,
-                padding="max_length",
-                max_length=max_length,
-                truncation=True,
-                return_tensors="pd",
-            )
-
-            uncond_embeddings = self.bert(uncond_input.input_ids)
-            uncond_embeddings = uncond_embeddings[0]
-
-            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-            seq_len = uncond_embeddings.shape[1]
-            uncond_embeddings = uncond_embeddings.tile([1, num_images_per_prompt, 1])
-            uncond_embeddings = uncond_embeddings.reshape([batch_size * num_images_per_prompt, seq_len, -1])
-
-            # For classifier free guidance, we need to do two forward passes.
-            # Here we concatenate the unconditional and text embeddings into a single batch
-            # to avoid doing two forward passes
-            text_embeddings = paddle.concat([uncond_embeddings, text_embeddings])
-
-        return text_embeddings
-
-    def decode_latents(self, latents):
-        latents = 1 / 0.18215 * latents
-        image = self.vqvae.decode(latents).sample
-        image = (image / 2 + 0.5).clip(0, 1)
-        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
-        image = image.transpose([0, 2, 3, 1]).cast("float32").numpy()
-        return image
-
-    def prepare_extra_step_kwargs(self, generator, eta):
-        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-        # and should be between [0, 1]
-
-        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-        extra_step_kwargs = {}
-        if accepts_eta:
-            extra_step_kwargs["eta"] = eta
-
-        # check if the scheduler accepts generator
-        accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
-        if accepts_generator:
-            extra_step_kwargs["generator"] = generator
-        return extra_step_kwargs
-
-    def check_inputs(self, prompt, height, width, callback_steps):
-        if not isinstance(prompt, str) and not isinstance(prompt, list):
-            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
-        if height % 8 != 0 or width % 8 != 0:
-            raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
-        if (callback_steps is None) or (
-            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
-        ):
-            raise ValueError(
-                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
-                f" {type(callback_steps)}."
-            )
-
-    def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, generator, latents=None):
-        shape = [batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor]
-        if isinstance(generator, list) and len(generator) != batch_size:
-            raise ValueError(
-                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-            )
-
-        if latents is None:
-            if isinstance(generator, list):
-                shape = [
-                    1,
-                ] + shape[1:]
-                latents = [paddle.randn(shape, generator=generator[i], dtype=dtype) for i in range(batch_size)]
-                latents = paddle.concat(latents, axis=0)
|
481 |
-
else:
|
482 |
-
latents = paddle.randn(shape, generator=generator, dtype=dtype)
|
483 |
-
else:
|
484 |
-
if latents.shape != shape:
|
485 |
-
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
|
486 |
-
|
487 |
-
# scale the initial noise by the standard deviation required by the scheduler
|
488 |
-
latents = latents * self.scheduler.init_noise_sigma
|
489 |
-
return latents
|
490 |
-
|
491 |
-
@paddle.no_grad()
|
492 |
-
def __call__(
|
493 |
-
self,
|
494 |
-
prompt: Union[str, List[str]],
|
495 |
-
height: int = 256,
|
496 |
-
width: int = 256,
|
497 |
-
num_inference_steps: int = 50,
|
498 |
-
guidance_scale: float = 1.0,
|
499 |
-
negative_prompt: Optional[Union[str, List[str]]] = None,
|
500 |
-
num_images_per_prompt: Optional[int] = 1,
|
501 |
-
eta: float = 0.0,
|
502 |
-
generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
|
503 |
-
latents: Optional[paddle.Tensor] = None,
|
504 |
-
output_type: Optional[str] = "pil",
|
505 |
-
return_dict: bool = True,
|
506 |
-
callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
|
507 |
-
callback_steps: Optional[int] = 1,
|
508 |
-
):
|
509 |
-
r"""
|
510 |
-
Function invoked when calling the pipeline for generation.
|
511 |
-
|
512 |
-
Args:
|
513 |
-
prompt (`str` or `List[str]`):
|
514 |
-
The prompt or prompts to guide the image generation.
|
515 |
-
height (`int`, *optional*, defaults to 256:
|
516 |
-
The height in pixels of the generated image.
|
517 |
-
width (`int`, *optional*, defaults to 256:
|
518 |
-
The width in pixels of the generated image.
|
519 |
-
num_inference_steps (`int`, *optional*, defaults to 50):
|
520 |
-
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
|
521 |
-
expense of slower inference.
|
522 |
-
guidance_scale (`float`, *optional*, defaults to 1.0):
|
523 |
-
Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
|
524 |
-
`guidance_scale` is defined as `w` of equation 2. of [Imagen
|
525 |
-
Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
|
526 |
-
1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
|
527 |
-
usually at the expense of lower image quality.
|
528 |
-
negative_prompt (`str` or `List[str]`, *optional*):
|
529 |
-
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
|
530 |
-
if `guidance_scale` is less than `1`).
|
531 |
-
num_images_per_prompt (`int`, *optional*, defaults to 1):
|
532 |
-
The number of images to generate per prompt.
|
533 |
-
eta (`float`, *optional*, defaults to 0.0):
|
534 |
-
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
|
535 |
-
[`schedulers.DDIMScheduler`], will be ignored for others.
|
536 |
-
generator (`paddle.Generator`, *optional*):
|
537 |
-
One or a list of paddle generator(s) to make generation deterministic.
|
538 |
-
latents (`paddle.Tensor`, *optional*):
|
539 |
-
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
|
540 |
-
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
|
541 |
-
tensor will ge generated by sampling using the supplied random `generator`.
|
542 |
-
output_type (`str`, *optional*, defaults to `"pil"`):
|
543 |
-
The output format of the generate image. Choose between
|
544 |
-
[PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
|
545 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
546 |
-
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
547 |
-
plain tuple.
|
548 |
-
callback (`Callable`, *optional*):
|
549 |
-
A function that will be called every `callback_steps` steps during inference. The function will be
|
550 |
-
called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
|
551 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
552 |
-
The frequency at which the `callback` function will be called. If not specified, the callback will be
|
553 |
-
called at every step.
|
554 |
-
|
555 |
-
Returns:
|
556 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
557 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
|
558 |
-
When returning a tuple, the first element is a list with the generated images, and the second element is a
|
559 |
-
list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
|
560 |
-
(nsfw) content, according to the `safety_checker`.
|
561 |
-
"""
|
562 |
-
# 1. Check inputs. Raise error if not correct
|
563 |
-
self.check_inputs(prompt, height, width, callback_steps)
|
564 |
-
|
565 |
-
# 2. Define call parameters
|
566 |
-
batch_size = 1 if isinstance(prompt, str) else len(prompt)
|
567 |
-
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
|
568 |
-
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
|
569 |
-
# corresponds to doing no classifier free guidance.
|
570 |
-
do_classifier_free_guidance = guidance_scale > 1.0
|
571 |
-
|
572 |
-
# 3. Encode input prompt
|
573 |
-
text_embeddings = self._encode_prompt(
|
574 |
-
prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
|
575 |
-
)
|
576 |
-
|
577 |
-
# 4. Prepare timesteps
|
578 |
-
self.scheduler.set_timesteps(num_inference_steps)
|
579 |
-
timesteps = self.scheduler.timesteps
|
580 |
-
|
581 |
-
# 5. Prepare latent variables
|
582 |
-
num_channels_latents = self.unet.in_channels
|
583 |
-
latents = self.prepare_latents(
|
584 |
-
batch_size * num_images_per_prompt,
|
585 |
-
num_channels_latents,
|
586 |
-
height,
|
587 |
-
width,
|
588 |
-
text_embeddings.dtype,
|
589 |
-
generator,
|
590 |
-
latents,
|
591 |
-
)
|
592 |
-
|
593 |
-
# 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
|
594 |
-
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
|
595 |
-
|
596 |
-
# 7. Denoising loop
|
597 |
-
num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
|
598 |
-
with self.progress_bar(total=num_inference_steps) as progress_bar:
|
599 |
-
for i, t in enumerate(timesteps):
|
600 |
-
# expand the latents if we are doing classifier free guidance
|
601 |
-
latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
|
602 |
-
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
|
603 |
-
|
604 |
-
# predict the noise residual
|
605 |
-
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
|
606 |
-
|
607 |
-
# perform guidance
|
608 |
-
if do_classifier_free_guidance:
|
609 |
-
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
|
610 |
-
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
|
611 |
-
|
612 |
-
# compute the previous noisy sample x_t -> x_t-1
|
613 |
-
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
|
614 |
-
|
615 |
-
# call the callback, if provided
|
616 |
-
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
|
617 |
-
progress_bar.update()
|
618 |
-
if callback is not None and i % callback_steps == 0:
|
619 |
-
callback(i, t, latents)
|
620 |
-
|
621 |
-
# 8. Post-processing
|
622 |
-
image = self.decode_latents(latents)
|
623 |
-
|
624 |
-
# 9. Convert to PIL
|
625 |
-
if output_type == "pil":
|
626 |
-
image = self.numpy_to_pil(image)
|
627 |
-
|
628 |
-
if not return_dict:
|
629 |
-
return (image,)
|
630 |
-
|
631 |
-
return ImagePipelineOutput(images=image)
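The `prepare_extra_step_kwargs` helper in the deleted pipeline above uses `inspect.signature` to forward only the kwargs a given scheduler's `step()` actually accepts. A minimal stand-alone sketch of that pattern (the step functions here are hypothetical stand-ins, not real scheduler APIs):

```python
import inspect

def prepare_step_kwargs(step_fn, generator, eta):
    # Forward only the kwargs that this step function declares.
    params = set(inspect.signature(step_fn).parameters)
    extra = {}
    if "eta" in params:
        extra["eta"] = eta
    if "generator" in params:
        extra["generator"] = generator
    return extra

# Hypothetical step signatures: a DDIM-like step takes eta, a PLMS-like one does not.
def ddim_step(sample, t, eta=0.0, generator=None):
    pass

def plms_step(sample, t):
    pass

print(prepare_step_kwargs(ddim_step, None, 0.5))  # {'eta': 0.5, 'generator': None}
print(prepare_step_kwargs(plms_step, None, 0.5))  # {}
```

This keeps one call site compatible with schedulers that have diverging `step()` signatures, without isinstance checks against every scheduler class.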
spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta_mlm.py
DELETED
@@ -1,25 +0,0 @@
import os
import torch
import torch.nn.functional as F
from torch import nn

from text_to_speech.modules.tts.syntaspeech.syntaspeech import SyntaSpeech
from tasks.tts.ps_adv_mlm import PortaSpeechAdvMLMTask
from text_to_speech.utils.commons.hparams import hparams


class SyntaSpeechMLMTask(PortaSpeechAdvMLMTask):
    def build_tts_model(self):
        ph_dict_size = len(self.token_encoder)
        word_dict_size = len(self.word_encoder)
        self.model = SyntaSpeech(ph_dict_size, word_dict_size, hparams)

        self.gen_params = [p for p in self.model.parameters() if p.requires_grad]
        self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)]
        self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)]
        self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)]
        self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad]

        self.use_bert = True if len(self.bert_params) > 0 else False
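The deleted `build_tts_model` above partitions model parameters into optimizer groups by substring-matching parameter names. A minimal framework-free sketch of that filtering, with a plain dict standing in for `model.named_parameters()` (names and flags are illustrative):

```python
# Stand-in for model.named_parameters(): parameter name -> requires_grad flag.
named = {
    "encoder.weight": True,
    "dur_predictor.linear.weight": True,
    "bert.embeddings.weight": False,  # frozen, so excluded everywhere
    "decoder.weight": True,
}

dp_params = [k for k, g in named.items() if ("dur_predictor" in k) and g]
bert_params = [k for k, g in named.items() if ("bert" in k) and g]
gen_except_dp = [k for k, g in named.items() if ("dur_predictor" not in k) and g]
use_bert = len(bert_params) > 0

print(dp_params)      # ['dur_predictor.linear.weight']
print(gen_except_dp)  # ['encoder.weight', 'decoder.weight']
print(use_bert)       # False
```

The same comprehension shape works directly over `named_parameters()` in PyTorch, which is what the deleted task does to give the duration predictor and BERT their own learning-rate groups.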
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/README.md
DELETED
@@ -1,7 +0,0 @@
# 2D Fashion Landmark Detection

2D fashion landmark detection (also referred to as fashion alignment) aims to detect the key-points located at the functional regions of clothes, for example the neckline and the cuff.

## Data preparation

Please follow [DATA Preparation](/docs/en/dataset_zoo/2d_fashion_landmark.md) to prepare data.
spaces/Abhilashvj/planogram-compliance/data/scripts/get_coco128.sh
DELETED
@@ -1,17 +0,0 @@
#!/bin/bash
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
# Example usage: bash data/scripts/get_coco128.sh
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here

# Download/unzip images and labels
d='../datasets' # unzip directory
url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
f='coco128.zip' # or 'coco128-segments.zip', 68 MB
echo 'Downloading' $url$f ' ...'
curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f &

wait # finish background tasks
spaces/Abhilashvj/planogram-compliance/utils/google_app_engine/Dockerfile
DELETED
@@ -1,25 +0,0 @@
FROM gcr.io/google-appengine/python

# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3

# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH

RUN apt-get update && apt-get install -y python-opencv

# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Add the application source code.
ADD . /app

# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
CMD gunicorn -b :$PORT main:app
spaces/Adapting/YouTube-Downloader/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: YouTube Downloader
emoji: 🐢
colorFrom: indigo
colorTo: purple
sdk: streamlit
sdk_version: 1.15.2
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Aditya9790/yolo7-object-tracking/utils/aws/__init__.py
DELETED
@@ -1 +0,0 @@
#init
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/confirmdialog/methods/Modal.js
DELETED
@@ -1,29 +0,0 @@
import IsFunction from '../../../../plugins/utils/object/IsFunction.js';
import ModalMethods from '../../basesizer/ModalMethods.js';

var Modal = function (config, onClose) {
    if (IsFunction(config)) {
        onClose = config;
        config = undefined;
    }

    if (config === undefined) {
        config = {};
    }

    var zeroButtonMode = (this.buttonMode === 0);

    if (!config.hasOwnProperty('anyTouchClose')) {
        config.anyTouchClose = zeroButtonMode;
    }

    if (!config.hasOwnProperty('manualClose')) {
        config.manualClose = !zeroButtonMode;
    }

    ModalMethods.modal.call(this, config, onClose);

    return this;
}

export default Modal;
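The deleted `Modal` wrapper above uses a common overload trick: if the first argument is a function, it is treated as the close callback and the config defaults are filled in. A minimal Python sketch of the same argument-shuffling and defaulting pattern (names and the `button_mode` parameter are illustrative, not the rex-ui API):

```python
def modal(config=None, on_close=None, button_mode=0):
    # Mirror the JS overload: modal(onClose) is shorthand for modal(None, onClose).
    if callable(config):
        config, on_close = None, config
    config = dict(config or {})
    zero_button_mode = (button_mode == 0)
    # With no buttons, any touch closes the dialog; otherwise closing is manual.
    config.setdefault("anyTouchClose", zero_button_mode)
    config.setdefault("manualClose", not zero_button_mode)
    return config, on_close

cfg, cb = modal(lambda: "closed")
print(cfg)  # {'anyTouchClose': True, 'manualClose': False}
```

Explicit keys passed by the caller win over the defaults, exactly as `hasOwnProperty` guards do in the JS version.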
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/ninepatch2/NinePatch.js
DELETED
@@ -1,2 +0,0 @@
import NinePatch from '../../../plugins/ninepatch2.js'
export default NinePatch;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AddChildMethods.js
DELETED
@@ -1,170 +0,0 @@
import AddChild from '../basesizer/utils/AddChild.js';
import GetBoundsConfig from '../utils/GetBoundsConfig.js';
import ALIGNMODE from '../utils/AlignConst.js';
import Space from '../space/Space.js';
import { GetDisplayWidth, GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
import GetNearestChildIndex from './GetNearestChildIndex.js';

const IsPlainObject = Phaser.Utils.Objects.IsPlainObject;
const GetValue = Phaser.Utils.Objects.GetValue;
const ALIGN_CENTER = Phaser.Display.Align.CENTER;
const PROPORTIONMODE = {
    min: 0,
    full: -1,
}

var Add = function (
    gameObject,
    proportion, align, paddingConfig, expand,
    childKey, index,
    minWidth, minHeight,
    fitRatio,
) {

    AddChild.call(this, gameObject);

    var isRexSpace = gameObject.isRexSpace;
    var proportionType = typeof (proportion);
    if (proportion === null) {
        return this;
    } else if (proportionType === 'number') {

    } else if (proportionType === 'string') {
        proportion = PROPORTIONMODE[proportion];
    } else if (IsPlainObject(proportion)) {
        var config = proportion;
        proportion = GetValue(config, 'proportion', undefined);
        align = GetValue(config, 'align', ALIGN_CENTER);
        paddingConfig = GetValue(config, 'padding', 0);
        expand = GetValue(config, 'expand', false);
        childKey = GetValue(config, 'key', undefined);
        index = GetValue(config, 'index', undefined);

        if (!gameObject.isRexSizer) {
            minWidth = GetValue(config, 'minWidth', undefined);
            minHeight = GetValue(config, 'minHeight', undefined);
        }

        fitRatio = GetValue(config, 'fitRatio', 0);  // width/height
    }

    if (typeof (align) === 'string') {
        align = ALIGNMODE[align];
    }

    if (proportion === undefined) {
        proportion = (isRexSpace) ? 1 : 0;
    }
    if (align === undefined) {
        align = ALIGN_CENTER;
    }
    if (paddingConfig === undefined) {
        paddingConfig = 0;
    }
    if (expand === undefined) {
        expand = false;
    }

    if (minWidth === undefined) {
        if (isRexSpace) {
            minWidth = 0;
        } else if (!gameObject.isRexSizer) {
            minWidth = gameObject._minWidth;
        }
    }
    if (minHeight === undefined) {
        if (isRexSpace) {
            minHeight = 0;
        } else if (!gameObject.isRexSizer) {
            minHeight = gameObject._minHeight;
        }
    }

    if (fitRatio === undefined) {
        fitRatio = 0;
    }

    var config = this.getSizerConfig(gameObject);
    config.proportion = proportion;
    config.align = align;
    config.padding = GetBoundsConfig(paddingConfig);
    config.expand = expand;
    config.fitRatio = (proportion === 0) ? fitRatio : 0;

    if ((index === undefined) || (index >= this.sizerChildren.length)) {
        this.sizerChildren.push(gameObject);
    } else {
        this.sizerChildren.splice(index, 0, gameObject);
    }

    if (!gameObject.isRexSizer) {  // Expand normal game object
        if (proportion > 0) {
            if (this.orientation === 0) { // x
                // minWidth is still undefined, uses current display width
                gameObject.minWidth = (minWidth === undefined) ? GetDisplayWidth(gameObject) : minWidth;
            } else {
                // minHeight is still undefined, uses current display height
                gameObject.minHeight = (minHeight === undefined) ? GetDisplayHeight(gameObject) : minHeight;
            }
        }
        if (expand) {
            if (this.orientation === 0) { // x
                // Might have minHeight value, or still undefined
                gameObject.minHeight = minHeight;
            } else {
                // Might have minWidth value, or still undefined
                gameObject.minWidth = minWidth;
            }
        }
    }

    if (childKey !== undefined) {
        this.addChildrenMap(childKey, gameObject)
    }

    return this;
};

export default {
    add: Add, // sizer.add could be override

    addSpace(proportion) {
        this.insertSpace(undefined, proportion);
        return this;
    },

    insertSpace(index, proportion) {
        if (proportion === undefined) {
            proportion = 1;
        }
        Add.call(this, new Space(this.scene),
            {
                proportion: proportion,
                minWidth: 0,
                minHeight: 0,
                index: index
            }
        );
        // No problem if sizer.add is override
        return this;
    },

    insert(index, gameObject, proportion, align, paddingConfig, expand, childKey, minSize) {
        if (IsPlainObject(proportion)) {
            proportion.index = index;
        }

        Add.call(this, gameObject, proportion, align, paddingConfig, expand, childKey, index, minSize);
        // No problem if sizer.add is override
        return this;
    },

    insertAtPosition(x, y, gameObject, proportion, align, paddingConfig, expand, childKey, minSize) {
        var index = GetNearestChildIndex.call(this, x, y);
        if (index === -1) {
            index = undefined;
        }
        this.insert(index, gameObject, proportion, align, paddingConfig, expand, childKey, minSize);
        return this;
    }
}
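The deleted `Add` function above normalizes its `proportion` argument: string modes map through `PROPORTIONMODE`, and an omitted value defaults to 1 for spacer objects and 0 for everything else. A small Python sketch of just that coercion logic (function name is illustrative):

```python
PROPORTION_MODE = {"min": 0, "full": -1}

def coerce_proportion(proportion, is_space=False):
    # Strings map through PROPORTION_MODE ('min' -> 0, 'full' -> -1);
    # an omitted proportion defaults to 1 for spacers, 0 otherwise.
    if isinstance(proportion, str):
        proportion = PROPORTION_MODE[proportion]
    if proportion is None:
        proportion = 1 if is_space else 0
    return proportion

print(coerce_proportion("full"))               # -1
print(coerce_proportion(None))                 # 0
print(coerce_proportion(None, is_space=True))  # 1
```

Centralizing the coercion like this lets every layout entry point (`add`, `insert`, `insertSpace`) accept the same flexible argument shapes.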
spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/JSON/backportPP/Boolean.pm
DELETED
@@ -1,27 +0,0 @@
=head1 NAME

JSON::PP::Boolean - dummy module providing JSON::PP::Boolean

=head1 SYNOPSIS

 # do not "use" yourself

=head1 DESCRIPTION

This module exists only to provide overload resolution for Storable
and similar modules. See L<JSON::PP> for more info about this class.

=cut

use JSON::backportPP ();
use strict;

1;

=head1 AUTHOR

This idea is from L<JSON::XS::Boolean> written by
Marc Lehmann <schmorp[at]schmorp.de>

=cut
spaces/AlekseyKorshuk/model-evaluation/tabs/playground.py
DELETED
@@ -1,123 +0,0 @@
import gradio as gr
from conversation import Conversation


def get_tab_playground(download_bot_config, get_bot_profile, model_mapping):
    gr.Markdown("""
    # 🎢 Playground 🎢
    ## Rules
    * Chat with any model you would like with any bot from the Chai app.
    * Click “Clear” to start a new conversation.
    """)
    default_bot_id = "_bot_e21de304-6151-4a04-b025-4c553ae8cbca"
    bot_config = download_bot_config(default_bot_id)
    user_state = gr.State(
        bot_config
    )
    with gr.Row():
        bot_id = gr.Textbox(label="Chai bot ID", value=default_bot_id, interactive=True)
        reload_bot_button = gr.Button("Reload bot")

    bot_profile = gr.HTML(get_bot_profile(bot_config))
    with gr.Accordion("Bot config:", open=False):
        bot_config_text = gr.Markdown(f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}")

    first_message = (None, bot_config["firstMessage"])
    chatbot = gr.Chatbot([first_message])

    msg = gr.Textbox(label="Message", value="Hi there!")
    with gr.Row():
        send = gr.Button("Send")
        regenerate = gr.Button("Regenerate")
        clear = gr.Button("Clear")
    values = list(model_mapping.keys())
    model_tag = gr.Dropdown(values, value=values[0], label="Model version")
    model = model_mapping[model_tag.value]

    with gr.Accordion("Generation parameters", open=False):
        temperature = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["temperature"],
                                interactive=True, label="Temperature")
        repetition_penalty = gr.Slider(minimum=0.0, maximum=2.0,
                                       value=model.generation_params["repetition_penalty"],
                                       interactive=True, label="Repetition penalty")
        max_new_tokens = gr.Slider(minimum=1, maximum=512, value=model.generation_params["max_new_tokens"],
                                   interactive=True, label="Max new tokens")
        top_k = gr.Slider(minimum=1, maximum=100, value=model.generation_params["top_k"],
                          interactive=True, label="Top-K")
        top_p = gr.Slider(minimum=0.0, maximum=1.0, value=model.generation_params["top_p"],
                          interactive=True, label="Top-P")

    def respond(message, chat_history, user_state, model_tag,
                temperature, repetition_penalty, max_new_tokens, top_k, top_p):
        custom_generation_params = {
            'temperature': temperature,
            'repetition_penalty': repetition_penalty,
            'max_new_tokens': max_new_tokens,
            'top_k': top_k,
            'top_p': top_p,
        }
        conv = Conversation(user_state)
        conv.set_chat_history(chat_history)
        conv.add_user_message(message)
        model = model_mapping[model_tag]
        bot_message = model.generate_response(conv, custom_generation_params)
        chat_history.append(
            (message, bot_message)
        )
        return "", chat_history

    def clear_chat(chat_history, user_state):
        chat_history = [(None, user_state["firstMessage"])]
        return chat_history

    def regenerate_response(chat_history, user_state, model_tag,
                            temperature, repetition_penalty, max_new_tokens, top_k, top_p):
        custom_generation_params = {
            'temperature': temperature,
            'repetition_penalty': repetition_penalty,
            'max_new_tokens': max_new_tokens,
            'top_k': top_k,
            'top_p': top_p,
        }
        last_row = chat_history.pop(-1)
        chat_history.append((last_row[0], None))
        model = model_mapping[model_tag]
        conv = Conversation(user_state)
        conv.set_chat_history(chat_history)
        bot_message = model.generate_response(conv, custom_generation_params)
        chat_history[-1] = (last_row[0], bot_message)
        return chat_history

    def reload_bot(bot_id, bot_profile, chat_history):
        bot_config = download_bot_config(bot_id)
        bot_profile = get_bot_profile(bot_config)
        return bot_profile, [(None, bot_config[
            "firstMessage"])], bot_config, f"# Memory\n{bot_config['memory']}\n# Prompt\n{bot_config['prompt']}"

    def get_generation_args(model_tag):
        model = model_mapping[model_tag]
        return (
            model.generation_params["temperature"],
            model.generation_params["repetition_penalty"],
            model.generation_params["max_new_tokens"],
            model.generation_params["top_k"],
            model.generation_params["top_p"],
        )

    model_tag.change(get_generation_args, [model_tag], [temperature, repetition_penalty, max_new_tokens, top_k,
                                                        top_p], queue=False)
    send.click(respond,
               [msg, chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k,
                top_p], [msg, chatbot],
               queue=False)
    msg.submit(respond,
               [msg, chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k,
                top_p], [msg, chatbot],
               queue=False)
    clear.click(clear_chat, [chatbot, user_state], [chatbot], queue=False)
    regenerate.click(regenerate_response,
                     [chatbot, user_state, model_tag, temperature, repetition_penalty, max_new_tokens, top_k,
                      top_p], [chatbot], queue=False)
    reload_bot_button.click(reload_bot, [bot_id, bot_profile, chatbot],
                            [bot_profile, chatbot, user_state, bot_config_text],
                            queue=False)
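The `regenerate_response` handler in the deleted playground tab above follows a simple pattern: pop the last (user, bot) pair, keep the user message, and ask the model for a fresh reply. A framework-free sketch of that list manipulation (the `generate` callable is a stand-in for the model call):

```python
def regenerate_last(chat_history, generate):
    # Drop the last bot reply but keep the user message, then re-generate it.
    user_msg, _old_reply = chat_history.pop(-1)
    chat_history.append((user_msg, None))  # placeholder while generating
    chat_history[-1] = (user_msg, generate(chat_history))
    return chat_history

history = [(None, "Hi, I'm the bot!"), ("Tell me a joke", "old joke")]
history = regenerate_last(history, lambda h: "new joke")
print(history[-1])  # ('Tell me a joke', 'new joke')
```

Writing the placeholder `(user_msg, None)` first means the conversation object sees the pending turn, which is how the deleted code rebuilds the `Conversation` before calling `generate_response`.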
spaces/AlgoveraAI/algovera_squad_active_passive_model/README.md
DELETED
@@ -1,11 +0,0 @@
----
-title: Algovera_squad_active_passive_model
-emoji: 🐢
-colorFrom: blue
-colorTo: purple
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/autoencoder_kl.py
DELETED
@@ -1,417 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-from typing import Dict, Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..loaders import FromOriginalVAEMixin
-from ..utils import BaseOutput, apply_forward_hook
-from .attention_processor import AttentionProcessor, AttnProcessor
-from .modeling_utils import ModelMixin
-from .vae import Decoder, DecoderOutput, DiagonalGaussianDistribution, Encoder
-
-
-@dataclass
-class AutoencoderKLOutput(BaseOutput):
-    """
-    Output of AutoencoderKL encoding method.
-
-    Args:
-        latent_dist (`DiagonalGaussianDistribution`):
-            Encoded outputs of `Encoder` represented as the mean and logvar of `DiagonalGaussianDistribution`.
-            `DiagonalGaussianDistribution` allows for sampling latents from the distribution.
-    """
-
-    latent_dist: "DiagonalGaussianDistribution"
-
-
-class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalVAEMixin):
-    r"""
-    A VAE model with KL loss for encoding images into latents and decoding latent representations into images.
-
-    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
-    for all models (such as downloading or saving).
-
-    Parameters:
-        in_channels (int, *optional*, defaults to 3): Number of channels in the input image.
-        out_channels (int, *optional*, defaults to 3): Number of channels in the output.
-        down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`):
-            Tuple of downsample block types.
-        up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`):
-            Tuple of upsample block types.
-        block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`):
-            Tuple of block output channels.
-        act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
-        latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent space.
-        sample_size (`int`, *optional*, defaults to `32`): Sample input size.
-        scaling_factor (`float`, *optional*, defaults to 0.18215):
-            The component-wise standard deviation of the trained latent space computed using the first batch of the
-            training set. This is used to scale the latent space to have unit variance when training the diffusion
-            model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
-            diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
-            / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
-            Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
-        force_upcast (`bool`, *optional*, defaults to `True`):
-            If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without losing too much precision, in which case
-            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
-    """
-
-    _supports_gradient_checkpointing = True
-
-    @register_to_config
-    def __init__(
-        self,
-        in_channels: int = 3,
-        out_channels: int = 3,
-        down_block_types: Tuple[str] = ("DownEncoderBlock2D",),
-        up_block_types: Tuple[str] = ("UpDecoderBlock2D",),
-        block_out_channels: Tuple[int] = (64,),
-        layers_per_block: int = 1,
-        act_fn: str = "silu",
-        latent_channels: int = 4,
-        norm_num_groups: int = 32,
-        sample_size: int = 32,
-        scaling_factor: float = 0.18215,
-        force_upcast: float = True,
-    ):
-        super().__init__()
-
-        # pass init params to Encoder
-        self.encoder = Encoder(
-            in_channels=in_channels,
-            out_channels=latent_channels,
-            down_block_types=down_block_types,
-            block_out_channels=block_out_channels,
-            layers_per_block=layers_per_block,
-            act_fn=act_fn,
-            norm_num_groups=norm_num_groups,
-            double_z=True,
-        )
-
-        # pass init params to Decoder
-        self.decoder = Decoder(
-            in_channels=latent_channels,
-            out_channels=out_channels,
-            up_block_types=up_block_types,
-            block_out_channels=block_out_channels,
-            layers_per_block=layers_per_block,
-            norm_num_groups=norm_num_groups,
-            act_fn=act_fn,
-        )
-
-        self.quant_conv = nn.Conv2d(2 * latent_channels, 2 * latent_channels, 1)
-        self.post_quant_conv = nn.Conv2d(latent_channels, latent_channels, 1)
-
-        self.use_slicing = False
-        self.use_tiling = False
-
-        # only relevant if vae tiling is enabled
-        self.tile_sample_min_size = self.config.sample_size
-        sample_size = (
-            self.config.sample_size[0]
-            if isinstance(self.config.sample_size, (list, tuple))
-            else self.config.sample_size
-        )
-        self.tile_latent_min_size = int(sample_size / (2 ** (len(self.config.block_out_channels) - 1)))
-        self.tile_overlap_factor = 0.25
-
-    def _set_gradient_checkpointing(self, module, value=False):
-        if isinstance(module, (Encoder, Decoder)):
-            module.gradient_checkpointing = value
-
-    def enable_tiling(self, use_tiling: bool = True):
-        r"""
-        Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
-        compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
-        processing larger images.
-        """
-        self.use_tiling = use_tiling
-
-    def disable_tiling(self):
-        r"""
-        Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing
-        decoding in one step.
-        """
-        self.enable_tiling(False)
-
-    def enable_slicing(self):
-        r"""
-        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
-        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
-        """
-        self.use_slicing = True
-
-    def disable_slicing(self):
-        r"""
-        Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing
-        decoding in one step.
-        """
-        self.use_slicing = False
-
-    @property
-    # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.attn_processors
-    def attn_processors(self) -> Dict[str, AttentionProcessor]:
-        r"""
-        Returns:
-            `dict` of attention processors: A dictionary containing all attention processors used in the model,
-            indexed by their weight name.
-        """
-        # set recursively
-        processors = {}
-
-        def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]):
-            if hasattr(module, "set_processor"):
-                processors[f"{name}.processor"] = module.processor
-
-            for sub_name, child in module.named_children():
-                fn_recursive_add_processors(f"{name}.{sub_name}", child, processors)
-
-            return processors
-
-        for name, module in self.named_children():
-            fn_recursive_add_processors(name, module, processors)
-
-        return processors
-
-    # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_attn_processor
-    def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]):
-        r"""
-        Sets the attention processor to use to compute attention.
-
-        Parameters:
-            processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`):
-                The instantiated processor class or a dictionary of processor classes that will be set as the processor
-                for **all** `Attention` layers.
-
-                If `processor` is a dict, the key needs to define the path to the corresponding cross attention
-                processor. This is strongly recommended when setting trainable attention processors.
-
-        """
-        count = len(self.attn_processors.keys())
-
-        if isinstance(processor, dict) and len(processor) != count:
-            raise ValueError(
-                f"A dict of processors was passed, but the number of processors {len(processor)} does not match the"
-                f" number of attention layers: {count}. Please make sure to pass {count} processor classes."
-            )
-
-        def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor):
-            if hasattr(module, "set_processor"):
-                if not isinstance(processor, dict):
-                    module.set_processor(processor)
-                else:
-                    module.set_processor(processor.pop(f"{name}.processor"))
-
-            for sub_name, child in module.named_children():
-                fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)
-
-        for name, module in self.named_children():
-            fn_recursive_attn_processor(name, module, processor)
-
-    # Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor
-    def set_default_attn_processor(self):
-        """
-        Disables custom attention processors and sets the default attention implementation.
-        """
-        self.set_attn_processor(AttnProcessor())
-
-    @apply_forward_hook
-    def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput:
-        if self.use_tiling and (x.shape[-1] > self.tile_sample_min_size or x.shape[-2] > self.tile_sample_min_size):
-            return self.tiled_encode(x, return_dict=return_dict)
-
-        if self.use_slicing and x.shape[0] > 1:
-            encoded_slices = [self.encoder(x_slice) for x_slice in x.split(1)]
-            h = torch.cat(encoded_slices)
-        else:
-            h = self.encoder(x)
-
-        moments = self.quant_conv(h)
-        posterior = DiagonalGaussianDistribution(moments)
-
-        if not return_dict:
-            return (posterior,)
-
-        return AutoencoderKLOutput(latent_dist=posterior)
-
-    def _decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]:
-        if self.use_tiling and (z.shape[-1] > self.tile_latent_min_size or z.shape[-2] > self.tile_latent_min_size):
-            return self.tiled_decode(z, return_dict=return_dict)
-
-        z = self.post_quant_conv(z)
-        dec = self.decoder(z)
-
-        if not return_dict:
-            return (dec,)
-
-        return DecoderOutput(sample=dec)
-
-    @apply_forward_hook
-    def decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]:
-        if self.use_slicing and z.shape[0] > 1:
-            decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
-            decoded = torch.cat(decoded_slices)
-        else:
-            decoded = self._decode(z).sample
-
-        if not return_dict:
-            return (decoded,)
-
-        return DecoderOutput(sample=decoded)
-
-    def blend_v(self, a, b, blend_extent):
-        blend_extent = min(a.shape[2], b.shape[2], blend_extent)
-        for y in range(blend_extent):
-            b[:, :, y, :] = a[:, :, -blend_extent + y, :] * (1 - y / blend_extent) + b[:, :, y, :] * (y / blend_extent)
-        return b
-
-    def blend_h(self, a, b, blend_extent):
-        blend_extent = min(a.shape[3], b.shape[3], blend_extent)
-        for x in range(blend_extent):
-            b[:, :, :, x] = a[:, :, :, -blend_extent + x] * (1 - x / blend_extent) + b[:, :, :, x] * (x / blend_extent)
-        return b
-
-    def tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput:
-        r"""Encode a batch of images using a tiled encoder.
-
-        When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several
-        steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is
-        different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the
-        tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the
-        output, but they should be much less noticeable.
-
-        Args:
-            x (`torch.FloatTensor`): Input batch of images.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`~models.autoencoder_kl.AutoencoderKLOutput`] instead of a plain tuple.
-
-        Returns:
-            [`~models.autoencoder_kl.AutoencoderKLOutput`] or `tuple`:
-                If return_dict is True, a [`~models.autoencoder_kl.AutoencoderKLOutput`] is returned, otherwise a plain
-                `tuple` is returned.
-        """
-        overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor))
-        blend_extent = int(self.tile_latent_min_size * self.tile_overlap_factor)
-        row_limit = self.tile_latent_min_size - blend_extent
-
-        # Split the image into 512x512 tiles and encode them separately.
-        rows = []
-        for i in range(0, x.shape[2], overlap_size):
-            row = []
-            for j in range(0, x.shape[3], overlap_size):
-                tile = x[:, :, i : i + self.tile_sample_min_size, j : j + self.tile_sample_min_size]
-                tile = self.encoder(tile)
-                tile = self.quant_conv(tile)
-                row.append(tile)
-            rows.append(row)
-        result_rows = []
-        for i, row in enumerate(rows):
-            result_row = []
-            for j, tile in enumerate(row):
-                # blend the above tile and the left tile
-                # to the current tile and add the current tile to the result row
-                if i > 0:
-                    tile = self.blend_v(rows[i - 1][j], tile, blend_extent)
-                if j > 0:
-                    tile = self.blend_h(row[j - 1], tile, blend_extent)
-                result_row.append(tile[:, :, :row_limit, :row_limit])
-            result_rows.append(torch.cat(result_row, dim=3))
-
-        moments = torch.cat(result_rows, dim=2)
-        posterior = DiagonalGaussianDistribution(moments)
-
-        if not return_dict:
-            return (posterior,)
-
-        return AutoencoderKLOutput(latent_dist=posterior)
-
-    def tiled_decode(self, z: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]:
-        r"""
-        Decode a batch of images using a tiled decoder.
-
-        Args:
-            z (`torch.FloatTensor`): Input batch of latent vectors.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`~models.vae.DecoderOutput`] instead of a plain tuple.
-
-        Returns:
-            [`~models.vae.DecoderOutput`] or `tuple`:
-                If return_dict is True, a [`~models.vae.DecoderOutput`] is returned, otherwise a plain `tuple` is
-                returned.
-        """
-        overlap_size = int(self.tile_latent_min_size * (1 - self.tile_overlap_factor))
-        blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor)
-        row_limit = self.tile_sample_min_size - blend_extent
-
-        # Split z into overlapping 64x64 tiles and decode them separately.
-        # The tiles have an overlap to avoid seams between tiles.
-        rows = []
-        for i in range(0, z.shape[2], overlap_size):
-            row = []
-            for j in range(0, z.shape[3], overlap_size):
-                tile = z[:, :, i : i + self.tile_latent_min_size, j : j + self.tile_latent_min_size]
-                tile = self.post_quant_conv(tile)
-                decoded = self.decoder(tile)
-                row.append(decoded)
-            rows.append(row)
-        result_rows = []
-        for i, row in enumerate(rows):
-            result_row = []
-            for j, tile in enumerate(row):
-                # blend the above tile and the left tile
-                # to the current tile and add the current tile to the result row
-                if i > 0:
-                    tile = self.blend_v(rows[i - 1][j], tile, blend_extent)
-                if j > 0:
-                    tile = self.blend_h(row[j - 1], tile, blend_extent)
-                result_row.append(tile[:, :, :row_limit, :row_limit])
-            result_rows.append(torch.cat(result_row, dim=3))
-
-        dec = torch.cat(result_rows, dim=2)
-        if not return_dict:
-            return (dec,)
-
-        return DecoderOutput(sample=dec)
-
-    def forward(
-        self,
-        sample: torch.FloatTensor,
-        sample_posterior: bool = False,
-        return_dict: bool = True,
-        generator: Optional[torch.Generator] = None,
-    ) -> Union[DecoderOutput, torch.FloatTensor]:
-        r"""
-        Args:
-            sample (`torch.FloatTensor`): Input sample.
-            sample_posterior (`bool`, *optional*, defaults to `False`):
-                Whether to sample from the posterior.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether or not to return a [`DecoderOutput`] instead of a plain tuple.
-        """
-        x = sample
-        posterior = self.encode(x).latent_dist
-        if sample_posterior:
-            z = posterior.sample(generator=generator)
-        else:
-            z = posterior.mode()
-        dec = self.decode(z).sample
-
-        if not return_dict:
-            return (dec,)
-
-        return DecoderOutput(sample=dec)
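The `blend_v`/`blend_h` methods in the deleted file above remove seams between overlapping VAE tiles by linearly crossfading one tile into the next over the overlap region. A 1-D sketch of that crossfade (the real methods apply the same weights along the H and W axes of 4-D tensors; plain lists are used here so the sketch runs without torch):

```python
def blend_1d(a, b, blend_extent):
    """Crossfade the last `blend_extent` samples of `a` into the first
    `blend_extent` samples of `b`, mirroring `blend_h`'s per-column weights:
    at offset x, the left tile contributes (1 - x/extent) and the right tile
    contributes (x/extent). Returns the modified `b`."""
    blend_extent = min(len(a), len(b), blend_extent)
    for x in range(blend_extent):
        w = x / blend_extent  # 0.0 at the seam start, approaching 1.0 at the end
        b[x] = a[len(a) - blend_extent + x] * (1 - w) + b[x] * w
    return b


if __name__ == "__main__":
    left = [1.0, 1.0, 1.0, 1.0]   # e.g. a bright tile
    right = [0.0, 0.0, 0.0, 0.0]  # e.g. a dark tile
    print(blend_1d(left, right, 2))  # prints [1.0, 0.5, 0.0, 0.0]
```

At the first overlapping sample the left tile dominates entirely and the weight ramps linearly toward the right tile, which is why a hard tile edge becomes a smooth gradient in the decoded image.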
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/modeling_utils.py
DELETED
@@ -1,980 +0,0 @@
|
|
1 |
-
# coding=utf-8
|
2 |
-
# Copyright 2023 The HuggingFace Inc. team.
|
3 |
-
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
|
4 |
-
#
|
5 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
6 |
-
# you may not use this file except in compliance with the License.
|
7 |
-
# You may obtain a copy of the License at
|
8 |
-
#
|
9 |
-
# http://www.apache.org/licenses/LICENSE-2.0
|
10 |
-
#
|
11 |
-
# Unless required by applicable law or agreed to in writing, software
|
12 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
13 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
14 |
-
# See the License for the specific language governing permissions and
|
15 |
-
# limitations under the License.
|
16 |
-
|
17 |
-
import inspect
|
18 |
-
import itertools
|
19 |
-
import os
|
20 |
-
import re
|
21 |
-
from functools import partial
|
22 |
-
from typing import Any, Callable, List, Optional, Tuple, Union
|
23 |
-
|
24 |
-
import torch
|
25 |
-
from torch import Tensor, device, nn
|
26 |
-
|
27 |
-
from .. import __version__
|
28 |
-
from ..utils import (
|
29 |
-
CONFIG_NAME,
|
30 |
-
DIFFUSERS_CACHE,
|
31 |
-
FLAX_WEIGHTS_NAME,
|
32 |
-
HF_HUB_OFFLINE,
|
33 |
-
SAFETENSORS_WEIGHTS_NAME,
|
34 |
-
WEIGHTS_NAME,
|
35 |
-
_add_variant,
|
36 |
-
_get_model_file,
|
37 |
-
deprecate,
|
38 |
-
is_accelerate_available,
|
39 |
-
is_safetensors_available,
|
40 |
-
is_torch_version,
|
41 |
-
logging,
|
42 |
-
)
|
43 |
-
|
44 |
-
|
45 |
-
logger = logging.get_logger(__name__)
|
46 |
-
|
47 |
-
|
48 |
-
if is_torch_version(">=", "1.9.0"):
|
49 |
-
_LOW_CPU_MEM_USAGE_DEFAULT = True
|
50 |
-
else:
|
51 |
-
_LOW_CPU_MEM_USAGE_DEFAULT = False
|
52 |
-
|
53 |
-
|
54 |
-
if is_accelerate_available():
|
55 |
-
import accelerate
|
56 |
-
from accelerate.utils import set_module_tensor_to_device
|
57 |
-
from accelerate.utils.versions import is_torch_version
|
58 |
-
|
59 |
-
if is_safetensors_available():
|
60 |
-
import safetensors
|
61 |
-
|
62 |
-
|
63 |
-
def get_parameter_device(parameter: torch.nn.Module):
|
64 |
-
try:
|
65 |
-
parameters_and_buffers = itertools.chain(parameter.parameters(), parameter.buffers())
|
66 |
-
return next(parameters_and_buffers).device
|
67 |
-
except StopIteration:
|
68 |
-
# For torch.nn.DataParallel compatibility in PyTorch 1.5
|
69 |
-
|
70 |
-
def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]:
|
71 |
-
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
|
72 |
-
return tuples
|
73 |
-
|
74 |
-
gen = parameter._named_members(get_members_fn=find_tensor_attributes)
|
75 |
-
first_tuple = next(gen)
|
76 |
-
return first_tuple[1].device
|
77 |
-
|
78 |
-
|
79 |
-
def get_parameter_dtype(parameter: torch.nn.Module):
|
80 |
-
try:
|
81 |
-
params = tuple(parameter.parameters())
|
82 |
-
if len(params) > 0:
|
83 |
-
return params[0].dtype
|
84 |
-
|
85 |
-
buffers = tuple(parameter.buffers())
|
86 |
-
if len(buffers) > 0:
|
87 |
-
return buffers[0].dtype
|
88 |
-
|
89 |
-
except StopIteration:
|
90 |
-
# For torch.nn.DataParallel compatibility in PyTorch 1.5
|
91 |
-
|
92 |
-
def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]:
|
93 |
-
tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
|
94 |
-
return tuples
|
95 |
-
|
96 |
-
gen = parameter._named_members(get_members_fn=find_tensor_attributes)
|
97 |
-
first_tuple = next(gen)
|
98 |
-
return first_tuple[1].dtype
|
99 |
-
|
100 |
-
|
101 |
-
def load_state_dict(checkpoint_file: Union[str, os.PathLike], variant: Optional[str] = None):
|
102 |
-
"""
|
103 |
-
Reads a checkpoint file, returning properly formatted errors if they arise.
|
104 |
-
"""
|
105 |
-
try:
|
106 |
-
if os.path.basename(checkpoint_file) == _add_variant(WEIGHTS_NAME, variant):
|
107 |
-
return torch.load(checkpoint_file, map_location="cpu")
|
108 |
-
else:
|
109 |
-
return safetensors.torch.load_file(checkpoint_file, device="cpu")
|
110 |
-
except Exception as e:
|
111 |
-
try:
|
112 |
-
with open(checkpoint_file) as f:
|
113 |
-
if f.read().startswith("version"):
|
114 |
-
raise OSError(
|
115 |
-
"You seem to have cloned a repository without having git-lfs installed. Please install "
|
116 |
-
"git-lfs and run `git lfs install` followed by `git lfs pull` in the folder "
|
117 |
-
"you cloned."
|
118 |
-
)
|
119 |
-
else:
|
120 |
-
raise ValueError(
|
121 |
-
f"Unable to locate the file {checkpoint_file} which is necessary to load this pretrained "
|
122 |
-
"model. Make sure you have saved the model properly."
|
123 |
-
) from e
|
124 |
-
except (UnicodeDecodeError, ValueError):
|
125 |
-
raise OSError(
|
126 |
-
f"Unable to load weights from checkpoint file for '{checkpoint_file}' "
|
127 |
-
f"at '{checkpoint_file}'. "
|
128 |
-
"If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True."
|
129 |
-
)
|
130 |
-
|
131 |
-
|
132 |
-
def _load_state_dict_into_model(model_to_load, state_dict):
|
133 |
-
# Convert old format to new format if needed from a PyTorch state_dict
|
134 |
-
# copy state_dict so _load_from_state_dict can modify it
|
135 |
-
state_dict = state_dict.copy()
|
136 |
-
error_msgs = []
|
137 |
-
|
138 |
-
# PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants
|
139 |
-
# so we need to apply the function recursively.
|
140 |
-
def load(module: torch.nn.Module, prefix=""):
|
141 |
-
args = (state_dict, prefix, {}, True, [], [], error_msgs)
|
142 |
-
module._load_from_state_dict(*args)
|
143 |
-
|
144 |
-
for name, child in module._modules.items():
|
145 |
-
if child is not None:
|
146 |
-
load(child, prefix + name + ".")
|
147 |
-
|
148 |
-
load(model_to_load)
|
149 |
-
|
150 |
-
return error_msgs
|
151 |
-
|
152 |
-
|
153 |
-
class ModelMixin(torch.nn.Module):
|
154 |
-
r"""
|
155 |
-
Base class for all models.
|
156 |
-
|
157 |
-
[`ModelMixin`] takes care of storing the model configuration and provides methods for loading, downloading and
|
158 |
-
saving models.
|
159 |
-
|
160 |
-
- **config_name** ([`str`]) -- Filename to save a model to when calling [`~models.ModelMixin.save_pretrained`].
|
161 |
-
"""
|
162 |
-
config_name = CONFIG_NAME
|
163 |
-
_automatically_saved_args = ["_diffusers_version", "_class_name", "_name_or_path"]
|
164 |
-
_supports_gradient_checkpointing = False
|
165 |
-
_keys_to_ignore_on_load_unexpected = None
|
166 |
-
|
167 |
-
def __init__(self):
|
168 |
-
super().__init__()
|
169 |
-
|
170 |
-
def __getattr__(self, name: str) -> Any:
|
171 |
-
"""The only reason we overwrite `getattr` here is to gracefully deprecate accessing
|
172 |
-
config attributes directly. See https://github.com/huggingface/diffusers/pull/3129 We need to overwrite
|
173 |
-
__getattr__ here in addition so that we don't trigger `torch.nn.Module`'s __getattr__':
|
174 |
-
https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
|
175 |
-
"""
|
176 |
-
|
177 |
-
is_in_config = "_internal_dict" in self.__dict__ and hasattr(self.__dict__["_internal_dict"], name)
|
178 |
-
is_attribute = name in self.__dict__
|
179 |
-
|
180 |
-
if is_in_config and not is_attribute:
|
181 |
-
deprecation_message = f"Accessing config attribute `{name}` directly via '{type(self).__name__}' object attribute is deprecated. Please access '{name}' over '{type(self).__name__}'s config object instead, e.g. 'unet.config.{name}'."
|
182 |
-
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False, stacklevel=3)
|
183 |
-
return self._internal_dict[name]
|
184 |
-
|
185 |
-
# call PyTorch's https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
|
186 |
-
return super().__getattr__(name)
|
187 |
-
|
188 |
-
@property
|
189 |
-
def is_gradient_checkpointing(self) -> bool:
|
190 |
-
"""
|
191 |
-
Whether gradient checkpointing is activated for this model or not.
|
192 |
-
"""
|
193 |
-
return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
|
194 |
-
|
195 |
-
def enable_gradient_checkpointing(self):
|
196 |
-
"""
|
197 |
-
Activates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or
|
198 |
-
*checkpoint activations* in other frameworks).
|
199 |
-
"""
|
200 |
-
if not self._supports_gradient_checkpointing:
|
201 |
-
raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")
|
202 |
-
self.apply(partial(self._set_gradient_checkpointing, value=True))
|
203 |
-
|
204 |
-
def disable_gradient_checkpointing(self):
|
205 |
-
"""
|
206 |
-
Deactivates gradient checkpointing for the current model (may be referred to as *activation checkpointing* or
|
207 |
-
*checkpoint activations* in other frameworks).
|
208 |
-
"""
|
209 |
-
if self._supports_gradient_checkpointing:
|
210 |
-
self.apply(partial(self._set_gradient_checkpointing, value=False))
|
211 |
-
|
212 |
-
def set_use_memory_efficient_attention_xformers(
|
213 |
-
self, valid: bool, attention_op: Optional[Callable] = None
|
214 |
-
) -> None:
|
215 |
-
# Recursively walk through all the children.
|
216 |
-
# Any children which exposes the set_use_memory_efficient_attention_xformers method
|
217 |
-
# gets the message
|
218 |
-
def fn_recursive_set_mem_eff(module: torch.nn.Module):
|
219 |
-
if hasattr(module, "set_use_memory_efficient_attention_xformers"):
|
220 |
-
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
|
221 |
-
|
222 |
-
for child in module.children():
|
223 |
-
fn_recursive_set_mem_eff(child)
|
224 |
-
|
225 |
-
for module in self.children():
|
226 |
-
if isinstance(module, torch.nn.Module):
|
227 |
-
fn_recursive_set_mem_eff(module)
|
228 |
-
|
229 |
-
    def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None):
        r"""
        Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).

        When this option is enabled, you should observe lower GPU memory usage and a potential speed up during
        inference. Speed up during training is not guaranteed.

        <Tip warning={true}>

        ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes
        precedence.

        </Tip>

        Parameters:
            attention_op (`Callable`, *optional*):
                Override the default `None` operator for use as `op` argument to the
                [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention)
                function of xFormers.

        Examples:

        ```py
        >>> import torch
        >>> from diffusers import UNet2DConditionModel
        >>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

        >>> model = UNet2DConditionModel.from_pretrained(
        ...     "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16
        ... )
        >>> model = model.to("cuda")
        >>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
        ```
        """
        self.set_use_memory_efficient_attention_xformers(True, attention_op)

    def disable_xformers_memory_efficient_attention(self):
        r"""
        Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/).
        """
        self.set_use_memory_efficient_attention_xformers(False)
    def save_pretrained(
        self,
        save_directory: Union[str, os.PathLike],
        is_main_process: bool = True,
        save_function: Callable = None,
        safe_serialization: bool = False,
        variant: Optional[str] = None,
    ):
        """
        Save a model and its configuration file to a directory so that it can be reloaded using the
        [`~models.ModelMixin.from_pretrained`] class method.

        Arguments:
            save_directory (`str` or `os.PathLike`):
                Directory to save a model and its configuration file to. Will be created if it doesn't exist.
            is_main_process (`bool`, *optional*, defaults to `True`):
                Whether the process calling this is the main process or not. Useful during distributed training when
                you need to call this function on all processes. In this case, set `is_main_process=True` only on the
                main process to avoid race conditions.
            save_function (`Callable`):
                The function to use to save the state dictionary. Useful during distributed training when you need to
                replace `torch.save` with another method. Can be configured with the environment variable
                `DIFFUSERS_SAVE_MODE`.
            safe_serialization (`bool`, *optional*, defaults to `False`):
                Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`.
            variant (`str`, *optional*):
                If specified, weights are saved in the format `pytorch_model.<variant>.bin`.
        """
        if safe_serialization and not is_safetensors_available():
            raise ImportError("`safe_serialization` requires the `safetensors` library: `pip install safetensors`.")

        if os.path.isfile(save_directory):
            logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
            return

        os.makedirs(save_directory, exist_ok=True)

        model_to_save = self

        # Attach architecture to the config
        # Save the config
        if is_main_process:
            model_to_save.save_config(save_directory)

        # Gather the state dict to save
        state_dict = model_to_save.state_dict()

        weights_name = SAFETENSORS_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME
        weights_name = _add_variant(weights_name, variant)

        # Write the weights to disk
        if safe_serialization:
            safetensors.torch.save_file(
                state_dict, os.path.join(save_directory, weights_name), metadata={"format": "pt"}
            )
        else:
            torch.save(state_dict, os.path.join(save_directory, weights_name))

        logger.info(f"Model weights saved in {os.path.join(save_directory, weights_name)}")
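`save_pretrained` composes the on-disk filename from the serialization format and the optional `variant` tag. A sketch of that splice, assuming `_add_variant` inserts the tag before the file extension (this reimplementation is illustrative, not the library's exact helper):

```python
# Illustrative sketch of how a variant tag is spliced into a weights filename,
# mirroring the _add_variant helper used above (the exact helper may differ).
def add_variant(weights_name, variant=None):
    if variant is None:
        return weights_name
    # Insert the variant between the stem and the extension.
    stem, ext = weights_name.rsplit(".", 1)
    return f"{stem}.{variant}.{ext}"


print(add_variant("diffusion_pytorch_model.bin", "fp16"))
# diffusion_pytorch_model.fp16.bin
```

The same helper is reused on the loading side, which is why a `variant="fp16"` checkpoint saved here is found again by `from_pretrained(..., variant="fp16")`.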
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
        r"""
        Instantiate a pretrained PyTorch model from a pretrained model configuration.

        The model is set in evaluation mode - `model.eval()` - by default, and dropout modules are deactivated. To
        train the model, set it back in training mode with `model.train()`.

        Parameters:
            pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*):
                Can be either:

                - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on
                  the Hub.
                - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved
                  with [`~ModelMixin.save_pretrained`].

            cache_dir (`Union[str, os.PathLike]`, *optional*):
                Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
                is not used.
            torch_dtype (`str` or `torch.dtype`, *optional*):
                Override the default `torch.dtype` and load the model with another dtype. If `"auto"` is passed, the
                dtype is automatically derived from the model's weights.
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
                cached versions if they exist.
            resume_download (`bool`, *optional*, defaults to `False`):
                Whether or not to resume downloading the model weights and configuration files. If set to `False`, any
                incompletely downloaded files are deleted.
            proxies (`Dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            output_loading_info (`bool`, *optional*, defaults to `False`):
                Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
            local_files_only (`bool`, *optional*, defaults to `False`):
                Whether to only load local model weights and configuration files or not. If set to `True`, the model
                won't be downloaded from the Hub.
            use_auth_token (`str` or *bool*, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from
                `diffusers-cli login` (stored in `~/.huggingface`) is used.
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
                allowed by Git.
            from_flax (`bool`, *optional*, defaults to `False`):
                Load the model weights from a Flax checkpoint save file.
            subfolder (`str`, *optional*, defaults to `""`):
                The subfolder location of a model file within a larger model repository on the Hub or locally.
            mirror (`str`, *optional*):
                Mirror source to resolve accessibility issues if you're downloading a model in China. We do not
                guarantee the timeliness or safety of the source, and you should refer to the mirror site for more
                information.
            device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*):
                A map that specifies where each submodule should go. It doesn't need to be defined for each
                parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the
                same device.

                Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For
                more information about each option see [designing a device
                map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map).
            max_memory (`Dict`, *optional*):
                A dictionary mapping device identifiers to their maximum memory. Defaults to the maximum memory
                available for each GPU and the available CPU RAM if unset.
            offload_folder (`str` or `os.PathLike`, *optional*):
                The path to offload weights to if `device_map` contains the value `"disk"`.
            offload_state_dict (`bool`, *optional*):
                If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if
                the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True`
                when there is some disk offload.
            low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`):
                Speed up model loading by only loading the pretrained weights and not initializing the weights. This
                also tries to not use more than 1x the model size in CPU memory (including peak memory) while loading
                the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting
                this argument to `True` will raise an error.
            variant (`str`, *optional*):
                Load weights from a specified `variant` filename such as `"fp16"` or `"ema"`. This is ignored when
                loading `from_flax`.
            use_safetensors (`bool`, *optional*, defaults to `None`):
                If set to `None`, the `safetensors` weights are downloaded if they're available **and** if the
                `safetensors` library is installed. If set to `True`, the model is forcibly loaded from `safetensors`
                weights. If set to `False`, `safetensors` weights are not loaded.

        <Tip>

        To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log in with
        `huggingface-cli login`. You can also activate the special
        ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a
        firewalled environment.

        </Tip>

        Example:

        ```py
        from diffusers import UNet2DConditionModel

        unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
        ```

        If you get the error message below, you need to finetune the weights for your downstream task:

        ```bash
        Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
        - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
        You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
        ```
        """
        cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
        ignore_mismatched_sizes = kwargs.pop("ignore_mismatched_sizes", False)
        force_download = kwargs.pop("force_download", False)
        from_flax = kwargs.pop("from_flax", False)
        resume_download = kwargs.pop("resume_download", False)
        proxies = kwargs.pop("proxies", None)
        output_loading_info = kwargs.pop("output_loading_info", False)
        local_files_only = kwargs.pop("local_files_only", HF_HUB_OFFLINE)
        use_auth_token = kwargs.pop("use_auth_token", None)
        revision = kwargs.pop("revision", None)
        torch_dtype = kwargs.pop("torch_dtype", None)
        subfolder = kwargs.pop("subfolder", None)
        device_map = kwargs.pop("device_map", None)
        max_memory = kwargs.pop("max_memory", None)
        offload_folder = kwargs.pop("offload_folder", None)
        offload_state_dict = kwargs.pop("offload_state_dict", False)
        low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
        variant = kwargs.pop("variant", None)
        use_safetensors = kwargs.pop("use_safetensors", None)

        if use_safetensors and not is_safetensors_available():
            raise ValueError(
                "`use_safetensors=True` but safetensors is not installed. Please install safetensors with `pip install safetensors`."
            )

        allow_pickle = False
        if use_safetensors is None:
            use_safetensors = is_safetensors_available()
            allow_pickle = True

        if low_cpu_mem_usage and not is_accelerate_available():
            low_cpu_mem_usage = False
            logger.warning(
                "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the"
                " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install"
                " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip"
                " install accelerate\n```\n."
            )

        if device_map is not None and not is_accelerate_available():
            raise NotImplementedError(
                "Loading and dispatching requires `accelerate`. Please make sure to install accelerate or set"
                " `device_map=None`. You can install accelerate with `pip install accelerate`."
            )

        # Check if we can handle device_map and dispatching the weights
        if device_map is not None and not is_torch_version(">=", "1.9.0"):
            raise NotImplementedError(
                "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set"
                " `device_map=None`."
            )

        if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"):
            raise NotImplementedError(
                "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set"
                " `low_cpu_mem_usage=False`."
            )

        if low_cpu_mem_usage is False and device_map is not None:
            raise ValueError(
                f"You cannot set `low_cpu_mem_usage` to `False` while using device_map={device_map} for loading and"
                " dispatching. Please make sure to set `low_cpu_mem_usage=True`."
            )
        # Load config if we don't provide a configuration
        config_path = pretrained_model_name_or_path

        user_agent = {
            "diffusers": __version__,
            "file_type": "model",
            "framework": "pytorch",
        }

        # load config
        config, unused_kwargs, commit_hash = cls.load_config(
            config_path,
            cache_dir=cache_dir,
            return_unused_kwargs=True,
            return_commit_hash=True,
            force_download=force_download,
            resume_download=resume_download,
            proxies=proxies,
            local_files_only=local_files_only,
            use_auth_token=use_auth_token,
            revision=revision,
            subfolder=subfolder,
            device_map=device_map,
            max_memory=max_memory,
            offload_folder=offload_folder,
            offload_state_dict=offload_state_dict,
            user_agent=user_agent,
            **kwargs,
        )

        # load model
        model_file = None
        if from_flax:
            model_file = _get_model_file(
                pretrained_model_name_or_path,
                weights_name=FLAX_WEIGHTS_NAME,
                cache_dir=cache_dir,
                force_download=force_download,
                resume_download=resume_download,
                proxies=proxies,
                local_files_only=local_files_only,
                use_auth_token=use_auth_token,
                revision=revision,
                subfolder=subfolder,
                user_agent=user_agent,
                commit_hash=commit_hash,
            )
            model = cls.from_config(config, **unused_kwargs)

            # Convert the weights
            from .modeling_pytorch_flax_utils import load_flax_checkpoint_in_pytorch_model

            model = load_flax_checkpoint_in_pytorch_model(model, model_file)
        else:
            if use_safetensors:
                try:
                    model_file = _get_model_file(
                        pretrained_model_name_or_path,
                        weights_name=_add_variant(SAFETENSORS_WEIGHTS_NAME, variant),
                        cache_dir=cache_dir,
                        force_download=force_download,
                        resume_download=resume_download,
                        proxies=proxies,
                        local_files_only=local_files_only,
                        use_auth_token=use_auth_token,
                        revision=revision,
                        subfolder=subfolder,
                        user_agent=user_agent,
                        commit_hash=commit_hash,
                    )
                except IOError as e:
                    if not allow_pickle:
                        raise e
                    # Otherwise, fall back to the pickled weights below.
            if model_file is None:
                model_file = _get_model_file(
                    pretrained_model_name_or_path,
                    weights_name=_add_variant(WEIGHTS_NAME, variant),
                    cache_dir=cache_dir,
                    force_download=force_download,
                    resume_download=resume_download,
                    proxies=proxies,
                    local_files_only=local_files_only,
                    use_auth_token=use_auth_token,
                    revision=revision,
                    subfolder=subfolder,
                    user_agent=user_agent,
                    commit_hash=commit_hash,
                )
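The branch above tries the `safetensors` filename first and only falls back to the pickled `.bin` weights when the caller did not explicitly request safetensors. A sketch of that resolution order, with the hub lookup reduced to a set membership test (`pick_weights_file` and `repo_files` are illustrative names, not library API):

```python
# Sketch of the "safetensors first, pickle fallback" resolution order above.
def pick_weights_file(repo_files, use_safetensors=None, variant=None):
    def add_variant(name, variant):
        if variant is None:
            return name
        stem, ext = name.rsplit(".", 1)
        return f"{stem}.{variant}.{ext}"

    # Only fall back to pickle when the caller did not insist on safetensors.
    allow_pickle = use_safetensors is None
    candidates = []
    if use_safetensors or use_safetensors is None:
        candidates.append(add_variant("diffusion_pytorch_model.safetensors", variant))
    if allow_pickle or not use_safetensors:
        candidates.append(add_variant("diffusion_pytorch_model.bin", variant))
    for name in candidates:
        if name in repo_files:
            return name
    raise IOError("no weights file found")


print(pick_weights_file({"diffusion_pytorch_model.bin"}))
# diffusion_pytorch_model.bin (safetensors candidate missing, pickle fallback used)
```

With `use_safetensors=True` the fallback is disabled, matching the `raise e` path when `allow_pickle` is false.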
            if low_cpu_mem_usage:
                # Instantiate model with empty weights
                with accelerate.init_empty_weights():
                    model = cls.from_config(config, **unused_kwargs)

                # if device_map is None, load the state dict and move the params from meta device to the cpu
                if device_map is None:
                    param_device = "cpu"
                    state_dict = load_state_dict(model_file, variant=variant)
                    model._convert_deprecated_attention_blocks(state_dict)
                    # move the params from meta device to cpu
                    missing_keys = set(model.state_dict().keys()) - set(state_dict.keys())
                    if len(missing_keys) > 0:
                        raise ValueError(
                            f"Cannot load {cls} from {pretrained_model_name_or_path} because the following keys are"
                            f" missing: \n {', '.join(missing_keys)}. \n Please make sure to pass"
                            " `low_cpu_mem_usage=False` and `device_map=None` if you want to randomly initialize"
                            " those weights or else make sure your checkpoint file is correct."
                        )
                    unexpected_keys = []

                    empty_state_dict = model.state_dict()
                    # `set_module_tensor_to_device` only accepts a `dtype` argument in recent accelerate versions
                    accepts_dtype = "dtype" in inspect.signature(set_module_tensor_to_device).parameters
                    for param_name, param in state_dict.items():
                        if param_name not in empty_state_dict:
                            unexpected_keys.append(param_name)
                            continue

                        if empty_state_dict[param_name].shape != param.shape:
                            raise ValueError(
                                f"Cannot load {pretrained_model_name_or_path} because {param_name} expected shape {empty_state_dict[param_name].shape}, but got {param.shape}. If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example."
                            )

                        if accepts_dtype:
                            set_module_tensor_to_device(
                                model, param_name, param_device, value=param, dtype=torch_dtype
                            )
                        else:
                            set_module_tensor_to_device(model, param_name, param_device, value=param)

                    if cls._keys_to_ignore_on_load_unexpected is not None:
                        for pat in cls._keys_to_ignore_on_load_unexpected:
                            unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None]

                    if len(unexpected_keys) > 0:
                        logger.warning(
                            f"Some weights of the model checkpoint were not used when initializing {cls.__name__}: \n {', '.join(unexpected_keys)}"
                        )

                else:  # else let accelerate handle loading and dispatching.
                    # Load weights and dispatch according to the device_map
                    # by default the device_map is None and the weights are loaded on the CPU
                    try:
                        accelerate.load_checkpoint_and_dispatch(
                            model,
                            model_file,
                            device_map,
                            max_memory=max_memory,
                            offload_folder=offload_folder,
                            offload_state_dict=offload_state_dict,
                            dtype=torch_dtype,
                        )
                    except AttributeError as e:
                        # When using accelerate loading, we do not have the ability to load the state
                        # dict and rename the weight names manually. Additionally, accelerate skips
                        # torch loading conventions and directly writes into `module.{_buffers, _parameters}`
                        # (which look like they should be private variables?), so we can't use the standard hooks
                        # to rename parameters on load. We need to mimic the original weight names so the correct
                        # attributes are available. After we have loaded the weights, we convert the deprecated
                        # names to the new non-deprecated names. Then we _greatly encourage_ the user to convert
                        # the weights so we don't have to do this again.

                        if "'Attention' object has no attribute" in str(e):
                            logger.warning(
                                f"Taking `{str(e)}` while using `accelerate.load_checkpoint_and_dispatch` to mean {pretrained_model_name_or_path}"
                                " was saved with deprecated attention block weight names. We will load it with the deprecated attention block"
                                " names and convert them on the fly to the new attention block format. Please re-save the model after this conversion,"
                                " so we don't have to do the on-the-fly renaming in the future. If the model is from a hub checkpoint,"
                                " please also re-upload it or open a PR on the original repository."
                            )
                            model._temp_convert_self_to_deprecated_attention_blocks()
                            accelerate.load_checkpoint_and_dispatch(
                                model,
                                model_file,
                                device_map,
                                max_memory=max_memory,
                                offload_folder=offload_folder,
                                offload_state_dict=offload_state_dict,
                                dtype=torch_dtype,
                            )
                            model._undo_temp_convert_self_to_deprecated_attention_blocks()
                        else:
                            raise e

                loading_info = {
                    "missing_keys": [],
                    "unexpected_keys": [],
                    "mismatched_keys": [],
                    "error_msgs": [],
                }
            else:
                model = cls.from_config(config, **unused_kwargs)

                state_dict = load_state_dict(model_file, variant=variant)
                model._convert_deprecated_attention_blocks(state_dict)

                model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
                    model,
                    state_dict,
                    model_file,
                    pretrained_model_name_or_path,
                    ignore_mismatched_sizes=ignore_mismatched_sizes,
                )

                loading_info = {
                    "missing_keys": missing_keys,
                    "unexpected_keys": unexpected_keys,
                    "mismatched_keys": mismatched_keys,
                    "error_msgs": error_msgs,
                }

        if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype):
            raise ValueError(
                f"{torch_dtype} needs to be of type `torch.dtype`, e.g. `torch.float16`, but is {type(torch_dtype)}."
            )
        elif torch_dtype is not None:
            model = model.to(torch_dtype)

        model.register_to_config(_name_or_path=pretrained_model_name_or_path)

        # Set model in evaluation mode to deactivate DropOut modules by default
        model.eval()
        if output_loading_info:
            return model, loading_info

        return model
    @classmethod
    def _load_pretrained_model(
        cls,
        model,
        state_dict,
        resolved_archive_file,
        pretrained_model_name_or_path,
        ignore_mismatched_sizes=False,
    ):
        # Retrieve missing & unexpected_keys
        model_state_dict = model.state_dict()
        loaded_keys = list(state_dict.keys())

        expected_keys = list(model_state_dict.keys())

        original_loaded_keys = loaded_keys

        missing_keys = list(set(expected_keys) - set(loaded_keys))
        unexpected_keys = list(set(loaded_keys) - set(expected_keys))
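The missing/unexpected bookkeeping above is plain set arithmetic over the two key sets. A minimal sketch with dicts standing in for real state dicts (the values are placeholders):

```python
# Minimal sketch of the missing/unexpected key computation above, using plain
# dicts as stand-ins for the model and checkpoint state dicts.
model_state_dict = {"conv.weight": 1, "conv.bias": 2, "norm.weight": 3}
checkpoint_state_dict = {"conv.weight": 10, "conv.bias": 20, "extra.weight": 99}

expected_keys = set(model_state_dict)
loaded_keys = set(checkpoint_state_dict)

missing_keys = sorted(expected_keys - loaded_keys)      # in the model, not in the checkpoint
unexpected_keys = sorted(loaded_keys - expected_keys)   # in the checkpoint, not in the model

print(missing_keys)     # ['norm.weight']
print(unexpected_keys)  # ['extra.weight']
```

Missing keys trigger the "you should probably TRAIN this model" warning further down; unexpected keys are merely reported and dropped.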
        # Make sure we are able to load base models as well as derived models (with heads)
        model_to_load = model

        def _find_mismatched_keys(
            state_dict,
            model_state_dict,
            loaded_keys,
            ignore_mismatched_sizes,
        ):
            mismatched_keys = []
            if ignore_mismatched_sizes:
                for checkpoint_key in loaded_keys:
                    model_key = checkpoint_key

                    if (
                        model_key in model_state_dict
                        and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
                    ):
                        mismatched_keys.append(
                            (checkpoint_key, state_dict[checkpoint_key].shape, model_state_dict[model_key].shape)
                        )
                        del state_dict[checkpoint_key]
            return mismatched_keys

        if state_dict is not None:
            # Whole checkpoint
            mismatched_keys = _find_mismatched_keys(
                state_dict,
                model_state_dict,
                original_loaded_keys,
                ignore_mismatched_sizes,
            )
            error_msgs = _load_state_dict_into_model(model_to_load, state_dict)

        if len(error_msgs) > 0:
            error_msg = "\n\t".join(error_msgs)
            if "size mismatch" in error_msg:
                error_msg += (
                    "\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method."
                )
            raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")

        if len(unexpected_keys) > 0:
            logger.warning(
                f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when"
                f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are"
                f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task"
                " or with another architecture (e.g. initializing a BertForSequenceClassification model from a"
                " BertForPreTraining model).\n- This IS NOT expected if you are initializing"
                f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly"
                " identical (initializing a BertForSequenceClassification model from a"
                " BertForSequenceClassification model)."
            )
        else:
            logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n")
        if len(missing_keys) > 0:
            logger.warning(
                f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably"
                " TRAIN this model on a down-stream task to be able to use it for predictions and inference."
            )
        elif len(mismatched_keys) == 0:
            logger.info(
                f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the"
                f" checkpoint was trained on, you can already use {model.__class__.__name__} for predictions"
                " without further training."
            )
        if len(mismatched_keys) > 0:
            mismatched_warning = "\n".join(
                [
                    f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated"
                    for key, shape1, shape2 in mismatched_keys
                ]
            )
            logger.warning(
                f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at"
                f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not"
                f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be"
                " able to use it for predictions and inference."
            )

        return model, missing_keys, unexpected_keys, mismatched_keys, error_msgs
    @property
    def device(self) -> device:
        """
        `torch.device`: The device on which the module is (assuming that all the module parameters are on the same
        device).
        """
        return get_parameter_device(self)

    @property
    def dtype(self) -> torch.dtype:
        """
        `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype).
        """
        return get_parameter_dtype(self)
    def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int:
        """
        Get number of (trainable or non-embedding) parameters in the module.

        Args:
            only_trainable (`bool`, *optional*, defaults to `False`):
                Whether or not to return only the number of trainable parameters.
            exclude_embeddings (`bool`, *optional*, defaults to `False`):
                Whether or not to return only the number of non-embedding parameters.

        Returns:
            `int`: The number of parameters.

        Example:

        ```py
        from diffusers import UNet2DConditionModel

        model_id = "runwayml/stable-diffusion-v1-5"
        unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
        unet.num_parameters(only_trainable=True)
        859520964
        ```
        """

        if exclude_embeddings:
            embedding_param_names = [
                f"{name}.weight"
                for name, module_type in self.named_modules()
                if isinstance(module_type, torch.nn.Embedding)
            ]
            non_embedding_parameters = [
                parameter for name, parameter in self.named_parameters() if name not in embedding_param_names
            ]
            return sum(p.numel() for p in non_embedding_parameters if p.requires_grad or not only_trainable)
        else:
            return sum(p.numel() for p in self.parameters() if p.requires_grad or not only_trainable)
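The counting predicate above includes a parameter unless trainable-only counting was requested and the parameter is frozen. A sketch with an illustrative `Param` stand-in for torch parameters:

```python
# Sketch of the counting predicate used above. Param is an illustrative
# stand-in for a torch parameter (numel as a plain int, not a method).
from dataclasses import dataclass


@dataclass
class Param:
    numel: int
    requires_grad: bool


def num_parameters(params, only_trainable=False):
    # A parameter counts unless we asked for trainable-only and it is frozen.
    return sum(p.numel for p in params if p.requires_grad or not only_trainable)


params = [Param(10, True), Param(5, False), Param(3, True)]
print(num_parameters(params))                       # 18
print(num_parameters(params, only_trainable=True))  # 13
```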
    def _convert_deprecated_attention_blocks(self, state_dict):
        deprecated_attention_block_paths = []

        def recursive_find_attn_block(name, module):
            if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block:
                deprecated_attention_block_paths.append(name)

            for sub_name, sub_module in module.named_children():
                sub_name = sub_name if name == "" else f"{name}.{sub_name}"
                recursive_find_attn_block(sub_name, sub_module)

        recursive_find_attn_block("", self)

        # NOTE: we have to check if the deprecated parameters are in the state dict
        # because it is possible we are loading from a state dict that was already
        # converted

        for path in deprecated_attention_block_paths:
            # group_norm path stays the same

            # query -> to_q
            if f"{path}.query.weight" in state_dict:
                state_dict[f"{path}.to_q.weight"] = state_dict.pop(f"{path}.query.weight")
            if f"{path}.query.bias" in state_dict:
                state_dict[f"{path}.to_q.bias"] = state_dict.pop(f"{path}.query.bias")

            # key -> to_k
            if f"{path}.key.weight" in state_dict:
                state_dict[f"{path}.to_k.weight"] = state_dict.pop(f"{path}.key.weight")
            if f"{path}.key.bias" in state_dict:
                state_dict[f"{path}.to_k.bias"] = state_dict.pop(f"{path}.key.bias")

            # value -> to_v
            if f"{path}.value.weight" in state_dict:
                state_dict[f"{path}.to_v.weight"] = state_dict.pop(f"{path}.value.weight")
            if f"{path}.value.bias" in state_dict:
                state_dict[f"{path}.to_v.bias"] = state_dict.pop(f"{path}.value.bias")

            # proj_attn -> to_out.0
            if f"{path}.proj_attn.weight" in state_dict:
                state_dict[f"{path}.to_out.0.weight"] = state_dict.pop(f"{path}.proj_attn.weight")
            if f"{path}.proj_attn.bias" in state_dict:
                state_dict[f"{path}.to_out.0.bias"] = state_dict.pop(f"{path}.proj_attn.bias")
def _temp_convert_self_to_deprecated_attention_blocks(self):
|
933 |
-
deprecated_attention_block_modules = []
|
934 |
-
|
935 |
-
def recursive_find_attn_block(module):
|
936 |
-
if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block:
|
937 |
-
deprecated_attention_block_modules.append(module)
|
938 |
-
|
939 |
-
for sub_module in module.children():
|
940 |
-
recursive_find_attn_block(sub_module)
|
941 |
-
|
942 |
-
recursive_find_attn_block(self)
|
943 |
-
|
944 |
-
for module in deprecated_attention_block_modules:
|
945 |
-
module.query = module.to_q
|
946 |
-
module.key = module.to_k
|
947 |
-
module.value = module.to_v
|
948 |
-
module.proj_attn = module.to_out[0]
|
949 |
-
|
950 |
-
# We don't _have_ to delete the old attributes, but it's helpful to ensure
|
951 |
-
# that _all_ the weights are loaded into the new attributes and we're not
|
952 |
-
# making an incorrect assumption that this model should be converted when
|
953 |
-
# it really shouldn't be.
|
954 |
-
del module.to_q
|
955 |
-
del module.to_k
|
956 |
-
del module.to_v
|
957 |
-
del module.to_out
|
958 |
-
|
959 |
-
def _undo_temp_convert_self_to_deprecated_attention_blocks(self):
|
960 |
-
deprecated_attention_block_modules = []
|
961 |
-
|
962 |
-
def recursive_find_attn_block(module):
|
963 |
-
if hasattr(module, "_from_deprecated_attn_block") and module._from_deprecated_attn_block:
|
964 |
-
deprecated_attention_block_modules.append(module)
|
965 |
-
|
966 |
-
for sub_module in module.children():
|
967 |
-
recursive_find_attn_block(sub_module)
|
968 |
-
|
969 |
-
recursive_find_attn_block(self)
|
970 |
-
|
971 |
-
for module in deprecated_attention_block_modules:
|
972 |
-
module.to_q = module.query
|
973 |
-
module.to_k = module.key
|
974 |
-
module.to_v = module.value
|
975 |
-
module.to_out = nn.ModuleList([module.proj_attn, nn.Dropout(module.dropout)])
|
976 |
-
|
977 |
-
del module.query
|
978 |
-
del module.key
|
979 |
-
del module.value
|
980 |
-
del module.proj_attn
|
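The deprecated-attention conversion above is an ordinary state-dict key rewrite: pop each old `query`/`key`/`value`/`proj_attn` key and reinsert it under the new `to_q`/`to_k`/`to_v`/`to_out.0` name, guarding against dicts that were already converted. A minimal standalone sketch (the path `mid_block.attn` and the placeholder values are hypothetical):

```python
# Minimal sketch of the query -> to_q style key renaming performed above.
RENAMES = {"query": "to_q", "key": "to_k", "value": "to_v", "proj_attn": "to_out.0"}

def convert_attn_keys(state_dict, path):
    for old, new in RENAMES.items():
        for suffix in ("weight", "bias"):
            old_key = f"{path}.{old}.{suffix}"
            if old_key in state_dict:  # may already be converted; only move what exists
                state_dict[f"{path}.{new}.{suffix}"] = state_dict.pop(old_key)
    return state_dict

sd = {"mid_block.attn.query.weight": "W_q", "mid_block.attn.proj_attn.bias": "b_o"}
convert_attn_keys(sd, "mid_block.attn")
print(sorted(sd))  # ['mid_block.attn.to_out.0.bias', 'mid_block.attn.to_q.weight']
```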
spaces/AnimaLab/bias-test-gpt-pairs/bloomberg_vis.py
DELETED
@@ -1,85 +0,0 @@
# def bloombergViz(val, numblocks=10, flip=False):
#     percent = round(val * 100)
#     percentStr = f"{percent}"
#     filled = "<div style='height:20px;width:20px;background-color:#065b41;display:inline-block'></div> "
#     unfilled = "<div style='height:20px;width:20px;background-color:#35d4ac;display:inline-block'></div> "
#     numFilled = round((percent/100) * numblocks)
#     numUnFilled = numblocks - numFilled
#     if flip:
#         return numFilled * unfilled + numUnFilled * filled;
#     return numFilled * filled + numUnFilled * unfilled

# def att_bloombergViz(att, val, numblocks, flip=False):
#     viz = bloombergViz(val, numblocks, flip)
#     attHTML = f"<div style='border-style:solid;border-color:#999;border-radius:12px'>{att}: {round(val*100)}%<br>{viz}</div><br>"
#     return attHTML

def bloombergViz(att, val, numblocks, score_templates_df, onRight=False, flip=False):
    # percent = round(val * 100)
    # percentStr = f"{percent}"
    # filled = "<div style='height:20px;width:20px;background-color:#555;display:inline-block'><span class='tooltiptext' style='color:#FFF'>{}</span></div> "
    # unfilled = "<div style='height:20px;width:20px;background-color:#999;display:inline-block'><span class='tooltiptext' style='color:#FFF'>{}</span></div> "
    # numFilled = round((percent/100) * numblocks)
    # numUnFilled = numblocks - numFilled

    leftColor = "#065b41"  # "#555"
    rightColor = "#35d4ac"  # "#999"
    if flip:
        leftColor = "#35d4ac"  # "#999"
        rightColor = "#065b41"  # "#555"
    res = ""
    spanClass = "tooltiptext_left"
    if onRight:
        spanClass = "tooltiptext_right"
    dfy = score_templates_df.loc[(score_templates_df['att_term'] == att) & (score_templates_df['stereotyped_b'] == 'yes')]
    dfn = score_templates_df.loc[(score_templates_df['att_term'] == att) & (score_templates_df['stereotyped_b'] == 'no')]
    #print("dfy", dfy)
    #print("dfn", dfn)
    for i in range(len(dfy.index)):
        #print("--GROUP IN BLOOMBERG--")
        groups = dfy.iloc[i, dfy.columns.get_loc("groups_rel")].split("/")
        gr_disp = groups[0]+"/"+groups[1]
        grp_refs = list(dfy.iloc[i, dfy.columns.get_loc("grp_refs")])

        template = dfy.iloc[i, dfy.columns.get_loc("template")]
        for grp_pair in grp_refs:
            #print(f"Item: {grp_pair[0]} - {grp_pair[1]}")
            template = template.replace("[R]", grp_pair[0]+"/"+grp_pair[1], 1)

        # template based
        disp = template.replace("[T]", f"[{gr_disp}]")  #, 1)

        # sentence/alt-sentence based
        #sentence = dfy.iloc[i, dfy.columns.get_loc("sentence")]
        #alt_sentence = dfy.iloc[i, dfy.columns.get_loc("alt_sentence")]
        #disp = f'"{sentence}"/"{alt_sentence}"'

        res += f"<div style='height:20px;width:20px;background-color:{leftColor};display:inline-block;position:relative' id='filled'><span class='{spanClass}' style='color:#FFF'>{disp}</span></div> "
    for i in range(len(dfn.index)):
        groups = dfn.iloc[i, dfn.columns.get_loc("groups_rel")].split("/")
        gr_disp = groups[0]+"/"+groups[1]
        grp_refs = list(dfn.iloc[i, dfn.columns.get_loc("grp_refs")])

        template = dfn.iloc[i, dfn.columns.get_loc("template")]
        for grp_pair in grp_refs:
            #print(f"Item: {grp_pair[0]} - {grp_pair[1]}")
            template = template.replace("[R]", grp_pair[0]+"/"+grp_pair[1], 1)

        # template based
        disp = template.replace("[T]", f"[{gr_disp}]")  #, 1)

        # sentence/alt-sentence based
        #sentence = dfn.iloc[i, dfn.columns.get_loc("sentence")]
        #alt_sentence = dfn.iloc[i, dfn.columns.get_loc("alt_sentence")]
        #disp = f'"{sentence}"/"{alt_sentence}"'

        res += f"<div style='height:20px;width:20px;background-color:{rightColor};display:inline-block;position:relative' id='empty'><span class='{spanClass}' style='color:#FFF'>{disp}</span></div> "
    return res
    # if flip:
    #     return numFilled * unfilled + numUnFilled * filled;
    # return numFilled * filled + numUnFilled * unfilled

def att_bloombergViz(att, val, numblocks, score_templates_df, onRight=False, flip=False):
    viz = bloombergViz(att, val, numblocks, score_templates_df, onRight, flip)
    attHTML = f"<div style='border-style:solid;border-color:#999;border-radius:12px'>{att}: {round(val*100)}%<br>{viz}</div><br>"
    return attHTML
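`bloombergViz` emits one 20x20 colored `<div>` per scored template — dark green for stereotyped matches, light green for non-matches — each carrying the filled-in template text as a tooltip. The per-block rendering loop can be sketched without pandas (inputs and tooltip strings here are made up):

```python
# Sketch of the per-template block rendering in bloombergViz, without pandas.
# Colors and span classes follow the file above; the inputs are illustrative.
LEFT, RIGHT = "#065b41", "#35d4ac"

def render_blocks(stereotyped, tooltips, span_class="tooltiptext_left"):
    """One colored div per template; `stereotyped` is a parallel bool list."""
    out = []
    for is_yes, tip in zip(stereotyped, tooltips):
        color = LEFT if is_yes else RIGHT
        out.append(
            f"<div style='height:20px;width:20px;background-color:{color};"
            f"display:inline-block;position:relative'>"
            f"<span class='{span_class}' style='color:#FFF'>{tip}</span></div>"
        )
    return " ".join(out)

html = render_blocks([True, False], ["[he/she] is kind", "[he/she] is cold"])
print(html.count("#065b41"), html.count("#35d4ac"))  # 1 1
```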
spaces/AnonAndDesu/Desu_Proxy/greeting.md
DELETED
@@ -1,3 +0,0 @@
Only for desu lovers~
https://rentry.co/Desu_Proxy
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/setup.py
DELETED
@@ -1,7 +0,0 @@
from setuptools import setup

setup(
    name="guided-diffusion",
    py_modules=["guided_diffusion"],
    install_requires=["blobfile>=1.0.5", "torch", "tqdm"],
)
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/__init__.py
DELETED
@@ -1,35 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .activation import build_activation_layer
from .context_block import ContextBlock
from .conv import build_conv_layer
from .conv2d_adaptive_padding import Conv2dAdaptivePadding
from .conv_module import ConvModule
from .conv_ws import ConvAWS2d, ConvWS2d, conv_ws_2d
from .depthwise_separable_conv_module import DepthwiseSeparableConvModule
from .drop import Dropout, DropPath
from .generalized_attention import GeneralizedAttention
from .hsigmoid import HSigmoid
from .hswish import HSwish
from .non_local import NonLocal1d, NonLocal2d, NonLocal3d
from .norm import build_norm_layer, is_norm
from .padding import build_padding_layer
from .plugin import build_plugin_layer
from .registry import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS,
                       PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS)
from .scale import Scale
from .swish import Swish
from .upsample import build_upsample_layer
from .wrappers import (Conv2d, Conv3d, ConvTranspose2d, ConvTranspose3d,
                       Linear, MaxPool2d, MaxPool3d)

__all__ = [
    'ConvModule', 'build_activation_layer', 'build_conv_layer',
    'build_norm_layer', 'build_padding_layer', 'build_upsample_layer',
    'build_plugin_layer', 'is_norm', 'HSigmoid', 'HSwish', 'NonLocal1d',
    'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'GeneralizedAttention',
    'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', 'PADDING_LAYERS',
    'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', 'ConvAWS2d', 'ConvWS2d',
    'conv_ws_2d', 'DepthwiseSeparableConvModule', 'Swish', 'Linear',
    'Conv2dAdaptivePadding', 'Conv2d', 'ConvTranspose2d', 'MaxPool2d',
    'ConvTranspose3d', 'MaxPool3d', 'Conv3d', 'Dropout', 'DropPath'
]
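The module above re-exports registries (`NORM_LAYERS`, `CONV_LAYERS`, ...) alongside `build_*` helpers that look a layer class up by the `type` string in a config dict. That registry/builder pattern, reduced to plain Python (this is a generic sketch, not mmcv's actual registry implementation; class and key names are stand-ins):

```python
# Minimal registry/builder pattern of the kind NORM_LAYERS / build_norm_layer use.
NORM_LAYERS = {}

def register(name):
    """Decorator that records a class in the registry under `name`."""
    def deco(cls):
        NORM_LAYERS[name] = cls
        return cls
    return deco

@register("BN")
class BatchNorm:
    def __init__(self, num_features):
        self.num_features = num_features

def build_norm_layer(cfg, num_features):
    cfg = dict(cfg)                          # don't mutate the caller's cfg
    layer_cls = NORM_LAYERS[cfg.pop("type")]  # dispatch on the "type" string
    return layer_cls(num_features, **cfg)

layer = build_norm_layer({"type": "BN"}, 64)
print(type(layer).__name__, layer.num_features)  # BatchNorm 64
```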
spaces/AriaMei/TTSdemo/monotonic_align/setup.py
DELETED
@@ -1,9 +0,0 @@
from distutils.core import setup
from Cython.Build import cythonize
import numpy

setup(
    name = 'monotonic_align',
    ext_modules = cythonize("core.pyx"),
    include_dirs=[numpy.get_include()]
)
spaces/ArkanDash/rvc-models-new/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
DELETED
@@ -1,86 +0,0 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import pyworld
import numpy as np


class HarvestF0Predictor(F0Predictor):
    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
        self.hop_length = hop_length
        self.f0_min = f0_min
        self.f0_max = f0_max
        self.sampling_rate = sampling_rate

    def interpolate_f0(self, f0):
        """
        Interpolate the F0 contour over unvoiced frames.
        """

        data = np.reshape(f0, (f0.size, 1))

        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
        vuv_vector[data > 0.0] = 1.0
        vuv_vector[data <= 0.0] = 0.0

        ip_data = data

        frame_number = data.size
        last_value = 0.0
        for i in range(frame_number):
            if data[i] <= 0.0:
                j = i + 1
                for j in range(i + 1, frame_number):
                    if data[j] > 0.0:
                        break
                if j < frame_number - 1:
                    if last_value > 0.0:
                        step = (data[j] - data[i - 1]) / float(j - i)
                        for k in range(i, j):
                            ip_data[k] = data[i - 1] + step * (k - i + 1)
                    else:
                        for k in range(i, j):
                            ip_data[k] = data[j]
                else:
                    for k in range(i, frame_number):
                        ip_data[k] = last_value
            else:
                ip_data[i] = data[i]  # this copy may be unnecessary
                last_value = data[i]

        return ip_data[:, 0], vuv_vector[:, 0]

    def resize_f0(self, x, target_len):
        source = np.array(x)
        source[source < 0.001] = np.nan
        target = np.interp(
            np.arange(0, len(source) * target_len, len(source)) / target_len,
            np.arange(0, len(source)),
            source,
        )
        res = np.nan_to_num(target)
        return res

    def compute_f0(self, wav, p_len=None):
        if p_len is None:
            p_len = wav.shape[0] // self.hop_length
        f0, t = pyworld.harvest(
            wav.astype(np.double),
            fs=self.sampling_rate,
            f0_ceil=self.f0_max,
            f0_floor=self.f0_min,
            frame_period=1000 * self.hop_length / self.sampling_rate,
        )
        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
        return self.interpolate_f0(self.resize_f0(f0, p_len))[0]

    def compute_f0_uv(self, wav, p_len=None):
        if p_len is None:
            p_len = wav.shape[0] // self.hop_length
        f0, t = pyworld.harvest(
            wav.astype(np.double),
            fs=self.sampling_rate,
            f0_floor=self.f0_min,
            f0_ceil=self.f0_max,
            frame_period=1000 * self.hop_length / self.sampling_rate,
        )
        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
        return self.interpolate_f0(self.resize_f0(f0, p_len))
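`interpolate_f0` fills unvoiced (zero) frames by linear interpolation between the surrounding voiced values, and holds the last voiced value across a trailing run of zeros. A pure-Python sketch of that fill, without the vuv vector or numpy (a simplified illustration, not the exact loop above):

```python
# Sketch of the unvoiced-frame fill done by interpolate_f0: zeros between two
# voiced frames are linearly interpolated; leading zeros copy the next voiced
# value and trailing zeros repeat the last voiced value.
def fill_unvoiced(f0):
    f0 = list(f0)
    last = 0.0
    i = 0
    while i < len(f0):
        if f0[i] <= 0.0:
            j = i
            while j < len(f0) and f0[j] <= 0.0:
                j += 1                              # end of the zero run
            if j < len(f0) and last > 0.0:          # gap between two voiced frames
                step = (f0[j] - last) / (j - i + 1)
                for k in range(i, j):
                    f0[k] = last + step * (k - i + 1)
            else:                                   # leading or trailing zeros
                fill = f0[j] if j < len(f0) else last
                for k in range(i, j):
                    f0[k] = fill
            i = j
        else:
            last = f0[i]
            i += 1
    return f0

print(fill_unvoiced([100.0, 0.0, 0.0, 130.0, 0.0]))
# [100.0, 110.0, 120.0, 130.0, 130.0]
```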
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/config/__init__.py
DELETED
File without changes
spaces/AsakuraMizu/moe-tts/text/__init__.py
DELETED
@@ -1,32 +0,0 @@
""" from https://github.com/keithito/tacotron """
from text import cleaners


def text_to_sequence(text, symbols, cleaner_names):
    '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
    Args:
        text: string to convert to a sequence
        cleaner_names: names of the cleaner functions to run the text through
    Returns:
        List of integers corresponding to the symbols in the text
    '''
    _symbol_to_id = {s: i for i, s in enumerate(symbols)}

    sequence = []

    clean_text = _clean_text(text, cleaner_names)
    for symbol in clean_text:
        if symbol not in _symbol_to_id.keys():
            continue
        symbol_id = _symbol_to_id[symbol]
        sequence += [symbol_id]
    return sequence


def _clean_text(text, cleaner_names):
    for name in cleaner_names:
        cleaner = getattr(cleaners, name)
        if not cleaner:
            raise Exception('Unknown cleaner: %s' % name)
        text = cleaner(text)
    return text
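`text_to_sequence` maps each cleaned character to its index in `symbols`, silently skipping characters outside the symbol set. With a toy symbol list (hypothetical, not the project's real symbol table) and no cleaners applied:

```python
# Toy run of the text_to_sequence lookup above, with a made-up symbol set
# and the cleaner step omitted.
def text_to_sequence(text, symbols):
    symbol_to_id = {s: i for i, s in enumerate(symbols)}
    # unknown characters are dropped rather than raising
    return [symbol_to_id[ch] for ch in text if ch in symbol_to_id]

symbols = [" ", "a", "b", "c"]
print(text_to_sequence("a cab!", symbols))  # [1, 0, 3, 1, 2] -- '!' is dropped
```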
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/cmdoptions.py
DELETED
@@ -1,1074 +0,0 @@
"""
shared options and groups

The principle here is to define options once, but *not* instantiate them
globally. One reason being that options with action='append' can carry state
between parses. pip parses general options twice internally, and shouldn't
pass on state. To be consistent, all options will follow this design.
"""

# The following comment should be removed at some point in the future.
# mypy: strict-optional=False

import importlib.util
import logging
import os
import textwrap
from functools import partial
from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values
from textwrap import dedent
from typing import Any, Callable, Dict, Optional, Tuple

from pip._vendor.packaging.utils import canonicalize_name

from pip._internal.cli.parser import ConfigOptionParser
from pip._internal.exceptions import CommandError
from pip._internal.locations import USER_CACHE_DIR, get_src_prefix
from pip._internal.models.format_control import FormatControl
from pip._internal.models.index import PyPI
from pip._internal.models.target_python import TargetPython
from pip._internal.utils.hashes import STRONG_HASHES
from pip._internal.utils.misc import strtobool

logger = logging.getLogger(__name__)


def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None:
    """
    Raise an option parsing error using parser.error().

    Args:
        parser: an OptionParser instance.
        option: an Option instance.
        msg: the error text.
    """
    msg = f"{option} error: {msg}"
    msg = textwrap.fill(" ".join(msg.split()))
    parser.error(msg)


def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup:
    """
    Return an OptionGroup object
    group  -- assumed to be dict with 'name' and 'options' keys
    parser -- an optparse Parser
    """
    option_group = OptionGroup(parser, group["name"])
    for option in group["options"]:
        option_group.add_option(option())
    return option_group


def check_dist_restriction(options: Values, check_target: bool = False) -> None:
    """Function for determining if custom platform options are allowed.

    :param options: The OptionParser options.
    :param check_target: Whether or not to check if --target is being used.
    """
    dist_restriction_set = any(
        [
            options.python_version,
            options.platforms,
            options.abis,
            options.implementation,
        ]
    )

    binary_only = FormatControl(set(), {":all:"})
    sdist_dependencies_allowed = (
        options.format_control != binary_only and not options.ignore_dependencies
    )

    # Installations or downloads using dist restrictions must not combine
    # source distributions and dist-specific wheels, as they are not
    # guaranteed to be locally compatible.
    if dist_restriction_set and sdist_dependencies_allowed:
        raise CommandError(
            "When restricting platform and interpreter constraints using "
            "--python-version, --platform, --abi, or --implementation, "
            "either --no-deps must be set, or --only-binary=:all: must be "
            "set and --no-binary must not be set (or must be set to "
            ":none:)."
        )

    if check_target:
        if dist_restriction_set and not options.target_dir:
            raise CommandError(
                "Can not use any platform or abi specific options unless "
                "installing via '--target'"
            )


def _path_option_check(option: Option, opt: str, value: str) -> str:
    return os.path.expanduser(value)


def _package_name_option_check(option: Option, opt: str, value: str) -> str:
    return canonicalize_name(value)


class PipOption(Option):
    TYPES = Option.TYPES + ("path", "package_name")
    TYPE_CHECKER = Option.TYPE_CHECKER.copy()
    TYPE_CHECKER["package_name"] = _package_name_option_check
    TYPE_CHECKER["path"] = _path_option_check


###########
# options #
###########

help_: Callable[..., Option] = partial(
    Option,
    "-h",
    "--help",
    dest="help",
    action="help",
    help="Show help.",
)

debug_mode: Callable[..., Option] = partial(
    Option,
    "--debug",
    dest="debug_mode",
    action="store_true",
    default=False,
    help=(
        "Let unhandled exceptions propagate outside the main subroutine, "
        "instead of logging them to stderr."
    ),
)

isolated_mode: Callable[..., Option] = partial(
    Option,
    "--isolated",
    dest="isolated_mode",
    action="store_true",
    default=False,
    help=(
        "Run pip in an isolated mode, ignoring environment variables and user "
        "configuration."
    ),
)

require_virtualenv: Callable[..., Option] = partial(
    Option,
    "--require-virtualenv",
    "--require-venv",
    dest="require_venv",
    action="store_true",
    default=False,
    help=(
        "Allow pip to only run in a virtual environment; "
        "exit with an error otherwise."
    ),
)

override_externally_managed: Callable[..., Option] = partial(
    Option,
    "--break-system-packages",
    dest="override_externally_managed",
    action="store_true",
    help="Allow pip to modify an EXTERNALLY-MANAGED Python installation",
)

python: Callable[..., Option] = partial(
    Option,
    "--python",
    dest="python",
    help="Run pip with the specified Python interpreter.",
)

verbose: Callable[..., Option] = partial(
    Option,
    "-v",
    "--verbose",
    dest="verbose",
    action="count",
    default=0,
    help="Give more output. Option is additive, and can be used up to 3 times.",
)

no_color: Callable[..., Option] = partial(
    Option,
    "--no-color",
    dest="no_color",
    action="store_true",
    default=False,
    help="Suppress colored output.",
)

version: Callable[..., Option] = partial(
    Option,
    "-V",
    "--version",
    dest="version",
    action="store_true",
    help="Show version and exit.",
)

quiet: Callable[..., Option] = partial(
    Option,
    "-q",
    "--quiet",
    dest="quiet",
    action="count",
    default=0,
    help=(
        "Give less output. Option is additive, and can be used up to 3"
        " times (corresponding to WARNING, ERROR, and CRITICAL logging"
        " levels)."
    ),
)

progress_bar: Callable[..., Option] = partial(
    Option,
    "--progress-bar",
    dest="progress_bar",
    type="choice",
    choices=["on", "off"],
    default="on",
    help="Specify whether the progress bar should be used [on, off] (default: on)",
)

log: Callable[..., Option] = partial(
    PipOption,
    "--log",
    "--log-file",
    "--local-log",
    dest="log",
    metavar="path",
    type="path",
    help="Path to a verbose appending log.",
)

no_input: Callable[..., Option] = partial(
    Option,
    # Don't ask for input
    "--no-input",
    dest="no_input",
    action="store_true",
    default=False,
    help="Disable prompting for input.",
)

keyring_provider: Callable[..., Option] = partial(
    Option,
    "--keyring-provider",
    dest="keyring_provider",
    choices=["auto", "disabled", "import", "subprocess"],
    default="auto",
    help=(
        "Enable the credential lookup via the keyring library if user input is allowed."
        " Specify which mechanism to use [disabled, import, subprocess]."
        " (default: disabled)"
    ),
)

proxy: Callable[..., Option] = partial(
    Option,
    "--proxy",
    dest="proxy",
    type="str",
    default="",
    help="Specify a proxy in the form scheme://[user:passwd@]proxy.server:port.",
)

retries: Callable[..., Option] = partial(
    Option,
    "--retries",
    dest="retries",
    type="int",
    default=5,
    help="Maximum number of retries each connection should attempt "
    "(default %default times).",
)

timeout: Callable[..., Option] = partial(
    Option,
    "--timeout",
    "--default-timeout",
    metavar="sec",
    dest="timeout",
    type="float",
    default=15,
    help="Set the socket timeout (default %default seconds).",
)


def exists_action() -> Option:
    return Option(
        # Option when path already exist
        "--exists-action",
        dest="exists_action",
        type="choice",
        choices=["s", "i", "w", "b", "a"],
        default=[],
        action="append",
        metavar="action",
        help="Default action when a path already exists: "
        "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.",
    )


cert: Callable[..., Option] = partial(
    PipOption,
    "--cert",
    dest="cert",
    type="path",
    metavar="path",
    help=(
        "Path to PEM-encoded CA certificate bundle. "
        "If provided, overrides the default. "
        "See 'SSL Certificate Verification' in pip documentation "
        "for more information."
    ),
)

client_cert: Callable[..., Option] = partial(
    PipOption,
    "--client-cert",
    dest="client_cert",
    type="path",
    default=None,
    metavar="path",
    help="Path to SSL client certificate, a single file containing the "
    "private key and the certificate in PEM format.",
)

index_url: Callable[..., Option] = partial(
    Option,
    "-i",
    "--index-url",
    "--pypi-url",
    dest="index_url",
    metavar="URL",
    default=PyPI.simple_url,
    help="Base URL of the Python Package Index (default %default). "
    "This should point to a repository compliant with PEP 503 "
    "(the simple repository API) or a local directory laid out "
    "in the same format.",
)


def extra_index_url() -> Option:
    return Option(
        "--extra-index-url",
        dest="extra_index_urls",
        metavar="URL",
        action="append",
        default=[],
        help="Extra URLs of package indexes to use in addition to "
        "--index-url. Should follow the same rules as "
        "--index-url.",
    )


no_index: Callable[..., Option] = partial(
    Option,
    "--no-index",
    dest="no_index",
    action="store_true",
    default=False,
    help="Ignore package index (only looking at --find-links URLs instead).",
)


def find_links() -> Option:
    return Option(
        "-f",
        "--find-links",
        dest="find_links",
        action="append",
        default=[],
        metavar="url",
        help="If a URL or path to an html file, then parse for links to "
        "archives such as sdist (.tar.gz) or wheel (.whl) files. "
        "If a local path or file:// URL that's a directory, "
        "then look for archives in the directory listing. "
        "Links to VCS project URLs are not supported.",
    )


def trusted_host() -> Option:
    return Option(
        "--trusted-host",
        dest="trusted_hosts",
        action="append",
        metavar="HOSTNAME",
        default=[],
        help="Mark this host or host:port pair as trusted, even though it "
        "does not have valid or any HTTPS.",
|
402 |
-
)
|
403 |
-
|
404 |
-
|
405 |
-
def constraints() -> Option:
|
406 |
-
return Option(
|
407 |
-
"-c",
|
408 |
-
"--constraint",
|
409 |
-
dest="constraints",
|
410 |
-
action="append",
|
411 |
-
default=[],
|
412 |
-
metavar="file",
|
413 |
-
help="Constrain versions using the given constraints file. "
|
414 |
-
"This option can be used multiple times.",
|
415 |
-
)
|
416 |
-
|
417 |
-
|
418 |
-
def requirements() -> Option:
|
419 |
-
return Option(
|
420 |
-
"-r",
|
421 |
-
"--requirement",
|
422 |
-
dest="requirements",
|
423 |
-
action="append",
|
424 |
-
default=[],
|
425 |
-
metavar="file",
|
426 |
-
help="Install from the given requirements file. "
|
427 |
-
"This option can be used multiple times.",
|
428 |
-
)
|
429 |
-
|
430 |
-
|
431 |
-
def editable() -> Option:
|
432 |
-
return Option(
|
433 |
-
"-e",
|
434 |
-
"--editable",
|
435 |
-
dest="editables",
|
436 |
-
action="append",
|
437 |
-
default=[],
|
438 |
-
metavar="path/url",
|
439 |
-
help=(
|
440 |
-
"Install a project in editable mode (i.e. setuptools "
|
441 |
-
'"develop mode") from a local project path or a VCS url.'
|
442 |
-
),
|
443 |
-
)
|
444 |
-
|
445 |
-
|
446 |
-
def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None:
|
447 |
-
value = os.path.abspath(value)
|
448 |
-
setattr(parser.values, option.dest, value)
|
449 |
-
|
450 |
-
|
451 |
-
src: Callable[..., Option] = partial(
|
452 |
-
PipOption,
|
453 |
-
"--src",
|
454 |
-
"--source",
|
455 |
-
"--source-dir",
|
456 |
-
"--source-directory",
|
457 |
-
dest="src_dir",
|
458 |
-
type="path",
|
459 |
-
metavar="dir",
|
460 |
-
default=get_src_prefix(),
|
461 |
-
action="callback",
|
462 |
-
callback=_handle_src,
|
463 |
-
help="Directory to check out editable projects into. "
|
464 |
-
'The default in a virtualenv is "<venv path>/src". '
|
465 |
-
'The default for global installs is "<current dir>/src".',
|
466 |
-
)
|
467 |
-
|
468 |
-
|
469 |
-
def _get_format_control(values: Values, option: Option) -> Any:
|
470 |
-
"""Get a format_control object."""
|
471 |
-
return getattr(values, option.dest)
|
472 |
-
|
473 |
-
|
474 |
-
def _handle_no_binary(
|
475 |
-
option: Option, opt_str: str, value: str, parser: OptionParser
|
476 |
-
) -> None:
|
477 |
-
existing = _get_format_control(parser.values, option)
|
478 |
-
FormatControl.handle_mutual_excludes(
|
479 |
-
value,
|
480 |
-
existing.no_binary,
|
481 |
-
existing.only_binary,
|
482 |
-
)
|
483 |
-
|
484 |
-
|
485 |
-
def _handle_only_binary(
|
486 |
-
option: Option, opt_str: str, value: str, parser: OptionParser
|
487 |
-
) -> None:
|
488 |
-
existing = _get_format_control(parser.values, option)
|
489 |
-
FormatControl.handle_mutual_excludes(
|
490 |
-
value,
|
491 |
-
existing.only_binary,
|
492 |
-
existing.no_binary,
|
493 |
-
)
|
494 |
-
|
495 |
-
|
496 |
-
def no_binary() -> Option:
|
497 |
-
format_control = FormatControl(set(), set())
|
498 |
-
return Option(
|
499 |
-
"--no-binary",
|
500 |
-
dest="format_control",
|
501 |
-
action="callback",
|
502 |
-
callback=_handle_no_binary,
|
503 |
-
type="str",
|
504 |
-
default=format_control,
|
505 |
-
help="Do not use binary packages. Can be supplied multiple times, and "
|
506 |
-
'each time adds to the existing value. Accepts either ":all:" to '
|
507 |
-
'disable all binary packages, ":none:" to empty the set (notice '
|
508 |
-
"the colons), or one or more package names with commas between "
|
509 |
-
"them (no colons). Note that some packages are tricky to compile "
|
510 |
-
"and may fail to install when this option is used on them.",
|
511 |
-
)
|
512 |
-
|
513 |
-
|
514 |
-
def only_binary() -> Option:
|
515 |
-
format_control = FormatControl(set(), set())
|
516 |
-
return Option(
|
517 |
-
"--only-binary",
|
518 |
-
dest="format_control",
|
519 |
-
action="callback",
|
520 |
-
callback=_handle_only_binary,
|
521 |
-
type="str",
|
522 |
-
default=format_control,
|
523 |
-
help="Do not use source packages. Can be supplied multiple times, and "
|
524 |
-
'each time adds to the existing value. Accepts either ":all:" to '
|
525 |
-
'disable all source packages, ":none:" to empty the set, or one '
|
526 |
-
"or more package names with commas between them. Packages "
|
527 |
-
"without binary distributions will fail to install when this "
|
528 |
-
"option is used on them.",
|
529 |
-
)
|
530 |
-
|
531 |
-
|
532 |
-
platforms: Callable[..., Option] = partial(
|
533 |
-
Option,
|
534 |
-
"--platform",
|
535 |
-
dest="platforms",
|
536 |
-
metavar="platform",
|
537 |
-
action="append",
|
538 |
-
default=None,
|
539 |
-
help=(
|
540 |
-
"Only use wheels compatible with <platform>. Defaults to the "
|
541 |
-
"platform of the running system. Use this option multiple times to "
|
542 |
-
"specify multiple platforms supported by the target interpreter."
|
543 |
-
),
|
544 |
-
)
|
545 |
-
|
546 |
-
|
547 |
-
# This was made a separate function for unit-testing purposes.
|
548 |
-
def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]:
|
549 |
-
"""
|
550 |
-
Convert a version string like "3", "37", or "3.7.3" into a tuple of ints.
|
551 |
-
|
552 |
-
:return: A 2-tuple (version_info, error_msg), where `error_msg` is
|
553 |
-
non-None if and only if there was a parsing error.
|
554 |
-
"""
|
555 |
-
if not value:
|
556 |
-
# The empty string is the same as not providing a value.
|
557 |
-
return (None, None)
|
558 |
-
|
559 |
-
parts = value.split(".")
|
560 |
-
if len(parts) > 3:
|
561 |
-
return ((), "at most three version parts are allowed")
|
562 |
-
|
563 |
-
if len(parts) == 1:
|
564 |
-
# Then we are in the case of "3" or "37".
|
565 |
-
value = parts[0]
|
566 |
-
if len(value) > 1:
|
567 |
-
parts = [value[0], value[1:]]
|
568 |
-
|
569 |
-
try:
|
570 |
-
version_info = tuple(int(part) for part in parts)
|
571 |
-
except ValueError:
|
572 |
-
return ((), "each version part must be an integer")
|
573 |
-
|
574 |
-
return (version_info, None)
|
575 |
-
|
576 |
-
|
577 |
-
def _handle_python_version(
|
578 |
-
option: Option, opt_str: str, value: str, parser: OptionParser
|
579 |
-
) -> None:
|
580 |
-
"""
|
581 |
-
Handle a provided --python-version value.
|
582 |
-
"""
|
583 |
-
version_info, error_msg = _convert_python_version(value)
|
584 |
-
if error_msg is not None:
|
585 |
-
msg = "invalid --python-version value: {!r}: {}".format(
|
586 |
-
value,
|
587 |
-
error_msg,
|
588 |
-
)
|
589 |
-
raise_option_error(parser, option=option, msg=msg)
|
590 |
-
|
591 |
-
parser.values.python_version = version_info
|
592 |
-
|
593 |
-
|
594 |
-
python_version: Callable[..., Option] = partial(
|
595 |
-
Option,
|
596 |
-
"--python-version",
|
597 |
-
dest="python_version",
|
598 |
-
metavar="python_version",
|
599 |
-
action="callback",
|
600 |
-
callback=_handle_python_version,
|
601 |
-
type="str",
|
602 |
-
default=None,
|
603 |
-
help=dedent(
|
604 |
-
"""\
|
605 |
-
The Python interpreter version to use for wheel and "Requires-Python"
|
606 |
-
compatibility checks. Defaults to a version derived from the running
|
607 |
-
interpreter. The version can be specified using up to three dot-separated
|
608 |
-
integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor
|
609 |
-
version can also be given as a string without dots (e.g. "37" for 3.7.0).
|
610 |
-
"""
|
611 |
-
),
|
612 |
-
)
|
613 |
-
|
614 |
-
|
615 |
-
implementation: Callable[..., Option] = partial(
|
616 |
-
Option,
|
617 |
-
"--implementation",
|
618 |
-
dest="implementation",
|
619 |
-
metavar="implementation",
|
620 |
-
default=None,
|
621 |
-
help=(
|
622 |
-
"Only use wheels compatible with Python "
|
623 |
-
"implementation <implementation>, e.g. 'pp', 'jy', 'cp', "
|
624 |
-
" or 'ip'. If not specified, then the current "
|
625 |
-
"interpreter implementation is used. Use 'py' to force "
|
626 |
-
"implementation-agnostic wheels."
|
627 |
-
),
|
628 |
-
)
|
629 |
-
|
630 |
-
|
631 |
-
abis: Callable[..., Option] = partial(
|
632 |
-
Option,
|
633 |
-
"--abi",
|
634 |
-
dest="abis",
|
635 |
-
metavar="abi",
|
636 |
-
action="append",
|
637 |
-
default=None,
|
638 |
-
help=(
|
639 |
-
"Only use wheels compatible with Python abi <abi>, e.g. 'pypy_41'. "
|
640 |
-
"If not specified, then the current interpreter abi tag is used. "
|
641 |
-
"Use this option multiple times to specify multiple abis supported "
|
642 |
-
"by the target interpreter. Generally you will need to specify "
|
643 |
-
"--implementation, --platform, and --python-version when using this "
|
644 |
-
"option."
|
645 |
-
),
|
646 |
-
)
|
647 |
-
|
648 |
-
|
649 |
-
def add_target_python_options(cmd_opts: OptionGroup) -> None:
|
650 |
-
cmd_opts.add_option(platforms())
|
651 |
-
cmd_opts.add_option(python_version())
|
652 |
-
cmd_opts.add_option(implementation())
|
653 |
-
cmd_opts.add_option(abis())
|
654 |
-
|
655 |
-
|
656 |
-
def make_target_python(options: Values) -> TargetPython:
|
657 |
-
target_python = TargetPython(
|
658 |
-
platforms=options.platforms,
|
659 |
-
py_version_info=options.python_version,
|
660 |
-
abis=options.abis,
|
661 |
-
implementation=options.implementation,
|
662 |
-
)
|
663 |
-
|
664 |
-
return target_python
|
665 |
-
|
666 |
-
|
667 |
-
def prefer_binary() -> Option:
|
668 |
-
return Option(
|
669 |
-
"--prefer-binary",
|
670 |
-
dest="prefer_binary",
|
671 |
-
action="store_true",
|
672 |
-
default=False,
|
673 |
-
help="Prefer older binary packages over newer source packages.",
|
674 |
-
)
|
675 |
-
|
676 |
-
|
677 |
-
cache_dir: Callable[..., Option] = partial(
|
678 |
-
PipOption,
|
679 |
-
"--cache-dir",
|
680 |
-
dest="cache_dir",
|
681 |
-
default=USER_CACHE_DIR,
|
682 |
-
metavar="dir",
|
683 |
-
type="path",
|
684 |
-
help="Store the cache data in <dir>.",
|
685 |
-
)
|
686 |
-
|
687 |
-
|
688 |
-
def _handle_no_cache_dir(
|
689 |
-
option: Option, opt: str, value: str, parser: OptionParser
|
690 |
-
) -> None:
|
691 |
-
"""
|
692 |
-
Process a value provided for the --no-cache-dir option.
|
693 |
-
|
694 |
-
This is an optparse.Option callback for the --no-cache-dir option.
|
695 |
-
"""
|
696 |
-
# The value argument will be None if --no-cache-dir is passed via the
|
697 |
-
# command-line, since the option doesn't accept arguments. However,
|
698 |
-
# the value can be non-None if the option is triggered e.g. by an
|
699 |
-
# environment variable, like PIP_NO_CACHE_DIR=true.
|
700 |
-
if value is not None:
|
701 |
-
# Then parse the string value to get argument error-checking.
|
702 |
-
try:
|
703 |
-
strtobool(value)
|
704 |
-
except ValueError as exc:
|
705 |
-
raise_option_error(parser, option=option, msg=str(exc))
|
706 |
-
|
707 |
-
# Originally, setting PIP_NO_CACHE_DIR to a value that strtobool()
|
708 |
-
# converted to 0 (like "false" or "no") caused cache_dir to be disabled
|
709 |
-
# rather than enabled (logic would say the latter). Thus, we disable
|
710 |
-
# the cache directory not just on values that parse to True, but (for
|
711 |
-
# backwards compatibility reasons) also on values that parse to False.
|
712 |
-
# In other words, always set it to False if the option is provided in
|
713 |
-
# some (valid) form.
|
714 |
-
parser.values.cache_dir = False
|
715 |
-
|
716 |
-
|
717 |
-
no_cache: Callable[..., Option] = partial(
|
718 |
-
Option,
|
719 |
-
"--no-cache-dir",
|
720 |
-
dest="cache_dir",
|
721 |
-
action="callback",
|
722 |
-
callback=_handle_no_cache_dir,
|
723 |
-
help="Disable the cache.",
|
724 |
-
)
|
725 |
-
|
726 |
-
no_deps: Callable[..., Option] = partial(
|
727 |
-
Option,
|
728 |
-
"--no-deps",
|
729 |
-
"--no-dependencies",
|
730 |
-
dest="ignore_dependencies",
|
731 |
-
action="store_true",
|
732 |
-
default=False,
|
733 |
-
help="Don't install package dependencies.",
|
734 |
-
)
|
735 |
-
|
736 |
-
ignore_requires_python: Callable[..., Option] = partial(
|
737 |
-
Option,
|
738 |
-
"--ignore-requires-python",
|
739 |
-
dest="ignore_requires_python",
|
740 |
-
action="store_true",
|
741 |
-
help="Ignore the Requires-Python information.",
|
742 |
-
)
|
743 |
-
|
744 |
-
no_build_isolation: Callable[..., Option] = partial(
|
745 |
-
Option,
|
746 |
-
"--no-build-isolation",
|
747 |
-
dest="build_isolation",
|
748 |
-
action="store_false",
|
749 |
-
default=True,
|
750 |
-
help="Disable isolation when building a modern source distribution. "
|
751 |
-
"Build dependencies specified by PEP 518 must be already installed "
|
752 |
-
"if this option is used.",
|
753 |
-
)
|
754 |
-
|
755 |
-
check_build_deps: Callable[..., Option] = partial(
|
756 |
-
Option,
|
757 |
-
"--check-build-dependencies",
|
758 |
-
dest="check_build_deps",
|
759 |
-
action="store_true",
|
760 |
-
default=False,
|
761 |
-
help="Check the build dependencies when PEP517 is used.",
|
762 |
-
)
|
763 |
-
|
764 |
-
|
765 |
-
def _handle_no_use_pep517(
|
766 |
-
option: Option, opt: str, value: str, parser: OptionParser
|
767 |
-
) -> None:
|
768 |
-
"""
|
769 |
-
Process a value provided for the --no-use-pep517 option.
|
770 |
-
|
771 |
-
This is an optparse.Option callback for the no_use_pep517 option.
|
772 |
-
"""
|
773 |
-
# Since --no-use-pep517 doesn't accept arguments, the value argument
|
774 |
-
# will be None if --no-use-pep517 is passed via the command-line.
|
775 |
-
# However, the value can be non-None if the option is triggered e.g.
|
776 |
-
# by an environment variable, for example "PIP_NO_USE_PEP517=true".
|
777 |
-
if value is not None:
|
778 |
-
msg = """A value was passed for --no-use-pep517,
|
779 |
-
probably using either the PIP_NO_USE_PEP517 environment variable
|
780 |
-
or the "no-use-pep517" config file option. Use an appropriate value
|
781 |
-
of the PIP_USE_PEP517 environment variable or the "use-pep517"
|
782 |
-
config file option instead.
|
783 |
-
"""
|
784 |
-
raise_option_error(parser, option=option, msg=msg)
|
785 |
-
|
786 |
-
# If user doesn't wish to use pep517, we check if setuptools and wheel are installed
|
787 |
-
# and raise error if it is not.
|
788 |
-
packages = ("setuptools", "wheel")
|
789 |
-
if not all(importlib.util.find_spec(package) for package in packages):
|
790 |
-
msg = (
|
791 |
-
f"It is not possible to use --no-use-pep517 "
|
792 |
-
f"without {' and '.join(packages)} installed."
|
793 |
-
)
|
794 |
-
raise_option_error(parser, option=option, msg=msg)
|
795 |
-
|
796 |
-
# Otherwise, --no-use-pep517 was passed via the command-line.
|
797 |
-
parser.values.use_pep517 = False
|
798 |
-
|
799 |
-
|
800 |
-
use_pep517: Any = partial(
|
801 |
-
Option,
|
802 |
-
"--use-pep517",
|
803 |
-
dest="use_pep517",
|
804 |
-
action="store_true",
|
805 |
-
default=None,
|
806 |
-
help="Use PEP 517 for building source distributions "
|
807 |
-
"(use --no-use-pep517 to force legacy behaviour).",
|
808 |
-
)
|
809 |
-
|
810 |
-
no_use_pep517: Any = partial(
|
811 |
-
Option,
|
812 |
-
"--no-use-pep517",
|
813 |
-
dest="use_pep517",
|
814 |
-
action="callback",
|
815 |
-
callback=_handle_no_use_pep517,
|
816 |
-
default=None,
|
817 |
-
help=SUPPRESS_HELP,
|
818 |
-
)
|
819 |
-
|
820 |
-
|
821 |
-
def _handle_config_settings(
|
822 |
-
option: Option, opt_str: str, value: str, parser: OptionParser
|
823 |
-
) -> None:
|
824 |
-
key, sep, val = value.partition("=")
|
825 |
-
if sep != "=":
|
826 |
-
parser.error(f"Arguments to {opt_str} must be of the form KEY=VAL") # noqa
|
827 |
-
dest = getattr(parser.values, option.dest)
|
828 |
-
if dest is None:
|
829 |
-
dest = {}
|
830 |
-
setattr(parser.values, option.dest, dest)
|
831 |
-
if key in dest:
|
832 |
-
if isinstance(dest[key], list):
|
833 |
-
dest[key].append(val)
|
834 |
-
else:
|
835 |
-
dest[key] = [dest[key], val]
|
836 |
-
else:
|
837 |
-
dest[key] = val
|
838 |
-
|
839 |
-
|
840 |
-
config_settings: Callable[..., Option] = partial(
|
841 |
-
Option,
|
842 |
-
"-C",
|
843 |
-
"--config-settings",
|
844 |
-
dest="config_settings",
|
845 |
-
type=str,
|
846 |
-
action="callback",
|
847 |
-
callback=_handle_config_settings,
|
848 |
-
metavar="settings",
|
849 |
-
help="Configuration settings to be passed to the PEP 517 build backend. "
|
850 |
-
"Settings take the form KEY=VALUE. Use multiple --config-settings options "
|
851 |
-
"to pass multiple keys to the backend.",
|
852 |
-
)
|
853 |
-
|
854 |
-
build_options: Callable[..., Option] = partial(
|
855 |
-
Option,
|
856 |
-
"--build-option",
|
857 |
-
dest="build_options",
|
858 |
-
metavar="options",
|
859 |
-
action="append",
|
860 |
-
help="Extra arguments to be supplied to 'setup.py bdist_wheel'.",
|
861 |
-
)
|
862 |
-
|
863 |
-
global_options: Callable[..., Option] = partial(
|
864 |
-
Option,
|
865 |
-
"--global-option",
|
866 |
-
dest="global_options",
|
867 |
-
action="append",
|
868 |
-
metavar="options",
|
869 |
-
help="Extra global options to be supplied to the setup.py "
|
870 |
-
"call before the install or bdist_wheel command.",
|
871 |
-
)
|
872 |
-
|
873 |
-
no_clean: Callable[..., Option] = partial(
|
874 |
-
Option,
|
875 |
-
"--no-clean",
|
876 |
-
action="store_true",
|
877 |
-
default=False,
|
878 |
-
help="Don't clean up build directories.",
|
879 |
-
)
|
880 |
-
|
881 |
-
pre: Callable[..., Option] = partial(
|
882 |
-
Option,
|
883 |
-
"--pre",
|
884 |
-
action="store_true",
|
885 |
-
default=False,
|
886 |
-
help="Include pre-release and development versions. By default, "
|
887 |
-
"pip only finds stable versions.",
|
888 |
-
)
|
889 |
-
|
890 |
-
disable_pip_version_check: Callable[..., Option] = partial(
|
891 |
-
Option,
|
892 |
-
"--disable-pip-version-check",
|
893 |
-
dest="disable_pip_version_check",
|
894 |
-
action="store_true",
|
895 |
-
default=False,
|
896 |
-
help="Don't periodically check PyPI to determine whether a new version "
|
897 |
-
"of pip is available for download. Implied with --no-index.",
|
898 |
-
)
|
899 |
-
|
900 |
-
root_user_action: Callable[..., Option] = partial(
|
901 |
-
Option,
|
902 |
-
"--root-user-action",
|
903 |
-
dest="root_user_action",
|
904 |
-
default="warn",
|
905 |
-
choices=["warn", "ignore"],
|
906 |
-
help="Action if pip is run as a root user. By default, a warning message is shown.",
|
907 |
-
)
|
908 |
-
|
909 |
-
|
910 |
-
def _handle_merge_hash(
|
911 |
-
option: Option, opt_str: str, value: str, parser: OptionParser
|
912 |
-
) -> None:
|
913 |
-
"""Given a value spelled "algo:digest", append the digest to a list
|
914 |
-
pointed to in a dict by the algo name."""
|
915 |
-
if not parser.values.hashes:
|
916 |
-
parser.values.hashes = {}
|
917 |
-
try:
|
918 |
-
algo, digest = value.split(":", 1)
|
919 |
-
except ValueError:
|
920 |
-
parser.error(
|
921 |
-
"Arguments to {} must be a hash name " # noqa
|
922 |
-
"followed by a value, like --hash=sha256:"
|
923 |
-
"abcde...".format(opt_str)
|
924 |
-
)
|
925 |
-
if algo not in STRONG_HASHES:
|
926 |
-
parser.error(
|
927 |
-
"Allowed hash algorithms for {} are {}.".format( # noqa
|
928 |
-
opt_str, ", ".join(STRONG_HASHES)
|
929 |
-
)
|
930 |
-
)
|
931 |
-
parser.values.hashes.setdefault(algo, []).append(digest)
|
932 |
-
|
933 |
-
|
934 |
-
hash: Callable[..., Option] = partial(
|
935 |
-
Option,
|
936 |
-
"--hash",
|
937 |
-
# Hash values eventually end up in InstallRequirement.hashes due to
|
938 |
-
# __dict__ copying in process_line().
|
939 |
-
dest="hashes",
|
940 |
-
action="callback",
|
941 |
-
callback=_handle_merge_hash,
|
942 |
-
type="string",
|
943 |
-
help="Verify that the package's archive matches this "
|
944 |
-
"hash before installing. Example: --hash=sha256:abcdef...",
|
945 |
-
)
|
946 |
-
|
947 |
-
|
948 |
-
require_hashes: Callable[..., Option] = partial(
|
949 |
-
Option,
|
950 |
-
"--require-hashes",
|
951 |
-
dest="require_hashes",
|
952 |
-
action="store_true",
|
953 |
-
default=False,
|
954 |
-
help="Require a hash to check each requirement against, for "
|
955 |
-
"repeatable installs. This option is implied when any package in a "
|
956 |
-
"requirements file has a --hash option.",
|
957 |
-
)
|
958 |
-
|
959 |
-
|
960 |
-
list_path: Callable[..., Option] = partial(
|
961 |
-
PipOption,
|
962 |
-
"--path",
|
963 |
-
dest="path",
|
964 |
-
type="path",
|
965 |
-
action="append",
|
966 |
-
help="Restrict to the specified installation path for listing "
|
967 |
-
"packages (can be used multiple times).",
|
968 |
-
)
|
969 |
-
|
970 |
-
|
971 |
-
def check_list_path_option(options: Values) -> None:
|
972 |
-
if options.path and (options.user or options.local):
|
973 |
-
raise CommandError("Cannot combine '--path' with '--user' or '--local'")
|
974 |
-
|
975 |
-
|
976 |
-
list_exclude: Callable[..., Option] = partial(
|
977 |
-
PipOption,
|
978 |
-
"--exclude",
|
979 |
-
dest="excludes",
|
980 |
-
action="append",
|
981 |
-
metavar="package",
|
982 |
-
type="package_name",
|
983 |
-
help="Exclude specified package from the output",
|
984 |
-
)
|
985 |
-
|
986 |
-
|
987 |
-
no_python_version_warning: Callable[..., Option] = partial(
|
988 |
-
Option,
|
989 |
-
"--no-python-version-warning",
|
990 |
-
dest="no_python_version_warning",
|
991 |
-
action="store_true",
|
992 |
-
default=False,
|
993 |
-
help="Silence deprecation warnings for upcoming unsupported Pythons.",
|
994 |
-
)
|
995 |
-
|
996 |
-
|
997 |
-
# Features that are now always on. A warning is printed if they are used.
|
998 |
-
ALWAYS_ENABLED_FEATURES = [
|
999 |
-
"no-binary-enable-wheel-cache", # always on since 23.1
|
1000 |
-
]
|
1001 |
-
|
1002 |
-
use_new_feature: Callable[..., Option] = partial(
|
1003 |
-
Option,
|
1004 |
-
"--use-feature",
|
1005 |
-
dest="features_enabled",
|
1006 |
-
metavar="feature",
|
1007 |
-
action="append",
|
1008 |
-
default=[],
|
1009 |
-
choices=[
|
1010 |
-
"fast-deps",
|
1011 |
-
"truststore",
|
1012 |
-
]
|
1013 |
-
+ ALWAYS_ENABLED_FEATURES,
|
1014 |
-
help="Enable new functionality, that may be backward incompatible.",
|
1015 |
-
)
|
1016 |
-
|
1017 |
-
use_deprecated_feature: Callable[..., Option] = partial(
|
1018 |
-
Option,
|
1019 |
-
"--use-deprecated",
|
1020 |
-
dest="deprecated_features_enabled",
|
1021 |
-
metavar="feature",
|
1022 |
-
action="append",
|
1023 |
-
default=[],
|
1024 |
-
choices=[
|
1025 |
-
"legacy-resolver",
|
1026 |
-
],
|
1027 |
-
help=("Enable deprecated functionality, that will be removed in the future."),
|
1028 |
-
)
|
1029 |
-
|
1030 |
-
|
1031 |
-
##########
|
1032 |
-
# groups #
|
1033 |
-
##########
|
1034 |
-
|
1035 |
-
general_group: Dict[str, Any] = {
|
1036 |
-
"name": "General Options",
|
1037 |
-
"options": [
|
1038 |
-
help_,
|
1039 |
-
debug_mode,
|
1040 |
-
isolated_mode,
|
1041 |
-
require_virtualenv,
|
1042 |
-
python,
|
1043 |
-
verbose,
|
1044 |
-
version,
|
1045 |
-
quiet,
|
1046 |
-
log,
|
1047 |
-
no_input,
|
1048 |
-
keyring_provider,
|
1049 |
-
proxy,
|
1050 |
-
retries,
|
1051 |
-
timeout,
|
1052 |
-
exists_action,
|
1053 |
-
trusted_host,
|
1054 |
-
cert,
|
1055 |
-
client_cert,
|
1056 |
-
cache_dir,
|
1057 |
-
no_cache,
|
1058 |
-
disable_pip_version_check,
|
1059 |
-
no_color,
|
1060 |
-
no_python_version_warning,
|
1061 |
-
use_new_feature,
|
1062 |
-
use_deprecated_feature,
|
1063 |
-
],
|
1064 |
-
}
|
1065 |
-
|
1066 |
-
index_group: Dict[str, Any] = {
|
1067 |
-
"name": "Package Index Options",
|
1068 |
-
"options": [
|
1069 |
-
index_url,
|
1070 |
-
extra_index_url,
|
1071 |
-
no_index,
|
1072 |
-
find_links,
|
1073 |
-
],
|
1074 |
-
}
|
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/platformdirs/api.py
DELETED
@@ -1,179 +0,0 @@
from __future__ import annotations

import os
import sys
from abc import ABC, abstractmethod
from pathlib import Path

if sys.version_info >= (3, 8):  # pragma: no branch
    from typing import Literal  # pragma: no cover


class PlatformDirsABC(ABC):
    """
    Abstract base class for platform directories.
    """

    def __init__(
        self,
        appname: str | None = None,
        appauthor: str | None | Literal[False] = None,
        version: str | None = None,
        roaming: bool = False,
        multipath: bool = False,
        opinion: bool = True,
        ensure_exists: bool = False,
    ):
        """
        Create a new platform directory.

        :param appname: See `appname`.
        :param appauthor: See `appauthor`.
        :param version: See `version`.
        :param roaming: See `roaming`.
        :param multipath: See `multipath`.
        :param opinion: See `opinion`.
        :param ensure_exists: See `ensure_exists`.
        """
        self.appname = appname  #: The name of application.
        self.appauthor = appauthor
        """
        The name of the app author or distributing body for this application. Typically, it is the owning company name.
        Defaults to `appname`. You may pass ``False`` to disable it.
        """
        self.version = version
        """
        An optional version path element to append to the path. You might want to use this if you want multiple versions
        of your app to be able to run independently. If used, this would typically be ``<major>.<minor>``.
        """
        self.roaming = roaming
        """
        Whether to use the roaming appdata directory on Windows. That means that for users on a Windows network setup
        for roaming profiles, this user data will be synced on login (see
        `here <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>`_).
        """
        self.multipath = multipath
        """
        An optional parameter only applicable to Unix/Linux which indicates that the entire list of data dirs should be
        returned. By default, the first item would only be returned.
        """
        self.opinion = opinion  #: A flag to indicating to use opinionated values.
        self.ensure_exists = ensure_exists
        """
        Optionally create the directory (and any missing parents) upon access if it does not exist.
        By default, no directories are created.
        """

    def _append_app_name_and_version(self, *base: str) -> str:
        params = list(base[1:])
        if self.appname:
            params.append(self.appname)
            if self.version:
                params.append(self.version)
        path = os.path.join(base[0], *params)
        self._optionally_create_directory(path)
        return path

    def _optionally_create_directory(self, path: str) -> None:
        if self.ensure_exists:
            Path(path).mkdir(parents=True, exist_ok=True)

    @property
    @abstractmethod
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user"""

    @property
    @abstractmethod
    def site_data_dir(self) -> str:
        """:return: data directory shared by users"""

    @property
    @abstractmethod
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user"""

    @property
    @abstractmethod
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users"""

    @property
    @abstractmethod
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user"""

    @property
    @abstractmethod
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users"""

    @property
    @abstractmethod
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user"""

    @property
    @abstractmethod
    def user_log_dir(self) -> str:
|
119 |
-
""":return: log directory tied to the user"""
|
120 |
-
|
121 |
-
@property
|
122 |
-
@abstractmethod
|
123 |
-
def user_documents_dir(self) -> str:
|
124 |
-
""":return: documents directory tied to the user"""
|
125 |
-
|
126 |
-
@property
|
127 |
-
@abstractmethod
|
128 |
-
def user_runtime_dir(self) -> str:
|
129 |
-
""":return: runtime directory tied to the user"""
|
130 |
-
|
131 |
-
@property
|
132 |
-
def user_data_path(self) -> Path:
|
133 |
-
""":return: data path tied to the user"""
|
134 |
-
return Path(self.user_data_dir)
|
135 |
-
|
136 |
-
@property
|
137 |
-
def site_data_path(self) -> Path:
|
138 |
-
""":return: data path shared by users"""
|
139 |
-
return Path(self.site_data_dir)
|
140 |
-
|
141 |
-
@property
|
142 |
-
def user_config_path(self) -> Path:
|
143 |
-
""":return: config path tied to the user"""
|
144 |
-
return Path(self.user_config_dir)
|
145 |
-
|
146 |
-
@property
|
147 |
-
def site_config_path(self) -> Path:
|
148 |
-
""":return: config path shared by the users"""
|
149 |
-
return Path(self.site_config_dir)
|
150 |
-
|
151 |
-
@property
|
152 |
-
def user_cache_path(self) -> Path:
|
153 |
-
""":return: cache path tied to the user"""
|
154 |
-
return Path(self.user_cache_dir)
|
155 |
-
|
156 |
-
@property
|
157 |
-
def site_cache_path(self) -> Path:
|
158 |
-
""":return: cache path shared by users"""
|
159 |
-
return Path(self.site_cache_dir)
|
160 |
-
|
161 |
-
@property
|
162 |
-
def user_state_path(self) -> Path:
|
163 |
-
""":return: state path tied to the user"""
|
164 |
-
return Path(self.user_state_dir)
|
165 |
-
|
166 |
-
@property
|
167 |
-
def user_log_path(self) -> Path:
|
168 |
-
""":return: log path tied to the user"""
|
169 |
-
return Path(self.user_log_dir)
|
170 |
-
|
171 |
-
@property
|
172 |
-
def user_documents_path(self) -> Path:
|
173 |
-
""":return: documents path tied to the user"""
|
174 |
-
return Path(self.user_documents_dir)
|
175 |
-
|
176 |
-
@property
|
177 |
-
def user_runtime_path(self) -> Path:
|
178 |
-
""":return: runtime path tied to the user"""
|
179 |
-
return Path(self.user_runtime_dir)
|
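The deleted `_append_app_name_and_version` and `_optionally_create_directory` helpers above carry the whole path-building policy of the class. A minimal stdlib-only sketch of that logic; the free function `append_app_name_and_version` is our name for illustration, not part of the platformdirs API:

```python
import os
import tempfile
from pathlib import Path


def append_app_name_and_version(base, appname=None, version=None, ensure_exists=False):
    # Mirror PlatformDirsABC._append_app_name_and_version: join the base
    # directory with the optional appname and version segments, then
    # optionally create the directory (and missing parents) on access.
    params = [part for part in (appname, version) if part]
    path = os.path.join(base, *params)
    if ensure_exists:
        Path(path).mkdir(parents=True, exist_ok=True)
    return path


root = tempfile.mkdtemp()
p = append_app_name_and_version(root, "MyApp", "1.2", ensure_exists=True)
```

With `ensure_exists=True` the returned path (`<root>/MyApp/1.2`) exists on disk; with the default `False` the path is only computed, matching the class's default of creating nothing.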
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/_asyncio.py
DELETED
@@ -1,94 +0,0 @@
-# Copyright 2016 Étienne Bersac
-# Copyright 2016 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import functools
-import sys
-import typing as t
-from asyncio import sleep
-
-from pip._vendor.tenacity import AttemptManager
-from pip._vendor.tenacity import BaseRetrying
-from pip._vendor.tenacity import DoAttempt
-from pip._vendor.tenacity import DoSleep
-from pip._vendor.tenacity import RetryCallState
-
-WrappedFnReturnT = t.TypeVar("WrappedFnReturnT")
-WrappedFn = t.TypeVar("WrappedFn", bound=t.Callable[..., t.Awaitable[t.Any]])
-
-
-class AsyncRetrying(BaseRetrying):
-    sleep: t.Callable[[float], t.Awaitable[t.Any]]
-
-    def __init__(self, sleep: t.Callable[[float], t.Awaitable[t.Any]] = sleep, **kwargs: t.Any) -> None:
-        super().__init__(**kwargs)
-        self.sleep = sleep
-
-    async def __call__(  # type: ignore[override]
-        self, fn: WrappedFn, *args: t.Any, **kwargs: t.Any
-    ) -> WrappedFnReturnT:
-        self.begin()
-
-        retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
-        while True:
-            do = self.iter(retry_state=retry_state)
-            if isinstance(do, DoAttempt):
-                try:
-                    result = await fn(*args, **kwargs)
-                except BaseException:  # noqa: B902
-                    retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]
-                else:
-                    retry_state.set_result(result)
-            elif isinstance(do, DoSleep):
-                retry_state.prepare_for_next_attempt()
-                await self.sleep(do)
-            else:
-                return do  # type: ignore[no-any-return]
-
-    def __iter__(self) -> t.Generator[AttemptManager, None, None]:
-        raise TypeError("AsyncRetrying object is not iterable")
-
-    def __aiter__(self) -> "AsyncRetrying":
-        self.begin()
-        self._retry_state = RetryCallState(self, fn=None, args=(), kwargs={})
-        return self
-
-    async def __anext__(self) -> AttemptManager:
-        while True:
-            do = self.iter(retry_state=self._retry_state)
-            if do is None:
-                raise StopAsyncIteration
-            elif isinstance(do, DoAttempt):
-                return AttemptManager(retry_state=self._retry_state)
-            elif isinstance(do, DoSleep):
-                self._retry_state.prepare_for_next_attempt()
-                await self.sleep(do)
-            else:
-                raise StopAsyncIteration
-
-    def wraps(self, fn: WrappedFn) -> WrappedFn:
-        fn = super().wraps(fn)
-        # Ensure wrapper is recognized as a coroutine function.
-
-        @functools.wraps(fn)
-        async def async_wrapped(*args: t.Any, **kwargs: t.Any) -> t.Any:
-            return await fn(*args, **kwargs)
-
-        # Preserve attributes
-        async_wrapped.retry = fn.retry  # type: ignore[attr-defined]
-        async_wrapped.retry_with = fn.retry_with  # type: ignore[attr-defined]
-
-        return async_wrapped  # type: ignore[return-value]
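`AsyncRetrying.__call__` above is an attempt/sleep loop: await the coroutine, record the result or exception, sleep between failed attempts, and stop when the policy says so. A self-contained stdlib sketch of that loop under a fixed-attempts, fixed-delay policy (the helper `retry_async` is ours, not tenacity's API):

```python
import asyncio


async def retry_async(fn, attempts=3, delay=0.01):
    # Minimal analogue of AsyncRetrying.__call__: attempt the coroutine,
    # sleep between failures, and re-raise after the final attempt.
    for attempt in range(1, attempts + 1):
        try:
            return await fn()
        except Exception:
            if attempt == attempts:
                raise
            await asyncio.sleep(delay)


calls = {"n": 0}


async def flaky():
    # Fails twice, then succeeds on the third call.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"


result = asyncio.run(retry_async(flaky))
```

The real class generalizes each fixed choice here: `self.iter` consults pluggable stop/wait/retry strategies, and `self.sleep` is injectable so the wait can be something other than `asyncio.sleep`.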
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/install.py
DELETED
@@ -1,814 +0,0 @@
-"""distutils.command.install
-
-Implements the Distutils 'install' command."""
-
-import sys
-import os
-import contextlib
-import sysconfig
-import itertools
-
-from distutils import log
-from distutils.core import Command
-from distutils.debug import DEBUG
-from distutils.sysconfig import get_config_vars
-from distutils.file_util import write_file
-from distutils.util import convert_path, subst_vars, change_root
-from distutils.util import get_platform
-from distutils.errors import DistutilsOptionError, DistutilsPlatformError
-from . import _framework_compat as fw
-from .. import _collections
-
-from site import USER_BASE
-from site import USER_SITE
-
-HAS_USER_SITE = True
-
-WINDOWS_SCHEME = {
-    'purelib': '{base}/Lib/site-packages',
-    'platlib': '{base}/Lib/site-packages',
-    'headers': '{base}/Include/{dist_name}',
-    'scripts': '{base}/Scripts',
-    'data': '{base}',
-}
-
-INSTALL_SCHEMES = {
-    'posix_prefix': {
-        'purelib': '{base}/lib/{implementation_lower}{py_version_short}/site-packages',
-        'platlib': '{platbase}/{platlibdir}/{implementation_lower}'
-        '{py_version_short}/site-packages',
-        'headers': '{base}/include/{implementation_lower}'
-        '{py_version_short}{abiflags}/{dist_name}',
-        'scripts': '{base}/bin',
-        'data': '{base}',
-    },
-    'posix_home': {
-        'purelib': '{base}/lib/{implementation_lower}',
-        'platlib': '{base}/{platlibdir}/{implementation_lower}',
-        'headers': '{base}/include/{implementation_lower}/{dist_name}',
-        'scripts': '{base}/bin',
-        'data': '{base}',
-    },
-    'nt': WINDOWS_SCHEME,
-    'pypy': {
-        'purelib': '{base}/site-packages',
-        'platlib': '{base}/site-packages',
-        'headers': '{base}/include/{dist_name}',
-        'scripts': '{base}/bin',
-        'data': '{base}',
-    },
-    'pypy_nt': {
-        'purelib': '{base}/site-packages',
-        'platlib': '{base}/site-packages',
-        'headers': '{base}/include/{dist_name}',
-        'scripts': '{base}/Scripts',
-        'data': '{base}',
-    },
-}
-
-# user site schemes
-if HAS_USER_SITE:
-    INSTALL_SCHEMES['nt_user'] = {
-        'purelib': '{usersite}',
-        'platlib': '{usersite}',
-        'headers': '{userbase}/{implementation}{py_version_nodot_plat}'
-        '/Include/{dist_name}',
-        'scripts': '{userbase}/{implementation}{py_version_nodot_plat}/Scripts',
-        'data': '{userbase}',
-    }
-
-    INSTALL_SCHEMES['posix_user'] = {
-        'purelib': '{usersite}',
-        'platlib': '{usersite}',
-        'headers': '{userbase}/include/{implementation_lower}'
-        '{py_version_short}{abiflags}/{dist_name}',
-        'scripts': '{userbase}/bin',
-        'data': '{userbase}',
-    }
-
-
-INSTALL_SCHEMES.update(fw.schemes)
-
-
-# The keys to an installation scheme; if any new types of files are to be
-# installed, be sure to add an entry to every installation scheme above,
-# and to SCHEME_KEYS here.
-SCHEME_KEYS = ('purelib', 'platlib', 'headers', 'scripts', 'data')
-
-
-def _load_sysconfig_schemes():
-    with contextlib.suppress(AttributeError):
-        return {
-            scheme: sysconfig.get_paths(scheme, expand=False)
-            for scheme in sysconfig.get_scheme_names()
-        }
-
-
-def _load_schemes():
-    """
-    Extend default schemes with schemes from sysconfig.
-    """
-
-    sysconfig_schemes = _load_sysconfig_schemes() or {}
-
-    return {
-        scheme: {
-            **INSTALL_SCHEMES.get(scheme, {}),
-            **sysconfig_schemes.get(scheme, {}),
-        }
-        for scheme in set(itertools.chain(INSTALL_SCHEMES, sysconfig_schemes))
-    }
-
-
-def _get_implementation():
-    if hasattr(sys, 'pypy_version_info'):
-        return 'PyPy'
-    else:
-        return 'Python'
-
-
-def _select_scheme(ob, name):
-    scheme = _inject_headers(name, _load_scheme(_resolve_scheme(name)))
-    vars(ob).update(_remove_set(ob, _scheme_attrs(scheme)))
-
-
-def _remove_set(ob, attrs):
-    """
-    Include only attrs that are None in ob.
-    """
-    return {key: value for key, value in attrs.items() if getattr(ob, key) is None}
-
-
-def _resolve_scheme(name):
-    os_name, sep, key = name.partition('_')
-    try:
-        resolved = sysconfig.get_preferred_scheme(key)
-    except Exception:
-        resolved = fw.scheme(_pypy_hack(name))
-    return resolved
-
-
-def _load_scheme(name):
-    return _load_schemes()[name]
-
-
-def _inject_headers(name, scheme):
-    """
-    Given a scheme name and the resolved scheme,
-    if the scheme does not include headers, resolve
-    the fallback scheme for the name and use headers
-    from it. pypa/distutils#88
-    """
-    # Bypass the preferred scheme, which may not
-    # have defined headers.
-    fallback = _load_scheme(_pypy_hack(name))
-    scheme.setdefault('headers', fallback['headers'])
-    return scheme
-
-
-def _scheme_attrs(scheme):
-    """Resolve install directories by applying the install schemes."""
-    return {f'install_{key}': scheme[key] for key in SCHEME_KEYS}
-
-
-def _pypy_hack(name):
-    PY37 = sys.version_info < (3, 8)
-    old_pypy = hasattr(sys, 'pypy_version_info') and PY37
-    prefix = not name.endswith(('_user', '_home'))
-    pypy_name = 'pypy' + '_nt' * (os.name == 'nt')
-    return pypy_name if old_pypy and prefix else name
-
-
-class install(Command):
-
-    description = "install everything from build directory"
-
-    user_options = [
-        # Select installation scheme and set base director(y|ies)
-        ('prefix=', None, "installation prefix"),
-        ('exec-prefix=', None, "(Unix only) prefix for platform-specific files"),
-        ('home=', None, "(Unix only) home directory to install under"),
-        # Or, just set the base director(y|ies)
-        (
-            'install-base=',
-            None,
-            "base installation directory (instead of --prefix or --home)",
-        ),
-        (
-            'install-platbase=',
-            None,
-            "base installation directory for platform-specific files "
-            + "(instead of --exec-prefix or --home)",
-        ),
-        ('root=', None, "install everything relative to this alternate root directory"),
-        # Or, explicitly set the installation scheme
-        (
-            'install-purelib=',
-            None,
-            "installation directory for pure Python module distributions",
-        ),
-        (
-            'install-platlib=',
-            None,
-            "installation directory for non-pure module distributions",
-        ),
-        (
-            'install-lib=',
-            None,
-            "installation directory for all module distributions "
-            + "(overrides --install-purelib and --install-platlib)",
-        ),
-        ('install-headers=', None, "installation directory for C/C++ headers"),
-        ('install-scripts=', None, "installation directory for Python scripts"),
-        ('install-data=', None, "installation directory for data files"),
-        # Byte-compilation options -- see install_lib.py for details, as
-        # these are duplicated from there (but only install_lib does
-        # anything with them).
-        ('compile', 'c', "compile .py to .pyc [default]"),
-        ('no-compile', None, "don't compile .py files"),
-        (
-            'optimize=',
-            'O',
-            "also compile with optimization: -O1 for \"python -O\", "
-            "-O2 for \"python -OO\", and -O0 to disable [default: -O0]",
-        ),
-        # Miscellaneous control options
-        ('force', 'f', "force installation (overwrite any existing files)"),
-        ('skip-build', None, "skip rebuilding everything (for testing/debugging)"),
-        # Where to install documentation (eventually!)
-        # ('doc-format=', None, "format of documentation to generate"),
-        # ('install-man=', None, "directory for Unix man pages"),
-        # ('install-html=', None, "directory for HTML documentation"),
-        # ('install-info=', None, "directory for GNU info files"),
-        ('record=', None, "filename in which to record list of installed files"),
-    ]
-
-    boolean_options = ['compile', 'force', 'skip-build']
-
-    if HAS_USER_SITE:
-        user_options.append(
-            ('user', None, "install in user site-package '%s'" % USER_SITE)
-        )
-        boolean_options.append('user')
-
-    negative_opt = {'no-compile': 'compile'}
-
-    def initialize_options(self):
-        """Initializes options."""
-        # High-level options: these select both an installation base
-        # and scheme.
-        self.prefix = None
-        self.exec_prefix = None
-        self.home = None
-        self.user = 0
-
-        # These select only the installation base; it's up to the user to
-        # specify the installation scheme (currently, that means supplying
-        # the --install-{platlib,purelib,scripts,data} options).
-        self.install_base = None
-        self.install_platbase = None
-        self.root = None
-
-        # These options are the actual installation directories; if not
-        # supplied by the user, they are filled in using the installation
-        # scheme implied by prefix/exec-prefix/home and the contents of
-        # that installation scheme.
-        self.install_purelib = None  # for pure module distributions
-        self.install_platlib = None  # non-pure (dists w/ extensions)
-        self.install_headers = None  # for C/C++ headers
-        self.install_lib = None  # set to either purelib or platlib
-        self.install_scripts = None
-        self.install_data = None
-        self.install_userbase = USER_BASE
-        self.install_usersite = USER_SITE
-
-        self.compile = None
-        self.optimize = None
-
-        # Deprecated
-        # These two are for putting non-packagized distributions into their
-        # own directory and creating a .pth file if it makes sense.
-        # 'extra_path' comes from the setup file; 'install_path_file' can
-        # be turned off if it makes no sense to install a .pth file.  (But
-        # better to install it uselessly than to guess wrong and not
-        # install it when it's necessary and would be used!)  Currently,
-        # 'install_path_file' is always true unless some outsider meddles
-        # with it.
-        self.extra_path = None
-        self.install_path_file = 1
-
-        # 'force' forces installation, even if target files are not
-        # out-of-date.  'skip_build' skips running the "build" command,
-        # handy if you know it's not necessary.  'warn_dir' (which is *not*
-        # a user option, it's just there so the bdist_* commands can turn
-        # it off) determines whether we warn about installing to a
-        # directory not in sys.path.
-        self.force = 0
-        self.skip_build = 0
-        self.warn_dir = 1
-
-        # These are only here as a conduit from the 'build' command to the
-        # 'install_*' commands that do the real work.  ('build_base' isn't
-        # actually used anywhere, but it might be useful in future.)  They
-        # are not user options, because if the user told the install
-        # command where the build directory is, that wouldn't affect the
-        # build command.
-        self.build_base = None
-        self.build_lib = None
-
-        # Not defined yet because we don't know anything about
-        # documentation yet.
-        # self.install_man = None
-        # self.install_html = None
-        # self.install_info = None
-
-        self.record = None
-
-    # -- Option finalizing methods -------------------------------------
-    # (This is rather more involved than for most commands,
-    # because this is where the policy for installing third-
-    # party Python modules on various platforms given a wide
-    # array of user input is decided.  Yes, it's quite complex!)
-
-    def finalize_options(self):  # noqa: C901
-        """Finalizes options."""
-        # This method (and its helpers, like 'finalize_unix()',
-        # 'finalize_other()', and 'select_scheme()') is where the default
-        # installation directories for modules, extension modules, and
-        # anything else we care to install from a Python module
-        # distribution are decided.  Thus, this code makes a pretty
-        # important policy statement about how third-party stuff is added
-        # to a Python installation!  Note that the actual work of
-        # installation is done by the relatively simple 'install_*'
-        # commands; they just take their orders from the installation
-        # directory options determined here.
-
-        # Check for errors/inconsistencies in the options; first, stuff
-        # that's wrong on any platform.
-
-        if (self.prefix or self.exec_prefix or self.home) and (
-            self.install_base or self.install_platbase
-        ):
-            raise DistutilsOptionError(
-                "must supply either prefix/exec-prefix/home or "
-                + "install-base/install-platbase -- not both"
-            )
-
-        if self.home and (self.prefix or self.exec_prefix):
-            raise DistutilsOptionError(
-                "must supply either home or prefix/exec-prefix -- not both"
-            )
-
-        if self.user and (
-            self.prefix
-            or self.exec_prefix
-            or self.home
-            or self.install_base
-            or self.install_platbase
-        ):
-            raise DistutilsOptionError(
-                "can't combine user with prefix, "
-                "exec_prefix/home, or install_(plat)base"
-            )
-
-        # Next, stuff that's wrong (or dubious) only on certain platforms.
-        if os.name != "posix":
-            if self.exec_prefix:
-                self.warn("exec-prefix option ignored on this platform")
-                self.exec_prefix = None
-
-        # Now the interesting logic -- so interesting that we farm it out
-        # to other methods.  The goal of these methods is to set the final
-        # values for the install_{lib,scripts,data,...} options, using as
-        # input a heady brew of prefix, exec_prefix, home, install_base,
-        # install_platbase, user-supplied versions of
-        # install_{purelib,platlib,lib,scripts,data,...}, and the
-        # install schemes.  Phew!
-
-        self.dump_dirs("pre-finalize_{unix,other}")
-
-        if os.name == 'posix':
-            self.finalize_unix()
-        else:
-            self.finalize_other()
-
-        self.dump_dirs("post-finalize_{unix,other}()")
-
-        # Expand configuration variables, tilde, etc. in self.install_base
-        # and self.install_platbase -- that way, we can use $base or
-        # $platbase in the other installation directories and not worry
-        # about needing recursive variable expansion (shudder).
-
-        py_version = sys.version.split()[0]
-        (prefix, exec_prefix) = get_config_vars('prefix', 'exec_prefix')
-        try:
-            abiflags = sys.abiflags
-        except AttributeError:
-            # sys.abiflags may not be defined on all platforms.
-            abiflags = ''
-        local_vars = {
-            'dist_name': self.distribution.get_name(),
-            'dist_version': self.distribution.get_version(),
-            'dist_fullname': self.distribution.get_fullname(),
-            'py_version': py_version,
-            'py_version_short': '%d.%d' % sys.version_info[:2],
-            'py_version_nodot': '%d%d' % sys.version_info[:2],
-            'sys_prefix': prefix,
-            'prefix': prefix,
-            'sys_exec_prefix': exec_prefix,
-            'exec_prefix': exec_prefix,
-            'abiflags': abiflags,
-            'platlibdir': getattr(sys, 'platlibdir', 'lib'),
-            'implementation_lower': _get_implementation().lower(),
-            'implementation': _get_implementation(),
-        }
-
-        # vars for compatibility on older Pythons
-        compat_vars = dict(
-            # Python 3.9 and earlier
-            py_version_nodot_plat=getattr(sys, 'winver', '').replace('.', ''),
-        )
-
-        if HAS_USER_SITE:
-            local_vars['userbase'] = self.install_userbase
-            local_vars['usersite'] = self.install_usersite
-
-        self.config_vars = _collections.DictStack(
-            [fw.vars(), compat_vars, sysconfig.get_config_vars(), local_vars]
-        )
-
-        self.expand_basedirs()
-
-        self.dump_dirs("post-expand_basedirs()")
-
-        # Now define config vars for the base directories so we can expand
-        # everything else.
-        local_vars['base'] = self.install_base
-        local_vars['platbase'] = self.install_platbase
-
-        if DEBUG:
-            from pprint import pprint
-
-            print("config vars:")
-            pprint(dict(self.config_vars))
-
-        # Expand "~" and configuration variables in the installation
-        # directories.
-        self.expand_dirs()
-
-        self.dump_dirs("post-expand_dirs()")
-
-        # Create directories in the home dir:
-        if self.user:
-            self.create_home_path()
-
-        # Pick the actual directory to install all modules to: either
-        # install_purelib or install_platlib, depending on whether this
-        # module distribution is pure or not.  Of course, if the user
-        # already specified install_lib, use their selection.
-        if self.install_lib is None:
-            if self.distribution.has_ext_modules():  # has extensions: non-pure
-                self.install_lib = self.install_platlib
-            else:
-                self.install_lib = self.install_purelib
-
-        # Convert directories from Unix /-separated syntax to the local
-        # convention.
-        self.convert_paths(
-            'lib',
-            'purelib',
-            'platlib',
-            'scripts',
-            'data',
-            'headers',
-            'userbase',
-            'usersite',
-        )
-
-        # Deprecated
-        # Well, we're not actually fully completely finalized yet: we still
-        # have to deal with 'extra_path', which is the hack for allowing
-        # non-packagized module distributions (hello, Numerical Python!) to
-        # get their own directories.
-        self.handle_extra_path()
-        self.install_libbase = self.install_lib  # needed for .pth file
-        self.install_lib = os.path.join(self.install_lib, self.extra_dirs)
-
-        # If a new root directory was supplied, make all the installation
-        # dirs relative to it.
-        if self.root is not None:
-            self.change_roots(
-                'libbase', 'lib', 'purelib', 'platlib', 'scripts', 'data', 'headers'
-            )
-
-        self.dump_dirs("after prepending root")
-
-        # Find out the build directories, ie. where to install from.
-        self.set_undefined_options(
-            'build', ('build_base', 'build_base'), ('build_lib', 'build_lib')
-        )
-
-        # Punt on doc directories for now -- after all, we're punting on
-        # documentation completely!
-
-    def dump_dirs(self, msg):
-        """Dumps the list of user options."""
-        if not DEBUG:
-            return
-        from distutils.fancy_getopt import longopt_xlate
-
-        log.debug(msg + ":")
-        for opt in self.user_options:
-            opt_name = opt[0]
-            if opt_name[-1] == "=":
-                opt_name = opt_name[0:-1]
-            if opt_name in self.negative_opt:
-                opt_name = self.negative_opt[opt_name]
-                opt_name = opt_name.translate(longopt_xlate)
-                val = not getattr(self, opt_name)
-            else:
-                opt_name = opt_name.translate(longopt_xlate)
-                val = getattr(self, opt_name)
-            log.debug("  %s: %s", opt_name, val)
-
-    def finalize_unix(self):
-        """Finalizes options for posix platforms."""
-        if self.install_base is not None or self.install_platbase is not None:
-            incomplete_scheme = (
-                (
-                    self.install_lib is None
-                    and self.install_purelib is None
-                    and self.install_platlib is None
-                )
-                or self.install_headers is None
-                or self.install_scripts is None
-                or self.install_data is None
-            )
-            if incomplete_scheme:
-                raise DistutilsOptionError(
-                    "install-base or install-platbase supplied, but "
-                    "installation scheme is incomplete"
-                )
-            return
-
-        if self.user:
-            if self.install_userbase is None:
-                raise DistutilsPlatformError("User base directory is not specified")
-            self.install_base = self.install_platbase = self.install_userbase
-            self.select_scheme("posix_user")
-        elif self.home is not None:
-            self.install_base = self.install_platbase = self.home
-            self.select_scheme("posix_home")
-        else:
-            if self.prefix is None:
-                if self.exec_prefix is not None:
-                    raise DistutilsOptionError(
-                        "must not supply exec-prefix without prefix"
-                    )
-
-                # Allow Fedora to add components to the prefix
-                _prefix_addition = getattr(sysconfig, '_prefix_addition', "")
-
-                self.prefix = os.path.normpath(sys.prefix) + _prefix_addition
-                self.exec_prefix = os.path.normpath(sys.exec_prefix) + _prefix_addition
-
-            else:
-                if self.exec_prefix is None:
-                    self.exec_prefix = self.prefix
-
-            self.install_base = self.prefix
-            self.install_platbase = self.exec_prefix
-            self.select_scheme("posix_prefix")
-
-    def finalize_other(self):
-        """Finalizes options for non-posix platforms"""
-        if self.user:
-            if self.install_userbase is None:
-                raise DistutilsPlatformError("User base directory is not specified")
-            self.install_base = self.install_platbase = self.install_userbase
-            self.select_scheme(os.name + "_user")
-        elif self.home is not None:
-            self.install_base = self.install_platbase = self.home
-            self.select_scheme("posix_home")
-        else:
-            if self.prefix is None:
-                self.prefix = os.path.normpath(sys.prefix)
-
-            self.install_base = self.install_platbase = self.prefix
-            try:
-                self.select_scheme(os.name)
-            except KeyError:
-                raise DistutilsPlatformError(
-                    "I don't know how to install stuff on '%s'" % os.name
-                )
-
-    def select_scheme(self, name):
-        _select_scheme(self, name)
-
-    def _expand_attrs(self, attrs):
-        for attr in attrs:
-            val = getattr(self, attr)
-            if val is not None:
-                if os.name == 'posix' or os.name == 'nt':
-                    val = os.path.expanduser(val)
-                val = subst_vars(val, self.config_vars)
-                setattr(self, attr, val)
-
-    def expand_basedirs(self):
-        """Calls `os.path.expanduser` on install_base, install_platbase and
-        root."""
-        self._expand_attrs(['install_base', 'install_platbase', 'root'])
-
-    def expand_dirs(self):
-        """Calls `os.path.expanduser` on install dirs."""
-        self._expand_attrs(
-            [
-                'install_purelib',
-                'install_platlib',
-                'install_lib',
|
629 |
-
'install_headers',
|
630 |
-
'install_scripts',
|
631 |
-
'install_data',
|
632 |
-
]
|
633 |
-
)
|
634 |
-
|
635 |
-
def convert_paths(self, *names):
|
636 |
-
"""Call `convert_path` over `names`."""
|
637 |
-
for name in names:
|
638 |
-
attr = "install_" + name
|
639 |
-
setattr(self, attr, convert_path(getattr(self, attr)))
|
640 |
-
|
641 |
-
def handle_extra_path(self):
|
642 |
-
"""Set `path_file` and `extra_dirs` using `extra_path`."""
|
643 |
-
if self.extra_path is None:
|
644 |
-
self.extra_path = self.distribution.extra_path
|
645 |
-
|
646 |
-
if self.extra_path is not None:
|
647 |
-
log.warn(
|
648 |
-
"Distribution option extra_path is deprecated. "
|
649 |
-
"See issue27919 for details."
|
650 |
-
)
|
651 |
-
if isinstance(self.extra_path, str):
|
652 |
-
self.extra_path = self.extra_path.split(',')
|
653 |
-
|
654 |
-
if len(self.extra_path) == 1:
|
655 |
-
path_file = extra_dirs = self.extra_path[0]
|
656 |
-
elif len(self.extra_path) == 2:
|
657 |
-
path_file, extra_dirs = self.extra_path
|
658 |
-
else:
|
659 |
-
raise DistutilsOptionError(
|
660 |
-
"'extra_path' option must be a list, tuple, or "
|
661 |
-
"comma-separated string with 1 or 2 elements"
|
662 |
-
)
|
663 |
-
|
664 |
-
# convert to local form in case Unix notation used (as it
|
665 |
-
# should be in setup scripts)
|
666 |
-
extra_dirs = convert_path(extra_dirs)
|
667 |
-
else:
|
668 |
-
path_file = None
|
669 |
-
extra_dirs = ''
|
670 |
-
|
671 |
-
# XXX should we warn if path_file and not extra_dirs? (in which
|
672 |
-
# case the path file would be harmless but pointless)
|
673 |
-
self.path_file = path_file
|
674 |
-
self.extra_dirs = extra_dirs
|
675 |
-
|
676 |
-
def change_roots(self, *names):
|
677 |
-
"""Change the install directories pointed by name using root."""
|
678 |
-
for name in names:
|
679 |
-
attr = "install_" + name
|
680 |
-
setattr(self, attr, change_root(self.root, getattr(self, attr)))
|
681 |
-
|
682 |
-
def create_home_path(self):
|
683 |
-
"""Create directories under ~."""
|
684 |
-
if not self.user:
|
685 |
-
return
|
686 |
-
home = convert_path(os.path.expanduser("~"))
|
687 |
-
for name, path in self.config_vars.items():
|
688 |
-
if str(path).startswith(home) and not os.path.isdir(path):
|
689 |
-
self.debug_print("os.makedirs('%s', 0o700)" % path)
|
690 |
-
os.makedirs(path, 0o700)
|
691 |
-
|
692 |
-
# -- Command execution methods -------------------------------------
|
693 |
-
|
694 |
-
def run(self):
|
695 |
-
"""Runs the command."""
|
696 |
-
# Obviously have to build before we can install
|
697 |
-
if not self.skip_build:
|
698 |
-
self.run_command('build')
|
699 |
-
# If we built for any other platform, we can't install.
|
700 |
-
build_plat = self.distribution.get_command_obj('build').plat_name
|
701 |
-
# check warn_dir - it is a clue that the 'install' is happening
|
702 |
-
# internally, and not to sys.path, so we don't check the platform
|
703 |
-
# matches what we are running.
|
704 |
-
if self.warn_dir and build_plat != get_platform():
|
705 |
-
raise DistutilsPlatformError("Can't install when " "cross-compiling")
|
706 |
-
|
707 |
-
# Run all sub-commands (at least those that need to be run)
|
708 |
-
for cmd_name in self.get_sub_commands():
|
709 |
-
self.run_command(cmd_name)
|
710 |
-
|
711 |
-
if self.path_file:
|
712 |
-
self.create_path_file()
|
713 |
-
|
714 |
-
# write list of installed files, if requested.
|
715 |
-
if self.record:
|
716 |
-
outputs = self.get_outputs()
|
717 |
-
if self.root: # strip any package prefix
|
718 |
-
root_len = len(self.root)
|
719 |
-
for counter in range(len(outputs)):
|
720 |
-
outputs[counter] = outputs[counter][root_len:]
|
721 |
-
self.execute(
|
722 |
-
write_file,
|
723 |
-
(self.record, outputs),
|
724 |
-
"writing list of installed files to '%s'" % self.record,
|
725 |
-
)
|
726 |
-
|
727 |
-
sys_path = map(os.path.normpath, sys.path)
|
728 |
-
sys_path = map(os.path.normcase, sys_path)
|
729 |
-
install_lib = os.path.normcase(os.path.normpath(self.install_lib))
|
730 |
-
if (
|
731 |
-
self.warn_dir
|
732 |
-
and not (self.path_file and self.install_path_file)
|
733 |
-
and install_lib not in sys_path
|
734 |
-
):
|
735 |
-
log.debug(
|
736 |
-
(
|
737 |
-
"modules installed to '%s', which is not in "
|
738 |
-
"Python's module search path (sys.path) -- "
|
739 |
-
"you'll have to change the search path yourself"
|
740 |
-
),
|
741 |
-
self.install_lib,
|
742 |
-
)
|
743 |
-
|
744 |
-
def create_path_file(self):
|
745 |
-
"""Creates the .pth file"""
|
746 |
-
filename = os.path.join(self.install_libbase, self.path_file + ".pth")
|
747 |
-
if self.install_path_file:
|
748 |
-
self.execute(
|
749 |
-
write_file, (filename, [self.extra_dirs]), "creating %s" % filename
|
750 |
-
)
|
751 |
-
else:
|
752 |
-
self.warn("path file '%s' not created" % filename)
|
753 |
-
|
754 |
-
# -- Reporting methods ---------------------------------------------
|
755 |
-
|
756 |
-
def get_outputs(self):
|
757 |
-
"""Assembles the outputs of all the sub-commands."""
|
758 |
-
outputs = []
|
759 |
-
for cmd_name in self.get_sub_commands():
|
760 |
-
cmd = self.get_finalized_command(cmd_name)
|
761 |
-
# Add the contents of cmd.get_outputs(), ensuring
|
762 |
-
# that outputs doesn't contain duplicate entries
|
763 |
-
for filename in cmd.get_outputs():
|
764 |
-
if filename not in outputs:
|
765 |
-
outputs.append(filename)
|
766 |
-
|
767 |
-
if self.path_file and self.install_path_file:
|
768 |
-
outputs.append(os.path.join(self.install_libbase, self.path_file + ".pth"))
|
769 |
-
|
770 |
-
return outputs
|
771 |
-
|
772 |
-
def get_inputs(self):
|
773 |
-
"""Returns the inputs of all the sub-commands"""
|
774 |
-
# XXX gee, this looks familiar ;-(
|
775 |
-
inputs = []
|
776 |
-
for cmd_name in self.get_sub_commands():
|
777 |
-
cmd = self.get_finalized_command(cmd_name)
|
778 |
-
inputs.extend(cmd.get_inputs())
|
779 |
-
|
780 |
-
return inputs
|
781 |
-
|
782 |
-
# -- Predicates for sub-command list -------------------------------
|
783 |
-
|
784 |
-
def has_lib(self):
|
785 |
-
"""Returns true if the current distribution has any Python
|
786 |
-
modules to install."""
|
787 |
-
return (
|
788 |
-
self.distribution.has_pure_modules() or self.distribution.has_ext_modules()
|
789 |
-
)
|
790 |
-
|
791 |
-
def has_headers(self):
|
792 |
-
"""Returns true if the current distribution has any headers to
|
793 |
-
install."""
|
794 |
-
return self.distribution.has_headers()
|
795 |
-
|
796 |
-
def has_scripts(self):
|
797 |
-
"""Returns true if the current distribution has any scripts to.
|
798 |
-
install."""
|
799 |
-
return self.distribution.has_scripts()
|
800 |
-
|
801 |
-
def has_data(self):
|
802 |
-
"""Returns true if the current distribution has any data to.
|
803 |
-
install."""
|
804 |
-
return self.distribution.has_data_files()
|
805 |
-
|
806 |
-
# 'sub_commands': a list of commands this command might have to run to
|
807 |
-
# get its work done. See cmd.py for more info.
|
808 |
-
sub_commands = [
|
809 |
-
('install_lib', has_lib),
|
810 |
-
('install_headers', has_headers),
|
811 |
-
('install_scripts', has_scripts),
|
812 |
-
('install_data', has_data),
|
813 |
-
('install_egg_info', lambda self: True),
|
814 |
-
]
|
spaces/AzumaSeren100/XuanShen-Bert-VITS2/models.py
DELETED
@@ -1,707 +0,0 @@
import copy
import math
import torch
from torch import nn
from torch.nn import functional as F

import commons
import modules
import attentions
import monotonic_align

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm

from commons import init_weights, get_padding
from text import symbols, num_tones, num_languages


class DurationDiscriminator(nn.Module):  # vits2
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
        super().__init__()

        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels

        self.drop = nn.Dropout(p_dropout)
        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_1 = modules.LayerNorm(filter_channels)
        self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_2 = modules.LayerNorm(filter_channels)
        self.dur_proj = nn.Conv1d(1, filter_channels, 1)

        self.pre_out_conv_1 = nn.Conv1d(2 * filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
        self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.pre_out_norm_2 = modules.LayerNorm(filter_channels)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, in_channels, 1)

        self.output_layer = nn.Sequential(
            nn.Linear(filter_channels, 1),
            nn.Sigmoid()
        )

    def forward_probability(self, x, x_mask, dur, g=None):
        dur = self.dur_proj(dur)
        x = torch.cat([x, dur], dim=1)
        x = self.pre_out_conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.pre_out_norm_1(x)
        x = self.drop(x)
        x = self.pre_out_conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.pre_out_norm_2(x)
        x = self.drop(x)
        x = x * x_mask
        x = x.transpose(1, 2)
        output_prob = self.output_layer(x)
        return output_prob

    def forward(self, x, x_mask, dur_r, dur_hat, g=None):
        x = torch.detach(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.norm_1(x)
        x = self.drop(x)
        x = self.conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.norm_2(x)
        x = self.drop(x)

        output_probs = []
        for dur in [dur_r, dur_hat]:
            output_prob = self.forward_probability(x, x_mask, dur, g)
            output_probs.append(output_prob)

        return output_probs


class TransformerCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 n_flows=4,
                 gin_channels=0,
                 share_parameter=False
                 ):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()

        self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout,
                                 isflow=True, gin_channels=self.gin_channels) if share_parameter else None

        for i in range(n_flows):
            self.flows.append(
                modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads,
                                                 p_dropout, filter_channels, mean_only=True,
                                                 wn_sharing_parameter=self.wn, gin_channels=self.gin_channels))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class StochasticDurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
        super().__init__()
        filter_channels = in_channels  # it needs to be removed from future version.
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.log_flow = modules.Log()
        self.flows = nn.ModuleList()
        self.flows.append(modules.ElementwiseAffine(2))
        for i in range(n_flows):
            self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.flows.append(modules.Flip())

        self.post_pre = nn.Conv1d(1, filter_channels, 1)
        self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        self.post_flows = nn.ModuleList()
        self.post_flows.append(modules.ElementwiseAffine(2))
        for i in range(4):
            self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.post_flows.append(modules.Flip())

        self.pre = nn.Conv1d(in_channels, filter_channels, 1)
        self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, filter_channels, 1)

    def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
        x = torch.detach(x)
        x = self.pre(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.convs(x, x_mask)
        x = self.proj(x) * x_mask

        if not reverse:
            flows = self.flows
            assert w is not None

            logdet_tot_q = 0
            h_w = self.post_pre(w)
            h_w = self.post_convs(h_w, x_mask)
            h_w = self.post_proj(h_w) * x_mask
            e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
            z_q = e_q
            for flow in self.post_flows:
                z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
                logdet_tot_q += logdet_q
            z_u, z1 = torch.split(z_q, [1, 1], 1)
            u = torch.sigmoid(z_u) * x_mask
            z0 = (w - u) * x_mask
            logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
            logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q

            logdet_tot = 0
            z0, logdet = self.log_flow(z0, x_mask)
            logdet_tot += logdet
            z = torch.cat([z0, z1], 1)
            for flow in flows:
                z, logdet = flow(z, x_mask, g=x, reverse=reverse)
                logdet_tot = logdet_tot + logdet
            nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
            return nll + logq  # [b]
        else:
            flows = list(reversed(self.flows))
            flows = flows[:-2] + [flows[-1]]  # remove a useless vflow
            z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
            for flow in flows:
                z = flow(z, x_mask, g=x, reverse=reverse)
            z0, z1 = torch.split(z, [1, 1], 1)
            logw = z0
            return logw


class DurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
        super().__init__()

        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels

        self.drop = nn.Dropout(p_dropout)
        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_1 = modules.LayerNorm(filter_channels)
        self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_2 = modules.LayerNorm(filter_channels)
        self.proj = nn.Conv1d(filter_channels, 1, 1)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, in_channels, 1)

    def forward(self, x, x_mask, g=None):
        x = torch.detach(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.norm_1(x)
        x = self.drop(x)
        x = self.conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.norm_2(x)
        x = self.drop(x)
        x = self.proj(x * x_mask)
        return x * x_mask


class TextEncoder(nn.Module):
    def __init__(self,
                 n_vocab,
                 out_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 gin_channels=0):
        super().__init__()
        self.n_vocab = n_vocab
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels
        self.emb = nn.Embedding(len(symbols), hidden_channels)
        nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
        self.tone_emb = nn.Embedding(num_tones, hidden_channels)
        nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5)
        self.language_emb = nn.Embedding(num_languages, hidden_channels)
        nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5)
        self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)

        self.encoder = attentions.Encoder(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout,
            gin_channels=self.gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, tone, language, bert, g=None):
        x = (self.emb(x) + self.tone_emb(tone) + self.language_emb(language)
             + self.bert_proj(bert).transpose(1, 2)) * math.sqrt(self.hidden_channels)  # [b, t, h]
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)

        x = self.encoder(x * x_mask, x_mask, g=g)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return x, m, logs, x_mask


class ResidualCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 n_flows=4,
                 gin_channels=0):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(
                modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
                                              gin_channels=gin_channels, mean_only=True))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class PosteriorEncoder(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 gin_channels=0):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class Generator(torch.nn.Module):
    def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
                 upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
        resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(weight_norm(
                ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
                                k, u, padding=(k - u) // 2)))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2 ** (i + 1))
            for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x

    def remove_weight_norm(self):
        print('Removing weight norm...')
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
        ])
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv1d(1, 16, 15, 1, padding=7)),
            norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
            norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
            norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
        ])
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminator, self).__init__()
        periods = [2, 3, 5, 7, 11]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class ReferenceEncoder(nn.Module):
    '''
    inputs --- [N, Ty/r, n_mels*r]  mels
    outputs --- [N, ref_enc_gru_size]
    '''

    def __init__(self, spec_channels, gin_channels=0):
        super().__init__()
        self.spec_channels = spec_channels
        ref_enc_filters = [32, 32, 64, 64, 128, 128]
        K = len(ref_enc_filters)
        filters = [1] + ref_enc_filters
        convs = [weight_norm(nn.Conv2d(in_channels=filters[i],
                                       out_channels=filters[i + 1],
                                       kernel_size=(3, 3),
                                       stride=(2, 2),
                                       padding=(1, 1))) for i in range(K)]
        self.convs = nn.ModuleList(convs)
        # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)])

        out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
        self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels,
                          hidden_size=256 // 2,
                          batch_first=True)
        self.proj = nn.Linear(128, gin_channels)

    def forward(self, inputs, mask=None):
        N = inputs.size(0)
|
534 |
-
out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
|
535 |
-
for conv in self.convs:
|
536 |
-
out = conv(out)
|
537 |
-
# out = wn(out)
|
538 |
-
out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
|
539 |
-
|
540 |
-
out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
|
541 |
-
T = out.size(1)
|
542 |
-
N = out.size(0)
|
543 |
-
out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
|
544 |
-
|
545 |
-
self.gru.flatten_parameters()
|
546 |
-
memory, out = self.gru(out) # out --- [1, N, 128]
|
547 |
-
|
548 |
-
return self.proj(out.squeeze(0))
|
549 |
-
|
550 |
-
def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
|
551 |
-
for i in range(n_convs):
|
552 |
-
L = (L - kernel_size + 2 * pad) // stride + 1
|
553 |
-
return L
|
554 |
-
|
555 |
-
|
556 |
-
class SynthesizerTrn(nn.Module):
|
557 |
-
"""
|
558 |
-
Synthesizer for Training
|
559 |
-
"""
|
560 |
-
|
561 |
-
def __init__(self,
|
562 |
-
n_vocab,
|
563 |
-
spec_channels,
|
564 |
-
segment_size,
|
565 |
-
inter_channels,
|
566 |
-
hidden_channels,
|
567 |
-
filter_channels,
|
568 |
-
n_heads,
|
569 |
-
n_layers,
|
570 |
-
kernel_size,
|
571 |
-
p_dropout,
|
572 |
-
resblock,
|
573 |
-
resblock_kernel_sizes,
|
574 |
-
resblock_dilation_sizes,
|
575 |
-
upsample_rates,
|
576 |
-
upsample_initial_channel,
|
577 |
-
upsample_kernel_sizes,
|
578 |
-
n_speakers=256,
|
579 |
-
gin_channels=256,
|
580 |
-
use_sdp=True,
|
581 |
-
n_flow_layer = 4,
|
582 |
-
n_layers_trans_flow = 3,
|
583 |
-
flow_share_parameter = False,
|
584 |
-
use_transformer_flow = True,
|
585 |
-
**kwargs):
|
586 |
-
|
587 |
-
super().__init__()
|
588 |
-
self.n_vocab = n_vocab
|
589 |
-
self.spec_channels = spec_channels
|
590 |
-
self.inter_channels = inter_channels
|
591 |
-
self.hidden_channels = hidden_channels
|
592 |
-
self.filter_channels = filter_channels
|
593 |
-
self.n_heads = n_heads
|
594 |
-
self.n_layers = n_layers
|
595 |
-
self.kernel_size = kernel_size
|
596 |
-
self.p_dropout = p_dropout
|
597 |
-
self.resblock = resblock
|
598 |
-
self.resblock_kernel_sizes = resblock_kernel_sizes
|
599 |
-
self.resblock_dilation_sizes = resblock_dilation_sizes
|
600 |
-
self.upsample_rates = upsample_rates
|
601 |
-
self.upsample_initial_channel = upsample_initial_channel
|
602 |
-
self.upsample_kernel_sizes = upsample_kernel_sizes
|
603 |
-
self.segment_size = segment_size
|
604 |
-
self.n_speakers = n_speakers
|
605 |
-
self.gin_channels = gin_channels
|
606 |
-
self.n_layers_trans_flow = n_layers_trans_flow
|
607 |
-
self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True)
|
608 |
-
self.use_sdp = use_sdp
|
609 |
-
self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
|
610 |
-
self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
|
611 |
-
self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
|
612 |
-
self.current_mas_noise_scale = self.mas_noise_scale_initial
|
613 |
-
if self.use_spk_conditioned_encoder and gin_channels > 0:
|
614 |
-
self.enc_gin_channels = gin_channels
|
615 |
-
self.enc_p = TextEncoder(n_vocab,
|
616 |
-
inter_channels,
|
617 |
-
hidden_channels,
|
618 |
-
filter_channels,
|
619 |
-
n_heads,
|
620 |
-
n_layers,
|
621 |
-
kernel_size,
|
622 |
-
p_dropout,
|
623 |
-
gin_channels=self.enc_gin_channels)
|
624 |
-
self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
|
625 |
-
upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
|
626 |
-
self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
|
627 |
-
gin_channels=gin_channels)
|
628 |
-
if use_transformer_flow:
|
629 |
-
self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter)
|
630 |
-
else:
|
631 |
-
self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels)
|
632 |
-
self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
|
633 |
-
self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
|
634 |
-
|
635 |
-
if n_speakers > 1:
|
636 |
-
self.emb_g = nn.Embedding(n_speakers, gin_channels)
|
637 |
-
else:
|
638 |
-
self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
|
639 |
-
|
640 |
-
def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert):
|
641 |
-
if self.n_speakers > 0:
|
642 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
643 |
-
else:
|
644 |
-
g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
|
645 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
|
646 |
-
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
|
647 |
-
z_p = self.flow(z, y_mask, g=g)
|
648 |
-
|
649 |
-
with torch.no_grad():
|
650 |
-
# negative cross-entropy
|
651 |
-
s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
|
652 |
-
neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
|
653 |
-
neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
|
654 |
-
s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
655 |
-
neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
656 |
-
neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
|
657 |
-
neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
|
658 |
-
if self.use_noise_scaled_mas:
|
659 |
-
epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale
|
660 |
-
neg_cent = neg_cent + epsilon
|
661 |
-
|
662 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
663 |
-
attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
|
664 |
-
|
665 |
-
w = attn.sum(2)
|
666 |
-
|
667 |
-
l_length_sdp = self.sdp(x, x_mask, w, g=g)
|
668 |
-
l_length_sdp = l_length_sdp / torch.sum(x_mask)
|
669 |
-
|
670 |
-
logw_ = torch.log(w + 1e-6) * x_mask
|
671 |
-
logw = self.dp(x, x_mask, g=g)
|
672 |
-
l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
|
673 |
-
|
674 |
-
l_length = l_length_dp + l_length_sdp
|
675 |
-
|
676 |
-
# expand prior
|
677 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
|
678 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
|
679 |
-
|
680 |
-
z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
|
681 |
-
o = self.dec(z_slice, g=g)
|
682 |
-
return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_)
|
683 |
-
|
684 |
-
def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None):
|
685 |
-
#x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
|
686 |
-
# g = self.gst(y)
|
687 |
-
if self.n_speakers > 0:
|
688 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
689 |
-
else:
|
690 |
-
g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
|
691 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
|
692 |
-
logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
|
693 |
-
w = torch.exp(logw) * x_mask * length_scale
|
694 |
-
w_ceil = torch.ceil(w)
|
695 |
-
y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
|
696 |
-
y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
|
697 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
698 |
-
attn = commons.generate_path(w_ceil, attn_mask)
|
699 |
-
|
700 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
|
701 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
|
702 |
-
2) # [b, t', t], [b, t, d] -> [b, d, t']
|
703 |
-
|
704 |
-
z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
|
705 |
-
z = self.flow(z_p, y_mask, g=g, reverse=True)
|
706 |
-
o = self.dec((z * y_mask)[:, :, :max_len], g=g)
|
707 |
-
return o, attn, y_mask, (z, z_p, m_p, logs_p)
|
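The shape bookkeeping in the deleted file above can be checked without torch. The sketch below reproduces, as plain Python, the "1d to 2d" pad-and-reshape arithmetic from `DiscriminatorP.forward` and the conv-stack size recurrence from `ReferenceEncoder.calculate_channels`; the function names here are ours, not from the original file.

```python
def padded_length(t, period):
    """Waveform length after reflect-padding so t divides evenly by `period`
    (mirrors the `if t % self.period != 0` branch in DiscriminatorP.forward)."""
    if t % period != 0:
        t += period - (t % period)
    return t


def period_grid_shape(t, period):
    """(rows, cols) of the 2-D view fed to the (kernel_size, 1) convolutions."""
    t = padded_length(t, period)
    return (t // period, period)


def calculate_channels(L, kernel_size, stride, pad, n_convs):
    """Spectral-axis size after n_convs strided convolutions, as in
    ReferenceEncoder.calculate_channels."""
    for _ in range(n_convs):
        L = (L - kernel_size + 2 * pad) // stride + 1
    return L
```

For example, a 100-sample input to the period-7 discriminator is padded to 105 and viewed as a 15x7 grid, and a 128-bin spectrogram shrinks to 2 bins after the six stride-2 convolutions in `ReferenceEncoder`.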
spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/language_model.py
DELETED
@@ -1,81 +0,0 @@
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np


class WordEmbedding(nn.Module):
    """Word Embedding

    The ntoken-th dim is used for padding_idx, which agrees *implicitly*
    with the definition in Dictionary.
    """
    def __init__(self, ntoken, emb_dim, dropout):
        super(WordEmbedding, self).__init__()
        self.emb = nn.Embedding(ntoken+1, emb_dim, padding_idx=ntoken)
        self.dropout = nn.Dropout(dropout)
        self.ntoken = ntoken
        self.emb_dim = emb_dim

    def init_embedding(self, np_file):
        weight_init = torch.from_numpy(np.load(np_file))
        assert weight_init.shape == (self.ntoken, self.emb_dim)
        self.emb.weight.data[:self.ntoken] = weight_init

    def forward(self, x):
        emb = self.emb(x)
        emb = self.dropout(emb)
        return emb


class QuestionEmbedding(nn.Module):
    def __init__(self, in_dim, num_hid, nlayers, bidirect, dropout, rnn_type='GRU'):
        """Module for question embedding
        """
        super(QuestionEmbedding, self).__init__()
        assert rnn_type == 'LSTM' or rnn_type == 'GRU'
        rnn_cls = nn.LSTM if rnn_type == 'LSTM' else nn.GRU

        self.rnn = rnn_cls(
            in_dim, num_hid, nlayers,
            bidirectional=bidirect,
            dropout=dropout,
            batch_first=True)

        self.in_dim = in_dim
        self.num_hid = num_hid
        self.nlayers = nlayers
        self.rnn_type = rnn_type
        self.ndirections = 1 + int(bidirect)

    def init_hidden(self, batch):
        # just to get the type of tensor
        weight = next(self.parameters()).data
        hid_shape = (self.nlayers * self.ndirections, batch, self.num_hid)
        if self.rnn_type == 'LSTM':
            return (Variable(weight.new(*hid_shape).zero_()),
                    Variable(weight.new(*hid_shape).zero_()))
        else:
            return Variable(weight.new(*hid_shape).zero_())

    def forward(self, x):
        # x: [batch, sequence, in_dim]
        batch = x.size(0)
        hidden = self.init_hidden(batch)
        self.rnn.flatten_parameters()
        output, hidden = self.rnn(x, hidden)

        if self.ndirections == 1:
            return output[:, -1]

        forward_ = output[:, -1, :self.num_hid]
        backward = output[:, 0, self.num_hid:]
        return torch.cat((forward_, backward), dim=1)

    def forward_all(self, x):
        # x: [batch, sequence, in_dim]
        batch = x.size(0)
        hidden = self.init_hidden(batch)
        self.rnn.flatten_parameters()
        output, hidden = self.rnn(x, hidden)
        return output
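As a torch-free check of the deleted `QuestionEmbedding` above: its hidden state is shaped `(nlayers * ndirections, batch, num_hid)`, and a bidirectional run returns the forward direction's last step concatenated with the backward direction's first step, doubling the feature width. The helper names below are ours, for illustration only.

```python
def init_hidden_shape(nlayers, bidirect, batch, num_hid):
    """Shape of the zero hidden state built in QuestionEmbedding.init_hidden."""
    ndirections = 1 + int(bidirect)
    return (nlayers * ndirections, batch, num_hid)


def output_width(bidirect, num_hid):
    """Feature width returned by QuestionEmbedding.forward: num_hid for a
    unidirectional RNN, 2 * num_hid after the forward/backward concat."""
    return num_hid * (2 if bidirect else 1)
```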
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/tda.py
DELETED
@@ -1,97 +0,0 @@
# --------------------------------------------------------
# OpenVQA
# Written by Zhenwei Shao https://github.com/ParadoxZW
# based on the implementation in https://github.com/hengyuan-hu/bottom-up-attention-vqa
# ELU is chosen as the activation function in non-linear layers due to
# the experiment results that indicate ELU is better than ReLU in BUTD model.
# --------------------------------------------------------

import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.weight_norm import weight_norm
import torch
import math

# ------------------------------
# ----- Weight Normal MLP ------
# ------------------------------

class MLP(nn.Module):
    """
    class for non-linear fully connect network
    """

    def __init__(self, dims, act='ELU', dropout_r=0.0):
        super(MLP, self).__init__()

        layers = []
        for i in range(len(dims) - 1):
            in_dim = dims[i]
            out_dim = dims[i + 1]
            if dropout_r > 0:
                layers.append(nn.Dropout(dropout_r))
            layers.append(weight_norm(nn.Linear(in_dim, out_dim), dim=None))
            if act != '':
                layers.append(getattr(nn, act)())

        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        return self.mlp(x)

# ------------------------------
# ---Top Down Attention Map ----
# ------------------------------


class AttnMap(nn.Module):
    '''
    implementation of top down attention
    '''
    def __init__(self, __C):
        super(AttnMap, self).__init__()
        self.__C = __C
        self.linear_q = weight_norm(
            nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE), dim=None)
        self.linear_v = weight_norm(
            nn.Linear(__C.IMG_FEAT_SIZE, __C.IMG_FEAT_SIZE), dim=None)
        self.nonlinear = MLP(
            [__C.IMG_FEAT_SIZE + __C.HIDDEN_SIZE, __C.HIDDEN_SIZE], dropout_r=__C.DROPOUT_R)
        self.linear = weight_norm(nn.Linear(__C.HIDDEN_SIZE, 1), dim=None)

    def forward(self, q, v):
        v = self.linear_v(v)
        q = self.linear_q(q)
        logits = self.logits(q, v)
        w = nn.functional.softmax(logits, 1)
        return w

    def logits(self, q, v):
        num_objs = v.size(1)
        q = q.unsqueeze(1).repeat(1, num_objs, 1)
        vq = torch.cat((v, q), 2)
        joint_repr = self.nonlinear(vq)
        logits = self.linear(joint_repr)
        return logits

# ------------------------------
# ---- Attended Joint Map ------
# ------------------------------


class TDA(nn.Module):
    def __init__(self, __C):
        super(TDA, self).__init__()

        self.__C = __C
        self.v_att = AttnMap(__C)
        self.q_net = MLP([__C.HIDDEN_SIZE, __C.HIDDEN_SIZE])
        self.v_net = MLP([__C.IMG_FEAT_SIZE, __C.HIDDEN_SIZE])

    def forward(self, q, v):
        att = self.v_att(q, v)
        atted_v = (att * v).sum(1)
        q_repr = self.q_net(q)
        v_repr = self.v_net(atted_v)
        joint_repr = q_repr * v_repr
        return joint_repr
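The `MLP.__init__` loop above builds, for each consecutive pair in `dims`, an optional `Dropout`, a weight-normed `Linear`, and the activation. A small torch-free sketch of that layer plan (the helper name is ours, not part of OpenVQA):

```python
def mlp_layer_plan(dims, act='ELU', dropout_r=0.0):
    """Names of the layers MLP.__init__ would append, in order."""
    layers = []
    for i in range(len(dims) - 1):
        if dropout_r > 0:
            layers.append('Dropout')
        layers.append('Linear({}->{})'.format(dims[i], dims[i + 1]))
        if act != '':  # empty act string means "no activation", as in MLP
            layers.append(act)
    return layers
```

So `MLP([IMG_FEAT_SIZE + HIDDEN_SIZE, HIDDEN_SIZE], dropout_r=0.2)`, as used by `AttnMap.nonlinear`, yields Dropout, then Linear, then ELU.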
spaces/CVPR/LIVE/pybind11/tests/test_pytypes.cpp
DELETED
@@ -1,375 +0,0 @@
|
|
1 |
-
/*
|
2 |
-
tests/test_pytypes.cpp -- Python type casters
|
3 |
-
|
4 |
-
Copyright (c) 2017 Wenzel Jakob <[email protected]>
|
5 |
-
|
6 |
-
All rights reserved. Use of this source code is governed by a
|
7 |
-
BSD-style license that can be found in the LICENSE file.
|
8 |
-
*/
|
9 |
-
|
10 |
-
#include "pybind11_tests.h"
|
11 |
-
|
12 |
-
|
13 |
-
TEST_SUBMODULE(pytypes, m) {
|
14 |
-
// test_int
|
15 |
-
m.def("get_int", []{return py::int_(0);});
|
16 |
-
// test_iterator
|
17 |
-
m.def("get_iterator", []{return py::iterator();});
|
18 |
-
// test_iterable
|
19 |
-
m.def("get_iterable", []{return py::iterable();});
|
20 |
-
// test_list
|
21 |
-
m.def("get_list", []() {
|
22 |
-
py::list list;
|
23 |
-
list.append("value");
|
24 |
-
py::print("Entry at position 0:", list[0]);
|
25 |
-
list[0] = py::str("overwritten");
|
26 |
-
list.insert(0, "inserted-0");
|
27 |
-
list.insert(2, "inserted-2");
|
28 |
-
return list;
|
29 |
-
});
|
30 |
-
m.def("print_list", [](py::list list) {
|
31 |
-
int index = 0;
|
32 |
-
for (auto item : list)
|
33 |
-
py::print("list item {}: {}"_s.format(index++, item));
|
34 |
-
});
|
35 |
-
// test_none
|
36 |
-
m.def("get_none", []{return py::none();});
|
37 |
-
m.def("print_none", [](py::none none) {
|
38 |
-
py::print("none: {}"_s.format(none));
|
39 |
-
});
|
40 |
-
|
41 |
-
// test_set
|
42 |
-
m.def("get_set", []() {
|
43 |
-
py::set set;
|
44 |
-
set.add(py::str("key1"));
|
45 |
-
set.add("key2");
|
46 |
-
set.add(std::string("key3"));
|
47 |
-
return set;
|
48 |
-
});
|
49 |
-
m.def("print_set", [](py::set set) {
|
50 |
-
for (auto item : set)
|
51 |
-
py::print("key:", item);
|
52 |
-
});
|
53 |
-
m.def("set_contains", [](py::set set, py::object key) {
|
54 |
-
return set.contains(key);
|
55 |
-
});
|
56 |
-
m.def("set_contains", [](py::set set, const char* key) {
|
57 |
-
return set.contains(key);
|
58 |
-
});
|
59 |
-
|
60 |
-
// test_dict
|
61 |
-
m.def("get_dict", []() { return py::dict("key"_a="value"); });
|
62 |
-
m.def("print_dict", [](py::dict dict) {
|
63 |
-
for (auto item : dict)
|
64 |
-
py::print("key: {}, value={}"_s.format(item.first, item.second));
|
65 |
-
});
|
66 |
-
m.def("dict_keyword_constructor", []() {
|
67 |
-
auto d1 = py::dict("x"_a=1, "y"_a=2);
|
68 |
-
auto d2 = py::dict("z"_a=3, **d1);
|
69 |
-
return d2;
|
70 |
-
});
|
71 |
-
m.def("dict_contains", [](py::dict dict, py::object val) {
|
72 |
-
return dict.contains(val);
|
73 |
-
});
|
74 |
-
m.def("dict_contains", [](py::dict dict, const char* val) {
|
75 |
-
return dict.contains(val);
|
76 |
-
});
|
77 |
-
|
78 |
-
// test_str
|
79 |
-
m.def("str_from_string", []() { return py::str(std::string("baz")); });
|
80 |
-
m.def("str_from_bytes", []() { return py::str(py::bytes("boo", 3)); });
|
81 |
-
m.def("str_from_object", [](const py::object& obj) { return py::str(obj); });
|
82 |
-
m.def("repr_from_object", [](const py::object& obj) { return py::repr(obj); });
|
83 |
-
|
84 |
-
m.def("str_format", []() {
|
85 |
-
auto s1 = "{} + {} = {}"_s.format(1, 2, 3);
|
86 |
-
auto s2 = "{a} + {b} = {c}"_s.format("a"_a=1, "b"_a=2, "c"_a=3);
|
87 |
-
return py::make_tuple(s1, s2);
|
88 |
-
});
|
89 |
-
|
90 |
-
// test_bytes
|
91 |
-
m.def("bytes_from_string", []() { return py::bytes(std::string("foo")); });
|
92 |
-
m.def("bytes_from_str", []() { return py::bytes(py::str("bar", 3)); });
|
93 |
-
|
94 |
-
// test_capsule
|
95 |
-
m.def("return_capsule_with_destructor", []() {
|
96 |
-
py::print("creating capsule");
|
97 |
-
return py::capsule([]() {
|
98 |
-
py::print("destructing capsule");
|
99 |
-
});
|
100 |
-
});
|
101 |
-
|
102 |
-
m.def("return_capsule_with_destructor_2", []() {
|
103 |
-
py::print("creating capsule");
|
104 |
-
return py::capsule((void *) 1234, [](void *ptr) {
|
105 |
-
py::print("destructing capsule: {}"_s.format((size_t) ptr));
|
106 |
-
});
|
107 |
-
});
|
108 |
-
|
109 |
-
m.def("return_capsule_with_name_and_destructor", []() {
|
110 |
-
auto capsule = py::capsule((void *) 1234, "pointer type description", [](PyObject *ptr) {
|
111 |
-
if (ptr) {
|
112 |
-
auto name = PyCapsule_GetName(ptr);
|
113 |
-
py::print("destructing capsule ({}, '{}')"_s.format(
|
114 |
-
(size_t) PyCapsule_GetPointer(ptr, name), name
|
115 |
-
));
|
116 |
-
}
|
117 |
-
});
|
118 |
-
void *contents = capsule;
|
119 |
-
py::print("created capsule ({}, '{}')"_s.format((size_t) contents, capsule.name()));
|
120 |
-
return capsule;
|
121 |
-
});
|
122 |
-
|
123 |
-
// test_accessors
|
124 |
-
m.def("accessor_api", [](py::object o) {
|
125 |
-
auto d = py::dict();
|
126 |
-
|
127 |
-
d["basic_attr"] = o.attr("basic_attr");
|
128 |
-
|
129 |
-
auto l = py::list();
|
130 |
-
for (const auto &item : o.attr("begin_end")) {
|
131 |
-
l.append(item);
|
132 |
-
}
|
133 |
-
d["begin_end"] = l;
|
134 |
-
|
135 |
-
d["operator[object]"] = o.attr("d")["operator[object]"_s];
|
136 |
-
d["operator[char *]"] = o.attr("d")["operator[char *]"];
|
137 |
-
|
138 |
-
d["attr(object)"] = o.attr("sub").attr("attr_obj");
|
139 |
-
d["attr(char *)"] = o.attr("sub").attr("attr_char");
|
140 |
-
try {
|
141 |
-
o.attr("sub").attr("missing").ptr();
|
142 |
-
} catch (const py::error_already_set &) {
|
143 |
-
d["missing_attr_ptr"] = "raised"_s;
|
144 |
-
}
|
145 |
-
try {
|
146 |
-
o.attr("missing").attr("doesn't matter");
|
147 |
-
} catch (const py::error_already_set &) {
|
148 |
-
d["missing_attr_chain"] = "raised"_s;
|
149 |
-
}
|
150 |
-
|
151 |
-
d["is_none"] = o.attr("basic_attr").is_none();
|
152 |
-
|
153 |
-
d["operator()"] = o.attr("func")(1);
|
154 |
-
d["operator*"] = o.attr("func")(*o.attr("begin_end"));
|
155 |
-
|
156 |
-
// Test implicit conversion
|
157 |
-
py::list implicit_list = o.attr("begin_end");
|
158 |
-
d["implicit_list"] = implicit_list;
|
159 |
-
py::dict implicit_dict = o.attr("__dict__");
|
160 |
-
d["implicit_dict"] = implicit_dict;
|
161 |
-
|
162 |
-
return d;
|
163 |
-
});
|
164 |
-
|
165 |
-
m.def("tuple_accessor", [](py::tuple existing_t) {
|
166 |
-
try {
|
167 |
-
existing_t[0] = 1;
|
168 |
-
} catch (const py::error_already_set &) {
|
169 |
-
// --> Python system error
|
170 |
-
// Only new tuples (refcount == 1) are mutable
|
171 |
-
auto new_t = py::tuple(3);
|
172 |
-
for (size_t i = 0; i < new_t.size(); ++i) {
|
173 |
-
new_t[i] = i;
|
174 |
-
}
|
175 |
-
return new_t;
|
176 |
-
}
|
177 |
-
return py::tuple();
|
178 |
-
});
|
179 |
-
|
180 |
-
m.def("accessor_assignment", []() {
|
181 |
-
auto l = py::list(1);
|
182 |
-
l[0] = 0;
|
183 |
-
|
184 |
-
auto d = py::dict();
|
185 |
-
d["get"] = l[0];
|
186 |
-
auto var = l[0];
|
187 |
-
d["deferred_get"] = var;
|
188 |
-
l[0] = 1;
|
189 |
-
d["set"] = l[0];
|
190 |
-
var = 99; // this assignment should not overwrite l[0]
|
191 |
-
d["deferred_set"] = l[0];
|
192 |
-
d["var"] = var;
|
193 |
-
|
194 |
-
return d;
|
195 |
-
});
|
196 |
-
|
197 |
-
// test_constructors
|
198 |
-
m.def("default_constructors", []() {
|
199 |
-
return py::dict(
|
200 |
-
"bytes"_a=py::bytes(),
|
201 |
-
"str"_a=py::str(),
|
202 |
-
"bool"_a=py::bool_(),
|
203 |
-
"int"_a=py::int_(),
|
204 |
-
"float"_a=py::float_(),
|
205 |
-
"tuple"_a=py::tuple(),
|
206 |
-
"list"_a=py::list(),
|
207 |
-
"dict"_a=py::dict(),
|
208 |
-
"set"_a=py::set()
|
209 |
-
);
|
210 |
-
});
|
211 |
-
|
212 |
-
m.def("converting_constructors", [](py::dict d) {
|
213 |
-
return py::dict(
|
214 |
-
"bytes"_a=py::bytes(d["bytes"]),
|
215 |
-
"str"_a=py::str(d["str"]),
|
216 |
-
"bool"_a=py::bool_(d["bool"]),
|
217 |
-
"int"_a=py::int_(d["int"]),
|
218 |
-
"float"_a=py::float_(d["float"]),
|
219 |
-
"tuple"_a=py::tuple(d["tuple"]),
|
220 |
-
"list"_a=py::list(d["list"]),
|
221 |
-
"dict"_a=py::dict(d["dict"]),
|
222 |
-
"set"_a=py::set(d["set"]),
|
223 |
-
"memoryview"_a=py::memoryview(d["memoryview"])
|
224 |
-
);
|
225 |
-
});
|
226 |
-
|
227 |
-
m.def("cast_functions", [](py::dict d) {
|
228 |
-
// When converting between Python types, obj.cast<T>() should be the same as T(obj)
|
229 |
-
return py::dict(
|
230 |
-
"bytes"_a=d["bytes"].cast<py::bytes>(),
|
231 |
-
"str"_a=d["str"].cast<py::str>(),
|
232 |
-
"bool"_a=d["bool"].cast<py::bool_>(),
|
233 |
-
"int"_a=d["int"].cast<py::int_>(),
|
234 |
-
"float"_a=d["float"].cast<py::float_>(),
|
235 |
-
"tuple"_a=d["tuple"].cast<py::tuple>(),
|
236 |
-
"list"_a=d["list"].cast<py::list>(),
|
237 |
-
"dict"_a=d["dict"].cast<py::dict>(),
|
238 |
-
"set"_a=d["set"].cast<py::set>(),
|
239 |
-
"memoryview"_a=d["memoryview"].cast<py::memoryview>()
|
240 |
-
);
|
241 |
-
});
|
242 |
-
|
243 |
-
m.def("convert_to_pybind11_str", [](py::object o) { return py::str(o); });
|
244 |
-
|
245 |
-
m.def("get_implicit_casting", []() {
|
246 |
-
py::dict d;
|
247 |
-
d["char*_i1"] = "abc";
|
248 |
-
const char *c2 = "abc";
|
249 |
-
d["char*_i2"] = c2;
|
250 |
-
d["char*_e"] = py::cast(c2);
|
251 |
-
d["char*_p"] = py::str(c2);
|
252 |
-
|
253 |
-
d["int_i1"] = 42;
|
254 |
-
int i = 42;
|
255 |
-
d["int_i2"] = i;
|
256 |
-
i++;
|
257 |
-
d["int_e"] = py::cast(i);
|
258 |
-
i++;
|
259 |
-
d["int_p"] = py::int_(i);
|
260 |
-
|
261 |
-
d["str_i1"] = std::string("str");
|
262 |
-
std::string s2("str1");
|
263 |
-
d["str_i2"] = s2;
|
264 |
-
s2[3] = '2';
|
265 |
-
d["str_e"] = py::cast(s2);
|
266 |
-
s2[3] = '3';
|
267 |
-
d["str_p"] = py::str(s2);
|
268 |
-
|
269 |
-
py::list l(2);
|
270 |
-
l[0] = 3;
|
271 |
-
l[1] = py::cast(6);
|
272 |
-
l.append(9);
|
273 |
-
l.append(py::cast(12));
|
274 |
-
l.append(py::int_(15));
|
275 |
-
|
276 |
-
return py::dict(
|
277 |
-
"d"_a=d,
|
278 |
-
"l"_a=l
|
279 |
-
);
    });

    // test_print
    m.def("print_function", []() {
        py::print("Hello, World!");
        py::print(1, 2.0, "three", true, std::string("-- multiple args"));
        auto args = py::make_tuple("and", "a", "custom", "separator");
        py::print("*args", *args, "sep"_a="-");
        py::print("no new line here", "end"_a=" -- ");
        py::print("next print");

        auto py_stderr = py::module::import("sys").attr("stderr");
        py::print("this goes to stderr", "file"_a=py_stderr);

        py::print("flush", "flush"_a=true);

        py::print("{a} + {b} = {c}"_s.format("a"_a="py::print", "b"_a="str.format", "c"_a="this"));
    });

    m.def("print_failure", []() { py::print(42, UnregisteredType()); });

    m.def("hash_function", [](py::object obj) { return py::hash(obj); });

    m.def("test_number_protocol", [](py::object a, py::object b) {
        py::list l;
        l.append(a.equal(b));
        l.append(a.not_equal(b));
        l.append(a < b);
        l.append(a <= b);
        l.append(a > b);
        l.append(a >= b);
        l.append(a + b);
        l.append(a - b);
        l.append(a * b);
        l.append(a / b);
        l.append(a | b);
        l.append(a & b);
        l.append(a ^ b);
        l.append(a >> b);
        l.append(a << b);
        return l;
    });

    m.def("test_list_slicing", [](py::list a) {
        return a[py::slice(0, -1, 2)];
    });

    m.def("test_memoryview_object", [](py::buffer b) {
        return py::memoryview(b);
    });

    m.def("test_memoryview_buffer_info", [](py::buffer b) {
        return py::memoryview(b.request());
    });

    m.def("test_memoryview_from_buffer", [](bool is_unsigned) {
        static const int16_t si16[] = { 3, 1, 4, 1, 5 };
        static const uint16_t ui16[] = { 2, 7, 1, 8 };
        if (is_unsigned)
            return py::memoryview::from_buffer(
                ui16, { 4 }, { sizeof(uint16_t) });
        else
            return py::memoryview::from_buffer(
                si16, { 5 }, { sizeof(int16_t) });
    });

    m.def("test_memoryview_from_buffer_nativeformat", []() {
        static const char* format = "@i";
        static const int32_t arr[] = { 4, 7, 5 };
        return py::memoryview::from_buffer(
            arr, sizeof(int32_t), format, { 3 }, { sizeof(int32_t) });
    });

    m.def("test_memoryview_from_buffer_empty_shape", []() {
        static const char* buf = "";
        return py::memoryview::from_buffer(buf, 1, "B", { }, { });
    });

    m.def("test_memoryview_from_buffer_invalid_strides", []() {
        static const char* buf = "\x02\x03\x04";
        return py::memoryview::from_buffer(buf, 1, "B", { 3 }, { });
    });

    m.def("test_memoryview_from_buffer_nullptr", []() {
        return py::memoryview::from_buffer(
            static_cast<void*>(nullptr), 1, "B", { }, { });
    });

#if PY_MAJOR_VERSION >= 3
    m.def("test_memoryview_from_memory", []() {
        const char* buf = "\xff\xe1\xab\x37";
        return py::memoryview::from_memory(
            buf, static_cast<ssize_t>(strlen(buf)));
    });
#endif
}
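The `py::print` calls above mirror Python's built-in `print` keyword arguments: `"sep"_a`, `"end"_a`, `"file"_a`, and `"flush"_a` map directly onto `sep`, `end`, `file`, and `flush`. A minimal pure-Python sketch of the same semantics (standard library only; the `buf` capture is just for illustration):

```python
import io

# Capture output so the sep/end behavior is visible as a single string.
buf = io.StringIO()
# py::print("*args", *args, "sep"_a="-") corresponds to:
print("*args", "and", "a", "custom", "separator", sep="-", file=buf)
# py::print("no new line here", "end"_a=" -- ") corresponds to:
print("no new line here", end=" -- ", file=buf)
print("next print", file=buf)

output = buf.getvalue()
print(output, end="")  # *args-and-a-custom-separator\nno new line here -- next print
```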
spaces/CVPR/LIVE/thrust/thrust/system/cpp/pointer.h
DELETED
@@ -1,351 +0,0 @@
/*
 *  Copyright 2008-2018 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/system/cpp/detail/execution_policy.h>
#include <thrust/detail/type_traits.h>
#include <thrust/detail/pointer.h>
#include <thrust/detail/reference.h>

namespace thrust
{
namespace system
{
namespace cpp
{

template<typename> class pointer;

} // end cpp
} // end system
} // end thrust


/*! \cond
 */

// specialize thrust::iterator_traits to avoid problems with the name of
// pointer's constructor shadowing its nested pointer type
// do this before pointer is defined so the specialization is correctly
// used inside the definition
namespace thrust
{

template<typename Element>
  struct iterator_traits<thrust::system::cpp::pointer<Element> >
{
  private:
    typedef thrust::system::cpp::pointer<Element> ptr;

  public:
    typedef typename ptr::iterator_category iterator_category;
    typedef typename ptr::value_type        value_type;
    typedef typename ptr::difference_type   difference_type;
    typedef ptr                             pointer;
    typedef typename ptr::reference         reference;
}; // end iterator_traits

} // end thrust

/*! \endcond
 */


namespace thrust
{
namespace system
{

/*! \addtogroup system_backends Systems
 *  \ingroup system
 *  \{
 */

/*! \namespace thrust::system::cpp
 *  \brief \p thrust::system::cpp is the namespace containing functionality for allocating, manipulating,
 *         and deallocating memory available to Thrust's standard C++ backend system.
 *         The identifiers are provided in a separate namespace underneath <tt>thrust::system</tt>
 *         for import convenience but are also aliased in the top-level <tt>thrust::cpp</tt>
 *         namespace for easy access.
 *
 */
namespace cpp
{

// forward declaration of reference for pointer
template<typename Element> class reference;

/*! \cond
 */

// XXX nvcc + msvc have trouble instantiating reference below
//     this is a workaround
namespace detail
{

template<typename Element>
  struct reference_msvc_workaround
{
  typedef thrust::system::cpp::reference<Element> type;
}; // end reference_msvc_workaround

} // end detail

/*! \endcond
 */


/*! \p pointer stores a pointer to an object allocated in memory available to the cpp system.
 *  This type provides type safety when dispatching standard algorithms on ranges resident
 *  in cpp memory.
 *
 *  \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic.
 *
 *  \p pointer can be created with the function \p cpp::malloc, or by explicitly calling its constructor
 *  with a raw pointer.
 *
 *  The raw pointer encapsulated by a \p pointer may be obtained by either its <tt>get</tt> member function
 *  or the \p raw_pointer_cast function.
 *
 *  \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory
 *  pointed to by \p pointer.
 *
 *  \tparam T specifies the type of the pointee.
 *
 *  \see cpp::malloc
 *  \see cpp::free
 *  \see raw_pointer_cast
 */
template<typename T>
  class pointer
    : public thrust::pointer<
               T,
               thrust::system::cpp::tag,
               thrust::system::cpp::reference<T>,
               thrust::system::cpp::pointer<T>
             >
{
  /*! \cond
   */

  private:
    typedef thrust::pointer<
      T,
      thrust::system::cpp::tag,
      //thrust::system::cpp::reference<T>,
      typename detail::reference_msvc_workaround<T>::type,
      thrust::system::cpp::pointer<T>
    > super_t;

  /*! \endcond
   */

  public:
    // note that cpp::pointer's member functions need __host__ __device__
    // to interoperate with nvcc + iterators' dereference member function

    /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0.
     */
    __host__ __device__
    pointer() : super_t() {}

#if THRUST_CPP_DIALECT >= 2011
    // NOTE: This is needed so that Thrust smart pointers can be used in
    // `std::unique_ptr`.
    __host__ __device__
    pointer(decltype(nullptr)) : super_t(nullptr) {}
#endif

    /*! This constructor allows construction of a <tt>pointer<const T></tt> from a <tt>T*</tt>.
     *
     *  \param ptr A raw pointer to copy from, presumed to point to a location in memory
     *         accessible by the \p cpp system.
     *  \tparam OtherT \p OtherT shall be convertible to \p T.
     */
    template<typename OtherT>
    __host__ __device__
    explicit pointer(OtherT *ptr) : super_t(ptr) {}

    /*! This constructor allows construction from another pointer-like object with related type.
     *
     *  \param other The \p OtherPointer to copy.
     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
     *          to \p thrust::system::cpp::tag and its element type shall be convertible to \p T.
     */
    template<typename OtherPointer>
    __host__ __device__
    pointer(const OtherPointer &other,
            typename thrust::detail::enable_if_pointer_is_convertible<
              OtherPointer,
              pointer
            >::type * = 0) : super_t(other) {}

    /*! This constructor allows construction from another pointer-like object with \p void type.
     *
     *  \param other The \p OtherPointer to copy.
     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
     *          to \p thrust::system::cpp::tag and its element type shall be \p void.
     */
    template<typename OtherPointer>
    __host__ __device__
    explicit
    pointer(const OtherPointer &other,
            typename thrust::detail::enable_if_void_pointer_is_system_convertible<
              OtherPointer,
              pointer
            >::type * = 0) : super_t(other) {}

    /*! Assignment operator allows assigning from another pointer-like object with related type.
     *
     *  \param other The other pointer-like object to assign from.
     *  \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
     *          to \p thrust::system::cpp::tag and its element type shall be convertible to \p T.
     */
    template<typename OtherPointer>
    __host__ __device__
    typename thrust::detail::enable_if_pointer_is_convertible<
      OtherPointer,
      pointer,
      pointer &
    >::type
    operator=(const OtherPointer &other)
    {
      return super_t::operator=(other);
    }

#if THRUST_CPP_DIALECT >= 2011
    // NOTE: This is needed so that Thrust smart pointers can be used in
    // `std::unique_ptr`.
    __host__ __device__
    pointer& operator=(decltype(nullptr))
    {
      super_t::operator=(nullptr);
      return *this;
    }
#endif
}; // end pointer

/*! \p reference is a wrapped reference to an object stored in memory available to the \p cpp system.
 *  \p reference is the type of the result of dereferencing a \p cpp::pointer.
 *
 *  \tparam T Specifies the type of the referenced object.
 */
template<typename T>
  class reference
    : public thrust::reference<
               T,
               thrust::system::cpp::pointer<T>,
               thrust::system::cpp::reference<T>
             >
{
  /*! \cond
   */

  private:
    typedef thrust::reference<
      T,
      thrust::system::cpp::pointer<T>,
      thrust::system::cpp::reference<T>
    > super_t;

  /*! \endcond
   */

  public:
    /*! \cond
     */

    typedef typename super_t::value_type value_type;
    typedef typename super_t::pointer    pointer;

    /*! \endcond
     */

    /*! This constructor initializes this \p reference to refer to an object
     *  pointed to by the given \p pointer. After this \p reference is constructed,
     *  it shall refer to the object pointed to by \p ptr.
     *
     *  \param ptr A \p pointer to copy from.
     */
    __host__ __device__
    explicit reference(const pointer &ptr)
      : super_t(ptr)
    {}

    /*! This constructor accepts a const reference to another \p reference of related type.
     *  After this \p reference is constructed, it shall refer to the same object as \p other.
     *
     *  \param other A \p reference to copy from.
     *  \tparam OtherT The element type of the other \p reference.
     *
     *  \note This constructor is templated primarily to allow initialization of <tt>reference<const T></tt>
     *        from <tt>reference<T></tt>.
     */
    template<typename OtherT>
    __host__ __device__
    reference(const reference<OtherT> &other,
              typename thrust::detail::enable_if_convertible<
                typename reference<OtherT>::pointer,
                pointer
              >::type * = 0)
      : super_t(other)
    {}

    /*! Copy assignment operator copy assigns from another \p reference of related type.
     *
     *  \param other The other \p reference to assign from.
     *  \return <tt>*this</tt>
     *  \tparam OtherT The element type of the other \p reference.
     */
    template<typename OtherT>
    reference &operator=(const reference<OtherT> &other);

    /*! Assignment operator assigns from a \p value_type.
     *
     *  \param x The \p value_type to assign from.
     *  \return <tt>*this</tt>
     */
    reference &operator=(const value_type &x);
}; // end reference

/*! Exchanges the values of two objects referred to by \p reference.
 *  \p x The first \p reference of interest.
 *  \p y The second \p reference of interest.
 */
template<typename T>
__host__ __device__
void swap(reference<T> x, reference<T> y);

} // end cpp

/*! \}
 */

} // end system

namespace cpp
{

using thrust::system::cpp::pointer;
using thrust::system::cpp::reference;

} // end cpp

} // end thrust

#include <thrust/system/cpp/detail/pointer.inl>
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/inner_product.h
DELETED
@@ -1,94 +0,0 @@
/******************************************************************************
 * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *     * Neither the name of the NVIDIA CORPORATION nor the
 *       names of its contributors may be used to endorse or promote products
 *       derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 ******************************************************************************/
#pragma once


#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
#include <iterator>
#include <thrust/system/cuda/detail/reduce.h>
#include <thrust/detail/minmax.h>
#include <thrust/distance.h>

namespace thrust
{

namespace cuda_cub {

template <class Derived,
          class InputIt1,
          class InputIt2,
          class T,
          class ReduceOp,
          class ProductOp>
T __host__ __device__
inner_product(execution_policy<Derived> &policy,
              InputIt1                   first1,
              InputIt1                   last1,
              InputIt2                   first2,
              T                          init,
              ReduceOp                   reduce_op,
              ProductOp                  product_op)
{
  typedef typename iterator_traits<InputIt1>::difference_type size_type;
  size_type num_items = static_cast<size_type>(thrust::distance(first1, last1));
  typedef transform_pair_of_input_iterators_t<T,
                                              InputIt1,
                                              InputIt2,
                                              ProductOp>
      binop_iterator_t;

  return cuda_cub::reduce_n(policy,
                            binop_iterator_t(first1, first2, product_op),
                            num_items,
                            init,
                            reduce_op);
}

template <class Derived,
          class InputIt1,
          class InputIt2,
          class T>
T __host__ __device__
inner_product(execution_policy<Derived> &policy,
              InputIt1                   first1,
              InputIt1                   last1,
              InputIt2                   first2,
              T                          init)
{
  return cuda_cub::inner_product(policy,
                                 first1,
                                 last1,
                                 first2,
                                 init,
                                 plus<T>(),
                                 multiplies<T>());
}

} // namespace cuda_cub

} // end namespace thrust
#endif
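The structure of this implementation is a transform-reduce: each pair `(first1[i], first2[i])` is combined with `product_op`, and the results are folded into `init` with `reduce_op` (defaulting to `plus` and `multiplies`, which gives the classic dot product). A minimal pure-Python sketch of the same semantics, with a hypothetical `inner_product` helper standing in for the device code:

```python
from functools import reduce
import operator

def inner_product(first1, first2, init, reduce_op=operator.add,
                  product_op=operator.mul):
    # Transform each pair with product_op, then fold the transformed
    # values into init with reduce_op (the reduce_n-over-pairs structure).
    return reduce(reduce_op, map(product_op, first1, first2), init)

result = inner_product([1, 2, 3], [4, 5, 6], 0)
print(result)  # 1*4 + 2*5 + 3*6 = 32
```

Swapping the operators (e.g. `reduce_op=max`, `product_op=operator.add`) generalizes the same skeleton to other pairwise reductions, which is why the two-operator overload exists.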
spaces/CVPR/LIVE/thrust/thrust/system/omp/memory.h
DELETED
@@ -1,95 +0,0 @@
/*
 *  Copyright 2008-2018 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file thrust/system/omp/memory.h
 *  \brief Managing memory associated with Thrust's OpenMP system.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/system/omp/memory_resource.h>
#include <thrust/memory.h>
#include <thrust/detail/type_traits.h>
#include <thrust/mr/allocator.h>
#include <ostream>

namespace thrust
{
namespace system
{
namespace omp
{

/*! Allocates an area of memory available to Thrust's <tt>omp</tt> system.
 *  \param n Number of bytes to allocate.
 *  \return A <tt>omp::pointer<void></tt> pointing to the beginning of the newly
 *          allocated memory. A null <tt>omp::pointer<void></tt> is returned if
 *          an error occurs.
 *  \note The <tt>omp::pointer<void></tt> returned by this function must be
 *        deallocated with \p omp::free.
 *  \see omp::free
 *  \see std::malloc
 */
inline pointer<void> malloc(std::size_t n);

/*! Allocates a typed area of memory available to Thrust's <tt>omp</tt> system.
 *  \param n Number of elements to allocate.
 *  \return A <tt>omp::pointer<T></tt> pointing to the beginning of the newly
 *          allocated memory. A null <tt>omp::pointer<T></tt> is returned if
 *          an error occurs.
 *  \note The <tt>omp::pointer<T></tt> returned by this function must be
 *        deallocated with \p omp::free.
 *  \see omp::free
 *  \see std::malloc
 */
template<typename T>
inline pointer<T> malloc(std::size_t n);

/*! Deallocates an area of memory previously allocated by <tt>omp::malloc</tt>.
 *  \param ptr A <tt>omp::pointer<void></tt> pointing to the beginning of an area
 *         of memory previously allocated with <tt>omp::malloc</tt>.
 *  \see omp::malloc
 *  \see std::free
 */
inline void free(pointer<void> ptr);

/*! \p omp::allocator is the default allocator used by the \p omp system's containers such as
 *  <tt>omp::vector</tt> if no user-specified allocator is provided. \p omp::allocator allocates
 *  (deallocates) storage with \p omp::malloc (\p omp::free).
 */
template<typename T>
using allocator = thrust::mr::stateless_resource_allocator<T, memory_resource>;

} // end omp
} // end system

/*! \namespace thrust::omp
 *  \brief \p thrust::omp is a top-level alias for thrust::system::omp.
 */
namespace omp
{

using thrust::system::omp::malloc;
using thrust::system::omp::free;
using thrust::system::omp::allocator;

} // end omp

} // end thrust

#include <thrust/system/omp/detail/memory.inl>
spaces/CVPR/WALT/mmdet/models/detectors/single_stage.py
DELETED
@@ -1,154 +0,0 @@
import torch
import torch.nn as nn

from mmdet.core import bbox2result
from ..builder import DETECTORS, build_backbone, build_head, build_neck
from .base import BaseDetector


@DETECTORS.register_module()
class SingleStageDetector(BaseDetector):
    """Base class for single-stage detectors.

    Single-stage detectors directly and densely predict bounding boxes on the
    output features of the backbone+neck.
    """

    def __init__(self,
                 backbone,
                 neck=None,
                 bbox_head=None,
                 train_cfg=None,
                 test_cfg=None,
                 pretrained=None):
        super(SingleStageDetector, self).__init__()
        self.backbone = build_backbone(backbone)
        if neck is not None:
            self.neck = build_neck(neck)
        bbox_head.update(train_cfg=train_cfg)
        bbox_head.update(test_cfg=test_cfg)
        self.bbox_head = build_head(bbox_head)
        self.train_cfg = train_cfg
        self.test_cfg = test_cfg
        self.init_weights(pretrained=pretrained)

    def init_weights(self, pretrained=None):
        """Initialize the weights in detector.

        Args:
            pretrained (str, optional): Path to pre-trained weights.
                Defaults to None.
        """
        super(SingleStageDetector, self).init_weights(pretrained)
        self.backbone.init_weights(pretrained=pretrained)
        if self.with_neck:
            if isinstance(self.neck, nn.Sequential):
                for m in self.neck:
                    m.init_weights()
            else:
                self.neck.init_weights()
        self.bbox_head.init_weights()

    def extract_feat(self, img):
        """Directly extract features from the backbone+neck."""
        x = self.backbone(img)
        if self.with_neck:
            x = self.neck(x)
        return x

    def forward_dummy(self, img):
        """Used for computing network flops.

        See `mmdetection/tools/analysis_tools/get_flops.py`
        """
        x = self.extract_feat(img)
        outs = self.bbox_head(x)
        return outs

    def forward_train(self,
                      img,
                      img_metas,
                      gt_bboxes,
                      gt_labels,
                      gt_bboxes_ignore=None):
        """
        Args:
            img (Tensor): Input images of shape (N, C, H, W).
                Typically these should be mean centered and std scaled.
            img_metas (list[dict]): A list of image info dicts where each dict
                has: 'img_shape', 'scale_factor', 'flip', and may also contain
                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
                For details on the values of these keys see
                :class:`mmdet.datasets.pipelines.Collect`.
            gt_bboxes (list[Tensor]): Each item is the ground-truth boxes for
                one image in [tl_x, tl_y, br_x, br_y] format.
            gt_labels (list[Tensor]): Class indices corresponding to each box.
            gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
                boxes can be ignored when computing the loss.

        Returns:
            dict[str, Tensor]: A dictionary of loss components.
        """
        super(SingleStageDetector, self).forward_train(img, img_metas)
        x = self.extract_feat(img)
        losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes,
                                              gt_labels, gt_bboxes_ignore)
        return losses

    def simple_test(self, img, img_metas, rescale=False):
        """Test function without test-time augmentation.

        Args:
            img (list[torch.Tensor]): List of multiple images.
            img_metas (list[dict]): List of image information.
            rescale (bool, optional): Whether to rescale the results.
                Defaults to False.

        Returns:
            list[list[np.ndarray]]: BBox results of each image and classes.
                The outer list corresponds to each image. The inner list
                corresponds to each class.
        """
        x = self.extract_feat(img)
        outs = self.bbox_head(x)
        # get origin input shape to support onnx dynamic shape
        if torch.onnx.is_in_onnx_export():
            # get shape as tensor
            img_shape = torch._shape_as_tensor(img)[2:]
            img_metas[0]['img_shape_for_onnx'] = img_shape
        bbox_list = self.bbox_head.get_bboxes(
            *outs, img_metas, rescale=rescale)
        # skip post-processing when exporting to ONNX
        if torch.onnx.is_in_onnx_export():
            return bbox_list

        bbox_results = [
            bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
            for det_bboxes, det_labels in bbox_list
        ]
        return bbox_results

    def aug_test(self, imgs, img_metas, rescale=False):
        """Test function with test-time augmentation.

        Args:
            imgs (list[Tensor]): The outer list indicates test-time
                augmentations and the inner Tensor should have a shape NxCxHxW,
                which contains all images in the batch.
            img_metas (list[list[dict]]): The outer list indicates test-time
                augs (multiscale, flip, etc.) and the inner list indicates
                images in a batch. Each dict has image information.
            rescale (bool, optional): Whether to rescale the results.
                Defaults to False.

        Returns:
            list[list[np.ndarray]]: BBox results of each image and classes.
                The outer list corresponds to each image. The inner list
                corresponds to each class.
        """
        assert hasattr(self.bbox_head, 'aug_test'), \
            f'{self.bbox_head.__class__.__name__}' \
            ' does not support test-time augmentation'

        feats = self.extract_feats(imgs)
        return [self.bbox_head.aug_test(feats, img_metas, rescale=rescale)]
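The post-processing step in `simple_test` converts the head's flat detections into per-class lists via `bbox2result`. A hypothetical pure-Python sketch of that grouping (the real `mmdet.core.bbox2result` returns one numpy array of shape (n, 5) per class; plain lists are used here to stay self-contained):

```python
def bbox2result_sketch(det_bboxes, det_labels, num_classes):
    # Group flat detections into one bucket per class index,
    # mimicking what mmdet.core.bbox2result does with numpy arrays.
    results = [[] for _ in range(num_classes)]
    for bbox, label in zip(det_bboxes, det_labels):
        results[label].append(bbox)
    return results

res = bbox2result_sketch(
    [[0, 0, 10, 10, 0.9], [5, 5, 20, 20, 0.8]],  # [tl_x, tl_y, br_x, br_y, score]
    [1, 0],                                       # class index per detection
    num_classes=3)
print([len(per_class) for per_class in res])  # [1, 1, 0]
```

Downstream evaluation code can then index `res[c]` to get every detection of class `c` for the image, which is the contract `simple_test` promises in its docstring.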
spaces/CVPR/WALT/mmdet/models/detectors/sparse_rcnn.py
DELETED
@@ -1,110 +0,0 @@
from ..builder import DETECTORS
from .two_stage import TwoStageDetector


@DETECTORS.register_module()
class SparseRCNN(TwoStageDetector):
    r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with
    Learnable Proposals <https://arxiv.org/abs/2011.12450>`_"""

    def __init__(self, *args, **kwargs):
        super(SparseRCNN, self).__init__(*args, **kwargs)
        assert self.with_rpn, 'Sparse R-CNN does not support external proposals'

    def forward_train(self,
                      img,
                      img_metas,
                      gt_bboxes,
                      gt_labels,
                      gt_bboxes_ignore=None,
                      gt_masks=None,
                      proposals=None,
                      **kwargs):
        """Forward function of Sparse R-CNN in train stage.

        Args:
            img (Tensor): of shape (N, C, H, W) encoding input images.
                Typically these should be mean centered and std scaled.
            img_metas (list[dict]): list of image info dict where each dict
                has: 'img_shape', 'scale_factor', 'flip', and may also contain
                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
                For details on the values of these keys see
                :class:`mmdet.datasets.pipelines.Collect`.
            gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
                shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
            gt_labels (list[Tensor]): class indices corresponding to each box.
            gt_bboxes_ignore (None | list[Tensor]): specify which bounding
                boxes can be ignored when computing the loss.
            gt_masks (list[Tensor], optional): Segmentation masks for
                each box, which are not supported in this architecture.
            proposals (list[Tensor], optional): override rpn proposals with
                custom proposals. Use when `with_rpn` is False.

        Returns:
            dict[str, Tensor]: a dictionary of loss components
        """

        assert proposals is None, 'Sparse R-CNN does not support' \
            ' external proposals'
        assert gt_masks is None, 'Sparse R-CNN does not support instance segmentation'

        x = self.extract_feat(img)
        proposal_boxes, proposal_features, imgs_whwh = \
            self.rpn_head.forward_train(x, img_metas)
        roi_losses = self.roi_head.forward_train(
            x,
            proposal_boxes,
            proposal_features,
            img_metas,
            gt_bboxes,
            gt_labels,
            gt_bboxes_ignore=gt_bboxes_ignore,
            gt_masks=gt_masks,
            imgs_whwh=imgs_whwh)
        return roi_losses

    def simple_test(self, img, img_metas, rescale=False):
        """Test function without test time augmentation.

        Args:
            img (list[torch.Tensor]): List of multiple images.
            img_metas (list[dict]): List of image information.
            rescale (bool): Whether to rescale the results.
                Defaults to False.

        Returns:
            list[list[np.ndarray]]: BBox results of each image and classes.
                The outer list corresponds to each image. The inner list
                corresponds to each class.
        """
        x = self.extract_feat(img)
        proposal_boxes, proposal_features, imgs_whwh = \
            self.rpn_head.simple_test_rpn(x, img_metas)
        bbox_results = self.roi_head.simple_test(
            x,
            proposal_boxes,
            proposal_features,
            img_metas,
            imgs_whwh=imgs_whwh,
            rescale=rescale)
        return bbox_results

    def forward_dummy(self, img):
        """Used for computing network flops.

        See `mmdetection/tools/analysis_tools/get_flops.py`
        """
        # backbone
        x = self.extract_feat(img)
        # rpn
        num_imgs = len(img)
        dummy_img_metas = [
            dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs)
        ]
        proposal_boxes, proposal_features, imgs_whwh = \
            self.rpn_head.simple_test_rpn(x, dummy_img_metas)
        # roi_head
        roi_outs = self.roi_head.forward_dummy(x, proposal_boxes,
                                               proposal_features,
                                               dummy_img_metas)
        return roi_outs
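The `@DETECTORS.register_module()` decorator above records the class in a registry so it can later be built by name from a config file. A toy sketch of that registry pattern, under the assumption that this captures the general mechanism only (mmdet's real `Registry` class has more features, such as config-driven building):

```python
class Registry:
    """Toy registry: maps a class name to the class object itself."""

    def __init__(self, name):
        self.name = name
        self._module_dict = {}

    def register_module(self):
        def _register(cls):
            # Record the class under its own name; return it unchanged.
            self._module_dict[cls.__name__] = cls
            return cls
        return _register

    def get(self, name):
        return self._module_dict[name]


DETECTORS = Registry('detector')


@DETECTORS.register_module()
class SparseRCNN:
    pass
```

Because the decorator returns the class unchanged, registration is transparent to normal use of `SparseRCNN`, while config loaders can resolve the string `'SparseRCNN'` to the class via `DETECTORS.get`.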
spaces/CVPR/WALT/walt/datasets/pipelines/instaboost.py
DELETED
@@ -1,98 +0,0 @@
import numpy as np

from ..builder import PIPELINES


@PIPELINES.register_module()
class InstaBoost(object):
    r"""Data augmentation method in `InstaBoost: Boosting Instance
    Segmentation Via Probability Map Guided Copy-Pasting
    <https://arxiv.org/abs/1908.07801>`_.

    Refer to https://github.com/GothicAi/Instaboost for implementation details.
    """

    def __init__(self,
                 action_candidate=('normal', 'horizontal', 'skip'),
                 action_prob=(1, 0, 0),
                 scale=(0.8, 1.2),
                 dx=15,
                 dy=15,
                 theta=(-1, 1),
                 color_prob=0.5,
                 hflag=False,
                 aug_ratio=0.5):
        try:
            import instaboostfast as instaboost
        except ImportError:
            raise ImportError(
                'Please run "pip install instaboostfast" '
                'to install instaboostfast first for instaboost augmentation.')
        self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob,
                                               scale, dx, dy, theta,
                                               color_prob, hflag)
        self.aug_ratio = aug_ratio

    def _load_anns(self, results):
        labels = results['ann_info']['labels']
        masks = results['ann_info']['masks']
        bboxes = results['ann_info']['bboxes']
        n = len(labels)

        anns = []
        for i in range(n):
            label = labels[i]
            bbox = bboxes[i]
            mask = masks[i]
            x1, y1, x2, y2 = bbox
            # assert (x2 - x1) >= 1 and (y2 - y1) >= 1
            bbox = [x1, y1, x2 - x1, y2 - y1]
            anns.append({
                'category_id': label,
                'segmentation': mask,
                'bbox': bbox
            })

        return anns

    def _parse_anns(self, results, anns, img):
        gt_bboxes = []
        gt_labels = []
        gt_masks_ann = []
        for ann in anns:
            x1, y1, w, h = ann['bbox']
            # TODO: more essential bug need to be fixed in instaboost
            if w <= 0 or h <= 0:
                continue
            bbox = [x1, y1, x1 + w, y1 + h]
            gt_bboxes.append(bbox)
            gt_labels.append(ann['category_id'])
            gt_masks_ann.append(ann['segmentation'])
        gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
        gt_labels = np.array(gt_labels, dtype=np.int64)
        results['ann_info']['labels'] = gt_labels
        results['ann_info']['bboxes'] = gt_bboxes
        results['ann_info']['masks'] = gt_masks_ann
        results['img'] = img
        return results

    def __call__(self, results):
        img = results['img']
        orig_type = img.dtype
        anns = self._load_anns(results)
        if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]):
            try:
                import instaboostfast as instaboost
            except ImportError:
                raise ImportError('Please run "pip install instaboostfast" '
                                  'to install instaboostfast first.')
            anns, img = instaboost.get_new_data(
                anns, img.astype(np.uint8), self.cfg, background=None)

        results = self._parse_anns(results, anns, img.astype(orig_type))
        return results

    def __repr__(self):
        repr_str = self.__class__.__name__
        repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})'
        return repr_str
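`_load_anns` converts mmdet's corner-format boxes `[x1, y1, x2, y2]` into the COCO-style `[x, y, w, h]` that instaboostfast expects, and `_parse_anns` converts them back afterwards. The round trip can be sketched in isolation:

```python
def xyxy_to_xywh(bbox):
    # [x1, y1, x2, y2] (corners) -> [x, y, w, h] (top-left + size)
    x1, y1, x2, y2 = bbox
    return [x1, y1, x2 - x1, y2 - y1]

def xywh_to_xyxy(bbox):
    # [x, y, w, h] -> [x1, y1, x2, y2]
    x1, y1, w, h = bbox
    return [x1, y1, x1 + w, y1 + h]

box_xyxy = [10, 20, 30, 60]
box_xywh = xyxy_to_xywh(box_xyxy)   # [10, 20, 20, 40]
roundtrip = xywh_to_xyxy(box_xywh)  # back to [10, 20, 30, 60]
```

Degenerate boxes (`w <= 0` or `h <= 0`) can come back from instaboostfast, which is why `_parse_anns` drops them before converting.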
spaces/CVPR/lama-example/saicinpainting/evaluation/losses/__init__.py
DELETED
File without changes
spaces/CikeyQI/Yunzai/Yunzai/plugins/other/update.js
DELETED
@@ -1,240 +0,0 @@
import plugin from '../../lib/plugins/plugin.js'
import { createRequire } from 'module'
import lodash from 'lodash'
import fs from 'node:fs'
import { Restart } from './restart.js'
import common from '../../lib/common/common.js'

const require = createRequire(import.meta.url)
const { exec, execSync } = require('child_process')

let uping = false

export class update extends plugin {
  constructor() {
    super({
      name: '更新',
      dsc: '#更新 #强制更新',
      event: 'message',
      priority: 4000,
      rule: [
        {
          reg: '^#更新日志',
          fnc: 'updateLog'
        },
        {
          reg: '^#(强制)?更新',
          fnc: 'update'
        },
        {
          reg: '^#全部(强制)?更新$',
          fnc: 'updateAll',
          permission: 'master'
        }
      ]
    })

    this.typeName = 'TRSS-Yunzai'
  }

  async update() {
    if (!this.e.isMaster) return false
    if (uping) return this.reply('已有命令更新中..请勿重复操作')

    if (/详细|详情|面板|面版/.test(this.e.msg)) return false

    /** 获取插件 */
    const plugin = this.getPlugin()
    if (plugin === false) return false

    /** 执行更新 */
    await this.runUpdate(plugin)

    /** 是否需要重启 */
    if (this.isUp) {
      // await this.reply('即将执行重启,以应用更新')
      setTimeout(() => this.restart(), 2000)
    }
  }

  getPlugin(plugin = '') {
    if (!plugin) {
      plugin = this.e.msg.replace(/#(强制)?更新(日志)?/, '')
      if (!plugin) return ''
    }

    if (!fs.existsSync(`plugins/${plugin}/.git`)) return false

    this.typeName = plugin
    return plugin
  }

  async execSync(cmd) {
    return new Promise((resolve, reject) => {
      exec(cmd, { windowsHide: true }, (error, stdout, stderr) => {
        resolve({ error, stdout, stderr })
      })
    })
  }

  async runUpdate(plugin = '') {
    this.isNowUp = false

    let cm = 'git pull --no-rebase'

    let type = '更新'
    if (this.e.msg.includes('强制')) {
      type = '强制更新'
      cm = `git reset --hard && git pull --rebase --allow-unrelated-histories`
    }
    if (plugin) cm = `cd "plugins/${plugin}" && ${cm}`

    this.oldCommitId = await this.getcommitId(plugin)

    logger.mark(`${this.e.logFnc} 开始${type}:${this.typeName}`)

    await this.reply(`开始${type} ${this.typeName}`)
    uping = true
    const ret = await this.execSync(cm)
    uping = false

    if (ret.error) {
      logger.mark(`${this.e.logFnc} 更新失败:${this.typeName}`)
      this.gitErr(ret.error, ret.stdout)
      return false
    }

    const time = await this.getTime(plugin)

    if (/Already up|已经是最新/g.test(ret.stdout)) {
      await this.reply(`${this.typeName} 已是最新\n最后更新时间:${time}`)
    } else {
      await this.reply(`${this.typeName} 更新成功\n更新时间:${time}`)
      this.isUp = true
      await this.reply(await this.getLog(plugin))
    }

    logger.mark(`${this.e.logFnc} 最后更新时间:${time}`)
    return true
  }

  async getcommitId(plugin = '') {
    let cm = 'git rev-parse --short HEAD'
    if (plugin) cm = `cd "plugins/${plugin}" && ${cm}`

    const commitId = await execSync(cm, { encoding: 'utf-8' })
    return lodash.trim(commitId)
  }

  async getTime(plugin = '') {
    let cm = 'git log -1 --pretty=%cd --date=format:"%F %T"'
    if (plugin) cm = `cd "plugins/${plugin}" && ${cm}`

    let time = ''
    try {
      time = await execSync(cm, { encoding: 'utf-8' })
      time = lodash.trim(time)
    } catch (error) {
      logger.error(error.toString())
      time = '获取时间失败'
    }

    return time
  }

  async gitErr(err, stdout) {
    const msg = '更新失败!'
    const errMsg = err.toString()
    stdout = stdout.toString()

    if (errMsg.includes('Timed out')) {
      const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '')
      return this.reply(`${msg}\n连接超时:${remote}`)
    }

    if (/Failed to connect|unable to access/g.test(errMsg)) {
      const remote = errMsg.match(/'(.+?)'/g)[0].replace(/'/g, '')
      return this.reply(`${msg}\n连接失败:${remote}`)
    }

    if (errMsg.includes('be overwritten by merge')) {
      return this.reply(`${msg}\n存在冲突:\n${errMsg}\n请解决冲突后再更新,或者执行#强制更新,放弃本地修改`)
    }

    if (stdout.includes('CONFLICT')) {
      return this.reply(`${msg}\n存在冲突:\n${errMsg}${stdout}\n请解决冲突后再更新,或者执行#强制更新,放弃本地修改`)
    }

    return this.reply([errMsg, stdout])
  }

  async updateAll() {
    const dirs = fs.readdirSync('./plugins/')

    await this.runUpdate()

    for (let plu of dirs) {
      plu = this.getPlugin(plu)
      if (plu === false) continue
      await common.sleep(1500)
      await this.runUpdate(plu)
    }

    if (this.isUp) {
      // await this.reply('即将执行重启,以应用更新')
      setTimeout(() => this.restart(), 2000)
    }
  }

  restart() {
    new Restart(this.e).restart()
  }

  async getLog(plugin = '') {
    let cm = 'git log -100 --pretty="%h||[%cd] %s" --date=format:"%F %T"'
    if (plugin) cm = `cd "plugins/${plugin}" && ${cm}`

    let logAll
    try {
      logAll = await execSync(cm, { encoding: 'utf-8' })
    } catch (error) {
      logger.error(error.toString())
      await this.reply(error.toString())
    }

    if (!logAll) return false

    logAll = logAll.trim().split('\n')

    let log = []
    for (let str of logAll) {
      str = str.split('||')
      if (str[0] == this.oldCommitId) break
      if (str[1].includes('Merge branch')) continue
      log.push(str[1])
    }
    let line = log.length
    log = log.join('\n\n')

    if (log.length <= 0) return ''

    let end = ''
    try {
      cm = 'git config -l'
      if (plugin) cm = `cd "plugins/${plugin}" && ${cm}`
      end = await execSync(cm, { encoding: 'utf-8' })
      end = end.match(/remote\..*\.url=.+/g).join('\n\n').replace(/remote\..*\.url=/g, '').replace(/\/\/([^@]+)@/, '//')
    } catch (error) {
      logger.error(error.toString())
      await this.reply(error.toString())
    }

    return common.makeForwardMsg(this.e, [log, end], `${plugin || 'TRSS-Yunzai'} 更新日志,共${line}条`)
  }

  async updateLog() {
    const plugin = this.getPlugin()
    if (plugin === false) return false
    return this.reply(await this.getLog(plugin))
  }
}
spaces/CjangCjengh/Shanghainese-TTS/models.py
DELETED
@@ -1,535 +0,0 @@
import math
import torch
from torch import nn
from torch.nn import functional as F

import commons
import modules
import attentions
import monotonic_align

from torch.nn import Conv1d, ConvTranspose1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
from commons import init_weights, get_padding


class StochasticDurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
        super().__init__()
        filter_channels = in_channels  # it needs to be removed from future version.
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.log_flow = modules.Log()
        self.flows = nn.ModuleList()
        self.flows.append(modules.ElementwiseAffine(2))
        for i in range(n_flows):
            self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.flows.append(modules.Flip())

        self.post_pre = nn.Conv1d(1, filter_channels, 1)
        self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        self.post_flows = nn.ModuleList()
        self.post_flows.append(modules.ElementwiseAffine(2))
        for i in range(4):
            self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
            self.post_flows.append(modules.Flip())

        self.pre = nn.Conv1d(in_channels, filter_channels, 1)
        self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
        self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, filter_channels, 1)

    def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
        x = torch.detach(x)
        x = self.pre(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.convs(x, x_mask)
        x = self.proj(x) * x_mask

        if not reverse:
            flows = self.flows
            assert w is not None

            logdet_tot_q = 0
            h_w = self.post_pre(w)
            h_w = self.post_convs(h_w, x_mask)
            h_w = self.post_proj(h_w) * x_mask
            e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
            z_q = e_q
            for flow in self.post_flows:
                z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
                logdet_tot_q += logdet_q
            z_u, z1 = torch.split(z_q, [1, 1], 1)
            u = torch.sigmoid(z_u) * x_mask
            z0 = (w - u) * x_mask
            logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
            logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q**2)) * x_mask, [1, 2]) - logdet_tot_q

            logdet_tot = 0
            z0, logdet = self.log_flow(z0, x_mask)
            logdet_tot += logdet
            z = torch.cat([z0, z1], 1)
            for flow in flows:
                z, logdet = flow(z, x_mask, g=x, reverse=reverse)
                logdet_tot = logdet_tot + logdet
            nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z**2)) * x_mask, [1, 2]) - logdet_tot
            return nll + logq  # [b]
        else:
            flows = list(reversed(self.flows))
            flows = flows[:-2] + [flows[-1]]  # remove a useless vflow
            z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
            for flow in flows:
                z = flow(z, x_mask, g=x, reverse=reverse)
            z0, z1 = torch.split(z, [1, 1], 1)
            logw = z0
            return logw


class DurationPredictor(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
        super().__init__()

        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.gin_channels = gin_channels

        self.drop = nn.Dropout(p_dropout)
        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_1 = modules.LayerNorm(filter_channels)
        self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
        self.norm_2 = modules.LayerNorm(filter_channels)
        self.proj = nn.Conv1d(filter_channels, 1, 1)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, in_channels, 1)

    def forward(self, x, x_mask, g=None):
        x = torch.detach(x)
        if g is not None:
            g = torch.detach(g)
            x = x + self.cond(g)
        x = self.conv_1(x * x_mask)
        x = torch.relu(x)
        x = self.norm_1(x)
        x = self.drop(x)
        x = self.conv_2(x * x_mask)
        x = torch.relu(x)
        x = self.norm_2(x)
        x = self.drop(x)
        x = self.proj(x * x_mask)
        return x * x_mask


class TextEncoder(nn.Module):
    def __init__(self,
                 n_vocab,
                 out_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout):
        super().__init__()
        self.n_vocab = n_vocab
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout

        if self.n_vocab != 0:
            self.emb = nn.Embedding(n_vocab, hidden_channels)
            nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)

        self.encoder = attentions.Encoder(
            hidden_channels,
            filter_channels,
            n_heads,
            n_layers,
            kernel_size,
            p_dropout)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths):
        if self.n_vocab != 0:
            x = self.emb(x) * math.sqrt(self.hidden_channels)  # [b, t, h]
        x = torch.transpose(x, 1, -1)  # [b, h, t]
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)

        x = self.encoder(x * x_mask, x_mask)
        stats = self.proj(x) * x_mask

        m, logs = torch.split(stats, self.out_channels, dim=1)
        return x, m, logs, x_mask


class ResidualCouplingBlock(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 n_flows=4,
                 gin_channels=0):
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.n_flows = n_flows
        self.gin_channels = gin_channels

        self.flows = nn.ModuleList()
        for i in range(n_flows):
            self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
            self.flows.append(modules.Flip())

    def forward(self, x, x_mask, g=None, reverse=False):
        if not reverse:
            for flow in self.flows:
                x, _ = flow(x, x_mask, g=g, reverse=reverse)
        else:
            for flow in reversed(self.flows):
                x = flow(x, x_mask, g=g, reverse=reverse)
        return x


class PosteriorEncoder(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 gin_channels=0):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels

        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)

    def forward(self, x, x_lengths, g=None):
        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
        x = self.pre(x) * x_mask
        x = self.enc(x, x_mask, g=g)
        stats = self.proj(x) * x_mask
        m, logs = torch.split(stats, self.out_channels, dim=1)
        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
        return z, m, logs, x_mask


class Generator(torch.nn.Module):
    def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
        super(Generator, self).__init__()
        self.num_kernels = len(resblock_kernel_sizes)
        self.num_upsamples = len(upsample_rates)
        self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
        resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2

        self.ups = nn.ModuleList()
        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
            self.ups.append(weight_norm(
                ConvTranspose1d(upsample_initial_channel // (2**i), upsample_initial_channel // (2**(i + 1)),
                                k, u, padding=(k - u) // 2)))

        self.resblocks = nn.ModuleList()
        for i in range(len(self.ups)):
            ch = upsample_initial_channel // (2**(i + 1))
            for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
                self.resblocks.append(resblock(ch, k, d))

        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
        self.ups.apply(init_weights)

        if gin_channels != 0:
            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)

    def forward(self, x, g=None):
        x = self.conv_pre(x)
        if g is not None:
            x = x + self.cond(g)

        for i in range(self.num_upsamples):
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            x = self.ups[i](x)
            xs = None
            for j in range(self.num_kernels):
                if xs is None:
                    xs = self.resblocks[i * self.num_kernels + j](x)
                else:
                    xs += self.resblocks[i * self.num_kernels + j](x)
            x = xs / self.num_kernels
        x = F.leaky_relu(x)
        x = self.conv_post(x)
        x = torch.tanh(x)

        return x

    def remove_weight_norm(self):
        print('Removing weight norm...')
        for l in self.ups:
            remove_weight_norm(l)
        for l in self.resblocks:
            l.remove_weight_norm()


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        self.use_spectral_norm = use_spectral_norm
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
        ])
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv1d(1, 16, 15, 1, padding=7)),
            norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
            norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
            norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
        ])
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, modules.LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(MultiPeriodDiscriminator, self).__init__()
        periods = [2, 3, 5, 7, 11]

        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
        discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
        self.discriminators = nn.ModuleList(discs)

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            y_d_gs.append(y_d_g)
            fmap_rs.append(fmap_r)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class SynthesizerTrn(nn.Module):
    """
    Synthesizer for Training
    """

    def __init__(self,
                 n_vocab,
                 spec_channels,
                 segment_size,
                 inter_channels,
                 hidden_channels,
                 filter_channels,
                 n_heads,
                 n_layers,
                 kernel_size,
                 p_dropout,
                 resblock,
                 resblock_kernel_sizes,
                 resblock_dilation_sizes,
                 upsample_rates,
                 upsample_initial_channel,
                 upsample_kernel_sizes,
                 n_speakers=0,
                 gin_channels=0,
                 use_sdp=True,
                 **kwargs):

        super().__init__()
        self.n_vocab = n_vocab
        self.spec_channels = spec_channels
        self.inter_channels = inter_channels
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.resblock = resblock
        self.resblock_kernel_sizes = resblock_kernel_sizes
        self.resblock_dilation_sizes = resblock_dilation_sizes
        self.upsample_rates = upsample_rates
|
432 |
-
self.upsample_initial_channel = upsample_initial_channel
|
433 |
-
self.upsample_kernel_sizes = upsample_kernel_sizes
|
434 |
-
self.segment_size = segment_size
|
435 |
-
self.n_speakers = n_speakers
|
436 |
-
self.gin_channels = gin_channels
|
437 |
-
|
438 |
-
self.use_sdp = use_sdp
|
439 |
-
|
440 |
-
self.enc_p = TextEncoder(n_vocab,
|
441 |
-
inter_channels,
|
442 |
-
hidden_channels,
|
443 |
-
filter_channels,
|
444 |
-
n_heads,
|
445 |
-
n_layers,
|
446 |
-
kernel_size,
|
447 |
-
p_dropout)
|
448 |
-
self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
|
449 |
-
self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
|
450 |
-
self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
|
451 |
-
|
452 |
-
if use_sdp:
|
453 |
-
self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
|
454 |
-
else:
|
455 |
-
self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
|
456 |
-
|
457 |
-
if n_speakers > 1:
|
458 |
-
self.emb_g = nn.Embedding(n_speakers, gin_channels)
|
459 |
-
|
460 |
-
def forward(self, x, x_lengths, y, y_lengths, sid=None):
|
461 |
-
|
462 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
|
463 |
-
if self.n_speakers > 0:
|
464 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
465 |
-
else:
|
466 |
-
g = None
|
467 |
-
|
468 |
-
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
|
469 |
-
z_p = self.flow(z, y_mask, g=g)
|
470 |
-
|
471 |
-
with torch.no_grad():
|
472 |
-
# negative cross-entropy
|
473 |
-
s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
|
474 |
-
neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
|
475 |
-
neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
476 |
-
neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
477 |
-
neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
|
478 |
-
neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
|
479 |
-
|
480 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
481 |
-
attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
|
482 |
-
|
483 |
-
w = attn.sum(2)
|
484 |
-
if self.use_sdp:
|
485 |
-
l_length = self.dp(x, x_mask, w, g=g)
|
486 |
-
l_length = l_length / torch.sum(x_mask)
|
487 |
-
else:
|
488 |
-
logw_ = torch.log(w + 1e-6) * x_mask
|
489 |
-
logw = self.dp(x, x_mask, g=g)
|
490 |
-
l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
|
491 |
-
|
492 |
-
# expand prior
|
493 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
|
494 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
|
495 |
-
|
496 |
-
z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
|
497 |
-
o = self.dec(z_slice, g=g)
|
498 |
-
return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
|
499 |
-
|
500 |
-
def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
|
501 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
|
502 |
-
if self.n_speakers > 0:
|
503 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
504 |
-
else:
|
505 |
-
g = None
|
506 |
-
|
507 |
-
if self.use_sdp:
|
508 |
-
logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
|
509 |
-
else:
|
510 |
-
logw = self.dp(x, x_mask, g=g)
|
511 |
-
w = torch.exp(logw) * x_mask * length_scale
|
512 |
-
w_ceil = torch.ceil(w)
|
513 |
-
y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
|
514 |
-
y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
|
515 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
516 |
-
attn = commons.generate_path(w_ceil, attn_mask)
|
517 |
-
|
518 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
|
519 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
|
520 |
-
|
521 |
-
z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
|
522 |
-
z = self.flow(z_p, y_mask, g=g, reverse=True)
|
523 |
-
o = self.dec((z * y_mask)[:,:,:max_len], g=g)
|
524 |
-
return o, attn, y_mask, (z, z_p, m_p, logs_p)
|
525 |
-
|
526 |
-
def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
|
527 |
-
assert self.n_speakers > 0, "n_speakers have to be larger than 0."
|
528 |
-
g_src = self.emb_g(sid_src).unsqueeze(-1)
|
529 |
-
g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
|
530 |
-
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
|
531 |
-
z_p = self.flow(z, y_mask, g=g_src)
|
532 |
-
z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
|
533 |
-
o_hat = self.dec(z_hat * y_mask, g=g_tgt)
|
534 |
-
return o_hat, y_mask, (z, z_p, z_hat)
|
535 |
-
|
spaces/CofAI/chat/server/bp.py
DELETED
@@ -1,6 +0,0 @@
from flask import Blueprint

bp = Blueprint('bp', __name__,
               template_folder='./../client/html',
               static_folder='./../client',
               static_url_path='assets')
spaces/CofAI/picscore/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: PicScore — Picture Generator with Stable Diffusion
emoji: 🖼
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 3.38.0
app_file: picscore.py
pinned: true
license: mit
---

🖼 Generate pictures with the latest technology in PicScore now for free and unlimited!
spaces/Crossper6/stable-diffusion-webui/app.py
DELETED
@@ -1,75 +0,0 @@
import os
from subprocess import getoutput

gpu_info = getoutput('nvidia-smi')
if("A10G" in gpu_info):
    os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
elif("T4" in gpu_info):
    os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")

os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
os.chdir("/home/user/app/stable-diffusion-webui")

os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")

# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
# ---------------------------------------------------------------------------------------------------------------------------------------------------

if "IS_SHARED_UI" in os.environ:
    os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")

    os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
    os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")

    os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
    os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
    os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")

    os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
else:
    # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
    os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")

    # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
    #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
    os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
    os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")

    # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
    #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
    #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
    os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
    #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
    #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
    #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
    os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
    os.system(f"wget -q https://huggingface.co/MehjourneyClosedAI/OpenAnimeJourney/resolve/main/OpenAnimeJourney.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/OpenAnimeJourney.ckpt")
    #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
    #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
    #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
    #os.system(f"wget -q https://huggingface.co/B2gan/NovelAI/resolve/main/model.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.ckpt")
    os.system(f"wget --user Crossper6 --password pMRvyayxAP^Nv2$ -q https://huggingface.co/spaces/Crossper6/stable-diffusion-webui/resolve/main/novelai.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.ckpt")
    os.system(f"wget --user Crossper6 --password pMRvyayxAP^Nv2$ -q https://huggingface.co/spaces/Crossper6/stable-diffusion-webui/raw/main/novelai.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/novelai.yaml")

    #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
    #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")

    #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
    #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
    os.system(f"wget -q https://r2.kamiya-b.me/dreambooth_lib/akakura-sn.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/akakura-sn.ckpt")
    #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
    os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")

    os.system(f"python launch.py --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --disable-safe-unpickle --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --precision full --no-half --api --skip-torch-cuda-test")