Commit eb91bd5
Parent(s): 919dab2
Update parquet files (step 87 of 249)

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete set of changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avid Plugins Free Crack Version How to Download and Install It Safely.md +0 -32
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack the Core Exam 9th Edition PDF Everything You Need to Know to Pass the Radiology Board Exam.md +0 -25
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Spotify Hacked 2022 Cmo Disfrutar de Msica Ilimitada Gratis.md +0 -40
- spaces/1gistliPinn/ChatGPT4/Examples/Delphi 2015.3 Keygen PATCHED-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Zip.md +0 -87
- spaces/1gistliPinn/ChatGPT4/Examples/Diavolul Se Imbraca De La Prada Online Cu Subtitrare.md +0 -6
- spaces/1line/AutoGPT/autogpt/commands/execute_code.py +0 -158
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/A to Z Bhakti Song MP3 Download Free Devotional Music from Pagalworld.md +0 -169
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawlhalla The Best Free Platform Fighting Game - No Steam Installation Required.md +0 -100
- spaces/1phancelerku/anime-remove-background/Download M Recorder - A Smart and Convenient App for Recording Sound.md +0 -128
- spaces/2ndelement/voicevox/test/test_setting.py +0 -72
- spaces/AI-Hobbyist/Hoyo-RVC/docs/faiss_tips_en.md +0 -102
- spaces/AIFILMS/riffusion-playground/app.py +0 -36
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/pqmf.py +0 -129
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/training_utils.py +0 -27
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/__init__.py +0 -0
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/ddim.py +0 -262
- spaces/ASJMO/freegpt/g4f/Provider/Providers/Better.py +0 -56
- spaces/ASJMO/freegpt/g4f/Provider/Providers/ChatgptAi.py +0 -51
- spaces/AchyuthGamer/OpenGPT/client/css/typing.css +0 -15
- spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/render.py +0 -110
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.js +0 -13
- spaces/AlekseyCalvin/dreambooth-training3/app.py +0 -659
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/distributed_inference.md +0 -91
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/__init__.py +0 -23
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/__init__.py +0 -92
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/repaint/test_repaint.py +0 -169
- spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_x101_64x4d_fpn_1x_coco.py +0 -13
- spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/builder.py +0 -8
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/elevenlabs_tts/script.py +0 -197
- spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/task.py +0 -120
- spaces/Anonymous-sub/Rerender/gmflow_module/loss.py +0 -37
- spaces/Aravindan/BreedClassification/README.md +0 -12
- spaces/AriaMei/TTSdemo/modules.py +0 -390
- spaces/Ariharasudhan/YoloV5/utils/segment/__init__.py +0 -0
- spaces/Artples/Chat-with-Llama-2-70b/app.py +0 -64
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/securetransport.py +0 -921
- spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_x.py +0 -15
- spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_7mers.py +0 -103
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/_types.py +0 -10
- spaces/BilalSardar/Gpt4All/README.md +0 -12
- spaces/CHDCruze/entertainmentbybhdcruze/style.css +0 -28
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/__init__.py +0 -4
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/_static/mathjax_wikipedia.user.js +0 -30
- spaces/CVPR/LIVE/thrust/thrust/system/omp/vector.h +0 -70
- spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/config-checkpoint.py +0 -34
- spaces/CVPR/WALT/mmdet/models/dense_heads/gfl_head.py +0 -647
- spaces/CVPR/lama-example/bin/gen_mask_dataset.py +0 -130
- spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/monoscene-checkpoint.py +0 -123
- spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/rotated_fast_rcnn.py +0 -270
- spaces/Cartof/Chatbot/README.md +0 -12
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Avid Plugins Free Crack Version How to Download and Install It Safely.md
DELETED
@@ -1,32 +0,0 @@
-
-<h1>How to Download Avid Plugins Free Crack Version</h1>
-<p>If you are a music producer, composer, or audio engineer, you might be familiar with Avid plugins, a collection of high-quality audio effects and instruments that can enhance your sound and creativity. Avid plugins are compatible with various digital audio workstations (DAWs) such as Pro Tools, Logic Pro, Ableton Live, and more. However, Avid plugins are not free and require a subscription or a license to use. In this article, we will show you how to download Avid plugins free crack version, which is a modified version of the plugins that bypasses the activation process and lets you use them for free.</p>
-<h2>avid plugins free download crack</h2><br /><p><b><b>Download File</b> ››› <a href="https://byltly.com/2uKA2a">https://byltly.com/2uKA2a</a></b></p><br /><br />
-<h2>What are Avid Plugins?</h2>
-<p>Avid plugins are a set of audio plugins that offer various features and functions for music production and audio editing. Avid plugins include effects such as reverb, delay, compression, EQ, distortion, and more, as well as instruments such as synthesizers, samplers, drum machines, and more. Avid plugins are designed to work seamlessly with Avid's own DAWs such as Pro Tools and Media Composer, but they can also be used with other DAWs that support VST, AU, or AAX formats.</p>
-<h2>Why Use Avid Plugins Free Crack Version?</h2>
-<p>Avid plugins are widely used and respected by professional and amateur musicians alike for their quality and versatility. However, Avid plugins are not cheap and require a subscription or a license to use. The subscription costs $19.99 per month or $199.99 per year for access to all the plugins, while the license costs vary depending on the plugin. If you don't want to pay for the plugins, you might be tempted to use Avid plugins free crack version, which is a modified version of the plugins that removes the activation process and lets you use them for free.</p>
-<h2>How to Download Avid Plugins Free Crack Version?</h2>
-<p>There are many websites that claim to offer Avid plugins free crack version, but not all of them are reliable or safe. Some of them might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Therefore, you should be careful when downloading Avid plugins free crack version from unknown sources. Here are some steps you can follow to download Avid plugins free crack version safely:</p>
-<ol>
-<li>Download a reliable antivirus software and scan your computer for any threats.</li>
-<li>Visit a reputable website that offers Avid plugins free crack version. You can search for reviews or ratings from other users to verify the credibility of the website.</li>
-<li>Select the download link or button and save the file to your computer.</li>
-<li>Extract the file using a file compression software such as WinRAR or 7-Zip.</li>
-<li>Run the setup file and follow the instructions to install Avid plugins free crack version on your computer.</li>
-<li>Enjoy using Avid plugins free crack version for free.</li>
-</ol>
-<h2>What are the Risks of Using Avid Plugins Free Crack Version?</h2>
-<p>While using Avid plugins free crack version might seem like a good idea to save money and enjoy the full features of the plugins, it also comes with some risks and disadvantages. Here are some of them:</p>
-<p></p>
-<ul>
-<li>You might violate the copyright laws and face legal consequences.</li>
-<li>You might not get any updates or technical support from the official developers.</li>
-<li>You might experience bugs, errors, or crashes while using the plugins.</li>
-<li>You might expose your computer to viruses, malware, or spyware that can damage your system or compromise your security.</li>
-<li>You might miss out on some features or benefits that are only available in the official version of the plugins.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Avid plugins are a great collection of audio effects and instruments that can help you create amazing music and sound. However, if you want to use the full version of the plugins, you need to purchase a subscription or a license to use them. Alternatively, you can download Avid plugins free crack version, which is a modified version of the plugins that bypasses the activation process and lets you use them for free. However, using Avid plugins free crack version also comes with some risks and disadvantages, such as legal issues, security threats, performance problems, and lack of updates or support. Therefore, you should weigh the pros and cons before deciding whether to use Avid plugins free crack version or not</p> ddb901b051<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack the Core Exam 9th Edition PDF Everything You Need to Know to Pass the Radiology Board Exam.md
DELETED
@@ -1,25 +0,0 @@
-<br />
-<h1>How to Crack the Core Exam 9th Edition PDF Download</h1>
-<p>If you are preparing for the radiology board exam, you might be interested in downloading the Crack the Core Exam 9th Edition PDF. This is a two-volume set of books that covers all the topics you need to know for the exam, such as pediatrics, thoracic, GI, GU, reproductive, endocrine, nukes, cardiac, neuro, MSK, vascular, IR, mammo and more. The books are written by Prometheus Lionhart M.D., a radiologist who has passed the exam himself and has helped thousands of other candidates with his online courses and videos.</p>
-<p>In this article, we will show you how to get the Crack the Core Exam 9th Edition PDF download for free or at a discounted price. We will also give you some tips on how to use the books effectively and what other resources you can use to complement your study.</p>
-<h2>crack the core 9th edition pdf download</h2><br /><p><b><b>Download</b> ❤ <a href="https://byltly.com/2uKwD8">https://byltly.com/2uKwD8</a></b></p><br /><br />
-<h2>How to Get the Crack the Core Exam 9th Edition PDF Download</h2>
-<p>There are several ways to get the Crack the Core Exam 9th Edition PDF download. Here are some of them:</p>
-<ul>
-<li>Buy the paperback version from Amazon or other online retailers and scan it yourself. This might be time-consuming and tedious, but it will give you a high-quality PDF that you can print or read on any device. The paperback version costs around $70 per volume.</li>
-<li>Buy the eBook version from medicalebooks.org or other websites that sell digital copies of medical books. This will give you a print replica PDF that you can download instantly after payment. The eBook version costs around $12 for both volumes.</li>
-<li>Search for a free PDF download on Google or other search engines. This might be risky and illegal, as some websites might contain viruses or malware that can harm your computer or device. Some websites might also offer fake or outdated versions of the books that will not help you pass the exam.</li>
-</ul>
-<h2>How to Use the Crack the Core Exam 9th Edition PDF Effectively</h2>
-<p>Once you have the Crack the Core Exam 9th Edition PDF download, you need to use it wisely to maximize your chances of passing the exam. Here are some tips on how to do that:</p>
-<ul>
-<li>Read the books thoroughly and take notes of the key points and concepts. The books are concise and high-yield, but they also contain a lot of information that you need to memorize and understand.</li>
-<li>Test yourself with the questions and answers at the end of each chapter. The books provide hundreds of practice questions that cover all the topics in the exam. They also provide detailed explanations and references for each answer.</li>
-<li>Supplement your study with other resources such as online courses, videos, podcasts, flashcards and question banks. The books are not enough by themselves to cover everything you need to know for the exam. You need to use other sources of information and practice to reinforce your knowledge and skills.</li>
-<li>Review the books frequently and focus on your weak areas. The books are designed to help you review and refresh your memory before the exam. You should go over them again and again until you feel confident and ready.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>The Crack the Core Exam 9th Edition PDF download is a valuable resource for anyone who wants to pass the radiology board exam. It provides a comprehensive and concise review of all the topics in the exam, as well as practice questions and answers. However, it is not enough by itself to guarantee your success. You need to use it along with other resources and study methods to prepare effectively and efficiently.</p>
-<p>We hope this article has helped you find out how to get the Crack the Core Exam 9th Edition PDF download and how to use it wisely. Good luck with your exam!</p> ddb901b051<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Spotify Hacked 2022 Cmo Disfrutar de Msica Ilimitada Gratis.md
DELETED
@@ -1,40 +0,0 @@
-
-<h1>How to Download Spotify Hacked 2022 for Free</h1>
-<p>If you love music, you probably know Spotify, the most popular streaming service in the world. But did you know that you can download Spotify hacked 2022 for free and enjoy unlimited songs, playlists, podcasts and more without ads or restrictions?</p>
-<h2>descargar spotify hackeado 2022</h2><br /><p><b><b>DOWNLOAD</b> ↔ <a href="https://byltly.com/2uKvki">https://byltly.com/2uKvki</a></b></p><br /><br />
-<p>In this article, we will show you how to download Spotify hacked 2022 for free and how to install it on your device. You will also learn about the features and benefits of using Spotify hacked 2022 and how to avoid any risks or problems.</p>
-<h2>What is Spotify Hacked 2022?</h2>
-<p>Spotify hacked 2022 is a modified version of the official Spotify app that allows you to access all the premium features for free. With Spotify hacked 2022, you can:</p>
-<ul>
-<li>Listen to any song, artist, album or playlist without ads or interruptions.</li>
-<li>Download music offline and listen to it anywhere without internet connection.</li>
-<li>Skip unlimited tracks and shuffle songs as you like.</li>
-<li>Enjoy high-quality audio and sound effects.</li>
-<li>Discover new music and podcasts based on your preferences and mood.</li>
-<li>Create and share your own playlists with friends and family.</li>
-</ul>
-<p>Spotify hacked 2022 is not available on the official app store, so you need to download it from a third-party source. However, you need to be careful when choosing a source, as some of them may contain malware or viruses that can harm your device or steal your personal information.</p>
-<h2>How to Download Spotify Hacked 2022 for Free?</h2>
-<p>To download Spotify hacked 2022 for free, you need to follow these steps:</p>
-<ol>
-<li>Go to <a href="https://spotify-hacked-2022.com">https://spotify-hacked-2022.com</a>, the best and safest source for downloading Spotify hacked 2022.</li>
-<li>Click on the download button and wait for the file to be downloaded on your device.</li>
-<li>Go to your device settings and enable the option to install apps from unknown sources.</li>
-<li>Locate the downloaded file and tap on it to start the installation process.</li>
-<li>Follow the instructions on the screen and accept the permissions required by the app.</li>
-<li>Once the installation is complete, open the app and sign in with your existing Spotify account or create a new one.</li>
-<li>Enjoy Spotify hacked 2022 for free!</li>
-</ol>
-<h2>What are the Risks of Using Spotify Hacked 2022?</h2>
-<p>While using Spotify hacked 2022 can be tempting, you should also be aware of the potential risks involved. Some of them are:</p>
-<p></p>
-<ul>
-<li>Your Spotify account may be banned or suspended if the app is detected by the official servers.</li>
-<li>Your device may be infected by malware or viruses if you download the app from an unreliable source.</li>
-<li>Your personal information may be exposed or stolen by hackers or third parties if you use an unsecured network or connection.</li>
-</ul>
-<p>To avoid these risks, you should always download Spotify hacked 2022 from a trusted source, use a VPN service to hide your IP address and location, and scan your device regularly for any threats or issues.</p>
-<h2>Conclusion</h2>
-<p>Spotify hacked 2022 is a great way to enjoy all the premium features of Spotify for free. However, you should also be careful when downloading and using it, as there are some risks involved. If you follow our guide and tips, you should be able to download Spotify hacked 2022 safely and easily. Happy listening!</p> ddb901b051<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Delphi 2015.3 Keygen PATCHED-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Zip.md
DELETED
@@ -1,87 +0,0 @@
-<br />
-<h1>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h1>
-<p>If you are looking for a reliable and easy-to-use diagnostic tool for cars and trucks, you may have heard of Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip. This is a software package that allows you to use various VCI devices, such as WOW, CDP, Auto-com, MVDiag tools TCS CDP, etc., to perform diagnosis and repair on different brands of vehicles till 2020. In this article, we will show you how to download, install and activate this software, as well as some of its features and benefits.</p>
-<h2>How to Download Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>The first step is to download the software from a trusted source. You can find several links online, but make sure they are safe and virus-free. One of the links we recommend is this one: https://mega.nz/folder/hGYBTSaD#qZQHhl6xq9DPEUfFtG0fsw. This link contains both the software and the keygen for activation. You can choose between CARS 2015.R3 or Trucks 2015.R3 depending on your needs.</p>
-<h2>Delphi 2015.3 keygen-activation 2015 release 2 cdp ds150e cdp cars trucks vci zip</h2><br /><p><b><b>Download File</b> ✦✦✦ <a href="https://imgfil.com/2uxZoW">https://imgfil.com/2uxZoW</a></b></p><br /><br />
-<h2>How to Install Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>The next step is to install the software on your computer. Before you do that, make sure you turn off your Internet connection and antivirus software, as they may interfere with the installation process. Also, delete all files related to the old version of the software if you have any. Then, follow these steps:</p>
-<ul>
-<li>Copy ‘CARS 2015.R3’ or ‘Trucks 2015.R3’ to your computer.</li>
-<li>Run main.exe in ‘CARS 2015.R3’ to activate. (If you want to install truck, please run main.exe in ‘Trucks 2015.R3’)</li>
-<li>Click start.</li>
-<li>Click ‘yes’ to save ‘FileActivation’ on the desk. And then copy the ‘FileActivation’ to your supplier to activate.</li>
-<li>After getting the activated ‘FileActivation’, then please click start again.</li>
-<li>Then click ‘no’ to open the ‘FileActivation’ activated.</li>
-<li>Wait for the installation to complete, then you can enjoy it!</li>
-</ul>
-<p>Note: If you can’t run the 2015.3 keygens, please install [net-frame 4.0] from Google.</p>
-<h2>The Features and Benefits of Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a powerful and versatile software that can help you diagnose and repair various problems in your vehicles. Here are some of its features and benefits:</p>
-<ul>
-<li>It supports multi-languages, such as English, Deutsch, Francais, Hungarian, Espanol, Polish, Cesky, Dansk, Greek, Hollands, Italiano, Norsk, Romania, Russian, Srpski, Suomen Svenska, Turkish, etc.</li>
-<li>It supports multi-brands cars and trucks till 2020, such as Audi, BMW, Ford, Mercedes-Benz, Toyota, Volkswagen, Volvo, etc.</li>
-<li>It is compatible with different VCI devices, such as Wo*w Snoper/ Autocm CDP/ MVDiag/ DeIphi DS150 & TCS CDP.</li>
-<li>It can perform various functions, such as read and erase fault codes, display live data stream, perform actuator tests, program keys and immobilizers, reset service intervals and lights, etc.</li>
-<li>It has a user-friendly interface that is easy to navigate and operate.</li>
-<li>It can save you time and money by helping you fix your vehicles by yourself.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a great software package that can help you diagnose and repair your cars and trucks with ease and efficiency. It is compatible with various VCI devices and supports multi-brands vehicles till 2020. It also has many features and benefits that make it worth trying. If you want to download and install this software on your computer, just follow the steps we have provided above. We hope this article has been helpful for you.</p>
-<h2>How to Use Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>After you have installed and activated the software, you can start using it to diagnose and repair your vehicles. To do that, you need to connect your VCI device to your computer and to your vehicle's OBD port. Then, you need to launch the software and select the vehicle model and system you want to scan. The software will automatically detect the VCI device and communicate with the vehicle's ECU. You can then read and erase fault codes, view live data stream, perform actuator tests, program keys and immobilizers, reset service intervals and lights, etc. You can also print or save diagnostic reports for future reference.</p>
-<h2>The Advantages of Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip over Other Software</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is not the only software that can work with VCI devices, but it has some advantages over other software in the market. Here are some of them:</p>
-<ul>
-<li>It has a large database of vehicle models and systems, covering most of the cars and trucks till 2020.</li>
-<li>It has a fast and stable performance, with no bugs or errors.</li>
-<li>It has a simple and intuitive interface, with clear instructions and tips.</li>
-<li>It has a low price compared to other software, with no subscription or update fees.</li>
-<li>It has a good customer service and technical support, with online help and video tutorials.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a great software package that can help you diagnose and repair your cars and trucks with ease and efficiency. It is compatible with various VCI devices and supports multi-brands vehicles till 2020. It also has many features and benefits that make it worth trying. If you want to download and install this software on your computer, just follow the steps we have provided above. We hope this article has been helpful for you.</p>
-<h2>How to Troubleshoot Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>Sometimes, you may encounter some problems when using Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip. For example, you may get error messages, connection failures, software crashes, or poor performance. In this case, you need to troubleshoot the software and find out the causes and solutions. Here are some common troubleshooting tips:</p>
-<p></p>
-<ul>
-<li>Make sure you have installed the software correctly and activated it with the keygen.</li>
-<li>Make sure you have connected the VCI device to your computer and vehicle properly and securely.</li>
-<li>Make sure you have selected the right vehicle model and system in the software.</li>
-<li>Make sure you have updated the software and the VCI device firmware to the latest version.</li>
-<li>Make sure you have turned off your Internet connection and antivirus software when using the software.</li>
-<li>Make sure you have enough disk space and memory on your computer to run the software smoothly.</li>
-<li>Make sure you have closed other programs that may interfere with the software.</li>
-<li>If none of the above tips work, you can contact your supplier or customer service for further assistance.</li>
-</ul>
-<h2>The Reviews and Feedback of Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip has received many positive reviews and feedback from users who have tried it. Here are some of them:</p>
-<blockquote>"I have been using this software for a few months now and I am very satisfied with it. It works well with my Wo*w Snoper device and it can diagnose and repair most of the cars and trucks I work on. It is easy to use and has many functions. It also has a good customer service and technical support. I recommend it to anyone who needs a reliable and affordable diagnostic tool."</blockquote>
-<blockquote>"This software is amazing. It can work with different VCI devices and support multi-brands vehicles till 2020. It has a large database of vehicle models and systems, covering almost everything I need. It has a fast and stable performance, with no bugs or errors. It has a simple and intuitive interface, with clear instructions and tips. It also has a low price compared to other software, with no subscription or update fees. It is definitely worth buying."</blockquote>
-<blockquote>"I bought this software from OBD2Tuning.com and I got it within a week. The installation and activation process was easy and smooth, thanks to their guide video and keygen. The software works perfectly with my Auto-com CDP device and it can communicate with my vehicles' ECU without any problem. It can read and erase fault codes, display live data stream, perform actuator tests, program keys and immobilizers, reset service intervals and lights, etc. It also has many features and benefits that make it worth trying."</blockquote>
-<h2>Conclusion</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a great software package that can help you diagnose and repair your cars and trucks with ease and efficiency. It is compatible with various VCI devices and supports multi-brands vehicles till 2020. It also has many features and benefits that make it worth trying. If you want to download and install this software on your computer, just follow the steps we have provided above. We hope this article has been helpful for you.</p>
-<h2>How to Compare Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip with Other Software</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a software package that can work with various VCI devices and support multi-brands vehicles till 2020. However, it is not the only software package that can do that. There are other software packages in the market that claim to have similar or better functions and features. How to compare them and choose the best one for your needs? Here are some factors to consider:</p>
-<ul>
-<li>The compatibility of the software with different VCI devices and vehicle models and systems. You need to check if the software can work with your VCI device and your vehicle brand and model.</li>
-<li>The update frequency and cost of the software. You need to check how often the software updates its database and firmware, and how much it charges for the updates.</li>
-<li>The performance and stability of the software. You need to check how fast and smooth the software runs, and how often it crashes or freezes.</li>
-<li>The user interface and user experience of the software. You need to check how easy and intuitive the software is to use, and how helpful it is to guide you through the diagnosis and repair process.</li>
-<li>The customer service and technical support of the software. You need to check how responsive and professional the customer service and technical support are, and how they solve your problems or issues.</li>
-</ul>
-<h2>The Alternatives of Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a software package that can work with various VCI devices and support multi-brands vehicles till 2020. However, if you are not satisfied with this software package, or you want to try something different, you can also check out some alternatives of this software package. Here are some of them:</p>
-<ul>
-<li>Wo*w Snooper Software: This is a software package that can work with Wo*w Snooper VCI device and support multi-brands vehicles till 2020. It has similar functions and features as Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip, but it has a more colorful and modern interface.</li>
-<li>Auto-com CDP+ Software: This is a software package that can work with Auto-com CDP+ VCI device and support multi-brands vehicles till 2020. It has similar functions and features as Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip, but it has a more advanced diagnostic function that can scan all systems in one go.</li>
-<li>MVDiag Software: This is a software package that can work with MVDiag VCI device and support multi-brands vehicles till 2020. It has similar functions and features as Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip, but it has a more compact and portable design.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip is a great software package that can help you diagnose and repair your cars and trucks with ease and efficiency. It is compatible with various VCI devices and supports multi-brands vehicles till 2020. It also has many features and benefits that make it worth trying. If you want to download and install this software on your computer, just follow the steps we have provided above. We hope this article has been helpful for you.</p>
-<h2>Conclusion</h2>
-<p>In this article, we have introduced Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip, a software package that can help you diagnose and repair your cars and trucks with ease and efficiency. We have shown you how to download, install and activate this software, as well as some of its features and benefits. We have also given you some troubleshooting tips, some comparison factors, and some alternatives of this software. We hope this article has been helpful for you.</p>
-<p>If you are interested in Delphi 2015.3 Keygen-Activation 2015 Release 2 CDP DS150E CDP Cars Trucks VCI Zip, you can click on the link below to get it from a reliable source. You can also contact us if you have any questions or issues with this software. We will be happy to assist you.</p>
-<p>Thank you for reading this article and have a nice day!</p> 3cee63e6c2<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Diavolul Se Imbraca De La Prada Online Cu Subtitrare.md
DELETED
@@ -1,6 +0,0 @@
-<h2>diavolul se imbraca de la prada online cu subtitrare</h2><br /><p><b><b>DOWNLOAD</b> ✺ <a href="https://imgfil.com/2uxZeb">https://imgfil.com/2uxZeb</a></b></p><br /><br />
-
-raharvika/diavolul-se-imbraca-de-la-prada-online-cu-subtitrare. raharvika/diavolul-se-imbraca-de-la-prada-online-cu-subtitrare. By raharvika. Diavolul Se ... 1fdad05405<br />
-<br />
-<br />
-<p></p>
spaces/1line/AutoGPT/autogpt/commands/execute_code.py
DELETED
@@ -1,158 +0,0 @@
-"""Execute code in a Docker container"""
-import os
-import subprocess
-
-import docker
-from docker.errors import ImageNotFound
-
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-
-def execute_python_file(file: str) -> str:
-    """Execute a Python file in a Docker container and return the output
-
-    Args:
-        file (str): The name of the file to execute
-
-    Returns:
-        str: The output of the file
-    """
-
-    print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'")
-
-    if not file.endswith(".py"):
-        return "Error: Invalid file type. Only .py files are allowed."
-
-    file_path = path_in_workspace(file)
-
-    if not os.path.isfile(file_path):
-        return f"Error: File '{file}' does not exist."
-
-    if we_are_running_in_a_docker_container():
-        result = subprocess.run(
-            f"python {file_path}", capture_output=True, encoding="utf8", shell=True
-        )
-        if result.returncode == 0:
-            return result.stdout
-        else:
-            return f"Error: {result.stderr}"
-
-    try:
-        client = docker.from_env()
-
-        # You can replace this with the desired Python image/version
-        # You can find available Python images on Docker Hub:
-        # https://hub.docker.com/_/python
-        image_name = "python:3-alpine"
-        try:
-            client.images.get(image_name)
-            print(f"Image '{image_name}' found locally")
-        except ImageNotFound:
-            print(f"Image '{image_name}' not found locally, pulling from Docker Hub")
-            # Use the low-level API to stream the pull response
-            low_level_client = docker.APIClient()
-            for line in low_level_client.pull(image_name, stream=True, decode=True):
-                # Print the status and progress, if available
-                status = line.get("status")
-                progress = line.get("progress")
-                if status and progress:
-                    print(f"{status}: {progress}")
-                elif status:
-                    print(status)
-
-        container = client.containers.run(
-            image_name,
-            f"python {file}",
-            volumes={
-                os.path.abspath(WORKSPACE_PATH): {
-                    "bind": "/workspace",
-                    "mode": "ro",
-                }
-            },
-            working_dir="/workspace",
-            stderr=True,
-            stdout=True,
-            detach=True,
-        )
-
-        container.wait()
-        logs = container.logs().decode("utf-8")
-        container.remove()
-
-        # print(f"Execution complete. Output: {output}")
-        # print(f"Logs: {logs}")
-
-        return logs
-
-    except docker.errors.DockerException as e:
-        print(
-            "Could not run the script in a container. If you haven't already, please install Docker https://docs.docker.com/get-docker/"
-        )
-        return f"Error: {str(e)}"
-
-    except Exception as e:
-        return f"Error: {str(e)}"
-
-
-def execute_shell(command_line: str) -> str:
-    """Execute a shell command and return the output
-
-    Args:
-        command_line (str): The command line to execute
-
-    Returns:
-        str: The output of the command
-    """
-    current_dir = os.getcwd()
-    # Change dir into workspace if necessary
-    if str(WORKSPACE_PATH) not in current_dir:
-        os.chdir(WORKSPACE_PATH)
-
-    print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
-    result = subprocess.run(command_line, capture_output=True, shell=True)
-    output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
-    # Change back to whatever the prior working dir was
-
-    os.chdir(current_dir)
-
-    return output
-
-
-def execute_shell_popen(command_line) -> str:
-    """Execute a shell command with Popen and returns an english description
-    of the event and the process id
-
-    Args:
-        command_line (str): The command line to execute
-
-    Returns:
-        str: Description of the fact that the process started and its id
-    """
-    current_dir = os.getcwd()
-    # Change dir into workspace if necessary
-    if str(WORKSPACE_PATH) not in current_dir:
-        os.chdir(WORKSPACE_PATH)
-
-    print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
-    do_not_show_output = subprocess.DEVNULL
-    process = subprocess.Popen(
-        command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output
-    )
-
-    # Change back to whatever the prior working dir was
-
-    os.chdir(current_dir)
-
-    return f"Subprocess started with PID:'{str(process.pid)}'"
-
-
-def we_are_running_in_a_docker_container() -> bool:
-    """Check if we are running in a Docker container
-
-    Returns:
-        bool: True if we are running in a Docker container, False otherwise
-    """
-    return os.path.exists("/.dockerenv")
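For context, the deleted module above exposes three entry points (execute_python_file, execute_shell, and execute_shell_popen) plus the we_are_running_in_a_docker_container check, which other AutoGPT commands call to run code inside the agent workspace. Below is a minimal, hypothetical usage sketch, not part of this commit: the import path is assumed from the file's location shown in the diff, and the script name "hello.py" is purely illustrative.

# Hypothetical caller of the deleted helpers; the import path is assumed from
# this diff only, and "hello.py" is an illustrative file inside the workspace.
from autogpt.commands.execute_code import (
    execute_python_file,
    execute_shell,
    we_are_running_in_a_docker_container,
)

if __name__ == "__main__":
    # Runs the script in a python:3-alpine container (or directly, if already inside Docker).
    print(execute_python_file("hello.py"))
    # Runs a shell command from the workspace directory and returns its stdout/stderr.
    print(execute_shell("ls -la"))
    # True only when the /.dockerenv marker file exists.
    print(we_are_running_in_a_docker_container())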
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/A to Z Bhakti Song MP3 Download Free Devotional Music from Pagalworld.md
DELETED
@@ -1,169 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>A to Z Bhakti Song MP3 Free Download Pagalworld</h1>
|
3 |
-
<p>If you are a fan of devotional songs or bhajans, you might be looking for a way to download them for free. One of the popular websites that offer free mp3 downloads of bhakti songs is Pagalworld. But what is bhakti song and what is Pagalworld? How can you download a to z bhakti song mp3 free from Pagalworld? Is it safe and legal to do so? In this article, we will answer these questions and more. Read on to find out.</p>
|
4 |
-
<h2>a to z bhakti song mp3 free download pagalworld</h2><br /><p><b><b>Download File</b> ✺✺✺ <a href="https://urlin.us/2uSTDx">https://urlin.us/2uSTDx</a></b></p><br /><br />
|
5 |
-
<h2>What is Bhakti Song?</h2>
|
6 |
-
<p>Bhakti song or bhajan is a type of devotional song that expresses love, faith, and devotion to a deity or a guru. It is a form of music and art that developed during the Bhakti movement, which was a religious and social reform movement that emerged in India between the 8th and 17th centuries. Bhakti song is usually sung in a group, with one or more lead singers, accompanied by musical instruments such as tabla, harmonium, dholak, or kartals. Bhakti song can be sung in any language, but Hindi, Sanskrit, Gujarati, Marathi, Bengali, Punjabi, Tamil, Telugu, Kannada, Malayalam, and Rajasthani are some of the common languages used.</p>
|
7 |
-
<h3>The meaning and origin of Bhakti Song</h3>
|
8 |
-
<p>The word bhajan or bhajana comes from the Sanskrit root bhaj, which means to revere, share, partake, or belong to. Thus, bhajan means an act of reverence or worship, or a way of sharing one's feelings with God or a guru. According to some scholars, the origin of bhajan can be traced back to the Vedic hymns and the Upanishads, which are ancient scriptures that contain philosophical and spiritual teachings. However, others argue that bhajan emerged as a distinct genre during the medieval period, when various saints and poets composed songs in praise of various deities such as Rama, Krishna, Shiva, Durga, Ganesha, Hanuman, etc. Some of the famous bhajan composers include Kabir, Tulsidas, Surdas, Mirabai, Tukaram, Namdev, Narsi Mehta, Guru Nanak Dev Ji, Vallabhacharya, Chaitanya Mahaprabhu, Ramananda Sagar etc.</p>
|
9 |
-
<h3>The types and genres of Bhakti Song</h3>
|
10 |
-
<p>Bhakti song can be classified into different types and genres based on various criteria such as the theme, the style, the tradition, the region etc. Some of the common types and genres are: </p>
|
11 |
-
<ul>
|
12 |
-
<li>Nirguni: These are songs that focus on the formless aspect of God or the supreme reality. They are often sung by saints who follow the path of knowledge or jnana. Examples include Kabir's songs.</li>
|
13 |
-
<li>Saguni: These are songs that focus on the personal aspect of God or the various incarnations and manifestations of God. They are often sung by devotees who follow the path of love or bhakti. Examples include Tulsidas's Ramcharitmanas.</li>
|
14 |
-
<li>Gorakhanathi: These are songs that are influenced by the Nath sect of yogis. They are often sung by yogis who practice hatha yoga and tantra. Examples include Gorakhnath's songs.</li>
|
15 |
-
<li>Bhakti geet: These are songs that are composed in modern times by various poets and singers. They are often influenced by the classical and folk music of India. Examples include Anup Jalota's songs.</li>
|
16 |
-
</ul>
|
17 |
-
<p>Besides these, there are also other types and genres of bhakti song such as Dhrupad, Kirtan, Qawwali, Abhang, Bhajan, Baul, etc. that are popular in different regions and traditions of India.</p>
|
18 |
-
<h3>The benefits and significance of Bhakti Song</h3>
|
19 |
-
<p>Bhakti song is not only a form of entertainment but also a way of spiritual practice and expression. Some of the benefits and significance of bhakti song are: </p>
|
20 |
-
<p>a to z bhakti song mp3 free download pagalworld 2023<br />
|
21 |
-
a to z bhakti song mp3 free download pagalworld hindi<br />
|
22 |
-
a to z bhakti song mp3 free download pagalworld dj<br />
|
23 |
-
a to z bhakti song mp3 free download pagalworld new<br />
|
24 |
-
a to z bhakti song mp3 free download pagalworld online<br />
|
25 |
-
a to z bhakti song mp3 free download pagalworld latest<br />
|
26 |
-
a to z bhakti song mp3 free download pagalworld telugu<br />
|
27 |
-
a to z bhakti song mp3 free download pagalworld tamil<br />
|
28 |
-
a to z bhakti song mp3 free download pagalworld marathi<br />
|
29 |
-
a to z bhakti song mp3 free download pagalworld bhojpuri<br />
|
30 |
-
a to z bhakti song mp3 free download pagalworld kannada<br />
|
31 |
-
a to z bhakti song mp3 free download pagalworld gujarati<br />
|
32 |
-
a to z bhakti song mp3 free download pagalworld punjabi<br />
|
33 |
-
a to z bhakti song mp3 free download pagalworld malayalam<br />
|
34 |
-
a to z bhakti song mp3 free download pagalworld odia<br />
|
35 |
-
a to z bhakti song mp3 free download pagalworld bengali<br />
|
36 |
-
a to z bhakti song mp3 free download pagalworld rajasthani<br />
|
37 |
-
a to z bhakti song mp3 free download pagalworld haryanvi<br />
|
38 |
-
a to z bhakti song mp3 free download pagalworld assamese<br />
|
39 |
-
a to z bhakti song mp3 free download pagalworld nepali<br />
|
40 |
-
a to z bhakti song mp3 free download pagalworld urdu<br />
|
41 |
-
a to z bhakti song mp3 free download pagalworld sanskrit<br />
|
42 |
-
a to z bhakti song mp3 free download pagalworld maithili<br />
|
43 |
-
a to z bhakti song mp3 free download pagalworld garhwali<br />
|
44 |
-
a to z bhakti song mp3 free download pagalworld kashmiri<br />
|
45 |
-
a to z bhakti song mp3 free download pagalworld sindhi<br />
|
46 |
-
a to z bhakti song mp3 free download pagalworld manipuri<br />
|
47 |
-
a to z bhakti song mp3 free download pagalworld konkani<br />
|
48 |
-
a to z bhakti song mp3 free download pagalworld dogri<br />
|
49 |
-
a to z bhakti song mp3 free download pagalworld mithun chakraborty<br />
|
50 |
-
a to z bhakti song mp3 free download pagalworld anuradha paudwal<br />
|
51 |
-
a to z bhakti song mp3 free download pagalworld gulshan kumar<br />
|
52 |
-
a to z bhakti song mp3 free download pagalworld lata mangeshkar<br />
|
53 |
-
a to z bhakti song mp3 free download pagalworld hariharan<br />
|
54 |
-
a to z bhakti song mp3 free download pagalworld sonu nigam<br />
|
55 |
-
a to z bhakti song mp3 free download pagalworld shreya ghoshal<br />
|
56 |
-
a to z bhakti song mp3 free download pagalworld kumar sanu<br />
|
57 |
-
a to z bhakti song mp3 free download pagalworld udit narayan<br />
|
58 |
-
a to z bhakti song mp3 free download pagalworld alka yagnik<br />
|
59 |
-
a to z bhakti song mp3 free download pagalworld sadhana sargam<br />
|
60 |
-
a to z bhakti song mp3 free download pagalworld suresh wadkar<br />
|
61 |
-
a to z bhakti song mp3 free download pagalworld jagjit singh<br />
|
62 |
-
a to z bhakti song mp3 free download pagalworld pankaj udhas<br />
|
63 |
-
a to z bhakti song mp3 free download pagalworld anup jalota<br />
|
64 |
-
a to z bhakti song mp3 free download pagalworld ravi shankar</p>
|
65 |
-
<ul>
|
66 |
-
<li>It helps to cultivate a sense of devotion and love for God or a guru.</li>
|
67 |
-
<li>It helps to purify the mind and heart from negative emotions and thoughts.</li>
|
68 |
-
<li>It helps to enhance the concentration and meditation skills.</li>
|
69 |
-
<li>It helps to create a positive and peaceful atmosphere in the surroundings.</li>
|
70 |
-
<li>It helps to connect with the divine energy and grace.</li>
|
71 |
-
<li>It helps to inspire and uplift the listeners and singers.</li>
|
72 |
-
</ul>
|
73 |
-
<h2>What is Pagalworld?</h2>
|
74 |
-
<p>Pagalworld is a website that offers free mp3 downloads of various songs, including bhakti songs. It is one of the most visited and popular websites in India for downloading music. Pagalworld claims to provide high-quality mp3 files that can be easily downloaded on any device. Pagalworld also offers other services such as ringtones, wallpapers, videos, games, etc. </p>
|
75 |
-
<h3>The features and services of Pagalworld</h3>
|
76 |
-
<p>Some of the features and services of Pagalworld are: </p>
|
77 |
-
<ul>
|
78 |
-
<li>It has a user-friendly and simple interface that allows easy navigation and search.</li>
|
79 |
-
<li>It has a large and updated collection of songs from various genres, languages, artists, albums, etc.</li>
|
80 |
-
<li>It has a fast and reliable download speed that does not require any registration or subscription.</li>
|
81 |
-
<li>It has a mobile-friendly version that can be accessed on any smartphone or tablet.</li>
|
82 |
-
<li>It has a feedback and request section where users can share their opinions and suggestions.</li>
|
83 |
-
</ul>
|
84 |
-
<h3>The advantages and disadvantages of Pagalworld</h3>
|
85 |
-
<p>Some of the advantages and disadvantages of Pagalworld are: </p>
|
86 |
-
<table>
|
87 |
-
<tr><th>Advantages</th><th>Disadvantages</th></tr>
|
88 |
-
<tr><td>It is free and easy to use.</td><td>It is illegal and unethical to download pirated music.</td></tr>
|
89 |
-
<tr><td>It has a wide range of songs to choose from.</td><td>It has a low quality and authenticity of songs.</td></tr>
|
90 |
-
<tr><td>It saves time and money for music lovers.</td><td>It violates the rights and revenues of the original creators and owners of music.</td></tr>
|
91 |
-
<tr><td>It provides entertainment and enjoyment for users.</td><td>It exposes users to malware and viruses that can harm their devices.</td></tr>
|
92 |
-
</table>
|
93 |
-
<h3>The legal and ethical issues of Pagalworld</h3>
|
94 |
-
<p>Pagalworld is not a legal or ethical website to download music from. It is involved in piracy, which is the unauthorized copying, distribution, or use of someone else's intellectual property without their permission or consent. Piracy is a crime that can result in legal actions such as fines, lawsuits, or imprisonment. Piracy also harms the music industry by reducing the income and incentives for the artists, producers, labels, etc. who invest their time, money, and effort in creating original music. Piracy also deprives the users of the quality and satisfaction of listening to genuine music. Therefore, it is advisable to avoid using Pagalworld or any other similar website that promotes piracy. Instead, it is better to use legal and ethical sources such as streaming platforms, online stores, or official websites that respect the rights and interests of both the creators and consumers of music.</p>
|
95 |
-
<h2>How to Download A to Z Bhakti Song MP3 Free from Pagalworld?</h2>
|
96 |
-
<p>If you still want to download a to z bhakti song mp3 free from Pagalworld despite knowing its legal and ethical issues, you can follow these steps and tips:</p>
|
97 |
-
<h3>The h3>The steps and tips for downloading Bhakti Song MP3 from Pagalworld</h3>
|
98 |
-
<ol>
|
99 |
-
<li>Go to the official website of Pagalworld at https://pagalworld.com/.</li>
|
100 |
-
<li>On the homepage, you will see various categories and options such as Latest Updates, Top Songs, Trending Songs, etc. You can browse through them or use the search bar to find the bhakti song you want.</li>
|
101 |
-
<li>Once you find the bhakti song you want, click on it to open its page. You will see the details and information about the song such as the name, artist, album, duration, size, etc.</li>
|
102 |
-
<li>On the same page, you will also see a download button or link. Click on it to start the download process. You may have to choose the quality or format of the mp3 file before downloading.</li>
|
103 |
-
<li>Wait for the download to complete and save the mp3 file on your device. You can then play it using any media player or transfer it to any other device.</li>
|
104 |
-
</ol>
|
105 |
-
<p>Some tips for downloading bhakti song mp3 from Pagalworld are:</p>
|
106 |
-
<ul>
|
107 |
-
<li>Make sure you have a good internet connection and enough storage space on your device.</li>
|
108 |
-
<li>Use a reliable and secure browser and antivirus software to protect your device from malware and viruses.</li>
|
109 |
-
<li>Check the reviews and ratings of the songs before downloading them to ensure their quality and authenticity.</li>
|
110 |
-
<li>Be careful of pop-ups, ads, or redirects that may appear on the website. They may contain harmful or inappropriate content or links.</li>
|
111 |
-
<li>Respect the rights and interests of the original creators and owners of the music. Do not share or distribute the downloaded music without their permission or consent.</li>
|
112 |
-
</ul>
|
113 |
-
<h3>The best Bhakti Song MP3 collections on Pagalworld</h3>
|
114 |
-
<p>Pagalworld has a huge and diverse collection of bhakti song mp3 that can cater to different tastes and preferences of users. Some of the best bhakti song mp3 collections on Pagalworld are: </p>
|
115 |
-
<ul>
|
116 |
-
<li>A to Z Bhajan Mp3 Songs: This is a collection of bhajans from various artists, albums, languages, and genres. You can find bhajans of Rama, Krishna, Shiva, Durga, Ganesha, Hanuman, etc. in this collection.</li>
|
117 |
-
<li>Bhakti Sangeet: This is a collection of devotional songs that are influenced by classical and folk music of India. You can find songs of Anup Jalota, Jagjit Singh, Hari Om Sharan, Lata Mangeshkar, etc. in this collection.</li>
|
118 |
-
<li>Bhagavad Gita Mp3 Songs: This is a collection of songs that are based on the Bhagavad Gita, which is one of the most sacred and influential scriptures in Hinduism. You can find songs of Swami Chinmayananda, Swami Prabhupada, Swami Vivekananda, etc. in this collection.</li>
|
119 |
-
<li>Guru Nanak Dev Ji Mp3 Songs: This is a collection of songs that are dedicated to Guru Nanak Dev Ji, who is the founder and first guru of Sikhism. You can find songs of Bhai Harjinder Singh Ji, Bhai Ravinder Singh Ji, Bhai Joginder Singh Ji Riar etc. in this collection.</li>
|
120 |
-
<li>Mata Ke Bhajan: This is a collection of songs that are devoted to Mata or Mother Goddess in various forms such as Durga, Kali, Lakshmi, Saraswati, etc. You can find songs of Narendra Chanchal, Sonu Nigam, Anuradha Paudwal, etc. in this collection.</li>
|
121 |
-
</ul>
|
122 |
-
<h3>The alternatives and recommendations for Pagalworld</h3>
|
123 |
-
<p>As we have seen, Pagalworld is not a legal or ethical website to download music from. Therefore, it is better to look for some alternatives and recommendations that can provide you with a to z bhakti song mp3 free download without compromising the quality and legality of the music. Some of the alternatives and recommendations are: </p>
|
124 |
-
<ul>
|
125 |
-
<li>Gaana: This is a popular and legal streaming platform that offers a wide range of music, including bhakti songs. You can listen to bhakti songs online or download them offline with a premium subscription. You can also create your own playlists and share them with others.</li>
|
126 |
-
<li>Bhakti World: This is a dedicated website that provides free mp3 downloads of bhakti songs from various artists, albums, languages, and genres. You can also find lyrics, videos, wallpapers, ringtones, etc. related to bhakti songs on this website.</li>
|
127 |
-
<li>Bhajan Radio: This is an online radio station that plays bhakti songs 24/7. You can listen to bhakti songs live or on-demand on this website. You can also request your favorite bhakti songs and dedicate them to your loved ones.</li>
|
128 |
-
<li>YouTube: This is a well-known and widely used video-sharing platform that also offers a lot of music, including bhakti songs. You can watch and listen to bhakti songs online or download them offline with a YouTube Premium subscription. You can also subscribe to various channels and playlists that feature bhakti songs.</li>
|
129 |
-
</ul>
|
130 |
-
<h2>Conclusion</h2>
|
131 |
-
<p>In this article, we have learned about what is bhakti song and what is Pagalworld. We have also learned how to download a to z bhakti song mp3 free from Pagalworld and what are the benefits and drawbacks of doing so. We have also learned about some alternatives and recommendations for Pagalworld that can provide us with legal and ethical sources of bhakti song mp3. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to share them with us in the comments section below.</p>
|
132 |
-
<h2>FAQs</h2>
|
133 |
-
<p>Here are some frequently asked questions about a to z bhakti song mp3 free download Pagalworld:</p>
|
134 |
-
<ol>
|
135 |
-
<li>Q: Is it safe to download music from Pagalworld?</li>
|
136 |
-
<li>A: No, it is not safe to download music from Pagalworld as it may contain malware or viruses that can harm your device. It is also illegal and unethical to download pirated music from Pagalworld as it violates the rights and revenues of the original creators and owners of music.</li>
|
137 |
-
<li>Q: How can I download music legally and ethically?</li>
|
138 |
-
<li>A: You can download music legally and ethically by using legal and ethical sources such as streaming platforms, online stores, or official websites that respect the rights and interests of both the creators and consumers of music.</li>
|
139 |
-
<li>Q: What are some of the best bhakti songs to listen to?</li>
|
140 |
-
<li>A: Some of the best bhakti songs to listen to are:</li>
|
141 |
-
<ul>
|
142 |
-
<li>Achyutam Keshavam by Vikram Hazra</li>
|
143 |
-
<li>Jai Ganesh Deva by Anuradha Paudwal</li>
|
144 |
-
<li>Mere Ghar Ke Aage Sainath by Paras Jain</li>
|
145 |
-
<li>Om Jai Jagdish Hare by Lata Mangeshkar</li>
|
146 |
-
<li>Shri Ramchandra Kripalu Bhajman by Hari Om Sharan</li>
|
147 |
-
</ul>
|
148 |
-
<li>Q: What are some of the benefits of listening to bhakti songs?</li>
|
149 |
-
<li>A: Some of the benefits of listening to bhakti songs are:</li>
|
150 |
-
<ul>
|
151 |
-
<li>They help to cultivate a sense of devotion and love for God or a guru.</li>
|
152 |
-
<li>They help to purify the mind and heart from negative emotions and thoughts.</li>
|
153 |
-
<li>They help to enhance the concentration and meditation skills.</li>
|
154 |
-
<li>They help to create a positive and peaceful atmosphere in the surroundings.</li>
|
155 |
-
<li>They help to connect with the divine energy and grace.</li>
|
156 |
-
<li>They help to inspire and uplift the listeners and singers.</li>
|
157 |
-
</ul>
|
158 |
-
<li>Q: How can I improve my singing skills for bhakti songs?</li>
|
159 |
-
<li>A: You can improve your singing skills for bhakti songs by following these tips:</li>
|
160 |
-
<ul>
|
161 |
-
<li>Practice regularly and consistently.</li>
|
162 |
-
<li>Learn the lyrics and meanings of the bhakti songs.</li>
|
163 |
-
<li>Listen to the original or popular versions of the bhakti songs and try to imitate or emulate them.</li>
|
164 |
-
<li>Use a microphone, a speaker, or a recording device to hear and improve your voice quality and clarity.</li>
|
165 |
-
<li>Seek feedback and guidance from a teacher, a friend, or an expert.</li>
|
166 |
-
<li>Enjoy and express your feelings while singing the bhakti songs.</li>
|
167 |
-
</ul></ol>
|
168 |
-
<br />
|
169 |
-
<br />
|
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawlhalla The Best Free Platform Fighting Game - No Steam Installation Required.md
DELETED
@@ -1,100 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>How to Download and Play Brawlhalla for Free Without Steam</h1>
|
3 |
-
<p>Brawlhalla is one of the most popular and fun platform fighting games out there. It's free to play, supports cross-play, and has over 50 unique characters to choose from. But what if you don't want to use Steam to download and play it? Is there a way to get Brawlhalla for free without Steam? The answer is yes, and in this article, we'll show you how.</p>
|
4 |
-
<h2>What is Brawlhalla?</h2>
|
5 |
-
<p>Brawlhalla is a free 2D platform fighting game that supports up to 8 players online or locally. You can play with your friends or against other players from around the world in various modes, such as casual free-for-alls, ranked matches, custom rooms, tournaments, and more. You can also customize your legend with different skins, colors, taunts, and weapons.</p>
|
6 |
-
<h2>brawlhalla free download without steam</h2><br /><p><b><b>DOWNLOAD</b> ->->->-> <a href="https://urlin.us/2uSZFa">https://urlin.us/2uSZFa</a></b></p><br /><br />
|
7 |
-
<h3>A free 2D platform fighting game with cross-play support</h3>
|
8 |
-
<p>Brawlhalla is free to play, which means you don't have to pay anything to download and play it. You also don't need a subscription or a premium account to access all the features and content of the game. Brawlhalla also supports cross-play, which means you can play with anyone, anywhere, regardless of their platform or device. You can play Brawlhalla on PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android.</p>
|
9 |
-
<h3>A game with over 50 legends and frequent updates</h3>
|
10 |
-
<p>Brawlhalla has over 50 legends to choose from, each with their own unique abilities, weapons, and stats. You can unlock new legends by playing the game or by purchasing them with gold or mammoth coins. Brawlhalla also has frequent updates that add new legends, skins, weapons, maps, modes, events, and more. Some of the updates are based on popular franchises or celebrities, such as Shovel Knight, WWE, Tomb Raider, The Walking Dead, Steven Universe, Hellboy, Adventure Time, Rayman, Kung Fu Panda, and more.</p>
|
11 |
-
<h3>A game with various modes and features</h3>
|
12 |
-
<p>Brawlhalla has a variety of modes and features that make it fun and engaging for different types of players. Some of the modes include:</p>
|
13 |
-
<p>brawlhalla free to play cross platform<br />
|
14 |
-
brawlhalla free 2d platform fighting game<br />
|
15 |
-
brawlhalla free online or local multiplayer<br />
|
16 |
-
brawlhalla free epic games store download<br />
|
17 |
-
brawlhalla free ps4 ps5 xbox switch download<br />
|
18 |
-
brawlhalla free ios android mobile download<br />
|
19 |
-
brawlhalla free ubisoft connect download<br />
|
20 |
-
brawlhalla free casual free for alls<br />
|
21 |
-
brawlhalla free ranked matches online<br />
|
22 |
-
brawlhalla free private room with friends<br />
|
23 |
-
brawlhalla free frequent updates and events<br />
|
24 |
-
brawlhalla free over 50 legends to choose<br />
|
25 |
-
brawlhalla free history's greatest warriors<br />
|
26 |
-
brawlhalla free epic platform fighter<br />
|
27 |
-
brawlhalla free test of strength and skill<br />
|
28 |
-
brawlhalla free no pay to win mechanics<br />
|
29 |
-
brawlhalla free millions of players worldwide<br />
|
30 |
-
brawlhalla free customizable skins and colors<br />
|
31 |
-
brawlhalla free unique weapons and abilities<br />
|
32 |
-
brawlhalla free fun game modes and maps<br />
|
33 |
-
brawlhalla free crossover events and characters<br />
|
34 |
-
brawlhalla free esports tournaments and prizes<br />
|
35 |
-
brawlhalla free community creations and mods<br />
|
36 |
-
brawlhalla free training mode and tutorials<br />
|
37 |
-
brawlhalla free leaderboards and stats tracking<br />
|
38 |
-
brawlhalla free controller support and settings<br />
|
39 |
-
brawlhalla free soundtrack and sound effects<br />
|
40 |
-
brawlhalla free fan art and merchandise<br />
|
41 |
-
brawlhalla free developer blog and news<br />
|
42 |
-
brawlhalla free discord server and social media<br />
|
43 |
-
how to play brawlhalla for free without steam<br />
|
44 |
-
how to download brawlhalla for free without steam<br />
|
45 |
-
how to install brawlhalla for free without steam<br />
|
46 |
-
how to update brawlhalla for free without steam<br />
|
47 |
-
how to uninstall brawlhalla for free without steam<br />
|
48 |
-
how to get brawlhalla for free without steam on pc<br />
|
49 |
-
how to get brawlhalla for free without steam on mac<br />
|
50 |
-
how to get brawlhalla for free without steam on linux<br />
|
51 |
-
how to get brawlhalla for free without steam on windows 10<br />
|
52 |
-
how to get brawlhalla for free without steam on laptop<br />
|
53 |
-
is brawlhalla really free without steam?<br />
|
54 |
-
is brawlhalla safe to download without steam?<br />
|
55 |
-
is brawlhalla better with or without steam?<br />
|
56 |
-
is brawlhalla cross play without steam?<br />
|
57 |
-
is brawlhalla available without steam?<br />
|
58 |
-
can i play brawlhalla offline without steam?<br />
|
59 |
-
can i play brawlhalla with friends without steam?<br />
|
60 |
-
can i transfer my progress from steam to non steam version of brawlhalla?<br />
|
61 |
-
can i use my steam controller with non steam version of brawlhalla?</p>
|
62 |
-
<ul>
|
63 |
-
<li>Free-for-all: A casual mode where up to 8 players fight each other until time runs out.</li>
|
64 |
-
<li>Ranked: A competitive mode where you can climb the ladder and earn rewards by winning matches.</li>
|
65 |
-
<li>Custom room: A private mode where you can invite your friends or other players to a custom match with your own rules.</li>
|
66 |
-
<li>Tournament: A mode where you can join or create tournaments with brackets and prizes.</li>
|
67 |
-
<li>Training room: A mode where you can practice your skills and test different legends and weapons.</li>
|
68 |
-
<li>Brawl of the Week: A mode that changes every week with a different theme and challenge.</li>
|
69 |
-
</ul>
|
70 |
-
<p>Brawlhalla also has other features that enhance the gameplay experience, such as:</p>
|
71 |
-
<ul>
|
72 |
-
<li>Cross-inventory: A feature that allows you to use your purchased items across all platforms.</li>
|
73 |
-
<li>Battle pass: A feature that allows you to earn rewards by completing missions and leveling up your pass.</li>
|
74 |
-
<li>You might miss out on the updates, features, and content of the game or the platform.</li>
|
75 |
-
<li>You might face legal issues or penalties for piracy or infringement.</li>
|
76 |
-
</ul>
|
77 |
-
<p>Therefore, it is better to download Brawlhalla from the official website or the Epic Games Store, as they are safe, reliable, and authorized sources.</p>
|
78 |
-
<h2>How to Play Brawlhalla Without Steam?</h2>
|
79 |
-
<p>Once you have downloaded Brawlhalla without Steam, you can play it by following these steps:</p>
|
80 |
-
<h3>Launch the game from your desktop or start menu</h3>
|
81 |
-
<p>Depending on your platform or device, you can launch Brawlhalla from your desktop or start menu by clicking on the game icon. If you downloaded it from the Epic Games Store, you can also launch it from the Epic Games Launcher.</p>
|
82 |
-
<h3>Create or log in to your Ubisoft account</h3>
|
83 |
-
<p>Brawlhalla is developed by Blue Mammoth Games and published by Ubisoft, which means you need a Ubisoft account to play it. If you don't have one, you can create one for free by following the instructions on the screen. If you already have one, you can log in with your email and password. You can also link your Ubisoft account with your Epic Games account if you downloaded it from the Epic Games Store.</p>
|
84 |
-
<h3>Choose your region and preferred settings</h3>
|
85 |
-
<p>After logging in to your Ubisoft account, you can choose your region from the list of available servers. You can also adjust your preferred settings, such as graphics, sound, controls, language, etc. You can change these settings anytime from the options menu.</p>
|
86 |
-
<h2>Conclusion</h2>
|
87 |
-
<p>Brawlhalla is a free 2D platform fighting game that you can download and play without Steam. You can download it from the official website or the Epic Games Store, which are safe and reliable sources. You can also play it on any platform or device that supports the game, such as PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android. You can also enjoy cross-play, cross-inventory, and exclusive content or offers by downloading Brawlhalla without Steam. Brawlhalla is a fun and engaging game that has over 50 legends, various modes and features, and frequent updates. If you are looking for a platform fighting game that is free and easy to play, Brawlhalla is a great choice for you.</p>
|
88 |
-
<h2>FAQs</h2>
|
89 |
-
<h4>Is Brawlhalla free to play?</h4>
|
90 |
-
<p>Yes, Brawlhalla is free to play. You don't have to pay anything to download and play it. You also don't need a subscription or a premium account to access all the features and content of the game.</p>
|
91 |
-
<h4>Is Brawlhalla cross-play?</h4>
|
92 |
-
<p>Yes, Brawlhalla supports cross-play. You can play with anyone, anywhere, regardless of their platform or device. You can play Brawlhalla on PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android.</p>
|
93 |
-
<h4>How do I get new legends in Brawlhalla?</h4>
|
94 |
-
<p>You can unlock new legends in Brawlhalla by playing the game or by purchasing them with gold or mammoth coins. Gold is the in-game currency that you can earn by playing matches or completing missions. Mammoth coins are the premium currency that you can buy with real money or get from promotions or events.</p>
|
95 |
-
<h4>How do I get skins and weapons in Brawlhalla?</h4>
|
96 |
-
<p>You can customize your legend with different skins and weapons in Brawlhalla. You can buy skins and weapons with mammoth coins from the store or get them from chests or battle passes. You can also unlock colors and taunts with gold or mammoth coins.</p>
|
97 |
-
<h4>How do I join or create a tournament in Brawlhalla?</h4>
|
98 |
-
<p>You can join or create a tournament in Brawlhalla by going to the tournament mode from the main menu. You can browse the list of available tournaments or create your own tournament with brackets and prizes. You can also watch and participate in official Brawlhalla tournaments and events from the esports feature.</p><br />
|
99 |
-
<br />
|
100 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Download M Recorder - A Smart and Convenient App for Recording Sound.md
DELETED
@@ -1,128 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>M Recorder APK Download: How to Record Your Screen Easily and Conveniently</h1>
|
3 |
-
<p>Do you want to record your screen for various purposes, such as gameplay, video, live broadcasting, tutorial, or presentation? If yes, then you need a reliable and user-friendly screen recorder app that can help you capture your screen with high quality and minimal hassle. One such app is M Recorder, which is a popular and powerful screen recorder app for Android devices. In this article, we will tell you what M Recorder is, how to download and install it, how to use it, and what benefits it offers. We will also compare it with some other screen recorder apps that you can try as alternatives. So, let's get started!</p>
|
4 |
-
<h2>What is M Recorder?</h2>
|
5 |
-
<p>M Recorder is a screen recorder app developed by RSUPPORT Co., Ltd., which is a leading company in the field of remote support and mobile solutions. M Recorder allows you to record your screen with one click, and start recording gameplay, video, and live broadcasting easily and conveniently. You can also edit your recorded videos with various tools, such as trim, cut, add music, add text, etc. You can also share your videos with your friends or social media platforms directly from the app.</p>
|
6 |
-
<h2>m recorder apk download</h2><br /><p><b><b>Download Zip</b> 🔗 <a href="https://jinyurl.com/2uNLWN">https://jinyurl.com/2uNLWN</a></b></p><br /><br />
|
7 |
-
<h3>Features of M Recorder</h3>
|
8 |
-
<p>M Recorder has many features that make it a great choice for screen recording. Some of these features are:</p>
|
9 |
-
<ul>
|
10 |
-
<li>It supports various resolutions, frame rates, and bit rates for recording.</li>
|
11 |
-
<li>It has a face cam feature that allows you to record your face and voice along with your screen.</li>
|
12 |
-
<li>It has a countdown timer feature that lets you set a delay before starting the recording.</li>
|
13 |
-
<li>It has a floating button feature that enables you to control the recording easily from anywhere on the screen.</li>
|
14 |
-
<li>It has a drawing feature that allows you to draw on the screen while recording.</li>
|
15 |
-
<li>It has a clean mode feature that hides the floating button and the notification bar while recording.</li>
|
16 |
-
</ul>
|
17 |
-
<h3>How to Download and Install M Recorder APK</h3>
|
18 |
-
<p>If you want to download and install M Recorder APK on your Android device, you can follow these simple steps:</p>
|
19 |
-
<ol>
|
20 |
-
<li>Go to the Google Play Store and search for "Mobizen Screen Recorder". Alternatively, you can use this link: [Mobizen Screen Recorder - Apps on Google Play].</li>
|
21 |
-
<li>Tap on the "Install" button and wait for the app to download and install on your device.</li>
|
22 |
-
<li>Once the installation is complete, open the app and grant the necessary permissions for recording your screen.</li>
|
23 |
-
<li>You are now ready to use M Recorder to record your screen.</li>
|
24 |
-
</ol>
|
25 |
-
<h3>How to Use M Recorder to Record Your Screen</h3>
|
26 |
-
<p>Using M Recorder to record your screen is very easy and convenient. Here are the steps you need to follow:</p>
|
27 |
-
<ol>
|
28 |
-
<li>Open the app and tap on the floating button on the right side of the screen.</li>
|
29 |
-
<li>Select the settings icon and adjust the recording settings according to your preference. You can change the resolution, frame rate, bit rate, sound source, face cam, countdown timer, etc.</li>
|
30 |
-
<li>Tap on the red circle icon to start recording your screen. You will see a countdown timer before the recording begins.</li>
|
31 |
-
<li>To stop or pause the recording, tap on the floating button again and select the stop or pause icon.</li>
|
32 |
-
<li>To edit or share your recorded video, tap on the video icon on the floating button and select the video you want to edit or share. You can use various editing tools or share options from there.</li>
|
33 |
-
</ol>
|
34 |
-
<h2>Benefits of Using M Recorder</h2>
|
35 |
-
<p>M Recorder has many benefits that make it a superior screen recorder app for Android devices. Some of these benefits are:</p>
|
36 |
-
<h3>High-Quality Video and Audio Recording</h3>
|
37 |
-
<p>M Recorder allows you to record your screen with high-quality video and audio. You can choose from various resolutions, frame rates, and bit rates to suit your needs. You can also record your face and voice with the face cam feature, which adds a personal touch to your videos. You can also record the internal sound of your device, which is useful for recording gameplay or music.</p>
|
38 |
-
<h3>Easy Editing and Sharing Options</h3>
|
39 |
-
<p>M Recorder also provides you with easy editing and sharing options for your recorded videos. You can trim, cut, add music, add text, and more with the built-in editing tools. You can also share your videos with your friends or social media platforms directly from the app. You can also save your videos to your device or cloud storage for later use.</p>
|
40 |
-
<h3>No Watermark or Time Limit</h3>
|
41 |
-
<p>Unlike some other screen recorder apps, M Recorder does not impose any watermark or time limit on your recordings. You can record your screen as long as you want without any annoying logo or banner. You can also enjoy the full features of the app without any in-app purchases or subscriptions.</p>
|
42 |
-
<p>m recorder voice calls apk download<br />
|
43 |
-
m recorder pro apk download<br />
|
44 |
-
m recorder mod apk download<br />
|
45 |
-
m recorder premium apk download<br />
|
46 |
-
m recorder app apk download<br />
|
47 |
-
m recorder audio recording apk download<br />
|
48 |
-
m recorder latest version apk download<br />
|
49 |
-
m recorder free apk download<br />
|
50 |
-
m recorder full apk download<br />
|
51 |
-
m recorder cracked apk download<br />
|
52 |
-
m recorder android apk download<br />
|
53 |
-
m recorder 4.0 apk download<br />
|
54 |
-
m recorder no ads apk download<br />
|
55 |
-
m recorder unlocked apk download<br />
|
56 |
-
m recorder best quality apk download<br />
|
57 |
-
m recorder easy to use apk download<br />
|
58 |
-
m recorder for whatsapp calls apk download<br />
|
59 |
-
m recorder for phone calls apk download<br />
|
60 |
-
m recorder for video calls apk download<br />
|
61 |
-
m recorder for skype calls apk download<br />
|
62 |
-
m recorder for zoom calls apk download<br />
|
63 |
-
m recorder for messenger calls apk download<br />
|
64 |
-
m recorder for telegram calls apk download<br />
|
65 |
-
m recorder for viber calls apk download<br />
|
66 |
-
m recorder for imo calls apk download<br />
|
67 |
-
m recorder for instagram calls apk download<br />
|
68 |
-
m recorder for snapchat calls apk download<br />
|
69 |
-
m recorder for facebook calls apk download<br />
|
70 |
-
m recorder for discord calls apk download<br />
|
71 |
-
m recorder for google meet calls apk download<br />
|
72 |
-
m recorder for microsoft teams calls apk download<br />
|
73 |
-
m recorder for webex meetings calls apk download<br />
|
74 |
-
m recorder for online classes calls apk download<br />
|
75 |
-
m recorder for interviews calls apk download<br />
|
76 |
-
m recorder for podcasts calls apk download<br />
|
77 |
-
m recorder for lectures calls apk download<br />
|
78 |
-
m recorder for meetings calls apk download<br />
|
79 |
-
m recorder for conferences calls apk download<br />
|
80 |
-
m recorder for presentations calls apk download<br />
|
81 |
-
m recorder for seminars calls apk download<br />
|
82 |
-
m recorder for webinars calls apk download<br />
|
83 |
-
m recorder for live streams calls apk download<br />
|
84 |
-
m recorder for music calls apk download<br />
|
85 |
-
m recorder for singing calls apk download<br />
|
86 |
-
m recorder for karaoke calls apk download<br />
|
87 |
-
m recorder for gaming calls apk download<br />
|
88 |
-
m recorder for radio calls apk download<br />
|
89 |
-
m recorder for tv shows calls apk download<br />
|
90 |
-
m recorder for movies calls apk download</p>
|
91 |
-
<h2>Alternatives to M Recorder</h2>
|
92 |
-
<p>M Recorder is a great screen recorder app, but it is not the only one. There are some other screen recorder apps that you can try as alternatives. Here are some of them:</p>
|
93 |
-
<h3>AZ Screen Recorder</h3>
|
94 |
-
<p>AZ Screen Recorder is another popular and powerful screen recorder app for Android devices. It has similar features to M Recorder, such as high-quality video and audio recording, face cam, countdown timer, floating button, drawing, etc. It also has some additional features, such as GIF maker, video compressor, live stream, etc. However, it also has some drawbacks, such as watermark, ads, and in-app purchases.</p>
|
95 |
-
<h3>DU Recorder</h3>
|
96 |
-
<p>DU Recorder is another screen recorder app that offers many features and functions. It allows you to record your screen with high-quality video and audio, face cam, countdown timer, floating button, drawing, etc. It also has some extra features, such as video editor, screenshot tool, image editor, live stream, etc. However, it also has some disadvantages, such as watermark, ads, and in-app purchases.</p>
|
97 |
-
<h3>Screen Recorder & Video Recorder - XRecorder</h3>
|
98 |
-
<p>Screen Recorder & Video Recorder - XRecorder is another screen recorder app that you can use to capture your screen with ease and convenience. It has similar features to M Recorder, such as high-quality video and audio recording, face cam, countdown timer, floating button, drawing, etc. It also has some additional features, such as brush tool, shake to stop recording, etc. However, it also has some drawbacks, such as watermark and in-app purchases.</p>
|
99 |
-
<h2>Conclusion</h2>
|
100 |
-
<p>M Recorder is a screen recorder app that allows you to record your screen easily and conveniently. It has many features and benefits that make it a superior choice for screen recording. However, it is not the only option available. You can also try some other screen recorder apps that offer similar or different features and functions. Ultimately, the best screen recorder app for you depends on your personal preference and needs.</p>
|
101 |
-
<h2>FAQs</h2>
|
102 |
-
<ul>
|
103 |
-
<li><strong>Q: Is M Recorder free?</strong></li>
|
104 |
-
<li>A: Yes, M Recorder is free to download and use. However, it may contain ads that you can remove by purchasing the premium version.</li>
|
105 |
-
<li><strong>Q: Is M Recorder safe?</strong></li>
|
106 |
-
<li>A: Yes, M Recorder is safe to use. It does not collect or share any personal information or data from your device.</li>
|
107 |
-
<li><strong>Q: How do I uninstall M Recorder?</strong></li>
|
108 |
-
<li>A: To uninstall M Recorder from your device, you can follow these steps:</li>
|
109 |
-
<ol>
|
110 |
-
<li>Go to the Settings app on your device and tap on Apps or Applications.</li>
|
111 |
-
<li>Find and tap on M Recorder from the list of apps.</li>
|
112 |
-
<li>Tap on Uninstall and confirm your action.</li>
|
113 |
-
</ol>
|
114 |
-
<li><strong>Q: How do I contact M Recorder support?</strong></li>
|
115 |
-
<li>A: If you have any questions or issues regarding M Recorder, you can contact the support team by sending an email to [email protected] or by visiting their website at [Rsupport - Remote Support Service].</li>
|
116 |
-
<li><strong>Q: What are some tips for using M Recorder?</strong></li>
|
117 |
-
<li>A: Here are some tips for using M Recorder effectively and efficiently: <ul>
|
118 |
-
<li>Make sure you have enough storage space on your device before recording your screen.</li>
|
119 |
-
<li>Close any unnecessary apps or background processes that may affect the performance or quality of your recording.</li>
|
120 |
-
<li>Choose the optimal recording settings for your purpose and device specifications.</li>
|
121 |
-
<li>Use the face cam feature to add personality and emotion to your videos.</li>
|
122 |
-
<li>Use the drawing feature to highlight or annotate important points on your screen.</li>
|
123 |
-
<li>Use the editing tools to enhance and polish your recorded videos.</li>
|
124 |
-
<li>Share your videos with your audience or save them for later use.</li>
|
125 |
-
</ul>
|
126 |
-
<p>I hope you enjoyed this article and learned something new about M Recorder. If you have any feedback or suggestions, please let me know in the comments below. Thank you for reading!</p><br />
|
127 |
-
<br />
|
128 |
-
<br />
|
spaces/2ndelement/voicevox/test/test_setting.py
DELETED
@@ -1,72 +0,0 @@
|
|
1 |
-
from pathlib import Path
|
2 |
-
from tempfile import TemporaryDirectory
|
3 |
-
from unittest import TestCase
|
4 |
-
|
5 |
-
from voicevox_engine.setting import CorsPolicyMode, Setting, SettingLoader
|
6 |
-
|
7 |
-
|
8 |
-
class TestSettingLoader(TestCase):
|
9 |
-
def setUp(self):
|
10 |
-
self.tmp_dir = TemporaryDirectory()
|
11 |
-
self.tmp_dir_path = Path(self.tmp_dir.name)
|
12 |
-
|
13 |
-
def test_loading_1(self):
|
14 |
-
setting_loader = SettingLoader(Path("not_exist.yaml"))
|
15 |
-
settings = setting_loader.load_setting_file()
|
16 |
-
|
17 |
-
self.assertEqual(
|
18 |
-
settings.dict(),
|
19 |
-
{"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps},
|
20 |
-
)
|
21 |
-
|
22 |
-
def test_loading_2(self):
|
23 |
-
setting_loader = SettingLoader(
|
24 |
-
setting_file_path=Path("test/setting-test-load-1.yaml")
|
25 |
-
)
|
26 |
-
settings = setting_loader.load_setting_file()
|
27 |
-
|
28 |
-
self.assertEqual(
|
29 |
-
settings.dict(),
|
30 |
-
{"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps},
|
31 |
-
)
|
32 |
-
|
33 |
-
def test_loading_3(self):
|
34 |
-
setting_loader = SettingLoader(
|
35 |
-
setting_file_path=Path("test/setting-test-load-2.yaml")
|
36 |
-
)
|
37 |
-
settings = setting_loader.load_setting_file()
|
38 |
-
|
39 |
-
self.assertEqual(
|
40 |
-
settings.dict(),
|
41 |
-
{"allow_origin": None, "cors_policy_mode": "all"},
|
42 |
-
)
|
43 |
-
|
44 |
-
def test_loading_4(self):
|
45 |
-
setting_loader = SettingLoader(
|
46 |
-
setting_file_path=Path("test/setting-test-load-3.yaml")
|
47 |
-
)
|
48 |
-
settings = setting_loader.load_setting_file()
|
49 |
-
|
50 |
-
self.assertEqual(
|
51 |
-
settings.dict(),
|
52 |
-
{
|
53 |
-
"allow_origin": "192.168.254.255 192.168.255.255",
|
54 |
-
"cors_policy_mode": CorsPolicyMode.localapps,
|
55 |
-
},
|
56 |
-
)
|
57 |
-
|
58 |
-
def test_dump(self):
|
59 |
-
setting_loader = SettingLoader(
|
60 |
-
setting_file_path=Path(self.tmp_dir_path / "setting-test-dump.yaml")
|
61 |
-
)
|
62 |
-
settings = Setting(cors_policy_mode=CorsPolicyMode.localapps)
|
63 |
-
setting_loader.dump_setting_file(settings)
|
64 |
-
|
65 |
-
self.assertTrue(setting_loader.setting_file_path.is_file())
|
66 |
-
self.assertEqual(
|
67 |
-
setting_loader.load_setting_file().dict(),
|
68 |
-
{"allow_origin": None, "cors_policy_mode": CorsPolicyMode.localapps},
|
69 |
-
)
|
70 |
-
|
71 |
-
def tearDown(self):
|
72 |
-
self.tmp_dir.cleanup()
|
spaces/AI-Hobbyist/Hoyo-RVC/docs/faiss_tips_en.md
DELETED
@@ -1,102 +0,0 @@
|
|
1 |
-
faiss tuning TIPS
|
2 |
-
==================
|
3 |
-
# about faiss
|
4 |
-
faiss is a library for nearest-neighbor search over dense vectors, developed by Facebook Research, which efficiently implements many approximate nearest-neighbor search methods.
|
5 |
-
Approximate Neighbor Search finds similar vectors quickly while sacrificing some accuracy.
|
6 |
-
|
7 |
-
## faiss in RVC
|
8 |
-
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to the embedding generated from the training data and mix them to achieve a conversion that is closer to the original speech. However, since this search takes time if performed naively, high-speed conversion is realized by using approximate neighborhood search.
|
9 |
-
|
10 |
-
# implementation overview
|
11 |
-
The directory '/logs/your-experiment/3_feature256', next to the trained model, contains the features extracted by HuBERT from each piece of voice data.
|
12 |
-
From here we read the npy files in order sorted by filename and concatenate the vectors to create big_npy. (This vector has shape [N, 256].)
|
13 |
-
After saving big_npy as /logs/your-experiment/total_fea.npy, train it with faiss.
|
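As a quick illustration of this step, the sketch below stitches the per-utterance feature files into big_npy and trains an index on it. It is only a sketch based on the description above, not the actual RVC training code; the directory layout and the choice of n_ivf are assumptions.

```python
# Minimal sketch of the pipeline described above (paths and n_ivf are assumptions).
from pathlib import Path

import numpy as np
import faiss

feature_dir = Path("logs/your-experiment/3_feature256")

# Read the .npy files sorted by filename and stack them into one [N, 256] array.
big_npy = np.concatenate(
    [np.load(p) for p in sorted(feature_dir.glob("*.npy"))], axis=0
).astype(np.float32)
np.save("logs/your-experiment/total_fea.npy", big_npy)

# Train an IVF index on the concatenated features, as in the factory string shown later.
n_ivf = int(np.sqrt(big_npy.shape[0]))
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf, faiss.METRIC_L2)
index.train(big_npy)
index.add(big_npy)
```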
14 |
-
|
15 |
-
In this article, I will explain the meaning of these parameters.
|
16 |
-
|
17 |
-
# Explanation of the method
|
18 |
-
## index factory
|
19 |
-
An index factory is a unique faiss notation that expresses a pipeline that connects multiple approximate neighborhood search methods as a string.
|
20 |
-
This allows you to try various approximate neighborhood search methods simply by changing the index factory string.
|
21 |
-
In RVC it is used like this:
|
22 |
-
|
23 |
-
```python
|
24 |
-
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
|
25 |
-
```
|
26 |
-
Among the arguments of index_factory, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance to use.
|
27 |
-
|
28 |
-
For more detailed notation
|
29 |
-
https://github.com/facebookresearch/faiss/wiki/The-index-factory
|
30 |
-
|
31 |
-
## index for distance
|
32 |
-
There are two typical indexes used as similarity of embedding as follows.
|
33 |
-
|
34 |
-
- Euclidean distance (METRIC_L2)
|
35 |
-
- inner product (METRIC_INNER_PRODUCT)
|
36 |
-
|
37 |
-
Euclidean distance takes the squared difference in each dimension, sums these squared differences over all dimensions, and then takes the square root. This is the same as the distance in 2D and 3D that we use on a daily basis.
|
38 |
-
The inner product is generally not used as a similarity measure on its own; instead, cosine similarity, which takes the inner product after normalizing the vectors by their L2 norm, is commonly used.
|
39 |
-
|
40 |
-
Which is better depends on the case, but cosine similarity is often used for embeddings obtained with word2vec and for similar-image retrieval models trained with ArcFace. If you want to L2-normalize a vector X with numpy, you can do it with the following code, with eps small enough to avoid division by zero.
|
41 |
-
|
42 |
-
```python
|
43 |
-
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
|
44 |
-
```
|
45 |
-
|
46 |
-
Also, for the index factory, you can change the distance index used for calculation by choosing the value to pass as the third argument.
|
47 |
-
|
48 |
-
```python
|
49 |
-
index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
|
50 |
-
```
|
51 |
-
|
52 |
-
## IVF
|
53 |
-
IVF (Inverted file indexes) is an algorithm similar to the inverted index in full-text search.
|
54 |
-
During learning, the search target is clustered with kmeans, and Voronoi partitioning is performed using the cluster center. Each data point is assigned a cluster, so we create a dictionary that looks up the data points from the clusters.
|
55 |
-
|
56 |
-
For example, if clusters are assigned as follows
|
57 |
-
|index|Cluster|
|
58 |
-
|-----|-------|
|
59 |
-
|1|A|
|
60 |
-
|2|B|
|
61 |
-
|3|A|
|
62 |
-
|4|C|
|
63 |
-
|5|B|
|
64 |
-
|
65 |
-
The resulting inverted index looks like this:
|
66 |
-
|
67 |
-
|cluster|index|
|
68 |
-
|-------|-----|
|
69 |
-
|A|1, 3|
|
70 |
-
|B|2, 5|
|
71 |
-
|C|4|
|
72 |
-
|
73 |
-
When searching, we first search n_probe clusters from the clusters, and then calculate the distances for the data points belonging to each cluster.
|
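To make the role of n_probe concrete, here is a small self-contained sketch with random data. It only demonstrates the IVF training/search calls and the n_probe knob; it is not taken from the RVC code.

```python
# IVF toy example: random vectors, only meant to show training and the n_probe knob.
import numpy as np
import faiss

d, n = 256, 10000
xb = np.random.rand(n, d).astype(np.float32)

index = faiss.index_factory(d, "IVF256,Flat", faiss.METRIC_L2)
index.train(xb)   # k-means clustering that builds the Voronoi cells
index.add(xb)     # assigns each vector to a cell (the inverted lists)

# n_probe = number of cells visited per query (speed/accuracy trade-off).
faiss.extract_index_ivf(index).nprobe = 1

distances, ids = index.search(xb[:5], 4)
```

extract_index_ivf is used here because the index returned by index_factory can be wrapped in other index types; setting nprobe through it works in either case.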
74 |
-
|
75 |
-
# recommend parameter
|
76 |
-
There are official guidelines on how to choose an index, so I will explain accordingly.
|
77 |
-
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
|
78 |
-
|
79 |
-
For datasets below 1M, 4bit-PQ is the most efficient method available in faiss as of April 2023.
|
80 |
-
Combining this with IVF, narrowing down the candidates with 4bit-PQ, and finally recalculating the distance with an accurate index can be described by using the following index factory.
|
81 |
-
|
82 |
-
```python
|
83 |
-
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
|
84 |
-
```
|
85 |
-
|
86 |
-
## Recommended parameters for IVF
|
87 |
-
Consider the case where the number of IVF cells is too large. For example, if the coarse quantizer creates as many cells as there are data points, this degenerates into a naive exhaustive search and is inefficient.
|
88 |
-
For 1M points or fewer, an IVF value between 4*sqrt(N) and 16*sqrt(N) is recommended, where N is the number of data points.
|
89 |
-
|
90 |
-
Since search time increases in proportion to n_probe, balance it against the required accuracy and choose appropriately. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
|
91 |
-
|
92 |
-
## FastScan
|
93 |
-
FastScan is a method that approximates distances at high speed with product quantization by performing the lookup-table accumulation in SIMD registers.
|
94 |
-
Product quantization performs clustering independently for each group of d dimensions (usually d = 2) during training, computes the distances between cluster centroids in advance, and stores them in a lookup table. At prediction time, the distance contribution of each dimension group can be computed in O(1) by consulting the lookup table.
|
95 |
-
So the number you specify after PQ usually specifies half the dimension of the vector.
|
96 |
-
|
97 |
-
For a more detailed description of FastScan, please refer to the official documentation.
|
98 |
-
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
|
99 |
-
|
100 |
-
## RFlat
|
101 |
-
RFlat is an instruction to recalculate the rough distance calculated by FastScan with the exact distance specified by the third argument of index factory.
|
102 |
-
When getting k neighbors, k*k_factor points are recalculated.
|
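Putting the recommendations together, a rough end-to-end sketch might look like the following. The data is random, the parameter values are illustrative, and the attribute access for k_factor is my reading of the faiss Python bindings rather than something stated above, so it may need adjusting for your faiss version.

```python
# End-to-end sketch of the recommended recipe; random data and illustrative parameters.
import numpy as np
import faiss

d = 256
xb = np.random.rand(100000, d).astype(np.float32)

index = faiss.index_factory(d, "IVF1024,PQ128x4fs,RFlat")
index.train(xb)
index.add(xb)

refine = faiss.downcast_index(index)                    # RFlat re-ranking wrapper
refine.k_factor = 4                                     # re-rank k * k_factor candidates exactly
faiss.extract_index_ivf(refine.base_index).nprobe = 1   # cells visited per query

distances, ids = index.search(xb[:8], 8)
```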
spaces/AIFILMS/riffusion-playground/app.py
DELETED
@@ -1,36 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
Shim layer for using the riffusion playground streamlit app with huggingface spaces.
|
3 |
-
|
4 |
-
It doesn't support the pages feature of streamlit yet.
|
5 |
-
"""
|
6 |
-
import importlib
|
7 |
-
from pathlib import Path
|
8 |
-
import sys
|
9 |
-
|
10 |
-
import streamlit as st
|
11 |
-
|
12 |
-
|
13 |
-
def render_main():
|
14 |
-
RIFFUSION_PATH = Path(__file__).parent / "riffusion"
|
15 |
-
sys.path.append(str(RIFFUSION_PATH))
|
16 |
-
|
17 |
-
st.set_page_config(layout="wide", page_icon="🎸")
|
18 |
-
|
19 |
-
# Disable any further calls to set_page_config (it may only be called once per app)
|
20 |
-
st.set_page_config = lambda **kwargs: None
|
21 |
-
|
22 |
-
# Find all pages in the riffusion directory
|
23 |
-
pages = sorted(
|
24 |
-
p.name[:-3] for p in (RIFFUSION_PATH / "riffusion" / "streamlit" / "pages").glob("*.py")
|
25 |
-
)
|
26 |
-
|
27 |
-
# Add the pages to the sidebar
|
28 |
-
page = st.sidebar.selectbox("Page", pages, index=pages.index("text_to_audio"))
|
29 |
-
assert page is not None
|
30 |
-
|
31 |
-
module = importlib.import_module(f"riffusion.streamlit.pages.{page}")
|
32 |
-
render_func = getattr(module, f"render_{page}")
|
33 |
-
render_func()
|
34 |
-
|
35 |
-
|
36 |
-
render_main()
|
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/pqmf.py
DELETED
@@ -1,129 +0,0 @@
|
|
1 |
-
# -*- coding: utf-8 -*-
|
2 |
-
|
3 |
-
# Copyright 2020 Tomoki Hayashi
|
4 |
-
# MIT License (https://opensource.org/licenses/MIT)
|
5 |
-
|
6 |
-
"""Pseudo QMF modules."""
|
7 |
-
|
8 |
-
import numpy as np
|
9 |
-
import torch
|
10 |
-
import torch.nn.functional as F
|
11 |
-
|
12 |
-
from scipy.signal import kaiser
|
13 |
-
|
14 |
-
|
15 |
-
def design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):
|
16 |
-
"""Design prototype filter for PQMF.
|
17 |
-
|
18 |
-
This method is based on `A Kaiser window approach for the design of prototype
|
19 |
-
filters of cosine modulated filterbanks`_.
|
20 |
-
|
21 |
-
Args:
|
22 |
-
taps (int): The number of filter taps.
|
23 |
-
cutoff_ratio (float): Cut-off frequency ratio.
|
24 |
-
beta (float): Beta coefficient for kaiser window.
|
25 |
-
|
26 |
-
Returns:
|
27 |
-
ndarray: Impulse response of prototype filter (taps + 1,).
|
28 |
-
|
29 |
-
.. _`A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks`:
|
30 |
-
https://ieeexplore.ieee.org/abstract/document/681427
|
31 |
-
|
32 |
-
"""
|
33 |
-
# check the arguments are valid
|
34 |
-
assert taps % 2 == 0, "The number of taps must be an even number."
|
35 |
-
assert 0.0 < cutoff_ratio < 1.0, "Cutoff ratio must be > 0.0 and < 1.0."
|
36 |
-
|
37 |
-
# make initial filter
|
38 |
-
omega_c = np.pi * cutoff_ratio
|
39 |
-
with np.errstate(invalid='ignore'):
|
40 |
-
h_i = np.sin(omega_c * (np.arange(taps + 1) - 0.5 * taps)) \
|
41 |
-
/ (np.pi * (np.arange(taps + 1) - 0.5 * taps))
|
42 |
-
h_i[taps // 2] = np.cos(0) * cutoff_ratio # fix nan due to indeterminate form
|
43 |
-
|
44 |
-
# apply kaiser window
|
45 |
-
w = kaiser(taps + 1, beta)
|
46 |
-
h = h_i * w
|
47 |
-
|
48 |
-
return h
|
49 |
-
|
50 |
-
|
51 |
-
class PQMF(torch.nn.Module):
|
52 |
-
"""PQMF module.
|
53 |
-
|
54 |
-
This module is based on `Near-perfect-reconstruction pseudo-QMF banks`_.
|
55 |
-
|
56 |
-
.. _`Near-perfect-reconstruction pseudo-QMF banks`:
|
57 |
-
https://ieeexplore.ieee.org/document/258122
|
58 |
-
|
59 |
-
"""
|
60 |
-
|
61 |
-
def __init__(self, subbands=4, taps=62, cutoff_ratio=0.15, beta=9.0):
|
62 |
-
"""Initilize PQMF module.
|
63 |
-
|
64 |
-
Args:
|
65 |
-
subbands (int): The number of subbands.
|
66 |
-
taps (int): The number of filter taps.
|
67 |
-
cutoff_ratio (float): Cut-off frequency ratio.
|
68 |
-
beta (float): Beta coefficient for kaiser window.
|
69 |
-
|
70 |
-
"""
|
71 |
-
super(PQMF, self).__init__()
|
72 |
-
|
73 |
-
# define filter coefficient
|
74 |
-
h_proto = design_prototype_filter(taps, cutoff_ratio, beta)
|
75 |
-
h_analysis = np.zeros((subbands, len(h_proto)))
|
76 |
-
h_synthesis = np.zeros((subbands, len(h_proto)))
|
77 |
-
for k in range(subbands):
|
78 |
-
h_analysis[k] = 2 * h_proto * np.cos(
|
79 |
-
(2 * k + 1) * (np.pi / (2 * subbands)) *
|
80 |
-
(np.arange(taps + 1) - ((taps - 1) / 2)) +
|
81 |
-
(-1) ** k * np.pi / 4)
|
82 |
-
h_synthesis[k] = 2 * h_proto * np.cos(
|
83 |
-
(2 * k + 1) * (np.pi / (2 * subbands)) *
|
84 |
-
(np.arange(taps + 1) - ((taps - 1) / 2)) -
|
85 |
-
(-1) ** k * np.pi / 4)
|
86 |
-
|
87 |
-
# convert to tensor
|
88 |
-
analysis_filter = torch.from_numpy(h_analysis).float().unsqueeze(1)
|
89 |
-
synthesis_filter = torch.from_numpy(h_synthesis).float().unsqueeze(0)
|
90 |
-
|
91 |
-
# register coefficients as buffer
|
92 |
-
self.register_buffer("analysis_filter", analysis_filter)
|
93 |
-
self.register_buffer("synthesis_filter", synthesis_filter)
|
94 |
-
|
95 |
-
# filter for downsampling & upsampling
|
96 |
-
updown_filter = torch.zeros((subbands, subbands, subbands)).float()
|
97 |
-
for k in range(subbands):
|
98 |
-
updown_filter[k, k, 0] = 1.0
|
99 |
-
self.register_buffer("updown_filter", updown_filter)
|
100 |
-
self.subbands = subbands
|
101 |
-
|
102 |
-
# keep padding info
|
103 |
-
self.pad_fn = torch.nn.ConstantPad1d(taps // 2, 0.0)
|
104 |
-
|
105 |
-
def analysis(self, x):
|
106 |
-
"""Analysis with PQMF.
|
107 |
-
|
108 |
-
Args:
|
109 |
-
x (Tensor): Input tensor (B, 1, T).
|
110 |
-
|
111 |
-
Returns:
|
112 |
-
Tensor: Output tensor (B, subbands, T // subbands).
|
113 |
-
|
114 |
-
"""
|
115 |
-
x = F.conv1d(self.pad_fn(x), self.analysis_filter)
|
116 |
-
return F.conv1d(x, self.updown_filter, stride=self.subbands)
|
117 |
-
|
118 |
-
def synthesis(self, x):
|
119 |
-
"""Synthesis with PQMF.
|
120 |
-
|
121 |
-
Args:
|
122 |
-
x (Tensor): Input tensor (B, subbands, T // subbands).
|
123 |
-
|
124 |
-
Returns:
|
125 |
-
Tensor: Output tensor (B, 1, T).
|
126 |
-
|
127 |
-
"""
|
128 |
-
x = F.conv_transpose1d(x, self.updown_filter * self.subbands, stride=self.subbands)
|
129 |
-
return F.conv1d(self.pad_fn(x), self.synthesis_filter)
|
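For orientation, a short usage sketch for the class above follows. It is not part of the original file; the waveform is random and the default 4-band configuration is assumed.

```python
# Usage sketch for the PQMF class defined above (illustrative only).
import torch

pqmf = PQMF(subbands=4)

x = torch.randn(1, 1, 16000)      # (batch, channels, samples) dummy waveform
subbands = pqmf.analysis(x)       # -> (1, 4, 4000): four downsampled subband signals
x_hat = pqmf.synthesis(subbands)  # -> (1, 1, 16000): near-perfect reconstruction

print(subbands.shape, x_hat.shape)
```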
spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/training_utils.py
DELETED
@@ -1,27 +0,0 @@
|
|
1 |
-
from utils.hparams import hparams
|
2 |
-
|
3 |
-
|
4 |
-
class RSQRTSchedule(object):
|
5 |
-
def __init__(self, optimizer):
|
6 |
-
super().__init__()
|
7 |
-
self.optimizer = optimizer
|
8 |
-
self.constant_lr = hparams['lr']
|
9 |
-
self.warmup_updates = hparams['warmup_updates']
|
10 |
-
self.hidden_size = hparams['hidden_size']
|
11 |
-
self.lr = hparams['lr']
|
12 |
-
for param_group in optimizer.param_groups:
|
13 |
-
param_group['lr'] = self.lr
|
14 |
-
self.step(0)
|
15 |
-
|
16 |
-
def step(self, num_updates):
|
17 |
-
constant_lr = self.constant_lr
|
18 |
-
warmup = min(num_updates / self.warmup_updates, 1.0)
|
19 |
-
rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5
|
20 |
-
rsqrt_hidden = self.hidden_size ** -0.5
|
21 |
-
self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7)
|
22 |
-
for param_group in self.optimizer.param_groups:
|
23 |
-
param_group['lr'] = self.lr
|
24 |
-
return self.lr
|
25 |
-
|
26 |
-
def get_lr(self):
|
27 |
-
return self.optimizer.param_groups[0]['lr']
|
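A small usage sketch for the scheduler above, not part of the original file. It assumes the global hparams dict behaves like a plain dictionary populated from the experiment config; the values shown here are placeholders.

```python
# Usage sketch for RSQRTSchedule (illustrative; hparams values are placeholders).
import torch

from utils.hparams import hparams  # global config dict read by the scheduler

hparams.update({'lr': 2e-4, 'warmup_updates': 4000, 'hidden_size': 256})

model = torch.nn.Linear(256, 256)
optimizer = torch.optim.AdamW(model.parameters(), lr=hparams['lr'])
scheduler = RSQRTSchedule(optimizer)

for num_updates in range(1, 10001):
    # ... forward / backward / optimizer.step() would go here ...
    scheduler.step(num_updates)  # linear warmup, then reciprocal-sqrt decay
```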
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/__init__.py
DELETED
File without changes
|
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/models/diffusion/ddim.py
DELETED
@@ -1,262 +0,0 @@
|
|
1 |
-
"""SAMPLING ONLY."""
|
2 |
-
|
3 |
-
import torch
|
4 |
-
import numpy as np
|
5 |
-
from tqdm import tqdm
|
6 |
-
from functools import partial
|
7 |
-
|
8 |
-
from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
|
9 |
-
extract_into_tensor
|
10 |
-
|
11 |
-
|
12 |
-
class DDIMSampler(object):
|
13 |
-
def __init__(self, model, schedule="linear", **kwargs):
|
14 |
-
super().__init__()
|
15 |
-
self.model = model
|
16 |
-
self.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
|
17 |
-
self.ddpm_num_timesteps = model.num_timesteps
|
18 |
-
self.schedule = schedule
|
19 |
-
|
20 |
-
def register_buffer(self, name, attr):
|
21 |
-
if type(attr) == torch.Tensor:
|
22 |
-
# if attr.device != torch.device("cuda"):
|
23 |
-
# attr = attr.to(torch.device("cuda"))
|
24 |
-
attr = attr.to(self.device)
|
25 |
-
setattr(self, name, attr)
|
26 |
-
|
27 |
-
def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
|
28 |
-
self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
|
29 |
-
num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
|
30 |
-
alphas_cumprod = self.model.alphas_cumprod
|
31 |
-
assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
|
32 |
-
to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
|
33 |
-
|
34 |
-
self.register_buffer('betas', to_torch(self.model.betas))
|
35 |
-
self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
|
36 |
-
self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
|
37 |
-
|
38 |
-
# calculations for diffusion q(x_t | x_{t-1}) and others
|
39 |
-
self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
|
40 |
-
self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
|
41 |
-
self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
|
42 |
-
self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
|
43 |
-
self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
|
44 |
-
|
45 |
-
# ddim sampling parameters
|
46 |
-
ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
|
47 |
-
ddim_timesteps=self.ddim_timesteps,
|
48 |
-
eta=ddim_eta,verbose=verbose)
|
49 |
-
self.register_buffer('ddim_sigmas', ddim_sigmas)
|
50 |
-
self.register_buffer('ddim_alphas', ddim_alphas)
|
51 |
-
self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
|
52 |
-
self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
|
53 |
-
sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
|
54 |
-
(1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
|
55 |
-
1 - self.alphas_cumprod / self.alphas_cumprod_prev))
|
56 |
-
self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
|
57 |
-
|
58 |
-
@torch.no_grad()
|
59 |
-
def sample(self,
|
60 |
-
S,
|
61 |
-
batch_size,
|
62 |
-
shape,
|
63 |
-
conditioning=None,
|
64 |
-
callback=None,
|
65 |
-
normals_sequence=None,
|
66 |
-
img_callback=None,
|
67 |
-
quantize_x0=False,
|
68 |
-
eta=0.,
|
69 |
-
mask=None,
|
70 |
-
x0=None,
|
71 |
-
temperature=1.,
|
72 |
-
noise_dropout=0.,
|
73 |
-
score_corrector=None,
|
74 |
-
corrector_kwargs=None,
|
75 |
-
verbose=True,
|
76 |
-
x_T=None,
|
77 |
-
log_every_t=100,
|
78 |
-
unconditional_guidance_scale=1.,
|
79 |
-
unconditional_conditioning=None,
|
80 |
-
# this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
|
81 |
-
**kwargs
|
82 |
-
):
|
83 |
-
if conditioning is not None:
|
84 |
-
if isinstance(conditioning, dict):
|
85 |
-
ctmp = conditioning[list(conditioning.keys())[0]]
|
86 |
-
while isinstance(ctmp, list): ctmp = ctmp[0]
|
87 |
-
cbs = ctmp.shape[0]
|
88 |
-
if cbs != batch_size:
|
89 |
-
print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
|
90 |
-
else:
|
91 |
-
if conditioning.shape[0] != batch_size:
|
92 |
-
print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
|
93 |
-
|
94 |
-
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
|
95 |
-
# sampling
|
96 |
-
C, H, W = shape
|
97 |
-
size = (batch_size, C, H, W)
|
98 |
-
# print(f'Data shape for DDIM sampling is {size}, eta {eta}')
|
99 |
-
|
100 |
-
samples, intermediates = self.ddim_sampling(conditioning, size,
|
101 |
-
callback=callback,
|
102 |
-
img_callback=img_callback,
|
103 |
-
quantize_denoised=quantize_x0,
|
104 |
-
mask=mask, x0=x0,
|
105 |
-
ddim_use_original_steps=False,
|
106 |
-
noise_dropout=noise_dropout,
|
107 |
-
temperature=temperature,
|
108 |
-
score_corrector=score_corrector,
|
109 |
-
corrector_kwargs=corrector_kwargs,
|
110 |
-
x_T=x_T,
|
111 |
-
log_every_t=log_every_t,
|
112 |
-
unconditional_guidance_scale=unconditional_guidance_scale,
|
113 |
-
unconditional_conditioning=unconditional_conditioning,
|
114 |
-
)
|
115 |
-
return samples, intermediates
|
116 |
-
|
117 |
-
@torch.no_grad()
|
118 |
-
def ddim_sampling(self, cond, shape,
|
119 |
-
x_T=None, ddim_use_original_steps=False,
|
120 |
-
callback=None, timesteps=None, quantize_denoised=False,
|
121 |
-
mask=None, x0=None, img_callback=None, log_every_t=100,
|
122 |
-
temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
|
123 |
-
unconditional_guidance_scale=1., unconditional_conditioning=None,):
|
124 |
-
device = self.model.betas.device
|
125 |
-
b = shape[0]
|
126 |
-
if x_T is None:
|
127 |
-
img = torch.randn(shape, device=device)
|
128 |
-
else:
|
129 |
-
img = x_T
|
130 |
-
|
131 |
-
if timesteps is None:
|
132 |
-
timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
|
133 |
-
elif timesteps is not None and not ddim_use_original_steps:
|
134 |
-
subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
|
135 |
-
timesteps = self.ddim_timesteps[:subset_end]
|
136 |
-
|
137 |
-
intermediates = {'x_inter': [img], 'pred_x0': [img]}
|
138 |
-
time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
|
139 |
-
total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
|
140 |
-
|
141 |
-
# iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
|
142 |
-
|
143 |
-
for i, step in enumerate(time_range):
|
144 |
-
index = total_steps - i - 1
|
145 |
-
ts = torch.full((b,), step, device=device, dtype=torch.long)
|
146 |
-
|
147 |
-
if mask is not None:
|
148 |
-
assert x0 is not None
|
149 |
-
img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
|
150 |
-
img = img_orig * mask + (1. - mask) * img
|
151 |
-
|
152 |
-
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
|
153 |
-
quantize_denoised=quantize_denoised, temperature=temperature,
|
154 |
-
noise_dropout=noise_dropout, score_corrector=score_corrector,
|
155 |
-
corrector_kwargs=corrector_kwargs,
|
156 |
-
                                          unconditional_guidance_scale=unconditional_guidance_scale,
                                          unconditional_conditioning=unconditional_conditioning)
            img, pred_x0 = outs
            if callback: callback(i)
            if img_callback: img_callback(pred_x0, i)

            if index % log_every_t == 0 or index == total_steps - 1:
                intermediates['x_inter'].append(img)
                intermediates['pred_x0'].append(pred_x0)

        return img, intermediates

    @torch.no_grad()
    def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
                      temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
                      unconditional_guidance_scale=1., unconditional_conditioning=None):
        b, *_, device = *x.shape, x.device

        if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
            e_t = self.model.apply_model(x, t, c)
        else:
            x_in = torch.cat([x] * 2)
            t_in = torch.cat([t] * 2)
            if isinstance(c, dict):
                assert isinstance(unconditional_conditioning, dict)
                c_in = dict()
                for k in c:
                    if isinstance(c[k], list):
                        c_in[k] = [torch.cat([
                            unconditional_conditioning[k][i],
                            c[k][i]]) for i in range(len(c[k]))]
                    else:
                        c_in[k] = torch.cat([
                            unconditional_conditioning[k],
                            c[k]])
            elif isinstance(c, list):
                c_in = list()
                assert isinstance(unconditional_conditioning, list)
                for i in range(len(c)):
                    c_in.append(torch.cat([unconditional_conditioning[i], c[i]]))
            else:
                c_in = torch.cat([unconditional_conditioning, c])  # c/uc shape [b, seq_len=77, dim=1024], c_in shape [b*2, seq_len, dim]
            e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
            e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)

        if score_corrector is not None:
            assert self.model.parameterization == "eps"
            e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)

        alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
        alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
        sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
        sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
        # select parameters corresponding to the currently considered timestep
        a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
        a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
        sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
        sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device)

        # current prediction for x_0
        pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
        if quantize_denoised:
            pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
        # direction pointing to x_t
        dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
        noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
        if noise_dropout > 0.:
            noise = torch.nn.functional.dropout(noise, p=noise_dropout)
        x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
        return x_prev, pred_x0

    @torch.no_grad()
    def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
        # fast, but does not allow for exact reconstruction
        # t serves as an index to gather the correct alphas
        if use_original_steps:
            sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
            sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
        else:
            sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
            sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas

        if noise is None:
            noise = torch.randn_like(x0)
        return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
                extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)

    @torch.no_grad()
    def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
               use_original_steps=False):

        timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
        timesteps = timesteps[:t_start]

        time_range = np.flip(timesteps)
        total_steps = timesteps.shape[0]
        # print(f"Running DDIM Sampling with {total_steps} timesteps")

        # iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
        x_dec = x_latent
        for i, step in enumerate(time_range):
            index = total_steps - i - 1
            ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
            x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
                                          unconditional_guidance_scale=unconditional_guidance_scale,
                                          unconditional_conditioning=unconditional_conditioning)
        return x_dec
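For reference, `p_sample_ddim` above combines classifier-free guidance with a single DDIM update. In the code's notation (guidance scale $s$, guided noise estimate $\hat\epsilon_t$), the step it performs is:

$$\hat\epsilon_t = \epsilon_\theta(x_t, \varnothing) + s\,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr)$$
$$\mathrm{pred\_x0} = \frac{x_t - \sqrt{1-a_t}\,\hat\epsilon_t}{\sqrt{a_t}}, \qquad x_{t-1} = \sqrt{a_{\mathrm{prev}}}\;\mathrm{pred\_x0} + \sqrt{1-a_{\mathrm{prev}}-\sigma_t^2}\,\hat\epsilon_t + \sigma_t z, \quad z \sim \mathcal{N}(0, I).$$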
spaces/ASJMO/freegpt/g4f/Provider/Providers/Better.py
DELETED
@@ -1,56 +0,0 @@
import os
import json
import requests
from typing import Dict, get_type_hints

url = 'https://openai-proxy-api.vercel.app/v1/'
model = [
    'gpt-3.5-turbo',
    'gpt-3.5-turbo-0613',
    'gpt-3.5-turbo-16k',
    'gpt-3.5-turbo-16k-0613',
    'gpt-4',
]

supports_stream = True
needs_auth = False


def _create_completion(model: str, messages: list, stream: bool, **kwargs):
    headers = {
        'Content-Type': 'application/json',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.58',
        'Referer': 'https://chat.ylokh.xyz/',
        'Origin': 'https://chat.ylokh.xyz',
        'Connection': 'keep-alive',
    }

    json_data = {
        'messages': messages,
        'temperature': 1.0,
        'model': model,
        'stream': stream,
    }

    response = requests.post(
        'https://openai-proxy-api.vercel.app/v1/chat/completions', headers=headers, json=json_data, stream=True
    )

    for token in response.iter_lines():
        decoded = token.decode('utf-8')
        if decoded.startswith('data: '):
            data_str = decoded.replace('data: ', '')
            data = json.loads(data_str)
            if 'choices' in data and 'delta' in data['choices'][0]:
                delta = data['choices'][0]['delta']
                content = delta.get('content', '')
                finish_reason = delta.get('finish_reason', '')

                if finish_reason == 'stop':
                    break
                if content:
                    yield content


params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join(
    [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
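A minimal usage sketch of the streaming generator above; the import path is taken from the file header, and the message format (OpenAI chat schema) is an assumption about how the provider is driven, not something this commit documents:

# Hypothetical driver for the provider above; assumes the package is installed
# and the module is importable under the path shown in the header.
from g4f.Provider.Providers import Better

messages = [{"role": "user", "content": "Summarize DDIM sampling in one sentence."}]
for chunk in Better._create_completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    print(chunk, end="", flush=True)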
spaces/ASJMO/freegpt/g4f/Provider/Providers/ChatgptAi.py
DELETED
@@ -1,51 +0,0 @@
import os
import requests, re
from ...typing import sha256, Dict, get_type_hints

url = 'https://chatgpt.ai/gpt-4/'
model = ['gpt-4']
supports_stream = True
needs_auth = False


def _create_completion(model: str, messages: list, stream: bool, **kwargs):
    chat = ''
    for message in messages:
        chat += '%s: %s\n' % (message['role'], message['content'])
    chat += 'assistant: '

    response = requests.get('https://chatgpt.ai/')
    nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0]

    headers = {
        'authority': 'chatgpt.ai',
        'accept': '*/*',
        'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
        'cache-control': 'no-cache',
        'origin': 'https://chatgpt.ai',
        'pragma': 'no-cache',
        'referer': 'https://chatgpt.ai/gpt-4/',
        'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'sec-fetch-dest': 'empty',
        'sec-fetch-mode': 'cors',
        'sec-fetch-site': 'same-origin',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
    }
    data = {
        '_wpnonce': nonce,
        'post_id': post_id,
        'url': 'https://chatgpt.ai/gpt-4',
        'action': 'wpaicg_chat_shortcode_message',
        'message': chat,
        'bot_id': bot_id
    }

    response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php',
                             headers=headers, data=data)

    yield (response.json()['data'])

params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/AchyuthGamer/OpenGPT/client/css/typing.css
DELETED
@@ -1,15 +0,0 @@
.typing {
    position: absolute;
    top: -25px;
    left: 0;
    font-size: 14px;
    animation: show_popup 0.4s;
}

.typing-hiding {
    animation: hide_popup 0.4s;
}

.typing-hidden {
    display: none;
}
spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/render.py
DELETED
@@ -1,110 +0,0 @@
import numpy as np

import cv2

from math import sqrt

_LINE_THICKNESS_SCALING = 500.0

np.random.seed(0)
RAND_COLORS = np.random.randint(50, 255, (64, 3), "int")  # used for class visu
RAND_COLORS[0] = [220, 220, 220]

def render_box(img, box, color=(200, 200, 200)):
    """
    Render a box. Calculates scaling and thickness automatically.
    :param img: image to render into
    :param box: (x1, y1, x2, y2) - box coordinates
    :param color: (b, g, r) - box color
    :return: updated image
    """
    x1, y1, x2, y2 = box
    thickness = int(
        round(
            (img.shape[0] * img.shape[1])
            / (_LINE_THICKNESS_SCALING * _LINE_THICKNESS_SCALING)
        )
    )
    thickness = max(1, thickness)
    img = cv2.rectangle(
        img,
        (int(x1), int(y1)),
        (int(x2), int(y2)),
        color,
        thickness=thickness
    )
    return img

def render_filled_box(img, box, color=(200, 200, 200)):
    """
    Render a box. Calculates scaling and thickness automatically.
    :param img: image to render into
    :param box: (x1, y1, x2, y2) - box coordinates
    :param color: (b, g, r) - box color
    :return: updated image
    """
    x1, y1, x2, y2 = box
    img = cv2.rectangle(
        img,
        (int(x1), int(y1)),
        (int(x2), int(y2)),
        color,
        thickness=cv2.FILLED
    )
    return img

_TEXT_THICKNESS_SCALING = 700.0
_TEXT_SCALING = 520.0


def get_text_size(img, text, normalised_scaling=1.0):
    """
    Get calculated text size (as box width and height)
    :param img: image reference, used to determine appropriate text scaling
    :param text: text to display
    :param normalised_scaling: additional normalised scaling. Default 1.0.
    :return: (width, height) - width and height of text box
    """
    thickness = int(
        round(
            (img.shape[0] * img.shape[1])
            / (_TEXT_THICKNESS_SCALING * _TEXT_THICKNESS_SCALING)
        )
        * normalised_scaling
    )
    thickness = max(1, thickness)
    scaling = img.shape[0] / _TEXT_SCALING * normalised_scaling
    return cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, scaling, thickness)[0]


def render_text(img, text, pos, color=(200, 200, 200), normalised_scaling=1.0):
    """
    Render a text into the image. Calculates scaling and thickness automatically.
    :param img: image to render into
    :param text: text to display
    :param pos: (x, y) - upper left coordinates of render position
    :param color: (b, g, r) - text color
    :param normalised_scaling: additional normalised scaling. Default 1.0.
    :return: updated image
    """
    x, y = pos
    thickness = int(
        round(
            (img.shape[0] * img.shape[1])
            / (_TEXT_THICKNESS_SCALING * _TEXT_THICKNESS_SCALING)
        )
        * normalised_scaling
    )
    thickness = max(1, thickness)
    scaling = img.shape[0] / _TEXT_SCALING * normalised_scaling
    size = get_text_size(img, text, normalised_scaling)
    cv2.putText(
        img,
        text,
        (int(x), int(y + size[1])),
        cv2.FONT_HERSHEY_SIMPLEX,
        scaling,
        color,
        thickness=thickness,
    )
    return img
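A small sketch of how the helpers above could be called, assuming `render_box` and `render_text` are in scope (for example, imported from this render module); the canvas size and label text are arbitrary:

import numpy as np
import cv2

# Blank 640x480 BGR canvas; draw one box with a label using the helpers above.
canvas = np.zeros((480, 640, 3), dtype=np.uint8)
canvas = render_box(canvas, (50, 60, 220, 200), color=(0, 255, 0))
canvas = render_text(canvas, "person 0.92", (50, 35), color=(0, 255, 0))
cv2.imwrite("annotated.jpg", canvas)  # write the annotated image to disk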
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Factory.js
DELETED
@@ -1,13 +0,0 @@
import Rings from './Rings.js';
import ObjectFactory from '../ObjectFactory.js';
import SetValue from '../../../plugins/utils/object/SetValue.js';

ObjectFactory.register('rings', function (config) {
    var gameObject = new Rings(this.scene, config);
    this.scene.add.existing(gameObject);
    return gameObject;
});

SetValue(window, 'RexPlugins.Spinner.Rings', Rings);

export default Rings;
spaces/AlekseyCalvin/dreambooth-training3/app.py
DELETED
@@ -1,659 +0,0 @@
from subprocess import getoutput
import os

gpu_info = getoutput('nvidia-smi')
if("A10G" in gpu_info):
    which_gpu = "A10G"
    os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
elif("T4" in gpu_info):
    which_gpu = "T4"
    os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
else:
    which_gpu = "CPU"

import gradio as gr
from pathlib import Path
import argparse
import shutil
from train_dreambooth import run_training
from convertosd import convert
from PIL import Image
from slugify import slugify
import requests
import torch
import zipfile
import tarfile
import urllib.parse
import gc
from diffusers import StableDiffusionPipeline
from huggingface_hub import snapshot_download, update_repo_visibility, HfApi

is_spaces = True if "SPACE_ID" in os.environ else False
if(is_spaces):
    is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False
else:
    is_shared_ui = False
is_gpu_associated = torch.cuda.is_available()

css = '''
.instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important}
.arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important}
#component-4, #component-3, #component-10{min-height: 0}
.duplicate-button img{margin: 0}
'''
maximum_concepts = 3

#Pre download the files
if(is_gpu_associated):
    model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable")
    model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1")
    model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base")
    safety_checker = snapshot_download(repo_id="multimodalart/sd-sc")
    model_to_load = model_v1

#with zipfile.ZipFile("mix.zip", 'r') as zip_ref:
#    zip_ref.extractall(".")

def swap_text(option, base):
    resize_width = 768 if base == "v2-1-768" else 512
    mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:"
    if(option == "object"):
        instance_prompt_example = "cttoy"
        freeze_for = 30
        return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/cat-toy.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
    elif(option == "person"):
        instance_prompt_example = "julcto"
        freeze_for = 70
        #show_prior_preservation = True if base != "v2-1-768" else False
        show_prior_preservation=False
        if(show_prior_preservation):
            prior_preservation_box_update = gr.update(visible=show_prior_preservation)
        else:
            prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
        return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/person.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
    elif(option == "style"):
        instance_prompt_example = "trsldamrl"
        freeze_for = 10
        return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/trsl_style.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]

def swap_base_model(selected_model):
    if(is_gpu_associated):
        global model_to_load
        if(selected_model == "v1-5"):
            model_to_load = model_v1
        elif(selected_model == "v2-1-768"):
            model_to_load = model_v2
        else:
            model_to_load = model_v2_512

def count_files(*inputs):
    file_counter = 0
    concept_counter = 0
    for i, input in enumerate(inputs):
        if(i < maximum_concepts-1):
            files = inputs[i]
            if(files):
                concept_counter+=1
                file_counter+=len(files)
    uses_custom = inputs[-1]
    type_of_thing = inputs[-4]
    selected_model = inputs[-5]
    experimental_faces = inputs[-6]
    if(uses_custom):
        Training_Steps = int(inputs[-3])
    else:
        Training_Steps = file_counter*150
        if(type_of_thing == "person" and Training_Steps > 2400):
            Training_Steps = 2400 #Avoid overfitting on person faces
    if(is_spaces):
        if(selected_model == "v1-5"):
            its = 1.1 if which_gpu == "T4" else 1.8
            if(experimental_faces):
                its = 1
        elif(selected_model == "v2-1-512"):
            its = 0.8 if which_gpu == "T4" else 1.5
            if(experimental_faces):
                its = 0.7
        elif(selected_model == "v2-1-768"):
            its = 0.48 if which_gpu == "T4" else 0.85

        gpu_price = 0.60 if which_gpu == "T4" else 1.10
        summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
        The setup, compression and uploading the model can take up to 20 minutes.<br>As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, <span style="font-size: 120%"><b>the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.</b></span><br><br>
        If you check the box below the GPU attribution will automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.<br><br>'''
    else:
        summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.<br><br>'''

    return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])

def update_steps(*files_list):
    file_counter = 0
    for i, files in enumerate(files_list):
        if(files):
            file_counter+=len(files)
    return(gr.update(value=file_counter*200))

def pad_image(image):
    w, h = image.size
    if w == h:
        return image
    elif w > h:
        new_image = Image.new(image.mode, (w, w), (0, 0, 0))
        new_image.paste(image, (0, (w - h) // 2))
        return new_image
    else:
        new_image = Image.new(image.mode, (h, h), (0, 0, 0))
        new_image.paste(image, ((h - w) // 2, 0))
        return new_image

def validate_model_upload(hf_token, model_name):
    if(hf_token != ''):
        api = HfApi()
        try:
            _ = api.whoami(hf_token)
        except:
            raise gr.Error("You have inserted an invalid Hugging Face token")
        try:
            if(is_spaces):
                update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
        except:
            raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
    else:
        raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
    if(model_name == ""):
        raise gr.Error("Please fill in your model's name")

def train(*inputs):
    if is_shared_ui:
        raise gr.Error("This Space only works in duplicated instances")
    if not is_gpu_associated:
        raise gr.Error("Please associate a T4 or A10G GPU for this Space")
    hf_token = inputs[-5]
    model_name = inputs[-7]
    if(is_spaces):
        remove_attribution_after = inputs[-6]
    else:
        remove_attribution_after = False

    if(remove_attribution_after):
        validate_model_upload(hf_token, model_name)

    torch.cuda.empty_cache()
    if 'pipe' in globals():
        global pipe, pipe_is_set
        del pipe
        pipe_is_set = False
        gc.collect()

    if os.path.exists("output_model"): shutil.rmtree('output_model')
    if os.path.exists("instance_images"): shutil.rmtree('instance_images')
    if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
    if os.path.exists("model.ckpt"): os.remove("model.ckpt")
    if os.path.exists("hastrained.success"): os.remove("hastrained.success")
    file_counter = 0
    which_model = inputs[-10]
    resolution = 512 if which_model != "v2-1-768" else 768
    for i, input in enumerate(inputs):
        if(i < maximum_concepts-1):
            if(input):
                os.makedirs('instance_images',exist_ok=True)
                files = inputs[i+(maximum_concepts*2)]
                prompt = inputs[i+maximum_concepts]
                if(prompt == "" or prompt == None):
                    raise gr.Error("You forgot to define your concept prompt")
                for j, file_temp in enumerate(files):
                    file = Image.open(file_temp.name)
                    image = pad_image(file)
                    image = image.resize((resolution, resolution))
                    extension = file_temp.name.split(".")[1]
                    image = image.convert('RGB')
                    image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
                    file_counter += 1

    os.makedirs('output_model',exist_ok=True)
    uses_custom = inputs[-1]
    type_of_thing = inputs[-4]
    experimental_face_improvement = inputs[-9]

    if(uses_custom):
        Training_Steps = int(inputs[-3])
        Train_text_encoder_for = int(inputs[-2])
    else:
        if(type_of_thing == "object"):
            Train_text_encoder_for=30

        elif(type_of_thing == "style"):
            Train_text_encoder_for=15

        elif(type_of_thing == "person"):
            Train_text_encoder_for=70

        Training_Steps = file_counter*150
        if(type_of_thing == "person" and Training_Steps > 2600):
            Training_Steps = 2600 #Avoid overfitting on people's faces
    stptxt = int((Training_Steps*Train_text_encoder_for)/100)
    gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False
    cache_latents = True if which_model != "v1-5" else False
    if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
        args_general = argparse.Namespace(
            image_captions_filename = True,
            train_text_encoder = True if stptxt > 0 else False,
            stop_text_encoder_training = stptxt,
            save_n_steps = 0,
            pretrained_model_name_or_path = model_to_load,
            instance_data_dir="instance_images",
            class_data_dir=None,
            output_dir="output_model",
            instance_prompt="",
            seed=42,
            resolution=resolution,
            mixed_precision="fp16",
            train_batch_size=1,
            gradient_accumulation_steps=1,
            use_8bit_adam=True,
            learning_rate=2e-6,
            lr_scheduler="polynomial",
            lr_warmup_steps = 0,
            max_train_steps=Training_Steps,
            gradient_checkpointing=gradient_checkpointing,
            cache_latents=cache_latents,
        )
        print("Starting single training...")
        lock_file = open("intraining.lock", "w")
        lock_file.close()
        run_training(args_general)
    else:
        args_general = argparse.Namespace(
            image_captions_filename = True,
            train_text_encoder = True if stptxt > 0 else False,
            stop_text_encoder_training = stptxt,
            save_n_steps = 0,
            pretrained_model_name_or_path = model_to_load,
            instance_data_dir="instance_images",
            class_data_dir="Mix",
            output_dir="output_model",
            with_prior_preservation=True,
            prior_loss_weight=1.0,
            instance_prompt="",
            seed=42,
            resolution=resolution,
            mixed_precision="fp16",
            train_batch_size=1,
            gradient_accumulation_steps=1,
            use_8bit_adam=True,
            learning_rate=2e-6,
            lr_scheduler="polynomial",
            lr_warmup_steps = 0,
            max_train_steps=Training_Steps,
            num_class_images=200,
            gradient_checkpointing=gradient_checkpointing,
            cache_latents=cache_latents,
        )
        print("Starting multi-training...")
        lock_file = open("intraining.lock", "w")
        lock_file.close()
        run_training(args_general)
    gc.collect()
    torch.cuda.empty_cache()
    if(which_model == "v1-5"):
        print("Adding Safety Checker to the model...")
        shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor")
        shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker")
        shutil.copy(f"model_index.json", "output_model/model_index.json")

    if(not remove_attribution_after):
        print("Archiving model file...")
        with tarfile.open("diffusers_model.tar", "w") as tar:
            tar.add("output_model", arcname=os.path.basename("output_model"))
        if os.path.exists("intraining.lock"): os.remove("intraining.lock")
        trained_file = open("hastrained.success", "w")
        trained_file.close()
        print("Training completed!")
        return [
            gr.update(visible=True, value=["diffusers_model.tar"]), #result
            gr.update(visible=True), #try_your_model
            gr.update(visible=True), #push_to_hub
            gr.update(visible=True), #convert_button
            gr.update(visible=False), #training_ongoing
            gr.update(visible=True) #completed_training
        ]
    else:
        where_to_upload = inputs[-8]
        push(model_name, where_to_upload, hf_token, which_model, True)
        hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
        headers = { "authorization" : f"Bearer {hf_token}"}
        body = {'flavor': 'cpu-basic'}
        requests.post(hardware_url, json = body, headers=headers)

pipe_is_set = False
def generate(prompt, steps):
    torch.cuda.empty_cache()
    from diffusers import StableDiffusionPipeline
    global pipe_is_set
    if(not pipe_is_set):
        global pipe
        pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
        pipe = pipe.to("cuda")
        pipe_is_set = True

    image = pipe(prompt, num_inference_steps=steps).images[0]
    return(image)

def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
    validate_model_upload(hf_token, model_name)
    if(not os.path.exists("model.ckpt")):
        convert("output_model", "model.ckpt")
    from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
    from huggingface_hub import create_repo
    model_name_slug = slugify(model_name)
    api = HfApi()
    your_username = api.whoami(token=hf_token)["name"]
    if(where_to_upload == "My personal profile"):
        model_id = f"{your_username}/{model_name_slug}"
    else:
        model_id = f"sd-dreambooth-library/{model_name_slug}"
        headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
        response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)

    print(f"Starting to upload the model {model_id}...")
    images_upload = os.listdir("instance_images")
    image_string = ""
    instance_prompt_list = []
    previous_instance_prompt = ''
    for i, image in enumerate(images_upload):
        instance_prompt = image.split("_")[0]
        if(instance_prompt != previous_instance_prompt):
            title_instance_prompt_string = instance_prompt
            instance_prompt_list.append(instance_prompt)
        else:
            title_instance_prompt_string = ''
        previous_instance_prompt = instance_prompt
        image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
{image_string}})'''
    readme_text = f'''---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: {instance_prompt_list[0]}
---
### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model

You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of:
{image_string}
'''
    #Save the readme to a file
    readme_file = open("model.README.md", "w")
    readme_file.write(readme_text)
    readme_file.close()
    #Save the token identifier to a file
    text_file = open("token_identifier.txt", "w")
    text_file.write(', '.join(instance_prompt_list))
    text_file.close()
    try:
        create_repo(model_id,private=True, token=hf_token)
    except:
        import time
        epoch_time = str(int(time.time()))
        create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
    operations = [
        CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
        CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
        CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
    ]
    api.create_commit(
        repo_id=model_id,
        operations=operations,
        commit_message=f"Upload the model {model_name}",
        token=hf_token
    )
    api.upload_folder(
        folder_path="output_model",
        repo_id=model_id,
        token=hf_token
    )
    api.upload_folder(
        folder_path="instance_images",
        path_in_repo="concept_images",
        repo_id=model_id,
        token=hf_token
    )
    if is_spaces:
        if(not comes_from_automated):
            extra_message = "Don't forget to remove the GPU attribution after you play with it."
        else:
            extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
        api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
    print("Model uploaded successfully!")
    return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]

def convert_to_ckpt():
    if 'pipe' in globals():
        global pipe, pipe_is_set
        del pipe
        pipe_is_set = False
        gc.collect()
    convert("output_model", "model.ckpt")
    return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])

def check_status(top_description):
    if os.path.exists("hastrained.success"):
        if is_spaces:
            update_top_tag = gr.update(value=f'''
            <div class="gr-prose" style="max-width: 80%">
            <h2>Your model has finished training ✅</h2>
            <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub). Once you are done, your model is safe, and you don't want to train a new one, go to the <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}" target="_blank">settings page</a> and downgrade your Space to a CPU Basic</p>
            </div>
            ''')
        else:
            update_top_tag = gr.update(value=f'''
            <div class="gr-prose" style="max-width: 80%">
            <h2>Your model has finished training ✅</h2>
            <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub).</p>
            </div>
            ''')
        show_outputs = True
    elif os.path.exists("intraining.lock"):
        update_top_tag = gr.update(value='''
        <div class="gr-prose" style="max-width: 80%">
        <h2>Don't worry, your model is still training! ⌛</h2>
        <p>You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above here to check the training status. Once training is done, reload this tab to interact with your model</p>
        </div>
        ''')
        show_outputs = False
    else:
        update_top_tag = gr.update(value=top_description)
        show_outputs = False
    if os.path.exists("diffusers_model.tar"):
        update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"])
    else:
        update_files_tag = gr.update(visible=show_outputs)
    return [
        update_top_tag, #top_description
        gr.update(visible=show_outputs), #try_your_model
        gr.update(visible=show_outputs), #push_to_hub
        update_files_tag, #result
        gr.update(visible=show_outputs), #convert_button
    ]

def checkbox_swap(checkbox):
    return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)]

with gr.Blocks(css=css) as demo:
    with gr.Box():
        if is_shared_ui:
            top_description = gr.HTML(f'''
            <div class="gr-prose" style="max-width: 80%">
            <h2>Attention - This Space doesn't work in this shared UI</h2>
            <p>For it to work, you can either run locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it! <a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p>
            <img class="instruction" src="file/duplicate.png">
            <img class="arrow" src="file/arrow.png" />
            </div>
            ''')
        elif(is_spaces):
            if(is_gpu_associated):
                top_description = gr.HTML(f'''
                <div class="gr-prose" style="max-width: 80%">
                <h2>You have successfully associated a {which_gpu} GPU to the Dreambooth Training Space 🎉</h2>
                <p>You can now train your model! You will be billed by the minute from when you activated the GPU until when it is turned it off.</p>
                </div>
                ''')
            else:
                top_description = gr.HTML(f'''
                <div class="gr-prose" style="max-width: 80%">
                <h2>You have successfully duplicated the Dreambooth Training Space 🎉</h2>
                <p>There's only one step left before you can train your model: <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}/settings" style="text-decoration: underline" target="_blank">attribute a <b>T4-small or A10G-small GPU</b> to it (via the Settings tab)</a> and run the training below. You will be billed by the minute from when you activate the GPU until when it is turned it off.</p>
                </div>
                ''')
        else:
            top_description = gr.HTML(f'''
            <div class="gr-prose" style="max-width: 80%">
            <h2>You have successfully cloned the Dreambooth Training Space locally 🎉</h2>
            <p>Do a <code>pip install requirements-local.txt</code></p>
            </div>
            ''')
    gr.Markdown("# Dreambooth Training UI 💭")
    gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)")

    with gr.Row() as what_are_you_training:
        type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True)
        base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True)

    #Very hacky approach to emulate dynamically created Gradio components
    with gr.Row() as upload_your_concept:
        with gr.Column():
            thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example")
            thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False)
            thing_image_example = gr.HTML('''<img src="file/cat-toy.png" />''')
            things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")

        with gr.Column():
            file_collection = []
            concept_collection = []
            buttons_collection = []
            delete_collection = []
            is_visible = []

            row = [None] * maximum_concepts
            for x in range(maximum_concepts):
                ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
                if(x == 0):
                    visible = True
                    is_visible.append(gr.State(value=True))
                else:
                    visible = False
                    is_visible.append(gr.State(value=False))

                file_collection.append(gr.File(label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
                with gr.Column(visible=visible) as row[x]:
                    concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
                    with gr.Row():
                        if(x < maximum_concepts-1):
                            buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
                        if(x > 0):
                            delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))

            counter_add = 1
            for button in buttons_collection:
                if(counter_add < len(buttons_collection)):
                    button.click(lambda:
                    [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
                    None,
                    [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
                else:
                    button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
                counter_add += 1

            counter_delete = 1
            for delete_button in delete_collection:
                if(counter_delete < len(delete_collection)+1):
                    delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
                counter_delete += 1

    with gr.Accordion("Custom Settings", open=False):
        swap_auto_calculated = gr.Checkbox(label="Use custom settings")
        gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
        steps = gr.Number(label="How many steps", value=2400)
        perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)

    with gr.Box(visible=False) as training_summary:
        training_summary_text = gr.HTML("", visible=True, label="Training Summary")
        is_advanced_visible = True if is_spaces else False
        training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
        training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
        training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
        training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
        training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)

    train_btn = gr.Button("Start Training")
    if(is_shared_ui):
        training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
    elif(not is_gpu_associated):
        training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
    else:
        training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)

    #Post-training UI
    completed_training = gr.Markdown('''# ✅ Training completed.
    ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)

    with gr.Row():
        with gr.Box(visible=False) as try_your_model:
            gr.Markdown("## Try your model")
            prompt = gr.Textbox(label="Type your prompt")
            result_image = gr.Image()
            inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
            generate_button = gr.Button("Generate Image")

        with gr.Box(visible=False) as push_to_hub:
            gr.Markdown("## Push to Hugging Face Hub")
            model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
            where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
            gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
            hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")

            push_button = gr.Button("Push to the Hub")

    result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
    success_message_upload = gr.Markdown(visible=False)
    convert_button = gr.Button("Convert to CKPT", visible=False)

    #Swap the examples and the % of text encoder trained depending if it is an object, person or style
    type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)

    #Swap the base model
    base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
    base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])

    #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
    for file in file_collection:
        #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
        file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)

    thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
    base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
    steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
    perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)

    #Give more options if the user wants to finish everything after training
    if(is_spaces):
        training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
    #Add a message for while it is in training
    train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)

    #The main train function
    train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)

    #Button to generate an image from your trained model after training
    generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
    #Button to push the model to the Hugging Face Hub
    push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
    #Button to convert the model to ckpt format
    convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)

    #Checks if the training is running
    demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)

demo.queue(default_enabled=False).launch(debug=True)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/distributed_inference.md
DELETED
@@ -1,91 +0,0 @@
# Distributed inference with multiple GPUs

On distributed setups, you can run inference across multiple GPUs with 🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) or [PyTorch Distributed](https://pytorch.org/tutorials/beginner/dist_overview.html), which is useful for generating with multiple prompts in parallel.

This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference.

## 🤗 Accelerate

🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code.

To begin, create a Python file and initialize an [`accelerate.PartialState`] to create a distributed environment; your setup is automatically detected so you don't need to explicitly define the `rank` or `world_size`. Move the [`DiffusionPipeline`] to `distributed_state.device` to assign a GPU to each process.

Now use the [`~accelerate.PartialState.split_between_processes`] utility as a context manager to automatically distribute the prompts between the number of processes.

```py
from accelerate import PartialState
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
distributed_state = PartialState()
pipeline.to(distributed_state.device)

with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```

Use the `--num_processes` argument to specify the number of GPUs to use, and call `accelerate launch` to run the script:

```bash
accelerate launch run_distributed.py --num_processes=2
```

<Tip>

To learn more, take a look at the [Distributed Inference with 🤗 Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide.

</Tip>

## PyTorch Distributed

PyTorch supports [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) which enables data parallelism.

To start, create a Python file and import `torch.distributed` and `torch.multiprocessing` to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a [`DiffusionPipeline`]:

```py
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from diffusers import DiffusionPipeline

sd = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
```

You'll want to create a function to run inference; [`init_process_group`](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group) handles creating a distributed environment with the type of backend to use, the `rank` of the current process, and the `world_size` or the number of processes participating. If you're running inference in parallel over 2 GPUs, then the `world_size` is 2.

Move the [`DiffusionPipeline`] to `rank` and use `get_rank` to assign a GPU to each process, where each process handles a different prompt:

```py
def run_inference(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    sd.to(rank)

    if torch.distributed.get_rank() == 0:
        prompt = "a dog"
    elif torch.distributed.get_rank() == 1:
        prompt = "a cat"

    image = sd(prompt).images[0]
    image.save(f"./{'_'.join(prompt)}.png")
```

To run the distributed inference, call [`mp.spawn`](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) to run the `run_inference` function on the number of GPUs defined in `world_size`:

```py
def main():
    world_size = 2
    mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)


if __name__ == "__main__":
    main()
```

Once you've completed the inference script, use the `--nproc_per_node` argument to specify the number of GPUs to use and call `torchrun` to run the script:

```bash
torchrun run_distributed.py --nproc_per_node=2
```
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/__init__.py
DELETED
@@ -1,23 +0,0 @@
-from ...utils import (
-    OptionalDependencyNotAvailable,
-    is_torch_available,
-    is_transformers_available,
-)
-
-
-try:
-    if not (is_transformers_available() and is_torch_available()):
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    from ...utils.dummy_torch_and_transformers_objects import *
-else:
-    from .pipeline_kandinsky import KandinskyPipeline
-    from .pipeline_kandinsky_combined import (
-        KandinskyCombinedPipeline,
-        KandinskyImg2ImgCombinedPipeline,
-        KandinskyInpaintCombinedPipeline,
-    )
-    from .pipeline_kandinsky_img2img import KandinskyImg2ImgPipeline
-    from .pipeline_kandinsky_inpaint import KandinskyInpaintPipeline
-    from .pipeline_kandinsky_prior import KandinskyPriorPipeline, KandinskyPriorPipelineOutput
-    from .text_encoder import MultilingualCLIP
|
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/__init__.py
DELETED
@@ -1,92 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from ..utils import (
-    OptionalDependencyNotAvailable,
-    is_flax_available,
-    is_scipy_available,
-    is_torch_available,
-    is_torchsde_available,
-)
-
-
-try:
-    if not is_torch_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    from ..utils.dummy_pt_objects import *  # noqa F403
-else:
-    from .scheduling_consistency_models import CMStochasticIterativeScheduler
-    from .scheduling_ddim import DDIMScheduler
-    from .scheduling_ddim_inverse import DDIMInverseScheduler
-    from .scheduling_ddim_parallel import DDIMParallelScheduler
-    from .scheduling_ddpm import DDPMScheduler
-    from .scheduling_ddpm_parallel import DDPMParallelScheduler
-    from .scheduling_deis_multistep import DEISMultistepScheduler
-    from .scheduling_dpmsolver_multistep import DPMSolverMultistepScheduler
-    from .scheduling_dpmsolver_multistep_inverse import DPMSolverMultistepInverseScheduler
-    from .scheduling_dpmsolver_singlestep import DPMSolverSinglestepScheduler
-    from .scheduling_euler_ancestral_discrete import EulerAncestralDiscreteScheduler
-    from .scheduling_euler_discrete import EulerDiscreteScheduler
-    from .scheduling_heun_discrete import HeunDiscreteScheduler
-    from .scheduling_ipndm import IPNDMScheduler
-    from .scheduling_k_dpm_2_ancestral_discrete import KDPM2AncestralDiscreteScheduler
-    from .scheduling_k_dpm_2_discrete import KDPM2DiscreteScheduler
-    from .scheduling_karras_ve import KarrasVeScheduler
-    from .scheduling_pndm import PNDMScheduler
-    from .scheduling_repaint import RePaintScheduler
-    from .scheduling_sde_ve import ScoreSdeVeScheduler
-    from .scheduling_sde_vp import ScoreSdeVpScheduler
-    from .scheduling_unclip import UnCLIPScheduler
-    from .scheduling_unipc_multistep import UniPCMultistepScheduler
-    from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
-    from .scheduling_vq_diffusion import VQDiffusionScheduler
-
-try:
-    if not is_flax_available():
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    from ..utils.dummy_flax_objects import *  # noqa F403
-else:
-    from .scheduling_ddim_flax import FlaxDDIMScheduler
-    from .scheduling_ddpm_flax import FlaxDDPMScheduler
-    from .scheduling_dpmsolver_multistep_flax import FlaxDPMSolverMultistepScheduler
-    from .scheduling_karras_ve_flax import FlaxKarrasVeScheduler
-    from .scheduling_lms_discrete_flax import FlaxLMSDiscreteScheduler
-    from .scheduling_pndm_flax import FlaxPNDMScheduler
-    from .scheduling_sde_ve_flax import FlaxScoreSdeVeScheduler
-    from .scheduling_utils_flax import (
-        FlaxKarrasDiffusionSchedulers,
-        FlaxSchedulerMixin,
-        FlaxSchedulerOutput,
-        broadcast_to_shape_from_left,
-    )
-
-
-try:
-    if not (is_torch_available() and is_scipy_available()):
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    from ..utils.dummy_torch_and_scipy_objects import *  # noqa F403
-else:
-    from .scheduling_lms_discrete import LMSDiscreteScheduler
-
-try:
-    if not (is_torch_available() and is_torchsde_available()):
-        raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
-    from ..utils.dummy_torch_and_torchsde_objects import *  # noqa F403
-else:
-    from .scheduling_dpmsolver_sde import DPMSolverSDEScheduler
|
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/repaint/test_repaint.py
DELETED
@@ -1,169 +0,0 @@
|
|
1 |
-
# coding=utf-8
|
2 |
-
# Copyright 2023 HuggingFace Inc.
|
3 |
-
#
|
4 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
-
# you may not use this file except in compliance with the License.
|
6 |
-
# You may obtain a copy of the License at
|
7 |
-
#
|
8 |
-
# http://www.apache.org/licenses/LICENSE-2.0
|
9 |
-
#
|
10 |
-
# Unless required by applicable law or agreed to in writing, software
|
11 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
-
# See the License for the specific language governing permissions and
|
14 |
-
# limitations under the License.
|
15 |
-
|
16 |
-
import gc
|
17 |
-
import unittest
|
18 |
-
|
19 |
-
import numpy as np
|
20 |
-
import torch
|
21 |
-
|
22 |
-
from diffusers import RePaintPipeline, RePaintScheduler, UNet2DModel
|
23 |
-
from diffusers.utils.testing_utils import (
|
24 |
-
enable_full_determinism,
|
25 |
-
load_image,
|
26 |
-
load_numpy,
|
27 |
-
nightly,
|
28 |
-
require_torch_gpu,
|
29 |
-
skip_mps,
|
30 |
-
torch_device,
|
31 |
-
)
|
32 |
-
|
33 |
-
from ..pipeline_params import IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_INPAINTING_PARAMS
|
34 |
-
from ..test_pipelines_common import PipelineTesterMixin
|
35 |
-
|
36 |
-
|
37 |
-
enable_full_determinism()
|
38 |
-
|
39 |
-
|
40 |
-
class RepaintPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
|
41 |
-
pipeline_class = RePaintPipeline
|
42 |
-
params = IMAGE_INPAINTING_PARAMS - {"width", "height", "guidance_scale"}
|
43 |
-
required_optional_params = PipelineTesterMixin.required_optional_params - {
|
44 |
-
"latents",
|
45 |
-
"num_images_per_prompt",
|
46 |
-
"callback",
|
47 |
-
"callback_steps",
|
48 |
-
}
|
49 |
-
batch_params = IMAGE_INPAINTING_BATCH_PARAMS
|
50 |
-
|
51 |
-
def get_dummy_components(self):
|
52 |
-
torch.manual_seed(0)
|
53 |
-
torch.manual_seed(0)
|
54 |
-
unet = UNet2DModel(
|
55 |
-
block_out_channels=(32, 64),
|
56 |
-
layers_per_block=2,
|
57 |
-
sample_size=32,
|
58 |
-
in_channels=3,
|
59 |
-
out_channels=3,
|
60 |
-
down_block_types=("DownBlock2D", "AttnDownBlock2D"),
|
61 |
-
up_block_types=("AttnUpBlock2D", "UpBlock2D"),
|
62 |
-
)
|
63 |
-
scheduler = RePaintScheduler()
|
64 |
-
components = {"unet": unet, "scheduler": scheduler}
|
65 |
-
return components
|
66 |
-
|
67 |
-
def get_dummy_inputs(self, device, seed=0):
|
68 |
-
if str(device).startswith("mps"):
|
69 |
-
generator = torch.manual_seed(seed)
|
70 |
-
else:
|
71 |
-
generator = torch.Generator(device=device).manual_seed(seed)
|
72 |
-
image = np.random.RandomState(seed).standard_normal((1, 3, 32, 32))
|
73 |
-
image = torch.from_numpy(image).to(device=device, dtype=torch.float32)
|
74 |
-
mask = (image > 0).to(device=device, dtype=torch.float32)
|
75 |
-
inputs = {
|
76 |
-
"image": image,
|
77 |
-
"mask_image": mask,
|
78 |
-
"generator": generator,
|
79 |
-
"num_inference_steps": 5,
|
80 |
-
"eta": 0.0,
|
81 |
-
"jump_length": 2,
|
82 |
-
"jump_n_sample": 2,
|
83 |
-
"output_type": "numpy",
|
84 |
-
}
|
85 |
-
return inputs
|
86 |
-
|
87 |
-
def test_repaint(self):
|
88 |
-
device = "cpu" # ensure determinism for the device-dependent torch.Generator
|
89 |
-
components = self.get_dummy_components()
|
90 |
-
sd_pipe = RePaintPipeline(**components)
|
91 |
-
sd_pipe = sd_pipe.to(device)
|
92 |
-
sd_pipe.set_progress_bar_config(disable=None)
|
93 |
-
|
94 |
-
inputs = self.get_dummy_inputs(device)
|
95 |
-
image = sd_pipe(**inputs).images
|
96 |
-
image_slice = image[0, -3:, -3:, -1]
|
97 |
-
|
98 |
-
assert image.shape == (1, 32, 32, 3)
|
99 |
-
expected_slice = np.array([1.0000, 0.5426, 0.5497, 0.2200, 1.0000, 1.0000, 0.5623, 1.0000, 0.6274])
|
100 |
-
|
101 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
|
102 |
-
|
103 |
-
@skip_mps
|
104 |
-
def test_save_load_local(self):
|
105 |
-
return super().test_save_load_local()
|
106 |
-
|
107 |
-
# RePaint can hardly be made deterministic since the scheduler is currently always
|
108 |
-
# nondeterministic
|
109 |
-
@unittest.skip("non-deterministic pipeline")
|
110 |
-
def test_inference_batch_single_identical(self):
|
111 |
-
return super().test_inference_batch_single_identical()
|
112 |
-
|
113 |
-
@skip_mps
|
114 |
-
def test_dict_tuple_outputs_equivalent(self):
|
115 |
-
return super().test_dict_tuple_outputs_equivalent()
|
116 |
-
|
117 |
-
@skip_mps
|
118 |
-
def test_save_load_optional_components(self):
|
119 |
-
return super().test_save_load_optional_components()
|
120 |
-
|
121 |
-
@skip_mps
|
122 |
-
def test_attention_slicing_forward_pass(self):
|
123 |
-
return super().test_attention_slicing_forward_pass()
|
124 |
-
|
125 |
-
|
126 |
-
@nightly
|
127 |
-
@require_torch_gpu
|
128 |
-
class RepaintPipelineNightlyTests(unittest.TestCase):
|
129 |
-
def tearDown(self):
|
130 |
-
super().tearDown()
|
131 |
-
gc.collect()
|
132 |
-
torch.cuda.empty_cache()
|
133 |
-
|
134 |
-
def test_celebahq(self):
|
135 |
-
original_image = load_image(
|
136 |
-
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
|
137 |
-
"repaint/celeba_hq_256.png"
|
138 |
-
)
|
139 |
-
mask_image = load_image(
|
140 |
-
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
|
141 |
-
)
|
142 |
-
expected_image = load_numpy(
|
143 |
-
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
|
144 |
-
"repaint/celeba_hq_256_result.npy"
|
145 |
-
)
|
146 |
-
|
147 |
-
model_id = "google/ddpm-ema-celebahq-256"
|
148 |
-
unet = UNet2DModel.from_pretrained(model_id)
|
149 |
-
scheduler = RePaintScheduler.from_pretrained(model_id)
|
150 |
-
|
151 |
-
repaint = RePaintPipeline(unet=unet, scheduler=scheduler).to(torch_device)
|
152 |
-
repaint.set_progress_bar_config(disable=None)
|
153 |
-
repaint.enable_attention_slicing()
|
154 |
-
|
155 |
-
generator = torch.manual_seed(0)
|
156 |
-
output = repaint(
|
157 |
-
original_image,
|
158 |
-
mask_image,
|
159 |
-
num_inference_steps=250,
|
160 |
-
eta=0.0,
|
161 |
-
jump_length=10,
|
162 |
-
jump_n_sample=10,
|
163 |
-
generator=generator,
|
164 |
-
output_type="np",
|
165 |
-
)
|
166 |
-
image = output.images[0]
|
167 |
-
|
168 |
-
assert image.shape == (256, 256, 3)
|
169 |
-
assert np.abs(expected_image - image).mean() < 1e-2
|
spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_x101_64x4d_fpn_1x_coco.py
DELETED
@@ -1,13 +0,0 @@
-_base_ = './rpn_r50_fpn_1x_coco.py'
-model = dict(
-    pretrained='open-mmlab://resnext101_64x4d',
-    backbone=dict(
-        type='ResNeXt',
-        depth=101,
-        groups=64,
-        base_width=4,
-        num_stages=4,
-        out_indices=(0, 1, 2, 3),
-        frozen_stages=1,
-        norm_cfg=dict(type='BN', requires_grad=True),
-        style='pytorch'))
|
spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/builder.py
DELETED
@@ -1,8 +0,0 @@
-from mmcv.utils import Registry, build_from_cfg
-
-IOU_CALCULATORS = Registry('IoU calculator')
-
-
-def build_iou_calculator(cfg, default_args=None):
-    """Builder of IoU calculator."""
-    return build_from_cfg(cfg, IOU_CALCULATORS, default_args)
|
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/elevenlabs_tts/script.py
DELETED
@@ -1,197 +0,0 @@
|
|
1 |
-
import html
|
2 |
-
import re
|
3 |
-
from pathlib import Path
|
4 |
-
|
5 |
-
import elevenlabs
|
6 |
-
import gradio as gr
|
7 |
-
|
8 |
-
from modules import chat, shared, ui_chat
|
9 |
-
from modules.logging_colors import logger
|
10 |
-
from modules.utils import gradio
|
11 |
-
|
12 |
-
params = {
|
13 |
-
'activate': True,
|
14 |
-
'api_key': None,
|
15 |
-
'selected_voice': 'None',
|
16 |
-
'autoplay': False,
|
17 |
-
'show_text': True,
|
18 |
-
'model': 'eleven_monolingual_v1',
|
19 |
-
}
|
20 |
-
|
21 |
-
voices = None
|
22 |
-
wav_idx = 0
|
23 |
-
LANG_MODELS = ['eleven_monolingual_v1', 'eleven_multilingual_v1']
|
24 |
-
|
25 |
-
|
26 |
-
def update_api_key(key):
|
27 |
-
params['api_key'] = key
|
28 |
-
if key is not None:
|
29 |
-
elevenlabs.set_api_key(key)
|
30 |
-
|
31 |
-
|
32 |
-
def refresh_voices():
|
33 |
-
global params
|
34 |
-
your_voices = elevenlabs.voices()
|
35 |
-
voice_names = [voice.name for voice in your_voices]
|
36 |
-
return voice_names
|
37 |
-
|
38 |
-
|
39 |
-
def refresh_voices_dd():
|
40 |
-
all_voices = refresh_voices()
|
41 |
-
return gr.Dropdown.update(value=all_voices[0], choices=all_voices)
|
42 |
-
|
43 |
-
|
44 |
-
def remove_tts_from_history(history):
|
45 |
-
for i, entry in enumerate(history['internal']):
|
46 |
-
history['visible'][i] = [history['visible'][i][0], entry[1]]
|
47 |
-
|
48 |
-
return history
|
49 |
-
|
50 |
-
|
51 |
-
def toggle_text_in_history(history):
|
52 |
-
for i, entry in enumerate(history['visible']):
|
53 |
-
visible_reply = entry[1]
|
54 |
-
if visible_reply.startswith('<audio'):
|
55 |
-
if params['show_text']:
|
56 |
-
reply = history['internal'][i][1]
|
57 |
-
history['visible'][i] = [history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>\n\n{reply}"]
|
58 |
-
else:
|
59 |
-
history['visible'][i] = [history['visible'][i][0], f"{visible_reply.split('</audio>')[0]}</audio>"]
|
60 |
-
|
61 |
-
return history
|
62 |
-
|
63 |
-
|
64 |
-
def remove_surrounded_chars(string):
|
65 |
-
# this expression matches to 'as few symbols as possible (0 upwards) between any asterisks' OR
|
66 |
-
# 'as few symbols as possible (0 upwards) between an asterisk and the end of the string'
|
67 |
-
return re.sub('\*[^\*]*?(\*|$)', '', string)
|
68 |
-
|
69 |
-
|
70 |
-
def state_modifier(state):
|
71 |
-
if not params['activate']:
|
72 |
-
return state
|
73 |
-
|
74 |
-
state['stream'] = False
|
75 |
-
return state
|
76 |
-
|
77 |
-
|
78 |
-
def input_modifier(string):
|
79 |
-
if not params['activate']:
|
80 |
-
return string
|
81 |
-
|
82 |
-
shared.processing_message = "*Is recording a voice message...*"
|
83 |
-
return string
|
84 |
-
|
85 |
-
|
86 |
-
def history_modifier(history):
|
87 |
-
# Remove autoplay from the last reply
|
88 |
-
if len(history['internal']) > 0:
|
89 |
-
history['visible'][-1] = [
|
90 |
-
history['visible'][-1][0],
|
91 |
-
history['visible'][-1][1].replace('controls autoplay>', 'controls>')
|
92 |
-
]
|
93 |
-
|
94 |
-
return history
|
95 |
-
|
96 |
-
|
97 |
-
def output_modifier(string):
|
98 |
-
global params, wav_idx
|
99 |
-
|
100 |
-
if not params['activate']:
|
101 |
-
return string
|
102 |
-
|
103 |
-
original_string = string
|
104 |
-
string = remove_surrounded_chars(string)
|
105 |
-
string = string.replace('"', '')
|
106 |
-
string = string.replace('“', '')
|
107 |
-
string = string.replace('\n', ' ')
|
108 |
-
string = string.strip()
|
109 |
-
if string == '':
|
110 |
-
string = 'empty reply, try regenerating'
|
111 |
-
|
112 |
-
output_file = Path(f'extensions/elevenlabs_tts/outputs/{wav_idx:06d}.mp3'.format(wav_idx))
|
113 |
-
print(f'Outputting audio to {str(output_file)}')
|
114 |
-
try:
|
115 |
-
audio = elevenlabs.generate(text=html.unescape(string), voice=params['selected_voice'], model=params['model'])
|
116 |
-
elevenlabs.save(audio, str(output_file))
|
117 |
-
|
118 |
-
autoplay = 'autoplay' if params['autoplay'] else ''
|
119 |
-
string = f'<audio src="file/{output_file.as_posix()}" controls {autoplay}></audio>'
|
120 |
-
wav_idx += 1
|
121 |
-
except elevenlabs.api.error.UnauthenticatedRateLimitError:
|
122 |
-
string = "🤖 ElevenLabs Unauthenticated Rate Limit Reached - Please create an API key to continue\n\n"
|
123 |
-
except elevenlabs.api.error.RateLimitError:
|
124 |
-
string = "🤖 ElevenLabs API Tier Limit Reached\n\n"
|
125 |
-
except elevenlabs.api.error.APIError as err:
|
126 |
-
string = f"🤖 ElevenLabs Error: {err}\n\n"
|
127 |
-
|
128 |
-
if params['show_text']:
|
129 |
-
string += f'\n\n{original_string}'
|
130 |
-
|
131 |
-
shared.processing_message = "*Is typing...*"
|
132 |
-
return string
|
133 |
-
|
134 |
-
|
135 |
-
def ui():
|
136 |
-
global voices
|
137 |
-
if not voices:
|
138 |
-
voices = refresh_voices()
|
139 |
-
selected = params['selected_voice']
|
140 |
-
if selected == 'None':
|
141 |
-
params['selected_voice'] = voices[0]
|
142 |
-
elif selected not in voices:
|
143 |
-
logger.error(f'Selected voice {selected} not available, switching to {voices[0]}')
|
144 |
-
params['selected_voice'] = voices[0]
|
145 |
-
|
146 |
-
# Gradio elements
|
147 |
-
with gr.Row():
|
148 |
-
activate = gr.Checkbox(value=params['activate'], label='Activate TTS')
|
149 |
-
autoplay = gr.Checkbox(value=params['autoplay'], label='Play TTS automatically')
|
150 |
-
show_text = gr.Checkbox(value=params['show_text'], label='Show message text under audio player')
|
151 |
-
|
152 |
-
with gr.Row():
|
153 |
-
voice = gr.Dropdown(value=params['selected_voice'], choices=voices, label='TTS Voice')
|
154 |
-
refresh = gr.Button(value='Refresh')
|
155 |
-
|
156 |
-
with gr.Row():
|
157 |
-
if params['api_key']:
|
158 |
-
api_key = gr.Textbox(value=params['api_key'], label='API Key')
|
159 |
-
update_api_key(params['api_key'])
|
160 |
-
else:
|
161 |
-
api_key = gr.Textbox(placeholder="Enter your API key.", label='API Key')
|
162 |
-
|
163 |
-
with gr.Row():
|
164 |
-
model = gr.Dropdown(value=params['model'], choices=LANG_MODELS, label='Language model')
|
165 |
-
|
166 |
-
with gr.Row():
|
167 |
-
convert = gr.Button('Permanently replace audios with the message texts')
|
168 |
-
convert_cancel = gr.Button('Cancel', visible=False)
|
169 |
-
convert_confirm = gr.Button('Confirm (cannot be undone)', variant="stop", visible=False)
|
170 |
-
|
171 |
-
# Convert history with confirmation
|
172 |
-
convert_arr = [convert_confirm, convert, convert_cancel]
|
173 |
-
convert.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, convert_arr)
|
174 |
-
convert_confirm.click(
|
175 |
-
lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr).then(
|
176 |
-
remove_tts_from_history, gradio('history'), gradio('history')).then(
|
177 |
-
chat.save_history, gradio('history', 'unique_id', 'character_menu', 'mode'), None).then(
|
178 |
-
chat.redraw_html, gradio(ui_chat.reload_arr), gradio('display'))
|
179 |
-
|
180 |
-
convert_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, convert_arr)
|
181 |
-
|
182 |
-
# Toggle message text in history
|
183 |
-
show_text.change(
|
184 |
-
lambda x: params.update({"show_text": x}), show_text, None).then(
|
185 |
-
toggle_text_in_history, gradio('history'), gradio('history')).then(
|
186 |
-
chat.save_history, gradio('history', 'unique_id', 'character_menu', 'mode'), None).then(
|
187 |
-
chat.redraw_html, gradio(ui_chat.reload_arr), gradio('display'))
|
188 |
-
|
189 |
-
# Event functions to update the parameters in the backend
|
190 |
-
activate.change(lambda x: params.update({'activate': x}), activate, None)
|
191 |
-
voice.change(lambda x: params.update({'selected_voice': x}), voice, None)
|
192 |
-
api_key.change(update_api_key, api_key, None)
|
193 |
-
model.change(lambda x: params.update({'model': x}), model, None)
|
194 |
-
# connect.click(check_valid_api, [], connection_status)
|
195 |
-
refresh.click(refresh_voices_dd, [], voice)
|
196 |
-
# Event functions to update the parameters in the backend
|
197 |
-
autoplay.change(lambda x: params.update({"autoplay": x}), autoplay, None)
|
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/util/task.py
DELETED
@@ -1,120 +0,0 @@
|
|
1 |
-
import torch
|
2 |
-
import torch.nn.functional as F
|
3 |
-
import torchvision.transforms as transforms
|
4 |
-
import numpy as np
|
5 |
-
import cv2
|
6 |
-
from PIL import Image
|
7 |
-
import random
|
8 |
-
|
9 |
-
|
10 |
-
###################################################################
|
11 |
-
# random mask generation
|
12 |
-
###################################################################
|
13 |
-
def random_regular_mask(img):
|
14 |
-
"""Generate a random regular mask
|
15 |
-
:param img: original image size C*H*W
|
16 |
-
:return: mask
|
17 |
-
"""
|
18 |
-
mask = torch.ones_like(img)[0:1, :, :]
|
19 |
-
s = img.size()
|
20 |
-
N_mask = random.randint(1, 5)
|
21 |
-
lim_x = s[1] - s[1] / (N_mask + 1)
|
22 |
-
lim_y = s[2] - s[2] / (N_mask + 1)
|
23 |
-
for _ in range(N_mask):
|
24 |
-
x = random.randint(0, int(lim_x))
|
25 |
-
y = random.randint(0, int(lim_y))
|
26 |
-
range_x = x + random.randint(int(s[1] / (N_mask + 7)), min(int(s[1] - x), int(s[1] / 2)))
|
27 |
-
range_y = y + random.randint(int(s[2] / (N_mask + 7)), min(int(s[2] - y), int(s[2] / 2)))
|
28 |
-
mask[:, int(x) : int(range_x), int(y) : int(range_y)] = 0
|
29 |
-
return mask
|
30 |
-
|
31 |
-
|
32 |
-
def center_mask(img):
|
33 |
-
"""Generate a center hole with 1/4*W and 1/4*H
|
34 |
-
:param img: original image size C*H*W
|
35 |
-
:return: mask
|
36 |
-
"""
|
37 |
-
mask = torch.ones_like(img)[0:1, :, :]
|
38 |
-
s = img.size()
|
39 |
-
mask[:, int(s[1]/4):int(s[1]*3/4), int(s[2]/4):int(s[2]*3/4)] = 0
|
40 |
-
return mask
|
41 |
-
|
42 |
-
|
43 |
-
def random_irregular_mask(img):
|
44 |
-
"""Generate a random irregular mask with lines, circles and ellipses
|
45 |
-
:param img: original image size C*H*W
|
46 |
-
:return: mask
|
47 |
-
"""
|
48 |
-
transform = transforms.Compose([transforms.ToTensor()])
|
49 |
-
mask = torch.ones_like(img)[0:1, :, :]
|
50 |
-
s = mask.size()
|
51 |
-
img = np.zeros((s[1], s[2], 1), np.uint8)
|
52 |
-
|
53 |
-
max_width = int(min(s[1]/10, s[2]/10))
|
54 |
-
N_mask = random.randint(16, 64)
|
55 |
-
for _ in range(N_mask):
|
56 |
-
model = random.random()
|
57 |
-
if model < 0.2: # Draw random lines
|
58 |
-
x1, x2 = random.randint(1, s[1]), random.randint(1, s[1])
|
59 |
-
y1, y2 = random.randint(1, s[2]), random.randint(1, s[2])
|
60 |
-
thickness = random.randint(2, max_width)
|
61 |
-
cv2.line(img, (x1, y1), (x2, y2), (1, 1, 1), thickness)
|
62 |
-
elif (model > 0.2 and model < 0.5): # Draw random circles
|
63 |
-
x1, y1 = random.randint(1, s[1]), random.randint(1, s[2])
|
64 |
-
radius = random.randint(2, max_width)
|
65 |
-
cv2.circle(img, (x1, y1), radius, (1, 1, 1), -1)
|
66 |
-
else: # draw random ellipses
|
67 |
-
x1, y1 = random.randint(1, s[1]), random.randint(1, s[2])
|
68 |
-
s1, s2 = random.randint(1, s[1]), random.randint(1, s[2])
|
69 |
-
a1, a2, a3 = random.randint(3, 180), random.randint(3, 180), random.randint(3, 180)
|
70 |
-
thickness = random.randint(2, max_width)
|
71 |
-
cv2.ellipse(img, (x1, y1), (s1, s2), a1, a2, a3, (1, 1, 1), thickness)
|
72 |
-
|
73 |
-
img = img.reshape(s[2], s[1])
|
74 |
-
img = Image.fromarray(img*255)
|
75 |
-
|
76 |
-
img_mask = transform(img)
|
77 |
-
for j in range(s[0]):
|
78 |
-
mask[j, :, :] = img_mask
|
79 |
-
|
80 |
-
return mask
|
81 |
-
|
82 |
-
|
83 |
-
def scale_img(img, size):
|
84 |
-
h_ratio = img.size(-1) // size[-1]
|
85 |
-
w_ratio = img.size(-2) // size[-2]
|
86 |
-
scaled_img = F.avg_pool2d(img, kernel_size=(w_ratio, h_ratio), stride=(w_ratio, h_ratio))
|
87 |
-
return scaled_img
|
88 |
-
|
89 |
-
|
90 |
-
def scale_pyramid(img, num_scales):
|
91 |
-
scaled_imgs = [img]
|
92 |
-
|
93 |
-
for i in range(1, num_scales):
|
94 |
-
ratio = 2**i
|
95 |
-
scaled_img = F.avg_pool2d(img, kernel_size=ratio, stride=ratio)
|
96 |
-
scaled_imgs.append(scaled_img)
|
97 |
-
|
98 |
-
scaled_imgs.reverse()
|
99 |
-
return scaled_imgs
|
100 |
-
|
101 |
-
|
102 |
-
def jacobian(y, x, point=None, create_graph=True):
|
103 |
-
"""Calculate the jacobian matrix for given point"""
|
104 |
-
jac = []
|
105 |
-
flat_y = y.reshape(-1)
|
106 |
-
b, c, h, w = y.size()
|
107 |
-
if point is not None:
|
108 |
-
i = point[0] * h + point[1]
|
109 |
-
input_y = flat_y[i]
|
110 |
-
grad_x = torch.autograd.grad(input_y, x, retain_graph=True, grad_outputs=torch.ones(input_y.size()).to(x.device),
|
111 |
-
create_graph=create_graph, only_inputs=True)[0]
|
112 |
-
jac.append(grad_x.reshape(x.shape))
|
113 |
-
return jac
|
114 |
-
else:
|
115 |
-
for i in range(len(flat_y)):
|
116 |
-
input_y = flat_y[i]
|
117 |
-
grad_x = torch.autograd.grad(input_y, x, retain_graph=True, grad_outputs=torch.ones(input_y.size()).to(x.device),
|
118 |
-
create_graph=create_graph, only_inputs=True)[0]
|
119 |
-
jac.append(grad_x.reshape(x.shape))
|
120 |
-
return torch.stack(jac).reshape(y.shape + x.shape)
|
spaces/Anonymous-sub/Rerender/gmflow_module/loss.py
DELETED
@@ -1,37 +0,0 @@
-import torch
-
-
-def flow_loss_func(flow_preds, flow_gt, valid,
-                   gamma=0.9,
-                   max_flow=400,
-                   **kwargs,
-                   ):
-    n_predictions = len(flow_preds)
-    flow_loss = 0.0
-
-    # exlude invalid pixels and extremely large diplacements
-    mag = torch.sum(flow_gt ** 2, dim=1).sqrt()  # [B, H, W]
-    valid = (valid >= 0.5) & (mag < max_flow)
-
-    for i in range(n_predictions):
-        i_weight = gamma ** (n_predictions - i - 1)
-
-        i_loss = (flow_preds[i] - flow_gt).abs()
-
-        flow_loss += i_weight * (valid[:, None] * i_loss).mean()
-
-    epe = torch.sum((flow_preds[-1] - flow_gt) ** 2, dim=1).sqrt()
-
-    if valid.max() < 0.5:
-        pass
-
-    epe = epe.view(-1)[valid.view(-1)]
-
-    metrics = {
-        'epe': epe.mean().item(),
-        '1px': (epe > 1).float().mean().item(),
-        '3px': (epe > 3).float().mean().item(),
-        '5px': (epe > 5).float().mean().item(),
-    }
-
-    return flow_loss, metrics
|
spaces/Aravindan/BreedClassification/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: BreedClassification
-emoji: 🔥
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.0.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
|
spaces/AriaMei/TTSdemo/modules.py
DELETED
@@ -1,390 +0,0 @@
|
|
1 |
-
import copy
|
2 |
-
import math
|
3 |
-
import numpy as np
|
4 |
-
import scipy
|
5 |
-
import torch
|
6 |
-
from torch import nn
|
7 |
-
from torch.nn import functional as F
|
8 |
-
|
9 |
-
from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
|
10 |
-
from torch.nn.utils import weight_norm, remove_weight_norm
|
11 |
-
|
12 |
-
import commons
|
13 |
-
from commons import init_weights, get_padding
|
14 |
-
from transforms import piecewise_rational_quadratic_transform
|
15 |
-
|
16 |
-
|
17 |
-
LRELU_SLOPE = 0.1
|
18 |
-
|
19 |
-
|
20 |
-
class LayerNorm(nn.Module):
|
21 |
-
def __init__(self, channels, eps=1e-5):
|
22 |
-
super().__init__()
|
23 |
-
self.channels = channels
|
24 |
-
self.eps = eps
|
25 |
-
|
26 |
-
self.gamma = nn.Parameter(torch.ones(channels))
|
27 |
-
self.beta = nn.Parameter(torch.zeros(channels))
|
28 |
-
|
29 |
-
def forward(self, x):
|
30 |
-
x = x.transpose(1, -1)
|
31 |
-
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
|
32 |
-
return x.transpose(1, -1)
|
33 |
-
|
34 |
-
|
35 |
-
class ConvReluNorm(nn.Module):
|
36 |
-
def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
|
37 |
-
super().__init__()
|
38 |
-
self.in_channels = in_channels
|
39 |
-
self.hidden_channels = hidden_channels
|
40 |
-
self.out_channels = out_channels
|
41 |
-
self.kernel_size = kernel_size
|
42 |
-
self.n_layers = n_layers
|
43 |
-
self.p_dropout = p_dropout
|
44 |
-
assert n_layers > 1, "Number of layers should be larger than 0."
|
45 |
-
|
46 |
-
self.conv_layers = nn.ModuleList()
|
47 |
-
self.norm_layers = nn.ModuleList()
|
48 |
-
self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
|
49 |
-
self.norm_layers.append(LayerNorm(hidden_channels))
|
50 |
-
self.relu_drop = nn.Sequential(
|
51 |
-
nn.ReLU(),
|
52 |
-
nn.Dropout(p_dropout))
|
53 |
-
for _ in range(n_layers-1):
|
54 |
-
self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
|
55 |
-
self.norm_layers.append(LayerNorm(hidden_channels))
|
56 |
-
self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
|
57 |
-
self.proj.weight.data.zero_()
|
58 |
-
self.proj.bias.data.zero_()
|
59 |
-
|
60 |
-
def forward(self, x, x_mask):
|
61 |
-
x_org = x
|
62 |
-
for i in range(self.n_layers):
|
63 |
-
x = self.conv_layers[i](x * x_mask)
|
64 |
-
x = self.norm_layers[i](x)
|
65 |
-
x = self.relu_drop(x)
|
66 |
-
x = x_org + self.proj(x)
|
67 |
-
return x * x_mask
|
68 |
-
|
69 |
-
|
70 |
-
class DDSConv(nn.Module):
|
71 |
-
"""
|
72 |
-
Dialted and Depth-Separable Convolution
|
73 |
-
"""
|
74 |
-
def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
|
75 |
-
super().__init__()
|
76 |
-
self.channels = channels
|
77 |
-
self.kernel_size = kernel_size
|
78 |
-
self.n_layers = n_layers
|
79 |
-
self.p_dropout = p_dropout
|
80 |
-
|
81 |
-
self.drop = nn.Dropout(p_dropout)
|
82 |
-
self.convs_sep = nn.ModuleList()
|
83 |
-
self.convs_1x1 = nn.ModuleList()
|
84 |
-
self.norms_1 = nn.ModuleList()
|
85 |
-
self.norms_2 = nn.ModuleList()
|
86 |
-
for i in range(n_layers):
|
87 |
-
dilation = kernel_size ** i
|
88 |
-
padding = (kernel_size * dilation - dilation) // 2
|
89 |
-
self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
|
90 |
-
groups=channels, dilation=dilation, padding=padding
|
91 |
-
))
|
92 |
-
self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
|
93 |
-
self.norms_1.append(LayerNorm(channels))
|
94 |
-
self.norms_2.append(LayerNorm(channels))
|
95 |
-
|
96 |
-
def forward(self, x, x_mask, g=None):
|
97 |
-
if g is not None:
|
98 |
-
x = x + g
|
99 |
-
for i in range(self.n_layers):
|
100 |
-
y = self.convs_sep[i](x * x_mask)
|
101 |
-
y = self.norms_1[i](y)
|
102 |
-
y = F.gelu(y)
|
103 |
-
y = self.convs_1x1[i](y)
|
104 |
-
y = self.norms_2[i](y)
|
105 |
-
y = F.gelu(y)
|
106 |
-
y = self.drop(y)
|
107 |
-
x = x + y
|
108 |
-
return x * x_mask
|
109 |
-
|
110 |
-
|
111 |
-
class WN(torch.nn.Module):
|
112 |
-
def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
|
113 |
-
super(WN, self).__init__()
|
114 |
-
assert(kernel_size % 2 == 1)
|
115 |
-
self.hidden_channels =hidden_channels
|
116 |
-
self.kernel_size = kernel_size,
|
117 |
-
self.dilation_rate = dilation_rate
|
118 |
-
self.n_layers = n_layers
|
119 |
-
self.gin_channels = gin_channels
|
120 |
-
self.p_dropout = p_dropout
|
121 |
-
|
122 |
-
self.in_layers = torch.nn.ModuleList()
|
123 |
-
self.res_skip_layers = torch.nn.ModuleList()
|
124 |
-
self.drop = nn.Dropout(p_dropout)
|
125 |
-
|
126 |
-
if gin_channels != 0:
|
127 |
-
cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
|
128 |
-
self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
|
129 |
-
|
130 |
-
for i in range(n_layers):
|
131 |
-
dilation = dilation_rate ** i
|
132 |
-
padding = int((kernel_size * dilation - dilation) / 2)
|
133 |
-
in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
|
134 |
-
dilation=dilation, padding=padding)
|
135 |
-
in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
|
136 |
-
self.in_layers.append(in_layer)
|
137 |
-
|
138 |
-
# last one is not necessary
|
139 |
-
if i < n_layers - 1:
|
140 |
-
res_skip_channels = 2 * hidden_channels
|
141 |
-
else:
|
142 |
-
res_skip_channels = hidden_channels
|
143 |
-
|
144 |
-
res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
|
145 |
-
res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
|
146 |
-
self.res_skip_layers.append(res_skip_layer)
|
147 |
-
|
148 |
-
def forward(self, x, x_mask, g=None, **kwargs):
|
149 |
-
output = torch.zeros_like(x)
|
150 |
-
n_channels_tensor = torch.IntTensor([self.hidden_channels])
|
151 |
-
|
152 |
-
if g is not None:
|
153 |
-
g = self.cond_layer(g)
|
154 |
-
|
155 |
-
for i in range(self.n_layers):
|
156 |
-
x_in = self.in_layers[i](x)
|
157 |
-
if g is not None:
|
158 |
-
cond_offset = i * 2 * self.hidden_channels
|
159 |
-
g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
|
160 |
-
else:
|
161 |
-
g_l = torch.zeros_like(x_in)
|
162 |
-
|
163 |
-
acts = commons.fused_add_tanh_sigmoid_multiply(
|
164 |
-
x_in,
|
165 |
-
g_l,
|
166 |
-
n_channels_tensor)
|
167 |
-
acts = self.drop(acts)
|
168 |
-
|
169 |
-
res_skip_acts = self.res_skip_layers[i](acts)
|
170 |
-
if i < self.n_layers - 1:
|
171 |
-
res_acts = res_skip_acts[:,:self.hidden_channels,:]
|
172 |
-
x = (x + res_acts) * x_mask
|
173 |
-
output = output + res_skip_acts[:,self.hidden_channels:,:]
|
174 |
-
else:
|
175 |
-
output = output + res_skip_acts
|
176 |
-
return output * x_mask
|
177 |
-
|
178 |
-
def remove_weight_norm(self):
|
179 |
-
if self.gin_channels != 0:
|
180 |
-
torch.nn.utils.remove_weight_norm(self.cond_layer)
|
181 |
-
for l in self.in_layers:
|
182 |
-
torch.nn.utils.remove_weight_norm(l)
|
183 |
-
for l in self.res_skip_layers:
|
184 |
-
torch.nn.utils.remove_weight_norm(l)
|
185 |
-
|
186 |
-
|
187 |
-
class ResBlock1(torch.nn.Module):
|
188 |
-
def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
|
189 |
-
super(ResBlock1, self).__init__()
|
190 |
-
self.convs1 = nn.ModuleList([
|
191 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
|
192 |
-
padding=get_padding(kernel_size, dilation[0]))),
|
193 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
|
194 |
-
padding=get_padding(kernel_size, dilation[1]))),
|
195 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
|
196 |
-
padding=get_padding(kernel_size, dilation[2])))
|
197 |
-
])
|
198 |
-
self.convs1.apply(init_weights)
|
199 |
-
|
200 |
-
self.convs2 = nn.ModuleList([
|
201 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
|
202 |
-
padding=get_padding(kernel_size, 1))),
|
203 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
|
204 |
-
padding=get_padding(kernel_size, 1))),
|
205 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
|
206 |
-
padding=get_padding(kernel_size, 1)))
|
207 |
-
])
|
208 |
-
self.convs2.apply(init_weights)
|
209 |
-
|
210 |
-
def forward(self, x, x_mask=None):
|
211 |
-
for c1, c2 in zip(self.convs1, self.convs2):
|
212 |
-
xt = F.leaky_relu(x, LRELU_SLOPE)
|
213 |
-
if x_mask is not None:
|
214 |
-
xt = xt * x_mask
|
215 |
-
xt = c1(xt)
|
216 |
-
xt = F.leaky_relu(xt, LRELU_SLOPE)
|
217 |
-
if x_mask is not None:
|
218 |
-
xt = xt * x_mask
|
219 |
-
xt = c2(xt)
|
220 |
-
x = xt + x
|
221 |
-
if x_mask is not None:
|
222 |
-
x = x * x_mask
|
223 |
-
return x
|
224 |
-
|
225 |
-
def remove_weight_norm(self):
|
226 |
-
for l in self.convs1:
|
227 |
-
remove_weight_norm(l)
|
228 |
-
for l in self.convs2:
|
229 |
-
remove_weight_norm(l)
|
230 |
-
|
231 |
-
|
232 |
-
class ResBlock2(torch.nn.Module):
|
233 |
-
def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
|
234 |
-
super(ResBlock2, self).__init__()
|
235 |
-
self.convs = nn.ModuleList([
|
236 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
|
237 |
-
padding=get_padding(kernel_size, dilation[0]))),
|
238 |
-
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
|
239 |
-
padding=get_padding(kernel_size, dilation[1])))
|
240 |
-
])
|
241 |
-
self.convs.apply(init_weights)
|
242 |
-
|
243 |
-
def forward(self, x, x_mask=None):
|
244 |
-
for c in self.convs:
|
245 |
-
xt = F.leaky_relu(x, LRELU_SLOPE)
|
246 |
-
if x_mask is not None:
|
247 |
-
xt = xt * x_mask
|
248 |
-
xt = c(xt)
|
249 |
-
x = xt + x
|
250 |
-
if x_mask is not None:
|
251 |
-
x = x * x_mask
|
252 |
-
return x
|
253 |
-
|
254 |
-
def remove_weight_norm(self):
|
255 |
-
for l in self.convs:
|
256 |
-
remove_weight_norm(l)
|
257 |
-
|
258 |
-
|
259 |
-
class Log(nn.Module):
|
260 |
-
def forward(self, x, x_mask, reverse=False, **kwargs):
|
261 |
-
if not reverse:
|
262 |
-
y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
|
263 |
-
logdet = torch.sum(-y, [1, 2])
|
264 |
-
return y, logdet
|
265 |
-
else:
|
266 |
-
x = torch.exp(x) * x_mask
|
267 |
-
return x
|
268 |
-
|
269 |
-
|
270 |
-
class Flip(nn.Module):
|
271 |
-
def forward(self, x, *args, reverse=False, **kwargs):
|
272 |
-
x = torch.flip(x, [1])
|
273 |
-
if not reverse:
|
274 |
-
logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
|
275 |
-
return x, logdet
|
276 |
-
else:
|
277 |
-
return x
|
278 |
-
|
279 |
-
|
280 |
-
class ElementwiseAffine(nn.Module):
|
281 |
-
def __init__(self, channels):
|
282 |
-
super().__init__()
|
283 |
-
self.channels = channels
|
284 |
-
self.m = nn.Parameter(torch.zeros(channels,1))
|
285 |
-
self.logs = nn.Parameter(torch.zeros(channels,1))
|
286 |
-
|
287 |
-
def forward(self, x, x_mask, reverse=False, **kwargs):
|
288 |
-
if not reverse:
|
289 |
-
y = self.m + torch.exp(self.logs) * x
|
290 |
-
y = y * x_mask
|
291 |
-
logdet = torch.sum(self.logs * x_mask, [1,2])
|
292 |
-
return y, logdet
|
293 |
-
else:
|
294 |
-
x = (x - self.m) * torch.exp(-self.logs) * x_mask
|
295 |
-
return x
|
296 |
-
|
297 |
-
|
298 |
-
class ResidualCouplingLayer(nn.Module):
|
299 |
-
def __init__(self,
|
300 |
-
channels,
|
301 |
-
hidden_channels,
|
302 |
-
kernel_size,
|
303 |
-
dilation_rate,
|
304 |
-
n_layers,
|
305 |
-
p_dropout=0,
|
306 |
-
gin_channels=0,
|
307 |
-
mean_only=False):
|
308 |
-
assert channels % 2 == 0, "channels should be divisible by 2"
|
309 |
-
super().__init__()
|
310 |
-
self.channels = channels
|
311 |
-
self.hidden_channels = hidden_channels
|
312 |
-
self.kernel_size = kernel_size
|
313 |
-
self.dilation_rate = dilation_rate
|
314 |
-
self.n_layers = n_layers
|
315 |
-
self.half_channels = channels // 2
|
316 |
-
self.mean_only = mean_only
|
317 |
-
|
318 |
-
self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
|
319 |
-
self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
|
320 |
-
self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
|
321 |
-
self.post.weight.data.zero_()
|
322 |
-
self.post.bias.data.zero_()
|
323 |
-
|
324 |
-
def forward(self, x, x_mask, g=None, reverse=False):
|
325 |
-
x0, x1 = torch.split(x, [self.half_channels]*2, 1)
|
326 |
-
h = self.pre(x0) * x_mask
|
327 |
-
h = self.enc(h, x_mask, g=g)
|
328 |
-
stats = self.post(h) * x_mask
|
329 |
-
if not self.mean_only:
|
330 |
-
m, logs = torch.split(stats, [self.half_channels]*2, 1)
|
331 |
-
else:
|
332 |
-
m = stats
|
333 |
-
logs = torch.zeros_like(m)
|
334 |
-
|
335 |
-
if not reverse:
|
336 |
-
x1 = m + x1 * torch.exp(logs) * x_mask
|
337 |
-
x = torch.cat([x0, x1], 1)
|
338 |
-
logdet = torch.sum(logs, [1,2])
|
339 |
-
return x, logdet
|
340 |
-
else:
|
341 |
-
x1 = (x1 - m) * torch.exp(-logs) * x_mask
|
342 |
-
x = torch.cat([x0, x1], 1)
|
343 |
-
return x
|
344 |
-
|
345 |
-
|
346 |
-
class ConvFlow(nn.Module):
|
347 |
-
def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
|
348 |
-
super().__init__()
|
349 |
-
self.in_channels = in_channels
|
350 |
-
self.filter_channels = filter_channels
|
351 |
-
self.kernel_size = kernel_size
|
352 |
-
self.n_layers = n_layers
|
353 |
-
self.num_bins = num_bins
|
354 |
-
self.tail_bound = tail_bound
|
355 |
-
self.half_channels = in_channels // 2
|
356 |
-
|
357 |
-
self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
|
358 |
-
self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
|
359 |
-
self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
|
360 |
-
self.proj.weight.data.zero_()
|
361 |
-
self.proj.bias.data.zero_()
|
362 |
-
|
363 |
-
def forward(self, x, x_mask, g=None, reverse=False):
|
364 |
-
x0, x1 = torch.split(x, [self.half_channels]*2, 1)
|
365 |
-
h = self.pre(x0)
|
366 |
-
h = self.convs(h, x_mask, g=g)
|
367 |
-
h = self.proj(h) * x_mask
|
368 |
-
|
369 |
-
b, c, t = x0.shape
|
370 |
-
h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
|
371 |
-
|
372 |
-
unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
|
373 |
-
unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
|
374 |
-
unnormalized_derivatives = h[..., 2 * self.num_bins:]
|
375 |
-
|
376 |
-
x1, logabsdet = piecewise_rational_quadratic_transform(x1,
|
377 |
-
unnormalized_widths,
|
378 |
-
unnormalized_heights,
|
379 |
-
unnormalized_derivatives,
|
380 |
-
inverse=reverse,
|
381 |
-
tails='linear',
|
382 |
-
tail_bound=self.tail_bound
|
383 |
-
)
|
384 |
-
|
385 |
-
x = torch.cat([x0, x1], 1) * x_mask
|
386 |
-
logdet = torch.sum(logabsdet * x_mask, [1,2])
|
387 |
-
if not reverse:
|
388 |
-
return x, logdet
|
389 |
-
else:
|
390 |
-
return x
|
spaces/Ariharasudhan/YoloV5/utils/segment/__init__.py
DELETED
File without changes
|
spaces/Artples/Chat-with-Llama-2-70b/app.py
DELETED
@@ -1,64 +0,0 @@
-import gradio as gr
-from gradio_client import Client
-
-client = Client("https://ysharma-explore-llamav2-with-tgi.hf.space/")
-
-
-
-title = "Lauche-AI LEU-Chatbot"
-description = """
-Disclaimer: Lauche - AI (POWERED BY LLAMA 2) can produce factually incorrect output, and should not be relied on to produce factually accurate information. Lauche - AI (POWERED BY LLAMA 2) was trained on various public datasets; while great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs. - - - Our Impressum: https://lauche.eu/n-impressum - - - Visit this space on our website: ai-app.lauche.online.
-"""
-css = """.toast-wrap { display: none !important } """
-examples=[
-    ['Hello there! How are you doing?'],
-    ['Can you explain to me briefly what is Python programming language?'],
-    ['Explain the plot of Cinderella in a sentence.'],
-    ['How many hours does it take a man to eat a Helicopter?'],
-    ["Write a 100-word article on 'Benefits of Open-Source in AI research'"],
-]
-
-
-# Stream text
-def predict(message, chatbot, system_prompt="", temperature=0.9, max_new_tokens=4096):
-    return client.predict(
-        message,  # str in 'Message' Textbox component
-        system_prompt,  # str in 'Optional system prompt' Textbox component
-        temperature,  # int | float (numeric value between 0.0 and 1.0)
-        max_new_tokens,  # int | float (numeric value between 0 and 4096)
-        0.3,  # int | float (numeric value between 0.0 and 1)
-        1,  # int | float (numeric value between 1.0 and 2.0)
-        api_name="/chat"
-    )
-
-
-additional_inputs=[
-    gr.Textbox("", label="Optional system prompt"),
-    gr.Slider(
-        label="Temperature",
-        value=0.9,
-        minimum=0.0,
-        maximum=1.0,
-        step=0.05,
-        interactive=True,
-        info="Higher values produce more diverse outputs",
-    ),
-    gr.Slider(
-        label="Max new tokens",
-        value=4096,
-        minimum=0,
-        maximum=4096,
-        step=64,
-        interactive=True,
-        info="The maximum numbers of new tokens",
-    )
-]
-
-
-
-# Gradio Demo
-with gr.Blocks(theme=gr.themes.Base()) as demo:
-
-    gr.ChatInterface(predict, title=title, description=description, css=css, examples=examples, additional_inputs=additional_inputs)
-
-demo.queue().launch(debug=True)
|
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/securetransport.py
DELETED
@@ -1,921 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
SecureTranport support for urllib3 via ctypes.
|
3 |
-
|
4 |
-
This makes platform-native TLS available to urllib3 users on macOS without the
|
5 |
-
use of a compiler. This is an important feature because the Python Package
|
6 |
-
Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL
|
7 |
-
that ships with macOS is not capable of doing TLSv1.2. The only way to resolve
|
8 |
-
this is to give macOS users an alternative solution to the problem, and that
|
9 |
-
solution is to use SecureTransport.
|
10 |
-
|
11 |
-
We use ctypes here because this solution must not require a compiler. That's
|
12 |
-
because pip is not allowed to require a compiler either.
|
13 |
-
|
14 |
-
This is not intended to be a seriously long-term solution to this problem.
|
15 |
-
The hope is that PEP 543 will eventually solve this issue for us, at which
|
16 |
-
point we can retire this contrib module. But in the short term, we need to
|
17 |
-
solve the impending tire fire that is Python on Mac without this kind of
|
18 |
-
contrib module. So...here we are.
|
19 |
-
|
20 |
-
To use this module, simply import and inject it::
|
21 |
-
|
22 |
-
import pip._vendor.urllib3.contrib.securetransport as securetransport
|
23 |
-
securetransport.inject_into_urllib3()
|
24 |
-
|
25 |
-
Happy TLSing!
|
26 |
-
|
27 |
-
This code is a bastardised version of the code found in Will Bond's oscrypto
|
28 |
-
library. An enormous debt is owed to him for blazing this trail for us. For
|
29 |
-
that reason, this code should be considered to be covered both by urllib3's
|
30 |
-
license and by oscrypto's:
|
31 |
-
|
32 |
-
.. code-block::
|
33 |
-
|
34 |
-
Copyright (c) 2015-2016 Will Bond <[email protected]>
|
35 |
-
|
36 |
-
Permission is hereby granted, free of charge, to any person obtaining a
|
37 |
-
copy of this software and associated documentation files (the "Software"),
|
38 |
-
to deal in the Software without restriction, including without limitation
|
39 |
-
the rights to use, copy, modify, merge, publish, distribute, sublicense,
|
40 |
-
and/or sell copies of the Software, and to permit persons to whom the
|
41 |
-
Software is furnished to do so, subject to the following conditions:
|
42 |
-
|
43 |
-
The above copyright notice and this permission notice shall be included in
|
44 |
-
all copies or substantial portions of the Software.
|
45 |
-
|
46 |
-
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
47 |
-
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
48 |
-
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
49 |
-
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
50 |
-
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
51 |
-
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
52 |
-
DEALINGS IN THE SOFTWARE.
|
53 |
-
"""
|
54 |
-
from __future__ import absolute_import
|
55 |
-
|
56 |
-
import contextlib
|
57 |
-
import ctypes
|
58 |
-
import errno
|
59 |
-
import os.path
|
60 |
-
import shutil
|
61 |
-
import socket
|
62 |
-
import ssl
|
63 |
-
import struct
|
64 |
-
import threading
|
65 |
-
import weakref
|
66 |
-
|
67 |
-
from pip._vendor import six
|
68 |
-
|
69 |
-
from .. import util
|
70 |
-
from ..util.ssl_ import PROTOCOL_TLS_CLIENT
|
71 |
-
from ._securetransport.bindings import CoreFoundation, Security, SecurityConst
|
72 |
-
from ._securetransport.low_level import (
|
73 |
-
_assert_no_error,
|
74 |
-
_build_tls_unknown_ca_alert,
|
75 |
-
_cert_array_from_pem,
|
76 |
-
_create_cfstring_array,
|
77 |
-
_load_client_cert_chain,
|
78 |
-
_temporary_keychain,
|
79 |
-
)
|
80 |
-
|
81 |
-
try: # Platform-specific: Python 2
|
82 |
-
from socket import _fileobject
|
83 |
-
except ImportError: # Platform-specific: Python 3
|
84 |
-
_fileobject = None
|
85 |
-
from ..packages.backports.makefile import backport_makefile
|
86 |
-
|
87 |
-
__all__ = ["inject_into_urllib3", "extract_from_urllib3"]
|
88 |
-
|
89 |
-
# SNI always works
|
90 |
-
HAS_SNI = True
|
91 |
-
|
92 |
-
orig_util_HAS_SNI = util.HAS_SNI
|
93 |
-
orig_util_SSLContext = util.ssl_.SSLContext
|
94 |
-
|
95 |
-
# This dictionary is used by the read callback to obtain a handle to the
|
96 |
-
# calling wrapped socket. This is a pretty silly approach, but for now it'll
|
97 |
-
# do. I feel like I should be able to smuggle a handle to the wrapped socket
|
98 |
-
# directly in the SSLConnectionRef, but for now this approach will work I
|
99 |
-
# guess.
|
100 |
-
#
|
101 |
-
# We need to lock around this structure for inserts, but we don't do it for
|
102 |
-
# reads/writes in the callbacks. The reasoning here goes as follows:
|
103 |
-
#
|
104 |
-
# 1. It is not possible to call into the callbacks before the dictionary is
|
105 |
-
# populated, so once in the callback the id must be in the dictionary.
|
106 |
-
# 2. The callbacks don't mutate the dictionary, they only read from it, and
|
107 |
-
# so cannot conflict with any of the insertions.
|
108 |
-
#
|
109 |
-
# This is good: if we had to lock in the callbacks we'd drastically slow down
|
110 |
-
# the performance of this code.
|
111 |
-
_connection_refs = weakref.WeakValueDictionary()
|
112 |
-
_connection_ref_lock = threading.Lock()
|
113 |
-
|
114 |
-
# Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over
|
115 |
-
# for no better reason than we need *a* limit, and this one is right there.
|
116 |
-
SSL_WRITE_BLOCKSIZE = 16384
|
117 |
-
|
118 |
-
# This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to
|
119 |
-
# individual cipher suites. We need to do this because this is how
|
120 |
-
# SecureTransport wants them.
|
121 |
-
CIPHER_SUITES = [
|
122 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
|
123 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
|
124 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
|
125 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
|
126 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
|
127 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
|
128 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,
|
129 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,
|
130 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
|
131 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
|
132 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
|
133 |
-
SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
|
134 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
|
135 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
|
136 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
|
137 |
-
SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
|
138 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
|
139 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA,
|
140 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
|
141 |
-
SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
|
142 |
-
SecurityConst.TLS_AES_256_GCM_SHA384,
|
143 |
-
SecurityConst.TLS_AES_128_GCM_SHA256,
|
144 |
-
SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384,
|
145 |
-
SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256,
|
146 |
-
SecurityConst.TLS_AES_128_CCM_8_SHA256,
|
147 |
-
SecurityConst.TLS_AES_128_CCM_SHA256,
|
148 |
-
SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256,
|
149 |
-
SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256,
|
150 |
-
SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA,
|
151 |
-
SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA,
|
152 |
-
]
|
153 |
-
|
154 |
-
# Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of
|
155 |
-
# TLSv1 and a high of TLSv1.2. For everything else, we pin to that version.
|
156 |
-
# TLSv1 to 1.2 are supported on macOS 10.8+
|
157 |
-
_protocol_to_min_max = {
|
158 |
-
util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12),
|
159 |
-
PROTOCOL_TLS_CLIENT: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12),
|
160 |
-
}
|
161 |
-
|
162 |
-
if hasattr(ssl, "PROTOCOL_SSLv2"):
|
163 |
-
_protocol_to_min_max[ssl.PROTOCOL_SSLv2] = (
|
164 |
-
SecurityConst.kSSLProtocol2,
|
165 |
-
SecurityConst.kSSLProtocol2,
|
166 |
-
)
|
167 |
-
if hasattr(ssl, "PROTOCOL_SSLv3"):
|
168 |
-
_protocol_to_min_max[ssl.PROTOCOL_SSLv3] = (
|
169 |
-
SecurityConst.kSSLProtocol3,
|
170 |
-
SecurityConst.kSSLProtocol3,
|
171 |
-
)
|
172 |
-
if hasattr(ssl, "PROTOCOL_TLSv1"):
|
173 |
-
_protocol_to_min_max[ssl.PROTOCOL_TLSv1] = (
|
174 |
-
SecurityConst.kTLSProtocol1,
|
175 |
-
SecurityConst.kTLSProtocol1,
|
176 |
-
)
|
177 |
-
if hasattr(ssl, "PROTOCOL_TLSv1_1"):
|
178 |
-
_protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = (
|
179 |
-
SecurityConst.kTLSProtocol11,
|
180 |
-
SecurityConst.kTLSProtocol11,
|
181 |
-
)
|
182 |
-
if hasattr(ssl, "PROTOCOL_TLSv1_2"):
|
183 |
-
_protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = (
|
184 |
-
SecurityConst.kTLSProtocol12,
|
185 |
-
SecurityConst.kTLSProtocol12,
|
186 |
-
)
|
187 |
-
|
188 |
-
|
189 |
-
def inject_into_urllib3():
|
190 |
-
"""
|
191 |
-
Monkey-patch urllib3 with SecureTransport-backed SSL-support.
|
192 |
-
"""
|
193 |
-
util.SSLContext = SecureTransportContext
|
194 |
-
util.ssl_.SSLContext = SecureTransportContext
|
195 |
-
util.HAS_SNI = HAS_SNI
|
196 |
-
util.ssl_.HAS_SNI = HAS_SNI
|
197 |
-
util.IS_SECURETRANSPORT = True
|
198 |
-
util.ssl_.IS_SECURETRANSPORT = True
|
199 |
-
|
200 |
-
|
201 |
-
def extract_from_urllib3():
|
202 |
-
"""
|
203 |
-
Undo monkey-patching by :func:`inject_into_urllib3`.
|
204 |
-
"""
|
205 |
-
util.SSLContext = orig_util_SSLContext
|
206 |
-
util.ssl_.SSLContext = orig_util_SSLContext
|
207 |
-
util.HAS_SNI = orig_util_HAS_SNI
|
208 |
-
util.ssl_.HAS_SNI = orig_util_HAS_SNI
|
209 |
-
util.IS_SECURETRANSPORT = False
|
210 |
-
util.ssl_.IS_SECURETRANSPORT = False
|
211 |
-
|
212 |
-
|
213 |
-
def _read_callback(connection_id, data_buffer, data_length_pointer):
|
214 |
-
"""
|
215 |
-
SecureTransport read callback. This is called by ST to request that data
|
216 |
-
be returned from the socket.
|
217 |
-
"""
|
218 |
-
wrapped_socket = None
|
219 |
-
try:
|
220 |
-
wrapped_socket = _connection_refs.get(connection_id)
|
221 |
-
if wrapped_socket is None:
|
222 |
-
return SecurityConst.errSSLInternal
|
223 |
-
base_socket = wrapped_socket.socket
|
224 |
-
|
225 |
-
requested_length = data_length_pointer[0]
|
226 |
-
|
227 |
-
timeout = wrapped_socket.gettimeout()
|
228 |
-
error = None
|
229 |
-
read_count = 0
|
230 |
-
|
231 |
-
try:
|
232 |
-
while read_count < requested_length:
|
233 |
-
if timeout is None or timeout >= 0:
|
234 |
-
if not util.wait_for_read(base_socket, timeout):
|
235 |
-
raise socket.error(errno.EAGAIN, "timed out")
|
236 |
-
|
237 |
-
remaining = requested_length - read_count
|
238 |
-
buffer = (ctypes.c_char * remaining).from_address(
|
239 |
-
data_buffer + read_count
|
240 |
-
)
|
241 |
-
chunk_size = base_socket.recv_into(buffer, remaining)
|
242 |
-
read_count += chunk_size
|
243 |
-
if not chunk_size:
|
244 |
-
if not read_count:
|
245 |
-
return SecurityConst.errSSLClosedGraceful
|
246 |
-
break
|
247 |
-
except (socket.error) as e:
|
248 |
-
error = e.errno
|
249 |
-
|
250 |
-
if error is not None and error != errno.EAGAIN:
|
251 |
-
data_length_pointer[0] = read_count
|
252 |
-
if error == errno.ECONNRESET or error == errno.EPIPE:
|
253 |
-
return SecurityConst.errSSLClosedAbort
|
254 |
-
raise
|
255 |
-
|
256 |
-
data_length_pointer[0] = read_count
|
257 |
-
|
258 |
-
if read_count != requested_length:
|
259 |
-
return SecurityConst.errSSLWouldBlock
|
260 |
-
|
261 |
-
return 0
|
262 |
-
except Exception as e:
|
263 |
-
if wrapped_socket is not None:
|
264 |
-
wrapped_socket._exception = e
|
265 |
-
return SecurityConst.errSSLInternal
|
266 |
-
|
267 |
-
|
268 |
-
def _write_callback(connection_id, data_buffer, data_length_pointer):
|
269 |
-
"""
|
270 |
-
SecureTransport write callback. This is called by ST to request that data
|
271 |
-
actually be sent on the network.
|
272 |
-
"""
|
273 |
-
wrapped_socket = None
|
274 |
-
try:
|
275 |
-
wrapped_socket = _connection_refs.get(connection_id)
|
276 |
-
if wrapped_socket is None:
|
277 |
-
return SecurityConst.errSSLInternal
|
278 |
-
base_socket = wrapped_socket.socket
|
279 |
-
|
280 |
-
bytes_to_write = data_length_pointer[0]
|
281 |
-
data = ctypes.string_at(data_buffer, bytes_to_write)
|
282 |
-
|
283 |
-
timeout = wrapped_socket.gettimeout()
|
284 |
-
error = None
|
285 |
-
sent = 0
|
286 |
-
|
287 |
-
try:
|
288 |
-
while sent < bytes_to_write:
|
289 |
-
if timeout is None or timeout >= 0:
|
290 |
-
if not util.wait_for_write(base_socket, timeout):
|
291 |
-
raise socket.error(errno.EAGAIN, "timed out")
|
292 |
-
chunk_sent = base_socket.send(data)
|
293 |
-
sent += chunk_sent
|
294 |
-
|
295 |
-
# This has some needless copying here, but I'm not sure there's
|
296 |
-
# much value in optimising this data path.
|
297 |
-
data = data[chunk_sent:]
|
298 |
-
except (socket.error) as e:
|
299 |
-
error = e.errno
|
300 |
-
|
301 |
-
if error is not None and error != errno.EAGAIN:
|
302 |
-
data_length_pointer[0] = sent
|
303 |
-
if error == errno.ECONNRESET or error == errno.EPIPE:
|
304 |
-
return SecurityConst.errSSLClosedAbort
|
305 |
-
raise
|
306 |
-
|
307 |
-
data_length_pointer[0] = sent
|
308 |
-
|
309 |
-
if sent != bytes_to_write:
|
310 |
-
return SecurityConst.errSSLWouldBlock
|
311 |
-
|
312 |
-
return 0
|
313 |
-
except Exception as e:
|
314 |
-
if wrapped_socket is not None:
|
315 |
-
wrapped_socket._exception = e
|
316 |
-
return SecurityConst.errSSLInternal
|
317 |
-
|
318 |
-
|
319 |
-
# We need to keep these two objects references alive: if they get GC'd while
|
320 |
-
# in use then SecureTransport could attempt to call a function that is in freed
|
321 |
-
# memory. That would be...uh...bad. Yeah, that's the word. Bad.
|
322 |
-
_read_callback_pointer = Security.SSLReadFunc(_read_callback)
|
323 |
-
_write_callback_pointer = Security.SSLWriteFunc(_write_callback)
|
324 |
-
|
325 |
-
|
326 |
-
class WrappedSocket(object):
|
327 |
-
"""
|
328 |
-
API-compatibility wrapper for Python's OpenSSL wrapped socket object.
|
329 |
-
|
330 |
-
Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage
|
331 |
-
collector of PyPy.
|
332 |
-
"""
|
333 |
-
|
334 |
-
def __init__(self, socket):
|
335 |
-
self.socket = socket
|
336 |
-
self.context = None
|
337 |
-
self._makefile_refs = 0
|
338 |
-
self._closed = False
|
339 |
-
self._exception = None
|
340 |
-
self._keychain = None
|
341 |
-
self._keychain_dir = None
|
342 |
-
self._client_cert_chain = None
|
343 |
-
|
344 |
-
# We save off the previously-configured timeout and then set it to
|
345 |
-
# zero. This is done because we use select and friends to handle the
|
346 |
-
# timeouts, but if we leave the timeout set on the lower socket then
|
347 |
-
# Python will "kindly" call select on that socket again for us. Avoid
|
348 |
-
# that by forcing the timeout to zero.
|
349 |
-
self._timeout = self.socket.gettimeout()
|
350 |
-
self.socket.settimeout(0)
|
351 |
-
|
352 |
-
@contextlib.contextmanager
|
353 |
-
def _raise_on_error(self):
|
354 |
-
"""
|
355 |
-
A context manager that can be used to wrap calls that do I/O from
|
356 |
-
SecureTransport. If any of the I/O callbacks hit an exception, this
|
357 |
-
context manager will correctly propagate the exception after the fact.
|
358 |
-
This avoids silently swallowing those exceptions.
|
359 |
-
|
360 |
-
It also correctly forces the socket closed.
|
361 |
-
"""
|
362 |
-
self._exception = None
|
363 |
-
|
364 |
-
# We explicitly don't catch around this yield because in the unlikely
|
365 |
-
# event that an exception was hit in the block we don't want to swallow
|
366 |
-
# it.
|
367 |
-
yield
|
368 |
-
if self._exception is not None:
|
369 |
-
exception, self._exception = self._exception, None
|
370 |
-
self.close()
|
371 |
-
raise exception
|
372 |
-
|
373 |
-
def _set_ciphers(self):
|
374 |
-
"""
|
375 |
-
Sets up the allowed ciphers. By default this matches the set in
|
376 |
-
util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done
|
377 |
-
custom and doesn't allow changing at this time, mostly because parsing
|
378 |
-
OpenSSL cipher strings is going to be a freaking nightmare.
|
379 |
-
"""
|
380 |
-
ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES)
|
381 |
-
result = Security.SSLSetEnabledCiphers(
|
382 |
-
self.context, ciphers, len(CIPHER_SUITES)
|
383 |
-
)
|
384 |
-
_assert_no_error(result)
|
385 |
-
|
386 |
-
def _set_alpn_protocols(self, protocols):
|
387 |
-
"""
|
388 |
-
Sets up the ALPN protocols on the context.
|
389 |
-
"""
|
390 |
-
if not protocols:
|
391 |
-
return
|
392 |
-
protocols_arr = _create_cfstring_array(protocols)
|
393 |
-
try:
|
394 |
-
result = Security.SSLSetALPNProtocols(self.context, protocols_arr)
|
395 |
-
_assert_no_error(result)
|
396 |
-
finally:
|
397 |
-
CoreFoundation.CFRelease(protocols_arr)
|
398 |
-
|
399 |
-
def _custom_validate(self, verify, trust_bundle):
|
400 |
-
"""
|
401 |
-
Called when we have set custom validation. We do this in two cases:
|
402 |
-
first, when cert validation is entirely disabled; and second, when
|
403 |
-
using a custom trust DB.
|
404 |
-
Raises an SSLError if the connection is not trusted.
|
405 |
-
"""
|
406 |
-
# If we disabled cert validation, just say: cool.
|
407 |
-
if not verify:
|
408 |
-
return
|
409 |
-
|
410 |
-
successes = (
|
411 |
-
SecurityConst.kSecTrustResultUnspecified,
|
412 |
-
SecurityConst.kSecTrustResultProceed,
|
413 |
-
)
|
414 |
-
try:
|
415 |
-
trust_result = self._evaluate_trust(trust_bundle)
|
416 |
-
if trust_result in successes:
|
417 |
-
return
|
418 |
-
reason = "error code: %d" % (trust_result,)
|
419 |
-
except Exception as e:
|
420 |
-
# Do not trust on error
|
421 |
-
reason = "exception: %r" % (e,)
|
422 |
-
|
423 |
-
# SecureTransport does not send an alert nor shuts down the connection.
|
424 |
-
rec = _build_tls_unknown_ca_alert(self.version())
|
425 |
-
self.socket.sendall(rec)
|
426 |
-
# close the connection immediately
|
427 |
-
# l_onoff = 1, activate linger
|
428 |
-
# l_linger = 0, linger for 0 seoncds
|
429 |
-
opts = struct.pack("ii", 1, 0)
|
430 |
-
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, opts)
|
431 |
-
self.close()
|
432 |
-
raise ssl.SSLError("certificate verify failed, %s" % reason)
|
433 |
-
|
434 |
-
def _evaluate_trust(self, trust_bundle):
|
435 |
-
# We want data in memory, so load it up.
|
436 |
-
if os.path.isfile(trust_bundle):
|
437 |
-
with open(trust_bundle, "rb") as f:
|
438 |
-
trust_bundle = f.read()
|
439 |
-
|
440 |
-
cert_array = None
|
441 |
-
trust = Security.SecTrustRef()
|
442 |
-
|
443 |
-
try:
|
444 |
-
# Get a CFArray that contains the certs we want.
|
445 |
-
cert_array = _cert_array_from_pem(trust_bundle)
|
446 |
-
|
447 |
-
# Ok, now the hard part. We want to get the SecTrustRef that ST has
|
448 |
-
# created for this connection, shove our CAs into it, tell ST to
|
449 |
-
# ignore everything else it knows, and then ask if it can build a
|
450 |
-
# chain. This is a buuuunch of code.
|
451 |
-
result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust))
|
452 |
-
_assert_no_error(result)
|
453 |
-
if not trust:
|
454 |
-
raise ssl.SSLError("Failed to copy trust reference")
|
455 |
-
|
456 |
-
result = Security.SecTrustSetAnchorCertificates(trust, cert_array)
|
457 |
-
_assert_no_error(result)
|
458 |
-
|
459 |
-
result = Security.SecTrustSetAnchorCertificatesOnly(trust, True)
|
460 |
-
_assert_no_error(result)
|
461 |
-
|
462 |
-
trust_result = Security.SecTrustResultType()
|
463 |
-
result = Security.SecTrustEvaluate(trust, ctypes.byref(trust_result))
|
464 |
-
_assert_no_error(result)
|
465 |
-
finally:
|
466 |
-
if trust:
|
467 |
-
CoreFoundation.CFRelease(trust)
|
468 |
-
|
469 |
-
if cert_array is not None:
|
470 |
-
CoreFoundation.CFRelease(cert_array)
|
471 |
-
|
472 |
-
return trust_result.value
|
473 |
-
|
474 |
-
def handshake(
|
475 |
-
self,
|
476 |
-
server_hostname,
|
477 |
-
verify,
|
478 |
-
trust_bundle,
|
479 |
-
min_version,
|
480 |
-
max_version,
|
481 |
-
client_cert,
|
482 |
-
client_key,
|
483 |
-
client_key_passphrase,
|
484 |
-
alpn_protocols,
|
485 |
-
):
|
486 |
-
"""
|
487 |
-
Actually performs the TLS handshake. This is run automatically by
|
488 |
-
wrapped socket, and shouldn't be needed in user code.
|
489 |
-
"""
|
490 |
-
# First, we do the initial bits of connection setup. We need to create
|
491 |
-
# a context, set its I/O funcs, and set the connection reference.
|
492 |
-
self.context = Security.SSLCreateContext(
|
493 |
-
None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType
|
494 |
-
)
|
495 |
-
result = Security.SSLSetIOFuncs(
|
496 |
-
self.context, _read_callback_pointer, _write_callback_pointer
|
497 |
-
)
|
498 |
-
_assert_no_error(result)
|
499 |
-
|
500 |
-
# Here we need to compute the handle to use. We do this by taking the
|
501 |
-
# id of self modulo 2**31 - 1. If this is already in the dictionary, we
|
502 |
-
# just keep incrementing by one until we find a free space.
|
503 |
-
with _connection_ref_lock:
|
504 |
-
handle = id(self) % 2147483647
|
505 |
-
while handle in _connection_refs:
|
506 |
-
handle = (handle + 1) % 2147483647
|
507 |
-
_connection_refs[handle] = self
|
508 |
-
|
509 |
-
result = Security.SSLSetConnection(self.context, handle)
|
510 |
-
_assert_no_error(result)
|
511 |
-
|
512 |
-
# If we have a server hostname, we should set that too.
|
513 |
-
if server_hostname:
|
514 |
-
if not isinstance(server_hostname, bytes):
|
515 |
-
server_hostname = server_hostname.encode("utf-8")
|
516 |
-
|
517 |
-
result = Security.SSLSetPeerDomainName(
|
518 |
-
self.context, server_hostname, len(server_hostname)
|
519 |
-
)
|
520 |
-
_assert_no_error(result)
|
521 |
-
|
522 |
-
# Setup the ciphers.
|
523 |
-
self._set_ciphers()
|
524 |
-
|
525 |
-
# Setup the ALPN protocols.
|
526 |
-
self._set_alpn_protocols(alpn_protocols)
|
527 |
-
|
528 |
-
# Set the minimum and maximum TLS versions.
|
529 |
-
result = Security.SSLSetProtocolVersionMin(self.context, min_version)
|
530 |
-
_assert_no_error(result)
|
531 |
-
|
532 |
-
result = Security.SSLSetProtocolVersionMax(self.context, max_version)
|
533 |
-
_assert_no_error(result)
|
534 |
-
|
535 |
-
# If there's a trust DB, we need to use it. We do that by telling
|
536 |
-
# SecureTransport to break on server auth. We also do that if we don't
|
537 |
-
# want to validate the certs at all: we just won't actually do any
|
538 |
-
# authing in that case.
|
539 |
-
if not verify or trust_bundle is not None:
|
540 |
-
result = Security.SSLSetSessionOption(
|
541 |
-
self.context, SecurityConst.kSSLSessionOptionBreakOnServerAuth, True
|
542 |
-
)
|
543 |
-
_assert_no_error(result)
|
544 |
-
|
545 |
-
# If there's a client cert, we need to use it.
|
546 |
-
if client_cert:
|
547 |
-
self._keychain, self._keychain_dir = _temporary_keychain()
|
548 |
-
self._client_cert_chain = _load_client_cert_chain(
|
549 |
-
self._keychain, client_cert, client_key
|
550 |
-
)
|
551 |
-
result = Security.SSLSetCertificate(self.context, self._client_cert_chain)
|
552 |
-
_assert_no_error(result)
|
553 |
-
|
554 |
-
while True:
|
555 |
-
with self._raise_on_error():
|
556 |
-
result = Security.SSLHandshake(self.context)
|
557 |
-
|
558 |
-
if result == SecurityConst.errSSLWouldBlock:
|
559 |
-
raise socket.timeout("handshake timed out")
|
560 |
-
elif result == SecurityConst.errSSLServerAuthCompleted:
|
561 |
-
self._custom_validate(verify, trust_bundle)
|
562 |
-
continue
|
563 |
-
else:
|
564 |
-
_assert_no_error(result)
|
565 |
-
break
|
566 |
-
|
567 |
-
def fileno(self):
|
568 |
-
return self.socket.fileno()
|
569 |
-
|
570 |
-
# Copy-pasted from Python 3.5 source code
|
571 |
-
def _decref_socketios(self):
|
572 |
-
if self._makefile_refs > 0:
|
573 |
-
self._makefile_refs -= 1
|
574 |
-
if self._closed:
|
575 |
-
self.close()
|
576 |
-
|
577 |
-
def recv(self, bufsiz):
|
578 |
-
buffer = ctypes.create_string_buffer(bufsiz)
|
579 |
-
bytes_read = self.recv_into(buffer, bufsiz)
|
580 |
-
data = buffer[:bytes_read]
|
581 |
-
return data
|
582 |
-
|
583 |
-
def recv_into(self, buffer, nbytes=None):
|
584 |
-
# Read short on EOF.
|
585 |
-
if self._closed:
|
586 |
-
return 0
|
587 |
-
|
588 |
-
if nbytes is None:
|
589 |
-
nbytes = len(buffer)
|
590 |
-
|
591 |
-
buffer = (ctypes.c_char * nbytes).from_buffer(buffer)
|
592 |
-
processed_bytes = ctypes.c_size_t(0)
|
593 |
-
|
594 |
-
with self._raise_on_error():
|
595 |
-
result = Security.SSLRead(
|
596 |
-
self.context, buffer, nbytes, ctypes.byref(processed_bytes)
|
597 |
-
)
|
598 |
-
|
599 |
-
# There are some result codes that we want to treat as "not always
|
600 |
-
# errors". Specifically, those are errSSLWouldBlock,
|
601 |
-
# errSSLClosedGraceful, and errSSLClosedNoNotify.
|
602 |
-
if result == SecurityConst.errSSLWouldBlock:
|
603 |
-
# If we didn't process any bytes, then this was just a time out.
|
604 |
-
# However, we can get errSSLWouldBlock in situations when we *did*
|
605 |
-
# read some data, and in those cases we should just read "short"
|
606 |
-
# and return.
|
607 |
-
if processed_bytes.value == 0:
|
608 |
-
# Timed out, no data read.
|
609 |
-
raise socket.timeout("recv timed out")
|
610 |
-
elif result in (
|
611 |
-
SecurityConst.errSSLClosedGraceful,
|
612 |
-
SecurityConst.errSSLClosedNoNotify,
|
613 |
-
):
|
614 |
-
# The remote peer has closed this connection. We should do so as
|
615 |
-
# well. Note that we don't actually return here because in
|
616 |
-
# principle this could actually be fired along with return data.
|
617 |
-
# It's unlikely though.
|
618 |
-
self.close()
|
619 |
-
else:
|
620 |
-
_assert_no_error(result)
|
621 |
-
|
622 |
-
# Ok, we read and probably succeeded. We should return whatever data
|
623 |
-
# was actually read.
|
624 |
-
return processed_bytes.value
|
625 |
-
|
626 |
-
def settimeout(self, timeout):
|
627 |
-
self._timeout = timeout
|
628 |
-
|
629 |
-
def gettimeout(self):
|
630 |
-
return self._timeout
|
631 |
-
|
632 |
-
def send(self, data):
|
633 |
-
processed_bytes = ctypes.c_size_t(0)
|
634 |
-
|
635 |
-
with self._raise_on_error():
|
636 |
-
result = Security.SSLWrite(
|
637 |
-
self.context, data, len(data), ctypes.byref(processed_bytes)
|
638 |
-
)
|
639 |
-
|
640 |
-
if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0:
|
641 |
-
# Timed out
|
642 |
-
raise socket.timeout("send timed out")
|
643 |
-
else:
|
644 |
-
_assert_no_error(result)
|
645 |
-
|
646 |
-
# We sent, and probably succeeded. Tell them how much we sent.
|
647 |
-
return processed_bytes.value
|
648 |
-
|
649 |
-
def sendall(self, data):
|
650 |
-
total_sent = 0
|
651 |
-
while total_sent < len(data):
|
652 |
-
sent = self.send(data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE])
|
653 |
-
total_sent += sent
|
654 |
-
|
655 |
-
def shutdown(self):
|
656 |
-
with self._raise_on_error():
|
657 |
-
Security.SSLClose(self.context)
|
658 |
-
|
659 |
-
def close(self):
|
660 |
-
# TODO: should I do clean shutdown here? Do I have to?
|
661 |
-
if self._makefile_refs < 1:
|
662 |
-
self._closed = True
|
663 |
-
if self.context:
|
664 |
-
CoreFoundation.CFRelease(self.context)
|
665 |
-
self.context = None
|
666 |
-
if self._client_cert_chain:
|
667 |
-
CoreFoundation.CFRelease(self._client_cert_chain)
|
668 |
-
self._client_cert_chain = None
|
669 |
-
if self._keychain:
|
670 |
-
Security.SecKeychainDelete(self._keychain)
|
671 |
-
CoreFoundation.CFRelease(self._keychain)
|
672 |
-
shutil.rmtree(self._keychain_dir)
|
673 |
-
self._keychain = self._keychain_dir = None
|
674 |
-
return self.socket.close()
|
675 |
-
else:
|
676 |
-
self._makefile_refs -= 1
|
677 |
-
|
678 |
-
def getpeercert(self, binary_form=False):
|
679 |
-
# Urgh, annoying.
|
680 |
-
#
|
681 |
-
# Here's how we do this:
|
682 |
-
#
|
683 |
-
# 1. Call SSLCopyPeerTrust to get hold of the trust object for this
|
684 |
-
# connection.
|
685 |
-
# 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf.
|
686 |
-
# 3. To get the CN, call SecCertificateCopyCommonName and process that
|
687 |
-
# string so that it's of the appropriate type.
|
688 |
-
# 4. To get the SAN, we need to do something a bit more complex:
|
689 |
-
# a. Call SecCertificateCopyValues to get the data, requesting
|
690 |
-
# kSecOIDSubjectAltName.
|
691 |
-
# b. Mess about with this dictionary to try to get the SANs out.
|
692 |
-
#
|
693 |
-
# This is gross. Really gross. It's going to be a few hundred LoC extra
|
694 |
-
# just to repeat something that SecureTransport can *already do*. So my
|
695 |
-
# operating assumption at this time is that what we want to do is
|
696 |
-
# instead to just flag to urllib3 that it shouldn't do its own hostname
|
697 |
-
# validation when using SecureTransport.
|
698 |
-
if not binary_form:
|
699 |
-
raise ValueError("SecureTransport only supports dumping binary certs")
|
700 |
-
trust = Security.SecTrustRef()
|
701 |
-
certdata = None
|
702 |
-
der_bytes = None
|
703 |
-
|
704 |
-
try:
|
705 |
-
# Grab the trust store.
|
706 |
-
result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust))
|
707 |
-
_assert_no_error(result)
|
708 |
-
if not trust:
|
709 |
-
# Probably we haven't done the handshake yet. No biggie.
|
710 |
-
return None
|
711 |
-
|
712 |
-
cert_count = Security.SecTrustGetCertificateCount(trust)
|
713 |
-
if not cert_count:
|
714 |
-
# Also a case that might happen if we haven't handshaked.
|
715 |
-
# Handshook? Handshaken?
|
716 |
-
return None
|
717 |
-
|
718 |
-
leaf = Security.SecTrustGetCertificateAtIndex(trust, 0)
|
719 |
-
assert leaf
|
720 |
-
|
721 |
-
# Ok, now we want the DER bytes.
|
722 |
-
certdata = Security.SecCertificateCopyData(leaf)
|
723 |
-
assert certdata
|
724 |
-
|
725 |
-
data_length = CoreFoundation.CFDataGetLength(certdata)
|
726 |
-
data_buffer = CoreFoundation.CFDataGetBytePtr(certdata)
|
727 |
-
der_bytes = ctypes.string_at(data_buffer, data_length)
|
728 |
-
finally:
|
729 |
-
if certdata:
|
730 |
-
CoreFoundation.CFRelease(certdata)
|
731 |
-
if trust:
|
732 |
-
CoreFoundation.CFRelease(trust)
|
733 |
-
|
734 |
-
return der_bytes
|
735 |
-
|
736 |
-
def version(self):
|
737 |
-
protocol = Security.SSLProtocol()
|
738 |
-
result = Security.SSLGetNegotiatedProtocolVersion(
|
739 |
-
self.context, ctypes.byref(protocol)
|
740 |
-
)
|
741 |
-
_assert_no_error(result)
|
742 |
-
if protocol.value == SecurityConst.kTLSProtocol13:
|
743 |
-
raise ssl.SSLError("SecureTransport does not support TLS 1.3")
|
744 |
-
elif protocol.value == SecurityConst.kTLSProtocol12:
|
745 |
-
return "TLSv1.2"
|
746 |
-
elif protocol.value == SecurityConst.kTLSProtocol11:
|
747 |
-
return "TLSv1.1"
|
748 |
-
elif protocol.value == SecurityConst.kTLSProtocol1:
|
749 |
-
return "TLSv1"
|
750 |
-
elif protocol.value == SecurityConst.kSSLProtocol3:
|
751 |
-
return "SSLv3"
|
752 |
-
elif protocol.value == SecurityConst.kSSLProtocol2:
|
753 |
-
return "SSLv2"
|
754 |
-
else:
|
755 |
-
raise ssl.SSLError("Unknown TLS version: %r" % protocol)
|
756 |
-
|
757 |
-
def _reuse(self):
|
758 |
-
self._makefile_refs += 1
|
759 |
-
|
760 |
-
def _drop(self):
|
761 |
-
if self._makefile_refs < 1:
|
762 |
-
self.close()
|
763 |
-
else:
|
764 |
-
self._makefile_refs -= 1
|
765 |
-
|
766 |
-
|
767 |
-
if _fileobject: # Platform-specific: Python 2
|
768 |
-
|
769 |
-
def makefile(self, mode, bufsize=-1):
|
770 |
-
self._makefile_refs += 1
|
771 |
-
return _fileobject(self, mode, bufsize, close=True)
|
772 |
-
|
773 |
-
else: # Platform-specific: Python 3
|
774 |
-
|
775 |
-
def makefile(self, mode="r", buffering=None, *args, **kwargs):
|
776 |
-
# We disable buffering with SecureTransport because it conflicts with
|
777 |
-
# the buffering that ST does internally (see issue #1153 for more).
|
778 |
-
buffering = 0
|
779 |
-
return backport_makefile(self, mode, buffering, *args, **kwargs)
|
780 |
-
|
781 |
-
|
782 |
-
WrappedSocket.makefile = makefile
|
783 |
-
|
784 |
-
|
785 |
-
class SecureTransportContext(object):
|
786 |
-
"""
|
787 |
-
I am a wrapper class for the SecureTransport library, to translate the
|
788 |
-
interface of the standard library ``SSLContext`` object to calls into
|
789 |
-
SecureTransport.
|
790 |
-
"""
|
791 |
-
|
792 |
-
def __init__(self, protocol):
|
793 |
-
self._min_version, self._max_version = _protocol_to_min_max[protocol]
|
794 |
-
self._options = 0
|
795 |
-
self._verify = False
|
796 |
-
self._trust_bundle = None
|
797 |
-
self._client_cert = None
|
798 |
-
self._client_key = None
|
799 |
-
self._client_key_passphrase = None
|
800 |
-
self._alpn_protocols = None
|
801 |
-
|
802 |
-
@property
|
803 |
-
def check_hostname(self):
|
804 |
-
"""
|
805 |
-
SecureTransport cannot have its hostname checking disabled. For more,
|
806 |
-
see the comment on getpeercert() in this file.
|
807 |
-
"""
|
808 |
-
return True
|
809 |
-
|
810 |
-
@check_hostname.setter
|
811 |
-
def check_hostname(self, value):
|
812 |
-
"""
|
813 |
-
SecureTransport cannot have its hostname checking disabled. For more,
|
814 |
-
see the comment on getpeercert() in this file.
|
815 |
-
"""
|
816 |
-
pass
|
817 |
-
|
818 |
-
@property
|
819 |
-
def options(self):
|
820 |
-
# TODO: Well, crap.
|
821 |
-
#
|
822 |
-
# So this is the bit of the code that is the most likely to cause us
|
823 |
-
# trouble. Essentially we need to enumerate all of the SSL options that
|
824 |
-
# users might want to use and try to see if we can sensibly translate
|
825 |
-
# them, or whether we should just ignore them.
|
826 |
-
return self._options
|
827 |
-
|
828 |
-
@options.setter
|
829 |
-
def options(self, value):
|
830 |
-
# TODO: Update in line with above.
|
831 |
-
self._options = value
|
832 |
-
|
833 |
-
@property
|
834 |
-
def verify_mode(self):
|
835 |
-
return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE
|
836 |
-
|
837 |
-
@verify_mode.setter
|
838 |
-
def verify_mode(self, value):
|
839 |
-
self._verify = True if value == ssl.CERT_REQUIRED else False
|
840 |
-
|
841 |
-
def set_default_verify_paths(self):
|
842 |
-
# So, this has to do something a bit weird. Specifically, what it does
|
843 |
-
# is nothing.
|
844 |
-
#
|
845 |
-
# This means that, if we had previously had load_verify_locations
|
846 |
-
# called, this does not undo that. We need to do that because it turns
|
847 |
-
# out that the rest of the urllib3 code will attempt to load the
|
848 |
-
# default verify paths if it hasn't been told about any paths, even if
|
849 |
-
# the context itself was sometime earlier. We resolve that by just
|
850 |
-
# ignoring it.
|
851 |
-
pass
|
852 |
-
|
853 |
-
def load_default_certs(self):
|
854 |
-
return self.set_default_verify_paths()
|
855 |
-
|
856 |
-
def set_ciphers(self, ciphers):
|
857 |
-
# For now, we just require the default cipher string.
|
858 |
-
if ciphers != util.ssl_.DEFAULT_CIPHERS:
|
859 |
-
raise ValueError("SecureTransport doesn't support custom cipher strings")
|
860 |
-
|
861 |
-
def load_verify_locations(self, cafile=None, capath=None, cadata=None):
|
862 |
-
# OK, we only really support cadata and cafile.
|
863 |
-
if capath is not None:
|
864 |
-
raise ValueError("SecureTransport does not support cert directories")
|
865 |
-
|
866 |
-
# Raise if cafile does not exist.
|
867 |
-
if cafile is not None:
|
868 |
-
with open(cafile):
|
869 |
-
pass
|
870 |
-
|
871 |
-
self._trust_bundle = cafile or cadata
|
872 |
-
|
873 |
-
def load_cert_chain(self, certfile, keyfile=None, password=None):
|
874 |
-
self._client_cert = certfile
|
875 |
-
self._client_key = keyfile
|
876 |
-
self._client_cert_passphrase = password
|
877 |
-
|
878 |
-
def set_alpn_protocols(self, protocols):
|
879 |
-
"""
|
880 |
-
Sets the ALPN protocols that will later be set on the context.
|
881 |
-
|
882 |
-
Raises a NotImplementedError if ALPN is not supported.
|
883 |
-
"""
|
884 |
-
if not hasattr(Security, "SSLSetALPNProtocols"):
|
885 |
-
raise NotImplementedError(
|
886 |
-
"SecureTransport supports ALPN only in macOS 10.12+"
|
887 |
-
)
|
888 |
-
self._alpn_protocols = [six.ensure_binary(p) for p in protocols]
|
889 |
-
|
890 |
-
def wrap_socket(
|
891 |
-
self,
|
892 |
-
sock,
|
893 |
-
server_side=False,
|
894 |
-
do_handshake_on_connect=True,
|
895 |
-
suppress_ragged_eofs=True,
|
896 |
-
server_hostname=None,
|
897 |
-
):
|
898 |
-
# So, what do we do here? Firstly, we assert some properties. This is a
|
899 |
-
# stripped down shim, so there is some functionality we don't support.
|
900 |
-
# See PEP 543 for the real deal.
|
901 |
-
assert not server_side
|
902 |
-
assert do_handshake_on_connect
|
903 |
-
assert suppress_ragged_eofs
|
904 |
-
|
905 |
-
# Ok, we're good to go. Now we want to create the wrapped socket object
|
906 |
-
# and store it in the appropriate place.
|
907 |
-
wrapped_socket = WrappedSocket(sock)
|
908 |
-
|
909 |
-
# Now we can handshake
|
910 |
-
wrapped_socket.handshake(
|
911 |
-
server_hostname,
|
912 |
-
self._verify,
|
913 |
-
self._trust_bundle,
|
914 |
-
self._min_version,
|
915 |
-
self._max_version,
|
916 |
-
self._client_cert,
|
917 |
-
self._client_key,
|
918 |
-
self._client_key_passphrase,
|
919 |
-
self._alpn_protocols,
|
920 |
-
)
|
921 |
-
return wrapped_socket
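As the module docstring above notes, the deleted contrib module is activated by injecting it into urllib3; the lines below simply repeat that documented usage:

# Documented usage from the module docstring: monkey-patch urllib3's
# SSLContext with the SecureTransport-backed implementation on macOS.
import pip._vendor.urllib3.contrib.securetransport as securetransport

securetransport.inject_into_urllib3()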
spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_x.py
DELETED
@@ -1,15 +0,0 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.

import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.33
        self.width = 1.25
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
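A short sketch of how this experiment file is typically consumed; instantiating Exp only exposes the scaled depth/width shown above, and anything beyond those attributes would be an assumption about the wider YOLOX API:

# Minimal sketch: the Exp subclass only overrides model scaling factors.
exp = Exp()
print(exp.depth, exp.width, exp.exp_name)  # 1.33 1.25 'yolox_x'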
spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_7mers.py
DELETED
@@ -1,103 +0,0 @@
# https://github.com/c1ph3rr/Deep-Residual-Learning-for-Image-Recognition/blob/master/Resnet50.py
from pathlib import Path
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
    Input,
    Conv2D,
    Dense,
    MaxPool2D,
    GlobalAveragePooling2D,
    Add,
    Activation,
    BatchNormalization,
    ZeroPadding2D,
)

# Reference name of model
MODEL_NAME = str(Path(__file__).resolve().stem)

def identity_block(inp, filters, kernel_size, block, layer):

    f1, f2, f3 = filters

    conv_name = 'id_conv_b' + block + '_l' + layer
    batch_name = 'id_batch_b' + block + '_l' + layer

    x = Conv2D(filters=f1, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_a')(inp)
    x = BatchNormalization(name=batch_name + '_a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(x)
    x = BatchNormalization(name=batch_name + '_b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(x)
    x = BatchNormalization(name=batch_name + '_c')(x)

    add = Add()([inp, x])
    x = Activation('relu')(add)

    return x


def convolutional_block(inp, filters, kernel_size, block, layer, strides=2):

    f1, f2, f3 = filters

    conv_name = 'res_conv_b' + block + '_l' + layer
    batch_name = 'res_batch_b' + block + '_l' + layer

    y = Conv2D(filters=f1, kernel_size=1, padding='same', strides=strides, kernel_initializer='he_normal', name=conv_name + '_a')(inp)
    y = BatchNormalization(name=batch_name + '_a')(y)
    y = Activation('relu')(y)

    y = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(y)
    y = BatchNormalization(name=batch_name + '_b')(y)
    y = Activation('relu')(y)

    y = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(y)
    y = BatchNormalization(name=batch_name + '_c')(y)

    shortcut = Conv2D(filters=f3, kernel_size=1, strides=strides, kernel_initializer='he_normal', name=conv_name + '_shortcut')(inp)
    shortcut = BatchNormalization(name=batch_name + '_shortcut')(shortcut)

    add = Add()([shortcut, y])
    y = Activation('relu')(add)

    return y

def get_model(n_outputs):

    inp = Input(shape=(128, 128, 1), name='input')
    padd = ZeroPadding2D(3)(inp)

    conv1 = Conv2D(64, 7, strides=2, padding='valid', name='conv1')(padd)
    conv1 = BatchNormalization(name='batch2')(conv1)
    conv1 = Activation('relu')(conv1)
    conv1 = ZeroPadding2D(1)(conv1)
    conv1 = MaxPool2D(3, 2)(conv1)

    conv2 = convolutional_block(conv1, [64,64,256], 3, '2', '1', strides=1)
    conv2 = identity_block(conv2, [64,64,256], 3, '2', '2')
    conv2 = identity_block(conv2, [64,64,256], 3, '2', '3')

    conv3 = convolutional_block(conv2, [128,128,512], 3, '3', '1')
    conv3 = identity_block(conv3, [128,128,512], 3, '3', '2')
    conv3 = identity_block(conv3, [128,128,512], 3, '3', '3')
    conv3 = identity_block(conv3, [128,128,512], 3, '3', '4')

    conv4 = convolutional_block(conv3, [256,256,1024], 3, '4', '1')
    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '2')
    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '3')
    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '4')
    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '5')
    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '6')

    conv5 = convolutional_block(conv4, [512,512,2048], 3, '5', '1')
    conv5 = identity_block(conv5, [512,512,2048], 3, '5', '2')
    conv5 = identity_block(conv5, [512,512,2048], 3, '5', '3')

    avg_pool = GlobalAveragePooling2D()(conv5)
    out = Dense(n_outputs, activation='softmax')(avg_pool)

    return Model(inp, out)
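For reference, a minimal sketch of how the deleted builder would be used; the output count of 2 is an arbitrary placeholder, not a value taken from the project:

# Build the ResNet-50 variant for 128x128x1 FCGR inputs and inspect it.
model = get_model(n_outputs=2)
model.summary()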
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/_types.py
DELETED
@@ -1,10 +0,0 @@
# SPDX-License-Identifier: MIT
# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
# Licensed to PSF under a Contributor Agreement.

from typing import Any, Callable, Tuple

# Type annotations
ParseFloat = Callable[[str], Any]
Key = Tuple[str, ...]
Pos = int
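A small illustration of how these aliases read in a signature; the helper itself is hypothetical and not part of tomli:

# Hypothetical use of the deleted aliases in a signature.
def parse_number(token: str, parse_float: ParseFloat) -> Any:
    return parse_float(token)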
spaces/BilalSardar/Gpt4All/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Gpt4All
emoji: 🐨
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CHDCruze/entertainmentbybhdcruze/style.css
DELETED
@@ -1,28 +0,0 @@
body {
  padding: 2rem;
  font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
}

h1 {
  font-size: 16px;
  margin-top: 0;
}

p {
  color: rgb(107, 114, 128);
  font-size: 15px;
  margin-bottom: 10px;
  margin-top: 5px;
}

.card {
  max-width: 620px;
  margin: 0 auto;
  padding: 16px;
  border: 1px solid lightgray;
  border-radius: 16px;
}

.card p:last-child {
  margin-bottom: 0;
}
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/__init__.py
DELETED
@@ -1,4 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from .config import add_pointrend_config
from .coarse_mask_head import CoarseMaskHead
from .roi_heads import PointRendROIHeads
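Typical use of the re-exported helper, following the usual detectron2 config convention (a sketch only; the surrounding config handling is assumed, not taken from this repository):

# Extend a detectron2 config with the PointRend-specific options
# exported by this package.
from detectron2.config import get_cfg

cfg = get_cfg()
add_pointrend_config(cfg)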
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/_static/mathjax_wikipedia.user.js
DELETED
@@ -1,30 +0,0 @@
// ==UserScript==
// @name          MathJax in Wikipedia
// @namespace     http://www.mathjax.org/
// @description   Insert MathJax into Wikipedia pages
// @include       http://en.wikipedia.org/wiki/*
// ==/UserScript==

if ((window.unsafeWindow == null ? window : unsafeWindow).MathJax == null) {
  //
  //  Replace the images with MathJax scripts of type math/tex
  //
  var images = document.getElementsByTagName('img'), count = 0;
  for (var i = images.length - 1; i >= 0; i--) {
    var img = images[i];
    if (img.className === "tex") {
      var script = document.createElement("script"); script.type = "math/tex";
      if (window.opera) {script.innerHTML = img.alt} else {script.text = img.alt}
      img.parentNode.replaceChild(script,img); count++;
    }
  }
  if (count) {
    //
    //  Load MathJax and have it process the page
    //
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_CHTML-full";
    document.getElementsByTagName("head")[0].appendChild(script);
  }
}
spaces/CVPR/LIVE/thrust/thrust/system/omp/vector.h
DELETED
@@ -1,70 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file thrust/system/omp/vector.h
 *  \brief A dynamically-sizable array of elements which reside in memory available to
 *         Thrust's OpenMP system.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/system/omp/memory.h>
#include <thrust/detail/vector_base.h>
#include <vector>

namespace thrust
{

// forward declaration of host_vector
// XXX why is this here? it doesn't seem necessary for anything below
template<typename T, typename Allocator> class host_vector;

namespace system
{
namespace omp
{

/*! \p omp::vector is a container that supports random access to elements,
 *  constant time removal of elements at the end, and linear time insertion
 *  and removal of elements at the beginning or in the middle. The number of
 *  elements in a \p omp::vector may vary dynamically; memory management is
 *  automatic. The elements contained in an \p omp::vector reside in memory
 *  available to the \p omp system.
 *
 *  \tparam T The element type of the \p omp::vector.
 *  \tparam Allocator The allocator type of the \p omp::vector. Defaults to \p omp::allocator.
 *
 *  \see http://www.sgi.com/tech/stl/Vector.html
 *  \see host_vector For the documentation of the complete interface which is
 *                   shared by \p omp::vector
 *  \see device_vector
 */
template<typename T, typename Allocator = allocator<T> >
using vector = thrust::detail::vector_base<T, Allocator>;

} // end omp
} // end system

// alias system::omp names at top-level
namespace omp
{

using thrust::system::omp::vector;

} // end omp

} // end thrust
spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/config-checkpoint.py
DELETED
@@ -1,34 +0,0 @@
from transformers import PretrainedConfig
from typing import List


class MonoSceneConfig(PretrainedConfig):

    def __init__(
        self,
        block_type="bottleneck",
        layers: List[int] = [3, 4, 6, 3],
        num_classes: int = 1000,
        input_channels: int = 3,
        cardinality: int = 1,
        base_width: int = 64,
        stem_width: int = 64,
        stem_type: str = "",
        avg_down: bool = False,
        **kwargs,
    ):
        self.block_type = block_type
        self.layers = layers
        self.num_classes = num_classes
        self.input_channels = input_channels
        self.cardinality = cardinality
        self.base_width = base_width
        self.stem_width = stem_width
        self.stem_type = stem_type
        self.avg_down = avg_down
        super().__init__(**kwargs)
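A minimal sketch of instantiating the deleted config class; the argument values are the illustrative defaults from the constructor above, not values taken from a released checkpoint:

# Build a config with the defaults defined above and serialize it.
config = MonoSceneConfig(block_type="bottleneck", num_classes=1000)
print(config.to_json_string())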
spaces/CVPR/WALT/mmdet/models/dense_heads/gfl_head.py
DELETED
@@ -1,647 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule, Scale, bias_init_with_prob, normal_init
from mmcv.runner import force_fp32

from mmdet.core import (anchor_inside_flags, bbox2distance, bbox_overlaps,
                        build_assigner, build_sampler, distance2bbox,
                        images_to_levels, multi_apply, multiclass_nms,
                        reduce_mean, unmap)
from ..builder import HEADS, build_loss
from .anchor_head import AnchorHead


class Integral(nn.Module):
    """A fixed layer for calculating integral result from distribution.

    This layer calculates the target location by :math: `sum{P(y_i) * y_i}`,
    P(y_i) denotes the softmax vector that represents the discrete distribution
    y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max}

    Args:
        reg_max (int): The maximal value of the discrete set. Default: 16. You
            may want to reset it according to your new dataset or related
            settings.
    """

    def __init__(self, reg_max=16):
        super(Integral, self).__init__()
        self.reg_max = reg_max
        self.register_buffer('project',
                             torch.linspace(0, self.reg_max, self.reg_max + 1))

    def forward(self, x):
        """Forward feature from the regression head to get integral result of
        bounding box location.

        Args:
            x (Tensor): Features of the regression head, shape (N, 4*(n+1)),
                n is self.reg_max.

        Returns:
            x (Tensor): Integral result of box locations, i.e., distance
                offsets from the box center in four directions, shape (N, 4).
        """
        x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1)
        x = F.linear(x, self.project.type_as(x)).reshape(-1, 4)
        return x


@HEADS.register_module()
class GFLHead(AnchorHead):
    """Generalized Focal Loss: Learning Qualified and Distributed Bounding
    Boxes for Dense Object Detection.

    GFL head structure is similar with ATSS, however GFL uses
    1) joint representation for classification and localization quality, and
    2) flexible General distribution for bounding box locations,
    which are supervised by
    Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively

    https://arxiv.org/abs/2006.04388

    Args:
        num_classes (int): Number of categories excluding the background
            category.
        in_channels (int): Number of channels in the input feature map.
        stacked_convs (int): Number of conv layers in cls and reg tower.
            Default: 4.
        conv_cfg (dict): dictionary to construct and config conv layer.
            Default: None.
        norm_cfg (dict): dictionary to construct and config norm layer.
            Default: dict(type='GN', num_groups=32, requires_grad=True).
        loss_qfl (dict): Config of Quality Focal Loss (QFL).
        reg_max (int): Max value of integral set :math: `{0, ..., reg_max}`
            in QFL setting. Default: 16.
    Example:
        >>> self = GFLHead(11, 7)
        >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
        >>> cls_quality_score, bbox_pred = self.forward(feats)
        >>> assert len(cls_quality_score) == len(self.scales)
    """

    def __init__(self,
                 num_classes,
                 in_channels,
                 stacked_convs=4,
                 conv_cfg=None,
                 norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
                 loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
                 reg_max=16,
                 **kwargs):
        self.stacked_convs = stacked_convs
        self.conv_cfg = conv_cfg
        self.norm_cfg = norm_cfg
        self.reg_max = reg_max
        super(GFLHead, self).__init__(num_classes, in_channels, **kwargs)

        self.sampling = False
        if self.train_cfg:
            self.assigner = build_assigner(self.train_cfg.assigner)
            # SSD sampling=False so use PseudoSampler
            sampler_cfg = dict(type='PseudoSampler')
            self.sampler = build_sampler(sampler_cfg, context=self)

        self.integral = Integral(self.reg_max)
        self.loss_dfl = build_loss(loss_dfl)

    def _init_layers(self):
        """Initialize layers of the head."""
        self.relu = nn.ReLU(inplace=True)
        self.cls_convs = nn.ModuleList()
        self.reg_convs = nn.ModuleList()
        for i in range(self.stacked_convs):
            chn = self.in_channels if i == 0 else self.feat_channels
            self.cls_convs.append(
                ConvModule(
                    chn,
                    self.feat_channels,
                    3,
                    stride=1,
                    padding=1,
                    conv_cfg=self.conv_cfg,
                    norm_cfg=self.norm_cfg))
            self.reg_convs.append(
                ConvModule(
                    chn,
                    self.feat_channels,
                    3,
                    stride=1,
                    padding=1,
                    conv_cfg=self.conv_cfg,
                    norm_cfg=self.norm_cfg))
        assert self.num_anchors == 1, 'anchor free version'
        self.gfl_cls = nn.Conv2d(
            self.feat_channels, self.cls_out_channels, 3, padding=1)
        self.gfl_reg = nn.Conv2d(
            self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1)
        self.scales = nn.ModuleList(
            [Scale(1.0) for _ in self.anchor_generator.strides])

    def init_weights(self):
        """Initialize weights of the head."""
        for m in self.cls_convs:
            normal_init(m.conv, std=0.01)
        for m in self.reg_convs:
            normal_init(m.conv, std=0.01)
        bias_cls = bias_init_with_prob(0.01)
        normal_init(self.gfl_cls, std=0.01, bias=bias_cls)
        normal_init(self.gfl_reg, std=0.01)

    def forward(self, feats):
        """Forward features from the upstream network.

        Args:
            feats (tuple[Tensor]): Features from the upstream network, each is
                a 4D-tensor.

        Returns:
            tuple: Usually a tuple of classification scores and bbox prediction
                cls_scores (list[Tensor]): Classification and quality (IoU)
                    joint scores for all scale levels, each is a 4D-tensor,
                    the channel number is num_classes.
                bbox_preds (list[Tensor]): Box distribution logits for all
                    scale levels, each is a 4D-tensor, the channel number is
                    4*(n+1), n is max value of integral set.
        """
        return multi_apply(self.forward_single, feats, self.scales)

    def forward_single(self, x, scale):
        """Forward feature of a single scale level.

        Args:
            x (Tensor): Features of a single scale level.
            scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize
                the bbox prediction.

        Returns:
            tuple:
                cls_score (Tensor): Cls and quality joint scores for a single
                    scale level the channel number is num_classes.
                bbox_pred (Tensor): Box distribution logits for a single scale
                    level, the channel number is 4*(n+1), n is max value of
                    integral set.
        """
        cls_feat = x
        reg_feat = x
        for cls_conv in self.cls_convs:
            cls_feat = cls_conv(cls_feat)
        for reg_conv in self.reg_convs:
            reg_feat = reg_conv(reg_feat)
        cls_score = self.gfl_cls(cls_feat)
        bbox_pred = scale(self.gfl_reg(reg_feat)).float()
        return cls_score, bbox_pred

    def anchor_center(self, anchors):
        """Get anchor centers from anchors.

        Args:
            anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format.

        Returns:
            Tensor: Anchor centers with shape (N, 2), "xy" format.
        """
        anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2
        anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2
        return torch.stack([anchors_cx, anchors_cy], dim=-1)

    def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights,
                    bbox_targets, stride, num_total_samples):
-
"""Compute loss of a single scale level.
|
212 |
-
|
213 |
-
Args:
|
214 |
-
anchors (Tensor): Box reference for each scale level with shape
|
215 |
-
(N, num_total_anchors, 4).
|
216 |
-
cls_score (Tensor): Cls and quality joint scores for each scale
|
217 |
-
level has shape (N, num_classes, H, W).
|
218 |
-
bbox_pred (Tensor): Box distribution logits for each scale
|
219 |
-
level with shape (N, 4*(n+1), H, W), n is max value of integral
|
220 |
-
set.
|
221 |
-
labels (Tensor): Labels of each anchors with shape
|
222 |
-
(N, num_total_anchors).
|
223 |
-
label_weights (Tensor): Label weights of each anchor with shape
|
224 |
-
(N, num_total_anchors)
|
225 |
-
bbox_targets (Tensor): BBox regression targets of each anchor wight
|
226 |
-
shape (N, num_total_anchors, 4).
|
227 |
-
stride (tuple): Stride in this scale level.
|
228 |
-
num_total_samples (int): Number of positive samples that is
|
229 |
-
reduced over all GPUs.
|
230 |
-
|
231 |
-
Returns:
|
232 |
-
dict[str, Tensor]: A dictionary of loss components.
|
233 |
-
"""
|
234 |
-
assert stride[0] == stride[1], 'h stride is not equal to w stride!'
|
235 |
-
anchors = anchors.reshape(-1, 4)
|
236 |
-
cls_score = cls_score.permute(0, 2, 3,
|
237 |
-
1).reshape(-1, self.cls_out_channels)
|
238 |
-
bbox_pred = bbox_pred.permute(0, 2, 3,
|
239 |
-
1).reshape(-1, 4 * (self.reg_max + 1))
|
240 |
-
bbox_targets = bbox_targets.reshape(-1, 4)
|
241 |
-
labels = labels.reshape(-1)
|
242 |
-
label_weights = label_weights.reshape(-1)
|
243 |
-
|
244 |
-
# FG cat_id: [0, num_classes -1], BG cat_id: num_classes
|
245 |
-
bg_class_ind = self.num_classes
|
246 |
-
pos_inds = ((labels >= 0)
|
247 |
-
& (labels < bg_class_ind)).nonzero().squeeze(1)
|
248 |
-
score = label_weights.new_zeros(labels.shape)
|
249 |
-
|
250 |
-
if len(pos_inds) > 0:
|
251 |
-
pos_bbox_targets = bbox_targets[pos_inds]
|
252 |
-
pos_bbox_pred = bbox_pred[pos_inds]
|
253 |
-
pos_anchors = anchors[pos_inds]
|
254 |
-
pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0]
|
255 |
-
|
256 |
-
weight_targets = cls_score.detach().sigmoid()
|
257 |
-
weight_targets = weight_targets.max(dim=1)[0][pos_inds]
|
258 |
-
pos_bbox_pred_corners = self.integral(pos_bbox_pred)
|
259 |
-
pos_decode_bbox_pred = distance2bbox(pos_anchor_centers,
|
260 |
-
pos_bbox_pred_corners)
|
261 |
-
pos_decode_bbox_targets = pos_bbox_targets / stride[0]
|
262 |
-
score[pos_inds] = bbox_overlaps(
|
263 |
-
pos_decode_bbox_pred.detach(),
|
264 |
-
pos_decode_bbox_targets,
|
265 |
-
is_aligned=True)
|
266 |
-
pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1)
|
267 |
-
target_corners = bbox2distance(pos_anchor_centers,
|
268 |
-
pos_decode_bbox_targets,
|
269 |
-
self.reg_max).reshape(-1)
|
270 |
-
|
271 |
-
# regression loss
|
272 |
-
loss_bbox = self.loss_bbox(
|
273 |
-
pos_decode_bbox_pred,
|
274 |
-
pos_decode_bbox_targets,
|
275 |
-
weight=weight_targets,
|
276 |
-
avg_factor=1.0)
|
277 |
-
|
278 |
-
# dfl loss
|
279 |
-
loss_dfl = self.loss_dfl(
|
280 |
-
pred_corners,
|
281 |
-
target_corners,
|
282 |
-
weight=weight_targets[:, None].expand(-1, 4).reshape(-1),
|
283 |
-
avg_factor=4.0)
|
284 |
-
else:
|
285 |
-
loss_bbox = bbox_pred.sum() * 0
|
286 |
-
loss_dfl = bbox_pred.sum() * 0
|
287 |
-
weight_targets = bbox_pred.new_tensor(0)
|
288 |
-
|
289 |
-
# cls (qfl) loss
|
290 |
-
loss_cls = self.loss_cls(
|
291 |
-
cls_score, (labels, score),
|
292 |
-
weight=label_weights,
|
293 |
-
avg_factor=num_total_samples)
|
294 |
-
|
295 |
-
return loss_cls, loss_bbox, loss_dfl, weight_targets.sum()
|
296 |
-
|
297 |
-
@force_fp32(apply_to=('cls_scores', 'bbox_preds'))
|
298 |
-
def loss(self,
|
299 |
-
cls_scores,
|
300 |
-
bbox_preds,
|
301 |
-
gt_bboxes,
|
302 |
-
gt_labels,
|
303 |
-
img_metas,
|
304 |
-
gt_bboxes_ignore=None):
|
305 |
-
"""Compute losses of the head.
|
306 |
-
|
307 |
-
Args:
|
308 |
-
cls_scores (list[Tensor]): Cls and quality scores for each scale
|
309 |
-
level has shape (N, num_classes, H, W).
|
310 |
-
bbox_preds (list[Tensor]): Box distribution logits for each scale
|
311 |
-
level with shape (N, 4*(n+1), H, W), n is max value of integral
|
312 |
-
set.
|
313 |
-
gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
|
314 |
-
shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
|
315 |
-
gt_labels (list[Tensor]): class indices corresponding to each box
|
316 |
-
img_metas (list[dict]): Meta information of each image, e.g.,
|
317 |
-
image size, scaling factor, etc.
|
318 |
-
gt_bboxes_ignore (list[Tensor] | None): specify which bounding
|
319 |
-
boxes can be ignored when computing the loss.
|
320 |
-
|
321 |
-
Returns:
|
322 |
-
dict[str, Tensor]: A dictionary of loss components.
|
323 |
-
"""
|
324 |
-
|
325 |
-
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
|
326 |
-
assert len(featmap_sizes) == self.anchor_generator.num_levels
|
327 |
-
|
328 |
-
device = cls_scores[0].device
|
329 |
-
anchor_list, valid_flag_list = self.get_anchors(
|
330 |
-
featmap_sizes, img_metas, device=device)
|
331 |
-
label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
|
332 |
-
|
333 |
-
cls_reg_targets = self.get_targets(
|
334 |
-
anchor_list,
|
335 |
-
valid_flag_list,
|
336 |
-
gt_bboxes,
|
337 |
-
img_metas,
|
338 |
-
gt_bboxes_ignore_list=gt_bboxes_ignore,
|
339 |
-
gt_labels_list=gt_labels,
|
340 |
-
label_channels=label_channels)
|
341 |
-
if cls_reg_targets is None:
|
342 |
-
return None
|
343 |
-
|
344 |
-
(anchor_list, labels_list, label_weights_list, bbox_targets_list,
|
345 |
-
bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
|
346 |
-
|
347 |
-
num_total_samples = reduce_mean(
|
348 |
-
torch.tensor(num_total_pos, dtype=torch.float,
|
349 |
-
device=device)).item()
|
350 |
-
num_total_samples = max(num_total_samples, 1.0)
|
351 |
-
|
352 |
-
losses_cls, losses_bbox, losses_dfl,\
|
353 |
-
avg_factor = multi_apply(
|
354 |
-
self.loss_single,
|
355 |
-
anchor_list,
|
356 |
-
cls_scores,
|
357 |
-
bbox_preds,
|
358 |
-
labels_list,
|
359 |
-
label_weights_list,
|
360 |
-
bbox_targets_list,
|
361 |
-
self.anchor_generator.strides,
|
362 |
-
num_total_samples=num_total_samples)
|
363 |
-
|
364 |
-
avg_factor = sum(avg_factor)
|
365 |
-
avg_factor = reduce_mean(avg_factor).item()
|
366 |
-
losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox))
|
367 |
-
losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl))
|
368 |
-
return dict(
|
369 |
-
loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl)
|
370 |
-
|
371 |
-
def _get_bboxes(self,
|
372 |
-
cls_scores,
|
373 |
-
bbox_preds,
|
374 |
-
mlvl_anchors,
|
375 |
-
img_shapes,
|
376 |
-
scale_factors,
|
377 |
-
cfg,
|
378 |
-
rescale=False,
|
379 |
-
with_nms=True):
|
380 |
-
"""Transform outputs for a single batch item into labeled boxes.
|
381 |
-
|
382 |
-
Args:
|
383 |
-
cls_scores (list[Tensor]): Box scores for a single scale level
|
384 |
-
has shape (N, num_classes, H, W).
|
385 |
-
bbox_preds (list[Tensor]): Box distribution logits for a single
|
386 |
-
scale level with shape (N, 4*(n+1), H, W), n is max value of
|
387 |
-
integral set.
|
388 |
-
mlvl_anchors (list[Tensor]): Box reference for a single scale level
|
389 |
-
with shape (num_total_anchors, 4).
|
390 |
-
img_shapes (list[tuple[int]]): Shape of the input image,
|
391 |
-
list[(height, width, 3)].
|
392 |
-
scale_factors (list[ndarray]): Scale factor of the image arange as
|
393 |
-
(w_scale, h_scale, w_scale, h_scale).
|
394 |
-
cfg (mmcv.Config | None): Test / postprocessing configuration,
|
395 |
-
if None, test_cfg would be used.
|
396 |
-
rescale (bool): If True, return boxes in original image space.
|
397 |
-
Default: False.
|
398 |
-
with_nms (bool): If True, do nms before return boxes.
|
399 |
-
Default: True.
|
400 |
-
|
401 |
-
Returns:
|
402 |
-
list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
|
403 |
-
The first item is an (n, 5) tensor, where 5 represent
|
404 |
-
(tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
|
405 |
-
The shape of the second tensor in the tuple is (n,), and
|
406 |
-
each element represents the class label of the corresponding
|
407 |
-
box.
|
408 |
-
"""
|
409 |
-
cfg = self.test_cfg if cfg is None else cfg
|
410 |
-
assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
|
411 |
-
batch_size = cls_scores[0].shape[0]
|
412 |
-
|
413 |
-
mlvl_bboxes = []
|
414 |
-
mlvl_scores = []
|
415 |
-
for cls_score, bbox_pred, stride, anchors in zip(
|
416 |
-
cls_scores, bbox_preds, self.anchor_generator.strides,
|
417 |
-
mlvl_anchors):
|
418 |
-
assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
|
419 |
-
assert stride[0] == stride[1]
|
420 |
-
scores = cls_score.permute(0, 2, 3, 1).reshape(
|
421 |
-
batch_size, -1, self.cls_out_channels).sigmoid()
|
422 |
-
bbox_pred = bbox_pred.permute(0, 2, 3, 1)
|
423 |
-
|
424 |
-
bbox_pred = self.integral(bbox_pred) * stride[0]
|
425 |
-
bbox_pred = bbox_pred.reshape(batch_size, -1, 4)
|
426 |
-
|
427 |
-
nms_pre = cfg.get('nms_pre', -1)
|
428 |
-
if nms_pre > 0 and scores.shape[1] > nms_pre:
|
429 |
-
max_scores, _ = scores.max(-1)
|
430 |
-
_, topk_inds = max_scores.topk(nms_pre)
|
431 |
-
batch_inds = torch.arange(batch_size).view(
|
432 |
-
-1, 1).expand_as(topk_inds).long()
|
433 |
-
anchors = anchors[topk_inds, :]
|
434 |
-
bbox_pred = bbox_pred[batch_inds, topk_inds, :]
|
435 |
-
scores = scores[batch_inds, topk_inds, :]
|
436 |
-
else:
|
437 |
-
anchors = anchors.expand_as(bbox_pred)
|
438 |
-
|
439 |
-
bboxes = distance2bbox(
|
440 |
-
self.anchor_center(anchors), bbox_pred, max_shape=img_shapes)
|
441 |
-
mlvl_bboxes.append(bboxes)
|
442 |
-
mlvl_scores.append(scores)
|
443 |
-
|
444 |
-
batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
|
445 |
-
if rescale:
|
446 |
-
batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
|
447 |
-
scale_factors).unsqueeze(1)
|
448 |
-
|
449 |
-
batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
|
450 |
-
# Add a dummy background class to the backend when using sigmoid
|
451 |
-
# remind that we set FG labels to [0, num_class-1] since mmdet v2.0
|
452 |
-
# BG cat_id: num_class
|
453 |
-
padding = batch_mlvl_scores.new_zeros(batch_size,
|
454 |
-
batch_mlvl_scores.shape[1], 1)
|
455 |
-
batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
|
456 |
-
|
457 |
-
if with_nms:
|
458 |
-
det_results = []
|
459 |
-
for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
|
460 |
-
batch_mlvl_scores):
|
461 |
-
det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores,
|
462 |
-
cfg.score_thr, cfg.nms,
|
463 |
-
cfg.max_per_img)
|
464 |
-
det_results.append(tuple([det_bbox, det_label]))
|
465 |
-
else:
|
466 |
-
det_results = [
|
467 |
-
tuple(mlvl_bs)
|
468 |
-
for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores)
|
469 |
-
]
|
470 |
-
return det_results
|
471 |
-
|
472 |
-
def get_targets(self,
|
473 |
-
anchor_list,
|
474 |
-
valid_flag_list,
|
475 |
-
gt_bboxes_list,
|
476 |
-
img_metas,
|
477 |
-
gt_bboxes_ignore_list=None,
|
478 |
-
gt_labels_list=None,
|
479 |
-
label_channels=1,
|
480 |
-
unmap_outputs=True):
|
481 |
-
"""Get targets for GFL head.
|
482 |
-
|
483 |
-
This method is almost the same as `AnchorHead.get_targets()`. Besides
|
484 |
-
returning the targets as the parent method does, it also returns the
|
485 |
-
anchors as the first element of the returned tuple.
|
486 |
-
"""
|
487 |
-
num_imgs = len(img_metas)
|
488 |
-
assert len(anchor_list) == len(valid_flag_list) == num_imgs
|
489 |
-
|
490 |
-
# anchor number of multi levels
|
491 |
-
num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
|
492 |
-
num_level_anchors_list = [num_level_anchors] * num_imgs
|
493 |
-
|
494 |
-
# concat all level anchors and flags to a single tensor
|
495 |
-
for i in range(num_imgs):
|
496 |
-
assert len(anchor_list[i]) == len(valid_flag_list[i])
|
497 |
-
anchor_list[i] = torch.cat(anchor_list[i])
|
498 |
-
valid_flag_list[i] = torch.cat(valid_flag_list[i])
|
499 |
-
|
500 |
-
# compute targets for each image
|
501 |
-
if gt_bboxes_ignore_list is None:
|
502 |
-
gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
|
503 |
-
if gt_labels_list is None:
|
504 |
-
gt_labels_list = [None for _ in range(num_imgs)]
|
505 |
-
(all_anchors, all_labels, all_label_weights, all_bbox_targets,
|
506 |
-
all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply(
|
507 |
-
self._get_target_single,
|
508 |
-
anchor_list,
|
509 |
-
valid_flag_list,
|
510 |
-
num_level_anchors_list,
|
511 |
-
gt_bboxes_list,
|
512 |
-
gt_bboxes_ignore_list,
|
513 |
-
gt_labels_list,
|
514 |
-
img_metas,
|
515 |
-
label_channels=label_channels,
|
516 |
-
unmap_outputs=unmap_outputs)
|
517 |
-
# no valid anchors
|
518 |
-
if any([labels is None for labels in all_labels]):
|
519 |
-
return None
|
520 |
-
# sampled anchors of all images
|
521 |
-
num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
|
522 |
-
num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
|
523 |
-
# split targets to a list w.r.t. multiple levels
|
524 |
-
anchors_list = images_to_levels(all_anchors, num_level_anchors)
|
525 |
-
labels_list = images_to_levels(all_labels, num_level_anchors)
|
526 |
-
label_weights_list = images_to_levels(all_label_weights,
|
527 |
-
num_level_anchors)
|
528 |
-
bbox_targets_list = images_to_levels(all_bbox_targets,
|
529 |
-
num_level_anchors)
|
530 |
-
bbox_weights_list = images_to_levels(all_bbox_weights,
|
531 |
-
num_level_anchors)
|
532 |
-
return (anchors_list, labels_list, label_weights_list,
|
533 |
-
bbox_targets_list, bbox_weights_list, num_total_pos,
|
534 |
-
num_total_neg)
|
535 |
-
|
536 |
-
def _get_target_single(self,
|
537 |
-
flat_anchors,
|
538 |
-
valid_flags,
|
539 |
-
num_level_anchors,
|
540 |
-
gt_bboxes,
|
541 |
-
gt_bboxes_ignore,
|
542 |
-
gt_labels,
|
543 |
-
img_meta,
|
544 |
-
label_channels=1,
|
545 |
-
unmap_outputs=True):
|
546 |
-
"""Compute regression, classification targets for anchors in a single
|
547 |
-
image.
|
548 |
-
|
549 |
-
Args:
|
550 |
-
flat_anchors (Tensor): Multi-level anchors of the image, which are
|
551 |
-
concatenated into a single tensor of shape (num_anchors, 4)
|
552 |
-
valid_flags (Tensor): Multi level valid flags of the image,
|
553 |
-
which are concatenated into a single tensor of
|
554 |
-
shape (num_anchors,).
|
555 |
-
num_level_anchors Tensor): Number of anchors of each scale level.
|
556 |
-
gt_bboxes (Tensor): Ground truth bboxes of the image,
|
557 |
-
shape (num_gts, 4).
|
558 |
-
gt_bboxes_ignore (Tensor): Ground truth bboxes to be
|
559 |
-
ignored, shape (num_ignored_gts, 4).
|
560 |
-
gt_labels (Tensor): Ground truth labels of each box,
|
561 |
-
shape (num_gts,).
|
562 |
-
img_meta (dict): Meta info of the image.
|
563 |
-
label_channels (int): Channel of label.
|
564 |
-
unmap_outputs (bool): Whether to map outputs back to the original
|
565 |
-
set of anchors.
|
566 |
-
|
567 |
-
Returns:
|
568 |
-
tuple: N is the number of total anchors in the image.
|
569 |
-
anchors (Tensor): All anchors in the image with shape (N, 4).
|
570 |
-
labels (Tensor): Labels of all anchors in the image with shape
|
571 |
-
(N,).
|
572 |
-
label_weights (Tensor): Label weights of all anchor in the
|
573 |
-
image with shape (N,).
|
574 |
-
bbox_targets (Tensor): BBox targets of all anchors in the
|
575 |
-
image with shape (N, 4).
|
576 |
-
bbox_weights (Tensor): BBox weights of all anchors in the
|
577 |
-
image with shape (N, 4).
|
578 |
-
pos_inds (Tensor): Indices of positive anchor with shape
|
579 |
-
(num_pos,).
|
580 |
-
neg_inds (Tensor): Indices of negative anchor with shape
|
581 |
-
(num_neg,).
|
582 |
-
"""
|
583 |
-
inside_flags = anchor_inside_flags(flat_anchors, valid_flags,
|
584 |
-
img_meta['img_shape'][:2],
|
585 |
-
self.train_cfg.allowed_border)
|
586 |
-
if not inside_flags.any():
|
587 |
-
return (None, ) * 7
|
588 |
-
# assign gt and sample anchors
|
589 |
-
anchors = flat_anchors[inside_flags, :]
|
590 |
-
|
591 |
-
num_level_anchors_inside = self.get_num_level_anchors_inside(
|
592 |
-
num_level_anchors, inside_flags)
|
593 |
-
assign_result = self.assigner.assign(anchors, num_level_anchors_inside,
|
594 |
-
gt_bboxes, gt_bboxes_ignore,
|
595 |
-
gt_labels)
|
596 |
-
|
597 |
-
sampling_result = self.sampler.sample(assign_result, anchors,
|
598 |
-
gt_bboxes)
|
599 |
-
|
600 |
-
num_valid_anchors = anchors.shape[0]
|
601 |
-
bbox_targets = torch.zeros_like(anchors)
|
602 |
-
bbox_weights = torch.zeros_like(anchors)
|
603 |
-
labels = anchors.new_full((num_valid_anchors, ),
|
604 |
-
self.num_classes,
|
605 |
-
dtype=torch.long)
|
606 |
-
label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float)
|
607 |
-
|
608 |
-
pos_inds = sampling_result.pos_inds
|
609 |
-
neg_inds = sampling_result.neg_inds
|
610 |
-
if len(pos_inds) > 0:
|
611 |
-
pos_bbox_targets = sampling_result.pos_gt_bboxes
|
612 |
-
bbox_targets[pos_inds, :] = pos_bbox_targets
|
613 |
-
bbox_weights[pos_inds, :] = 1.0
|
614 |
-
if gt_labels is None:
|
615 |
-
# Only rpn gives gt_labels as None
|
616 |
-
# Foreground is the first class
|
617 |
-
labels[pos_inds] = 0
|
618 |
-
else:
|
619 |
-
labels[pos_inds] = gt_labels[
|
620 |
-
sampling_result.pos_assigned_gt_inds]
|
621 |
-
if self.train_cfg.pos_weight <= 0:
|
622 |
-
label_weights[pos_inds] = 1.0
|
623 |
-
else:
|
624 |
-
label_weights[pos_inds] = self.train_cfg.pos_weight
|
625 |
-
if len(neg_inds) > 0:
|
626 |
-
label_weights[neg_inds] = 1.0
|
627 |
-
|
628 |
-
# map up to original set of anchors
|
629 |
-
if unmap_outputs:
|
630 |
-
num_total_anchors = flat_anchors.size(0)
|
631 |
-
anchors = unmap(anchors, num_total_anchors, inside_flags)
|
632 |
-
labels = unmap(
|
633 |
-
labels, num_total_anchors, inside_flags, fill=self.num_classes)
|
634 |
-
label_weights = unmap(label_weights, num_total_anchors,
|
635 |
-
inside_flags)
|
636 |
-
bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags)
|
637 |
-
bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags)
|
638 |
-
|
639 |
-
return (anchors, labels, label_weights, bbox_targets, bbox_weights,
|
640 |
-
pos_inds, neg_inds)
|
641 |
-
|
642 |
-
def get_num_level_anchors_inside(self, num_level_anchors, inside_flags):
|
643 |
-
split_inside_flags = torch.split(inside_flags, num_level_anchors)
|
644 |
-
num_level_anchors_inside = [
|
645 |
-
int(flags.sum()) for flags in split_inside_flags
|
646 |
-
]
|
647 |
-
return num_level_anchors_inside
|
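
For readers of this hunk: the Integral module at the top decodes each box side as the expectation of a discretized distance distribution (the "General distribution" supervised by DFL). Below is a minimal, self-contained sketch of that decoding step; the reg_max value mirrors the head above, but the variable names and the random input are illustrative only, not part of the deleted file.

import torch
import torch.nn.functional as F

reg_max = 16                                   # integral set {0, ..., 16}, as in GFLHead above
logits = torch.randn(8, 4 * (reg_max + 1))     # fake regression output for 8 anchors

# Softmax over the (reg_max + 1) bins of each of the four box sides.
prob = F.softmax(logits.reshape(-1, reg_max + 1), dim=1)

# Expectation over the bin centers 0..reg_max gives one distance per side.
project = torch.linspace(0, reg_max, reg_max + 1)
distances = F.linear(prob, project[None, :]).reshape(-1, 4)

print(distances.shape)                         # torch.Size([8, 4])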
spaces/CVPR/lama-example/bin/gen_mask_dataset.py
DELETED
@@ -1,130 +0,0 @@
#!/usr/bin/env python3

import glob
import os
import shutil
import traceback

import PIL.Image as Image
import numpy as np
from joblib import Parallel, delayed

from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop
from saicinpainting.evaluation.utils import load_yaml, SmallMode
from saicinpainting.training.data.masks import MixedMaskGenerator


class MakeManyMasksWrapper:
    def __init__(self, impl, variants_n=2):
        self.impl = impl
        self.variants_n = variants_n

    def get_masks(self, img):
        img = np.transpose(np.array(img), (2, 0, 1))
        return [self.impl(img)[0] for _ in range(self.variants_n)]


def process_images(src_images, indir, outdir, config):
    if config.generator_kind == 'segmentation':
        mask_generator = SegmentationMask(**config.mask_generator_kwargs)
    elif config.generator_kind == 'random':
        variants_n = config.mask_generator_kwargs.pop('variants_n', 2)
        mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**config.mask_generator_kwargs),
                                              variants_n=variants_n)
    else:
        raise ValueError(f'Unexpected generator kind: {config.generator_kind}')

    max_tamper_area = config.get('max_tamper_area', 1)

    for infile in src_images:
        try:
            file_relpath = infile[len(indir):]
            img_outpath = os.path.join(outdir, file_relpath)
            os.makedirs(os.path.dirname(img_outpath), exist_ok=True)

            image = Image.open(infile).convert('RGB')

            # scale input image to output resolution and filter smaller images
            if min(image.size) < config.cropping.out_min_size:
                handle_small_mode = SmallMode(config.cropping.handle_small_mode)
                if handle_small_mode == SmallMode.DROP:
                    continue
                elif handle_small_mode == SmallMode.UPSCALE:
                    factor = config.cropping.out_min_size / min(image.size)
                    out_size = (np.array(image.size) * factor).round().astype('uint32')
                    image = image.resize(out_size, resample=Image.BICUBIC)
            else:
                factor = config.cropping.out_min_size / min(image.size)
                out_size = (np.array(image.size) * factor).round().astype('uint32')
                image = image.resize(out_size, resample=Image.BICUBIC)

            # generate and select masks
            src_masks = mask_generator.get_masks(image)

            filtered_image_mask_pairs = []
            for cur_mask in src_masks:
                if config.cropping.out_square_crop:
                    (crop_left,
                     crop_top,
                     crop_right,
                     crop_bottom) = propose_random_square_crop(cur_mask,
                                                               min_overlap=config.cropping.crop_min_overlap)
                    cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right]
                    cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom))
                else:
                    cur_image = image

                if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area:
                    continue

                filtered_image_mask_pairs.append((cur_image, cur_mask))

            mask_indices = np.random.choice(len(filtered_image_mask_pairs),
                                            size=min(len(filtered_image_mask_pairs), config.max_masks_per_image),
                                            replace=False)

            # crop masks; save masks together with input image
            mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0])
            for i, idx in enumerate(mask_indices):
                cur_image, cur_mask = filtered_image_mask_pairs[idx]
                cur_basename = mask_basename + f'_crop{i:03d}'
                Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'),
                                mode='L').save(cur_basename + f'_mask{i:03d}.png')
                cur_image.save(cur_basename + '.png')
        except KeyboardInterrupt:
            return
        except Exception as ex:
            print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}')


def main(args):
    if not args.indir.endswith('/'):
        args.indir += '/'

    os.makedirs(args.outdir, exist_ok=True)

    config = load_yaml(args.config)

    in_files = list(glob.glob(os.path.join(args.indir, '**', f'*.{args.ext}'), recursive=True))
    if args.n_jobs == 0:
        process_images(in_files, args.indir, args.outdir, config)
    else:
        in_files_n = len(in_files)
        chunk_size = in_files_n // args.n_jobs + (1 if in_files_n % args.n_jobs > 0 else 0)
        Parallel(n_jobs=args.n_jobs)(
            delayed(process_images)(in_files[start:start+chunk_size], args.indir, args.outdir, config)
            for start in range(0, len(in_files), chunk_size)
        )


if __name__ == '__main__':
    import argparse

    aparser = argparse.ArgumentParser()
    aparser.add_argument('config', type=str, help='Path to config for dataset generation')
    aparser.add_argument('indir', type=str, help='Path to folder with images')
    aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to')
    aparser.add_argument('--n-jobs', type=int, default=0, help='How many processes to use')
    aparser.add_argument('--ext', type=str, default='jpg', help='Input image extension')

    main(aparser.parse_args())
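
For reference, main() above shards the input file list across joblib workers with a ceiling division. A tiny standalone illustration of that chunking follows; the file names here are hypothetical and not taken from the script.

# Split inputs into roughly equal chunks, as main() does before calling Parallel.
in_files = [f'img_{i:03d}.jpg' for i in range(10)]   # hypothetical inputs
n_jobs = 3

chunk_size = len(in_files) // n_jobs + (1 if len(in_files) % n_jobs > 0 else 0)
chunks = [in_files[start:start + chunk_size]
          for start in range(0, len(in_files), chunk_size)]

print(chunk_size)                  # 4
print([len(c) for c in chunks])    # [4, 4, 2]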
spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/monoscene-checkpoint.py
DELETED
@@ -1,123 +0,0 @@
import pytorch_lightning as pl
import torch
import torch.nn as nn
from monoscene.unet3d_nyu import UNet3D as UNet3DNYU
from monoscene.unet3d_kitti import UNet3D as UNet3DKitti
from monoscene.flosp import FLoSP
import numpy as np
import torch.nn.functional as F
from monoscene.unet2d import UNet2D


class MonoScene(pl.LightningModule):
    def __init__(
        self,
        n_classes,
        feature,
        project_scale,
        full_scene_size,
        dataset,
        n_relations=4,
        context_prior=True,
        fp_loss=True,
        project_res=[],
        frustum_size=4,
        relation_loss=False,
        CE_ssc_loss=True,
        geo_scal_loss=True,
        sem_scal_loss=True,
        lr=1e-4,
        weight_decay=1e-4,
    ):
        super().__init__()

        self.project_res = project_res
        self.fp_loss = fp_loss
        self.dataset = dataset
        self.context_prior = context_prior
        self.frustum_size = frustum_size
        self.relation_loss = relation_loss
        self.CE_ssc_loss = CE_ssc_loss
        self.sem_scal_loss = sem_scal_loss
        self.geo_scal_loss = geo_scal_loss
        self.project_scale = project_scale
        self.lr = lr
        self.weight_decay = weight_decay

        self.projects = {}
        self.scale_2ds = [1, 2, 4, 8]  # 2D scales
        for scale_2d in self.scale_2ds:
            self.projects[str(scale_2d)] = FLoSP(
                full_scene_size, project_scale=self.project_scale, dataset=self.dataset
            )
        self.projects = nn.ModuleDict(self.projects)

        self.n_classes = n_classes
        if self.dataset == "NYU":
            self.net_3d_decoder = UNet3DNYU(
                self.n_classes,
                nn.BatchNorm3d,
                n_relations=n_relations,
                feature=feature,
                full_scene_size=full_scene_size,
                context_prior=context_prior,
            )
        elif self.dataset == "kitti":
            self.net_3d_decoder = UNet3DKitti(
                self.n_classes,
                nn.BatchNorm3d,
                project_scale=project_scale,
                feature=feature,
                full_scene_size=full_scene_size,
                context_prior=context_prior,
            )
        self.net_rgb = UNet2D.build(out_feature=feature, use_decoder=True)

    def forward(self, batch):

        img = batch["img"]
        bs = len(img)

        out = {}

        x_rgb = self.net_rgb(img)

        x3ds = []
        for i in range(bs):
            x3d = None
            for scale_2d in self.project_res:

                # project features at each 2D scale to target 3D scale
                scale_2d = int(scale_2d)
                projected_pix = batch["projected_pix_{}".format(self.project_scale)][i].cuda()
                fov_mask = batch["fov_mask_{}".format(self.project_scale)][i].cuda()

                # Sum all the 3D features
                if x3d is None:
                    x3d = self.projects[str(scale_2d)](
                        x_rgb["1_" + str(scale_2d)][i],
                        projected_pix // scale_2d,
                        fov_mask,
                    )
                else:
                    x3d += self.projects[str(scale_2d)](
                        x_rgb["1_" + str(scale_2d)][i],
                        projected_pix // scale_2d,
                        fov_mask,
                    )
            x3ds.append(x3d)

        input_dict = {
            "x3d": torch.stack(x3ds),
        }

        out_dict = self.net_3d_decoder(input_dict)

        ssc_pred = out_dict["ssc_logit"]

        y_pred = ssc_pred.detach().cpu().numpy()
        y_pred = np.argmax(y_pred, axis=1)

        return y_pred
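
The forward() above returns per-voxel class indices by taking an argmax over the class channel of the semantic scene completion logits. A small standalone illustration of that final decoding step is below; the shapes are dummies chosen for the example, not values from the file.

import numpy as np

# Fake SSC logits with layout (batch, n_classes, X, Y, Z).
ssc_logit = np.random.randn(1, 12, 60, 36, 60).astype(np.float32)

# Same decoding as the last lines of forward(): argmax over the class channel.
y_pred = np.argmax(ssc_logit, axis=1)
print(y_pred.shape)                # (1, 60, 36, 60)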
spaces/CVPR/regionclip-demo/detectron2/modeling/roi_heads/rotated_fast_rcnn.py
DELETED
@@ -1,270 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import logging
import numpy as np
import torch

from detectron2.config import configurable
from detectron2.layers import ShapeSpec, batched_nms_rotated
from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated
from detectron2.utils.events import get_event_storage

from ..box_regression import Box2BoxTransformRotated
from ..poolers import ROIPooler
from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals
from .box_head import build_box_head
from .fast_rcnn import FastRCNNOutputLayers
from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads

logger = logging.getLogger(__name__)

"""
Shape shorthand in this module:

    N: number of images in the minibatch
    R: number of ROIs, combined over all images, in the minibatch
    Ri: number of ROIs in image i
    K: number of foreground classes. E.g.,there are 80 foreground classes in COCO.

Naming convention:

    deltas: refers to the 5-d (dx, dy, dw, dh, da) deltas that parameterize the box2box
    transform (see :class:`box_regression.Box2BoxTransformRotated`).

    pred_class_logits: predicted class scores in [-inf, +inf]; use
        softmax(pred_class_logits) to estimate P(class).

    gt_classes: ground-truth classification labels in [0, K], where [0, K) represent
        foreground object classes and K represents the background class.

    pred_proposal_deltas: predicted rotated box2box transform deltas for transforming proposals
    to detection box predictions.

    gt_proposal_deltas: ground-truth rotated box2box transform deltas
"""


def fast_rcnn_inference_rotated(
    boxes, scores, image_shapes, score_thresh, nms_thresh, topk_per_image
):
    """
    Call `fast_rcnn_inference_single_image_rotated` for all images.

    Args:
        boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic
            boxes for each image. Element i has shape (Ri, K * 5) if doing
            class-specific regression, or (Ri, 5) if doing class-agnostic
            regression, where Ri is the number of predicted objects for image i.
            This is compatible with the output of :meth:`FastRCNNOutputs.predict_boxes`.
        scores (list[Tensor]): A list of Tensors of predicted class scores for each image.
            Element i has shape (Ri, K + 1), where Ri is the number of predicted objects
            for image i. Compatible with the output of :meth:`FastRCNNOutputs.predict_probs`.
        image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch.
        score_thresh (float): Only return detections with a confidence score exceeding this
            threshold.
        nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1].
        topk_per_image (int): The number of top scoring detections to return. Set < 0 to return
            all detections.

    Returns:
        instances: (list[Instances]): A list of N instances, one for each image in the batch,
            that stores the topk most confidence detections.
        kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates
            the corresponding boxes/scores index in [0, Ri) from the input, for image i.
    """
    result_per_image = [
        fast_rcnn_inference_single_image_rotated(
            boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image
        )
        for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes)
    ]
    return [x[0] for x in result_per_image], [x[1] for x in result_per_image]


def fast_rcnn_inference_single_image_rotated(
    boxes, scores, image_shape, score_thresh, nms_thresh, topk_per_image
):
    """
    Single-image inference. Return rotated bounding-box detection results by thresholding
    on scores and applying rotated non-maximum suppression (Rotated NMS).

    Args:
        Same as `fast_rcnn_inference_rotated`, but with rotated boxes, scores, and image shapes
        per image.

    Returns:
        Same as `fast_rcnn_inference_rotated`, but for only one image.
    """
    valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1)
    if not valid_mask.all():
        boxes = boxes[valid_mask]
        scores = scores[valid_mask]

    B = 5  # box dimension
    scores = scores[:, :-1]
    num_bbox_reg_classes = boxes.shape[1] // B
    # Convert to Boxes to use the `clip` function ...
    boxes = RotatedBoxes(boxes.reshape(-1, B))
    boxes.clip(image_shape)
    boxes = boxes.tensor.view(-1, num_bbox_reg_classes, B)  # R x C x B
    # Filter results based on detection scores
    filter_mask = scores > score_thresh  # R x K
    # R' x 2. First column contains indices of the R predictions;
    # Second column contains indices of classes.
    filter_inds = filter_mask.nonzero()
    if num_bbox_reg_classes == 1:
        boxes = boxes[filter_inds[:, 0], 0]
    else:
        boxes = boxes[filter_mask]
    scores = scores[filter_mask]

    # Apply per-class Rotated NMS
    keep = batched_nms_rotated(boxes, scores, filter_inds[:, 1], nms_thresh)
    if topk_per_image >= 0:
        keep = keep[:topk_per_image]
    boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep]

    result = Instances(image_shape)
    result.pred_boxes = RotatedBoxes(boxes)
    result.scores = scores
    result.pred_classes = filter_inds[:, 1]

    return result, filter_inds[:, 0]


class RotatedFastRCNNOutputLayers(FastRCNNOutputLayers):
    """
    Two linear layers for predicting Rotated Fast R-CNN outputs.
    """

    @classmethod
    def from_config(cls, cfg, input_shape):
        args = super().from_config(cfg, input_shape)
        args["box2box_transform"] = Box2BoxTransformRotated(
            weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS
        )
        return args

    def inference(self, predictions, proposals):
        """
        Returns:
            list[Instances]: same as `fast_rcnn_inference_rotated`.
            list[Tensor]: same as `fast_rcnn_inference_rotated`.
        """
        boxes = self.predict_boxes(predictions, proposals)
        scores = self.predict_probs(predictions, proposals)
        image_shapes = [x.image_size for x in proposals]

        return fast_rcnn_inference_rotated(
            boxes,
            scores,
            image_shapes,
            self.test_score_thresh,
            self.test_nms_thresh,
            self.test_topk_per_image,
        )


@ROI_HEADS_REGISTRY.register()
class RROIHeads(StandardROIHeads):
    """
    This class is used by Rotated Fast R-CNN to detect rotated boxes.
    For now, it only supports box predictions but not mask or keypoints.
    """

    @configurable
    def __init__(self, **kwargs):
        """
        NOTE: this interface is experimental.
        """
        super().__init__(**kwargs)
        assert (
            not self.mask_on and not self.keypoint_on
        ), "Mask/Keypoints not supported in Rotated ROIHeads."
        assert not self.train_on_pred_boxes, "train_on_pred_boxes not implemented for RROIHeads!"

    @classmethod
    def _init_box_head(cls, cfg, input_shape):
        # fmt: off
        in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES
        pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
        pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features)
        sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
        pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
        # fmt: on
        assert pooler_type in ["ROIAlignRotated"], pooler_type
        # assume all channel counts are equal
        in_channels = [input_shape[f].channels for f in in_features][0]

        box_pooler = ROIPooler(
            output_size=pooler_resolution,
            scales=pooler_scales,
            sampling_ratio=sampling_ratio,
            pooler_type=pooler_type,
        )
        box_head = build_box_head(
            cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution)
        )
        # This line is the only difference v.s. StandardROIHeads
        box_predictor = RotatedFastRCNNOutputLayers(cfg, box_head.output_shape)
        return {
            "box_in_features": in_features,
            "box_pooler": box_pooler,
            "box_head": box_head,
            "box_predictor": box_predictor,
        }

    @torch.no_grad()
    def label_and_sample_proposals(self, proposals, targets):
        """
        Prepare some proposals to be used to train the RROI heads.
        It performs box matching between `proposals` and `targets`, and assigns
        training labels to the proposals.
        It returns `self.batch_size_per_image` random samples from proposals and groundtruth boxes,
        with a fraction of positives that is no larger than `self.positive_sample_fraction.

        Args:
            See :meth:`StandardROIHeads.forward`

        Returns:
            list[Instances]: length `N` list of `Instances`s containing the proposals
                sampled for training. Each `Instances` has the following fields:
                - proposal_boxes: the rotated proposal boxes
                - gt_boxes: the ground-truth rotated boxes that the proposal is assigned to
                  (this is only meaningful if the proposal has a label > 0; if label = 0
                   then the ground-truth box is random)
                - gt_classes: the ground-truth classification lable for each proposal
        """
        if self.proposal_append_gt:
            proposals = add_ground_truth_to_proposals(targets, proposals)

        proposals_with_gt = []

        num_fg_samples = []
        num_bg_samples = []
        for proposals_per_image, targets_per_image in zip(proposals, targets):
            has_gt = len(targets_per_image) > 0
            match_quality_matrix = pairwise_iou_rotated(
                targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
            )
            matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
            sampled_idxs, gt_classes = self._sample_proposals(
                matched_idxs, matched_labels, targets_per_image.gt_classes
            )

            proposals_per_image = proposals_per_image[sampled_idxs]
            proposals_per_image.gt_classes = gt_classes

            if has_gt:
                sampled_targets = matched_idxs[sampled_idxs]
                proposals_per_image.gt_boxes = targets_per_image.gt_boxes[sampled_targets]

            num_bg_samples.append((gt_classes == self.num_classes).sum().item())
            num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
            proposals_with_gt.append(proposals_per_image)

        # Log the number of fg/bg samples that are selected for training ROI heads
        storage = get_event_storage()
        storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
        storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))

        return proposals_with_gt
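
The single-image inference path above keeps (proposal, class) pairs whose score clears score_thresh before running rotated NMS. Below is a torch-only sketch of that thresholding and index bookkeeping; the scores are random, the threshold is arbitrary, and no detectron2 dependency is assumed.

import torch

num_props, num_classes = 6, 3
scores = torch.rand(num_props, num_classes)   # per-class scores, background column already dropped
score_thresh = 0.7

# R' x 2: column 0 = proposal index, column 1 = class index, as in the code above.
filter_mask = scores > score_thresh
filter_inds = filter_mask.nonzero()

kept_scores = scores[filter_mask]
print(filter_inds.shape, kept_scores.shape)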
spaces/Cartof/Chatbot/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Chatbot
emoji: 🐢
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 3.20.1
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference