Commit d528277
Parent(s): e3afdf7
Update parquet files (step 38 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/1gistliPinn/ChatGPT4/Examples/ACalendar Calendar Tasks V2.2.3 Final Paid APK [Latest].md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Cyberlink Labelprint 2.5 Crack Keygen Free [REPACK].md +0 -10
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKCombo Presents Mortal Kombat 3 for Android - Download and Play Now.md +0 -110
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dynamons World v 1.6.72 Mod APK with Unlimited Money and No Ads.md +0 -76
- spaces/1phancelerku/anime-remove-background/Download Drive for Speed Simulator Mod APK and Race Against Your Friends Online.md +0 -117
- spaces/1phancelerku/anime-remove-background/Download Kitty Live APK for Free - No Ads No Registration No Hassle.md +0 -119
- spaces/1phancelerku/anime-remove-background/Download Play Together VNG APK 1.44 0 and Enjoy a Fun Social Game.md +0 -127
- spaces/4Taps/SadTalker/modules/text2speech.py +0 -12
- spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/F0Predictor.py +0 -16
- spaces/A1draw-12196y/DeepDanbooru_string/README.md +0 -39
- spaces/AB-TW/team-ai/agents/code_execute_agent.py +0 -7
- spaces/AIConsultant/MusicGen/audiocraft/data/info_audio_dataset.py +0 -110
- spaces/AIConsultant/MusicGen/audiocraft/optim/polynomial_decay_lr_scheduler.py +0 -47
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/audio.py +0 -179
- spaces/ARTeLab/DTM_Estimation_SRandD/models/modelNetC.py +0 -335
- spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/message/[messageId]/vote/+server.ts +0 -37
- spaces/AchyuthGamer/OpenGPT/g4f/active_providers.py +0 -124
- spaces/Aditya9790/yolo7-object-tracking/utils/loss.py +0 -1697
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/GetChildrenHeight.js +0 -10
- spaces/AlgoveraAI/web3-wallet-streamlit/README.md +0 -12
- spaces/Alichuan/VITS-Umamusume-voice-synthesizer/commons.py +0 -97
- spaces/Aloento/9Nine-VITS/commons.py +0 -51
- spaces/Amrrs/DragGan-Inversion/stylegan_human/utils/face_alignment.py +0 -274
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/multistep_dpm_solver.md +0 -20
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_unet_blocks_common.py +0 -121
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/latent_diffusion/test_latent_diffusion.py +0 -208
- spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py +0 -13
- spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_80k_cityscapes.py +0 -39
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/io.py +0 -151
- spaces/ArkanDash/rvc-models/infer_pack/attentions.py +0 -417
- spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_video_text2video.py +0 -158
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/self_outdated_check.py +0 -242
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/cli/__init__.py +0 -0
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/deformable/deform_conv.h +0 -377
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_scheduler.py +0 -68
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/__init__.py +0 -0
- spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/04_🔊_Upload_Audio_File.py +0 -205
- spaces/Benson/text-generation/Examples/Car Parking Pro Mod Apk.md +0 -62
- spaces/Benson/text-generation/Examples/Descargar Dino Agua Mundo Mod Apk Dinero Ilimitado.md +0 -81
- spaces/BetterAPI/BetterChat/src/lib/server/database.ts +0 -31
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/direct_url.py +0 -237
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py +0 -155
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/latin1prober.py +0 -147
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/webencodings/__init__.py +0 -342
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/install_lib.py +0 -122
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/test.py +0 -251
- spaces/Bl1tzie/Jam/Dockerfile +0 -11
- spaces/BramVanroy/mateo-demo/Dockerfile +0 -30
- spaces/Branon/oai-proxy/greeting.md +0 -1
- spaces/BreadBytes1/PL-Dashboard/README.md +0 -13
spaces/1gistliPinn/ChatGPT4/Examples/ACalendar Calendar Tasks V2.2.3 Final Paid APK [Latest].md
DELETED
@@ -1,6 +0,0 @@
-<h2>aCalendar Calendar Tasks v2.2.3 Final Paid APK [Latest]</h2><br /><p><b><b>Download</b> 🔗 <a href="https://imgfil.com/2uy1U3">https://imgfil.com/2uy1U3</a></b></p><br /><br />
-<br />
-274 Apk Pro Unlocked full Version Latest is an Antivirus & Security android app ... to support devices not ... TubeMate 2.3.2 APK for Android – free video download app.. ... aCalendar Calendar & Tasks v2.0.5 Final. 1fdad05405<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Cyberlink Labelprint 2.5 Crack Keygen Free [REPACK].md
DELETED
@@ -1,10 +0,0 @@
-<h2>cyberlink labelprint 2.5 crack keygen free</h2><br /><p><b><b>Download Zip</b> ⚙ <a href="https://imgfil.com/2uxZWK">https://imgfil.com/2uxZWK</a></b></p><br /><br />
-
-September 11, 2021 - CyberLink LabelPrint 2.5.0.13602 Crack is a label design software that allows you to create and print stylish CD/DVD labels in 4 easy steps. It includes features such as using a template or creating from scratch, making it easy to create label layouts.
-The program also allows you to add text, images, barcodes and more.
-This is a very simple and straightforward way to create labels.
-CyberLink LabelPrint has many features and tools that allow you to edit and optimize your labels.
-You can change the color, size or font of the text, change its position, add an image and more. 8a78ff9644<br />
-<br />
-<br />
-<p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKCombo Presents Mortal Kombat 3 for Android - Download and Play Now.md
DELETED
@@ -1,110 +0,0 @@
-
-<h1>Mortal Kombat 3: A Classic Fighting Game for Android</h1>
-<h2>Introduction</h2>
-<p>If you are a fan of fighting games, you probably have heard of Mortal Kombat, one of the most iconic and influential franchises in the genre. Mortal Kombat is known for its brutal and gory combat, its diverse and memorable characters, and its rich and immersive lore. Among the many games in the series, Mortal Kombat 3 stands out as a fan-favorite, as it introduced many new features and improvements that made the game more fun and challenging.</p>
-<p>But did you know that you can play Mortal Kombat 3 on your Android device? Thanks to APKCombo, a website that offers free and safe downloads of Android games, you can enjoy this classic game on your smartphone or tablet. In this article, we will tell you everything you need to know about Mortal Kombat 3, its features, and how to download and play it on Android.</p>
-<h2>mortal kombat 3 apkcombo</h2><br /><p><b><b>Download File</b> ○ <a href="https://urlin.us/2uT0JF">https://urlin.us/2uT0JF</a></b></p><br /><br />
-<h2>Features of Mortal Kombat 3</h2>
-<h3>Gameplay and modes</h3>
-<p>Mortal Kombat 3 is a fighting game that pits two players against each other in a series of rounds. The objective is to deplete the opponent's health bar by using various attacks, such as punches, kicks, throws, and special moves. Each character has their own unique moves and abilities, as well as a signature finishing move called a Fatality, which can be performed at the end of the match to execute the opponent in a gruesome way.</p>
-<p>The game offers several modes for different types of players. You can play in the Arcade mode, where you face a series of opponents until you reach the final boss, Shao Kahn. You can also play in the Versus mode, where you can challenge another player or the computer in a one-on-one match. You can also play in the Tournament mode, where you can compete with up to eight players in a bracket-style tournament. Finally, you can play in the Practice mode, where you can hone your skills and learn the moves of each character.</p>
-<h3>Characters and moves</h3>
-<p>Mortal Kombat 3 features a roster of 15 playable characters, each with their own backstory, personality, and fighting style. Some of them are returning characters from previous games, such as Liu Kang, Sub-Zero, Sonya Blade, and Kano. Some of them are new characters introduced in this game, such as Cyrax, Kabal, Nightwolf, and Sindel. Some of them are hidden characters that can be unlocked by performing certain actions or entering certain codes, such as Smoke, Noob Saibot, Motaro, and Shao Kahn.</p>
-<p>Each character has a set of basic attacks that can be performed by pressing different combinations of buttons. These include punches, kicks, uppercuts, sweeps, throws, and blocks. Each character also has a set of special moves that can be performed by inputting specific sequences of directions and buttons. These include projectiles, teleports, dashes, grabs, counters, and transformations. Each character also has a set of combos that can be performed by chaining together certain attacks in a specific order. These allow the player to deal more damage and stun the opponent.</p>
-<p>Finally, each character has a Fatality move that can be performed at the end of the match when the opponent's health bar is flashing red. To perform a Fatality, the player must stand at a certain distance from the opponent and input a specific sequence of directions and buttons within a limited time. If successful, the player will execute the opponent in a brutal and bloody way. For example, Sub-Zero can rip off the opponent's head with his spine attached; Sonya Blade can blow a kiss that incinerates the opponent's face; and Kano can rip out the opponent's heart and hold it up triumphantly.</p>
-<p>mortal kombat 3 apkcombo download<br />
-mortal kombat 3 apkcombo latest version<br />
-mortal kombat 3 apkcombo android game<br />
-mortal kombat 3 apkcombo free<br />
-mortal kombat 3 apkcombo mod<br />
-mortal kombat 3 apkcombo offline<br />
-mortal kombat 3 apkcombo cheats<br />
-mortal kombat 3 apkcombo review<br />
-mortal kombat 3 apkcombo gameplay<br />
-mortal kombat 3 apkcombo tips<br />
-trilogy kombat ultimate 3 apkcombo<br />
-trilogy kombat ultimate 3 apkcombo download<br />
-trilogy kombat ultimate 3 apkcombo latest version<br />
-trilogy kombat ultimate 3 apkcombo android game<br />
-trilogy kombat ultimate 3 apkcombo free<br />
-trilogy kombat ultimate 3 apkcombo mod<br />
-trilogy kombat ultimate 3 apkcombo offline<br />
-trilogy kombat ultimate 3 apkcombo cheats<br />
-trilogy kombat ultimate 3 apkcombo review<br />
-trilogy kombat ultimate 3 apkcombo gameplay<br />
-trilogy kombat ultimate 3 apkcombo tips<br />
-code ultimate mortal kombat 3 umk3 apkcombo<br />
-code ultimate mortal kombat 3 umk3 apkcombo download<br />
-code ultimate mortal kombat 3 umk3 apkcombo latest version<br />
-code ultimate mortal kombat 3 umk3 apkcombo android game<br />
-code ultimate mortal kombat 3 umk3 apkcombo free<br />
-code ultimate mortal kombat 3 umk3 apkcombo mod<br />
-code ultimate mortal kombat 3 umk3 apkcombo offline<br />
-code ultimate mortal kombat 3 umk3 apkcombo cheats<br />
-code ultimate mortal kombat 3 umk3 apkcombo review<br />
-code ultimate mortal kombat 3 umk3 apkcombo gameplay<br />
-code ultimate mortal kombat 3 umk3 apkcombo tips<br />
-mortal kombat 3 hccmergamesjedan apkcombo<br />
-mortal kombat 3 hccmergamesjedan apkcombo download<br />
-mortal kombat 3 hccmergamesjedan apkcombo latest version<br />
-mortal kombat 3 hccmergamesjedan apkcombo android game<br />
-mortal kombat 3 hccmergamesjedan apkcombo free<br />
-mortal kombat 3 hccmergamesjedan apkcombo mod<br />
-mortal kombat 3 hccmergamesjedan apkcombo offline<br />
-mortal kombat 3 hccmergamesjedan apkcombo cheats<br />
-mortal kombat 3 hccmergamesjedan apkcombo review<br />
-mortal kombat 3 hccmergamesjedan apkcombo gameplay<br />
-mortal kombat 3 hccmergamesjedan apkcombo tips<br />
-how to install mortal kombat 3 on android with apk combo <br />
-how to play mortal kombat 3 online with friends using apk combo <br />
-how to update mortal kombat 3 to the newest version with apk combo <br />
-how to fix mortal kombat 3 not working or crashing with apk combo <br />
-how to unlock all characters and stages in mortal kombat 3 with apk combo <br />
-how to get unlimited coins and gems in mortal kombat 3 with apk combo</p>
-<h3>Graphics and sound</h3>
-<p>Mortal Kombat 3 boasts impressive graphics and sound for its time. The game uses digitized sprites of real actors to create realistic and detailed animations for the characters. The game also uses a variety of backgrounds and stages to create different atmospheres and settings for the fights. Some of the stages include a subway, a temple, a bank, a bridge, and a rooftop. Some of the stages also have interactive elements, such as trains, spikes, fans, and portals, that can be used to damage or escape from the opponent.</p>
-<p>The game also features a rich and immersive sound design that enhances the gameplay experience. The game uses a variety of sound effects to convey the impact and intensity of the attacks, such as punches, kicks, slashes, explosions, and screams. The game also uses a dynamic and atmospheric soundtrack that matches the mood and tone of each stage and situation. The game also features voice acting for the characters, who utter various taunts, grunts, and exclamations during the fights. The game also features an announcer who narrates the matches and announces the winner.</p>
-<h2>How to download and play Mortal Kombat 3 on Android</h2>
-<h3>APKCombo: A reliable source for Android games</h3>
-<p>If you want to play Mortal Kombat 3 on your Android device, you will need to download and install an APK file of the game. An APK file is a package file that contains all the necessary data and resources to run an Android application. However, not all APK files are safe and trustworthy, as some of them may contain viruses, malware, or unwanted ads that can harm your device or compromise your privacy.</p>
-<p>That's why we recommend you to use APKCombo, a website that offers free and safe downloads of Android games. APKCombo is a reputable and reliable source that scans all the APK files for any potential threats and ensures that they are clean and secure. APKCombo also offers fast and easy downloads of APK files without any registration or subscription required. You can browse through thousands of Android games in various categories and genres on APKCombo and find your favorite ones.</p>
-<h3>Steps to download and install Mortal Kombat 3 APK from APKCombo</h3>
-<p>To download and install Mortal Kombat 3 APK from APKCombo, you will need to follow these simple steps:</p>
-<ol>
-<li>Go to <a href="">APKCombo.com</a> on your Android device's browser.</li>
-<li>Search for Mortal Kombat 3 in the search bar or find it in the Action category.</li>
-<li>Select the version of Mortal Kombat 3 that you want to download. Make sure it is compatible with your device's specifications.</li>
-<li>Tap on the Download button and wait for the APK file to be downloaded to your device.</li>
-<li>Once the download is complete, locate the APK file in your device's file manager and tap on it to install it.</li>
-<li>If you see a warning message that says "Install blocked", go to your device's settings and enable "Unknown sources" or "Allow from this source" option. This will allow you to install apps from sources other than Google Play Store.</li>
-<li>Follow the on-screen instructions to complete the installation process.</li>
-<li>Launch Mortal Kombat 3 from your app drawer or home screen and enjoy!</li>
-</ol>
-<h3>Tips and tricks to enjoy Mortal Kombat 3 on Android</h3>
-<p>To enjoy Mortal Kombat 3 on Android, you will need to adjust some settings and learn some tips and tricks. Here are some of them:</p>
-<ul>
-<li>To control your character, you will need to use the virtual buttons on the screen. You can customize the layout and size of the buttons in the options menu.</li>
-<li>To perform special moves, you will need to swipe or tap on certain areas of the screen. You can view the list of special moves for each character in the pause menu.</li>
-<li>To perform Fatalities, you will need to memorize the sequence of directions and buttons for each character. You can find them online or in some guides.</li>
-<li>To unlock hidden characters, you will need to enter certain codes or perform certain actions in specific stages or modes. You can find them online or in some guides.</li>
-<li>To save your progress, you will need to create an account or sign in with Google Play Games. You can also sync your progress across multiple devices using cloud save.</li>
-<li>To play with other players online, you will need to have a stable internet connection and join a room or create your own. You can also chat with other players using the voice chat feature.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Mortal Kombat 3 is a classic fighting game that offers a lot of fun and challenge for fans of the genre. It has a variety of features, such as gameplay modes, characters, moves, graphics, and sound, that make it stand out from other games. It also has a loyal and active fan base that keeps the game alive and updated. Thanks to APKCombo, you can download and play Mortal Kombat 3 on your Android device for free and safely. Just follow the steps and tips we provided in this article and you will be ready to enter the Mortal Kombat tournament and face your opponents. Are you ready to test your might?</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about Mortal Kombat 3 and APKCombo:</p>
-<ol>
-<li>Is Mortal Kombat 3 compatible with all Android devices?</li>
-<p>Not necessarily. Mortal Kombat 3 requires Android 4.0 or higher and at least 50 MB of free storage space. It also may not run smoothly on some older or low-end devices. You can check the compatibility of your device on APKCombo before downloading the game.</p>
-<li>Is Mortal Kombat 3 legal and safe to download?</li>
-<p>Yes. Mortal Kombat 3 is a free game that does not violate any copyright laws or terms of service. APKCombo is a safe and reliable website that scans all the APK files for any potential threats and ensures that they are clean and secure.</p>
-<li>How can I update Mortal Kombat 3 on my Android device?</li>
-<p>You can update Mortal Kombat 3 by visiting APKCombo and downloading the latest version of the game. You can also enable the auto-update option in the settings menu of the game to receive notifications when a new update is available.</p>
-<li>How can I contact the developers or the support team of Mortal Kombat 3?</li>
-<p>You can contact the developers or the support team of Mortal Kombat 3 by visiting their official website, Facebook page, or Twitter account. You can also send them an email or leave a review on Google Play Store.</p>
-<li>How can I share my feedback or suggestions about Mortal Kombat 3 or APKCombo?</li>
-<p>You can share your feedback or suggestions about Mortal Kombat 3 or APKCombo by leaving a comment on this article, on their respective websites, or on their social media platforms. You can also rate and review the game and the website on Google Play Store and other platforms.</p>
-</ol></p> 197e85843d<br />
-<br />
-<br />
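Note on the sideloading walkthrough in the deleted guide above (download the APK, allow installs from unknown sources, tap the file in a file manager): that is the manual flow. As a minimal Kotlin sketch, the programmatic equivalent is to hand the downloaded APK to Android's system package installer via an ACTION_VIEW intent. The file handle and FileProvider authority below are illustrative assumptions, not values taken from the guide, and a matching <provider> entry must be declared in AndroidManifest.xml.

import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Sketch: ask the Android system installer to install a sideloaded APK.
// Assumes a FileProvider with the authority below is declared in the manifest
// and that the user has granted this app "Install unknown apps" permission.
fun installApk(context: Context, apkFile: File) {
    // Wrap the file in a content:// URI; bare file:// URIs are rejected on API 24+.
    val uri = FileProvider.getUriForFile(
        context,
        "${context.packageName}.fileprovider", // illustrative authority
        apkFile
    )
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent) // the system package installer takes over
}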
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dynamons World v 1.6.72 Mod APK with Unlimited Money and No Ads.md
DELETED
@@ -1,76 +0,0 @@
-
-<h1>Dynamons World Mod APK: Everything You Need to Know</h1>
-<p>If you are a fan of Pokemon-style games, you might have heard of Dynamons World. This is a fun and addictive game where you can collect, train, and battle with hundreds of different creatures called Dynamons. You can also explore an open world, meet other players, and join forces with them to defeat the evil Dynamon Masters. But what if you want to enjoy the game without any limitations or restrictions? That's where Dynamons World Mod APK comes in. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, and how to download and install it on your device.</p>
-<h2>dynamons world v 1.6.72 mod apk unlimited money</h2><br /><p><b><b>Download</b> ►►► <a href="https://urlin.us/2uT3hX">https://urlin.us/2uT3hX</a></b></p><br /><br />
-<h2>What is Dynamons World?</h2>
-<p>Dynamons World is a role-playing game developed by Kizi Games. It is available for free on Google Play and App Store. The game is inspired by the popular Pokemon franchise, but with its own unique twist. You can choose from three different starter Dynamons: Fire, Water, or Leaf. Each Dynamon has its own strengths, weaknesses, and special abilities. You can also catch more Dynamons as you explore the world and encounter wild ones. You can train your Dynamons by battling other trainers, completing quests, and using items. You can also evolve your Dynamons into more powerful forms when they reach a certain level.</p>
-<h3>Features of Dynamons World</h3>
-<p>Some of the features of Dynamons World are:</p>
-<ul>
-<li>A large and diverse world to explore, with different regions, biomes, and secrets.</li>
-<li>Over 100 unique Dynamons to collect, each with their own personality and skills.</li>
-<li>Turn-based battles that require strategy and tactics.</li>
-<li>Online multiplayer mode where you can chat, trade, and battle with other players.</li>
-<li>A captivating story that involves saving the world from the evil Dynamon Masters.</li>
-<li>Daily missions and events that offer rewards and challenges.</li>
-</ul>
-<h3>How to play Dynamons World</h3>
-<p>The gameplay of Dynamons World is simple and intuitive. You can control your character using the virtual joystick on the left side of the screen. You can interact with objects, people, and Dynamons by tapping on them. You can also access your inventory, map, quests, and settings by tapping on the icons on the right side of the screen.</p>
-<p>dynamons world mod apk latest version free download<br />
-dynamons world hack apk unlimited coins and gems<br />
-dynamons world 1.6.72 mod apk android 1<br />
-dynamons world mod apk no ads unlocked all<br />
-dynamons world unlimited money apk download for pc<br />
-dynamons world mod apk revdl rexdl<br />
-dynamons world hack mod apk online<br />
-dynamons world 1.6.72 mod apk offline<br />
-dynamons world mod apk unlimited everything 2021<br />
-dynamons world hack apk download ios<br />
-dynamons world mod apk pure apkpure<br />
-dynamons world unlimited coins and gems apk<br />
-dynamons world 1.6.72 mod apk obb data<br />
-dynamons world mod apk no root required<br />
-dynamons world hack apk latest version 2021<br />
-dynamons world mod apk happymod happy mod<br />
-dynamons world unlimited money apk mirror<br />
-dynamons world 1.6.72 mod apk update<br />
-dynamons world mod apk full version free<br />
-dynamons world hack apk no verification survey<br />
-dynamons world mod apk mediafıre mega.nz<br />
-dynamons world unlimited money and gems apk<br />
-dynamons world 1.6.72 mod apk cheat menu<br />
-dynamons world mod apk without human verification<br />
-dynamons world hack apk old version 2020</p>
-<p>When you encounter a wild Dynamon or a trainer, you will enter a battle mode. The battle is turn-based, meaning that you and your opponent take turns to attack each other. You can choose from four different actions: Attack, Skill, Item, or Switch. Attack is a basic move that deals damage based on your Dynamon's type and stats. Skill is a special move that consumes energy but has additional effects such as healing, buffing, or debuffing. Item is where you can use items such as potions, revives, or capture balls. Switch is where you can change your active Dynamon for another one in your team.</p>
-<p>The battle ends when either you or your opponent runs out of Dynamons or surrenders. If you win, you will earn experience points, coins, and items. If you lose, you will lose some coins and be sent back to the nearest healing center.</p>
-<h2>What is Dynamons World Mod APK?</h2>
-<p>Dynamons World Mod APK is a modified version of the original game that offers some extra benefits and features that are not available in the official version. The mod APK is created by third-party developers who modify the game's code and files to unlock certain aspects of the game.</p>
-<h3>Benefits of Dynamons World Mod APK</h3>
-<p>Some of the benefits of Dynamons World Mod APK are:</p>
-<ul>
-<li>Unlimited gold: You can get unlimited gold coins in the game, which you can use to buy items, upgrade your team, and access premium features.</li>
-<li <li>Unlimited energy: You can get unlimited energy in the game, which you can use to perform skills and capture Dynamons.</li>
-<li>All Dynamons unlocked: You can get access to all the Dynamons in the game, including the rare and legendary ones.</li>
-<li>No ads: You can enjoy the game without any annoying ads or pop-ups.</li>
-</ul>
-<h3>How to download and install Dynamons World Mod APK</h3>
-<p>If you want to download and install Dynamons World Mod APK on your device, you need to follow these steps:</p>
-<ol>
-<li>First, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device's settings, then security, and then toggle on the option that says "allow installation of apps from unknown sources".</li>
-<li>Next, you need to download the Dynamons World Mod APK file from a reliable source. You can search for it on Google or use this link: [Dynamons World Mod APK Download].</li>
-<li>Once you have downloaded the file, locate it on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.</li>
-<li>Finally, you can launch the game from your app drawer or home screen and enjoy the modded features.</li>
-</ol>
-<h2>Conclusion</h2>
-<p>Dynamons World is a fun and addictive game that lets you collect, train, and battle with hundreds of different creatures. It is a great game for fans of Pokemon-style games, as well as anyone who likes role-playing games. However, if you want to enjoy the game without any limitations or restrictions, you should try Dynamons World Mod APK. This is a modified version of the game that offers unlimited gold, energy, and access to all Dynamons. It also removes any ads or pop-ups that might interrupt your gameplay. You can download and install Dynamons World Mod APK by following the steps we have provided in this article. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.</p>
-<h3>FAQs</h3>
-<p>Here are some frequently asked questions about Dynamons World Mod APK:</p>
-<ul>
-<li><b>Is Dynamons World Mod APK safe to use?</b><br>Yes, Dynamons World Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources, as they might contain malware or viruses that could harm your device.</li>
-<li><b>Is Dynamons World Mod APK legal?</b><br>No, Dynamons World Mod APK is not legal, as it violates the terms and conditions of the original game. By using it, you are also infringing on the intellectual property rights of the developers. Therefore, we do not endorse or promote the use of Dynamons World Mod APK. Use it at your own risk and discretion.</li>
-<li><b>Will I get banned for using Dynamons World Mod APK?</b><br>There is a possibility that you might get banned for using Dynamons World Mod APK, especially if you use it online or interact with other players. The developers of the game might detect your modded account and suspend or terminate it. Therefore, we advise you to use Dynamons World Mod APK offline or with a secondary account.</li>
-<li><b>Can I update Dynamons World Mod APK?</b><br>No, you cannot update Dynamons World Mod APK through the official channels, as it is not compatible with them. If you want to update the game, you will have to uninstall the modded version and install the official version from Google Play or App Store. However, this will also remove all the modded features and your progress in the game.</li>
-<li><b>Can I play Dynamons World Mod APK on PC?</b><br>Yes, you can play Dynamons World Mod APK on PC by using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. Some of the popular Android emulators are BlueStacks, Nox Player, and MEmu Player. You can download any of them from their official websites and install them on your PC. Then, you can download Dynamons World Mod APK from this link: [Dynamons World Mod APK Download] and install it on your emulator. After that, you can launch the game and enjoy it on a bigger screen.</li>
-</ul></p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download Drive for Speed Simulator Mod APK and Race Against Your Friends Online.md
DELETED
@@ -1,117 +0,0 @@
-<br />
-<h1>Drive for Speed Simulator Hack Mod APK: How to Download and Play</h1>
-<p>Do you love racing games? Do you want to experience the thrill of driving different cars on various tracks and modes? If yes, then you should try Drive for Speed Simulator, a realistic and fun racing simulator game for Android devices. But wait, there's more! You can also enjoy the game with unlimited money, unlocked cars, and other features by using the Drive for Speed Simulator Hack Mod APK. In this article, we will tell you everything you need to know about this hacked version of the game, how to download and install it, and how to play it. Let's get started!</p>
-<h2>What is Drive for Speed Simulator?</h2>
-<p>Drive for Speed Simulator is a racing game developed by Play365, a popular game studio that has created many other games such as Bike Race Free, Sniper 3D Gun Shooter, and Zombie Hunter Sniper. In Drive for Speed Simulator, you can choose from more than 20 different cars, customize them with various parts and paint jobs, and drive them on different tracks and modes. You can also complete missions, earn coins, and upgrade your cars to improve their performance and appearance.</p>
-<h2>drive for speed simulator hack mod apk</h2><br /><p><b><b>Download</b> ○○○ <a href="https://jinyurl.com/2uNS2Y">https://jinyurl.com/2uNS2Y</a></b></p><br /><br />
-<h3>Features of Drive for Speed Simulator</h3>
-<p>Some of the features of Drive for Speed Simulator are:</p>
-<ul>
-<li>Realistic 3D graphics and sound effects</li>
-<li>Easy and intuitive controls</li>
-<li>More than 20 cars to choose from</li>
-<li>Customization options for your cars</li>
-<li>Different tracks and modes to play</li>
-<li>Missions and challenges to complete</li>
-<li>Leaderboards and achievements to compete with other players</li>
-<li>Offline mode available</li>
-</ul>
-<h3>How to download and install Drive for Speed Simulator Hack Mod APK</h3>
-<p>If you want to enjoy the game with unlimited money, unlocked cars, and other features, you need to download and install the Drive for Speed Simulator Hack Mod APK. Here are the steps to do so:</p>
-<ol>
-<li>Go to [this link](^1^) and download the Drive for Speed Simulator Hack Mod APK file.</li>
-<li>Go to your device settings and enable the installation of apps from unknown sources.</li>
-<li>Locate the downloaded file in your file manager and tap on it to install it.</li>
-<li>Wait for the installation process to finish and launch the game.</li>
-<li>Enjoy the game with all the hack features!</li>
-</ol>
-<h2>Why use Drive for Speed Simulator Hack Mod APK?</h2>
-<p>You might be wondering why you should use the hacked version of the game instead of the original one. Well, there are some reasons why you might want to do so. Let's see what they are.</p>
-<h3>Benefits of Drive for Speed Simulator Hack Mod APK</h3>
-<p>Some of the benefits of using the Drive for Speed Simulator Hack Mod APK are:</p>
-<ul>
-<li>You can get unlimited money to buy and upgrade any car you want.</li>
-<li>You can unlock all the cars without completing missions or spending coins.</li>
-<li>You can access all the tracks and modes without any restrictions.</li>
-<li>You can enjoy the game without any ads or in-app purchases.</li>
-<li>You can have more fun and excitement with the hack features.</li>
-</ul>
-<h3>Risks of Drive for Speed Simulator Hack Mod APK</h3>
-<p>However, there are also some risks involved in using the hacked version of the game. Some of them are:</p>
-<p>* drive for speed simulator unlimited money mod apk<br />
-* drive for speed simulator latest version mod apk<br />
-* drive for speed simulator mod apk download free<br />
-* drive for speed simulator mod apk android 1<br />
-* drive for speed simulator hack cheats mod apk<br />
-* drive for speed simulator mod apk revdl<br />
-* drive for speed simulator mod apk offline<br />
-* drive for speed simulator mod apk no ads<br />
-* drive for speed simulator mod apk unlimited cars<br />
-* drive for speed simulator mod apk rexdl<br />
-* drive for speed simulator mod apk 2023<br />
-* drive for speed simulator mod apk an1<br />
-* drive for speed simulator hack mod apk ios<br />
-* drive for speed simulator mod apk unlimited everything<br />
-* drive for speed simulator mod apk online<br />
-* drive for speed simulator mod apk happymod<br />
-* drive for speed simulator mod apk unlimited coins<br />
-* drive for speed simulator mod apk obb<br />
-* drive for speed simulator hack mod apk 2022<br />
-* drive for speed simulator mod apk all unlocked<br />
-* drive for speed simulator hack tool mod apk<br />
-* drive for speed simulator mod apk data<br />
-* drive for speed simulator hack generator mod apk<br />
-* drive for speed simulator mod apk unlimited fuel<br />
-* drive for speed simulator mod apk new update<br />
-* drive for speed simulator hack version mod apk<br />
-* drive for speed simulator mod apk pure<br />
-* drive for speed simulator hack online mod apk<br />
-* drive for speed simulator mod apk unlimited gems<br />
-* drive for speed simulator mod apk old version</p>
-<ul>
-<li>You might face some compatibility issues with your device or Android version.</li>
-<li>You might encounter some bugs or glitches in the game due to the hack mod.</li>
-<li>You might lose your progress or data if the game updates or crashes.</li>
-<li>You might get banned from the game or face legal issues if the developers detect the hack mod.</li>
-</ul>
-<p>Therefore, you should use the Drive for Speed Simulator Hack Mod APK at your own risk and discretion. We are not responsible for any consequences that may arise from using it.</p>
-<h2>How to play Drive for Speed Simulator Hack Mod APK</h2>
-<p>Now that you have downloaded and installed the Drive for Speed Simulator Hack Mod APK, you might be wondering how to play it. Well, it's not very different from the original version of the game, except for the hack features. Here are some tips and tricks to help you play it better.</p>
-<h3>Tips and tricks for Drive for Speed Simulator Hack Mod APK</h3>
-<p>Some of the tips and tricks for playing Drive for Speed Simulator Hack Mod APK are:</p>
-<ul>
-<li>Choose the car that suits your driving style and preference. You can try different cars and see how they perform on different tracks and modes.</li>
-<li>Customize your car with various parts and paint jobs to make it look cool and unique. You can also upgrade your car to improve its speed, acceleration, handling, and braking.</li>
-<li>Drive carefully and avoid crashing into other cars, obstacles, or traffic. Crashing will damage your car and reduce your score.</li>
-<li>Use the nitro boost to gain extra speed and overtake your opponents. You can also use the drift mode to make sharp turns and earn more coins.</li>
-<li>Complete missions and challenges to earn more coins and unlock new cars, tracks, and modes. You can also compete with other players on the leaderboards and achievements.</li>
-</ul>
-<h3>Comparison with the original version of Drive for Speed Simulator</h3>
-<p>If you have played the original version of Drive for Speed Simulator, you might notice some differences with the hacked version. Here is a table that compares some of the features of both versions:</p>
-| Feature | Original Version | Hacked Version |
-| --- | --- | --- |
-| Money | Limited | Unlimited |
-| Cars | Locked | Unlocked |
-| Tracks | Locked | Unlocked |
-| Modes | Locked | Unlocked |
-| Ads | Yes | No |
-| In-app purchases | Yes | No |
-<h2>Conclusion</h2>
-<p>In conclusion, Drive for Speed Simulator is a great racing game that lets you drive different cars on various tracks and modes. You can also enjoy the game with unlimited money, unlocked cars, and other features by using the Drive for Speed Simulator Hack Mod APK. However, you should be aware of the risks involved in using the hacked version and use it at your own risk. We hope this article has helped you learn more about this game and how to download and play it. Have fun!</p>
-<h3>Summary of the article</h3>
-<p>This article has covered the following topics:</p>
-<ul>
-<li>What is Drive for Speed Simulator?</li>
-<li>How to download and install Drive for Speed Simulator Hack Mod APK?</li>
-<li>Why use Drive for Speed Simulator Hack Mod APK?</li>
-<li>How to play Drive for Speed Simulator Hack Mod APK?</li>
-<li>Comparison with the original version of Drive for Speed Simulator</li>
-</ul>
-<h3>FAQs</h3>
-<p>Here are some frequently asked questions about Drive for Speed Simulator Hack Mod APK:</p>
-<ol>
-<li><b>Is Drive for Speed Simulator Hack Mod APK safe to use?</b></li>
-<p>Drive for Speed Simulator Hack Mod APK is not an official version of the game and may contain viruses or malware that can harm your device or data. Therefore, you should use it at your own risk and discretion.</p>
-<li><b>How do I update Drive for Speed Simulator Hack Mod APK?</b></li>
-<p>If there is a new version of Drive for Speed Simulator Hack Mod APK available, you can download it from [this link] and install it over the existing one. However, you might lose your progress or data if you do so.</p>
-<li><b>Can I play Drive for Speed Simulator Hack Mod APK online?</b></li>
-<p>Yes, you can play Drive for Speed Simulator Hack Mod APK online with other players. However, you might get banned from the game or face legal issues if the developers detect the hack mod.</p>
-<li><b>Can I play Drive for Speed Simulator Hack Mod APK offline?</b></li>
-<p>Yes, you can play Drive for Speed Simulator Hack Mod APK offline without an internet connection. However, you might not be able to access some features such as leaderboards and achievements.</p>
-<li><b>Can I use Drive for Speed Simulator Hack Mod APK on other devices?</b></li>
-<p>Yes, you can use Drive for Speed Simulator Hack Mod APK on other devices as long as they are compatible with the game and the hack mod. However, you might need to transfer your data or progress manually if you switch devices.</p>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download Kitty Live APK for Free - No Ads No Registration No Hassle.md
DELETED
@@ -1,119 +0,0 @@
-<br />
-<h1>Download Kitty Live Apk: A Guide to Enjoy Live Streaming Videos</h1>
-<p>Do you love watching live streaming videos? Do you want to broadcast your own live videos and interact with people from around the world? If yes, then you should try Kitty Live Apk, a popular and fun app that lets you enjoy live streaming videos anytime, anywhere. In this article, we will tell you what Kitty Live Apk is, what features it offers, how to download and install it, how to use it, and what are the pros and cons of using it. By the end of this article, you will have a clear idea of whether Kitty Live Apk is the right app for you or not.</p>
-<h2>What is Kitty Live Apk?</h2>
-<p>Kitty Live Apk is a platform for broadcasting and watching live streaming videos. It is cool and fashionable, bringing you closer to idols or fans with HD video chat. Pretty girls and cute boys, sexy dance and pop music, trending topics and entertainment gossip all could be found on Kitty Live Apk. You can watch great live streams, such as live gaming, live music, live shows, live events, and more. You can also broadcast your own live videos and share your talents, hobbies, opinions, or stories with others. You can chat with hosts and viewers in real-time, send and receive gifts, join or create fan clubs, and make new friends on Kitty Live Apk.</p>
-<h2>download kitty live apk</h2><br /><p><b><b>Download</b> === <a href="https://jinyurl.com/2uNT3G">https://jinyurl.com/2uNT3G</a></b></p><br /><br />
-<h3>Features of Kitty Live Apk</h3>
-<p>Kitty Live Apk has many features that make it an attractive and enjoyable app for live streaming lovers. Here are some of the main features of Kitty Live Apk:</p>
-<h4>Watch live streams from around the world</h4>
-<p>Kitty Live Apk has a huge variety of live streams that you can watch for free. You can explore different categories, such as gaming, music, beauty, sports, education, lifestyle, etc., and discover live streams that suit your interests. You can also search for specific keywords or hashtags to find live streams that match your preferences. You can watch live streams from different countries and regions, such as China, Japan, Korea, India, Indonesia, Thailand, Vietnam, etc., and learn about different cultures and languages. You can also watch live streams from famous celebrities or influencers on Kitty Live Apk.</p>
-<h4>Broadcast your own live videos</h4>
-<p>Kitty Live Apk also allows you to broadcast your own live videos and show your personality to the world. You can choose any topic or theme that you want to share with others, such as your talents, hobbies, opinions, stories, etc. You can also choose the quality and mode of your video, such as HD or SD, portrait or landscape. You can also add filters, stickers, effects, or music to make your video more attractive and fun. You can also invite guests or co-hosts to join your live stream and have a video chat with them. You can also set up a password or a private mode for your live stream if you want to limit your audience.</p>
-<h4>Chat with hosts and viewers</h4>
-<p>Kitty Live Apk also enables you to chat with hosts and viewers in real-time while watching or broadcasting live videos. You can send text messages or voice messages to express your feelings or opinions. You can also use emojis or stickers to make your chat more lively and colorful <p>You can also send gifts to the hosts or viewers to show your appreciation or support. You can choose from a variety of gifts, such as flowers, hearts, stars, diamonds, etc., and each gift has a different value and effect. You can also receive gifts from others and exchange them for cash or coins. You can also join or create fan clubs for your favorite hosts or viewers and chat with them in exclusive groups. You can also follow or unfollow any host or viewer that you like or dislike on Kitty Live Apk.</p>
-<h4>Send and receive gifts</h4>
-<p>Kitty Live Apk also enables you to send and receive gifts while watching or broadcasting live videos. You can choose from a variety of gifts, such as flowers, hearts, stars, diamonds, etc., and each gift has a different value and effect. You can also receive gifts from others and exchange them for cash or coins. You can also join or create fan clubs for your favorite hosts or viewers and chat with them in exclusive groups. You can also follow or unfollow any host or viewer that you like or dislike on Kitty Live Apk.</p>
-<p>download kitty live apk latest version<br />
-download kitty live apk for android<br />
-download kitty live apk mod<br />
-download kitty live apk free<br />
-download kitty live apk full<br />
-download kitty live apk premium<br />
-download kitty live apk pro<br />
-download kitty live apk unlocked<br />
-download kitty live apk update<br />
-download kitty live apk old version<br />
-download kitty live apk from apkpure<br />
-download kitty live apk from uptodown<br />
-download kitty live apk from apkmirror<br />
-download kitty live apk from apkcombo[^1^]<br />
-download kitty live apk from play store<br />
-download kitty live apk for pc<br />
-download kitty live apk for windows 10<br />
-download kitty live apk for mac<br />
-download kitty live apk for laptop<br />
-download kitty live apk for chromebook<br />
-download kitty live apk for ios<br />
-download kitty live apk for iphone<br />
-download kitty live apk for ipad<br />
-download kitty live apk for firestick<br />
-download kitty live apk for smart tv<br />
-how to download kitty live apk<br />
-where to download kitty live apk<br />
-why download kitty live apk<br />
-what is kitty live apk<br />
-who created kitty live apk<br />
-when was kitty live apk released<br />
-which version of kitty live apk is best<br />
-is it safe to download kitty live apk<br />
-is it legal to download kitty live apk<br />
-is it possible to download kitty live apk without ads<br />
-benefits of downloading kitty live apk<br />
-features of downloading kitty live apk<br />
-reviews of downloading kitty live apk<br />
-ratings of downloading kitty live apk<br />
-alternatives to downloading kitty live apk<br />
-tips and tricks for downloading kitty live apk<br />
-steps to download kitty live apk<br />
-guide to download kitty live apk<br />
-tutorial to download kitty live apk<br />
-video to download kitty live apk<br />
-link to download kitty live apk<br />
-website to download kitty live apk<br />
-blog to download kitty live apk<br />
-forum to download kitty live apk</p>
-<h3>How to download and install Kitty Live Apk?</h3>
-<p>Kitty Live Apk is not available on the Google Play Store or the Apple App Store, so you need to download it from a trusted source. Here are the steps to download and install Kitty Live Apk on your Android device:</p>
-<h4>Download the apk file from a trusted source</h4>
-<p>The first step is to download the apk file of Kitty Live Apk from a trusted source. You can use the link below to download the latest version of Kitty Live Apk. The file size is about 50 MB, so make sure you have enough space on your device.</p>
-<h4>Enable unknown sources on your device</h4>
-<p>The second step is to enable unknown sources on your device. This is necessary because you are installing an app from outside the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.</p>
-<h4>Install the apk file and launch the app</h4>
-<p>The third step is to install the apk file and launch the app. To do this, locate the downloaded apk file on your device and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the installation to complete. Once done, you will see an icon of Kitty Live Apk on your home screen or app drawer. Tap on it to launch the app and enjoy live streaming videos.</p>
-<h3>How to use Kitty Live Apk?</h3>
-<p>Kitty Live Apk is easy to use and has a user-friendly interface. Here are the steps to use Kitty Live Apk on your Android device:</p>
-<h4>Create an account or log in with social media</h4>
-<p>The first step is to create an account or log in with social media. When you launch the app, you will see a welcome screen asking you to sign up or log in. You can choose to sign up with your phone number, email address, or Facebook account. You will need to verify your phone number or email address with a code sent to you by SMS or email. You will also need to create a username and password for your account. Alternatively, you can log in with your Facebook, Google, Twitter, Instagram, or Line account. This will allow you to sync your profile and contacts with Kitty Live Apk.</p>
-<h4>Explore the categories and discover live streams</h4>
-<p>The second step is to explore the categories and discover live streams that interest you. When you log in, you will see a home screen with various tabs, such as Hot, New, Nearby, Follow, etc. You can swipe left or right to switch between these tabs and see different live streams. You can also tap on the magnifying glass icon at the top right corner to search for specific keywords or hashtags. You can also tap on the menu icon at the top left corner to access more options, such as Categories, Countries, Languages, Settings, etc. You can browse through different categories, such as gaming, music, beauty, sports
-<p>etc., and discover live streams that suit your preferences. You can also filter live streams by countries, such as China, Japan, Korea, India, Indonesia, Thailand, Vietnam, etc., and languages, such as English, Chinese, Japanese, Korean, Hindi, Indonesian, Thai, Vietnamese, etc. You can also see the number of viewers and likes for each live stream. You can tap on any live stream to watch it and interact with the host and viewers.</p>
-<h4>Join or start a live stream and interact with others</h4>
-<p>The third step is to join or start a live stream and interact with others. When you watch a live stream, you will see the video of the host on the top half of the screen and the chat box on the bottom half of the screen. You can swipe up or down to see more or less of the video or chat. You can also tap on the screen to see more options, such as like, share, follow, gift, comment, etc. You can like the live stream by tapping on the heart icon at the bottom right corner. You can share the live stream with your friends by tapping on the share icon at the bottom left corner. You can follow the host by tapping on the follow icon at the top right corner. You can send gifts to the host by tapping on the gift icon at the bottom center. You can comment on the live stream by tapping on the comment icon at the bottom center. You can also use emojis or stickers to make your comment more lively and colorful.</p>
-<p>If you want to start your own live stream, you can tap on the camera icon at the top center of the home screen. You will see a screen where you can choose the quality and mode of your video, such as HD or SD, portrait or landscape. You can also add filters, stickers, effects, or music to make your video more attractive and fun. You can also invite guests or co-hosts to join your live stream and have a video chat with them. You can also set up a password or a private mode for your live stream if you want to limit your audience. You can also choose a category and a title for your live stream to attract more viewers. When you are ready, you can tap on Start Live to begin your live stream and interact with your viewers.</p>
-<h2>Pros and cons of Kitty Live Apk</h2>
-<p>Kitty Live Apk has many pros and cons that you should consider before using it. Here are some of the pros and cons of Kitty Live Apk:</p>
-<h3>Pros</h3>
-<p>Kitty Live Apk has many pros that make it a great app for live streaming lovers. Here are some of the pros of Kitty Live Apk:</p>
-<h4>Free and easy to use</h4>
-<p>Kitty Live Apk is free to download and use. You don't need to pay any fees or subscriptions to watch or broadcast live videos. You also don't need to register or sign up to watch live videos. You only need to create an account if you want to broadcast your own live videos or interact with others. Kitty Live Apk is also easy to use and has a user-friendly interface. You can easily navigate through different tabs and options and find what you are looking for.</p>
-<h4>High-quality video and audio</h4>
-<p>Kitty Live Apk provides high-quality video and audio for both watching and broadcasting live videos. You can choose between HD or SD quality for your video depending on your internet speed and device performance. You can also adjust the volume and brightness of your video according to your preference. Kitty Live Apk also supports various modes for your video, such as portrait or landscape. You can also enjoy clear and smooth sound quality for both listening and speaking.</p>
-<h4>Diverse and entertaining content</h4>
-<p>Kitty Live Apk has a huge variety of content that you can watch or broadcast for free. You can explore different categories, such as gaming, music, beauty
-<p>sports, education, lifestyle, etc., and discover live streams that suit your interests. You can also search for specific keywords or hashtags to find live streams that match your preferences. You can watch live streams from different countries and regions, such as China, Japan, Korea, India, Indonesia, Thailand, Vietnam, etc., and learn about different cultures and languages. You can also watch live streams from famous celebrities or influencers on Kitty Live Apk. You can also broadcast your own live videos and share your talents, hobbies, opinions, stories, etc., with others. You can also invite guests or co-hosts to join your live stream and have a video chat with them. You can also join or create fan clubs for your favorite hosts or viewers and chat with them in exclusive groups. You can also follow or unfollow any host or viewer that you like or dislike on Kitty Live Apk.</p>
-<h3>Cons</h3>
-<p>Kitty Live Apk also has some cons that you should be aware of before using it. Here are some of the cons of Kitty Live Apk:</p>
-<h4>Requires a stable internet connection</h4>
-<p>Kitty Live Apk requires a stable internet connection to watch or broadcast live videos. If your internet connection is slow or unstable, you may experience buffering, lagging, freezing, or crashing of the app. You may also miss some important moments or interactions during the live stream. Therefore, it is recommended that you use a Wi-Fi connection or a 4G network to enjoy the best experience of Kitty Live Apk.</p>
-<h4>May contain inappropriate or offensive content</h4>
-<p>Kitty Live Apk may contain inappropriate or offensive content that may not be suitable for everyone. Some live streams may contain nudity, violence, profanity, hate speech, or other harmful or illegal content. Some hosts or viewers may also behave inappropriately or offensively towards others. Kitty Live Apk has a report system that allows you to report any abusive or inappropriate content or user. However, there is no guarantee that the report will be handled promptly or effectively. Therefore, you should be careful and responsible when using Kitty Live Apk and avoid any content or user that may offend you or harm you.</p>
-<h4>May consume a lot of data and battery</h4>
-<p>Kitty Live Apk may consume a lot of data and battery when you watch or broadcast live videos. Watching or broadcasting live videos involves streaming high-quality video and audio over the internet, which can use up a lot of data and drain your battery quickly. Therefore, you should monitor your data usage and battery level when using Kitty Live Apk and avoid overusing it. You should also close the app when you are not using it to save data and battery.</p>
-<h2>Conclusion</h2>
-<p>Kitty Live Apk is a platform for broadcasting and watching live streaming videos. It is cool and fashionable, bringing you closer to idols or fans with HD video chat. You can watch great live streams, such as live gaming, live music, live shows, live events, and more. You can also broadcast your own live videos and share your talents, hobbies, opinions, or stories with others. You can chat with hosts and viewers in real-time, send and receive gifts, join or create fan clubs, and make new friends on Kitty Live Apk.</p>
|
103 |
-
<p>Kitty Live Apk has many features that make it an attractive and enjoyable app for live streaming lovers. However, it also has some drawbacks that you should be aware of before using it. Kitty Live Apk requires a stable internet connection to watch or broadcast live videos. It may also contain inappropriate or offensive content that may not be suitable for everyone. It may also consume a lot of data and battery when you watch or broadcast live videos.</p>
|
104 |
-
<p>Therefore, you should weigh the pros and cons of Kitty Live Apk before using it. If you love watching live streaming videos and want to broadcast your own live videos and interact with people from around the world, then Kitty Live Apk may be the right app for you. However, if you are concerned about your internet speed, data usage <p>or battery level, or sensitive to inappropriate or offensive content, then Kitty Live Apk may not be the best app for you. You should also be careful and responsible when using Kitty Live Apk and avoid any content or user that may offend you or harm you.</p>
|
105 |
-
<p>We hope this article has helped you understand what Kitty Live Apk is, what features it offers, how to download and install it, how to use it, and what are the pros and cons of using it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy live streaming!</p>
|
106 |
-
<h2>FAQs</h2>
|
107 |
-
<p>Here are some frequently asked questions about Kitty Live Apk:</p>
|
108 |
-
<h4>Is Kitty Live Apk safe to use?</h4>
|
109 |
-
<p>Kitty Live Apk is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, you should also be aware of the risks involved in using any live streaming app, such as exposure to inappropriate or offensive content, privacy issues, cyberbullying, etc. You should also protect your personal information and avoid sharing any sensitive or confidential information with others on Kitty Live Apk. You should also report any abusive or inappropriate content or user to the app's support team.</p>
|
110 |
-
<h4>Is Kitty Live Apk legal to use?</h4>
|
111 |
-
<p>Kitty Live Apk is legal to use as long as you comply with the app's terms and conditions and respect the laws and regulations of your country or region. However, you should also be aware of the potential legal issues involved in using any live streaming app, such as copyright infringement, defamation, obscenity, etc. You should also respect the intellectual property rights and privacy rights of others on Kitty Live Apk. You should also avoid broadcasting or watching any illegal or harmful content on Kitty Live Apk.</p>
|
112 |
-
<h4>How can I earn money on Kitty Live Apk?</h4>
|
113 |
-
<p>You can earn money on Kitty Live Apk by receiving gifts from your viewers while broadcasting live videos. You can exchange the gifts for cash or coins on the app. You can also earn money by inviting your friends to join Kitty Live Apk and getting a commission for each successful referral. You can also earn money by participating in various events and activities on Kitty Live Apk and winning prizes or rewards.</p>
|
114 |
-
<h4>How can I contact the support team of Kitty Live Apk?</h4>
|
115 |
-
<p>You can contact the support team of Kitty Live Apk by tapping on the menu icon at the top left corner of the home screen and selecting Settings > Feedback. You can also email them at [email protected] or visit their website at https://www.kitty.live/ for more information.</p>
|
116 |
-
<h4>How can I update Kitty Live Apk?</h4>
|
117 |
-
<p>You can update Kitty Live Apk by downloading the latest version of the apk file from a trusted source and installing it on your device. You can also check for updates by tapping on the menu icon at the top left corner of the home screen and selecting Settings > About > Check for updates.</p> 401be4b1e0<br />
|
118 |
-
<br />
|
119 |
-
<br />
spaces/1phancelerku/anime-remove-background/Download Play Together VNG APK 1.44 0 and Enjoy a Fun Social Game.md
DELETED
@@ -1,127 +0,0 @@
<h1>Play Together VNG APK 1.44 0: A Fun and Social Virtual World Game</h1>
<p>Do you want to meet new friends from all over the world? Do you want to have fun and express yourself in a virtual world? Do you want to play mini-games, go fishing, adopt pets, and more? If you answered yes to any of these questions, then you should download Play Together VNG APK 1.44 0 right now!</p>
<h2>play together vng apk 1.44 0</h2><br /><p><b><b>Download</b> ---> <a href="https://jinyurl.com/2uNKAG">https://jinyurl.com/2uNKAG</a></b></p><br /><br />
<p>Play Together VNG is a mobile game developed by VNG Corporation, a leading game company in Vietnam. It is a virtual world where you can create your own avatar, explore different places, interact with other players, join various activities, and have a blast!</p>
<p>In this article, we will show you how to download and install Play Together VNG APK 1.44 0 on your Android device, what's new in this version, how to play the game, some tips and tricks, and the pros and cons of this game. Let's get started!</p>
<h2>How to Download and Install Play Together VNG APK 1.44 0</h2>
<p>Downloading and installing Play Together VNG APK 1.44 0 is very easy and fast. Just follow these simple steps:</p>
<p>play together vng mod apk 1.44 0<br />
download play together vng apk 1.44 0<br />
play together vng apk 1.44 0 free<br />
play together vng apk 1.44 0 latest version<br />
play together vng apk 1.44 0 update<br />
play together vng apk 1.44 0 android<br />
play together vng apk 1.44 0 ios<br />
play together vng apk 1.44 0 offline<br />
play together vng apk 1.44 0 online<br />
play together vng apk 1.44 0 hack<br />
play together vng apk 1.44 0 unlimited money<br />
play together vng apk 1.44 0 no ads<br />
play together vng apk 1.44 0 gameplay<br />
play together vng apk 1.44 0 review<br />
play together vng apk 1.44 0 features<br />
play together vng apk 1.44 0 tips and tricks<br />
play together vng apk 1.44 0 guide<br />
play together vng apk 1.44 0 tutorial<br />
play together vng apk 1.44 0 how to install<br />
play together vng apk 1.44 0 how to play<br />
play together vng apk 1.44 0 how to update<br />
play together vng apk 1.44 0 how to download<br />
play together vng apk 1.44 0 how to mod<br />
play together vng apk 1.44 0 how to hack<br />
play together vng apk 1.44 0 how to get money<br />
play together vng apk 1.44 0 best settings<br />
play together vng apk 1.44 0 best characters<br />
play together vng apk 1.44 0 best outfits<br />
play together vng apk 1.44 0 best pets<br />
play together vng apk 1.44 0 best activities<br />
play together vng apk 1.44 0 best friends<br />
play together vng apk 1.44 0 best games<br />
play together vng apk 1.44 0 best islands<br />
play together vng apk 1.44 0 best houses<br />
play together vng apk 1.44 0 best furniture<br />
play together vng apk 1.44 0 best quests<br />
play together vng apk 1.44 0 best events<br />
play together vng apk</p>
<ol>
<li>Go to the official website of Play Together VNG or APKCombo, a reliable source for downloading APK files.</li>
<li>Choose the version that suits your device compatibility. The latest version is 1.46.0 as of September 2022, but you can also download older versions if you prefer.</li>
<li>Tap on the download button and wait for the file to be saved on your device.</li>
<li>Enable unknown sources in your settings if you haven't done so already. This will allow you to install apps from sources other than Google Play Store.</li>
<li>Install the APK file by tapping on it and following the instructions.</li>
</ol>
<p>Congratulations! You have successfully downloaded and installed Play Together VNG APK 1.44 0 on your Android device. Now you can enjoy playing this fun and social game!</p>
<h2>What's New in Play Together VNG APK 1.44 0</h2>
<p>Play Together VNG APK 1.44 0 is an updated version of the game that was released on July 29, 2022. It has some new features and improvements that make the game more enjoyable and exciting. Here are some of them:</p>
<ul>
<li>School: You can now go to school and learn new things with your friends. You can also join clubs, take exams, get rewards, and more.</li>
<li>Fishing: You can now go fishing and catch various fish with your friends. You can also sell your fish, buy fishing equipment, and participate in fishing contests.</li>
<li>Pets: You can now adopt cute pets and take care of them. You can also play with them, feed them, dress them up, and more.</li>
<li>Costumes: You can now buy more costumes and accessories for your avatar. You can also mix and match different items to create your own style.</li>
<li>And more: There are also other improvements and bug fixes that enhance the game performance and user experience.</li>
</ul>
<p>With these new features and improvements, Play Together VNG APK 1.44 0 is more fun and social than ever. You can do more things, meet more people, and have more fun in this virtual world!</p>
<h2>How to Play Play Together VNG APK 1.44 0</h2>
<p>Playing Play Together VNG APK 1.44 0 is very easy and intuitive. Just follow these simple steps:</p>
<ol>
<li>Create your avatar and customize it. You can choose your gender, hair, eyes, skin, clothes, and more. You can also change your appearance anytime you want.</li>
<li>Explore the virtual world and interact with other players. You can visit different places, such as the park, the beach, the mall, the school, and more. You can also chat with other players, send them gifts, invite them to your house, and more.</li>
<li>Join various activities and mini-games. You can join different activities, such as fishing, cooking, gardening, dancing, and more. You can also play mini-games, such as racing, shooting, puzzle, and more. You can earn coins and diamonds by playing these activities and mini-games.</li>
<li>Make friends and chat with them. You can make friends with other players by adding them to your friend list. You can also chat with them using text or voice messages. You can also join a club or create your own club with your friends.</li>
</ol>
<p>Playing Play Together VNG APK 1.44 0 is a great way to have fun and socialize with other people from all over the world. You can express yourself, explore your creativity, and enjoy your time in this virtual world!</p>
<h2>Tips and Tricks for Play Together VNG APK 1.44 0</h2>
<p>If you want to get the most out of Play Together VNG APK 1.44 0, here are some tips and tricks that you should know:</p>
<ul>
<li>How to earn coins and diamonds: Coins and diamonds are the main currencies in the game. You can use them to buy items, costumes, pets, furniture, and more. You can earn coins and diamonds by playing activities and mini-games, completing quests, watching ads, logging in daily, inviting friends, joining events, and more.</li>
<li>How to level up and unlock more content: Leveling up is important in the game because it unlocks more content and features for you. You can level up by gaining experience points (XP) from playing activities and mini-games, interacting with other players, taking care of your pets, joining clubs, and more.</li>
<li>How to use items and coupons: Items are useful things that you can use in the game. For example, you can use food items to feed yourself or your pets, or use furniture items to decorate your house. Coupons are special items that you can use to get discounts or free items from the shop. You can get items and coupons from playing activities and mini-games, completing quests, opening chests, joining events, and more.</li>
<li>How to join a club and participate in events: Clubs are groups of players who share a common interest or goal in the game. You can join a club or create your own club with your friends. By joining a club, you can chat with other club members, participate in club activities, compete with other clubs, get rewards from club missions, and more. Events are special occasions that happen in the game from time to time. By participating in events, you can enjoy special features, activities, mini-games, rewards, and more.</li>
</ul>
<p>These tips and tricks will help you play Play Together VNG APK 1.44 0 better and have more fun in the game. You can also discover more tips and tricks by playing the game yourself and exploring the virtual world!</p>
<h2>Pros and Cons of Play Together VNG APK 1.44 0</h2>
<p>Play Together VNG APK 1.44 0 is a great game that has many pros and cons. Here are some of them:</p>
<table>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
<tr>
<td>- Fun: The game is very fun and entertaining. You can do many things, play many games, and have many adventures in the game.</td>
<td>- Bugs: The game has some bugs and glitches that can affect the game performance and user experience. For example, some players may experience crashes, freezes, or errors in the game.</td>
</tr>
<tr>
<td>- Social: The game is very social and interactive. You can meet new friends, chat with them, join clubs, and more.</td>
<td>- Lag: The game may lag or slow down due to the high number of players or the network connection. This can make the game less smooth and enjoyable.</td>
</tr>
<tr>
<td>- Creative: The game is very creative and expressive. You can customize your avatar, your house, your pets, and more.</td>
<td>- Ads: The game has some ads that can be annoying or distracting. You can skip or remove them by watching videos or paying money.</td>
</tr>
<tr>
<td>- Free: The game is free to download and play. You can also get free coins and diamonds by playing the game or watching ads.</td>
<td>- In-app purchases: The game has some in-app purchases that can give you more coins, diamonds, items, or features. You may need to spend real money to get them.</td>
</tr>
</table>
<p>These pros and cons show that Play Together VNG APK 1.44 0 is a game that has both advantages and disadvantages. You can decide for yourself whether you like it or not by trying it out!</p>
<h2>Conclusion</h2>
<p>Play Together VNG APK 1.44 0 is a fun and social virtual world game that you can download and play on your Android device. It has many features, activities, mini-games, and more that you can enjoy with your friends or other players from all over the world. It also has some new features and improvements that make the game more exciting and enjoyable.</p>
<p>If you want to have fun and socialize in a virtual world, you should download Play Together VNG APK 1.44 0 right now! You will not regret it!</p>
<h3>FAQs</h3>
<p>Here are some frequently asked questions about Play Together VNG APK 1.44 0:</p>
<ol>
<li>Q: Is Play Together VNG APK 1.44 0 safe to download and install?</li>
<li>A: Yes, Play Together VNG APK 1.44 0 is safe to download and install as long as you get it from a reliable source, such as the official website or APKCombo. You should also scan the file with an antivirus app before installing it.</li>
<li>Q: Is Play Together VNG APK 1.44 0 compatible with my device?</li>
<li>A: Play Together VNG APK 1.44 0 is compatible with most Android devices that have Android 5.0 or higher. However, some devices may not support some features or functions of the game due to different specifications or models.</li>
<li>Q: How can I update Play Together VNG APK 1.44 0 to the latest version?</li>
<li>A: You can update Play Together VNG APK 1.44 0 to the latest version by downloading and installing it from the official website or APKCombo. You can also check for updates in the game settings or Google Play Store.</li>
<li>Q: How can I contact the developer of Play Together VNG APK 1.44 0?</li>
<li>A: You can contact the developer of Play Together VNG APK 1.44 0 by sending an email to [email protected] or visiting their Facebook page. You can also leave a review or feedback on Google Play Store or their website.</li>
<li>Q: How can I delete Play Together VNG APK 1.44 0 from my device?</li>
<li>A: You can delete Play Together VNG APK 1.44 0 from your device by going to your settings, apps, and tapping on the uninstall button. You can also delete the APK file from your device storage if you want to free up some space.</li>
</ol>
<p>I hope these FAQs have answered some of your questions about Play Together VNG APK 1.44 0. If you have any other questions, feel free to ask me or the developer of the game.</p>
spaces/4Taps/SadTalker/modules/text2speech.py
DELETED
@@ -1,12 +0,0 @@
import os


def text2speech(txt, audio_path):
    """Synthesize `txt` to an audio file at `audio_path` using the `tts` CLI."""
    print(txt)
    # Build the shell command for the TTS command-line tool.
    cmd = f'tts --text "{txt}" --out_path {audio_path}'
    print(cmd)
    try:
        # Note: os.system reports failure via its return code rather than by
        # raising, so this except branch only catches OS-level errors.
        os.system(cmd)
        return audio_path
    except Exception:
        print("Error: failed to convert text to audio")
        return None
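Because `os.system` signals failure through its exit status rather than an exception, a caller may want to confirm the output file actually exists. A minimal usage sketch (the prompt and file name here are just illustrations):

    out = text2speech("Hello from SadTalker", "result.wav")
    if out is None or not os.path.exists(out):
        print("TTS failed; check that the `tts` CLI is installed")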
spaces/801artistry/RVC801/lib/infer_pack/modules/F0Predictor/F0Predictor.py
DELETED
@@ -1,16 +0,0 @@
class F0Predictor(object):
    """Abstract interface for fundamental-frequency (F0) predictors."""

    def compute_f0(self, wav, p_len):
        """
        input:  wav:   [signal_length]
                p_len: int
        output: f0:    [signal_length//hop_length]
        """
        pass

    def compute_f0_uv(self, wav, p_len):
        """
        input:  wav:   [signal_length]
                p_len: int
        output: f0:    [signal_length//hop_length], uv: [signal_length//hop_length]
        """
        pass
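Concrete predictors elsewhere in the package implement this interface. Purely as an illustration of the contract (the constant pitch value and numpy return type are assumptions, not taken from this repo), a toy subclass might look like:

    import numpy as np

    class ConstantF0Predictor(F0Predictor):
        # Toy predictor: flat 100 Hz contour, every frame marked voiced.
        def compute_f0(self, wav, p_len):
            return np.full(p_len, 100.0)

        def compute_f0_uv(self, wav, p_len):
            f0 = np.full(p_len, 100.0)
            uv = np.ones(p_len)  # 1 = voiced
            return f0, uv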
spaces/A1draw-12196y/DeepDanbooru_string/README.md
DELETED
@@ -1,39 +0,0 @@
---
title: DeepDanbooru String
emoji: 💬
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.6
app_file: app.py
pinned: false
duplicated_from: NoCrypt/DeepDanbooru_string
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`

`sdk_version`: _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
spaces/AB-TW/team-ai/agents/code_execute_agent.py
DELETED
@@ -1,7 +0,0 @@
from langchain.agents import initialize_agent, AgentType
from models import llm
from agents.tools.shell_tool import shell_tool
from agents.tools.python_code_tool import repl_tool


# Zero-shot ReAct agent that can run shell commands and Python code.
generate_and_excute_code_agent = initialize_agent(
    [shell_tool, repl_tool],
    llm(),
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
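The agent is presumably invoked elsewhere in the repo; as a hedged sketch (the prompt string is invented, and `.run` assumes the LangChain AgentExecutor API of this era):

    result = generate_and_excute_code_agent.run(
        "Print the current working directory using Python, then run `ls` in the shell."
    )
    print(result)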
spaces/AIConsultant/MusicGen/audiocraft/data/info_audio_dataset.py
DELETED
@@ -1,110 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
"""Base classes for the datasets that also provide non-audio metadata,
e.g. description, text transcription etc.
"""
from dataclasses import dataclass
import logging
import math
import re
import typing as tp

import torch

from .audio_dataset import AudioDataset, AudioMeta
from ..environment import AudioCraftEnvironment
from ..modules.conditioners import SegmentWithAttributes, ConditioningAttributes


logger = logging.getLogger(__name__)


def _clusterify_meta(meta: AudioMeta) -> AudioMeta:
    """Monkey-patch meta to match cluster specificities."""
    meta.path = AudioCraftEnvironment.apply_dataset_mappers(meta.path)
    if meta.info_path is not None:
        meta.info_path.zip_path = AudioCraftEnvironment.apply_dataset_mappers(meta.info_path.zip_path)
    return meta


def clusterify_all_meta(meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
    """Monkey-patch all meta to match cluster specificities."""
    return [_clusterify_meta(m) for m in meta]


@dataclass
class AudioInfo(SegmentWithAttributes):
    """Dummy SegmentInfo with empty attributes.

    The InfoAudioDataset is expected to return metadata that inherits
    from SegmentWithAttributes class and can return conditioning attributes.

    This basically guarantees all datasets will be compatible with the current
    solvers that contain conditioners requiring this.
    """
    audio_tokens: tp.Optional[torch.Tensor] = None  # populated when using a cached batch for training a LM.

    def to_condition_attributes(self) -> ConditioningAttributes:
        return ConditioningAttributes()


class InfoAudioDataset(AudioDataset):
    """AudioDataset that always returns metadata as SegmentWithAttributes along with the audio waveform.

    See `audiocraft.data.audio_dataset.AudioDataset` for initialization arguments.
    """
    def __init__(self, meta: tp.List[AudioMeta], **kwargs):
        super().__init__(clusterify_all_meta(meta), **kwargs)

    def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentWithAttributes]]:
        if not self.return_info:
            wav = super().__getitem__(index)
            assert isinstance(wav, torch.Tensor)
            return wav
        wav, meta = super().__getitem__(index)
        return wav, AudioInfo(**meta.to_dict())


def get_keyword_or_keyword_list(value: tp.Optional[str]) -> tp.Union[tp.Optional[str], tp.Optional[tp.List[str]]]:
    """Preprocess a single keyword or possibly a list of keywords."""
    if isinstance(value, list):
        return get_keyword_list(value)
    else:
        return get_keyword(value)


def get_string(value: tp.Optional[str]) -> tp.Optional[str]:
    """Preprocess a single string, stripping surrounding whitespace."""
    if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None':
        return None
    else:
        return value.strip()


def get_keyword(value: tp.Optional[str]) -> tp.Optional[str]:
    """Preprocess a single keyword."""
    if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None':
        return None
    else:
        return value.strip().lower()


def get_keyword_list(values: tp.Union[str, tp.List[str]]) -> tp.Optional[tp.List[str]]:
    """Preprocess a list of keywords."""
    if isinstance(values, str):
        values = [v.strip() for v in re.split(r'[,\s]', values)]
    elif isinstance(values, float) and math.isnan(values):
        values = []
    if not isinstance(values, list):
        logger.debug(f"Unexpected keyword list {values}")
        values = [str(values)]

    kws = [get_keyword(v) for v in values]
    kw_list = [k for k in kws if k is not None]
    if len(kw_list) == 0:
        return None
    else:
        return kw_list
spaces/AIConsultant/MusicGen/audiocraft/optim/polynomial_decay_lr_scheduler.py
DELETED
@@ -1,47 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

from torch.optim import Optimizer
from torch.optim.lr_scheduler import _LRScheduler


class PolynomialDecayLRScheduler(_LRScheduler):
    """Polynomial decay LR scheduler.

    Args:
        optimizer (Optimizer): Torch optimizer.
        warmup_steps (int): Number of warmup steps.
        total_steps (int): Total number of steps.
        end_lr (float): Final learning rate to achieve over total number of steps.
        zero_lr_warmup_steps (int): Number of steps with a learning rate of value 0.
        power (float): Decay exponent.
    """
    def __init__(self, optimizer: Optimizer, warmup_steps: int, total_steps: int,
                 end_lr: float = 0., zero_lr_warmup_steps: int = 0, power: float = 1.):
        self.warmup_steps = warmup_steps
        self.total_steps = total_steps
        self.end_lr = end_lr
        self.zero_lr_warmup_steps = zero_lr_warmup_steps
        self.power = power
        super().__init__(optimizer)

    def _get_sched_lr(self, lr: float, step: int):
        if self.zero_lr_warmup_steps > 0 and step <= self.zero_lr_warmup_steps:
            lr = 0
        elif self.warmup_steps > 0 and step <= self.warmup_steps + self.zero_lr_warmup_steps:
            lr_ratio = (step - self.zero_lr_warmup_steps) / float(self.warmup_steps)
            lr = lr_ratio * lr
        elif step >= self.total_steps:
            lr = self.end_lr
        else:
            total_warmup_steps = self.warmup_steps + self.zero_lr_warmup_steps
            lr_range = lr - self.end_lr
            pct_remaining = 1 - (step - total_warmup_steps) / (self.total_steps - total_warmup_steps)
            lr = lr_range * pct_remaining ** self.power + self.end_lr
        return lr

    def get_lr(self):
        return [self._get_sched_lr(base_lr, self.last_epoch) for base_lr in self.base_lrs]
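The schedule therefore holds the LR at zero for `zero_lr_warmup_steps`, ramps linearly to the base LR over `warmup_steps`, then decays polynomially to `end_lr` by `total_steps`. A minimal usage sketch (model and hyperparameters are arbitrary):

    import torch

    model = torch.nn.Linear(16, 4)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    sched = PolynomialDecayLRScheduler(opt, warmup_steps=500, total_steps=10000,
                                       end_lr=1e-6, power=1.0)

    for step in range(10000):
        # ... forward / backward ...
        opt.step()
        sched.step()  # applies the polynomial schedule to opt's lr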
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/CLAP/audio.py
DELETED
@@ -1,179 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchlibrosa.stft import Spectrogram, LogmelFilterBank


def get_audio_encoder(name: str):
    if name == "Cnn14":
        return Cnn14
    else:
        raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))


class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(ConvBlock, self).__init__()

        self.conv1 = nn.Conv2d(in_channels=in_channels,
                               out_channels=out_channels,
                               kernel_size=(3, 3), stride=(1, 1),
                               padding=(1, 1), bias=False)

        self.conv2 = nn.Conv2d(in_channels=out_channels,
                               out_channels=out_channels,
                               kernel_size=(3, 3), stride=(1, 1),
                               padding=(1, 1), bias=False)

        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, input, pool_size=(2, 2), pool_type='avg'):
        x = input
        x = F.relu_(self.bn1(self.conv1(x)))
        x = F.relu_(self.bn2(self.conv2(x)))
        if pool_type == 'max':
            x = F.max_pool2d(x, kernel_size=pool_size)
        elif pool_type == 'avg':
            x = F.avg_pool2d(x, kernel_size=pool_size)
        elif pool_type == 'avg+max':
            x1 = F.avg_pool2d(x, kernel_size=pool_size)
            x2 = F.max_pool2d(x, kernel_size=pool_size)
            x = x1 + x2
        else:
            raise Exception('Incorrect argument!')

        return x


class ConvBlock5x5(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(ConvBlock5x5, self).__init__()

        self.conv1 = nn.Conv2d(in_channels=in_channels,
                               out_channels=out_channels,
                               kernel_size=(5, 5), stride=(1, 1),
                               padding=(2, 2), bias=False)

        self.bn1 = nn.BatchNorm2d(out_channels)

    def forward(self, input, pool_size=(2, 2), pool_type='avg'):
        x = input
        x = F.relu_(self.bn1(self.conv1(x)))
        if pool_type == 'max':
            x = F.max_pool2d(x, kernel_size=pool_size)
        elif pool_type == 'avg':
            x = F.avg_pool2d(x, kernel_size=pool_size)
        elif pool_type == 'avg+max':
            x1 = F.avg_pool2d(x, kernel_size=pool_size)
            x2 = F.max_pool2d(x, kernel_size=pool_size)
            x = x1 + x2
        else:
            raise Exception('Incorrect argument!')

        return x


class AttBlock(nn.Module):
    def __init__(self, n_in, n_out, activation='linear', temperature=1.):
        super(AttBlock, self).__init__()

        self.activation = activation
        self.temperature = temperature
        self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
        self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)

        self.bn_att = nn.BatchNorm1d(n_out)

    def forward(self, x):
        # x: (n_samples, n_in, n_time)
        norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
        cla = self.nonlinear_transform(self.cla(x))
        x = torch.sum(norm_att * cla, dim=2)
        return x, norm_att, cla

    def nonlinear_transform(self, x):
        if self.activation == 'linear':
            return x
        elif self.activation == 'sigmoid':
            return torch.sigmoid(x)


class Cnn14(nn.Module):
    def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
                 fmax, classes_num, out_emb):
        super(Cnn14, self).__init__()

        window = 'hann'
        center = True
        pad_mode = 'reflect'
        ref = 1.0
        amin = 1e-10
        top_db = None

        # Spectrogram extractor
        self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
                                                 win_length=window_size, window=window, center=center,
                                                 pad_mode=pad_mode, freeze_parameters=True)

        # Logmel feature extractor
        self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
                                                 n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref,
                                                 amin=amin, top_db=top_db, freeze_parameters=True)

        self.bn0 = nn.BatchNorm2d(64)

        self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
        self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
        self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
        self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
        self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
        self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)

        # out_emb is 2048 for best Cnn14
        self.fc1 = nn.Linear(2048, out_emb, bias=True)
        self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)

    def forward(self, input, mixup_lambda=None):
        """
        Input: (batch_size, data_length)
        """
        x = self.spectrogram_extractor(input)  # (batch_size, 1, time_steps, freq_bins)
        x = self.logmel_extractor(x)  # (batch_size, 1, time_steps, mel_bins)

        x = x.transpose(1, 3)
        x = self.bn0(x)
        x = x.transpose(1, 3)

        x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
        x = F.dropout(x, p=0.2, training=self.training)
        x = torch.mean(x, dim=3)

        (x1, _) = torch.max(x, dim=2)
        x2 = torch.mean(x, dim=2)
        x = x1 + x2
        x = F.dropout(x, p=0.5, training=self.training)
        x = F.relu_(self.fc1(x))
        embedding = F.dropout(x, p=0.5, training=self.training)
        clipwise_output = torch.sigmoid(self.fc_audioset(x))

        output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}

        return output_dict
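The constructor mirrors the PANNs Cnn14 recipe. A plausible instantiation with hyperparameters commonly used for 32 kHz AudioSet checkpoints (these values are assumptions, not read from this repo's config):

    import torch

    Cnn14Cls = get_audio_encoder("Cnn14")
    encoder = Cnn14Cls(sample_rate=32000, window_size=1024, hop_size=320,
                       mel_bins=64, fmin=50, fmax=14000, classes_num=527,
                       out_emb=2048)

    wav = torch.randn(2, 32000)           # two one-second mono clips
    out = encoder(wav)
    print(out['embedding'].shape)         # torch.Size([2, 2048])
    print(out['clipwise_output'].shape)   # torch.Size([2, 527])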
spaces/ARTeLab/DTM_Estimation_SRandD/models/modelNetC.py
DELETED
@@ -1,335 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor
from typing import Tuple

__all__ = [
    "ResidualDenseBlock", "ResidualResidualDenseBlock", "Generator",
    "DownSamplingNetwork"  # NOTE: DownSamplingNetwork is not defined in this module.
]


class ResidualDenseBlock(nn.Module):
    """Achieves densely connected convolutional layers.
    `Densely Connected Convolutional Networks <https://arxiv.org/pdf/1608.06993v5.pdf>`_ paper.

    Args:
        channels (int): The number of channels in the input image.
        growths (int): The number of channels that increase in each layer of convolution.
    """

    def __init__(self, channels: int, growths: int) -> None:
        super(ResidualDenseBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels + growths * 0, growths, (3, 3), (1, 1), (1, 1))
        self.conv2 = nn.Conv2d(channels + growths * 1, growths, (3, 3), (1, 1), (1, 1))
        self.conv3 = nn.Conv2d(channels + growths * 2, growths, (3, 3), (1, 1), (1, 1))
        self.conv4 = nn.Conv2d(channels + growths * 3, growths, (3, 3), (1, 1), (1, 1))
        self.conv5 = nn.Conv2d(channels + growths * 4, channels, (3, 3), (1, 1), (1, 1))

        self.leaky_relu = nn.LeakyReLU(0.2, True)
        self.identity = nn.Identity()

    def forward(self, x: Tensor) -> Tensor:
        identity = x

        out1 = self.leaky_relu(self.conv1(x))
        out2 = self.leaky_relu(self.conv2(torch.cat([x, out1], 1)))
        out3 = self.leaky_relu(self.conv3(torch.cat([x, out1, out2], 1)))
        out4 = self.leaky_relu(self.conv4(torch.cat([x, out1, out2, out3], 1)))
        out5 = self.identity(self.conv5(torch.cat([x, out1, out2, out3, out4], 1)))
        # Residual scaling stabilizes training of the dense block.
        out = out5 * 0.2 + identity

        return out


class MiniResidualDenseBlock(nn.Module):
    """Achieves densely connected convolutional layers.
    `Densely Connected Convolutional Networks <https://arxiv.org/pdf/1608.06993v5.pdf>`_ paper.

    Args:
        channels (int): The number of channels in the input image.
        growths (int): The number of channels that increase in each layer of convolution.
    """

    def __init__(self, channels: int, growths: int) -> None:
        super(MiniResidualDenseBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels + growths * 0, growths, (3, 3), (1, 1), (1, 1))
        self.conv2 = nn.Conv2d(channels + growths * 1, growths, (3, 3), (1, 1), (1, 1))
        self.conv3 = nn.Conv2d(channels + growths * 2, growths, (3, 3), (1, 1), (1, 1))
        self.conv4 = nn.Conv2d(channels + growths * 3, growths, (3, 3), (1, 1), (1, 1))
        self.conv5 = nn.Conv2d(channels + growths * 4, channels, (3, 3), (1, 1), (1, 1))

        self.leaky_relu = nn.LeakyReLU(0.2, True)

    def forward(self, x: Tensor) -> Tensor:
        identity = x

        out1 = self.leaky_relu(self.conv1(x))
        out2 = self.leaky_relu(self.conv2(torch.cat([x, out1], 1)))
        out3 = self.leaky_relu(self.conv3(torch.cat([x, out1, out2], 1)))
        out4 = self.leaky_relu(self.conv4(torch.cat([x, out1, out2, out3], 1)))
        out5 = self.leaky_relu(self.conv5(torch.cat([x, out1, out2, out3, out4], 1)))
        out = out5 * 0.2 + identity

        return out


class ResidualResidualDenseBlock(nn.Module):
    """Multi-layer residual dense convolution block.

    Args:
        channels (int): The number of channels in the input image.
        growths (int): The number of channels that increase in each layer of convolution.
    """

    def __init__(self, channels: int, growths: int) -> None:
        super(ResidualResidualDenseBlock, self).__init__()
        self.rdb1 = ResidualDenseBlock(channels, growths)
        self.rdb2 = ResidualDenseBlock(channels, growths)
        self.rdb3 = ResidualDenseBlock(channels, growths)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x

        out = self.rdb1(x)
        out = self.rdb2(out)
        out = self.rdb3(out)
        out = out * 0.2 + identity

        return out


class MiniResidualResidualDenseBlock(nn.Module):
    """Multi-layer residual dense convolution block.

    Args:
        channels (int): The number of channels in the input image.
        growths (int): The number of channels that increase in each layer of convolution.
    """

    def __init__(self, channels: int, growths: int) -> None:
        super(MiniResidualResidualDenseBlock, self).__init__()
        self.M_rdb1 = MiniResidualDenseBlock(channels, growths)
        self.M_rdb2 = MiniResidualDenseBlock(channels, growths)
        self.M_rdb3 = MiniResidualDenseBlock(channels, growths)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.M_rdb1(x)
        out = self.M_rdb2(out)
        out = self.M_rdb3(out)
        out = out * 0.2 + identity
        return out


class Generator(nn.Module):
    def __init__(self) -> None:
        super(Generator, self).__init__()
        # First convolutional layer: 1 input channel (grayscale).
        self.conv_block1 = nn.Conv2d(1, 64, (3, 3), (1, 1), (1, 1))
        # Feature extraction trunk of 16 RRDB blocks.
        trunk = []
        for _ in range(16):
            trunk += [ResidualResidualDenseBlock(64, 32)]
        self.trunk = nn.Sequential(*trunk)

        # After the feature extraction network, reconnect a layer of convolutional blocks.
        self.conv_block2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))

        # Upsampling convolutional layer.
        self.upsampling = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True)
        )

        # Reconnect a layer of convolution block after upsampling.
        self.conv_block3 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True)
        )

        self.conv_block4 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
        )

        # Residual refinement branch for the grayscale output.
        self.conv_block0_branch0 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 64, (3, 3), (1, 1), (1, 1)),
            nn.Tanh()
        )

        # Residual refinement branch for the DEM output.
        self.conv_block0_branch1 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 64, (3, 3), (1, 1), (1, 1)),
            nn.Tanh()
        )

        # Output head for the grayscale image.
        self.conv_block1_branch0 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1)),
            nn.Sigmoid()
        )

        # Output head for the DEM.
        self.conv_block1_branch1 = nn.Sequential(
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1)),
            nn.Sigmoid()
        )

    def _forward_impl(self, x: Tensor) -> Tuple[Tensor, Tensor]:
        out1 = self.conv_block1(x)
        out = self.trunk(out1)
        out2 = self.conv_block2(out)
        out = out1 + out2
        # Two bicubic x2 stages give a total 4x super-resolution factor.
        out = self.upsampling(F.interpolate(out, scale_factor=2, mode="bicubic"))
        out = self.upsampling(F.interpolate(out, scale_factor=2, mode="bicubic"))
        out = self.conv_block3(out)
        out = self.conv_block4(out)

        # Shared features are refined by two residual tanh branches ...
        out_dem = out + self.conv_block0_branch1(out)
        out_gray = out + self.conv_block0_branch0(out)

        # ... then mapped to single-channel sigmoid outputs.
        out_gray = self.conv_block1_branch0(out_gray)
        out_dem = self.conv_block1_branch1(out_dem)

        return out_gray, out_dem

    def forward(self, x: Tensor) -> Tuple[Tensor, Tensor]:
        return self._forward_impl(x)

    def _initialize_weights(self) -> None:
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
                m.weight.data *= 0.1
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                m.weight.data *= 0.1


class Discriminator(nn.Module):
    def __init__(self) -> None:
        super(Discriminator, self).__init__()
        self.features = nn.Sequential(
            # input size. (2) x 512 x 512
            nn.Conv2d(2, 32, (3, 3), (1, 1), (1, 1), bias=True),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 64, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1), bias=False),
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.2, True),
            # state size. (64) x 256 x 256
            nn.Conv2d(64, 128, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1), bias=False),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, True),
            # state size. (128) x 128 x 128
            nn.Conv2d(128, 256, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 256, (3, 3), (1, 1), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 256, (3, 3), (1, 1), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            # state size. (256) x 32 x 32
            nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),
            # state size. (256) x 8 x 8
        )

        self.classifier = nn.Sequential(
            nn.Linear(256 * 8 * 8, 100),
            nn.LeakyReLU(0.2, True),
            nn.Linear(100, 1),
        )

    def forward(self, x: Tensor) -> Tensor:
        out = self.features(x)
        out = torch.flatten(out, 1)
        out = self.classifier(out)
        return out
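As a sanity check of the shapes involved (the input size below is arbitrary): the generator upsamples 4x and returns a grayscale map and a DEM map, and the discriminator expects the two maps stacked as a 2-channel 512x512 input:

    import torch

    g = Generator()
    gray, dem = g(torch.randn(1, 1, 128, 128))
    print(gray.shape, dem.shape)  # both torch.Size([1, 1, 512, 512])

    d = Discriminator()
    score = d(torch.cat([gray, dem], dim=1))  # 2-channel 512x512 input
    print(score.shape)                        # torch.Size([1, 1])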
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/message/[messageId]/vote/+server.ts
DELETED
@@ -1,37 +0,0 @@
import { authCondition } from "$lib/server/auth";
import { collections } from "$lib/server/database";
import { error } from "@sveltejs/kit";
import { z } from "zod";

export async function POST({ params, request, locals }) {
	/*const { score } = z
		.object({
			score: z.number().int().min(-1).max(1),
		})
		.parse(await request.json());
	const conversationId = new ObjectId(params.id);
	const messageId = params.messageId;

	const document = await collections.conversations.updateOne(
		{
			_id: conversationId,
			...authCondition(locals),
			"messages.id": messageId,
		},
		{
			...(score !== 0
				? {
						$set: {
							"messages.$.score": score,
						},
				  }
				: { $unset: { "messages.$.score": "" } }),
		}
	);

	if (!document.matchedCount) {
		throw error(404, "Message not found");
	}*/

	return new Response();
}
spaces/AchyuthGamer/OpenGPT/g4f/active_providers.py
DELETED
@@ -1,124 +0,0 @@
-import uuid
-import g4f
-from g4f import ChatCompletion
-
-TEST_PROMPT = "Generate a sentence with 'ocean'"
-EXPECTED_RESPONSE_CONTAINS = "ocean"
-
-
-class Provider:
-    def __init__(self, name, models):
-        """
-        Initialize the provider with its name and models.
-        """
-        self.name = name
-        self.models = models if isinstance(models, list) else [models]
-
-    def __str__(self):
-        return self.name
-
-
-class ModelProviderManager:
-    def __init__(self):
-        """
-        Initialize the manager that manages the working (active) providers for each model.
-        """
-        self._working_model_providers = {}
-
-    def add_provider(self, model, provider_name):
-        """
-        Add a provider to the working provider list of the specified model.
-        """
-        if model not in self._working_model_providers:
-            self._working_model_providers[model] = []
-        self._working_model_providers[model].append(provider_name)
-
-    def get_working_providers(self):
-        """
-        Return the currently active providers for each model.
-        """
-        return self._working_model_providers
-
-
-def _fetch_providers_having_models():
-    """
-    Get providers that have models from g4f.Providers.
-    """
-    model_providers = []
-
-    for provider_name in dir(g4f.Provider):
-        provider = getattr(g4f.Provider, provider_name)
-
-        if _is_provider_applicable(provider):
-            model_providers.append(Provider(provider_name, provider.model))
-
-    return model_providers
-
-
-def _is_provider_applicable(provider):
-    """
-    Check if the provider has a model and doesn't require authentication.
-    """
-    return (hasattr(provider, 'model') and
-            hasattr(provider, '_create_completion') and
-            hasattr(provider, 'needs_auth') and
-            not provider.needs_auth)
-
-
-def _generate_test_messages():
-    """
-    Generate messages for testing.
-    """
-    return [{"role": "system", "content": "You are a trained AI assistant."},
-            {"role": "user", "content": TEST_PROMPT}]
-
-
-def _manage_chat_completion(manager, model_providers, test_messages):
-    """
-    Generate chat completion for each provider's models and handle positive and negative results.
-    """
-    for provider in model_providers:
-        for model in provider.models:
-            try:
-                response = _generate_chat_response(
-                    provider.name, model, test_messages)
-                if EXPECTED_RESPONSE_CONTAINS in response.lower():
-                    _print_success_response(provider, model)
-                    manager.add_provider(model, provider.name)
-                else:
-                    raise Exception(f"Unexpected response: {response}")
-            except Exception as error:
-                _print_error_response(provider, model, error)
-
-
-def _generate_chat_response(provider_name, model, test_messages):
-    """
-    Generate a chat response given a provider name, a model, and test messages.
-    """
-    return ChatCompletion.create(
-        model=model,
-        messages=test_messages,
-        chatId=str(uuid.uuid4()),
-        provider=getattr(g4f.Provider, provider_name)
-    )
-
-
-def _print_success_response(provider, model):
-    print(f"\u2705 [{provider}] - [{model}]: Success")
-
-
-def _print_error_response(provider, model, error):
-    print(f"\u26D4 [{provider}] - [{model}]: Error - {str(error)}")
-
-
-def get_active_model_providers():
-    """
-    Get providers that are currently working (active).
-    """
-    model_providers = _fetch_providers_having_models()
-    test_messages = _generate_test_messages()
-    manager = ModelProviderManager()
-
-    _manage_chat_completion(manager, model_providers, test_messages)
-
-    return manager.get_working_providers()
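
The deleted module has a single public entry point, get_active_model_providers(): it collects every g4f provider that exposes a model and needs no authentication, sends each one the ocean test prompt, and returns a dict mapping model names to the provider names that answered correctly. A minimal usage sketch, assuming the file is restored at g4f/active_providers.py and the g4f package is importable:

# Assumes the deleted module is restored on the import path as g4f.active_providers.
from g4f.active_providers import get_active_model_providers

working = get_active_model_providers()  # probes each provider; prints a success/error line per model
for model, providers in working.items():
    print(f"{model}: {', '.join(providers)}")
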
spaces/Aditya9790/yolo7-object-tracking/utils/loss.py
DELETED
@@ -1,1697 +0,0 @@
|
|
1 |
-
# Loss functions
|
2 |
-
|
3 |
-
import torch
|
4 |
-
import torch.nn as nn
|
5 |
-
import torch.nn.functional as F
|
6 |
-
|
7 |
-
from utils.general import bbox_iou, bbox_alpha_iou, box_iou, box_giou, box_diou, box_ciou, xywh2xyxy
|
8 |
-
from utils.torch_utils import is_parallel
|
9 |
-
|
10 |
-
|
11 |
-
def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
|
12 |
-
# return positive, negative label smoothing BCE targets
|
13 |
-
return 1.0 - 0.5 * eps, 0.5 * eps
|
14 |
-
|
15 |
-
|
16 |
-
class BCEBlurWithLogitsLoss(nn.Module):
|
17 |
-
# BCEwithLogitLoss() with reduced missing label effects.
|
18 |
-
def __init__(self, alpha=0.05):
|
19 |
-
super(BCEBlurWithLogitsLoss, self).__init__()
|
20 |
-
self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
|
21 |
-
self.alpha = alpha
|
22 |
-
|
23 |
-
def forward(self, pred, true):
|
24 |
-
loss = self.loss_fcn(pred, true)
|
25 |
-
pred = torch.sigmoid(pred) # prob from logits
|
26 |
-
dx = pred - true # reduce only missing label effects
|
27 |
-
# dx = (pred - true).abs() # reduce missing label and false label effects
|
28 |
-
alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
|
29 |
-
loss *= alpha_factor
|
30 |
-
return loss.mean()
|
31 |
-
|
32 |
-
|
33 |
-
class SigmoidBin(nn.Module):
|
34 |
-
stride = None # strides computed during build
|
35 |
-
export = False # onnx export
|
36 |
-
|
37 |
-
def __init__(self, bin_count=10, min=0.0, max=1.0, reg_scale = 2.0, use_loss_regression=True, use_fw_regression=True, BCE_weight=1.0, smooth_eps=0.0):
|
38 |
-
super(SigmoidBin, self).__init__()
|
39 |
-
|
40 |
-
self.bin_count = bin_count
|
41 |
-
self.length = bin_count + 1
|
42 |
-
self.min = min
|
43 |
-
self.max = max
|
44 |
-
self.scale = float(max - min)
|
45 |
-
self.shift = self.scale / 2.0
|
46 |
-
|
47 |
-
self.use_loss_regression = use_loss_regression
|
48 |
-
self.use_fw_regression = use_fw_regression
|
49 |
-
self.reg_scale = reg_scale
|
50 |
-
self.BCE_weight = BCE_weight
|
51 |
-
|
52 |
-
start = min + (self.scale/2.0) / self.bin_count
|
53 |
-
end = max - (self.scale/2.0) / self.bin_count
|
54 |
-
step = self.scale / self.bin_count
|
55 |
-
self.step = step
|
56 |
-
#print(f" start = {start}, end = {end}, step = {step} ")
|
57 |
-
|
58 |
-
bins = torch.range(start, end + 0.0001, step).float()
|
59 |
-
self.register_buffer('bins', bins)
|
60 |
-
|
61 |
-
|
62 |
-
self.cp = 1.0 - 0.5 * smooth_eps
|
63 |
-
self.cn = 0.5 * smooth_eps
|
64 |
-
|
65 |
-
self.BCEbins = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([BCE_weight]))
|
66 |
-
self.MSELoss = nn.MSELoss()
|
67 |
-
|
68 |
-
def get_length(self):
|
69 |
-
return self.length
|
70 |
-
|
71 |
-
def forward(self, pred):
|
72 |
-
assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
|
73 |
-
|
74 |
-
pred_reg = (pred[..., 0] * self.reg_scale - self.reg_scale/2.0) * self.step
|
75 |
-
pred_bin = pred[..., 1:(1+self.bin_count)]
|
76 |
-
|
77 |
-
_, bin_idx = torch.max(pred_bin, dim=-1)
|
78 |
-
bin_bias = self.bins[bin_idx]
|
79 |
-
|
80 |
-
if self.use_fw_regression:
|
81 |
-
result = pred_reg + bin_bias
|
82 |
-
else:
|
83 |
-
result = bin_bias
|
84 |
-
result = result.clamp(min=self.min, max=self.max)
|
85 |
-
|
86 |
-
return result
|
87 |
-
|
88 |
-
|
89 |
-
def training_loss(self, pred, target):
|
90 |
-
assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
|
91 |
-
assert pred.shape[0] == target.shape[0], 'pred.shape=%d is not equal to the target.shape=%d' % (pred.shape[0], target.shape[0])
|
92 |
-
device = pred.device
|
93 |
-
|
94 |
-
pred_reg = (pred[..., 0].sigmoid() * self.reg_scale - self.reg_scale/2.0) * self.step
|
95 |
-
pred_bin = pred[..., 1:(1+self.bin_count)]
|
96 |
-
|
97 |
-
diff_bin_target = torch.abs(target[..., None] - self.bins)
|
98 |
-
_, bin_idx = torch.min(diff_bin_target, dim=-1)
|
99 |
-
|
100 |
-
bin_bias = self.bins[bin_idx]
|
101 |
-
bin_bias.requires_grad = False
|
102 |
-
result = pred_reg + bin_bias
|
103 |
-
|
104 |
-
target_bins = torch.full_like(pred_bin, self.cn, device=device) # targets
|
105 |
-
n = pred.shape[0]
|
106 |
-
target_bins[range(n), bin_idx] = self.cp
|
107 |
-
|
108 |
-
loss_bin = self.BCEbins(pred_bin, target_bins) # BCE
|
109 |
-
|
110 |
-
if self.use_loss_regression:
|
111 |
-
loss_regression = self.MSELoss(result, target) # MSE
|
112 |
-
loss = loss_bin + loss_regression
|
113 |
-
else:
|
114 |
-
loss = loss_bin
|
115 |
-
|
116 |
-
out_result = result.clamp(min=self.min, max=self.max)
|
117 |
-
|
118 |
-
return loss, out_result
|
119 |
-
|
120 |
-
|
121 |
-
class FocalLoss(nn.Module):
|
122 |
-
# Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
|
123 |
-
def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
|
124 |
-
super(FocalLoss, self).__init__()
|
125 |
-
self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
|
126 |
-
self.gamma = gamma
|
127 |
-
self.alpha = alpha
|
128 |
-
self.reduction = loss_fcn.reduction
|
129 |
-
self.loss_fcn.reduction = 'none' # required to apply FL to each element
|
130 |
-
|
131 |
-
def forward(self, pred, true):
|
132 |
-
loss = self.loss_fcn(pred, true)
|
133 |
-
# p_t = torch.exp(-loss)
|
134 |
-
# loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
|
135 |
-
|
136 |
-
# TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
|
137 |
-
pred_prob = torch.sigmoid(pred) # prob from logits
|
138 |
-
p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
|
139 |
-
alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
|
140 |
-
modulating_factor = (1.0 - p_t) ** self.gamma
|
141 |
-
loss *= alpha_factor * modulating_factor
|
142 |
-
|
143 |
-
if self.reduction == 'mean':
|
144 |
-
return loss.mean()
|
145 |
-
elif self.reduction == 'sum':
|
146 |
-
return loss.sum()
|
147 |
-
else: # 'none'
|
148 |
-
return loss
|
149 |
-
|
150 |
-
|
151 |
-
class QFocalLoss(nn.Module):
|
152 |
-
# Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
|
153 |
-
def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
|
154 |
-
super(QFocalLoss, self).__init__()
|
155 |
-
self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
|
156 |
-
self.gamma = gamma
|
157 |
-
self.alpha = alpha
|
158 |
-
self.reduction = loss_fcn.reduction
|
159 |
-
self.loss_fcn.reduction = 'none' # required to apply FL to each element
|
160 |
-
|
161 |
-
def forward(self, pred, true):
|
162 |
-
loss = self.loss_fcn(pred, true)
|
163 |
-
|
164 |
-
pred_prob = torch.sigmoid(pred) # prob from logits
|
165 |
-
alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
|
166 |
-
modulating_factor = torch.abs(true - pred_prob) ** self.gamma
|
167 |
-
loss *= alpha_factor * modulating_factor
|
168 |
-
|
169 |
-
if self.reduction == 'mean':
|
170 |
-
return loss.mean()
|
171 |
-
elif self.reduction == 'sum':
|
172 |
-
return loss.sum()
|
173 |
-
else: # 'none'
|
174 |
-
return loss
|
175 |
-
|
176 |
-
class RankSort(torch.autograd.Function):
|
177 |
-
@staticmethod
|
178 |
-
def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10):
|
179 |
-
|
180 |
-
classification_grads=torch.zeros(logits.shape).cuda()
|
181 |
-
|
182 |
-
#Filter fg logits
|
183 |
-
fg_labels = (targets > 0.)
|
184 |
-
fg_logits = logits[fg_labels]
|
185 |
-
fg_targets = targets[fg_labels]
|
186 |
-
fg_num = len(fg_logits)
|
187 |
-
|
188 |
-
#Do not use bg with scores less than minimum fg logit
|
189 |
-
#since changing its score does not have an effect on precision
|
190 |
-
threshold_logit = torch.min(fg_logits)-delta_RS
|
191 |
-
relevant_bg_labels=((targets==0) & (logits>=threshold_logit))
|
192 |
-
|
193 |
-
relevant_bg_logits = logits[relevant_bg_labels]
|
194 |
-
relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
|
195 |
-
sorting_error=torch.zeros(fg_num).cuda()
|
196 |
-
ranking_error=torch.zeros(fg_num).cuda()
|
197 |
-
fg_grad=torch.zeros(fg_num).cuda()
|
198 |
-
|
199 |
-
#sort the fg logits
|
200 |
-
order=torch.argsort(fg_logits)
|
201 |
-
#Loops over each positive following the order
|
202 |
-
for ii in order:
|
203 |
-
# Difference Transforms (x_ij)
|
204 |
-
fg_relations=fg_logits-fg_logits[ii]
|
205 |
-
bg_relations=relevant_bg_logits-fg_logits[ii]
|
206 |
-
|
207 |
-
if delta_RS > 0:
|
208 |
-
fg_relations=torch.clamp(fg_relations/(2*delta_RS)+0.5,min=0,max=1)
|
209 |
-
bg_relations=torch.clamp(bg_relations/(2*delta_RS)+0.5,min=0,max=1)
|
210 |
-
else:
|
211 |
-
fg_relations = (fg_relations >= 0).float()
|
212 |
-
bg_relations = (bg_relations >= 0).float()
|
213 |
-
|
214 |
-
# Rank of ii among pos and false positive number (bg with larger scores)
|
215 |
-
rank_pos=torch.sum(fg_relations)
|
216 |
-
FP_num=torch.sum(bg_relations)
|
217 |
-
|
218 |
-
# Rank of ii among all examples
|
219 |
-
rank=rank_pos+FP_num
|
220 |
-
|
221 |
-
# Ranking error of example ii. target_ranking_error is always 0. (Eq. 7)
|
222 |
-
ranking_error[ii]=FP_num/rank
|
223 |
-
|
224 |
-
# Current sorting error of example ii. (Eq. 7)
|
225 |
-
current_sorting_error = torch.sum(fg_relations*(1-fg_targets))/rank_pos
|
226 |
-
|
227 |
-
#Find examples in the target sorted order for example ii
|
228 |
-
iou_relations = (fg_targets >= fg_targets[ii])
|
229 |
-
target_sorted_order = iou_relations * fg_relations
|
230 |
-
|
231 |
-
#The rank of ii among positives in sorted order
|
232 |
-
rank_pos_target = torch.sum(target_sorted_order)
|
233 |
-
|
234 |
-
#Compute target sorting error. (Eq. 8)
|
235 |
-
#Since target ranking error is 0, this is also total target error
|
236 |
-
target_sorting_error= torch.sum(target_sorted_order*(1-fg_targets))/rank_pos_target
|
237 |
-
|
238 |
-
#Compute sorting error on example ii
|
239 |
-
sorting_error[ii] = current_sorting_error - target_sorting_error
|
240 |
-
|
241 |
-
#Identity Update for Ranking Error
|
242 |
-
if FP_num > eps:
|
243 |
-
#For ii the update is the ranking error
|
244 |
-
fg_grad[ii] -= ranking_error[ii]
|
245 |
-
#For negatives, distribute error via ranking pmf (i.e. bg_relations/FP_num)
|
246 |
-
relevant_bg_grad += (bg_relations*(ranking_error[ii]/FP_num))
|
247 |
-
|
248 |
-
#Find the positives that are misranked (the cause of the error)
|
249 |
-
#These are the ones with smaller IoU but larger logits
|
250 |
-
missorted_examples = (~ iou_relations) * fg_relations
|
251 |
-
|
252 |
-
#Denominotor of sorting pmf
|
253 |
-
sorting_pmf_denom = torch.sum(missorted_examples)
|
254 |
-
|
255 |
-
#Identity Update for Sorting Error
|
256 |
-
if sorting_pmf_denom > eps:
|
257 |
-
#For ii the update is the sorting error
|
258 |
-
fg_grad[ii] -= sorting_error[ii]
|
259 |
-
#For positives, distribute error via sorting pmf (i.e. missorted_examples/sorting_pmf_denom)
|
260 |
-
fg_grad += (missorted_examples*(sorting_error[ii]/sorting_pmf_denom))
|
261 |
-
|
262 |
-
#Normalize gradients by number of positives
|
263 |
-
classification_grads[fg_labels]= (fg_grad/fg_num)
|
264 |
-
classification_grads[relevant_bg_labels]= (relevant_bg_grad/fg_num)
|
265 |
-
|
266 |
-
ctx.save_for_backward(classification_grads)
|
267 |
-
|
268 |
-
return ranking_error.mean(), sorting_error.mean()
|
269 |
-
|
270 |
-
@staticmethod
|
271 |
-
def backward(ctx, out_grad1, out_grad2):
|
272 |
-
g1, =ctx.saved_tensors
|
273 |
-
return g1*out_grad1, None, None, None
|
274 |
-
|
275 |
-
class aLRPLoss(torch.autograd.Function):
|
276 |
-
@staticmethod
|
277 |
-
def forward(ctx, logits, targets, regression_losses, delta=1., eps=1e-5):
|
278 |
-
classification_grads=torch.zeros(logits.shape).cuda()
|
279 |
-
|
280 |
-
#Filter fg logits
|
281 |
-
fg_labels = (targets == 1)
|
282 |
-
fg_logits = logits[fg_labels]
|
283 |
-
fg_num = len(fg_logits)
|
284 |
-
|
285 |
-
#Do not use bg with scores less than minimum fg logit
|
286 |
-
#since changing its score does not have an effect on precision
|
287 |
-
threshold_logit = torch.min(fg_logits)-delta
|
288 |
-
|
289 |
-
#Get valid bg logits
|
290 |
-
relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
|
291 |
-
relevant_bg_logits=logits[relevant_bg_labels]
|
292 |
-
relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
|
293 |
-
rank=torch.zeros(fg_num).cuda()
|
294 |
-
prec=torch.zeros(fg_num).cuda()
|
295 |
-
fg_grad=torch.zeros(fg_num).cuda()
|
296 |
-
|
297 |
-
max_prec=0
|
298 |
-
#sort the fg logits
|
299 |
-
order=torch.argsort(fg_logits)
|
300 |
-
#Loops over each positive following the order
|
301 |
-
for ii in order:
|
302 |
-
#x_ij s as score differences with fgs
|
303 |
-
fg_relations=fg_logits-fg_logits[ii]
|
304 |
-
#Apply piecewise linear function and determine relations with fgs
|
305 |
-
fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
|
306 |
-
#Discard i=j in the summation in rank_pos
|
307 |
-
fg_relations[ii]=0
|
308 |
-
|
309 |
-
#x_ij s as score differences with bgs
|
310 |
-
bg_relations=relevant_bg_logits-fg_logits[ii]
|
311 |
-
#Apply piecewise linear function and determine relations with bgs
|
312 |
-
bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
|
313 |
-
|
314 |
-
#Compute the rank of the example within fgs and number of bgs with larger scores
|
315 |
-
rank_pos=1+torch.sum(fg_relations)
|
316 |
-
FP_num=torch.sum(bg_relations)
|
317 |
-
#Store the total since it is normalizer also for aLRP Regression error
|
318 |
-
rank[ii]=rank_pos+FP_num
|
319 |
-
|
320 |
-
#Compute precision for this example to compute classification loss
|
321 |
-
prec[ii]=rank_pos/rank[ii]
|
322 |
-
#For stability, set eps to a infinitesmall value (e.g. 1e-6), then compute grads
|
323 |
-
if FP_num > eps:
|
324 |
-
fg_grad[ii] = -(torch.sum(fg_relations*regression_losses)+FP_num)/rank[ii]
|
325 |
-
relevant_bg_grad += (bg_relations*(-fg_grad[ii]/FP_num))
|
326 |
-
|
327 |
-
#aLRP with grad formulation fg gradient
|
328 |
-
classification_grads[fg_labels]= fg_grad
|
329 |
-
#aLRP with grad formulation bg gradient
|
330 |
-
classification_grads[relevant_bg_labels]= relevant_bg_grad
|
331 |
-
|
332 |
-
classification_grads /= (fg_num)
|
333 |
-
|
334 |
-
cls_loss=1-prec.mean()
|
335 |
-
ctx.save_for_backward(classification_grads)
|
336 |
-
|
337 |
-
return cls_loss, rank, order
|
338 |
-
|
339 |
-
@staticmethod
|
340 |
-
def backward(ctx, out_grad1, out_grad2, out_grad3):
|
341 |
-
g1, =ctx.saved_tensors
|
342 |
-
return g1*out_grad1, None, None, None, None
|
343 |
-
|
344 |
-
|
345 |
-
class APLoss(torch.autograd.Function):
|
346 |
-
@staticmethod
|
347 |
-
def forward(ctx, logits, targets, delta=1.):
|
348 |
-
classification_grads=torch.zeros(logits.shape).cuda()
|
349 |
-
|
350 |
-
#Filter fg logits
|
351 |
-
fg_labels = (targets == 1)
|
352 |
-
fg_logits = logits[fg_labels]
|
353 |
-
fg_num = len(fg_logits)
|
354 |
-
|
355 |
-
#Do not use bg with scores less than minimum fg logit
|
356 |
-
#since changing its score does not have an effect on precision
|
357 |
-
threshold_logit = torch.min(fg_logits)-delta
|
358 |
-
|
359 |
-
#Get valid bg logits
|
360 |
-
relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
|
361 |
-
relevant_bg_logits=logits[relevant_bg_labels]
|
362 |
-
relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
|
363 |
-
rank=torch.zeros(fg_num).cuda()
|
364 |
-
prec=torch.zeros(fg_num).cuda()
|
365 |
-
fg_grad=torch.zeros(fg_num).cuda()
|
366 |
-
|
367 |
-
max_prec=0
|
368 |
-
#sort the fg logits
|
369 |
-
order=torch.argsort(fg_logits)
|
370 |
-
#Loops over each positive following the order
|
371 |
-
for ii in order:
|
372 |
-
#x_ij s as score differences with fgs
|
373 |
-
fg_relations=fg_logits-fg_logits[ii]
|
374 |
-
#Apply piecewise linear function and determine relations with fgs
|
375 |
-
fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
|
376 |
-
#Discard i=j in the summation in rank_pos
|
377 |
-
fg_relations[ii]=0
|
378 |
-
|
379 |
-
#x_ij s as score differences with bgs
|
380 |
-
bg_relations=relevant_bg_logits-fg_logits[ii]
|
381 |
-
#Apply piecewise linear function and determine relations with bgs
|
382 |
-
bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
|
383 |
-
|
384 |
-
#Compute the rank of the example within fgs and number of bgs with larger scores
|
385 |
-
rank_pos=1+torch.sum(fg_relations)
|
386 |
-
FP_num=torch.sum(bg_relations)
|
387 |
-
#Store the total since it is normalizer also for aLRP Regression error
|
388 |
-
rank[ii]=rank_pos+FP_num
|
389 |
-
|
390 |
-
#Compute precision for this example
|
391 |
-
current_prec=rank_pos/rank[ii]
|
392 |
-
|
393 |
-
#Compute interpolated AP and store gradients for relevant bg examples
|
394 |
-
if (max_prec<=current_prec):
|
395 |
-
max_prec=current_prec
|
396 |
-
relevant_bg_grad += (bg_relations/rank[ii])
|
397 |
-
else:
|
398 |
-
relevant_bg_grad += (bg_relations/rank[ii])*(((1-max_prec)/(1-current_prec)))
|
399 |
-
|
400 |
-
#Store fg gradients
|
401 |
-
fg_grad[ii]=-(1-max_prec)
|
402 |
-
prec[ii]=max_prec
|
403 |
-
|
404 |
-
#aLRP with grad formulation fg gradient
|
405 |
-
classification_grads[fg_labels]= fg_grad
|
406 |
-
#aLRP with grad formulation bg gradient
|
407 |
-
classification_grads[relevant_bg_labels]= relevant_bg_grad
|
408 |
-
|
409 |
-
classification_grads /= fg_num
|
410 |
-
|
411 |
-
cls_loss=1-prec.mean()
|
412 |
-
ctx.save_for_backward(classification_grads)
|
413 |
-
|
414 |
-
return cls_loss
|
415 |
-
|
416 |
-
@staticmethod
|
417 |
-
def backward(ctx, out_grad1):
|
418 |
-
g1, =ctx.saved_tensors
|
419 |
-
return g1*out_grad1, None, None
|
420 |
-
|
421 |
-
|
422 |
-
class ComputeLoss:
|
423 |
-
# Compute losses
|
424 |
-
def __init__(self, model, autobalance=False):
|
425 |
-
super(ComputeLoss, self).__init__()
|
426 |
-
device = next(model.parameters()).device # get model device
|
427 |
-
h = model.hyp # hyperparameters
|
428 |
-
|
429 |
-
# Define criteria
|
430 |
-
BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
|
431 |
-
BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
|
432 |
-
|
433 |
-
# Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
|
434 |
-
self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
|
435 |
-
|
436 |
-
# Focal loss
|
437 |
-
g = h['fl_gamma'] # focal loss gamma
|
438 |
-
if g > 0:
|
439 |
-
BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
|
440 |
-
|
441 |
-
det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
|
442 |
-
self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
|
443 |
-
#self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.1, .05]) # P3-P7
|
444 |
-
#self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.5, 0.4, .1]) # P3-P7
|
445 |
-
self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
|
446 |
-
self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
|
447 |
-
for k in 'na', 'nc', 'nl', 'anchors':
|
448 |
-
setattr(self, k, getattr(det, k))
|
449 |
-
|
450 |
-
def __call__(self, p, targets): # predictions, targets, model
|
451 |
-
device = targets.device
|
452 |
-
lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
|
453 |
-
tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
|
454 |
-
|
455 |
-
# Losses
|
456 |
-
for i, pi in enumerate(p): # layer index, layer predictions
|
457 |
-
b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
|
458 |
-
tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
|
459 |
-
|
460 |
-
n = b.shape[0] # number of targets
|
461 |
-
if n:
|
462 |
-
ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
|
463 |
-
|
464 |
-
# Regression
|
465 |
-
pxy = ps[:, :2].sigmoid() * 2. - 0.5
|
466 |
-
pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
|
467 |
-
pbox = torch.cat((pxy, pwh), 1) # predicted box
|
468 |
-
iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target)
|
469 |
-
lbox += (1.0 - iou).mean() # iou loss
|
470 |
-
|
471 |
-
# Objectness
|
472 |
-
tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
|
473 |
-
|
474 |
-
# Classification
|
475 |
-
if self.nc > 1: # cls loss (only if multiple classes)
|
476 |
-
t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
|
477 |
-
t[range(n), tcls[i]] = self.cp
|
478 |
-
#t[t==self.cp] = iou.detach().clamp(0).type(t.dtype)
|
479 |
-
lcls += self.BCEcls(ps[:, 5:], t) # BCE
|
480 |
-
|
481 |
-
# Append targets to text file
|
482 |
-
# with open('targets.txt', 'a') as file:
|
483 |
-
# [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
|
484 |
-
|
485 |
-
obji = self.BCEobj(pi[..., 4], tobj)
|
486 |
-
lobj += obji * self.balance[i] # obj loss
|
487 |
-
if self.autobalance:
|
488 |
-
self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
|
489 |
-
|
490 |
-
if self.autobalance:
|
491 |
-
self.balance = [x / self.balance[self.ssi] for x in self.balance]
|
492 |
-
lbox *= self.hyp['box']
|
493 |
-
lobj *= self.hyp['obj']
|
494 |
-
lcls *= self.hyp['cls']
|
495 |
-
bs = tobj.shape[0] # batch size
|
496 |
-
|
497 |
-
loss = lbox + lobj + lcls
|
498 |
-
return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
|
499 |
-
|
500 |
-
def build_targets(self, p, targets):
|
501 |
-
# Build targets for compute_loss(), input targets(image,class,x,y,w,h)
|
502 |
-
na, nt = self.na, targets.shape[0] # number of anchors, targets
|
503 |
-
tcls, tbox, indices, anch = [], [], [], []
|
504 |
-
gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
|
505 |
-
ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
|
506 |
-
targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
|
507 |
-
|
508 |
-
g = 0.5 # bias
|
509 |
-
off = torch.tensor([[0, 0],
|
510 |
-
[1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
|
511 |
-
# [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
|
512 |
-
], device=targets.device).float() * g # offsets
|
513 |
-
|
514 |
-
for i in range(self.nl):
|
515 |
-
anchors = self.anchors[i]
|
516 |
-
gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
|
517 |
-
|
518 |
-
# Match targets to anchors
|
519 |
-
t = targets * gain
|
520 |
-
if nt:
|
521 |
-
# Matches
|
522 |
-
r = t[:, :, 4:6] / anchors[:, None] # wh ratio
|
523 |
-
j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
|
524 |
-
# j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
|
525 |
-
t = t[j] # filter
|
526 |
-
|
527 |
-
# Offsets
|
528 |
-
gxy = t[:, 2:4] # grid xy
|
529 |
-
gxi = gain[[2, 3]] - gxy # inverse
|
530 |
-
j, k = ((gxy % 1. < g) & (gxy > 1.)).T
|
531 |
-
l, m = ((gxi % 1. < g) & (gxi > 1.)).T
|
532 |
-
j = torch.stack((torch.ones_like(j), j, k, l, m))
|
533 |
-
t = t.repeat((5, 1, 1))[j]
|
534 |
-
offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
|
535 |
-
else:
|
536 |
-
t = targets[0]
|
537 |
-
offsets = 0
|
538 |
-
|
539 |
-
# Define
|
540 |
-
b, c = t[:, :2].long().T # image, class
|
541 |
-
gxy = t[:, 2:4] # grid xy
|
542 |
-
gwh = t[:, 4:6] # grid wh
|
543 |
-
gij = (gxy - offsets).long()
|
544 |
-
gi, gj = gij.T # grid xy indices
|
545 |
-
|
546 |
-
# Append
|
547 |
-
a = t[:, 6].long() # anchor indices
|
548 |
-
indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
|
549 |
-
tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
|
550 |
-
anch.append(anchors[a]) # anchors
|
551 |
-
tcls.append(c) # class
|
552 |
-
|
553 |
-
return tcls, tbox, indices, anch
|
554 |
-
|
555 |
-
|
556 |
-
class ComputeLossOTA:
|
557 |
-
# Compute losses
|
558 |
-
def __init__(self, model, autobalance=False):
|
559 |
-
super(ComputeLossOTA, self).__init__()
|
560 |
-
device = next(model.parameters()).device # get model device
|
561 |
-
h = model.hyp # hyperparameters
|
562 |
-
|
563 |
-
# Define criteria
|
564 |
-
BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
|
565 |
-
BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
|
566 |
-
|
567 |
-
# Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
|
568 |
-
self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
|
569 |
-
|
570 |
-
# Focal loss
|
571 |
-
g = h['fl_gamma'] # focal loss gamma
|
572 |
-
if g > 0:
|
573 |
-
BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
|
574 |
-
|
575 |
-
det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
|
576 |
-
self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
|
577 |
-
self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
|
578 |
-
self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
|
579 |
-
for k in 'na', 'nc', 'nl', 'anchors', 'stride':
|
580 |
-
setattr(self, k, getattr(det, k))
|
581 |
-
|
582 |
-
def __call__(self, p, targets, imgs): # predictions, targets, model
|
583 |
-
device = targets.device
|
584 |
-
lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
|
585 |
-
bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
|
586 |
-
pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
|
587 |
-
|
588 |
-
|
589 |
-
# Losses
|
590 |
-
for i, pi in enumerate(p): # layer index, layer predictions
|
591 |
-
b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
|
592 |
-
tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
|
593 |
-
|
594 |
-
n = b.shape[0] # number of targets
|
595 |
-
if n:
|
596 |
-
ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
|
597 |
-
|
598 |
-
# Regression
|
599 |
-
grid = torch.stack([gi, gj], dim=1)
|
600 |
-
pxy = ps[:, :2].sigmoid() * 2. - 0.5
|
601 |
-
#pxy = ps[:, :2].sigmoid() * 3. - 1.
|
602 |
-
pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
|
603 |
-
pbox = torch.cat((pxy, pwh), 1) # predicted box
|
604 |
-
selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
|
605 |
-
selected_tbox[:, :2] -= grid
|
606 |
-
iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
|
607 |
-
lbox += (1.0 - iou).mean() # iou loss
|
608 |
-
|
609 |
-
# Objectness
|
610 |
-
tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
|
611 |
-
|
612 |
-
# Classification
|
613 |
-
selected_tcls = targets[i][:, 1].long()
|
614 |
-
if self.nc > 1: # cls loss (only if multiple classes)
|
615 |
-
t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
|
616 |
-
t[range(n), selected_tcls] = self.cp
|
617 |
-
lcls += self.BCEcls(ps[:, 5:], t) # BCE
|
618 |
-
|
619 |
-
# Append targets to text file
|
620 |
-
# with open('targets.txt', 'a') as file:
|
621 |
-
# [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
|
622 |
-
|
623 |
-
obji = self.BCEobj(pi[..., 4], tobj)
|
624 |
-
lobj += obji * self.balance[i] # obj loss
|
625 |
-
if self.autobalance:
|
626 |
-
self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
|
627 |
-
|
628 |
-
if self.autobalance:
|
629 |
-
self.balance = [x / self.balance[self.ssi] for x in self.balance]
|
630 |
-
lbox *= self.hyp['box']
|
631 |
-
lobj *= self.hyp['obj']
|
632 |
-
lcls *= self.hyp['cls']
|
633 |
-
bs = tobj.shape[0] # batch size
|
634 |
-
|
635 |
-
loss = lbox + lobj + lcls
|
636 |
-
return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
|
637 |
-
|
638 |
-
def build_targets(self, p, targets, imgs):
|
639 |
-
|
640 |
-
#indices, anch = self.find_positive(p, targets)
|
641 |
-
indices, anch = self.find_3_positive(p, targets)
|
642 |
-
#indices, anch = self.find_4_positive(p, targets)
|
643 |
-
#indices, anch = self.find_5_positive(p, targets)
|
644 |
-
#indices, anch = self.find_9_positive(p, targets)
|
645 |
-
device = torch.device(targets.device)
|
646 |
-
matching_bs = [[] for pp in p]
|
647 |
-
matching_as = [[] for pp in p]
|
648 |
-
matching_gjs = [[] for pp in p]
|
649 |
-
matching_gis = [[] for pp in p]
|
650 |
-
matching_targets = [[] for pp in p]
|
651 |
-
matching_anchs = [[] for pp in p]
|
652 |
-
|
653 |
-
nl = len(p)
|
654 |
-
|
655 |
-
for batch_idx in range(p[0].shape[0]):
|
656 |
-
|
657 |
-
b_idx = targets[:, 0]==batch_idx
|
658 |
-
this_target = targets[b_idx]
|
659 |
-
if this_target.shape[0] == 0:
|
660 |
-
continue
|
661 |
-
|
662 |
-
txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
|
663 |
-
txyxy = xywh2xyxy(txywh)
|
664 |
-
|
665 |
-
pxyxys = []
|
666 |
-
p_cls = []
|
667 |
-
p_obj = []
|
668 |
-
from_which_layer = []
|
669 |
-
all_b = []
|
670 |
-
all_a = []
|
671 |
-
all_gj = []
|
672 |
-
all_gi = []
|
673 |
-
all_anch = []
|
674 |
-
|
675 |
-
for i, pi in enumerate(p):
|
676 |
-
|
677 |
-
b, a, gj, gi = indices[i]
|
678 |
-
idx = (b == batch_idx)
|
679 |
-
b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
|
680 |
-
all_b.append(b)
|
681 |
-
all_a.append(a)
|
682 |
-
all_gj.append(gj)
|
683 |
-
all_gi.append(gi)
|
684 |
-
all_anch.append(anch[i][idx])
|
685 |
-
from_which_layer.append((torch.ones(size=(len(b),)) * i).to(device))
|
686 |
-
|
687 |
-
fg_pred = pi[b, a, gj, gi]
|
688 |
-
p_obj.append(fg_pred[:, 4:5])
|
689 |
-
p_cls.append(fg_pred[:, 5:])
|
690 |
-
|
691 |
-
grid = torch.stack([gi, gj], dim=1)
|
692 |
-
pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
|
693 |
-
#pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
|
694 |
-
pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
|
695 |
-
pxywh = torch.cat([pxy, pwh], dim=-1)
|
696 |
-
pxyxy = xywh2xyxy(pxywh)
|
697 |
-
pxyxys.append(pxyxy)
|
698 |
-
|
699 |
-
pxyxys = torch.cat(pxyxys, dim=0)
|
700 |
-
if pxyxys.shape[0] == 0:
|
701 |
-
continue
|
702 |
-
p_obj = torch.cat(p_obj, dim=0)
|
703 |
-
p_cls = torch.cat(p_cls, dim=0)
|
704 |
-
from_which_layer = torch.cat(from_which_layer, dim=0)
|
705 |
-
all_b = torch.cat(all_b, dim=0)
|
706 |
-
all_a = torch.cat(all_a, dim=0)
|
707 |
-
all_gj = torch.cat(all_gj, dim=0)
|
708 |
-
all_gi = torch.cat(all_gi, dim=0)
|
709 |
-
all_anch = torch.cat(all_anch, dim=0)
|
710 |
-
|
711 |
-
pair_wise_iou = box_iou(txyxy, pxyxys)
|
712 |
-
|
713 |
-
pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
|
714 |
-
|
715 |
-
top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
|
716 |
-
dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
|
717 |
-
|
718 |
-
gt_cls_per_image = (
|
719 |
-
F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
|
720 |
-
.float()
|
721 |
-
.unsqueeze(1)
|
722 |
-
.repeat(1, pxyxys.shape[0], 1)
|
723 |
-
)
|
724 |
-
|
725 |
-
num_gt = this_target.shape[0]
|
726 |
-
cls_preds_ = (
|
727 |
-
p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
|
728 |
-
* p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
|
729 |
-
)
|
730 |
-
|
731 |
-
y = cls_preds_.sqrt_()
|
732 |
-
pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
|
733 |
-
torch.log(y/(1-y)) , gt_cls_per_image, reduction="none"
|
734 |
-
).sum(-1)
|
735 |
-
del cls_preds_
|
736 |
-
|
737 |
-
cost = (
|
738 |
-
pair_wise_cls_loss
|
739 |
-
+ 3.0 * pair_wise_iou_loss
|
740 |
-
)
|
741 |
-
|
742 |
-
matching_matrix = torch.zeros_like(cost, device=device)
|
743 |
-
|
744 |
-
for gt_idx in range(num_gt):
|
745 |
-
_, pos_idx = torch.topk(
|
746 |
-
cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
|
747 |
-
)
|
748 |
-
matching_matrix[gt_idx][pos_idx] = 1.0
|
749 |
-
|
750 |
-
del top_k, dynamic_ks
|
751 |
-
anchor_matching_gt = matching_matrix.sum(0)
|
752 |
-
if (anchor_matching_gt > 1).sum() > 0:
|
753 |
-
_, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
|
754 |
-
matching_matrix[:, anchor_matching_gt > 1] *= 0.0
|
755 |
-
matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
|
756 |
-
fg_mask_inboxes = (matching_matrix.sum(0) > 0.0).to(device)
|
757 |
-
matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
|
758 |
-
|
759 |
-
from_which_layer = from_which_layer[fg_mask_inboxes]
|
760 |
-
all_b = all_b[fg_mask_inboxes]
|
761 |
-
all_a = all_a[fg_mask_inboxes]
|
762 |
-
all_gj = all_gj[fg_mask_inboxes]
|
763 |
-
all_gi = all_gi[fg_mask_inboxes]
|
764 |
-
all_anch = all_anch[fg_mask_inboxes]
|
765 |
-
|
766 |
-
this_target = this_target[matched_gt_inds]
|
767 |
-
|
768 |
-
for i in range(nl):
|
769 |
-
layer_idx = from_which_layer == i
|
770 |
-
matching_bs[i].append(all_b[layer_idx])
|
771 |
-
matching_as[i].append(all_a[layer_idx])
|
772 |
-
matching_gjs[i].append(all_gj[layer_idx])
|
773 |
-
matching_gis[i].append(all_gi[layer_idx])
|
774 |
-
matching_targets[i].append(this_target[layer_idx])
|
775 |
-
matching_anchs[i].append(all_anch[layer_idx])
|
776 |
-
|
777 |
-
for i in range(nl):
|
778 |
-
if matching_targets[i] != []:
|
779 |
-
matching_bs[i] = torch.cat(matching_bs[i], dim=0)
|
780 |
-
matching_as[i] = torch.cat(matching_as[i], dim=0)
|
781 |
-
matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
|
782 |
-
matching_gis[i] = torch.cat(matching_gis[i], dim=0)
|
783 |
-
matching_targets[i] = torch.cat(matching_targets[i], dim=0)
|
784 |
-
matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
|
785 |
-
else:
|
786 |
-
matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
787 |
-
matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
788 |
-
matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
789 |
-
matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
790 |
-
matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
791 |
-
matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
792 |
-
|
793 |
-
return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
|
794 |
-
|
795 |
-
def find_3_positive(self, p, targets):
|
796 |
-
# Build targets for compute_loss(), input targets(image,class,x,y,w,h)
|
797 |
-
na, nt = self.na, targets.shape[0] # number of anchors, targets
|
798 |
-
indices, anch = [], []
|
799 |
-
gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
|
800 |
-
ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
|
801 |
-
targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
|
802 |
-
|
803 |
-
g = 0.5 # bias
|
804 |
-
off = torch.tensor([[0, 0],
|
805 |
-
[1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
|
806 |
-
# [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
|
807 |
-
], device=targets.device).float() * g # offsets
|
808 |
-
|
809 |
-
for i in range(self.nl):
|
810 |
-
anchors = self.anchors[i]
|
811 |
-
gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
|
812 |
-
|
813 |
-
# Match targets to anchors
|
814 |
-
t = targets * gain
|
815 |
-
if nt:
|
816 |
-
# Matches
|
817 |
-
r = t[:, :, 4:6] / anchors[:, None] # wh ratio
|
818 |
-
j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
|
819 |
-
# j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
|
820 |
-
t = t[j] # filter
|
821 |
-
|
822 |
-
# Offsets
|
823 |
-
gxy = t[:, 2:4] # grid xy
|
824 |
-
gxi = gain[[2, 3]] - gxy # inverse
|
825 |
-
j, k = ((gxy % 1. < g) & (gxy > 1.)).T
|
826 |
-
l, m = ((gxi % 1. < g) & (gxi > 1.)).T
|
827 |
-
j = torch.stack((torch.ones_like(j), j, k, l, m))
|
828 |
-
t = t.repeat((5, 1, 1))[j]
|
829 |
-
offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
|
830 |
-
else:
|
831 |
-
t = targets[0]
|
832 |
-
offsets = 0
|
833 |
-
|
834 |
-
# Define
|
835 |
-
b, c = t[:, :2].long().T # image, class
|
836 |
-
gxy = t[:, 2:4] # grid xy
|
837 |
-
gwh = t[:, 4:6] # grid wh
|
838 |
-
gij = (gxy - offsets).long()
|
839 |
-
gi, gj = gij.T # grid xy indices
|
840 |
-
|
841 |
-
# Append
|
842 |
-
a = t[:, 6].long() # anchor indices
|
843 |
-
indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
|
844 |
-
anch.append(anchors[a]) # anchors
|
845 |
-
|
846 |
-
return indices, anch
|
847 |
-
|
848 |
-
|
849 |
-
class ComputeLossBinOTA:
|
850 |
-
# Compute losses
|
851 |
-
def __init__(self, model, autobalance=False):
|
852 |
-
super(ComputeLossBinOTA, self).__init__()
|
853 |
-
device = next(model.parameters()).device # get model device
|
854 |
-
h = model.hyp # hyperparameters
|
855 |
-
|
856 |
-
# Define criteria
|
857 |
-
BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
|
858 |
-
BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
|
859 |
-
#MSEangle = nn.MSELoss().to(device)
|
860 |
-
|
861 |
-
# Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
|
862 |
-
self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
|
863 |
-
|
864 |
-
# Focal loss
|
865 |
-
g = h['fl_gamma'] # focal loss gamma
|
866 |
-
if g > 0:
|
867 |
-
BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
|
868 |
-
|
869 |
-
det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
|
870 |
-
self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
|
871 |
-
self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
|
872 |
-
self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
|
873 |
-
for k in 'na', 'nc', 'nl', 'anchors', 'stride', 'bin_count':
|
874 |
-
setattr(self, k, getattr(det, k))
|
875 |
-
|
876 |
-
#xy_bin_sigmoid = SigmoidBin(bin_count=11, min=-0.5, max=1.5, use_loss_regression=False).to(device)
|
877 |
-
wh_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0, use_loss_regression=False).to(device)
|
878 |
-
#angle_bin_sigmoid = SigmoidBin(bin_count=31, min=-1.1, max=1.1, use_loss_regression=False).to(device)
|
879 |
-
self.wh_bin_sigmoid = wh_bin_sigmoid
|
880 |
-
|
881 |
-
def __call__(self, p, targets, imgs): # predictions, targets, model
|
882 |
-
device = targets.device
|
883 |
-
lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
|
884 |
-
bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
|
885 |
-
pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
|
886 |
-
|
887 |
-
|
888 |
-
# Losses
|
889 |
-
for i, pi in enumerate(p): # layer index, layer predictions
|
890 |
-
b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
|
891 |
-
tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
|
892 |
-
|
893 |
-
obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 # x,y, w-bce, h-bce # xy_bin_sigmoid.get_length()*2
|
894 |
-
|
895 |
-
n = b.shape[0] # number of targets
|
896 |
-
if n:
|
897 |
-
ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
|
898 |
-
|
899 |
-
# Regression
|
900 |
-
grid = torch.stack([gi, gj], dim=1)
|
901 |
-
selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
|
902 |
-
selected_tbox[:, :2] -= grid
|
903 |
-
|
904 |
-
#pxy = ps[:, :2].sigmoid() * 2. - 0.5
|
905 |
-
##pxy = ps[:, :2].sigmoid() * 3. - 1.
|
906 |
-
#pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
|
907 |
-
#pbox = torch.cat((pxy, pwh), 1) # predicted box
|
908 |
-
|
909 |
-
#x_loss, px = xy_bin_sigmoid.training_loss(ps[..., 0:12], tbox[i][..., 0])
|
910 |
-
#y_loss, py = xy_bin_sigmoid.training_loss(ps[..., 12:24], tbox[i][..., 1])
|
911 |
-
w_loss, pw = self.wh_bin_sigmoid.training_loss(ps[..., 2:(3+self.bin_count)], selected_tbox[..., 2] / anchors[i][..., 0])
|
912 |
-
h_loss, ph = self.wh_bin_sigmoid.training_loss(ps[..., (3+self.bin_count):obj_idx], selected_tbox[..., 3] / anchors[i][..., 1])
|
913 |
-
|
914 |
-
pw *= anchors[i][..., 0]
|
915 |
-
ph *= anchors[i][..., 1]
|
916 |
-
|
917 |
-
px = ps[:, 0].sigmoid() * 2. - 0.5
|
918 |
-
py = ps[:, 1].sigmoid() * 2. - 0.5
|
919 |
-
|
920 |
-
lbox += w_loss + h_loss # + x_loss + y_loss
|
921 |
-
|
922 |
-
#print(f"\n px = {px.shape}, py = {py.shape}, pw = {pw.shape}, ph = {ph.shape} \n")
|
923 |
-
|
924 |
-
pbox = torch.cat((px.unsqueeze(1), py.unsqueeze(1), pw.unsqueeze(1), ph.unsqueeze(1)), 1).to(device) # predicted box
|
925 |
-
|
926 |
-
|
927 |
-
|
928 |
-
|
929 |
-
iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
|
930 |
-
lbox += (1.0 - iou).mean() # iou loss
|
931 |
-
|
932 |
-
# Objectness
|
933 |
-
tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
|
934 |
-
|
935 |
-
# Classification
|
936 |
-
selected_tcls = targets[i][:, 1].long()
|
937 |
-
if self.nc > 1: # cls loss (only if multiple classes)
|
938 |
-
t = torch.full_like(ps[:, (1+obj_idx):], self.cn, device=device) # targets
|
939 |
-
t[range(n), selected_tcls] = self.cp
|
940 |
-
lcls += self.BCEcls(ps[:, (1+obj_idx):], t) # BCE
|
941 |
-
|
942 |
-
# Append targets to text file
|
943 |
-
# with open('targets.txt', 'a') as file:
|
944 |
-
# [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
|
945 |
-
|
946 |
-
obji = self.BCEobj(pi[..., obj_idx], tobj)
|
947 |
-
lobj += obji * self.balance[i] # obj loss
|
948 |
-
if self.autobalance:
|
949 |
-
self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
|
950 |
-
|
951 |
-
if self.autobalance:
|
952 |
-
self.balance = [x / self.balance[self.ssi] for x in self.balance]
|
953 |
-
lbox *= self.hyp['box']
|
954 |
-
lobj *= self.hyp['obj']
|
955 |
-
lcls *= self.hyp['cls']
|
956 |
-
bs = tobj.shape[0] # batch size
|
957 |
-
|
958 |
-
loss = lbox + lobj + lcls
|
959 |
-
return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
|
960 |
-
|
961 |
-
def build_targets(self, p, targets, imgs):
|
962 |
-
|
963 |
-
#indices, anch = self.find_positive(p, targets)
|
964 |
-
indices, anch = self.find_3_positive(p, targets)
|
965 |
-
#indices, anch = self.find_4_positive(p, targets)
|
966 |
-
#indices, anch = self.find_5_positive(p, targets)
|
967 |
-
#indices, anch = self.find_9_positive(p, targets)
|
968 |
-
|
969 |
-
matching_bs = [[] for pp in p]
|
970 |
-
matching_as = [[] for pp in p]
|
971 |
-
matching_gjs = [[] for pp in p]
|
972 |
-
matching_gis = [[] for pp in p]
|
973 |
-
matching_targets = [[] for pp in p]
|
974 |
-
matching_anchs = [[] for pp in p]
|
975 |
-
|
976 |
-
nl = len(p)
|
977 |
-
|
978 |
-
for batch_idx in range(p[0].shape[0]):
|
979 |
-
|
980 |
-
b_idx = targets[:, 0]==batch_idx
|
981 |
-
this_target = targets[b_idx]
|
982 |
-
if this_target.shape[0] == 0:
|
983 |
-
continue
|
984 |
-
|
985 |
-
txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
|
986 |
-
txyxy = xywh2xyxy(txywh)
|
987 |
-
|
988 |
-
pxyxys = []
|
989 |
-
p_cls = []
|
990 |
-
p_obj = []
|
991 |
-
from_which_layer = []
|
992 |
-
all_b = []
|
993 |
-
all_a = []
|
994 |
-
all_gj = []
|
995 |
-
all_gi = []
|
996 |
-
all_anch = []
|
997 |
-
|
998 |
-
for i, pi in enumerate(p):
|
999 |
-
|
1000 |
-
obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2
|
1001 |
-
|
1002 |
-
b, a, gj, gi = indices[i]
|
1003 |
-
idx = (b == batch_idx)
|
1004 |
-
b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
|
1005 |
-
all_b.append(b)
|
1006 |
-
all_a.append(a)
|
1007 |
-
all_gj.append(gj)
|
1008 |
-
all_gi.append(gi)
|
1009 |
-
all_anch.append(anch[i][idx])
|
1010 |
-
from_which_layer.append(torch.ones(size=(len(b),)) * i)
|
1011 |
-
|
1012 |
-
fg_pred = pi[b, a, gj, gi]
|
1013 |
-
p_obj.append(fg_pred[:, obj_idx:(obj_idx+1)])
|
1014 |
-
p_cls.append(fg_pred[:, (obj_idx+1):])
|
1015 |
-
|
1016 |
-
grid = torch.stack([gi, gj], dim=1)
|
1017 |
-
pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
|
1018 |
-
#pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
|
1019 |
-
pw = self.wh_bin_sigmoid.forward(fg_pred[..., 2:(3+self.bin_count)].sigmoid()) * anch[i][idx][:, 0] * self.stride[i]
|
1020 |
-
ph = self.wh_bin_sigmoid.forward(fg_pred[..., (3+self.bin_count):obj_idx].sigmoid()) * anch[i][idx][:, 1] * self.stride[i]
|
1021 |
-
|
1022 |
-
pxywh = torch.cat([pxy, pw.unsqueeze(1), ph.unsqueeze(1)], dim=-1)
|
1023 |
-
pxyxy = xywh2xyxy(pxywh)
|
1024 |
-
pxyxys.append(pxyxy)
|
1025 |
-
|
1026 |
-
pxyxys = torch.cat(pxyxys, dim=0)
|
1027 |
-
if pxyxys.shape[0] == 0:
|
1028 |
-
continue
|
1029 |
-
p_obj = torch.cat(p_obj, dim=0)
|
1030 |
-
p_cls = torch.cat(p_cls, dim=0)
|
1031 |
-
from_which_layer = torch.cat(from_which_layer, dim=0)
|
1032 |
-
all_b = torch.cat(all_b, dim=0)
|
1033 |
-
all_a = torch.cat(all_a, dim=0)
|
1034 |
-
all_gj = torch.cat(all_gj, dim=0)
|
1035 |
-
all_gi = torch.cat(all_gi, dim=0)
|
1036 |
-
all_anch = torch.cat(all_anch, dim=0)
|
1037 |
-
|
1038 |
-
pair_wise_iou = box_iou(txyxy, pxyxys)
|
1039 |
-
|
1040 |
-
pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
|
1041 |
-
|
1042 |
-
top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
|
1043 |
-
dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
|
1044 |
-
|
1045 |
-
gt_cls_per_image = (
|
1046 |
-
F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
|
1047 |
-
.float()
|
1048 |
-
.unsqueeze(1)
|
1049 |
-
.repeat(1, pxyxys.shape[0], 1)
|
1050 |
-
)
|
1051 |
-
|
1052 |
-
num_gt = this_target.shape[0]
|
1053 |
-
cls_preds_ = (
|
1054 |
-
p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
|
1055 |
-
* p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
|
1056 |
-
)
|
1057 |
-
|
1058 |
-
y = cls_preds_.sqrt_()
|
1059 |
-
pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
|
1060 |
-
torch.log(y/(1-y)) , gt_cls_per_image, reduction="none"
|
1061 |
-
).sum(-1)
|
1062 |
-
del cls_preds_
|
1063 |
-
|
1064 |
-
cost = (
|
1065 |
-
pair_wise_cls_loss
|
1066 |
-
+ 3.0 * pair_wise_iou_loss
|
1067 |
-
)
|
1068 |
-
|
1069 |
-
matching_matrix = torch.zeros_like(cost)
|
1070 |
-
|
1071 |
-
for gt_idx in range(num_gt):
|
1072 |
-
_, pos_idx = torch.topk(
|
1073 |
-
cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
|
1074 |
-
)
|
1075 |
-
matching_matrix[gt_idx][pos_idx] = 1.0
|
1076 |
-
|
1077 |
-
del top_k, dynamic_ks
|
1078 |
-
anchor_matching_gt = matching_matrix.sum(0)
|
1079 |
-
if (anchor_matching_gt > 1).sum() > 0:
|
1080 |
-
_, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
|
1081 |
-
matching_matrix[:, anchor_matching_gt > 1] *= 0.0
|
1082 |
-
matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
|
1083 |
-
fg_mask_inboxes = matching_matrix.sum(0) > 0.0
|
1084 |
-
matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
|
1085 |
-
|
1086 |
-
from_which_layer = from_which_layer[fg_mask_inboxes]
|
1087 |
-
all_b = all_b[fg_mask_inboxes]
|
1088 |
-
all_a = all_a[fg_mask_inboxes]
|
1089 |
-
all_gj = all_gj[fg_mask_inboxes]
|
1090 |
-
all_gi = all_gi[fg_mask_inboxes]
|
1091 |
-
all_anch = all_anch[fg_mask_inboxes]
|
1092 |
-
|
1093 |
-
this_target = this_target[matched_gt_inds]
|
1094 |
-
|
1095 |
-
for i in range(nl):
|
1096 |
-
layer_idx = from_which_layer == i
|
1097 |
-
matching_bs[i].append(all_b[layer_idx])
|
1098 |
-
matching_as[i].append(all_a[layer_idx])
|
1099 |
-
matching_gjs[i].append(all_gj[layer_idx])
|
1100 |
-
matching_gis[i].append(all_gi[layer_idx])
|
1101 |
-
matching_targets[i].append(this_target[layer_idx])
|
1102 |
-
matching_anchs[i].append(all_anch[layer_idx])
|
1103 |
-
|
1104 |
-
for i in range(nl):
|
1105 |
-
if matching_targets[i] != []:
|
1106 |
-
matching_bs[i] = torch.cat(matching_bs[i], dim=0)
|
1107 |
-
matching_as[i] = torch.cat(matching_as[i], dim=0)
|
1108 |
-
matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
|
1109 |
-
matching_gis[i] = torch.cat(matching_gis[i], dim=0)
|
1110 |
-
matching_targets[i] = torch.cat(matching_targets[i], dim=0)
|
1111 |
-
matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
|
1112 |
-
else:
|
1113 |
-
matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1114 |
-
matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1115 |
-
matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1116 |
-
matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1117 |
-
matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1118 |
-
matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
|
1119 |
-
|
1120 |
-
return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
|
1121 |
-
|
1122 |
-
def find_3_positive(self, p, targets):
|
1123 |
-
# Build targets for compute_loss(), input targets(image,class,x,y,w,h)
|
1124 |
-
na, nt = self.na, targets.shape[0] # number of anchors, targets
|
1125 |
-
indices, anch = [], []
|
1126 |
-
gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
|
1127 |
-
ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
|
1128 |
-
targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
|
1129 |
-
|
1130 |
-
g = 0.5 # bias
|
1131 |
-
off = torch.tensor([[0, 0],
|
1132 |
-
[1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
|
1133 |
-
# [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
|
1134 |
-
], device=targets.device).float() * g # offsets
|
1135 |
-
|
1136 |
-
for i in range(self.nl):
|
1137 |
-
anchors = self.anchors[i]
|
1138 |
-
gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
|
1139 |
-
|
1140 |
-
# Match targets to anchors
|
1141 |
-
t = targets * gain
|
1142 |
-
if nt:
|
1143 |
-
# Matches
|
1144 |
-
r = t[:, :, 4:6] / anchors[:, None] # wh ratio
|
1145 |
-
j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
|
1146 |
-
# j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
|
1147 |
-
t = t[j] # filter
|
1148 |
-
|
1149 |
-
# Offsets
|
1150 |
-
gxy = t[:, 2:4] # grid xy
|
1151 |
-
gxi = gain[[2, 3]] - gxy # inverse
|
1152 |
-
j, k = ((gxy % 1. < g) & (gxy > 1.)).T
|
1153 |
-
l, m = ((gxi % 1. < g) & (gxi > 1.)).T
|
1154 |
-
j = torch.stack((torch.ones_like(j), j, k, l, m))
|
1155 |
-
t = t.repeat((5, 1, 1))[j]
|
1156 |
-
offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
|
1157 |
-
else:
|
1158 |
-
t = targets[0]
|
1159 |
-
offsets = 0
|
1160 |
-
|
1161 |
-
# Define
|
1162 |
-
b, c = t[:, :2].long().T # image, class
|
1163 |
-
gxy = t[:, 2:4] # grid xy
|
1164 |
-
gwh = t[:, 4:6] # grid wh
|
1165 |
-
gij = (gxy - offsets).long()
|
1166 |
-
gi, gj = gij.T # grid xy indices
|
1167 |
-
|
1168 |
-
# Append
|
1169 |
-
a = t[:, 6].long() # anchor indices
|
1170 |
-
indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
|
1171 |
-
anch.append(anchors[a]) # anchors
|
1172 |
-
|
1173 |
-
return indices, anch
|
1174 |
-
|
1175 |
-
|
-class ComputeLossAuxOTA:
-    # Compute losses
-    def __init__(self, model, autobalance=False):
-        super(ComputeLossAuxOTA, self).__init__()
-        device = next(model.parameters()).device  # get model device
-        h = model.hyp  # hyperparameters
-
-        # Define criteria
-        BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
-        BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
-        # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
-        self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0))  # positive, negative BCE targets
-
-        # Focal loss
-        g = h['fl_gamma']  # focal loss gamma
-        if g > 0:
-            BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
-        det = model.module.model[-1] if is_parallel(model) else model.model[-1]  # Detect() module
-        self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02])  # P3-P7
-        self.ssi = list(det.stride).index(16) if autobalance else 0  # stride 16 index
-        self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
-        for k in 'na', 'nc', 'nl', 'anchors', 'stride':
-            setattr(self, k, getattr(det, k))
-
-    def __call__(self, p, targets, imgs):  # predictions, targets, model
-        device = targets.device
-        lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
-        bs_aux, as_aux_, gjs_aux, gis_aux, targets_aux, anchors_aux = self.build_targets2(p[:self.nl], targets, imgs)
-        bs, as_, gjs, gis, targets, anchors = self.build_targets(p[:self.nl], targets, imgs)
-        pre_gen_gains_aux = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]]
-        pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]]
-
-
-        # Losses
-        for i in range(self.nl):  # layer index, layer predictions
-            pi = p[i]
-            pi_aux = p[i + self.nl]
-            b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i]  # image, anchor, gridy, gridx
-            b_aux, a_aux, gj_aux, gi_aux = bs_aux[i], as_aux_[i], gjs_aux[i], gis_aux[i]  # image, anchor, gridy, gridx
-            tobj = torch.zeros_like(pi[..., 0], device=device)  # target obj
-            tobj_aux = torch.zeros_like(pi_aux[..., 0], device=device)  # target obj
-
-            n = b.shape[0]  # number of targets
-            if n:
-                ps = pi[b, a, gj, gi]  # prediction subset corresponding to targets
-
-                # Regression
-                grid = torch.stack([gi, gj], dim=1)
-                pxy = ps[:, :2].sigmoid() * 2. - 0.5
-                pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
-                pbox = torch.cat((pxy, pwh), 1)  # predicted box
-                selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
-                selected_tbox[:, :2] -= grid
-                iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True)  # iou(prediction, target)
-                lbox += (1.0 - iou).mean()  # iou loss
-
-                # Objectness
-                tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype)  # iou ratio
-
-                # Classification
-                selected_tcls = targets[i][:, 1].long()
-                if self.nc > 1:  # cls loss (only if multiple classes)
-                    t = torch.full_like(ps[:, 5:], self.cn, device=device)  # targets
-                    t[range(n), selected_tcls] = self.cp
-                    lcls += self.BCEcls(ps[:, 5:], t)  # BCE
-
-                # Append targets to text file
-                # with open('targets.txt', 'a') as file:
-                #     [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
-            n_aux = b_aux.shape[0]  # number of targets
-            if n_aux:
-                ps_aux = pi_aux[b_aux, a_aux, gj_aux, gi_aux]  # prediction subset corresponding to targets
-                grid_aux = torch.stack([gi_aux, gj_aux], dim=1)
-                pxy_aux = ps_aux[:, :2].sigmoid() * 2. - 0.5
-                # pxy_aux = ps_aux[:, :2].sigmoid() * 3. - 1.
-                pwh_aux = (ps_aux[:, 2:4].sigmoid() * 2) ** 2 * anchors_aux[i]
-                pbox_aux = torch.cat((pxy_aux, pwh_aux), 1)  # predicted box
-                selected_tbox_aux = targets_aux[i][:, 2:6] * pre_gen_gains_aux[i]
-                selected_tbox_aux[:, :2] -= grid_aux
-                iou_aux = bbox_iou(pbox_aux.T, selected_tbox_aux, x1y1x2y2=False, CIoU=True)  # iou(prediction, target)
-                lbox += 0.25 * (1.0 - iou_aux).mean()  # iou loss
-
-                # Objectness
-                tobj_aux[b_aux, a_aux, gj_aux, gi_aux] = (1.0 - self.gr) + self.gr * iou_aux.detach().clamp(0).type(tobj_aux.dtype)  # iou ratio
-
-                # Classification
-                selected_tcls_aux = targets_aux[i][:, 1].long()
-                if self.nc > 1:  # cls loss (only if multiple classes)
-                    t_aux = torch.full_like(ps_aux[:, 5:], self.cn, device=device)  # targets
-                    t_aux[range(n_aux), selected_tcls_aux] = self.cp
-                    lcls += 0.25 * self.BCEcls(ps_aux[:, 5:], t_aux)  # BCE
-
-            obji = self.BCEobj(pi[..., 4], tobj)
-            obji_aux = self.BCEobj(pi_aux[..., 4], tobj_aux)
-            lobj += obji * self.balance[i] + 0.25 * obji_aux * self.balance[i]  # obj loss
-            if self.autobalance:
-                self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
-        if self.autobalance:
-            self.balance = [x / self.balance[self.ssi] for x in self.balance]
-        lbox *= self.hyp['box']
-        lobj *= self.hyp['obj']
-        lcls *= self.hyp['cls']
-        bs = tobj.shape[0]  # batch size
-
-        loss = lbox + lobj + lcls
-        return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
-    def build_targets(self, p, targets, imgs):
-
-        indices, anch = self.find_3_positive(p, targets)
-
-        matching_bs = [[] for pp in p]
-        matching_as = [[] for pp in p]
-        matching_gjs = [[] for pp in p]
-        matching_gis = [[] for pp in p]
-        matching_targets = [[] for pp in p]
-        matching_anchs = [[] for pp in p]
-
-        nl = len(p)
-
-        for batch_idx in range(p[0].shape[0]):
-
-            b_idx = targets[:, 0] == batch_idx
-            this_target = targets[b_idx]
-            if this_target.shape[0] == 0:
-                continue
-
-            txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
-            txyxy = xywh2xyxy(txywh)
-
-            pxyxys = []
-            p_cls = []
-            p_obj = []
-            from_which_layer = []
-            all_b = []
-            all_a = []
-            all_gj = []
-            all_gi = []
-            all_anch = []
-
-            for i, pi in enumerate(p):
-
-                b, a, gj, gi = indices[i]
-                idx = (b == batch_idx)
-                b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
-                all_b.append(b)
-                all_a.append(a)
-                all_gj.append(gj)
-                all_gi.append(gi)
-                all_anch.append(anch[i][idx])
-                from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
-                fg_pred = pi[b, a, gj, gi]
-                p_obj.append(fg_pred[:, 4:5])
-                p_cls.append(fg_pred[:, 5:])
-
-                grid = torch.stack([gi, gj], dim=1)
-                pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i]  # / 8.
-                # pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
-                pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i]  # / 8.
-                pxywh = torch.cat([pxy, pwh], dim=-1)
-                pxyxy = xywh2xyxy(pxywh)
-                pxyxys.append(pxyxy)
-
-            pxyxys = torch.cat(pxyxys, dim=0)
-            if pxyxys.shape[0] == 0:
-                continue
-            p_obj = torch.cat(p_obj, dim=0)
-            p_cls = torch.cat(p_cls, dim=0)
-            from_which_layer = torch.cat(from_which_layer, dim=0)
-            all_b = torch.cat(all_b, dim=0)
-            all_a = torch.cat(all_a, dim=0)
-            all_gj = torch.cat(all_gj, dim=0)
-            all_gi = torch.cat(all_gi, dim=0)
-            all_anch = torch.cat(all_anch, dim=0)
-
-            pair_wise_iou = box_iou(txyxy, pxyxys)
-
-            pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
-            top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1)
-            dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
-            gt_cls_per_image = (
-                F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
-                .float()
-                .unsqueeze(1)
-                .repeat(1, pxyxys.shape[0], 1)
-            )
-
-            num_gt = this_target.shape[0]
-            cls_preds_ = (
-                p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
-                * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
-            )
-
-            y = cls_preds_.sqrt_()
-            pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
-                torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
-            ).sum(-1)
-            del cls_preds_
-
-            cost = (
-                pair_wise_cls_loss
-                + 3.0 * pair_wise_iou_loss
-            )
-
-            matching_matrix = torch.zeros_like(cost)
-
-            for gt_idx in range(num_gt):
-                _, pos_idx = torch.topk(
-                    cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
-                )
-                matching_matrix[gt_idx][pos_idx] = 1.0
-
-            del top_k, dynamic_ks
-            anchor_matching_gt = matching_matrix.sum(0)
-            if (anchor_matching_gt > 1).sum() > 0:
-                _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
-                matching_matrix[:, anchor_matching_gt > 1] *= 0.0
-                matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
-            fg_mask_inboxes = matching_matrix.sum(0) > 0.0
-            matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
-            from_which_layer = from_which_layer[fg_mask_inboxes]
-            all_b = all_b[fg_mask_inboxes]
-            all_a = all_a[fg_mask_inboxes]
-            all_gj = all_gj[fg_mask_inboxes]
-            all_gi = all_gi[fg_mask_inboxes]
-            all_anch = all_anch[fg_mask_inboxes]
-
-            this_target = this_target[matched_gt_inds]
-
-            for i in range(nl):
-                layer_idx = from_which_layer == i
-                matching_bs[i].append(all_b[layer_idx])
-                matching_as[i].append(all_a[layer_idx])
-                matching_gjs[i].append(all_gj[layer_idx])
-                matching_gis[i].append(all_gi[layer_idx])
-                matching_targets[i].append(this_target[layer_idx])
-                matching_anchs[i].append(all_anch[layer_idx])
-
-        for i in range(nl):
-            if matching_targets[i] != []:
-                matching_bs[i] = torch.cat(matching_bs[i], dim=0)
-                matching_as[i] = torch.cat(matching_as[i], dim=0)
-                matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
-                matching_gis[i] = torch.cat(matching_gis[i], dim=0)
-                matching_targets[i] = torch.cat(matching_targets[i], dim=0)
-                matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
-            else:
-                matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
-        return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
-    def build_targets2(self, p, targets, imgs):
-
-        indices, anch = self.find_5_positive(p, targets)
-
-        matching_bs = [[] for pp in p]
-        matching_as = [[] for pp in p]
-        matching_gjs = [[] for pp in p]
-        matching_gis = [[] for pp in p]
-        matching_targets = [[] for pp in p]
-        matching_anchs = [[] for pp in p]
-
-        nl = len(p)
-
-        for batch_idx in range(p[0].shape[0]):
-
-            b_idx = targets[:, 0] == batch_idx
-            this_target = targets[b_idx]
-            if this_target.shape[0] == 0:
-                continue
-
-            txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
-            txyxy = xywh2xyxy(txywh)
-
-            pxyxys = []
-            p_cls = []
-            p_obj = []
-            from_which_layer = []
-            all_b = []
-            all_a = []
-            all_gj = []
-            all_gi = []
-            all_anch = []
-
-            for i, pi in enumerate(p):
-
-                b, a, gj, gi = indices[i]
-                idx = (b == batch_idx)
-                b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
-                all_b.append(b)
-                all_a.append(a)
-                all_gj.append(gj)
-                all_gi.append(gi)
-                all_anch.append(anch[i][idx])
-                from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
-                fg_pred = pi[b, a, gj, gi]
-                p_obj.append(fg_pred[:, 4:5])
-                p_cls.append(fg_pred[:, 5:])
-
-                grid = torch.stack([gi, gj], dim=1)
-                pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i]  # / 8.
-                # pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
-                pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i]  # / 8.
-                pxywh = torch.cat([pxy, pwh], dim=-1)
-                pxyxy = xywh2xyxy(pxywh)
-                pxyxys.append(pxyxy)
-
-            pxyxys = torch.cat(pxyxys, dim=0)
-            if pxyxys.shape[0] == 0:
-                continue
-            p_obj = torch.cat(p_obj, dim=0)
-            p_cls = torch.cat(p_cls, dim=0)
-            from_which_layer = torch.cat(from_which_layer, dim=0)
-            all_b = torch.cat(all_b, dim=0)
-            all_a = torch.cat(all_a, dim=0)
-            all_gj = torch.cat(all_gj, dim=0)
-            all_gi = torch.cat(all_gi, dim=0)
-            all_anch = torch.cat(all_anch, dim=0)
-
-            pair_wise_iou = box_iou(txyxy, pxyxys)
-
-            pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
-            top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1)
-            dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
-            gt_cls_per_image = (
-                F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
-                .float()
-                .unsqueeze(1)
-                .repeat(1, pxyxys.shape[0], 1)
-            )
-
-            num_gt = this_target.shape[0]
-            cls_preds_ = (
-                p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
-                * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
-            )
-
-            y = cls_preds_.sqrt_()
-            pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
-                torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
-            ).sum(-1)
-            del cls_preds_
-
-            cost = (
-                pair_wise_cls_loss
-                + 3.0 * pair_wise_iou_loss
-            )
-
-            matching_matrix = torch.zeros_like(cost)
-
-            for gt_idx in range(num_gt):
-                _, pos_idx = torch.topk(
-                    cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
-                )
-                matching_matrix[gt_idx][pos_idx] = 1.0
-
-            del top_k, dynamic_ks
-            anchor_matching_gt = matching_matrix.sum(0)
-            if (anchor_matching_gt > 1).sum() > 0:
-                _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
-                matching_matrix[:, anchor_matching_gt > 1] *= 0.0
-                matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
-            fg_mask_inboxes = matching_matrix.sum(0) > 0.0
-            matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
-            from_which_layer = from_which_layer[fg_mask_inboxes]
-            all_b = all_b[fg_mask_inboxes]
-            all_a = all_a[fg_mask_inboxes]
-            all_gj = all_gj[fg_mask_inboxes]
-            all_gi = all_gi[fg_mask_inboxes]
-            all_anch = all_anch[fg_mask_inboxes]
-
-            this_target = this_target[matched_gt_inds]
-
-            for i in range(nl):
-                layer_idx = from_which_layer == i
-                matching_bs[i].append(all_b[layer_idx])
-                matching_as[i].append(all_a[layer_idx])
-                matching_gjs[i].append(all_gj[layer_idx])
-                matching_gis[i].append(all_gi[layer_idx])
-                matching_targets[i].append(this_target[layer_idx])
-                matching_anchs[i].append(all_anch[layer_idx])
-
-        for i in range(nl):
-            if matching_targets[i] != []:
-                matching_bs[i] = torch.cat(matching_bs[i], dim=0)
-                matching_as[i] = torch.cat(matching_as[i], dim=0)
-                matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
-                matching_gis[i] = torch.cat(matching_gis[i], dim=0)
-                matching_targets[i] = torch.cat(matching_targets[i], dim=0)
-                matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
-            else:
-                matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-                matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
-        return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
-    def find_5_positive(self, p, targets):
-        # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
-        na, nt = self.na, targets.shape[0]  # number of anchors, targets
-        indices, anch = [], []
-        gain = torch.ones(7, device=targets.device).long()  # normalized to gridspace gain
-        ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt)  # same as .repeat_interleave(nt)
-        targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2)  # append anchor indices
-
-        g = 1.0  # bias
-        off = torch.tensor([[0, 0],
-                            [1, 0], [0, 1], [-1, 0], [0, -1],  # j,k,l,m
-                            # [1, 1], [1, -1], [-1, 1], [-1, -1],  # jk,jm,lk,lm
-                            ], device=targets.device).float() * g  # offsets
-
-        for i in range(self.nl):
-            anchors = self.anchors[i]
-            gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]]  # xyxy gain
-
-            # Match targets to anchors
-            t = targets * gain
-            if nt:
-                # Matches
-                r = t[:, :, 4:6] / anchors[:, None]  # wh ratio
-                j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t']  # compare
-                # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t']  # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
-                t = t[j]  # filter
-
-                # Offsets
-                gxy = t[:, 2:4]  # grid xy
-                gxi = gain[[2, 3]] - gxy  # inverse
-                j, k = ((gxy % 1. < g) & (gxy > 1.)).T
-                l, m = ((gxi % 1. < g) & (gxi > 1.)).T
-                j = torch.stack((torch.ones_like(j), j, k, l, m))
-                t = t.repeat((5, 1, 1))[j]
-                offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
-            else:
-                t = targets[0]
-                offsets = 0
-
-            # Define
-            b, c = t[:, :2].long().T  # image, class
-            gxy = t[:, 2:4]  # grid xy
-            gwh = t[:, 4:6]  # grid wh
-            gij = (gxy - offsets).long()
-            gi, gj = gij.T  # grid xy indices
-
-            # Append
-            a = t[:, 6].long()  # anchor indices
-            indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))  # image, anchor, grid indices
-            anch.append(anchors[a])  # anchors
-
-        return indices, anch
-
-    def find_3_positive(self, p, targets):
-        # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
-        na, nt = self.na, targets.shape[0]  # number of anchors, targets
-        indices, anch = [], []
-        gain = torch.ones(7, device=targets.device).long()  # normalized to gridspace gain
-        ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt)  # same as .repeat_interleave(nt)
-        targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2)  # append anchor indices
-
-        g = 0.5  # bias
-        off = torch.tensor([[0, 0],
-                            [1, 0], [0, 1], [-1, 0], [0, -1],  # j,k,l,m
-                            # [1, 1], [1, -1], [-1, 1], [-1, -1],  # jk,jm,lk,lm
-                            ], device=targets.device).float() * g  # offsets
-
-        for i in range(self.nl):
-            anchors = self.anchors[i]
-            gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]]  # xyxy gain
-
-            # Match targets to anchors
-            t = targets * gain
-            if nt:
-                # Matches
-                r = t[:, :, 4:6] / anchors[:, None]  # wh ratio
-                j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t']  # compare
-                # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t']  # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
-                t = t[j]  # filter
-
-                # Offsets
-                gxy = t[:, 2:4]  # grid xy
-                gxi = gain[[2, 3]] - gxy  # inverse
-                j, k = ((gxy % 1. < g) & (gxy > 1.)).T
-                l, m = ((gxi % 1. < g) & (gxi > 1.)).T
-                j = torch.stack((torch.ones_like(j), j, k, l, m))
-                t = t.repeat((5, 1, 1))[j]
-                offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
-            else:
-                t = targets[0]
-                offsets = 0
-
-            # Define
-            b, c = t[:, :2].long().T  # image, class
-            gxy = t[:, 2:4]  # grid xy
-            gwh = t[:, 4:6]  # grid wh
-            gij = (gxy - offsets).long()
-            gi, gj = gij.T  # grid xy indices
-
-            # Append
-            a = t[:, 6].long()  # anchor indices
-            indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))  # image, anchor, grid indices
-            anch.append(anchors[a])  # anchors
-
-        return indices, anch
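
A minimal sketch of how this auxiliary-head OTA loss would be driven from a training step. The `model`, `imgs`, and `targets` objects are assumed to follow the yolov7 conventions visible above (model exposes `.hyp`, `.gr`, and a Detect() head; predictions contain lead heads followed by aux heads); `train_loader`, `optimizer`, and `device` are illustrative names, not part of the file:

# Hypothetical training-step wiring for ComputeLossAuxOTA (illustrative only).
compute_loss = ComputeLossAuxOTA(model)

for imgs, targets, _, _ in train_loader:        # targets: (n, 6) = image, class, x, y, w, h (normalized)
    imgs = imgs.to(device, non_blocking=True).float() / 255.0
    pred = model(imgs)                          # 2 * nl feature maps: lead heads first, aux heads after
    loss, loss_items = compute_loss(pred, targets.to(device), imgs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()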
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/GetChildrenHeight.js
DELETED
@@ -1,10 +0,0 @@
-var GetChildrenHeight = function () {
-    if (this.rexSizer.hidden) {
-        return 0;
-    }
-
-    // After RunChildrenWrap
-    return this.widthWrapResult.height + this.space.top + this.space.bottom;
-}
-
-export default GetChildrenHeight;
spaces/AlgoveraAI/web3-wallet-streamlit/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Web3 Wallet Streamlit
-emoji: 📚
-colorFrom: gray
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
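The front matter above tells the Spaces runtime to launch `app.py` with Streamlit 1.10.0. A minimal, hypothetical `app.py` that would satisfy this configuration; the wallet logic itself is not part of the README, so this is only a placeholder:

# app.py -- placeholder Streamlit entry point (illustrative; not the Space's actual code).
import streamlit as st

st.title("Web3 Wallet Streamlit")
address = st.text_input("Wallet address")
if address:
    st.write(f"Connected to wallet: {address}")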
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/commons.py
DELETED
@@ -1,97 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
-    return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
-    return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size * dilation - dilation) / 2)
-
-
-def intersperse(lst, item):
-    result = [item] * (len(lst) * 2 + 1)
-    result[1::2] = lst
-    return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
-    ret = torch.zeros_like(x[:, :, :segment_size])
-    for i in range(x.size(0)):
-        idx_str = ids_str[i]
-        idx_end = idx_str + segment_size
-        ret[i] = x[i, :, idx_str:idx_end]
-    return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
-    b, d, t = x.size()
-    if x_lengths is None:
-        x_lengths = t
-    ids_str_max = x_lengths - segment_size + 1
-    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-    ret = slice_segments(x, ids_str, segment_size)
-    return ret, ids_str
-
-
-def subsequent_mask(length):
-    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-    return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-    n_channels_int = n_channels[0]
-    in_act = input_a + input_b
-    t_act = torch.tanh(in_act[:, :n_channels_int, :])
-    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-    acts = t_act * s_act
-    return acts
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def sequence_mask(length, max_length=None):
-    if max_length is None:
-        max_length = length.max()
-    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-    return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
-    """
-    duration: [b, 1, t_x]
-    mask: [b, 1, t_y, t_x]
-    """
-    device = duration.device
-
-    b, _, t_y, t_x = mask.shape
-    cum_duration = torch.cumsum(duration, -1)
-
-    cum_duration_flat = cum_duration.view(b * t_x)
-    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-    path = path.view(b, t_x, t_y)
-    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-    path = path.unsqueeze(1).transpose(2, 3) * mask
-    return path
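Two of these helpers are easy to sanity-check in isolation: `intersperse` inserts a blank token between phoneme ids, and `sequence_mask` builds a boolean mask for variable-length batches. A short sketch of their observable behaviour (input values chosen arbitrarily):

import torch

# intersperse: blank token id 0 between and around the real ids.
def intersperse(lst, item):
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result

print(intersperse([5, 9, 2], 0))         # [0, 5, 0, 9, 0, 2, 0]

# sequence_mask: True where a frame index is within each sequence's length.
lengths = torch.tensor([2, 4])
max_len = int(lengths.max())
mask = torch.arange(max_len)[None, :] < lengths[:, None]
print(mask)                               # [[True, True, False, False], [True, True, True, True]]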
spaces/Aloento/9Nine-VITS/commons.py
DELETED
@@ -1,51 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def sequence_mask(length, max_length=None):
-    if max_length is None:
-        max_length = length.max()
-    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-    return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
-    b, _, t_y, t_x = mask.shape
-    cum_duration = torch.cumsum(duration, -1)
-
-    cum_duration_flat = cum_duration.view(b * t_x)
-    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-    path = path.view(b, t_x, t_y)
-    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-    path = path.unsqueeze(1).transpose(2, 3) * mask
-    return path
-
-
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-    n_channels_int = n_channels[0]
-    in_act = input_a + input_b
-    t_act = torch.tanh(in_act[:, :n_channels_int, :])
-    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-    acts = t_act * s_act
-    return acts
-
-
-def intersperse(lst, item):
-    result = [item] * (len(lst) * 2 + 1)
-    result[1::2] = lst
-    return result
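This file is a trimmed copy of the commons module above; the one non-obvious helper is `generate_path`, which turns per-token durations into a hard monotonic alignment between text tokens and output frames. A small worked example for a single batch item (the durations are made up):

import torch
from torch.nn import functional as F

# Three text tokens with durations 2, 1, 3 -> 6 output frames.
duration = torch.tensor([[[2., 1., 3.]]])                    # [b=1, 1, t_x=3]
t_y = int(duration.sum())

cum = torch.cumsum(duration, -1).view(-1)                    # cumulative end frame per token: [2, 3, 6]
path = (torch.arange(t_y)[None, :] < cum[:, None]).float()   # sequence_mask over frames
path = path - F.pad(path, [0, 0, 1, 0])[:-1]                 # first difference along the token axis
print(path)   # rows = tokens, cols = frames; token 0 owns frames 0-1, token 1 frame 2, token 2 frames 3-5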
spaces/Amrrs/DragGan-Inversion/stylegan_human/utils/face_alignment.py
DELETED
@@ -1,274 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import numpy as np
-import PIL
-import PIL.Image
-import scipy
-import scipy.ndimage
-import dlib
-import copy
-from PIL import Image
-
-
-def get_landmark(img, detector, predictor):
-    """get landmark with dlib
-    :return: np.array shape=(68, 2)
-    """
-    # detector = dlib.get_frontal_face_detector()
-    # dets, _, _ = detector.run(img, 1, -1)
-    dets = detector(img, 1)
-    for k, d in enumerate(dets):
-        shape = predictor(img, d.rect)
-        t = list(shape.parts())
-        a = []
-        for tt in t:
-            a.append([tt.x, tt.y])
-        lm = np.array(a)
-
-    # face rect
-    face_rect = [dets[0].rect.left(), dets[0].rect.top(),
-                 dets[0].rect.right(), dets[0].rect.bottom()]
-    return lm, face_rect
-
-
-def align_face_for_insetgan(img, detector, predictor, output_size=256):
-    """
-    :param img: numpy array rgb
-    :return: PIL Image
-    """
-    img_cp = copy.deepcopy(img)
-    lm, face_rect = get_landmark(img, detector, predictor)
-
-    lm_chin = lm[0: 17]  # left-right
-    lm_eyebrow_left = lm[17: 22]  # left-right
-    lm_eyebrow_right = lm[22: 27]  # left-right
-    lm_nose = lm[27: 31]  # top-down
-    lm_nostrils = lm[31: 36]  # top-down
-    lm_eye_left = lm[36: 42]  # left-clockwise
-    lm_eye_right = lm[42: 48]  # left-clockwise
-    lm_mouth_outer = lm[48: 60]  # left-clockwise
-    lm_mouth_inner = lm[60: 68]  # left-clockwise
-
-    # Calculate auxiliary vectors.
-    eye_left = np.mean(lm_eye_left, axis=0)
-    eye_right = np.mean(lm_eye_right, axis=0)
-    eye_avg = (eye_left + eye_right) * 0.5
-    eye_to_eye = eye_right - eye_left
-    mouth_left = lm_mouth_outer[0]
-    mouth_right = lm_mouth_outer[6]
-    mouth_avg = (mouth_left + mouth_right) * 0.5
-    eye_to_mouth = mouth_avg - eye_avg
-
-    # Choose oriented crop rectangle.
-    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
-    x /= np.hypot(*x)
-    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
-    y = np.flipud(x) * [-1, 1]
-    c = eye_avg + eye_to_mouth * 0.1
-    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
-    qsize = np.hypot(*x) * 2
-
-    # read image
-    # opencv to PIL
-    img = PIL.Image.fromarray(img_cp)
-    # img = PIL.Image.open(filepath)
-
-    transform_size = output_size
-    enable_padding = False
-
-    # Shrink.
-    # shrink = int(np.floor(qsize / output_size * 0.5))
-    # if shrink > 1:
-    #     rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
-    #     img = img.resize(rsize, PIL.Image.ANTIALIAS)
-    #     quad /= shrink
-    #     qsize /= shrink
-
-    # Crop.
-    border = max(int(np.rint(qsize * 0.1)), 3)
-    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-            int(np.ceil(max(quad[:, 1]))))
-
-    # crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
-    #         min(crop[3] + border, img.size[1]))
-    # img.save("debug/raw.jpg")
-    if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
-        img = img.crop(crop)
-        quad -= crop[0:2]
-    # img.save("debug/crop.jpg")
-    # Pad.
-    # pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-    #        int(np.ceil(max(quad[:, 1]))))
-    # pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
-    #        max(pad[3] - img.size[1] + border, 0))
-    # if enable_padding and max(pad) > border - 4:
-    #     pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
-    #     img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
-    #     h, w, _ = img.shape
-    #     y, x, _ = np.ogrid[:h, :w, :1]
-    #     mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
-    #                       1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
-    #     blur = qsize * 0.02
-    #     img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
-    #     img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
-    #     img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
-    #     quad += pad[:2]
-
-    # Transform.
-    # crop shape to transform shape
-    # nw =
-    # print(img.size, quad+0.5, np.bound((quad+0.5).flatten()))
-    # assert False
-    # img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
-
-    # img.save("debug/transform.jpg")
-    # if output_size < transform_size:
-    img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-    # img.save("debug/resize.jpg")
-    # print((quad+crop[0:2]).flatten())
-    # assert False
-    # Return aligned image.
-
-    return img, crop, face_rect
-
-
-def align_face_for_projector(img, detector, predictor, output_size):
-    """
-    :param filepath: str
-    :return: PIL Image
-    """
-
-    img_cp = copy.deepcopy(img)
-    lm, face_rect = get_landmark(img, detector, predictor)
-
-    lm_chin = lm[0: 17]  # left-right
-    lm_eyebrow_left = lm[17: 22]  # left-right
-    lm_eyebrow_right = lm[22: 27]  # left-right
-    lm_nose = lm[27: 31]  # top-down
-    lm_nostrils = lm[31: 36]  # top-down
-    lm_eye_left = lm[36: 42]  # left-clockwise
-    lm_eye_right = lm[42: 48]  # left-clockwise
-    lm_mouth_outer = lm[48: 60]  # left-clockwise
-    lm_mouth_inner = lm[60: 68]  # left-clockwise
-
-    # Calculate auxiliary vectors.
-    eye_left = np.mean(lm_eye_left, axis=0)
-    eye_right = np.mean(lm_eye_right, axis=0)
-    eye_avg = (eye_left + eye_right) * 0.5
-    eye_to_eye = eye_right - eye_left
-    mouth_left = lm_mouth_outer[0]
-    mouth_right = lm_mouth_outer[6]
-    mouth_avg = (mouth_left + mouth_right) * 0.5
-    eye_to_mouth = mouth_avg - eye_avg
-
-    # Choose oriented crop rectangle.
-    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
-    x /= np.hypot(*x)
-    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
-    y = np.flipud(x) * [-1, 1]
-    c = eye_avg + eye_to_mouth * 0.1
-    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
-    qsize = np.hypot(*x) * 2
-
-    # read image
-    img = PIL.Image.fromarray(img_cp)
-
-    transform_size = output_size
-    enable_padding = True
-
-    # Shrink.
-    shrink = int(np.floor(qsize / output_size * 0.5))
-    if shrink > 1:
-        rsize = (int(np.rint(float(img.size[0]) / shrink)),
-                 int(np.rint(float(img.size[1]) / shrink)))
-        img = img.resize(rsize, PIL.Image.ANTIALIAS)
-        quad /= shrink
-        qsize /= shrink
-
-    # Crop.
-    border = max(int(np.rint(qsize * 0.1)), 3)
-    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-            int(np.ceil(max(quad[:, 1]))))
-    crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
-            min(crop[3] + border, img.size[1]))
-    if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
-        img = img.crop(crop)
-        quad -= crop[0:2]
-
-    # Pad.
-    pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-           int(np.ceil(max(quad[:, 1]))))
-    pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
-           max(pad[3] - img.size[1] + border, 0))
-    if enable_padding and max(pad) > border - 4:
-        pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
-        img = np.pad(np.float32(img),
-                     ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
-        h, w, _ = img.shape
-        y, x, _ = np.ogrid[:h, :w, :1]
-        mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
-                          1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
-        blur = qsize * 0.02
-        img += (scipy.ndimage.gaussian_filter(img,
-                                              [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
-        img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
-        img = PIL.Image.fromarray(
-            np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
-        quad += pad[:2]
-
-    # Transform.
-    img = img.transform((transform_size, transform_size),
-                        PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
-    if output_size < transform_size:
-        img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-
-    # Return aligned image.
-    return img
-
-
-def reverse_quad_transform(image, quad_to_map_to, alpha):
-    # forward mapping, for simplicity
-
-    result = Image.new("RGBA", image.size)
-    result_pixels = result.load()
-
-    width, height = result.size
-
-    for y in range(height):
-        for x in range(width):
-            result_pixels[x, y] = (0, 0, 0, 0)
-
-    p1 = (quad_to_map_to[0], quad_to_map_to[1])
-    p2 = (quad_to_map_to[2], quad_to_map_to[3])
-    p3 = (quad_to_map_to[4], quad_to_map_to[5])
-    p4 = (quad_to_map_to[6], quad_to_map_to[7])
-
-    p1_p2_vec = (p2[0] - p1[0], p2[1] - p1[1])
-    p4_p3_vec = (p3[0] - p4[0], p3[1] - p4[1])
-
-    for y in range(height):
-        for x in range(width):
-            pixel = image.getpixel((x, y))
-
-            y_percentage = y / float(height)
-            x_percentage = x / float(width)
-
-            # interpolate vertically
-            pa = (p1[0] + p1_p2_vec[0] * y_percentage,
-                  p1[1] + p1_p2_vec[1] * y_percentage)
-            pb = (p4[0] + p4_p3_vec[0] * y_percentage,
-                  p4[1] + p4_p3_vec[1] * y_percentage)
-
-            pa_to_pb_vec = (pb[0] - pa[0], pb[1] - pa[1])
-
-            # interpolate horizontally
-            p = (pa[0] + pa_to_pb_vec[0] * x_percentage,
-                 pa[1] + pa_to_pb_vec[1] * x_percentage)
-
-            try:
-                result_pixels[p[0], p[1]] = (
-                    pixel[0], pixel[1], pixel[2], min(int(alpha * 255), pixel[3]))
-            except Exception:
-                pass
-
-    return result
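A sketch of how these alignment helpers would be driven. Note that `get_landmark` accesses `d.rect`, which matches dlib's CNN face detector rather than the frontal HOG detector; the model file names are the usual dlib downloads and the input path is illustrative:

# Illustrative wiring for align_face_for_projector (paths and model file names are assumptions).
import dlib
import numpy as np
from PIL import Image

detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")  # detections expose .rect
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")    # standard 68-point model

img = np.asarray(Image.open("input.jpg").convert("RGB"))
aligned = align_face_for_projector(img, detector, predictor, output_size=1024)
aligned.save("aligned.jpg")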
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/multistep_dpm_solver.md
DELETED
@@ -1,20 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-# Multistep DPM-Solver
-
-## Overview
-
-Original paper can be found [here](https://arxiv.org/abs/2206.00927) and the [improved version](https://arxiv.org/abs/2211.01095). The original implementation can be found [here](https://github.com/LuChengTHU/dpm-solver).
-
-## DPMSolverMultistepScheduler
-[[autodoc]] DPMSolverMultistepScheduler
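The doc page above only links out to the papers; for reference, the usual way this scheduler is swapped into a diffusers pipeline (the model id is just the common example checkpoint, not something this page prescribes):

# Standard diffusers pattern: replace a pipeline's scheduler with DPMSolverMultistepScheduler.
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]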
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_unet_blocks_common.py
DELETED
@@ -1,121 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import unittest
-from typing import Tuple
-
-import torch
-
-from diffusers.utils import floats_tensor, randn_tensor, torch_all_close, torch_device
-from diffusers.utils.testing_utils import require_torch
-
-
-@require_torch
-class UNetBlockTesterMixin:
-    @property
-    def dummy_input(self):
-        return self.get_dummy_input()
-
-    @property
-    def output_shape(self):
-        if self.block_type == "down":
-            return (4, 32, 16, 16)
-        elif self.block_type == "mid":
-            return (4, 32, 32, 32)
-        elif self.block_type == "up":
-            return (4, 32, 64, 64)
-
-        raise ValueError(f"'{self.block_type}' is not a supported block_type. Set it to 'up', 'mid', or 'down'.")
-
-    def get_dummy_input(
-        self,
-        include_temb=True,
-        include_res_hidden_states_tuple=False,
-        include_encoder_hidden_states=False,
-        include_skip_sample=False,
-    ):
-        batch_size = 4
-        num_channels = 32
-        sizes = (32, 32)
-
-        generator = torch.manual_seed(0)
-        device = torch.device(torch_device)
-        shape = (batch_size, num_channels) + sizes
-        hidden_states = randn_tensor(shape, generator=generator, device=device)
-        dummy_input = {"hidden_states": hidden_states}
-
-        if include_temb:
-            temb_channels = 128
-            dummy_input["temb"] = randn_tensor((batch_size, temb_channels), generator=generator, device=device)
-
-        if include_res_hidden_states_tuple:
-            generator_1 = torch.manual_seed(1)
-            dummy_input["res_hidden_states_tuple"] = (randn_tensor(shape, generator=generator_1, device=device),)
-
-        if include_encoder_hidden_states:
-            dummy_input["encoder_hidden_states"] = floats_tensor((batch_size, 32, 32)).to(torch_device)
-
-        if include_skip_sample:
-            dummy_input["skip_sample"] = randn_tensor(((batch_size, 3) + sizes), generator=generator, device=device)
-
-        return dummy_input
-
-    def prepare_init_args_and_inputs_for_common(self):
-        init_dict = {
-            "in_channels": 32,
-            "out_channels": 32,
-            "temb_channels": 128,
-        }
-        if self.block_type == "up":
-            init_dict["prev_output_channel"] = 32
-
-        if self.block_type == "mid":
-            init_dict.pop("out_channels")
-
-        inputs_dict = self.dummy_input
-        return init_dict, inputs_dict
-
-    def test_output(self, expected_slice):
-        init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-        unet_block = self.block_class(**init_dict)
-        unet_block.to(torch_device)
-        unet_block.eval()
-
-        with torch.no_grad():
-            output = unet_block(**inputs_dict)
-
-        if isinstance(output, Tuple):
-            output = output[0]
-
-        self.assertEqual(output.shape, self.output_shape)
-
-        output_slice = output[0, -1, -3:, -3:]
-        expected_slice = torch.tensor(expected_slice).to(torch_device)
-        assert torch_all_close(output_slice.flatten(), expected_slice, atol=5e-3)
-
-    @unittest.skipIf(torch_device == "mps", "Training is not supported in mps")
-    def test_training(self):
-        init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
-        model = self.block_class(**init_dict)
-        model.to(torch_device)
-        model.train()
-        output = model(**inputs_dict)
-
-        if isinstance(output, Tuple):
-            output = output[0]
-
-        device = torch.device(torch_device)
-        noise = randn_tensor(output.shape, device=device)
-        loss = torch.nn.functional.mse_loss(output, noise)
-        loss.backward()
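Concrete block tests mix this helper into a unittest case by setting `block_class` and `block_type`. A minimal hypothetical subclass showing the intended usage; the expected slice values are placeholders, not real regression numbers:

# Hypothetical concrete test built on UNetBlockTesterMixin (expected values are placeholders).
import unittest

from diffusers.models.unet_2d_blocks import DownBlock2D


class DownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase):
    block_class = DownBlock2D
    block_type = "down"

    def test_output(self):
        # 9 values matching the flattened [0, -1, -3:, -3:] output slice; placeholders only.
        expected_slice = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        super().test_output(expected_slice)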
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/latent_diffusion/test_latent_diffusion.py
DELETED
@@ -1,208 +0,0 @@
# coding=utf-8
# Copyright 2023 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import gc
import unittest

import numpy as np
import torch
from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer

from diffusers import AutoencoderKL, DDIMScheduler, LDMTextToImagePipeline, UNet2DConditionModel
from diffusers.utils.testing_utils import (
    enable_full_determinism,
    load_numpy,
    nightly,
    require_torch_gpu,
    slow,
    torch_device,
)

from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


enable_full_determinism()


class LDMTextToImagePipelineFastTests(PipelineTesterMixin, unittest.TestCase):
    pipeline_class = LDMTextToImagePipeline
    params = TEXT_TO_IMAGE_PARAMS - {
        "negative_prompt",
        "negative_prompt_embeds",
        "cross_attention_kwargs",
        "prompt_embeds",
    }
    required_optional_params = PipelineTesterMixin.required_optional_params - {
        "num_images_per_prompt",
        "callback",
        "callback_steps",
    }
    batch_params = TEXT_TO_IMAGE_BATCH_PARAMS

    def get_dummy_components(self):
        torch.manual_seed(0)
        unet = UNet2DConditionModel(
            block_out_channels=(32, 64),
            layers_per_block=2,
            sample_size=32,
            in_channels=4,
            out_channels=4,
            down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
            up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
            cross_attention_dim=32,
        )
        scheduler = DDIMScheduler(
            beta_start=0.00085,
            beta_end=0.012,
            beta_schedule="scaled_linear",
            clip_sample=False,
            set_alpha_to_one=False,
        )
        torch.manual_seed(0)
        vae = AutoencoderKL(
            block_out_channels=(32, 64),
            in_channels=3,
            out_channels=3,
            down_block_types=("DownEncoderBlock2D", "DownEncoderBlock2D"),
            up_block_types=("UpDecoderBlock2D", "UpDecoderBlock2D"),
            latent_channels=4,
        )
        torch.manual_seed(0)
        text_encoder_config = CLIPTextConfig(
            bos_token_id=0,
            eos_token_id=2,
            hidden_size=32,
            intermediate_size=37,
            layer_norm_eps=1e-05,
            num_attention_heads=4,
            num_hidden_layers=5,
            pad_token_id=1,
            vocab_size=1000,
        )
        text_encoder = CLIPTextModel(text_encoder_config)
        tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")

        components = {
            "unet": unet,
            "scheduler": scheduler,
            "vqvae": vae,
            "bert": text_encoder,
            "tokenizer": tokenizer,
        }
        return components

    def get_dummy_inputs(self, device, seed=0):
        if str(device).startswith("mps"):
            generator = torch.manual_seed(seed)
        else:
            generator = torch.Generator(device=device).manual_seed(seed)
        inputs = {
            "prompt": "A painting of a squirrel eating a burger",
            "generator": generator,
            "num_inference_steps": 2,
            "guidance_scale": 6.0,
            "output_type": "numpy",
        }
        return inputs

    def test_inference_text2img(self):
        device = "cpu"  # ensure determinism for the device-dependent torch.Generator

        components = self.get_dummy_components()
        pipe = LDMTextToImagePipeline(**components)
        pipe.to(device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(device)
        image = pipe(**inputs).images
        image_slice = image[0, -3:, -3:, -1]

        assert image.shape == (1, 16, 16, 3)
        expected_slice = np.array([0.6101, 0.6156, 0.5622, 0.4895, 0.6661, 0.3804, 0.5748, 0.6136, 0.5014])

        assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3


@slow
@require_torch_gpu
class LDMTextToImagePipelineSlowTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def get_inputs(self, device, dtype=torch.float32, seed=0):
        generator = torch.manual_seed(seed)
        latents = np.random.RandomState(seed).standard_normal((1, 4, 32, 32))
        latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
        inputs = {
            "prompt": "A painting of a squirrel eating a burger",
            "latents": latents,
            "generator": generator,
            "num_inference_steps": 3,
            "guidance_scale": 6.0,
            "output_type": "numpy",
        }
        return inputs

    def test_ldm_default_ddim(self):
        pipe = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256").to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_inputs(torch_device)
        image = pipe(**inputs).images
        image_slice = image[0, -3:, -3:, -1].flatten()

        assert image.shape == (1, 256, 256, 3)
        expected_slice = np.array([0.51825, 0.52850, 0.52543, 0.54258, 0.52304, 0.52569, 0.54363, 0.55276, 0.56878])
        max_diff = np.abs(expected_slice - image_slice).max()
        assert max_diff < 1e-3


@nightly
@require_torch_gpu
class LDMTextToImagePipelineNightlyTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def get_inputs(self, device, dtype=torch.float32, seed=0):
        generator = torch.manual_seed(seed)
        latents = np.random.RandomState(seed).standard_normal((1, 4, 32, 32))
        latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
        inputs = {
            "prompt": "A painting of a squirrel eating a burger",
            "latents": latents,
            "generator": generator,
            "num_inference_steps": 50,
            "guidance_scale": 6.0,
            "output_type": "numpy",
        }
        return inputs

    def test_ldm_default_ddim(self):
        pipe = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256").to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_inputs(torch_device)
        image = pipe(**inputs).images[0]

        expected_image = load_numpy(
            "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/ldm_text2img/ldm_large_256_ddim.npy"
        )
        max_diff = np.abs(expected_image - image).max()
        assert max_diff < 1e-3
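For context, the pipeline these tests exercise is driven the same way outside the test harness. A minimal sketch, assuming the diffusers package and the public CompVis/ldm-text2im-large-256 checkpoint referenced above are available:

import torch
from diffusers import LDMTextToImagePipeline

pipe = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256")
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

generator = torch.manual_seed(0)  # fixed seed, as in get_inputs() above
image = pipe(
    "A painting of a squirrel eating a burger",
    num_inference_steps=50,
    guidance_scale=6.0,
    generator=generator,
).images[0]
image.save("squirrel.png")  # output filename is illustrative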
spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py
DELETED
@@ -1,13 +0,0 @@
_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py'
model = dict(
    pretrained='open-mmlab://resnext101_32x4d',
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=32,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        style='pytorch'))
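This config relies on mmdet's _base_ inheritance: only the backbone keys that differ from the ResNet-50 parent are restated, and Config.fromfile merges them over the parent dict. A sketch of how the merged config would be loaded (the path is assumed to be relative to an mmdetection checkout):

from mmcv import Config

cfg = Config.fromfile(
    'configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py')
# Keys set in this file override the _base_ file; everything else is inherited.
print(cfg.model.backbone.type)   # 'ResNeXt'
print(cfg.model.backbone.depth)  # 101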
spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_80k_cityscapes.py
DELETED
@@ -1,39 +0,0 @@
_base_ = './ocrnet_hr18_512x1024_80k_cityscapes.py'
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    pretrained='open-mmlab://msra/hrnetv2_w48',
    backbone=dict(
        extra=dict(
            stage2=dict(num_channels=(48, 96)),
            stage3=dict(num_channels=(48, 96, 192)),
            stage4=dict(num_channels=(48, 96, 192, 384)))),
    decode_head=[
        dict(
            type='FCNHead',
            in_channels=[48, 96, 192, 384],
            channels=sum([48, 96, 192, 384]),
            input_transform='resize_concat',
            in_index=(0, 1, 2, 3),
            kernel_size=1,
            num_convs=1,
            norm_cfg=norm_cfg,
            concat_input=False,
            dropout_ratio=-1,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='OCRHead',
            in_channels=[48, 96, 192, 384],
            channels=512,
            ocr_channels=256,
            input_transform='resize_concat',
            in_index=(0, 1, 2, 3),
            norm_cfg=norm_cfg,
            dropout_ratio=-1,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
    ])
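The two entries in decode_head form a cascade: the FCNHead supplies the coarse soft object regions that the OCRHead refines, and their cross-entropy losses are summed with the 0.4 and 1.0 weights given above. A toy sketch of that weighting only (the tensor shapes are illustrative, not the real pipeline):

import torch
import torch.nn.functional as F

logits_fcn = torch.randn(2, 19, 64, 128, requires_grad=True)  # auxiliary head
logits_ocr = torch.randn(2, 19, 64, 128, requires_grad=True)  # refined head
target = torch.randint(0, 19, (2, 64, 128))

loss = 0.4 * F.cross_entropy(logits_fcn, target) \
     + 1.0 * F.cross_entropy(logits_ocr, target)
loss.backward()  # both heads receive gradient, scaled by their weights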
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/io.py
DELETED
@@ -1,151 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
from io import BytesIO, StringIO
from pathlib import Path

from ..utils import is_list_of, is_str
from .file_client import FileClient
from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler

file_handlers = {
    'json': JsonHandler(),
    'yaml': YamlHandler(),
    'yml': YamlHandler(),
    'pickle': PickleHandler(),
    'pkl': PickleHandler()
}


def load(file, file_format=None, file_client_args=None, **kwargs):
    """Load data from json/yaml/pickle files.

    This method provides a unified api for loading data from serialized files.

    Note:
        In v1.3.16 and later, ``load`` supports loading data from serialized
        files those can be storaged in different backends.

    Args:
        file (str or :obj:`Path` or file-like object): Filename or a file-like
            object.
        file_format (str, optional): If not specified, the file format will be
            inferred from the file extension, otherwise use the specified one.
            Currently supported formats include "json", "yaml/yml" and
            "pickle/pkl".
        file_client_args (dict, optional): Arguments to instantiate a
            FileClient. See :class:`mmcv.fileio.FileClient` for details.
            Default: None.

    Examples:
        >>> load('/path/of/your/file')  # file is storaged in disk
        >>> load('https://path/of/your/file')  # file is storaged in Internet
        >>> load('s3://path/of/your/file')  # file is storaged in petrel

    Returns:
        The content from the file.
    """
    if isinstance(file, Path):
        file = str(file)
    if file_format is None and is_str(file):
        file_format = file.split('.')[-1]
    if file_format not in file_handlers:
        raise TypeError(f'Unsupported format: {file_format}')

    handler = file_handlers[file_format]
    if is_str(file):
        file_client = FileClient.infer_client(file_client_args, file)
        if handler.str_like:
            with StringIO(file_client.get_text(file)) as f:
                obj = handler.load_from_fileobj(f, **kwargs)
        else:
            with BytesIO(file_client.get(file)) as f:
                obj = handler.load_from_fileobj(f, **kwargs)
    elif hasattr(file, 'read'):
        obj = handler.load_from_fileobj(file, **kwargs)
    else:
        raise TypeError('"file" must be a filepath str or a file-object')
    return obj


def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs):
    """Dump data to json/yaml/pickle strings or files.

    This method provides a unified api for dumping data as strings or to files,
    and also supports custom arguments for each file format.

    Note:
        In v1.3.16 and later, ``dump`` supports dumping data as strings or to
        files which is saved to different backends.

    Args:
        obj (any): The python object to be dumped.
        file (str or :obj:`Path` or file-like object, optional): If not
            specified, then the object is dumped to a str, otherwise to a file
            specified by the filename or file-like object.
        file_format (str, optional): Same as :func:`load`.
        file_client_args (dict, optional): Arguments to instantiate a
            FileClient. See :class:`mmcv.fileio.FileClient` for details.
            Default: None.

    Examples:
        >>> dump('hello world', '/path/of/your/file')  # disk
        >>> dump('hello world', 's3://path/of/your/file')  # ceph or petrel

    Returns:
        bool: True for success, False otherwise.
    """
    if isinstance(file, Path):
        file = str(file)
    if file_format is None:
        if is_str(file):
            file_format = file.split('.')[-1]
        elif file is None:
            raise ValueError(
                'file_format must be specified since file is None')
    if file_format not in file_handlers:
        raise TypeError(f'Unsupported format: {file_format}')

    handler = file_handlers[file_format]
    if file is None:
        return handler.dump_to_str(obj, **kwargs)
    elif is_str(file):
        file_client = FileClient.infer_client(file_client_args, file)
        if handler.str_like:
            with StringIO() as f:
                handler.dump_to_fileobj(obj, f, **kwargs)
                file_client.put_text(f.getvalue(), file)
        else:
            with BytesIO() as f:
                handler.dump_to_fileobj(obj, f, **kwargs)
                file_client.put(f.getvalue(), file)
    elif hasattr(file, 'write'):
        handler.dump_to_fileobj(obj, file, **kwargs)
    else:
        raise TypeError('"file" must be a filename str or a file-object')


def _register_handler(handler, file_formats):
    """Register a handler for some file extensions.

    Args:
        handler (:obj:`BaseFileHandler`): Handler to be registered.
        file_formats (str or list[str]): File formats to be handled by this
            handler.
    """
    if not isinstance(handler, BaseFileHandler):
        raise TypeError(
            f'handler must be a child of BaseFileHandler, not {type(handler)}')
    if isinstance(file_formats, str):
        file_formats = [file_formats]
    if not is_list_of(file_formats, str):
        raise TypeError('file_formats must be a str or a list of str')
    for ext in file_formats:
        file_handlers[ext] = handler


def register_handler(file_formats, **kwargs):

    def wrap(cls):
        _register_handler(cls(**kwargs), file_formats)
        return cls

    return wrap
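The register_handler decorator at the bottom is how new formats plug into the file_handlers table. A minimal sketch of registering a TOML handler (the TomlHandler class is hypothetical and shown only to illustrate the hook; it assumes the third-party toml package is installed):

import mmcv
from mmcv.fileio.handlers import BaseFileHandler

@mmcv.fileio.register_handler('toml')  # extension this handler will serve
class TomlHandler(BaseFileHandler):
    str_like = True  # handler works on text streams, so StringIO is used

    def load_from_fileobj(self, file, **kwargs):
        import toml
        return toml.load(file)

    def dump_to_fileobj(self, obj, file, **kwargs):
        import toml
        toml.dump(obj, file)

    def dump_to_str(self, obj, **kwargs):
        import toml
        return toml.dumps(obj)

data = mmcv.load('config.toml')  # now routed through TomlHandler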
spaces/ArkanDash/rvc-models/infer_pack/attentions.py
DELETED
@@ -1,417 +0,0 @@
import copy
import math
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F

from infer_pack import commons
from infer_pack import modules
from infer_pack.modules import LayerNorm


class Encoder(nn.Module):
    def __init__(
        self,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size=1,
        p_dropout=0.0,
        window_size=10,
        **kwargs
    ):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.window_size = window_size

        self.drop = nn.Dropout(p_dropout)
        self.attn_layers = nn.ModuleList()
        self.norm_layers_1 = nn.ModuleList()
        self.ffn_layers = nn.ModuleList()
        self.norm_layers_2 = nn.ModuleList()
        for i in range(self.n_layers):
            self.attn_layers.append(
                MultiHeadAttention(
                    hidden_channels,
                    hidden_channels,
                    n_heads,
                    p_dropout=p_dropout,
                    window_size=window_size,
                )
            )
            self.norm_layers_1.append(LayerNorm(hidden_channels))
            self.ffn_layers.append(
                FFN(
                    hidden_channels,
                    hidden_channels,
                    filter_channels,
                    kernel_size,
                    p_dropout=p_dropout,
                )
            )
            self.norm_layers_2.append(LayerNorm(hidden_channels))

    def forward(self, x, x_mask):
        attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
        x = x * x_mask
        for i in range(self.n_layers):
            y = self.attn_layers[i](x, x, attn_mask)
            y = self.drop(y)
            x = self.norm_layers_1[i](x + y)

            y = self.ffn_layers[i](x, x_mask)
            y = self.drop(y)
            x = self.norm_layers_2[i](x + y)
        x = x * x_mask
        return x


class Decoder(nn.Module):
    def __init__(
        self,
        hidden_channels,
        filter_channels,
        n_heads,
        n_layers,
        kernel_size=1,
        p_dropout=0.0,
        proximal_bias=False,
        proximal_init=True,
        **kwargs
    ):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.proximal_bias = proximal_bias
        self.proximal_init = proximal_init

        self.drop = nn.Dropout(p_dropout)
        self.self_attn_layers = nn.ModuleList()
        self.norm_layers_0 = nn.ModuleList()
        self.encdec_attn_layers = nn.ModuleList()
        self.norm_layers_1 = nn.ModuleList()
        self.ffn_layers = nn.ModuleList()
        self.norm_layers_2 = nn.ModuleList()
        for i in range(self.n_layers):
            self.self_attn_layers.append(
                MultiHeadAttention(
                    hidden_channels,
                    hidden_channels,
                    n_heads,
                    p_dropout=p_dropout,
                    proximal_bias=proximal_bias,
                    proximal_init=proximal_init,
                )
            )
            self.norm_layers_0.append(LayerNorm(hidden_channels))
            self.encdec_attn_layers.append(
                MultiHeadAttention(
                    hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
                )
            )
            self.norm_layers_1.append(LayerNorm(hidden_channels))
            self.ffn_layers.append(
                FFN(
                    hidden_channels,
                    hidden_channels,
                    filter_channels,
                    kernel_size,
                    p_dropout=p_dropout,
                    causal=True,
                )
            )
            self.norm_layers_2.append(LayerNorm(hidden_channels))

    def forward(self, x, x_mask, h, h_mask):
        """
        x: decoder input
        h: encoder output
        """
        self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
            device=x.device, dtype=x.dtype
        )
        encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
        x = x * x_mask
        for i in range(self.n_layers):
            y = self.self_attn_layers[i](x, x, self_attn_mask)
            y = self.drop(y)
            x = self.norm_layers_0[i](x + y)

            y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
            y = self.drop(y)
            x = self.norm_layers_1[i](x + y)

            y = self.ffn_layers[i](x, x_mask)
            y = self.drop(y)
            x = self.norm_layers_2[i](x + y)
        x = x * x_mask
        return x


class MultiHeadAttention(nn.Module):
    def __init__(
        self,
        channels,
        out_channels,
        n_heads,
        p_dropout=0.0,
        window_size=None,
        heads_share=True,
        block_length=None,
        proximal_bias=False,
        proximal_init=False,
    ):
        super().__init__()
        assert channels % n_heads == 0

        self.channels = channels
        self.out_channels = out_channels
        self.n_heads = n_heads
        self.p_dropout = p_dropout
        self.window_size = window_size
        self.heads_share = heads_share
        self.block_length = block_length
        self.proximal_bias = proximal_bias
        self.proximal_init = proximal_init
        self.attn = None

        self.k_channels = channels // n_heads
        self.conv_q = nn.Conv1d(channels, channels, 1)
        self.conv_k = nn.Conv1d(channels, channels, 1)
        self.conv_v = nn.Conv1d(channels, channels, 1)
        self.conv_o = nn.Conv1d(channels, out_channels, 1)
        self.drop = nn.Dropout(p_dropout)

        if window_size is not None:
            n_heads_rel = 1 if heads_share else n_heads
            rel_stddev = self.k_channels**-0.5
            self.emb_rel_k = nn.Parameter(
                torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
                * rel_stddev
            )
            self.emb_rel_v = nn.Parameter(
                torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
                * rel_stddev
            )

        nn.init.xavier_uniform_(self.conv_q.weight)
        nn.init.xavier_uniform_(self.conv_k.weight)
        nn.init.xavier_uniform_(self.conv_v.weight)
        if proximal_init:
            with torch.no_grad():
                self.conv_k.weight.copy_(self.conv_q.weight)
                self.conv_k.bias.copy_(self.conv_q.bias)

    def forward(self, x, c, attn_mask=None):
        q = self.conv_q(x)
        k = self.conv_k(c)
        v = self.conv_v(c)

        x, self.attn = self.attention(q, k, v, mask=attn_mask)

        x = self.conv_o(x)
        return x

    def attention(self, query, key, value, mask=None):
        # reshape [b, d, t] -> [b, n_h, t, d_k]
        b, d, t_s, t_t = (*key.size(), query.size(2))
        query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
        key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
        value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)

        scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
        if self.window_size is not None:
            assert (
                t_s == t_t
            ), "Relative attention is only available for self-attention."
            key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
            rel_logits = self._matmul_with_relative_keys(
                query / math.sqrt(self.k_channels), key_relative_embeddings
            )
            scores_local = self._relative_position_to_absolute_position(rel_logits)
            scores = scores + scores_local
        if self.proximal_bias:
            assert t_s == t_t, "Proximal bias is only available for self-attention."
            scores = scores + self._attention_bias_proximal(t_s).to(
                device=scores.device, dtype=scores.dtype
            )
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e4)
            if self.block_length is not None:
                assert (
                    t_s == t_t
                ), "Local attention is only available for self-attention."
                block_mask = (
                    torch.ones_like(scores)
                    .triu(-self.block_length)
                    .tril(self.block_length)
                )
                scores = scores.masked_fill(block_mask == 0, -1e4)
        p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
        p_attn = self.drop(p_attn)
        output = torch.matmul(p_attn, value)
        if self.window_size is not None:
            relative_weights = self._absolute_position_to_relative_position(p_attn)
            value_relative_embeddings = self._get_relative_embeddings(
                self.emb_rel_v, t_s
            )
            output = output + self._matmul_with_relative_values(
                relative_weights, value_relative_embeddings
            )
        output = (
            output.transpose(2, 3).contiguous().view(b, d, t_t)
        )  # [b, n_h, t_t, d_k] -> [b, d, t_t]
        return output, p_attn

    def _matmul_with_relative_values(self, x, y):
        """
        x: [b, h, l, m]
        y: [h or 1, m, d]
        ret: [b, h, l, d]
        """
        ret = torch.matmul(x, y.unsqueeze(0))
        return ret

    def _matmul_with_relative_keys(self, x, y):
        """
        x: [b, h, l, d]
        y: [h or 1, m, d]
        ret: [b, h, l, m]
        """
        ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
        return ret

    def _get_relative_embeddings(self, relative_embeddings, length):
        max_relative_position = 2 * self.window_size + 1
        # Pad first before slice to avoid using cond ops.
        pad_length = max(length - (self.window_size + 1), 0)
        slice_start_position = max((self.window_size + 1) - length, 0)
        slice_end_position = slice_start_position + 2 * length - 1
        if pad_length > 0:
            padded_relative_embeddings = F.pad(
                relative_embeddings,
                commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
            )
        else:
            padded_relative_embeddings = relative_embeddings
        used_relative_embeddings = padded_relative_embeddings[
            :, slice_start_position:slice_end_position
        ]
        return used_relative_embeddings

    def _relative_position_to_absolute_position(self, x):
        """
        x: [b, h, l, 2*l-1]
        ret: [b, h, l, l]
        """
        batch, heads, length, _ = x.size()
        # Concat columns of pad to shift from relative to absolute indexing.
        x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))

        # Concat extra elements so to add up to shape (len+1, 2*len-1).
        x_flat = x.view([batch, heads, length * 2 * length])
        x_flat = F.pad(
            x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
        )

        # Reshape and slice out the padded elements.
        x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
            :, :, :length, length - 1 :
        ]
        return x_final

    def _absolute_position_to_relative_position(self, x):
        """
        x: [b, h, l, l]
        ret: [b, h, l, 2*l-1]
        """
        batch, heads, length, _ = x.size()
        # padd along column
        x = F.pad(
            x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
        )
        x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
        # add 0's in the beginning that will skew the elements after reshape
        x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
        x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
        return x_final

    def _attention_bias_proximal(self, length):
        """Bias for self-attention to encourage attention to close positions.
        Args:
            length: an integer scalar.
        Returns:
            a Tensor with shape [1, 1, length, length]
        """
        r = torch.arange(length, dtype=torch.float32)
        diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
        return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)


class FFN(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        filter_channels,
        kernel_size,
        p_dropout=0.0,
        activation=None,
        causal=False,
    ):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.activation = activation
        self.causal = causal

        if causal:
            self.padding = self._causal_padding
        else:
            self.padding = self._same_padding

        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
        self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
        self.drop = nn.Dropout(p_dropout)

    def forward(self, x, x_mask):
        x = self.conv_1(self.padding(x * x_mask))
        if self.activation == "gelu":
            x = x * torch.sigmoid(1.702 * x)
        else:
            x = torch.relu(x)
        x = self.drop(x)
        x = self.conv_2(self.padding(x * x_mask))
        return x * x_mask

    def _causal_padding(self, x):
        if self.kernel_size == 1:
            return x
        pad_l = self.kernel_size - 1
        pad_r = 0
        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
        x = F.pad(x, commons.convert_pad_shape(padding))
        return x

    def _same_padding(self, x):
        if self.kernel_size == 1:
            return x
        pad_l = (self.kernel_size - 1) // 2
        pad_r = self.kernel_size // 2
        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
        x = F.pad(x, commons.convert_pad_shape(padding))
        return x
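As a usage sketch: the Encoder above consumes [batch, channels, time] features plus a [batch, 1, time] validity mask, which is how the VITS-style models in this repo drive it. The import assumes infer_pack is on the path; the shapes and hyperparameters are illustrative:

import torch
from infer_pack.attentions import Encoder

enc = Encoder(
    hidden_channels=192,
    filter_channels=768,
    n_heads=2,
    n_layers=6,
    kernel_size=3,
    p_dropout=0.1,
)
x = torch.randn(1, 192, 100)    # [batch, hidden_channels, time]
x_mask = torch.ones(1, 1, 100)  # 1 = valid frame, 0 = padding
out = enc(x, x_mask)            # output keeps the input shape
print(out.shape)                # torch.Size([1, 192, 100])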
spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_video_text2video.py
DELETED
@@ -1,158 +0,0 @@
import gradio as gr
import numpy as np
import torch

from video_diffusion.stable_diffusion_video.stable_diffusion_pipeline import StableDiffusionWalkPipeline
from video_diffusion.utils.model_list import stable_model_list


class StableDiffusionText2VideoGenerator:
    def __init__(self):
        self.pipe = None

    def load_model(
        self,
        model_path,
    ):
        if self.pipe is None:
            self.pipe = StableDiffusionWalkPipeline.from_pretrained(
                model_path,
                torch_dtype=torch.float16,
                revision="fp16",
            )

        self.pipe.to("cuda")
        self.pipe.enable_xformers_memory_efficient_attention()
        self.pipe.enable_attention_slicing()

        return self.pipe

    def generate_video(
        self,
        model_path: str,
        first_prompts: str,
        second_prompts: str,
        negative_prompt: str,
        num_interpolation_steps: int,
        guidance_scale: int,
        num_inference_step: int,
        height: int,
        width: int,
        upsample: bool,
        fps=int,
    ):
        first_seed = np.random.randint(0, 100000)
        second_seed = np.random.randint(0, 100000)
        seeds = [first_seed, second_seed]
        prompts = [first_prompts, second_prompts]
        pipe = self.load_model(model_path=model_path)

        output_video = pipe.walk(
            prompts=prompts,
            num_interpolation_steps=int(num_interpolation_steps),
            height=height,
            width=width,
            guidance_scale=guidance_scale,
            num_inference_steps=num_inference_step,
            negative_prompt=negative_prompt,
            seeds=seeds,
            upsample=upsample,
            fps=fps,
        )

        return output_video

    def app():
        with gr.Blocks():
            with gr.Row():
                with gr.Column():
                    stable_text2video_first_prompt = gr.Textbox(
                        lines=1,
                        placeholder="First Prompt",
                        show_label=False,
                    )
                    stable_text2video_second_prompt = gr.Textbox(
                        lines=1,
                        placeholder="Second Prompt",
                        show_label=False,
                    )
                    stable_text2video_negative_prompt = gr.Textbox(
                        lines=1,
                        placeholder="Negative Prompt ",
                        show_label=False,
                    )
                    with gr.Row():
                        with gr.Column():
                            stable_text2video_model_path = gr.Dropdown(
                                choices=stable_model_list,
                                label="Stable Model List",
                                value=stable_model_list[0],
                            )
                            stable_text2video_guidance_scale = gr.Slider(
                                minimum=0,
                                maximum=15,
                                step=1,
                                value=8.5,
                                label="Guidance Scale",
                            )
                            stable_text2video_num_inference_steps = gr.Slider(
                                minimum=1,
                                maximum=100,
                                step=1,
                                value=30,
                                label="Number of Inference Steps",
                            )
                            stable_text2video_fps = gr.Slider(
                                minimum=1,
                                maximum=60,
                                step=1,
                                value=10,
                                label="Fps",
                            )
                        with gr.Row():
                            with gr.Column():
                                stable_text2video_num_interpolation_steps = gr.Number(
                                    value=10,
                                    label="Number of Interpolation Steps",
                                )
                                stable_text2video_height = gr.Slider(
                                    minimum=1,
                                    maximum=1000,
                                    step=1,
                                    value=512,
                                    label="Height",
                                )
                                stable_text2video_width = gr.Slider(
                                    minimum=1,
                                    maximum=1000,
                                    step=1,
                                    value=512,
                                    label="Width",
                                )
                                stable_text2video_upsample = gr.Checkbox(
                                    label="Upsample",
                                    default=False,
                                )

                    text2video_generate = gr.Button(value="Generator")

                with gr.Column():
                    text2video_output = gr.Video(label="Output")

            text2video_generate.click(
                fn=StableDiffusionText2VideoGenerator().generate_video,
                inputs=[
                    stable_text2video_model_path,
                    stable_text2video_first_prompt,
                    stable_text2video_second_prompt,
                    stable_text2video_negative_prompt,
                    stable_text2video_num_interpolation_steps,
                    stable_text2video_guidance_scale,
                    stable_text2video_num_inference_steps,
                    stable_text2video_height,
                    stable_text2video_width,
                    stable_text2video_upsample,
                    stable_text2video_fps,
                ],
                outputs=text2video_output,
            )
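Outside the Gradio UI, the generator can be driven directly. A sketch, assuming a CUDA device and that "runwayml/stable-diffusion-v1-5" is one of the entries in stable_model_list (the checkpoint name is an assumption, not confirmed by the file):

from video_diffusion.stable_diffusion_video.stable_video_text2video import (
    StableDiffusionText2VideoGenerator,
)

gen = StableDiffusionText2VideoGenerator()
video = gen.generate_video(
    model_path="runwayml/stable-diffusion-v1-5",  # assumed stable_model_list entry
    first_prompts="a sunrise over the ocean",
    second_prompts="a starry night sky",
    negative_prompt="blurry, low quality",
    num_interpolation_steps=10,
    guidance_scale=8.5,
    num_inference_step=30,
    height=512,
    width=512,
    upsample=False,
    fps=10,
)
print(video)  # pipe.walk() returns the rendered video artifact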
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/self_outdated_check.py
DELETED
@@ -1,242 +0,0 @@
import datetime
import functools
import hashlib
import json
import logging
import optparse
import os.path
import sys
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional

from pip._vendor.packaging.version import parse as parse_version
from pip._vendor.rich.console import Group
from pip._vendor.rich.markup import escape
from pip._vendor.rich.text import Text

from pip._internal.index.collector import LinkCollector
from pip._internal.index.package_finder import PackageFinder
from pip._internal.metadata import get_default_environment
from pip._internal.metadata.base import DistributionVersion
from pip._internal.models.selection_prefs import SelectionPreferences
from pip._internal.network.session import PipSession
from pip._internal.utils.compat import WINDOWS
from pip._internal.utils.entrypoints import (
    get_best_invocation_for_this_pip,
    get_best_invocation_for_this_python,
)
from pip._internal.utils.filesystem import adjacent_tmp_file, check_path_owner, replace
from pip._internal.utils.misc import ensure_dir

_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"


logger = logging.getLogger(__name__)


def _get_statefile_name(key: str) -> str:
    key_bytes = key.encode()
    name = hashlib.sha224(key_bytes).hexdigest()
    return name


class SelfCheckState:
    def __init__(self, cache_dir: str) -> None:
        self._state: Dict[str, Any] = {}
        self._statefile_path = None

        # Try to load the existing state
        if cache_dir:
            self._statefile_path = os.path.join(
                cache_dir, "selfcheck", _get_statefile_name(self.key)
            )
            try:
                with open(self._statefile_path, encoding="utf-8") as statefile:
                    self._state = json.load(statefile)
            except (OSError, ValueError, KeyError):
                # Explicitly suppressing exceptions, since we don't want to
                # error out if the cache file is invalid.
                pass

    @property
    def key(self) -> str:
        return sys.prefix

    def get(self, current_time: datetime.datetime) -> Optional[str]:
        """Check if we have a not-outdated version loaded already."""
        if not self._state:
            return None

        if "last_check" not in self._state:
            return None

        if "pypi_version" not in self._state:
            return None

        seven_days_in_seconds = 7 * 24 * 60 * 60

        # Determine if we need to refresh the state
        last_check = datetime.datetime.strptime(self._state["last_check"], _DATE_FMT)
        seconds_since_last_check = (current_time - last_check).total_seconds()
        if seconds_since_last_check > seven_days_in_seconds:
            return None

        return self._state["pypi_version"]

    def set(self, pypi_version: str, current_time: datetime.datetime) -> None:
        # If we do not have a path to cache in, don't bother saving.
        if not self._statefile_path:
            return

        # Check to make sure that we own the directory
        if not check_path_owner(os.path.dirname(self._statefile_path)):
            return

        # Now that we've ensured the directory is owned by this user, we'll go
        # ahead and make sure that all our directories are created.
        ensure_dir(os.path.dirname(self._statefile_path))

        state = {
            # Include the key so it's easy to tell which pip wrote the
            # file.
            "key": self.key,
            "last_check": current_time.strftime(_DATE_FMT),
            "pypi_version": pypi_version,
        }

        text = json.dumps(state, sort_keys=True, separators=(",", ":"))

        with adjacent_tmp_file(self._statefile_path) as f:
            f.write(text.encode())

        try:
            # Since we have a prefix-specific state file, we can just
            # overwrite whatever is there, no need to check.
            replace(f.name, self._statefile_path)
        except OSError:
            # Best effort.
            pass


@dataclass
class UpgradePrompt:
    old: str
    new: str

    def __rich__(self) -> Group:
        if WINDOWS:
            pip_cmd = f"{get_best_invocation_for_this_python()} -m pip"
        else:
            pip_cmd = get_best_invocation_for_this_pip()

        notice = "[bold][[reset][blue]notice[reset][bold]][reset]"
        return Group(
            Text(),
            Text.from_markup(
                f"{notice} A new release of pip is available: "
                f"[red]{self.old}[reset] -> [green]{self.new}[reset]"
            ),
            Text.from_markup(
                f"{notice} To update, run: "
                f"[green]{escape(pip_cmd)} install --upgrade pip"
            ),
        )


def was_installed_by_pip(pkg: str) -> bool:
    """Checks whether pkg was installed by pip

    This is used not to display the upgrade message when pip is in fact
    installed by system package manager, such as dnf on Fedora.
    """
    dist = get_default_environment().get_distribution(pkg)
    return dist is not None and "pip" == dist.installer


def _get_current_remote_pip_version(
    session: PipSession, options: optparse.Values
) -> Optional[str]:
    # Lets use PackageFinder to see what the latest pip version is
    link_collector = LinkCollector.create(
        session,
        options=options,
        suppress_no_index=True,
    )

    # Pass allow_yanked=False so we don't suggest upgrading to a
    # yanked version.
    selection_prefs = SelectionPreferences(
        allow_yanked=False,
        allow_all_prereleases=False,  # Explicitly set to False
    )

    finder = PackageFinder.create(
        link_collector=link_collector,
        selection_prefs=selection_prefs,
    )
    best_candidate = finder.find_best_candidate("pip").best_candidate
    if best_candidate is None:
        return None

    return str(best_candidate.version)


def _self_version_check_logic(
    *,
    state: SelfCheckState,
    current_time: datetime.datetime,
    local_version: DistributionVersion,
    get_remote_version: Callable[[], Optional[str]],
) -> Optional[UpgradePrompt]:
    remote_version_str = state.get(current_time)
    if remote_version_str is None:
        remote_version_str = get_remote_version()
        if remote_version_str is None:
            logger.debug("No remote pip version found")
            return None
        state.set(remote_version_str, current_time)

    remote_version = parse_version(remote_version_str)
    logger.debug("Remote version of pip: %s", remote_version)
    logger.debug("Local version of pip: %s", local_version)

    pip_installed_by_pip = was_installed_by_pip("pip")
    logger.debug("Was pip installed by pip? %s", pip_installed_by_pip)
    if not pip_installed_by_pip:
        return None  # Only suggest upgrade if pip is installed by pip.

    local_version_is_older = (
        local_version < remote_version
        and local_version.base_version != remote_version.base_version
    )
    if local_version_is_older:
        return UpgradePrompt(old=str(local_version), new=remote_version_str)

    return None


def pip_self_version_check(session: PipSession, options: optparse.Values) -> None:
    """Check for an update for pip.

    Limit the frequency of checks to once per week. State is stored either in
    the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix
    of the pip script path.
    """
    installed_dist = get_default_environment().get_distribution("pip")
    if not installed_dist:
        return

    try:
        upgrade_prompt = _self_version_check_logic(
            state=SelfCheckState(cache_dir=options.cache_dir),
            current_time=datetime.datetime.utcnow(),
            local_version=installed_dist.version,
            get_remote_version=functools.partial(
                _get_current_remote_pip_version, session, options
            ),
        )
        if upgrade_prompt is not None:
            logger.warning("[present-rich] %s", upgrade_prompt)
    except Exception:
        logger.warning("There was an error checking the latest version of pip.")
        logger.debug("See below for error", exc_info=True)
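The throttling in SelfCheckState.get is just a timestamp comparison against a seven-day window. A self-contained sketch of that check, mirroring the logic above with illustrative data:

import datetime

_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
SEVEN_DAYS = 7 * 24 * 60 * 60

def cached_version_still_fresh(state: dict, now: datetime.datetime) -> bool:
    """Mirror of the freshness test in SelfCheckState.get."""
    if "last_check" not in state or "pypi_version" not in state:
        return False
    last_check = datetime.datetime.strptime(state["last_check"], _DATE_FMT)
    return (now - last_check).total_seconds() <= SEVEN_DAYS

state = {"last_check": "2023-05-01T00:00:00Z", "pypi_version": "23.1.2"}
print(cached_version_still_fresh(state, datetime.datetime(2023, 5, 5)))  # True
print(cached_version_still_fresh(state, datetime.datetime(2023, 6, 1)))  # False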
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/cli/__init__.py
DELETED
File without changes
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/deformable/deform_conv.h
DELETED
@@ -1,377 +0,0 @@
|
|
1 |
-
// Copyright (c) Facebook, Inc. and its affiliates.
|
2 |
-
#pragma once
|
3 |
-
#include <torch/types.h>
|
4 |
-
|
5 |
-
namespace detectron2 {
|
6 |
-
|
7 |
-
#if defined(WITH_CUDA) || defined(WITH_HIP)
|
8 |
-
int deform_conv_forward_cuda(
|
9 |
-
at::Tensor input,
|
10 |
-
at::Tensor weight,
|
11 |
-
at::Tensor offset,
|
12 |
-
at::Tensor output,
|
13 |
-
at::Tensor columns,
|
14 |
-
at::Tensor ones,
|
15 |
-
int kW,
|
16 |
-
int kH,
|
17 |
-
int dW,
|
18 |
-
int dH,
|
19 |
-
int padW,
|
20 |
-
int padH,
|
21 |
-
int dilationW,
|
22 |
-
int dilationH,
|
23 |
-
int group,
|
24 |
-
int deformable_group,
|
25 |
-
int im2col_step);
|
26 |
-
|
27 |
-
int deform_conv_backward_input_cuda(
|
28 |
-
at::Tensor input,
|
29 |
-
at::Tensor offset,
|
30 |
-
at::Tensor gradOutput,
|
31 |
-
at::Tensor gradInput,
|
32 |
-
at::Tensor gradOffset,
|
33 |
-
at::Tensor weight,
|
34 |
-
at::Tensor columns,
|
35 |
-
int kW,
|
36 |
-
int kH,
|
37 |
-
int dW,
|
38 |
-
int dH,
|
39 |
-
int padW,
|
40 |
-
int padH,
|
41 |
-
int dilationW,
|
42 |
-
int dilationH,
|
43 |
-
int group,
|
44 |
-
int deformable_group,
|
45 |
-
int im2col_step);
|
46 |
-
|
47 |
-
int deform_conv_backward_parameters_cuda(
|
48 |
-
at::Tensor input,
|
49 |
-
at::Tensor offset,
|
50 |
-
at::Tensor gradOutput,
|
51 |
-
at::Tensor gradWeight, // at::Tensor gradBias,
|
52 |
-
at::Tensor columns,
|
53 |
-
at::Tensor ones,
|
54 |
-
int kW,
|
55 |
-
int kH,
|
56 |
-
int dW,
|
57 |
-
int dH,
|
58 |
-
int padW,
|
59 |
-
int padH,
|
60 |
-
int dilationW,
|
61 |
-
int dilationH,
|
62 |
-
int group,
|
63 |
-
int deformable_group,
|
64 |
-
float scale,
|
65 |
-
int im2col_step);
|
66 |
-
|
67 |
-
void modulated_deform_conv_cuda_forward(
|
68 |
-
at::Tensor input,
|
69 |
-
at::Tensor weight,
|
70 |
-
at::Tensor bias,
|
71 |
-
at::Tensor ones,
|
72 |
-
at::Tensor offset,
|
73 |
-
at::Tensor mask,
|
74 |
-
at::Tensor output,
|
75 |
-
at::Tensor columns,
|
76 |
-
int kernel_h,
|
77 |
-
int kernel_w,
|
78 |
-
const int stride_h,
|
79 |
-
const int stride_w,
|
80 |
-
const int pad_h,
|
81 |
-
const int pad_w,
|
82 |
-
const int dilation_h,
|
83 |
-
const int dilation_w,
|
84 |
-
const int group,
|
85 |
-
const int deformable_group,
|
86 |
-
const bool with_bias);
|
87 |
-
|
88 |
-
void modulated_deform_conv_cuda_backward(
|
89 |
-
at::Tensor input,
|
90 |
-
at::Tensor weight,
|
91 |
-
at::Tensor bias,
|
92 |
-
at::Tensor ones,
|
93 |
-
at::Tensor offset,
|
94 |
-
at::Tensor mask,
|
95 |
-
at::Tensor columns,
|
96 |
-
at::Tensor grad_input,
|
97 |
-
at::Tensor grad_weight,
|
98 |
-
at::Tensor grad_bias,
|
99 |
-
at::Tensor grad_offset,
|
100 |
-
at::Tensor grad_mask,
|
101 |
-
at::Tensor grad_output,
|
102 |
-
int kernel_h,
|
103 |
-
int kernel_w,
|
104 |
-
int stride_h,
|
105 |
-
int stride_w,
|
106 |
-
int pad_h,
|
107 |
-
int pad_w,
|
108 |
-
int dilation_h,
|
109 |
-
int dilation_w,
|
110 |
-
int group,
|
111 |
-
int deformable_group,
|
112 |
-
const bool with_bias);
|
113 |
-
|
114 |
-
#endif
|
115 |
-
|
116 |
-
inline int deform_conv_forward(
|
117 |
-
at::Tensor input,
|
118 |
-
at::Tensor weight,
|
119 |
-
at::Tensor offset,
|
120 |
-
at::Tensor output,
|
121 |
-
at::Tensor columns,
|
122 |
-
at::Tensor ones,
|
123 |
-
int kW,
|
124 |
-
int kH,
|
125 |
-
int dW,
|
126 |
-
int dH,
|
127 |
-
int padW,
|
128 |
-
int padH,
|
129 |
-
int dilationW,
|
130 |
-
int dilationH,
|
131 |
-
int group,
|
132 |
-
int deformable_group,
|
133 |
-
int im2col_step) {
|
134 |
-
if (input.is_cuda()) {
|
135 |
-
#if defined(WITH_CUDA) || defined(WITH_HIP)
|
136 |
-
TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
|
137 |
-
TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
|
138 |
-
return deform_conv_forward_cuda(
|
139 |
-
input,
|
140 |
-
weight,
|
141 |
-
offset,
|
142 |
-
output,
|
143 |
-
columns,
|
144 |
-
ones,
|
145 |
-
kW,
|
146 |
-
kH,
|
147 |
-
dW,
|
148 |
-
dH,
|
149 |
-
padW,
|
150 |
-
padH,
|
151 |
-
dilationW,
|
152 |
-
dilationH,
|
153 |
-
group,
|
154 |
-
deformable_group,
|
155 |
-
im2col_step);
|
156 |
-
#else
|
157 |
-
AT_ERROR("Detectron2 is not compiled with GPU support!");
|
158 |
-
#endif
|
159 |
-
}
|
160 |
-
AT_ERROR("This operator is not implemented on CPU");
|
161 |
-
}
|
162 |
-
|
163 |
-
inline int deform_conv_backward_input(
|
164 |
-
at::Tensor input,
|
165 |
-
at::Tensor offset,
|
166 |
-
at::Tensor gradOutput,
|
167 |
-
at::Tensor gradInput,
|
168 |
-
at::Tensor gradOffset,
|
169 |
-
at::Tensor weight,
|
170 |
-
at::Tensor columns,
|
171 |
-
int kW,
|
172 |
-
int kH,
|
173 |
-
int dW,
|
174 |
-
int dH,
|
175 |
-
int padW,
|
176 |
-
int padH,
|
177 |
-
int dilationW,
|
178 |
-
int dilationH,
|
179 |
-
int group,
|
180 |
-
int deformable_group,
|
181 |
-
int im2col_step) {
|
182 |
-
if (gradOutput.is_cuda()) {
|
183 |
-
#if defined(WITH_CUDA) || defined(WITH_HIP)
|
184 |
-
TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
|
185 |
-
TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
|
186 |
-
TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
|
187 |
-
return deform_conv_backward_input_cuda(
|
188 |
-
input,
|
189 |
-
offset,
|
190 |
-
gradOutput,
|
191 |
-
gradInput,
|
192 |
-
-        gradOffset,
-        weight,
-        columns,
-        kW,
-        kH,
-        dW,
-        dH,
-        padW,
-        padH,
-        dilationW,
-        dilationH,
-        group,
-        deformable_group,
-        im2col_step);
-#else
-    AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
-  }
-  AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline int deform_conv_backward_filter(
-    at::Tensor input,
-    at::Tensor offset,
-    at::Tensor gradOutput,
-    at::Tensor gradWeight, // at::Tensor gradBias,
-    at::Tensor columns,
-    at::Tensor ones,
-    int kW,
-    int kH,
-    int dW,
-    int dH,
-    int padW,
-    int padH,
-    int dilationW,
-    int dilationH,
-    int group,
-    int deformable_group,
-    float scale,
-    int im2col_step) {
-  if (gradOutput.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return deform_conv_backward_parameters_cuda(
-        input,
-        offset,
-        gradOutput,
-        gradWeight,
-        columns,
-        ones,
-        kW,
-        kH,
-        dW,
-        dH,
-        padW,
-        padH,
-        dilationW,
-        dilationH,
-        group,
-        deformable_group,
-        scale,
-        im2col_step);
-#else
-    AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
-  }
-  AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline void modulated_deform_conv_forward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor output,
-    at::Tensor columns,
-    int kernel_h,
-    int kernel_w,
-    const int stride_h,
-    const int stride_w,
-    const int pad_h,
-    const int pad_w,
-    const int dilation_h,
-    const int dilation_w,
-    const int group,
-    const int deformable_group,
-    const bool with_bias) {
-  if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return modulated_deform_conv_cuda_forward(
-        input,
-        weight,
-        bias,
-        ones,
-        offset,
-        mask,
-        output,
-        columns,
-        kernel_h,
-        kernel_w,
-        stride_h,
-        stride_w,
-        pad_h,
-        pad_w,
-        dilation_h,
-        dilation_w,
-        group,
-        deformable_group,
-        with_bias);
-#else
-    AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
-  }
-  AT_ERROR("This operator is not implemented on CPU");
-}
-
-inline void modulated_deform_conv_backward(
-    at::Tensor input,
-    at::Tensor weight,
-    at::Tensor bias,
-    at::Tensor ones,
-    at::Tensor offset,
-    at::Tensor mask,
-    at::Tensor columns,
-    at::Tensor grad_input,
-    at::Tensor grad_weight,
-    at::Tensor grad_bias,
-    at::Tensor grad_offset,
-    at::Tensor grad_mask,
-    at::Tensor grad_output,
-    int kernel_h,
-    int kernel_w,
-    int stride_h,
-    int stride_w,
-    int pad_h,
-    int pad_w,
-    int dilation_h,
-    int dilation_w,
-    int group,
-    int deformable_group,
-    const bool with_bias) {
-  if (grad_output.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-    TORCH_CHECK(input.is_cuda(), "input tensor is not on GPU!");
-    TORCH_CHECK(weight.is_cuda(), "weight tensor is not on GPU!");
-    TORCH_CHECK(bias.is_cuda(), "bias tensor is not on GPU!");
-    TORCH_CHECK(offset.is_cuda(), "offset tensor is not on GPU!");
-    return modulated_deform_conv_cuda_backward(
-        input,
-        weight,
-        bias,
-        ones,
-        offset,
-        mask,
-        columns,
-        grad_input,
-        grad_weight,
-        grad_bias,
-        grad_offset,
-        grad_mask,
-        grad_output,
-        kernel_h,
-        kernel_w,
-        stride_h,
-        stride_w,
-        pad_h,
-        pad_w,
-        dilation_h,
-        dilation_w,
-        group,
-        deformable_group,
-        with_bias);
-#else
-    AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
-  }
-  AT_ERROR("This operator is not implemented on CPU");
-}
-
-} // namespace detectron2
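
A note on the block above: these inline functions only validate device placement and dispatch to detectron2's own CUDA/HIP kernels. As a rough, hedged illustration of the operation itself (torchvision's public analogue, not the detectron2 binding deleted here), a minimal Python sketch:

import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 3, 8, 8)        # NCHW input
weight = torch.randn(6, 3, 3, 3)   # out_ch, in_ch, kH, kW
# offset carries 2 * kH * kW channels: one (dy, dx) pair per kernel tap
offset = torch.randn(1, 2 * 3 * 3, 6, 6)
out = deform_conv2d(x, offset, weight)
print(out.shape)  # torch.Size([1, 6, 6, 6])
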
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_scheduler.py
DELETED
@@ -1,68 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import math
-import numpy as np
-from unittest import TestCase
-import torch
-from fvcore.common.param_scheduler import CosineParamScheduler, MultiStepParamScheduler
-from torch import nn
-
-from detectron2.solver import LRMultiplier, WarmupParamScheduler
-
-
-class TestScheduler(TestCase):
-    def test_warmup_multistep(self):
-        p = nn.Parameter(torch.zeros(0))
-        opt = torch.optim.SGD([p], lr=5)
-
-        multiplier = WarmupParamScheduler(
-            MultiStepParamScheduler(
-                [1, 0.1, 0.01, 0.001],
-                milestones=[10, 15, 20],
-                num_updates=30,
-            ),
-            0.001,
-            5 / 30,
-        )
-        sched = LRMultiplier(opt, multiplier, 30)
-        # This is an equivalent of:
-        # sched = WarmupMultiStepLR(
-        #     opt, milestones=[10, 15, 20], gamma=0.1, warmup_factor=0.001, warmup_iters=5)
-
-        p.sum().backward()
-        opt.step()
-
-        lrs = [0.005]
-        for _ in range(30):
-            sched.step()
-            lrs.append(opt.param_groups[0]["lr"])
-        self.assertTrue(np.allclose(lrs[:5], [0.005, 1.004, 2.003, 3.002, 4.001]))
-        self.assertTrue(np.allclose(lrs[5:10], 5.0))
-        self.assertTrue(np.allclose(lrs[10:15], 0.5))
-        self.assertTrue(np.allclose(lrs[15:20], 0.05))
-        self.assertTrue(np.allclose(lrs[20:], 0.005))
-
-    def test_warmup_cosine(self):
-        p = nn.Parameter(torch.zeros(0))
-        opt = torch.optim.SGD([p], lr=5)
-        multiplier = WarmupParamScheduler(
-            CosineParamScheduler(1, 0),
-            0.001,
-            5 / 30,
-        )
-        sched = LRMultiplier(opt, multiplier, 30)
-
-        p.sum().backward()
-        opt.step()
-        self.assertEqual(opt.param_groups[0]["lr"], 0.005)
-        lrs = [0.005]
-
-        for _ in range(30):
-            sched.step()
-            lrs.append(opt.param_groups[0]["lr"])
-        for idx, lr in enumerate(lrs):
-            expected_cosine = 2.5 * (1.0 + math.cos(math.pi * idx / 30))
-            if idx >= 5:
-                self.assertAlmostEqual(lr, expected_cosine)
-            else:
-                self.assertNotAlmostEqual(lr, expected_cosine)
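
For reference, the first five learning rates asserted in test_warmup_multistep follow from linear warmup. A minimal sketch (assuming WarmupParamScheduler interpolates linearly from warmup_factor to 1 over the warmup span, as the test's own comment suggests):

base_lr, warmup_factor, warmup_iters = 5.0, 0.001, 5
for step in range(warmup_iters):
    alpha = step / warmup_iters
    factor = warmup_factor * (1 - alpha) + alpha
    print(round(base_lr * factor, 3))  # 0.005, 1.004, 2.003, 3.002, 4.001
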
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/__init__.py
DELETED
File without changes
spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/04_🔊_Upload_Audio_File.py
DELETED
@@ -1,205 +0,0 @@
-import whisper
-import streamlit as st
-from streamlit_lottie import st_lottie
-from utils import write_vtt, write_srt
-import ffmpeg
-import requests
-from typing import Iterator
-from io import StringIO
-import numpy as np
-import pathlib
-import os
-
-st.set_page_config(page_title="Auto Transcriber", page_icon="🔊", layout="wide")
-
-# Define a function that we can use to load lottie files from a link.
-@st.cache(allow_output_mutation=True)
-def load_lottieurl(url: str):
-    r = requests.get(url)
-    if r.status_code != 200:
-        return None
-    return r.json()
-
-
-APP_DIR = pathlib.Path(__file__).parent.absolute()
-
-LOCAL_DIR = APP_DIR / "local_audio"
-LOCAL_DIR.mkdir(exist_ok=True)
-save_dir = LOCAL_DIR / "output"
-save_dir.mkdir(exist_ok=True)
-
-
-col1, col2 = st.columns([1, 3])
-with col1:
-    lottie = load_lottieurl("https://assets1.lottiefiles.com/packages/lf20_1xbk4d2v.json")
-    st_lottie(lottie)
-
-with col2:
-    st.write("""
-    ## Auto Transcriber
-    ##### Input an audio file and get a transcript.
-    ###### ➠ If you want to transcribe the audio in its original language, select the task as "Transcribe"
-    ###### ➠ If you want to translate the transcription to English, select the task as "Translate"
-    ###### I recommend starting with the base model and then experimenting with the larger models, the small and medium models often work well. """)
-
-loaded_model = whisper.load_model("base")
-current_size = "None"
-
-
-@st.cache(allow_output_mutation=True)
-def change_model(current_size, size):
-    if current_size != size:
-        loaded_model = whisper.load_model(size)
-        return loaded_model
-    else:
-        raise Exception("Model size is the same as the current size.")
-
-@st.cache(allow_output_mutation=True)
-def inferecence(loaded_model, uploaded_file, task):
-    with open(f"{save_dir}/input.mp3", "wb") as f:
-        f.write(uploaded_file.read())
-    audio = ffmpeg.input(f"{save_dir}/input.mp3")
-    audio = ffmpeg.output(audio, f"{save_dir}/output.wav", acodec="pcm_s16le", ac=1, ar="16k")
-    ffmpeg.run(audio, overwrite_output=True)
-    if task == "Transcribe":
-        options = dict(task="transcribe", best_of=5)
-        results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
-        vtt = getSubs(results["segments"], "vtt", 80)
-        srt = getSubs(results["segments"], "srt", 80)
-        lang = results["language"]
-        return results["text"], vtt, srt, lang
-    elif task == "Translate":
-        options = dict(task="translate", best_of=5)
-        results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
-        vtt = getSubs(results["segments"], "vtt", 80)
-        srt = getSubs(results["segments"], "srt", 80)
-        lang = results["language"]
-        return results["text"], vtt, srt, lang
-    else:
-        raise ValueError("Task not supported")
-
-
-def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str:
-    segmentStream = StringIO()
-
-    if format == 'vtt':
-        write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
-    elif format == 'srt':
-        write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
-    else:
-        raise Exception("Unknown format " + format)
-
-    segmentStream.seek(0)
-    return segmentStream.read()
-
-
-def main():
-    size = st.selectbox("Select Model Size (The larger the model, the more accurate the transcription will be, but it will take longer)", ["tiny", "base", "small", "medium", "large"], index=1)
-    loaded_model = change_model(current_size, size)
-    st.write(f"Model is {'multilingual' if loaded_model.is_multilingual else 'English-only'} "
-             f"and has {sum(np.prod(p.shape) for p in loaded_model.parameters()):,} parameters.")
-    input_file = st.file_uploader("Upload an audio file", type=["mp3", "wav", "m4a"])
-    if input_file is not None:
-        filename = input_file.name[:-4]
-    else:
-        filename = None
-    task = st.selectbox("Select Task", ["Transcribe", "Translate"], index=0)
-    if task == "Transcribe":
-        if st.button("Transcribe"):
-            results = inferecence(loaded_model, input_file, task)
-            col3, col4 = st.columns(2)
-            col5, col6, col7 = st.columns(3)
-            col9, col10 = st.columns(2)
-
-            with col3:
-                st.audio(input_file)
-
-            with open("transcript.txt", "w+", encoding='utf8') as f:
-                f.writelines(results[0])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
-                datatxt = f.read()
-
-
-            with open("transcript.vtt", "w+",encoding='utf8') as f:
-                f.writelines(results[1])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
-                datavtt = f.read()
-
-            with open("transcript.srt", "w+",encoding='utf8') as f:
-                f.writelines(results[2])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
-                datasrt = f.read()
-
-            with col5:
-                st.download_button(label="Download Transcript (.txt)",
-                                   data=datatxt,
-                                   file_name="transcript.txt")
-            with col6:
-                st.download_button(label="Download Transcript (.vtt)",
-                                   data=datavtt,
-                                   file_name="transcript.vtt")
-            with col7:
-                st.download_button(label="Download Transcript (.srt)",
-                                   data=datasrt,
-                                   file_name="transcript.srt")
-            with col9:
-                st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
-            with col10:
-                st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
-
-    elif task == "Translate":
-        if st.button("Translate to English"):
-            results = inferecence(loaded_model, input_file, task)
-            col3, col4 = st.columns(2)
-            col5, col6, col7 = st.columns(3)
-            col9, col10 = st.columns(2)
-
-            with col3:
-                st.audio(input_file)
-
-            with open("transcript.txt", "w+", encoding='utf8') as f:
-                f.writelines(results[0])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
-                datatxt = f.read()
-
-
-            with open("transcript.vtt", "w+",encoding='utf8') as f:
-                f.writelines(results[1])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
-                datavtt = f.read()
-
-            with open("transcript.srt", "w+",encoding='utf8') as f:
-                f.writelines(results[2])
-                f.close()
-            with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
-                datasrt = f.read()
-
-            with col5:
-                st.download_button(label="Download Transcript (.txt)",
-                                   data=datatxt,
-                                   file_name="transcript.txt")
-            with col6:
-                st.download_button(label="Download Transcript (.vtt)",
-                                   data=datavtt,
-                                   file_name="transcript.vtt")
-            with col7:
-                st.download_button(label="Download Transcript (.srt)",
-                                   data=datasrt,
-                                   file_name="transcript.srt")
-            with col9:
-                st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
-            with col10:
-                st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
-
-    else:
-        st.error("Please select a task.")
-
-
-if __name__ == "__main__":
-    main()
-    st.markdown("###### Made with :heart: by [@BatuhanYılmaz](https://twitter.com/batuhan3326) [](https://www.buymeacoffee.com/batuhanylmz)")
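
The core of the page above is plain Whisper inference on the ffmpeg-normalized WAV. A minimal standalone sketch of that step (assuming an output.wav resampled to 16 kHz mono PCM, as inferecence() produces):

import whisper

model = whisper.load_model("base")
# Same options the app passes for the "Transcribe" task
result = model.transcribe("output.wav", task="transcribe", best_of=5)
print(result["language"])    # detected language code
print(result["text"][:100])  # first 100 characters of the transcript
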
spaces/Benson/text-generation/Examples/Car Parking Pro Mod Apk.md
DELETED
@@ -1,62 +0,0 @@
-
-<h1>Estacionamiento de coches Pro Mod APK: Un simulador de estacionamiento realista y desafiante</h1>
-<p>¿Te gustan los juegos de aparcamiento? ¿Quieres probar tus habilidades de conducción y precisión en varios escenarios de estacionamiento? ¿Quieres personalizar tus coches y desbloquear nuevos artículos sin gastar dinero? Si ha respondido sí a cualquiera de estas preguntas, entonces usted debe tratar de Parking Pro Mod APK, una versión modificada del popular juego de simulador de aparcamiento de coches por Tiramisu. En este artículo, le diremos todo lo que necesita saber sobre Car Parking Pro Mod APK, incluyendo sus características, cómo descargar e instalar, y sus pros y contras. </p>
-<h2>¿Qué es el aparcamiento Pro Mod APK? </h2>
-<p>Car Parking Pro Mod APK es una versión modificada del juego original Car Parking Pro, que es un juego de simulador de estacionamiento realista y desafiante para dispositivos Android. El juego te permite conducir varios coches en diferentes escenarios de estacionamiento, como estacionamiento paralelo, estacionamiento inverso, estacionamiento en garaje y más. También puede personalizar sus coches con diferentes colores, llantas, spoilers y pegatinas. El juego tiene gráficos realistas y física, así como controles suaves e intuitivos. También puede competir con otros jugadores en línea y ver quién puede aparcar más rápido y mejor. </p>
-<h2>car parking pro mod apk</h2><br /><p><b><b>Download File</b> ☆ <a href="https://bltlly.com/2v6KNZ">https://bltlly.com/2v6KNZ</a></b></p><br /><br />
-<p>Sin embargo, el juego original tiene algunas limitaciones, como anuncios, compras en la aplicación, dinero limitado y artículos bloqueados. Es por eso que algunos jugadores prefieren utilizar Car Parking Pro Mod APK, que es una versión modificada que elimina todas estas limitaciones. Con Car Parking Pro Mod APK, puede disfrutar de dinero ilimitado, artículos desbloqueados, sin anuncios, y más. También puedes acceder a todas las funciones del juego sin ninguna restricción. </p>
-<h3>Características de Aparcamiento Pro Mod APK</h3>
-<h4>- Gráficos realistas y física</h4>
-
-<h4>- Varios escenarios y niveles de estacionamiento</h4>
-<p>Otra característica de Aparcamiento Pro Mod APK es su variedad de escenarios y niveles de estacionamiento. El juego tiene más de 100 niveles que desafían tus habilidades de conducción y precisión en diferentes situaciones de estacionamiento. Puede estacionar en espacios estrechos, lotes llenos, calles concurridas, garajes oscuros, carreteras resbaladizas y más. También puedes elegir entre diferentes modos de dificultad, como fácil, normal, difícil o experto. Cada nivel tiene su propio límite de tiempo y sistema de puntuación que miden su rendimiento. </p>
-<h4>- Coches y controles personalizables</h4>
-<p>Una tercera característica de Car Parking Pro Mod APK es sus coches personalizables y controles. El juego te permite elegir entre más de 50 coches que tienen diferentes características, como velocidad, manejo, frenado, tamaño y más. También puede personalizar sus coches con diferentes colores, llantas, spoilers, pegatinas y más. Puedes cambiar los controles de tu coche según tu preferencia. Puede elegir entre diferentes modos de dirección, como inclinación, rueda o botones. También puede ajustar la sensibilidad y la vibración de los controles. </p>
-<h4>- Dinero ilimitado y artículos desbloqueados</h4>
-<p>Una cuarta característica de Car Parking Pro Mod APK es su dinero ilimitado y artículos desbloqueados. El juego le da dinero ilimitado que se puede utilizar para comprar y mejorar sus coches. También puedes desbloquear todos los elementos del juego, como coches, niveles, modos y más. No tienes que ver anuncios ni hacer compras en la aplicación para acceder a estas funciones. Puedes disfrutar del juego sin limitaciones ni interrupciones. </p>
-<h2> ¿Cómo descargar e instalar Car Parking Pro Mod APK? </h2>
-<p>Si desea descargar e instalar Car Parking Pro Mod APK en su dispositivo Android, es necesario seguir algunos pasos simples. Aquí están los pasos para descargar e instalar Car Parking Pro Mod APK:</p>
-<h3> Pasos para descargar e instalar Car Parking Pro Mod APK</h3>
-<h4>- Habilitar fuentes desconocidas en el dispositivo</h4>
-
-<h4>- Descargar el archivo apk mod de una fuente de confianza</h4>
-<p>El segundo paso es descargar el archivo apk mod de una fuente de confianza. Usted puede encontrar muchos sitios web que ofrecen Parking Pro Mod APK gratis, pero no todos ellos son seguros y fiables. Algunos de ellos pueden contener virus, malware o spyware que pueden dañar su dispositivo o robar sus datos. Para evitar estos riesgos, usted debe descargar el archivo apk mod de una fuente de confianza, como [este]. </p>
-<h4>- Instalar el archivo apk mod en su dispositivo</h4>
-<p>El tercer paso es instalar el archivo apk mod en su dispositivo. Para ello, busque el archivo descargado en el almacenamiento del dispositivo y, a continuación, toque en él para iniciar el proceso de instalación. Siga las instrucciones en la pantalla para completar la instalación. </p>
-<h4>- Iniciar el juego y disfrutar</h4>
-<p>El paso final es lanzar el juego y disfrutar. Para hacer esto, encuentra el icono del juego en la pantalla de inicio del dispositivo o en el cajón de la aplicación, luego toca en él para abrir el juego. Ahora puedes disfrutar de Parking Pro Mod APK con todas sus características y beneficios. </p>
-<p></p>
-<h2>Pros y contras de Aparcamiento Pro Mod APK</h2>
-<p>Car Parking Pro Mod APK tiene muchos pros y contras que usted debe considerar antes de descargar e instalar. Estos son algunos de los pros y los contras de Car Parking Pro Mod APK:</p>
-<h3>Pros de Aparcamiento Pro Mod APK</h3>
-<h4>- Juego divertido y adictivo</h4>
-<p>Uno de los pros de Car Parking Pro Mod APK es su juego divertido y adictivo. El juego ofrece una experiencia de simulador de estacionamiento realista y desafiante que te mantendrá enganchado durante horas. Puede mejorar sus habilidades de conducción y precisión en varios escenarios y niveles de estacionamiento. También puede competir con otros jugadores en línea y ver quién puede aparcar más rápido y mejor. </p>
-<h4>- No hay anuncios o compras en la aplicación</h4>
-
-<h4>- Compatible con la mayoría de dispositivos</h4>
-<p>Un tercer pro de Car Parking Pro Mod APK es su compatibilidad con la mayoría de los dispositivos. El juego tiene un tamaño de archivo bajo y no requiere un dispositivo de alta gama para funcionar sin problemas. Puedes jugar el juego en la mayoría de los dispositivos Android que tienen Android 5.0 o versiones superiores. </p>
-<h3>Los contras de aparcamiento Pro Mod APK</h3>
-<h4>- Puede que no funcione en algunos dispositivos o regiones</h4>
-<p>Uno de los contras de Car Parking Pro Mod APK es su posible incompatibilidad con algunos dispositivos o regiones. El juego puede no funcionar en algunos dispositivos que tienen diferentes especificaciones o ajustes. El juego también puede no funcionar en algunas regiones que tienen diferentes leyes o regulaciones con respecto a las aplicaciones modificadas. Puede encontrar errores o errores mientras juega el juego en algunos dispositivos o regiones. </p>
-<h4>- Puede causar problemas de seguridad o rendimiento</h4>
-<p>Otra estafa de Aparcamiento Pro Mod APK es su seguridad potencial o problemas de rendimiento. El juego es una versión modificada del juego original que no puede ser autorizada o aprobada por los desarrolladores o Google Play Store. El juego puede contener virus, malware o spyware que pueden dañar su dispositivo o robar sus datos <p>El juego también puede afectar el rendimiento de su dispositivo o el juego original. Puede experimentar retrasos, fallos o fallos durante el juego. También puede perder su progreso o datos si el juego se actualiza o elimina. </p>
-<h4>- Puede violar los términos de servicio del juego original</h4>
-<p>Una tercera estafa de Car Parking Pro Mod APK es su posible violación de los términos de servicio del juego original. El juego es una versión modificada que puede no respetar los derechos o reglas de los desarrolladores o editores originales del juego. El juego también puede infringir la propiedad intelectual o marcas comerciales del juego original. Usted puede enfrentar consecuencias legales o penalidades si usa el juego sin permiso o autorización. </p>
-<h2>Conclusión</h2>
-
-<p>Si desea descargar e instalar Car Parking Pro Mod APK en su dispositivo Android, es necesario seguir algunos pasos simples, tales como la habilitación de fuentes desconocidas en su dispositivo, descargar el archivo apk mod de una fuente de confianza, instalar el archivo apk mod en su dispositivo, y lanzar el juego y disfrutarlo. </p>
-<p>Sin embargo, también debe ser consciente de los riesgos y responsabilidades que vienen con el uso de Parking Pro Mod APK. Debes usar el juego a tu propia discreción y riesgo. También debes respetar los derechos y las reglas de los desarrolladores y editores de juegos originales. No debes usar el juego para ningún propósito ilegal o no ético. </p>
-<h2>Preguntas frecuentes</h2>
-<p>Aquí hay algunas preguntas frecuentes sobre Car Parking Pro Mod APK:</p>
-<h4>- ¿Cuál es la diferencia entre Car Parking Pro y Car Parking Pro Mod APK? </h4>
-<p>Car Parking Pro es la versión original del juego de simulador de estacionamiento de coches por Tiramisu, mientras que Car Parking Pro Mod APK es una versión modificada que elimina todas las limitaciones y añade más características al juego. </p>
-<h4>- ¿Es seguro usar Car Parking Pro Mod APK? </h4>
-<p>Car Parking Pro Mod APK puede no ser seguro de usar, ya que puede contener virus, malware o spyware que pueden dañar su dispositivo o robar sus datos. También puede causar problemas de seguridad o rendimiento en su dispositivo o en el juego original. También puede violar los términos de servicio del juego original. </p>
-<h4>- ¿Cómo puedo actualizar Car Parking Pro Mod APK? </h4>
-<p>Puede actualizar Car Parking Pro Mod APK mediante la descarga e instalación de la última versión del archivo mod apk de una fuente de confianza. Sin embargo, puedes perder tu progreso o datos si actualizas el juego. </p>
-<h4>- ¿Puedo jugar Car Parking Pro Mod APK offline? </h4>
-<p>Puede jugar Car Parking Pro Mod APK sin conexión, ya que no requiere una conexión a Internet para funcionar. Sin embargo, es posible que no puedas acceder a algunas funciones o modos que requieren una conexión en línea, como el multijugador en línea. </p>
-
-<p>Usted puede jugar Car Parking Pro Mod APK con tus amigos en línea, ya que es compatible con el modo multijugador en línea. Puedes competir con tus amigos u otros jugadores online y ver quién puede aparcar más rápido y mejor. </p> 64aa2da5cf<br />
-<br />
-<br />

spaces/Benson/text-generation/Examples/Descargar Dino Agua Mundo Mod Apk Dinero Ilimitado.md
DELETED
@@ -1,81 +0,0 @@
-
-<h1>Descargar Dino Water World Mod Apk dinero ilimitado</h1>
-<p>¿Te gustan los dinosaurios y las aventuras submarinas? Si es así, entonces deberías probar Dino Water World, un juego de simulación donde puedes crear tu propio mundo submarino jurásico con diferentes dinosaurios marinos. En este artículo, le diremos qué es Dino Water World, por qué debe descargar la versión apk mod, y cómo hacerlo fácilmente. </p>
-<h2>¿Qué es Dino Water World? </h2>
-<p>Dino Water World es un juego de simulación desarrollado por Tap Pocket, donde puedes explorar el misterioso mundo perdido de los animales prehistóricos. Puedes coleccionar emocionantes dinosaurios marinos como Mosasaurus, Megalodon, Liopleurodon y más. También puedes criarlos, alimentarlos y hacerlos luchar contra otros dinosaurios o piratas. También puede construir minas de oro submarinas y decorar su mundo submarino jurásico con arrecifes de coral, algas y barcos hundidos. </p>
-<h2>descargar dino agua mundo mod apk</h2><br /><p><b><b>Download Zip</b> 🗹 <a href="https://bltlly.com/2v6JY5">https://bltlly.com/2v6JY5</a></b></p><br /><br />
-<h3>Características de Dino Water World</h3>
-<p>Dino Water World tiene muchas características que lo convierten en un juego divertido y adictivo. Algunas de ellas son:</p>
-<ul>
-<li>Más de 20 tipos de dinosaurios marinos para recolectar y criar. </li>
-<li>Gráficos y animaciones 3D realistas. </li>
-<li>Controles fáciles e intuitivos. </li>
-<li>Varias misiones y desafíos para completar. </li>
-<li>Modo multijugador en línea donde puedes luchar con otros jugadores. </li>
-<li>Recompensas y bonos diarios. </li>
-</ul>
-<h4>Cómo jugar Dino Water World</h4>
-<p>El juego de Dino Water World es simple y divertido. Solo tienes que seguir estos pasos:</p>
-<ol>
-<li>Seleccione un dinosaurio marino de su colección o abra una nueva desde un huevo. </li>
-<li>Alimenta a tu dinosaurio con pescado o carne para que crezca y evolucione. </li>
-<li>Toca en tu dinosaurio para interactuar con él o arrástralo para moverlo. </li>
-<li>Construir minas de oro y otras estructuras para ganar dinero y recursos. </li>
-<li>Decora tu mundo submarino con varios artículos. </li>
-<li>Lucha con otros dinosaurios o piratas para proteger tu territorio o conquistar otros nuevos. </li>
-
-</ol>
-<h2>¿Por qué descargar Dino Water World mod apk? </h2>
-<p>Dino Water World es un juego gratuito que puedes descargar desde la Google Play Store o la App Store. Sin embargo, si desea disfrutar del juego al máximo, es posible que tenga que gastar algo de dinero real en compras en la aplicación. Por ejemplo, es posible que tengas que comprar más monedas, gemas o comida para tus dinosaurios. También podrías encontrar anuncios molestos que interrumpan tu juego. Es por eso que le recomendamos descargar la versión apk mod de Dino Water World, que le da dinero ilimitado y otros beneficios. </p>
-<h3>Beneficios de mod apk</h3>
-<p>La versión mod apk de Dino Water World es una versión modificada del juego original que te da algunas ventajas que no puedes obtener de la versión oficial. Algunos de los beneficios son:</p>
-<h4>Dinero ilimitado</h4>
-<p>Con la versión apk mod, obtendrá monedas y gemas ilimitadas que puede utilizar para comprar cualquier cosa que desee en el juego. Puedes comprar más huevos, comida, decoraciones o mejoras para tus dinosaurios. También puedes desbloquear nuevos dinosaurios más rápido y más fácil. Ya no necesitas preocuparte por quedarte sin dinero o recursos. </p>
-<p></p>
-<h4>Dinosaurios desbloqueados</h4>
-<p>La versión apk mod también desbloquea todos los dinosaurios que están disponibles en el juego. No es necesario esperar a un cierto nivel o completar una determinada misión para conseguirlos. Puede tener cualquier dinosaurio que desee desde el principio y disfrutar de sus habilidades y características únicas. También puedes mezclar y combinar diferentes dinosaurios para crear nuevos híbridos. </p>
-<h4>No hay anuncios</h4>
-<p>Otro beneficio de la versión apk mod es que elimina todos los anuncios que pueden molestar mientras se juega el juego. No necesitas ver ningún video o banner para obtener recompensas adicionales o saltarte los tiempos de espera. Puedes disfrutar del juego sin interrupciones ni distracciones. </p>
-<h2>Cómo descargar e instalar Dino Water World mod apk? </h2>
-
-<h3>Requisitos</h3>
-<p>Antes de descargar e instalar el apk mod, asegúrese de que tiene estos requisitos:</p>
-<ul>
-<li>Un dispositivo Android con Android 4.1 o superior. </li>
-<li>Una conexión a Internet estable. </li>
-<li>Suficiente espacio de almacenamiento en su dispositivo. </li>
-<li>Permitir la instalación desde fuentes desconocidas en la configuración del dispositivo. </li>
-</ul>
-<h3>Pasos</h3>
-<p>Después de tener los requisitos, puede proceder con estos pasos:</p>
-<ol>
-<li>Haga clic en este enlace para descargar el Dino Water World mod apk archivo: [Descargar Dino Water World Mod Apk]. </li>
-<li>Espere a que la descarga termine y localice el archivo en su dispositivo. </li>
-<li>Toque en el archivo y siga las instrucciones para instalarlo. </li>
-<li>Iniciar el juego y disfrutar de dinero ilimitado y los dinosaurios desbloqueados. </li>
-</ol>
-<h2>Conclusión</h2>
-<p>Dino Water World es un divertido y emocionante juego de simulación donde puedes crear tu propio mundo submarino jurásico con diferentes dinosaurios marinos. Usted puede recoger, criar, alimentar, y luchar con ellos, así como decorar su mundo con varios artículos. Sin embargo, si quieres disfrutar del juego al máximo, usted debe descargar la versión apk mod, que le da dinero ilimitado, los dinosaurios desbloqueados, y no hay anuncios. Puede descargarlo e instalarlo fácilmente siguiendo los pasos que proporcionamos en este artículo. ¿Qué está esperando? Descargar Dino Water World mod apk ahora y divertirse! </p>
-<h3>Resumen</h3>
-<p>En este artículo, hemos cubierto:</p>
-<ul>
-<li>Qué es Dino Water World y sus características. </li>
-<li> ¿Por qué descargar Dino Water World mod apk y sus beneficios. </li>
-<li> Cómo descargar e instalar Dino Water World mod apk y sus requisitos y pasos. </li>
-</ul>
-<h3>Preguntas frecuentes</h3>
-<p>Aquí hay algunas preguntas frecuentes sobre Dino Water World mod apk:</p>
-<ol>
-<li><b>¿Es seguro Dino Water World mod apk? </b></li>
-
-<li><b>¿Se me prohibirá el uso de Dino Water World mod apk? </b></li>
-<p>No, no se le prohibió el uso de Dino Water World mod apk. El mod apk está diseñado para evitar el sistema de seguridad del juego y evitar la detección. Sin embargo, le aconsejamos que lo utilice bajo su propio riesgo y discreción. </p>
-<li><b>¿Puedo jugar Dino Water World mod apk offline? </b></li>
-<p>No, no se puede jugar Dino Water World mod apk offline. Necesita una conexión a Internet para acceder a las características del juego y el contenido. Sin embargo, se puede jugar en línea sin ningún problema o restricciones. </p>
-<li><b>¿Puedo actualizar Dino Water World mod apk? </b></li>
-<p>No, no puede actualizar Dino Water World mod apk. Si intenta actualizarlo desde la tienda oficial, perderá todas las características y beneficios del mod. Sin embargo, puede comprobar este artículo regularmente para cualquier nueva actualización o versiones de la apk mod. </p>
-<li><b>¿Puedo jugar Dino Water World mod apk en dispositivos iOS? </b></li>
-<p>No, no se puede jugar Dino Water World mod apk en dispositivos iOS. El mod apk solo es compatible con dispositivos Android. Sin embargo, puede jugar la versión oficial del juego en dispositivos iOS descargándolo desde la App Store.</p>
-</ol></p> 64aa2da5cf<br />
-<br />
-<br />

spaces/BetterAPI/BetterChat/src/lib/server/database.ts
DELETED
@@ -1,31 +0,0 @@
-import { MONGODB_URL, MONGODB_DB_NAME } from "$env/static/private";
-import { MongoClient } from "mongodb";
-import type { Conversation } from "$lib/types/Conversation";
-import type { SharedConversation } from "$lib/types/SharedConversation";
-import type { AbortedGeneration } from "$lib/types/AbortedGeneration";
-import type { Settings } from "$lib/types/Settings";
-
-const client = new MongoClient(MONGODB_URL, {
-  // directConnection: true
-});
-
-export const connectPromise = client.connect().catch(console.error);
-
-const db = client.db(MONGODB_DB_NAME);
-
-const conversations = db.collection<Conversation>("conversations");
-const sharedConversations = db.collection<SharedConversation>("sharedConversations");
-const abortedGenerations = db.collection<AbortedGeneration>("abortedGenerations");
-const settings = db.collection<Settings>("settings");
-
-export { client, db };
-export const collections = { conversations, sharedConversations, abortedGenerations, settings };
-
-client.on("open", () => {
-  conversations.createIndex({ sessionId: 1, updatedAt: -1 });
-  abortedGenerations.createIndex({ updatedAt: 1 }, { expireAfterSeconds: 30 });
-  abortedGenerations.createIndex({ conversationId: 1 }, { unique: true });
-  sharedConversations.createIndex({ hash: 1 }, { unique: true });
-  // Sparse so that we can have settings on userId later
-  settings.createIndex({ sessionId: 1 }, { unique: true, sparse: true });
-});
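
A hedged Python analogue (pymongo) of the index setup above, just to illustrate the index semantics; the TypeScript module is the actual implementation, and the connection string and database name here are assumptions:

from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB
db = client["chat-ui"]                             # assumed database name

db.conversations.create_index([("sessionId", ASCENDING), ("updatedAt", DESCENDING)])
# TTL index: documents expire 30 seconds after their updatedAt timestamp
db.abortedGenerations.create_index("updatedAt", expireAfterSeconds=30)
db.abortedGenerations.create_index("conversationId", unique=True)
db.sharedConversations.create_index("hash", unique=True)
# Sparse + unique: tolerates future documents keyed on userId instead of sessionId
db.settings.create_index("sessionId", unique=True, sparse=True)
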
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/direct_url.py
DELETED
@@ -1,237 +0,0 @@
-""" PEP 610 """
-import json
-import re
-import urllib.parse
-from typing import Any, Dict, Iterable, Optional, Type, TypeVar, Union
-
-__all__ = [
-    "DirectUrl",
-    "DirectUrlValidationError",
-    "DirInfo",
-    "ArchiveInfo",
-    "VcsInfo",
-]
-
-T = TypeVar("T")
-
-DIRECT_URL_METADATA_NAME = "direct_url.json"
-ENV_VAR_RE = re.compile(r"^\$\{[A-Za-z0-9-_]+\}(:\$\{[A-Za-z0-9-_]+\})?$")
-
-
-class DirectUrlValidationError(Exception):
-    pass
-
-
-def _get(
-    d: Dict[str, Any], expected_type: Type[T], key: str, default: Optional[T] = None
-) -> Optional[T]:
-    """Get value from dictionary and verify expected type."""
-    if key not in d:
-        return default
-    value = d[key]
-    if not isinstance(value, expected_type):
-        raise DirectUrlValidationError(
-            "{!r} has unexpected type for {} (expected {})".format(
-                value, key, expected_type
-            )
-        )
-    return value
-
-
-def _get_required(
-    d: Dict[str, Any], expected_type: Type[T], key: str, default: Optional[T] = None
-) -> T:
-    value = _get(d, expected_type, key, default)
-    if value is None:
-        raise DirectUrlValidationError(f"{key} must have a value")
-    return value
-
-
-def _exactly_one_of(infos: Iterable[Optional["InfoType"]]) -> "InfoType":
-    infos = [info for info in infos if info is not None]
-    if not infos:
-        raise DirectUrlValidationError(
-            "missing one of archive_info, dir_info, vcs_info"
-        )
-    if len(infos) > 1:
-        raise DirectUrlValidationError(
-            "more than one of archive_info, dir_info, vcs_info"
-        )
-    assert infos[0] is not None
-    return infos[0]
-
-
-def _filter_none(**kwargs: Any) -> Dict[str, Any]:
-    """Make dict excluding None values."""
-    return {k: v for k, v in kwargs.items() if v is not None}
-
-
-class VcsInfo:
-    name = "vcs_info"
-
-    def __init__(
-        self,
-        vcs: str,
-        commit_id: str,
-        requested_revision: Optional[str] = None,
-    ) -> None:
-        self.vcs = vcs
-        self.requested_revision = requested_revision
-        self.commit_id = commit_id
-
-    @classmethod
-    def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["VcsInfo"]:
-        if d is None:
-            return None
-        return cls(
-            vcs=_get_required(d, str, "vcs"),
-            commit_id=_get_required(d, str, "commit_id"),
-            requested_revision=_get(d, str, "requested_revision"),
-        )
-
-    def _to_dict(self) -> Dict[str, Any]:
-        return _filter_none(
-            vcs=self.vcs,
-            requested_revision=self.requested_revision,
-            commit_id=self.commit_id,
-        )
-
-
-class ArchiveInfo:
-    name = "archive_info"
-
-    def __init__(
-        self,
-        hash: Optional[str] = None,
-        hashes: Optional[Dict[str, str]] = None,
-    ) -> None:
-        # set hashes before hash, since the hash setter will further populate hashes
-        self.hashes = hashes
-        self.hash = hash
-
-    @property
-    def hash(self) -> Optional[str]:
-        return self._hash
-
-    @hash.setter
-    def hash(self, value: Optional[str]) -> None:
-        if value is not None:
-            # Auto-populate the hashes key to upgrade to the new format automatically.
-            # We don't back-populate the legacy hash key from hashes.
-            try:
-                hash_name, hash_value = value.split("=", 1)
-            except ValueError:
-                raise DirectUrlValidationError(
-                    f"invalid archive_info.hash format: {value!r}"
-                )
-            if self.hashes is None:
-                self.hashes = {hash_name: hash_value}
-            elif hash_name not in self.hashes:
-                self.hashes = self.hashes.copy()
-                self.hashes[hash_name] = hash_value
-        self._hash = value
-
-    @classmethod
-    def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["ArchiveInfo"]:
-        if d is None:
-            return None
-        return cls(hash=_get(d, str, "hash"), hashes=_get(d, dict, "hashes"))
-
-    def _to_dict(self) -> Dict[str, Any]:
-        return _filter_none(hash=self.hash, hashes=self.hashes)
-
-
-class DirInfo:
-    name = "dir_info"
-
-    def __init__(
-        self,
-        editable: bool = False,
-    ) -> None:
-        self.editable = editable
-
-    @classmethod
-    def _from_dict(cls, d: Optional[Dict[str, Any]]) -> Optional["DirInfo"]:
-        if d is None:
-            return None
-        return cls(editable=_get_required(d, bool, "editable", default=False))
-
-    def _to_dict(self) -> Dict[str, Any]:
-        return _filter_none(editable=self.editable or None)
-
-
-InfoType = Union[ArchiveInfo, DirInfo, VcsInfo]
-
-
-class DirectUrl:
-    def __init__(
-        self,
-        url: str,
-        info: InfoType,
-        subdirectory: Optional[str] = None,
-    ) -> None:
-        self.url = url
-        self.info = info
-        self.subdirectory = subdirectory
-
-    def _remove_auth_from_netloc(self, netloc: str) -> str:
-        if "@" not in netloc:
-            return netloc
-        user_pass, netloc_no_user_pass = netloc.split("@", 1)
-        if (
-            isinstance(self.info, VcsInfo)
-            and self.info.vcs == "git"
-            and user_pass == "git"
-        ):
-            return netloc
-        if ENV_VAR_RE.match(user_pass):
-            return netloc
-        return netloc_no_user_pass
-
-    @property
-    def redacted_url(self) -> str:
-        """url with user:password part removed unless it is formed with
-        environment variables as specified in PEP 610, or it is ``git``
-        in the case of a git URL.
-        """
-        purl = urllib.parse.urlsplit(self.url)
-        netloc = self._remove_auth_from_netloc(purl.netloc)
-        surl = urllib.parse.urlunsplit(
-            (purl.scheme, netloc, purl.path, purl.query, purl.fragment)
-        )
-        return surl
-
-    def validate(self) -> None:
-        self.from_dict(self.to_dict())
-
-    @classmethod
-    def from_dict(cls, d: Dict[str, Any]) -> "DirectUrl":
-        return DirectUrl(
-            url=_get_required(d, str, "url"),
-            subdirectory=_get(d, str, "subdirectory"),
-            info=_exactly_one_of(
-                [
-                    ArchiveInfo._from_dict(_get(d, dict, "archive_info")),
-                    DirInfo._from_dict(_get(d, dict, "dir_info")),
-                    VcsInfo._from_dict(_get(d, dict, "vcs_info")),
-                ]
-            ),
-        )
-
-    def to_dict(self) -> Dict[str, Any]:
-        res = _filter_none(
-            url=self.redacted_url,
-            subdirectory=self.subdirectory,
-        )
-        res[self.info.name] = self.info._to_dict()
-        return res
-
-    @classmethod
-    def from_json(cls, s: str) -> "DirectUrl":
-        return cls.from_dict(json.loads(s))
-
-    def to_json(self) -> str:
-        return json.dumps(self.to_dict(), sort_keys=True)
-
-    def is_local_editable(self) -> bool:
-        return isinstance(self.info, DirInfo) and self.info.editable
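
A short sketch using the PEP 610 helpers defined above (the URL and hash value are hypothetical); it shows how the legacy archive_info.hash auto-populates the newer hashes mapping:

info = ArchiveInfo(hash="sha256=deadbeef")  # setter also fills hashes={"sha256": "deadbeef"}
direct_url = DirectUrl(url="https://example.com/pkg-1.0.tar.gz", info=info)
direct_url.validate()  # round-trips through to_dict()/from_dict()
print(direct_url.to_json())
# {"archive_info": {"hash": "sha256=deadbeef", "hashes": {"sha256": "deadbeef"}},
#  "url": "https://example.com/pkg-1.0.tar.gz"}
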
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py
DELETED
@@ -1,155 +0,0 @@
-"""Utilities to lazily create and visit candidates found.
-
-Creating and visiting a candidate is a *very* costly operation. It involves
-fetching, extracting, potentially building modules from source, and verifying
-distribution metadata. It is therefore crucial for performance to keep
-everything here lazy all the way down, so we only touch candidates that we
-absolutely need, and not "download the world" when we only need one version of
-something.
-"""
-
-import functools
-from collections.abc import Sequence
-from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, Set, Tuple
-
-from pip._vendor.packaging.version import _BaseVersion
-
-from .base import Candidate
-
-IndexCandidateInfo = Tuple[_BaseVersion, Callable[[], Optional[Candidate]]]
-
-if TYPE_CHECKING:
-    SequenceCandidate = Sequence[Candidate]
-else:
-    # For compatibility: Python before 3.9 does not support using [] on the
-    # Sequence class.
-    #
-    # >>> from collections.abc import Sequence
-    # >>> Sequence[str]
-    # Traceback (most recent call last):
-    #   File "<stdin>", line 1, in <module>
-    # TypeError: 'ABCMeta' object is not subscriptable
-    #
-    # TODO: Remove this block after dropping Python 3.8 support.
-    SequenceCandidate = Sequence
-
-
-def _iter_built(infos: Iterator[IndexCandidateInfo]) -> Iterator[Candidate]:
-    """Iterator for ``FoundCandidates``.
-
-    This iterator is used when the package is not already installed. Candidates
-    from index come later in their normal ordering.
-    """
-    versions_found: Set[_BaseVersion] = set()
-    for version, func in infos:
-        if version in versions_found:
-            continue
-        candidate = func()
-        if candidate is None:
-            continue
-        yield candidate
-        versions_found.add(version)
-
-
-def _iter_built_with_prepended(
-    installed: Candidate, infos: Iterator[IndexCandidateInfo]
-) -> Iterator[Candidate]:
-    """Iterator for ``FoundCandidates``.
-
-    This iterator is used when the resolver prefers the already-installed
-    candidate and NOT to upgrade. The installed candidate is therefore
-    always yielded first, and candidates from index come later in their
-    normal ordering, except skipped when the version is already installed.
-    """
-    yield installed
-    versions_found: Set[_BaseVersion] = {installed.version}
-    for version, func in infos:
-        if version in versions_found:
-            continue
-        candidate = func()
-        if candidate is None:
-            continue
-        yield candidate
-        versions_found.add(version)
-
-
-def _iter_built_with_inserted(
-    installed: Candidate, infos: Iterator[IndexCandidateInfo]
-) -> Iterator[Candidate]:
-    """Iterator for ``FoundCandidates``.
-
-    This iterator is used when the resolver prefers to upgrade an
-    already-installed package. Candidates from index are returned in their
-    normal ordering, except replaced when the version is already installed.
-
-    The implementation iterates through and yields other candidates, inserting
-    the installed candidate exactly once before we start yielding older or
-    equivalent candidates, or after all other candidates if they are all newer.
-    """
-    versions_found: Set[_BaseVersion] = set()
-    for version, func in infos:
-        if version in versions_found:
-            continue
-        # If the installed candidate is better, yield it first.
-        if installed.version >= version:
-            yield installed
-            versions_found.add(installed.version)
-        candidate = func()
-        if candidate is None:
-            continue
-        yield candidate
-        versions_found.add(version)
-
-    # If the installed candidate is older than all other candidates.
-    if installed.version not in versions_found:
-        yield installed
-
-
-class FoundCandidates(SequenceCandidate):
-    """A lazy sequence to provide candidates to the resolver.
-
-    The intended usage is to return this from `find_matches()` so the resolver
-    can iterate through the sequence multiple times, but only access the index
-    page when remote packages are actually needed. This improve performances
-    when suitable candidates are already installed on disk.
-    """
-
-    def __init__(
-        self,
-        get_infos: Callable[[], Iterator[IndexCandidateInfo]],
-        installed: Optional[Candidate],
-        prefers_installed: bool,
-        incompatible_ids: Set[int],
-    ):
-        self._get_infos = get_infos
-        self._installed = installed
-        self._prefers_installed = prefers_installed
-        self._incompatible_ids = incompatible_ids
-
-    def __getitem__(self, index: Any) -> Any:
-        # Implemented to satisfy the ABC check. This is not needed by the
-        # resolver, and should not be used by the provider either (for
-        # performance reasons).
-        raise NotImplementedError("don't do this")
-
-    def __iter__(self) -> Iterator[Candidate]:
-        infos = self._get_infos()
-        if not self._installed:
-            iterator = _iter_built(infos)
-        elif self._prefers_installed:
-            iterator = _iter_built_with_prepended(self._installed, infos)
-        else:
-            iterator = _iter_built_with_inserted(self._installed, infos)
-        return (c for c in iterator if id(c) not in self._incompatible_ids)
-
-    def __len__(self) -> int:
-        # Implemented to satisfy the ABC check. This is not needed by the
-        # resolver, and should not be used by the provider either (for
-        # performance reasons).
-        raise NotImplementedError("don't do this")
-
-    @functools.lru_cache(maxsize=1)
-    def __bool__(self) -> bool:
-        if self._prefers_installed and self._installed:
-            return True
-        return any(self)
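
An illustration of the lazy-iteration contract above, using stand-in strings as "candidates" (not real pip objects); note that a duplicate version is skipped before its costly factory is ever called:

from pip._vendor.packaging.version import Version

infos = [
    (Version("2.0"), lambda: "built-2.0"),
    (Version("2.0"), lambda: None),    # duplicate version: factory never runs
    (Version("1.0"), lambda: "built-1.0"),
]
print(list(_iter_built(iter(infos))))  # ['built-2.0', 'built-1.0']
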
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/latin1prober.py
DELETED
@@ -1,147 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-#   Mark Pilgrim - port to Python
-#   Shy Shalom - original C code
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301  USA
-######################### END LICENSE BLOCK #########################
-
-from typing import List, Union
-
-from .charsetprober import CharSetProber
-from .enums import ProbingState
-
-FREQ_CAT_NUM = 4
-
-UDF = 0  # undefined
-OTH = 1  # other
-ASC = 2  # ascii capital letter
-ASS = 3  # ascii small letter
-ACV = 4  # accent capital vowel
-ACO = 5  # accent capital other
-ASV = 6  # accent small vowel
-ASO = 7  # accent small other
-CLASS_NUM = 8  # total classes
-
-# fmt: off
-Latin1_CharToClass = (
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 00 - 07
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 08 - 0F
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 10 - 17
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 18 - 1F
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 20 - 27
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 28 - 2F
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 30 - 37
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 38 - 3F
-    OTH, ASC, ASC, ASC, ASC, ASC, ASC, ASC,  # 40 - 47
-    ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC,  # 48 - 4F
-    ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC,  # 50 - 57
-    ASC, ASC, ASC, OTH, OTH, OTH, OTH, OTH,  # 58 - 5F
-    OTH, ASS, ASS, ASS, ASS, ASS, ASS, ASS,  # 60 - 67
-    ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS,  # 68 - 6F
-    ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS,  # 70 - 77
-    ASS, ASS, ASS, OTH, OTH, OTH, OTH, OTH,  # 78 - 7F
-    OTH, UDF, OTH, ASO, OTH, OTH, OTH, OTH,  # 80 - 87
-    OTH, OTH, ACO, OTH, ACO, UDF, ACO, UDF,  # 88 - 8F
-    UDF, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # 90 - 97
-    OTH, OTH, ASO, OTH, ASO, UDF, ASO, ACO,  # 98 - 9F
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # A0 - A7
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # A8 - AF
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # B0 - B7
-    OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH,  # B8 - BF
-    ACV, ACV, ACV, ACV, ACV, ACV, ACO, ACO,  # C0 - C7
-    ACV, ACV, ACV, ACV, ACV, ACV, ACV, ACV,  # C8 - CF
-    ACO, ACO, ACV, ACV, ACV, ACV, ACV, OTH,  # D0 - D7
-    ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO,  # D8 - DF
-    ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO,  # E0 - E7
-    ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV,  # E8 - EF
-    ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH,  # F0 - F7
-    ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO,  # F8 - FF
-)
-
-# 0 : illegal
-# 1 : very unlikely
-# 2 : normal
-# 3 : very likely
-Latin1ClassModel = (
-    # UDF OTH ASC ASS ACV ACO ASV ASO
-    0, 0, 0, 0, 0, 0, 0, 0,  # UDF
-    0, 3, 3, 3, 3, 3, 3, 3,  # OTH
-    0, 3, 3, 3, 3, 3, 3, 3,  # ASC
-    0, 3, 3, 3, 1, 1, 3, 3,  # ASS
-    0, 3, 3, 3, 1, 2, 1, 2,  # ACV
-    0, 3, 3, 3, 3, 3, 3, 3,  # ACO
-    0, 3, 1, 3, 1, 1, 1, 3,  # ASV
-    0, 3, 1, 3, 1, 1, 3, 3,  # ASO
-)
-# fmt: on
-
-
-class Latin1Prober(CharSetProber):
-    def __init__(self) -> None:
-        super().__init__()
-        self._last_char_class = OTH
-        self._freq_counter: List[int] = []
-        self.reset()
-
-    def reset(self) -> None:
-        self._last_char_class = OTH
-        self._freq_counter = [0] * FREQ_CAT_NUM
-        super().reset()
-
-    @property
-    def charset_name(self) -> str:
-        return "ISO-8859-1"
-
-    @property
-    def language(self) -> str:
-        return ""
-
-    def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
-        byte_str = self.remove_xml_tags(byte_str)
-        for c in byte_str:
-            char_class = Latin1_CharToClass[c]
-            freq = Latin1ClassModel[(self._last_char_class * CLASS_NUM) + char_class]
-            if freq == 0:
-                self._state = ProbingState.NOT_ME
-                break
-            self._freq_counter[freq] += 1
-            self._last_char_class = char_class
-
-        return self.state
-
-    def get_confidence(self) -> float:
-        if self.state == ProbingState.NOT_ME:
-            return 0.01
-
-        total = sum(self._freq_counter)
-        confidence = (
-            0.0
-            if total < 0.01
-            else (self._freq_counter[3] - self._freq_counter[1] * 20.0) / total
-        )
-        confidence = max(confidence, 0.0)
-        # lower the confidence of latin1 so that other more accurate
-        # detector can take priority.
-        confidence *= 0.73
-        return confidence
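
For reference, a minimal sketch of how the deleted prober is exercised. It uses the standalone `chardet` package, which the vendored copy above mirrors; the sample bytes are illustrative:

```python
# Sketch: driving Latin1Prober directly, per the feed/charset_name/
# get_confidence API shown in the deleted source above.
from chardet.latin1prober import Latin1Prober

prober = Latin1Prober()
prober.feed(b"caf\xe9 au lait")  # "café au lait" encoded as ISO-8859-1
print(prober.charset_name)       # ISO-8859-1
print(prober.get_confidence())   # heuristic score, scaled down by 0.73
```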
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/webencodings/__init__.py
DELETED
@@ -1,342 +0,0 @@
-# coding: utf-8
-"""
-
-    webencodings
-    ~~~~~~~~~~~~
-
-    This is a Python implementation of the `WHATWG Encoding standard
-    <http://encoding.spec.whatwg.org/>`. See README for details.
-
-    :copyright: Copyright 2012 by Simon Sapin
-    :license: BSD, see LICENSE for details.
-
-"""
-
-from __future__ import unicode_literals
-
-import codecs
-
-from .labels import LABELS
-
-
-VERSION = '0.5.1'
-
-
-# Some names in Encoding are not valid Python aliases. Remap these.
-PYTHON_NAMES = {
-    'iso-8859-8-i': 'iso-8859-8',
-    'x-mac-cyrillic': 'mac-cyrillic',
-    'macintosh': 'mac-roman',
-    'windows-874': 'cp874'}
-
-CACHE = {}
-
-
-def ascii_lower(string):
-    r"""Transform (only) ASCII letters to lower case: A-Z is mapped to a-z.
-
-    :param string: An Unicode string.
-    :returns: A new Unicode string.
-
-    This is used for `ASCII case-insensitive
-    <http://encoding.spec.whatwg.org/#ascii-case-insensitive>`_
-    matching of encoding labels.
-    The same matching is also used, among other things,
-    for `CSS keywords <http://dev.w3.org/csswg/css-values/#keywords>`_.
-
-    This is different from the :meth:`~py:str.lower` method of Unicode strings
-    which also affect non-ASCII characters,
-    sometimes mapping them into the ASCII range:
-
-        >>> keyword = u'Bac\N{KELVIN SIGN}ground'
-        >>> assert keyword.lower() == u'background'
-        >>> assert ascii_lower(keyword) != keyword.lower()
-        >>> assert ascii_lower(keyword) == u'bac\N{KELVIN SIGN}ground'
-
-    """
-    # This turns out to be faster than unicode.translate()
-    return string.encode('utf8').lower().decode('utf8')
-
-
-def lookup(label):
-    """
-    Look for an encoding by its label.
-    This is the spec’s `get an encoding
-    <http://encoding.spec.whatwg.org/#concept-encoding-get>`_ algorithm.
-    Supported labels are listed there.
-
-    :param label: A string.
-    :returns:
-        An :class:`Encoding` object, or :obj:`None` for an unknown label.
-
-    """
-    # Only strip ASCII whitespace: U+0009, U+000A, U+000C, U+000D, and U+0020.
-    label = ascii_lower(label.strip('\t\n\f\r '))
-    name = LABELS.get(label)
-    if name is None:
-        return None
-    encoding = CACHE.get(name)
-    if encoding is None:
-        if name == 'x-user-defined':
-            from .x_user_defined import codec_info
-        else:
-            python_name = PYTHON_NAMES.get(name, name)
-            # Any python_name value that gets to here should be valid.
-            codec_info = codecs.lookup(python_name)
-        encoding = Encoding(name, codec_info)
-        CACHE[name] = encoding
-    return encoding
-
-
-def _get_encoding(encoding_or_label):
-    """
-    Accept either an encoding object or label.
-
-    :param encoding: An :class:`Encoding` object or a label string.
-    :returns: An :class:`Encoding` object.
-    :raises: :exc:`~exceptions.LookupError` for an unknown label.
-
-    """
-    if hasattr(encoding_or_label, 'codec_info'):
-        return encoding_or_label
-
-    encoding = lookup(encoding_or_label)
-    if encoding is None:
-        raise LookupError('Unknown encoding label: %r' % encoding_or_label)
-    return encoding
-
-
-class Encoding(object):
-    """Reresents a character encoding such as UTF-8,
-    that can be used for decoding or encoding.
-
-    .. attribute:: name
-
-        Canonical name of the encoding
-
-    .. attribute:: codec_info
-
-        The actual implementation of the encoding,
-        a stdlib :class:`~codecs.CodecInfo` object.
-        See :func:`codecs.register`.
-
-    """
-    def __init__(self, name, codec_info):
-        self.name = name
-        self.codec_info = codec_info
-
-    def __repr__(self):
-        return '<Encoding %s>' % self.name
-
-
-#: The UTF-8 encoding. Should be used for new content and formats.
-UTF8 = lookup('utf-8')
-
-_UTF16LE = lookup('utf-16le')
-_UTF16BE = lookup('utf-16be')
-
-
-def decode(input, fallback_encoding, errors='replace'):
-    """
-    Decode a single string.
-
-    :param input: A byte string
-    :param fallback_encoding:
-        An :class:`Encoding` object or a label string.
-        The encoding to use if :obj:`input` does note have a BOM.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-    :return:
-        A ``(output, encoding)`` tuple of an Unicode string
-        and an :obj:`Encoding`.
-
-    """
-    # Fail early if `encoding` is an invalid label.
-    fallback_encoding = _get_encoding(fallback_encoding)
-    bom_encoding, input = _detect_bom(input)
-    encoding = bom_encoding or fallback_encoding
-    return encoding.codec_info.decode(input, errors)[0], encoding
-
-
-def _detect_bom(input):
-    """Return (bom_encoding, input), with any BOM removed from the input."""
-    if input.startswith(b'\xFF\xFE'):
-        return _UTF16LE, input[2:]
-    if input.startswith(b'\xFE\xFF'):
-        return _UTF16BE, input[2:]
-    if input.startswith(b'\xEF\xBB\xBF'):
-        return UTF8, input[3:]
-    return None, input
-
-
-def encode(input, encoding=UTF8, errors='strict'):
-    """
-    Encode a single string.
-
-    :param input: An Unicode string.
-    :param encoding: An :class:`Encoding` object or a label string.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-    :return: A byte string.
-
-    """
-    return _get_encoding(encoding).codec_info.encode(input, errors)[0]
-
-
-def iter_decode(input, fallback_encoding, errors='replace'):
-    """
-    "Pull"-based decoder.
-
-    :param input:
-        An iterable of byte strings.
-
-        The input is first consumed just enough to determine the encoding
-        based on the precense of a BOM,
-        then consumed on demand when the return value is.
-    :param fallback_encoding:
-        An :class:`Encoding` object or a label string.
-        The encoding to use if :obj:`input` does note have a BOM.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-    :returns:
-        An ``(output, encoding)`` tuple.
-        :obj:`output` is an iterable of Unicode strings,
-        :obj:`encoding` is the :obj:`Encoding` that is being used.
-
-    """
-
-    decoder = IncrementalDecoder(fallback_encoding, errors)
-    generator = _iter_decode_generator(input, decoder)
-    encoding = next(generator)
-    return generator, encoding
-
-
-def _iter_decode_generator(input, decoder):
-    """Return a generator that first yields the :obj:`Encoding`,
-    then yields output chukns as Unicode strings.
-
-    """
-    decode = decoder.decode
-    input = iter(input)
-    for chunck in input:
-        output = decode(chunck)
-        if output:
-            assert decoder.encoding is not None
-            yield decoder.encoding
-            yield output
-            break
-    else:
-        # Input exhausted without determining the encoding
-        output = decode(b'', final=True)
-        assert decoder.encoding is not None
-        yield decoder.encoding
-        if output:
-            yield output
-        return
-
-    for chunck in input:
-        output = decode(chunck)
-        if output:
-            yield output
-    output = decode(b'', final=True)
-    if output:
-        yield output
-
-
-def iter_encode(input, encoding=UTF8, errors='strict'):
-    """
-    “Pull”-based encoder.
-
-    :param input: An iterable of Unicode strings.
-    :param encoding: An :class:`Encoding` object or a label string.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-    :returns: An iterable of byte strings.
-
-    """
-    # Fail early if `encoding` is an invalid label.
-    encode = IncrementalEncoder(encoding, errors).encode
-    return _iter_encode_generator(input, encode)
-
-
-def _iter_encode_generator(input, encode):
-    for chunck in input:
-        output = encode(chunck)
-        if output:
-            yield output
-    output = encode('', final=True)
-    if output:
-        yield output
-
-
-class IncrementalDecoder(object):
-    """
-    “Push”-based decoder.
-
-    :param fallback_encoding:
-        An :class:`Encoding` object or a label string.
-        The encoding to use if :obj:`input` does note have a BOM.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-
-    """
-    def __init__(self, fallback_encoding, errors='replace'):
-        # Fail early if `encoding` is an invalid label.
-        self._fallback_encoding = _get_encoding(fallback_encoding)
-        self._errors = errors
-        self._buffer = b''
-        self._decoder = None
-        #: The actual :class:`Encoding` that is being used,
-        #: or :obj:`None` if that is not determined yet.
-        #: (Ie. if there is not enough input yet to determine
-        #: if there is a BOM.)
-        self.encoding = None  # Not known yet.
-
-    def decode(self, input, final=False):
-        """Decode one chunk of the input.
-
-        :param input: A byte string.
-        :param final:
-            Indicate that no more input is available.
-            Must be :obj:`True` if this is the last call.
-        :returns: An Unicode string.
-
-        """
-        decoder = self._decoder
-        if decoder is not None:
-            return decoder(input, final)
-
-        input = self._buffer + input
-        encoding, input = _detect_bom(input)
-        if encoding is None:
-            if len(input) < 3 and not final:  # Not enough data yet.
-                self._buffer = input
-                return ''
-            else:  # No BOM
-                encoding = self._fallback_encoding
-        decoder = encoding.codec_info.incrementaldecoder(self._errors).decode
-        self._decoder = decoder
-        self.encoding = encoding
-        return decoder(input, final)
-
-
-class IncrementalEncoder(object):
-    """
-    “Push”-based encoder.
-
-    :param encoding: An :class:`Encoding` object or a label string.
-    :param errors: Type of error handling. See :func:`codecs.register`.
-    :raises: :exc:`~exceptions.LookupError` for an unknown encoding label.
-
-    .. method:: encode(input, final=False)
-
-        :param input: An Unicode string.
-        :param final:
-            Indicate that no more input is available.
-            Must be :obj:`True` if this is the last call.
-        :returns: A byte string.
-
-    """
-    def __init__(self, encoding=UTF8, errors='strict'):
-        encoding = _get_encoding(encoding)
-        self.encode = encoding.codec_info.incrementalencoder(errors).encode
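
For reference, a minimal sketch of the deleted module's public API (`lookup`, `decode`, `encode`, per the source above); the standalone `webencodings` package exposes the same interface, and the sample bytes are illustrative:

```python
# Sketch: WHATWG label lookup, BOM-aware decoding, and encoding, using
# only the functions shown in the deleted source above.
from webencodings import decode, encode, lookup

enc = lookup("latin1")                # label matching is ASCII case-insensitive
print(enc)                            # <Encoding windows-1252> (WHATWG canonical name)
text, used = decode(b"caf\xe9", enc)  # BOM sniffing first, then the fallback
print(text, used.name)                # café windows-1252
print(encode("café"))                 # defaults to UTF-8: b'caf\xc3\xa9'
```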
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/install_lib.py
DELETED
@@ -1,122 +0,0 @@
-import os
-import sys
-from itertools import product, starmap
-import distutils.command.install_lib as orig
-
-
-class install_lib(orig.install_lib):
-    """Don't add compiled flags to filenames of non-Python files"""
-
-    def run(self):
-        self.build()
-        outfiles = self.install()
-        if outfiles is not None:
-            # always compile, in case we have any extension stubs to deal with
-            self.byte_compile(outfiles)
-
-    def get_exclusions(self):
-        """
-        Return a collections.Sized collections.Container of paths to be
-        excluded for single_version_externally_managed installations.
-        """
-        all_packages = (
-            pkg
-            for ns_pkg in self._get_SVEM_NSPs()
-            for pkg in self._all_packages(ns_pkg)
-        )
-
-        excl_specs = product(all_packages, self._gen_exclusion_paths())
-        return set(starmap(self._exclude_pkg_path, excl_specs))
-
-    def _exclude_pkg_path(self, pkg, exclusion_path):
-        """
-        Given a package name and exclusion path within that package,
-        compute the full exclusion path.
-        """
-        parts = pkg.split('.') + [exclusion_path]
-        return os.path.join(self.install_dir, *parts)
-
-    @staticmethod
-    def _all_packages(pkg_name):
-        """
-        >>> list(install_lib._all_packages('foo.bar.baz'))
-        ['foo.bar.baz', 'foo.bar', 'foo']
-        """
-        while pkg_name:
-            yield pkg_name
-            pkg_name, sep, child = pkg_name.rpartition('.')
-
-    def _get_SVEM_NSPs(self):
-        """
-        Get namespace packages (list) but only for
-        single_version_externally_managed installations and empty otherwise.
-        """
-        # TODO: is it necessary to short-circuit here? i.e. what's the cost
-        # if get_finalized_command is called even when namespace_packages is
-        # False?
-        if not self.distribution.namespace_packages:
-            return []
-
-        install_cmd = self.get_finalized_command('install')
-        svem = install_cmd.single_version_externally_managed
-
-        return self.distribution.namespace_packages if svem else []
-
-    @staticmethod
-    def _gen_exclusion_paths():
-        """
-        Generate file paths to be excluded for namespace packages (bytecode
-        cache files).
-        """
-        # always exclude the package module itself
-        yield '__init__.py'
-
-        yield '__init__.pyc'
-        yield '__init__.pyo'
-
-        if not hasattr(sys, 'implementation'):
-            return
-
-        base = os.path.join(
-            '__pycache__', '__init__.' + sys.implementation.cache_tag)
-        yield base + '.pyc'
-        yield base + '.pyo'
-        yield base + '.opt-1.pyc'
-        yield base + '.opt-2.pyc'
-
-    def copy_tree(
-            self, infile, outfile,
-            preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1
-    ):
-        assert preserve_mode and preserve_times and not preserve_symlinks
-        exclude = self.get_exclusions()
-
-        if not exclude:
-            return orig.install_lib.copy_tree(self, infile, outfile)
-
-        # Exclude namespace package __init__.py* files from the output
-
-        from setuptools.archive_util import unpack_directory
-        from distutils import log
-
-        outfiles = []
-
-        def pf(src, dst):
-            if dst in exclude:
-                log.warn("Skipping installation of %s (namespace package)",
-                         dst)
-                return False
-
-            log.info("copying %s -> %s", src, os.path.dirname(dst))
-            outfiles.append(dst)
-            return dst
-
-        unpack_directory(infile, outfile, pf)
-        return outfiles
-
-    def get_outputs(self):
-        outputs = orig.install_lib.get_outputs(self)
-        exclude = self.get_exclusions()
-        if exclude:
-            return [f for f in outputs if f not in exclude]
-        return outputs
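
The `_all_packages` helper above carries its own doctest; a minimal check, runnable against setuptools versions that still ship this command module:

```python
# Sketch: running the doctest embedded in the deleted command by hand
# (assumes a setuptools version that still includes this module).
from setuptools.command.install_lib import install_lib

print(list(install_lib._all_packages("foo.bar.baz")))
# ['foo.bar.baz', 'foo.bar', 'foo']
```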
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/test.py
DELETED
@@ -1,251 +0,0 @@
-import os
-import operator
-import sys
-import contextlib
-import itertools
-import unittest
-from distutils.errors import DistutilsError, DistutilsOptionError
-from distutils import log
-from unittest import TestLoader
-
-from pkg_resources import (
-    resource_listdir,
-    resource_exists,
-    normalize_path,
-    working_set,
-    evaluate_marker,
-    add_activation_listener,
-    require,
-)
-from .._importlib import metadata
-from setuptools import Command
-from setuptools.extern.more_itertools import unique_everseen
-from setuptools.extern.jaraco.functools import pass_none
-
-
-class ScanningLoader(TestLoader):
-    def __init__(self):
-        TestLoader.__init__(self)
-        self._visited = set()
-
-    def loadTestsFromModule(self, module, pattern=None):
-        """Return a suite of all tests cases contained in the given module
-
-        If the module is a package, load tests from all the modules in it.
-        If the module has an ``additional_tests`` function, call it and add
-        the return value to the tests.
-        """
-        if module in self._visited:
-            return None
-        self._visited.add(module)
-
-        tests = []
-        tests.append(TestLoader.loadTestsFromModule(self, module))
-
-        if hasattr(module, "additional_tests"):
-            tests.append(module.additional_tests())
-
-        if hasattr(module, '__path__'):
-            for file in resource_listdir(module.__name__, ''):
-                if file.endswith('.py') and file != '__init__.py':
-                    submodule = module.__name__ + '.' + file[:-3]
-                else:
-                    if resource_exists(module.__name__, file + '/__init__.py'):
-                        submodule = module.__name__ + '.' + file
-                    else:
-                        continue
-                tests.append(self.loadTestsFromName(submodule))
-
-        if len(tests) != 1:
-            return self.suiteClass(tests)
-        else:
-            return tests[0]  # don't create a nested suite for only one return
-
-
-# adapted from jaraco.classes.properties:NonDataProperty
-class NonDataProperty:
-    def __init__(self, fget):
-        self.fget = fget
-
-    def __get__(self, obj, objtype=None):
-        if obj is None:
-            return self
-        return self.fget(obj)
-
-
-class test(Command):
-    """Command to run unit tests after in-place build"""
-
-    description = "run unit tests after in-place build (deprecated)"
-
-    user_options = [
-        ('test-module=', 'm', "Run 'test_suite' in specified module"),
-        (
-            'test-suite=',
-            's',
-            "Run single test, case or suite (e.g. 'module.test_suite')",
-        ),
-        ('test-runner=', 'r', "Test runner to use"),
-    ]
-
-    def initialize_options(self):
-        self.test_suite = None
-        self.test_module = None
-        self.test_loader = None
-        self.test_runner = None
-
-    def finalize_options(self):
-
-        if self.test_suite and self.test_module:
-            msg = "You may specify a module or a suite, but not both"
-            raise DistutilsOptionError(msg)
-
-        if self.test_suite is None:
-            if self.test_module is None:
-                self.test_suite = self.distribution.test_suite
-            else:
-                self.test_suite = self.test_module + ".test_suite"
-
-        if self.test_loader is None:
-            self.test_loader = getattr(self.distribution, 'test_loader', None)
-        if self.test_loader is None:
-            self.test_loader = "setuptools.command.test:ScanningLoader"
-        if self.test_runner is None:
-            self.test_runner = getattr(self.distribution, 'test_runner', None)
-
-    @NonDataProperty
-    def test_args(self):
-        return list(self._test_args())
-
-    def _test_args(self):
-        if not self.test_suite:
-            yield 'discover'
-        if self.verbose:
-            yield '--verbose'
-        if self.test_suite:
-            yield self.test_suite
-
-    def with_project_on_sys_path(self, func):
-        """
-        Backward compatibility for project_on_sys_path context.
-        """
-        with self.project_on_sys_path():
-            func()
-
-    @contextlib.contextmanager
-    def project_on_sys_path(self, include_dists=[]):
-        self.run_command('egg_info')
-
-        # Build extensions in-place
-        self.reinitialize_command('build_ext', inplace=1)
-        self.run_command('build_ext')
-
-        ei_cmd = self.get_finalized_command("egg_info")
-
-        old_path = sys.path[:]
-        old_modules = sys.modules.copy()
-
-        try:
-            project_path = normalize_path(ei_cmd.egg_base)
-            sys.path.insert(0, project_path)
-            working_set.__init__()
-            add_activation_listener(lambda dist: dist.activate())
-            require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version))
-            with self.paths_on_pythonpath([project_path]):
-                yield
-        finally:
-            sys.path[:] = old_path
-            sys.modules.clear()
-            sys.modules.update(old_modules)
-            working_set.__init__()
-
-    @staticmethod
-    @contextlib.contextmanager
-    def paths_on_pythonpath(paths):
-        """
-        Add the indicated paths to the head of the PYTHONPATH environment
-        variable so that subprocesses will also see the packages at
-        these paths.
-
-        Do this in a context that restores the value on exit.
-        """
-        nothing = object()
-        orig_pythonpath = os.environ.get('PYTHONPATH', nothing)
-        current_pythonpath = os.environ.get('PYTHONPATH', '')
-        try:
-            prefix = os.pathsep.join(unique_everseen(paths))
-            to_join = filter(None, [prefix, current_pythonpath])
-            new_path = os.pathsep.join(to_join)
-            if new_path:
-                os.environ['PYTHONPATH'] = new_path
-            yield
-        finally:
-            if orig_pythonpath is nothing:
-                os.environ.pop('PYTHONPATH', None)
-            else:
-                os.environ['PYTHONPATH'] = orig_pythonpath
-
-    @staticmethod
-    def install_dists(dist):
-        """
-        Install the requirements indicated by self.distribution and
-        return an iterable of the dists that were built.
-        """
-        ir_d = dist.fetch_build_eggs(dist.install_requires)
-        tr_d = dist.fetch_build_eggs(dist.tests_require or [])
-        er_d = dist.fetch_build_eggs(
-            v
-            for k, v in dist.extras_require.items()
-            if k.startswith(':') and evaluate_marker(k[1:])
-        )
-        return itertools.chain(ir_d, tr_d, er_d)
-
-    def run(self):
-        self.announce(
-            "WARNING: Testing via this command is deprecated and will be "
-            "removed in a future version. Users looking for a generic test "
-            "entry point independent of test runner are encouraged to use "
-            "tox.",
-            log.WARN,
-        )
-
-        installed_dists = self.install_dists(self.distribution)
-
-        cmd = ' '.join(self._argv)
-        if self.dry_run:
-            self.announce('skipping "%s" (dry run)' % cmd)
-            return
-
-        self.announce('running "%s"' % cmd)
-
-        paths = map(operator.attrgetter('location'), installed_dists)
-        with self.paths_on_pythonpath(paths):
-            with self.project_on_sys_path():
-                self.run_tests()
-
-    def run_tests(self):
-        test = unittest.main(
-            None,
-            None,
-            self._argv,
-            testLoader=self._resolve_as_ep(self.test_loader),
-            testRunner=self._resolve_as_ep(self.test_runner),
-            exit=False,
-        )
-        if not test.result.wasSuccessful():
-            msg = 'Test failed: %s' % test.result
-            self.announce(msg, log.ERROR)
-            raise DistutilsError(msg)
-
-    @property
-    def _argv(self):
-        return ['unittest'] + self.test_args
-
-    @staticmethod
-    @pass_none
-    def _resolve_as_ep(val):
-        """
-        Load the indicated attribute value, called, as a as if it were
-        specified as an entry point.
-        """
-        return metadata.EntryPoint(value=val, name=None, group=None).load()()
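
The `paths_on_pythonpath` helper above is a plain static context manager, so it can be exercised on its own; a minimal sketch with illustrative paths, assuming a setuptools version that still ships the deprecated command:

```python
# Sketch: prepending paths to PYTHONPATH via the deleted command's
# static helper; the directory names are illustrative.
import os
from setuptools.command.test import test

with test.paths_on_pythonpath(["/tmp/proj-a", "/tmp/proj-b"]):
    print(os.environ.get("PYTHONPATH"))  # /tmp/proj-a:/tmp/proj-b[:previous]
# On exit, the previous PYTHONPATH value (or its absence) is restored.
```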
spaces/Bl1tzie/Jam/Dockerfile
DELETED
@@ -1,11 +0,0 @@
-FROM node:18
-
-WORKDIR /app
-
-RUN npm install express express-http-proxy
-
-COPY . .
-
-EXPOSE 7860
-
-CMD [ "node", "server.js" ]
spaces/BramVanroy/mateo-demo/Dockerfile
DELETED
@@ -1,30 +0,0 @@
-FROM ubuntu:latest
-LABEL authors="Bram Vanroy"
-
-ENV DEBIAN_FRONTEND=noninteractive
-RUN apt-get -y update \
-    && apt-get -y install build-essential curl git software-properties-common
-
-RUN add-apt-repository ppa:deadsnakes/ppa \
-    && apt-get -y update \
-    && apt-get -y install python3.10 python3.10-dev python3-pip python3.10-distutils \
-    && ln -s /usr/bin/python3.10 /usr/bin/python \
-    && rm -rf /var/lib/apt/lists/*
-
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME /home/user
-ENV PATH $HOME/.local/bin:$PATH
-
-WORKDIR $HOME
-RUN git clone https://github.com/BramVanroy/mateo-demo.git
-WORKDIR $HOME/mateo-demo
-
-RUN python -m pip install --no-cache-dir --upgrade pip && python -m pip install --no-cache-dir --upgrade .
-
-EXPOSE 7860
-HEALTHCHECK CMD curl --fail http://localhost:7860/_stcore/health
-
-WORKDIR $HOME/mateo-demo/src/mateo_st
-
-CMD ["streamlit", "run", "01_🎈_MATEO.py", "--server.port", "7860", "--server.enableXsrfProtection", "false", "--", "--no_cuda"]
spaces/Branon/oai-proxy/greeting.md
DELETED
@@ -1 +0,0 @@
-The coom shall flow.
spaces/BreadBytes1/PL-Dashboard/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: PL Dashboard
-emoji: 🦀
-colorFrom: yellow
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: gpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference