parquet-converter committed on
Commit
f9510ca
·
1 Parent(s): cd66926

Update parquet files (step 118 of 249)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install QuickBooks Desktop Pro 2021 with These Easy Steps.md +0 -59
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 5 Download Ios.md +0 -31
  3. spaces/1gistliPinn/ChatGPT4/Examples/5.1 Surround Sound Tamil Mp3 Songs UPD Free Download.md +0 -6
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubs Dark Riddle APK Hile The Most Challenging and Scary Game Ever.md +0 -82
  5. spaces/1phancelerku/anime-remove-background/8 Ball Pool bitAIM APK A Complete Guide to the Ultimate Pool Game Experience.md +0 -146
  6. spaces/1phancelerku/anime-remove-background/Boost Your Brain Power with Mental Arithmetic Techniques.md +0 -111
  7. spaces/1phancelerku/anime-remove-background/Enjoy Clash of Clans APK with Unlimited Gems Gold and Elixir.md +0 -94
  8. spaces/1phancelerku/anime-remove-background/Enjoy the Festive Season with Daystar Choirs 12 Days of Christmas MP3 Download.md +0 -147
  9. spaces/801artistry/RVC801/demucs/__main__.py +0 -317
  10. spaces/A666sxr/Genshin_TTS/text/__init__.py +0 -56
  11. spaces/AI-Hobbyist/Hoyo-RVC/trainset_preprocess_pipeline_print.py +0 -139
  12. spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py +0 -99
  13. spaces/AIFILMS/StyleGANEX/models/bisenet/resnet.py +0 -109
  14. spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm.py +0 -1444
  15. spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/audio.py +0 -179
  16. spaces/AILab-CVC/SEED-LLaMA/gradio_demo/conversation.py +0 -190
  17. spaces/Abhaykoul/HelpingAI-2.0/README.md +0 -12
  18. spaces/Aditya9790/yolo7-object-tracking/utils/torch_utils.py +0 -374
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/effectlayer-plugin.js +0 -23
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/AddChild.js +0 -16
  21. spaces/Andy1621/uniformer_image_detection/configs/ssd/README.md +0 -21
  22. spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/res_layer.py +0 -187
  23. spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/README.md +0 -12
  24. spaces/Anthony7906/MengHuiMXD_GPT/Dockerfile +0 -15
  25. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/core.py +0 -0
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py +0 -0
  27. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp +0 -117
  28. spaces/BAAI/vid2vid-zero/gradio_demo/app_running.py +0 -169
  29. spaces/BFH/BKMotionsAI/app.py +0 -86
  30. spaces/Banbri/zcvzcv/src/app/interface/zoom/index.tsx +0 -35
  31. spaces/Bart92/RVC_HF/colab_for_mdx.py +0 -71
  32. spaces/BartPoint/VoiceChange_Beta/infer_pack/transforms.py +0 -209
  33. spaces/Benson/text-generation/Examples/Avakin Life Pc.md +0 -50
  34. spaces/Benson/text-generation/Examples/Descargar Bus De Conduccin De Telolet 3d Mod Apk V1.2. 4b.md +0 -60
  35. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/json.py +0 -140
  36. spaces/CAMP-ViL/Xplainer/app.py +0 -137
  37. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_templates/layout.html +0 -35
  38. spaces/CVPR/LIVE/cuda_utils.h +0 -53
  39. spaces/CVPR/LIVE/pybind11/tests/test_eigen.cpp +0 -327
  40. spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h +0 -41
  41. spaces/CVPR/LIVE/thrust/thrust/iterator/detail/reverse_iterator_base.h +0 -42
  42. spaces/CVPR/LIVE/thrust/thrust/system/cuda/pointer.h +0 -321
  43. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/generate.h +0 -22
  44. spaces/CVPR/WALT/mmdet/datasets/pipelines/transforms.py +0 -1812
  45. spaces/CVPR/WALT/mmdet/models/utils/transformer.py +0 -860
  46. spaces/Chintan-Donda/KKMS-KSSW-HF/README.md +0 -12
  47. spaces/CikeyQI/Yunzai/Yunzai/lib/config/redis.js +0 -76
  48. spaces/Cletrason/Cletrason-toad-mario-movie/config.py +0 -1
  49. spaces/Crow34/Comicdraw/README.md +0 -13
  50. spaces/DESUCLUB/BLLAMA/README.md +0 -20
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install QuickBooks Desktop Pro 2021 with These Easy Steps.md DELETED
@@ -1,59 +0,0 @@
1
-
2
- <h1>How to Download and Install QuickBooks Desktop Pro 2021</h1>
3
- <p>QuickBooks Desktop Pro 2021 is the latest version of the popular accounting software for small and medium-sized businesses. It offers new features and improvements that can help you manage your finances more efficiently and effectively. In this article, we will show you how to download and install QuickBooks Desktop Pro 2021 on your computer.</p>
4
- <h2>2021 quickbooks desktop pro download</h2><br /><p><b><b>Download Zip</b> &raquo; <a href="https://byltly.com/2uKvfV">https://byltly.com/2uKvfV</a></b></p><br /><br />
5
- <h2>Step 1: Download QuickBooks Desktop Pro 2021</h2>
6
- <p>To download QuickBooks Desktop Pro 2021, you need to have a valid license or subscription from Intuit. You can purchase one from their official website or from a trusted reseller. Once you have your license or subscription, you can follow these steps to download the software:</p>
7
- <ul>
8
- <li>Go to <a href="https://downloads.quickbooks.com/app/qbdt/products">https://downloads.quickbooks.com/app/qbdt/products</a> and select your country.</li>
9
- <li>Enter your product and license number in the fields provided and click <b>Search</b>.</li>
10
- <li>Select <b>QuickBooks Desktop Pro 2021</b> from the list of products and click <b>Download</b>.</li>
11
- <li>Save the file to a convenient location on your computer.</li>
12
- </ul>
13
- <h2>Step 2: Install QuickBooks Desktop Pro 2021</h2>
14
- <p>After downloading the file, you can install QuickBooks Desktop Pro 2021 by following these steps:</p>
15
- <ul>
16
- <li>Double-click the file you downloaded to launch the installer.</li>
17
- <li>Click <b>Yes</b> to allow the program to make changes to your computer.</li>
18
- <li>Select <b>I accept the terms in the license agreement</b> and click <b>Next</b>.</li>
19
- <li>Enter your license and product number in the fields provided and click <b>Next</b>.</li>
20
- <li>Select <b>Express</b> as the installation type and click <b>Next</b>.</li>
21
- <li>Select where you want to install QuickBooks Desktop Pro 2021 and click <b>Install</b>.</li>
22
- <li>Wait for the installation to complete and click <b>Open QuickBooks</b>.</li>
23
- </ul>
24
- <h2>Congratulations! You have successfully downloaded and installed QuickBooks Desktop Pro 2021 on your computer.</h2>
25
- <p>You can now start using the software to manage your business finances. If you need any help or support, you can visit the official website of Intuit or contact their customer service team. You can also check out their online community forums and tutorials for more tips and tricks on how to use QuickBooks Desktop Pro 2021.</p>
26
-
27
- <h2>What's New in QuickBooks Desktop Pro 2021?</h2>
28
- <p>QuickBooks Desktop Pro 2021 comes with several new features and enhancements that can make your accounting tasks easier and faster. Some of the highlights include:</p>
29
- <ul>
30
- <li><b>Improved bank feeds</b>: You can now connect your bank accounts and credit cards to QuickBooks Desktop Pro 2021 and automatically download and categorize your transactions. You can also customize the rules for matching and adding transactions to save time and reduce errors.</li>
31
- <li><b>Receipt management</b>: You can now scan and upload your receipts to QuickBooks Desktop Pro 2021 and attach them to your transactions. You can also use the QuickBooks Desktop mobile app to capture and upload receipts on the go. This can help you track your expenses and prepare for tax time.</li>
32
- <li><b>Data level permissions</b>: You can now set up different levels of access for your users based on the data they need to see and work with. You can also assign specific roles and permissions to your employees, contractors, and accountants. This can help you protect your sensitive data and prevent unauthorized changes.</li>
33
- <li><b>Automated statements</b>: You can now schedule and send recurring statements to your customers automatically. You can also customize the frequency, format, and content of your statements. This can help you improve your cash flow and customer satisfaction.</li>
34
- </ul>
35
- <h2>How to Upgrade to QuickBooks Desktop Pro 2021?</h2>
36
- <p>If you are already using an older version of QuickBooks Desktop Pro, you can easily upgrade to QuickBooks Desktop Pro 2021 without losing any of your data or settings. You just need to follow these steps:</p>
37
- <p></p>
38
- <ul>
39
- <li>Make sure you have a backup of your company file before upgrading.</li>
40
- <li>Download QuickBooks Desktop Pro 2021 from the link provided in Step 1 above.</li>
41
- <li>Run the installer and follow the instructions on the screen.</li>
42
- <li>Select <b>Upgrade</b> as the installation type and choose the version of QuickBooks Desktop Pro you are currently using.</li>
43
- <li>Click <b>Next</b> and follow the prompts to complete the upgrade process.</li>
44
- <li>Open QuickBooks Desktop Pro 2021 and verify that your company file is updated and working properly.</li>
45
- </ul>
46
- <h2>How to Get Started with QuickBooks Desktop Pro 2021?</h2>
47
- <p>If you are new to QuickBooks Desktop Pro, you can get started with QuickBooks Desktop Pro 2021 by following these steps:</p>
48
- <ul>
49
- <li>Create a new company file or use the sample company file provided by QuickBooks Desktop Pro 2021.</li>
50
- <li>Set up your company information, preferences, chart of accounts, products and services, customers, vendors, employees, etc.</li>
51
- <li>Connect your bank accounts and credit cards to QuickBooks Desktop Pro 2021 and download your transactions.</li>
52
- <li>Scan and upload your receipts to QuickBooks Desktop Pro 2021 and attach them to your transactions.</li>
53
- <li>Create invoices, bills, estimates, sales receipts, payments, etc. for your customers and vendors.</li>
54
- <li>Record deposits, transfers, checks, etc. for your bank accounts and credit cards.</li>
55
- <li>Reconcile your bank accounts and credit cards with QuickBooks Desktop Pro 2021.</li>
56
- <li>Run reports and statements to monitor your business performance and financial health.</li>
57
- </ul></p> ddb901b051<br />
58
- <br />
59
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 5 Download Ios.md DELETED
@@ -1,31 +0,0 @@
1
-
2
- <h1>How to Download and Play Forza Horizon 5 on iOS Devices</h1>
3
- <p>Forza Horizon 5 is the latest installment of the popular racing game series developed by Playground Games and published by Microsoft. The game is set in Mexico, where you can explore a diverse and stunning open world with hundreds of cars to choose from. You can race, drift, stunt, and customize your vehicles as you compete in various events and challenges.</p>
4
- <h2>forza horizon 5 download ios</h2><br /><p><b><b>Download Zip</b> &#10002; &#10002; &#10002; <a href="https://byltly.com/2uKA7b">https://byltly.com/2uKA7b</a></b></p><br /><br />
5
- <p>If you are an iOS user, you might be wondering if you can play Forza Horizon 5 on your iPhone or iPad. The good news is that you can, thanks to a mobile version of the game that is available on the App Store. The mobile version of Forza Horizon 5 offers the same gameplay and graphics as the console and PC versions, but with some optimizations and adjustments for touch controls and smaller screens.</p>
6
- <p>In this article, we will show you how to download and play Forza Horizon 5 on your iOS devices in a few simple steps.</p>
7
- <h2>Step 1: Go to the App Store</h2>
8
- <p>The first step is to go to the App Store on your iOS device and search for Forza Horizon 5. You can also use this link to access the game page directly. You will see a screen with some information and screenshots of the game, as well as a download button.</p>
9
- <h2>Step 2: Download the game</h2>
10
- <p>The next step is to tap on the download button and wait for the game to be installed on your device. The game size is about 345 MB, so make sure you have enough space and a stable internet connection. You might also need to enter your Apple ID and password to confirm the download.</p>
11
- <h2>Step 3: Launch the game</h2>
12
- <p>Once the download is complete, you can launch the game from your home screen or app library. You will see a splash screen with the Forza Horizon 5 logo and some loading animations. The game might take some time to load depending on your device performance and network speed.</p>
13
- <p></p>
14
- <h2>Step 4: Enjoy the game</h2>
15
- <p>After the game loads, you will see a main menu with some options to start playing. You can choose between solo or online modes, customize your profile and settings, view your achievements and leaderboards, and more. You can also access a tutorial that will teach you the basics of the game controls and mechanics.</p>
16
- <p>To play the game, you will need to use touch gestures on your screen to steer, accelerate, brake, drift, and activate special features. You can also tilt your device to use motion controls if you prefer. The game will adapt to your skill level and preferences as you progress through the game.</p>
17
- <h2>Conclusion</h2>
18
- <p>In this article, we have shown you how to download and play Forza Horizon 5 on your iOS devices. We hope this guide was helpful and easy to follow. Now you can enjoy one of the best racing games ever made on your iPhone or iPad anytime and anywhere.</p>
19
-
20
- <h2>Some Tips and Tricks for Forza Horizon 5 on iOS</h2>
21
- <p>If you want to get the most out of Forza Horizon 5 on your iOS devices, here are some tips and tricks that might help you:</p>
22
- <ul>
23
- <li>Use the photo mode to capture and share your best moments in the game. You can access the photo mode by tapping on the camera icon on the top right corner of the screen. You can adjust the camera angle, zoom, focus, filters, and more. You can also save and share your photos with your friends or on social media.</li>
24
- <li>Complete the seasonal events and challenges to earn rewards and unlock new cars and features. You can view the current season and its objectives by tapping on the calendar icon on the top left corner of the screen. You can also join online events and races with other players around the world.</li>
25
- <li>Upgrade and customize your cars to improve their performance and appearance. You can access the garage by tapping on the car icon on the bottom left corner of the screen. You can change the paint, wheels, decals, spoilers, and more. You can also tune your car's engine, suspension, brakes, and more.</li>
26
- <li>Explore the map and discover hidden secrets and locations. You can access the map by tapping on the compass icon on the bottom right corner of the screen. You can zoom in and out, move around, and set waypoints. You can also find collectibles, barn finds, speed traps, danger signs, and more.</li>
27
- <li>Have fun and experiment with different cars and modes. You can switch between different cars by tapping on the car icon on the top center of the screen. You can also change the difficulty level, weather, time of day, and more by tapping on the settings icon on the top right corner of the screen.</li>
28
- </ul>
29
- <p>Forza Horizon 5 is a game that offers endless possibilities and fun for racing fans. Whether you want to race, drift, stunt, or explore, you will find something to enjoy in this game. Download Forza Horizon 5 on your iOS devices today and experience the thrill of driving in Mexico.</p> ddb901b051<br />
30
- <br />
31
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/5.1 Surround Sound Tamil Mp3 Songs UPD Free Download.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>5.1 surround sound tamil mp3 songs free download</h2><br /><p><b><b>Download Zip</b> > <a href="https://imgfil.com/2uy09I">https://imgfil.com/2uy09I</a></b></p><br /><br />
2
-
3
- Welcome to Movie World Tamil Film Flicks YouTube Channel Movie World Entertainments is the leading ... Download Hungama Play app to get access to unlimited free movies, latest music videos, kids ... Manzoor sakhirani all mp3 songs download ... Sec 5.1 geometric and algebra connections linear equations answers. 1fdad05405<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubs Dark Riddle APK Hile The Most Challenging and Scary Game Ever.md DELETED
@@ -1,82 +0,0 @@
1
-
2
- <h1>Dark Riddle APK Hile Android Oyun Club: A Review</h1>
3
- <p>If you are looking for a game that combines escape, adventure and puzzle elements with stealth, humor and mystery, you might want to check out Dark Riddle. This is a popular game on the Android platform that lets you explore your neighbor's house and discover his secrets. But what if you want to enjoy the game without any limitations or interruptions? That's where Dark Riddle APK Hile comes in. This is a modded version of the game that gives you unlimited money and removes ads and in-app purchases. In this article, we will review Dark Riddle APK Hile and tell you how to download and install it from Android Oyun Club. We will also discuss the features, benefits, drawbacks and risks of using this modded version.</p>
4
- <h2>What is Dark Riddle?</h2>
5
- <h3>A game of escape, adventure and puzzle</h3>
6
- <p>Dark Riddle is a game developed by Nika Entertainment that was released in 2019. It is inspired by other games like Hello Neighbor and Granny, where you have to sneak into your neighbor's house and find out what he is hiding. You can use various items and tools to distract, trick or fight your neighbor, who will chase you if he sees you. You can also interact with other characters and objects in the game world, such as animals, cars, plants and more. The game has different levels and modes, each with its own challenges and surprises.</p>
7
- <h2>dark riddle apk hile android oyun club</h2><br /><p><b><b>Download Zip</b> &#9734;&#9734;&#9734; <a href="https://urlin.us/2uSZ6K">https://urlin.us/2uSZ6K</a></b></p><br /><br />
8
- <h3>A game of stealth, humor and mystery</h3>
9
- <p>Dark Riddle is not just a game of escape, adventure and puzzle. It is also a game of stealth, humor and mystery. You have to use your skills and creativity to avoid being detected by your neighbor, who has a lot of traps and cameras in his house. You can also use your sense of humor to prank your neighbor or make him laugh. The game has a lot of funny moments and dialogues that will make you smile. Moreover, the game has a lot of mystery and suspense that will keep you hooked. You will want to know more about your neighbor's secrets and motives, as well as the story behind the game.</p>
10
- <h2>What is Dark Riddle APK Hile?</h2>
11
- <h3>A modded version of the game with unlimited money</h3>
12
- <p>Dark Riddle APK Hile is a modded version of the game that gives you unlimited money. This means that you can buy anything you want in the game without worrying about the cost. You can get all the items, skins and weapons that are available in the game store. You can also upgrade your skills and abilities to make yourself stronger and faster. With unlimited money, you can enjoy the game without any restrictions or limitations.</p>
13
- <h3>A way to enjoy the game without ads or in-app purchases</h3>
14
- <p>Dark Riddle APK Hile is also a way to enjoy the game without ads or in-app purchases. This means that you can play the game without any interruptions or annoyances. You don't have to watch any ads or spend any real money to get extra features or resources in the game. You can play the game smoothly and comfortably without any hassle or pressure.</p>
15
- <h2>How to download and install Dark Riddle APK Hile?</h2>
16
- <h3>The steps to download the file from Android Oyun Club</h3>
17
- <p>Dark Riddle APK Hile is available for download from Android Oyun Club, a website that offers modded versions of various Android games. To download the file from Android Oyun Club, you need to follow these steps:</p>
18
- <ol>
19
- <li>Go to the official website of Android Oyun Club at <a href="">https://androidoyun.club/</a></li>
20
- <li>Search for Dark Riddle in the search bar or browse the categories to find the game.</li>
21
- <li>Click on the game title and scroll down to the download section.</li>
22
- <li>Choose the version of Dark Riddle APK Hile that you want to download and click on the download button.</li>
23
- <li>Wait for the download to complete and save the file on your device.</li>
24
- </ol>
25
- <h3>The steps to install the file on your device</h3>
26
- <p>After downloading the file from Android Oyun Club, you need to install it on your device. To install the file on your device, you need to follow these steps:</p>
27
- <ol>
28
- <li>Go to the settings of your device and enable the option to install apps from unknown sources.</li>
29
- <li>Locate the downloaded file on your device and tap on it.</li>
30
- <li>Follow the instructions on the screen and allow the necessary permissions.</li>
31
- <li>Wait for the installation to finish and launch the game.</li>
32
- </ol>
33
- <h2>What are the features and benefits of Dark Riddle APK Hile?</h2>
34
- <h3>The features of the modded version, such as unlocked items, skins and weapons</h3>
35
- <p>Dark Riddle APK Hile has many features that make it different from the original version of the game. Some of these features are:</p>
36
- <p>This is a first-person adventure thriller with an interactive environment and interesting quests. Solve puzzles and uncover the secrets of a suspicious neighbor who lives across from you.<br />
37
- Your adventure begins in an unusual city where you can find many useful and unique items. You will meet a police officer and a seller of alien devices, and during the game you will get acquainted with unusual creatures. Each item and character has a huge story behind it.<br />
38
- The game has a lot of humor, various levels of difficulty and multiple endings - the outcome of the story depends entirely on your actions and decisions. You can use headphones to explore the city in detail and better understand the plot.</p>
39
- <ul>
40
- <li>Unlimited money: You can buy anything you want in the game without worrying about the cost.</li>
41
- <li>Unlocked items: You can access all the items that are available in the game store, such as flashlights, cameras, binoculars, etc.</li>
42
- <li>Unlocked skins: You can customize your character with different skins, such as clown, pirate, ninja, etc.</li>
43
- <li>Unlocked weapons: You can use different weapons to fight your neighbor, such as guns, knives, bats, etc.</li>
44
- </ul>
45
- <h3>The benefits of the modded version, such as more fun, freedom and challenge</h3>
46
- <p>Dark Riddle APK Hile has many benefits that make it more fun, freedom and challenge than the original version of the game. Some of these benefits are:</p>
47
- <ul>
48
- <li>More fun: You can enjoy the game without any limitations or interruptions. You can prank your neighbor or make him laugh with your humor and creativity.</li>
49
- <li>More freedom: You can explore your neighbor's house and discover his secrets without any restrictions or limitations. You can use any item or tool you want to solve puzzles and escape.</li>
50
- <li>More challenge: You can increase the difficulty and excitement of the game by using different weapons and skins. You can also face new challenges and surprises in each level and mode.</li>
51
- </ul> <h2>What are the drawbacks and risks of Dark Riddle APK Hile?</h2>
52
- <h3>The drawbacks of the modded version, such as possible bugs, glitches and crashes</h3>
53
- <p>Dark Riddle APK Hile is not a perfect version of the game. It has some drawbacks that may affect your gaming experience. Some of these drawbacks are:</p>
54
- <ul>
55
- <li>Possible bugs: The modded version may have some bugs or errors that may cause the game to malfunction or behave unexpectedly.</li>
56
- <li>Possible glitches: The modded version may have some glitches or flaws that may affect the graphics, sound or gameplay of the game.</li>
57
- <li>Possible crashes: The modded version may have some crashes or freezes that may cause the game to stop working or close abruptly.</li>
58
- </ul>
59
- <h3>The risks of the modded version, such as malware, viruses and bans</h3>
60
- <p>Dark Riddle APK Hile is not a safe version of the game. It has some risks that may harm your device or account. Some of these risks are:</p>
61
- <ul>
62
- <li>Possible malware: The modded version may have some malware or malicious code that may infect your device or steal your data.</li>
63
- <li>Possible viruses: The modded version may have some viruses or harmful programs that may damage your device or corrupt your files.</li>
64
- <li>Possible bans: The modded version may have some bans or penalties that may prevent you from playing the game or accessing your account.</li>
65
- </ul>
66
- <h2>Conclusion</h2>
67
- <p>Dark Riddle APK Hile is a modded version of the game that gives you unlimited money and removes ads and in-app purchases. It also unlocks all the items, skins and weapons in the game. It is a way to enjoy the game without any limitations or interruptions. However, it also has some drawbacks and risks that may affect your gaming experience or harm your device or account. Therefore, you should be careful and responsible when using this modded version. You should also respect the original developers and creators of the game and support them if you like their work.</p>
68
- <h2>FAQs</h2>
69
- <ol>
70
- <li>Q: Is Dark Riddle APK Hile legal?</li>
71
- <li>A: Dark Riddle APK Hile is not legal. It is a modded version of the game that violates the terms and conditions of the original game. It also infringes the intellectual property rights of the original developers and creators of the game.</li>
72
- <li>Q: Is Dark Riddle APK Hile safe?</li>
73
- <li>A: Dark Riddle APK Hile is not safe. It is a modded version of the game that may contain malware, viruses or bans that may harm your device or account. It also may have bugs, glitches or crashes that may affect your gaming experience.</li>
74
- <li>Q: How to update Dark Riddle APK Hile?</li>
75
- <li>A: Dark Riddle APK Hile is not easy to update. It is a modded version of the game that may not be compatible with the latest version of the original game. You may need to download and install a new version of Dark Riddle APK Hile from Android Oyun Club whenever there is an update available.</li>
76
- <li>Q: How to uninstall Dark Riddle APK Hile?</li>
77
- <li>A: Dark Riddle APK Hile is easy to uninstall. You can simply delete the file from your device or go to the settings of your device and uninstall the app like any other app.</li>
78
- <li>Q: Where to get more information about Dark Riddle APK Hile?</li>
79
- <li>A: You can get more information about Dark Riddle APK Hile from Android Oyun Club, the website that offers this modded version of the game. You can also visit the official website or social media pages of Dark Riddle, the original game, to get more information about it.</li>
80
- </ol></p> 197e85843d<br />
81
- <br />
82
- <br />
spaces/1phancelerku/anime-remove-background/8 Ball Pool bitAIM APK A Complete Guide to the Ultimate Pool Game Experience.md DELETED
@@ -1,146 +0,0 @@
1
- <br />
2
- <h1>What is 8 ball pool bitaim apk?</h1>
3
- <p>If you are a fan of pool games, you might have heard of or played 8 ball pool, one of the most popular and addictive online multiplayer games on Android. 8 ball pool is a game where you can compete with players from all over the world in various modes and tournaments, using your skills and strategies to pocket balls and win coins and rewards. But what if you want to have an edge over your opponents and improve your game performance? That's where 8 ball pool bitaim apk comes in.</p>
4
- <h2>8 ball pool bitaim apk</h2><br /><p><b><b>Download File</b> &middot;&middot;&middot;&middot;&middot; <a href="https://jinyurl.com/2uNSOG">https://jinyurl.com/2uNSOG</a></b></p><br /><br />
5
- <p>8 ball pool bitaim apk is a modded version of 8 ball pool that allows you to hack the aim of your striker and hit the pieces with perfect accuracy. With 8 ball pool bitaim apk, you can win every match and earn more coins and gems. But is 8 ball pool bitaim apk safe and legal to use? How can you download and install it on your device? And what are its features and benefits? In this article, we will answer all these questions and more, so keep reading.</p>
6
- <h2>How to play 8 ball pool?</h2>
7
- <p>Before we dive into the details of 8 ball pool bitaim apk, let's first review the basics of how to play 8 ball pool. 8 ball pool is a game played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.</p>
8
- <p>To start the game, one player must break the rack by hitting the cue ball into the triangle of object balls. For the break shot to be legal, the breaker must either pocket a number ball or drive at least four number balls to one or more rails. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to make a legal break, the opponent can choose to break again or accept the table as it is.</p>
9
- <p>After a legal break, if any object ball is pocketed, then that determines whether that player has solids or stripes for that game. If no object ball is pocketed on a legal break or if both a solid and a stripe are pocketed on a legal break then it is an open table until one player pockets either a solid or stripe on their turn. Once solids or stripes have been determined for each player then they must continue shooting at their designated group until they have cleared their group from the table.</p>
10
- <p>A player's turn continues until they fail to pocket one of their group or commit a foul. A foul occurs when the player fails to hit any ball with the cue ball, hits the wrong group of balls first, pockets the cue ball, pockets the 8-ball before clearing their group, pockets the 8-ball in the wrong pocket, or drives any ball off the table. If a player commits a foul, their opponent gets ball in hand, meaning they can place the cue ball anywhere on the table for their next shot.</p>
11
- <p>bitAIM app for carrom pool<br />
12
- bitAIM+ download apk free<br />
13
- bitAIM AI aim assistance tool<br />
14
- bitAIM for carrom pool practices<br />
15
- bitAIM apk latest version 3.6.54<br />
16
- bitAIM image recognition technique<br />
17
- bitAIM android app free download<br />
18
- bitAIM apkcombo apps tools<br />
19
- bitAIM app.ai.lab.bitaimplus<br />
20
- bitAIM apk mirror download<br />
21
- bitAIM carrom pool master shots<br />
22
- bitAIM apk file size 28 MB<br />
23
- bitAIM app developer bitAIM+<br />
24
- bitAIM apk update Aug 12, 2022<br />
25
- bitAIM app category tools<br />
26
- bitAIM apk google play id<br />
27
- bitAIM app installs 500+<br />
28
- bitAIM apk direct and indirect shot<br />
29
- bitAIM app description tools advertisement<br />
30
- bitAIM apk multi-collision of coin<br />
31
- bitAIM app reviews and ratings<br />
32
- bitAIM apk how to install<br />
33
- bitAIM app features and benefits<br />
34
- bitAIM apk compatible devices<br />
35
- bitAIM app screenshots and videos<br />
36
- bitAIM apk mod unlimited coins<br />
37
- bitAIM app alternatives and similar apps<br />
38
- bitAIM apk download for pc windows 10<br />
39
- bitAIM app support and contact information<br />
40
- bitAIM apk no root required<br />
41
- bitAIM app privacy policy and terms of service<br />
42
- bitAIM apk online generator tool<br />
43
- bitAIM app tips and tricks guide<br />
44
- bitAIM apk offline mode available<br />
45
- bitAIM app user feedback and suggestions<br />
46
- bitAIM apk safe and secure download link<br />
47
- bitAIM app pros and cons analysis<br />
48
- bitAIM apk hack version download 2023<br />
49
- bitAIM app frequently asked questions (FAQs)<br />
50
- bitAIM apk premium features unlocked</p>
51
- <p>The game ends when one player legally pockets the 8-ball in a designated pocket after clearing their group. The player must call the pocket for the 8-ball before shooting. If the player pockets the 8-ball in an uncalled pocket, or pockets the 8-ball and the cue ball on the same shot, they lose the game.</p>
52
- <h2>How to download and install bitaim apk?</h2>
53
- <p>Now that you know how to play 8 ball pool, you might be wondering how to get bitaim apk on your device. Bitaim apk is not available on the official Google Play Store, so you will need to download it from a third-party source. Here are the steps and requirements for downloading and installing bitaim apk:</p>
54
- <ol>
55
- <li>Make sure your device has enough storage space and meets the minimum system requirements for running 8 ball pool. The game requires Android 4.4 or higher and at least 1 GB of RAM.</li>
56
- <li>Enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li>
57
- <li>Download bitaim apk from a reliable and trusted website. You can search for bitaim apk on Google or use this link: (https://bitaimapk.com/). Be careful not to download any fake or malicious files that might harm your device.</li>
58
- <li>Locate the downloaded file on your device and tap on it to start the installation process. Follow the instructions on the screen and grant the necessary permissions for the app to run.</li>
59
- <li>Launch 8 ball pool bitaim apk and enjoy playing with unlimited aim and accuracy.</li>
60
- </ol>
61
- <h2>What are the features of bitaim apk?</h2>
62
- <p>Bitaim apk is a modded version of 8 ball pool that offers many features and benefits that can enhance your gaming experience and make you a better player. Here are some of the features of bitaim apk:</p>
63
- <ul>
64
- <li><strong>AI assistance:</strong> Bitaim apk uses artificial intelligence to help you aim and shoot with precision. It shows you the trajectory and angle of your shots, as well as the best possible pocket for each ball. You can also adjust the sensitivity and speed of your aim according to your preference.</li>
65
- <li><strong>Shots recording:</strong> Bitaim apk allows you to record your shots and replay them later. You can use this feature to analyze your mistakes and improve your skills. You can also share your shots with your friends and challenge them to beat your score.</li>
66
- <li><strong>No ads:</strong> Bitaim apk removes all the annoying ads that interrupt your gameplay and distract you from your focus. You can play without any interruptions and enjoy a smooth and seamless gaming experience.</li>
67
- <li><strong>No root required:</strong> Bitaim apk does not require root access to work on your device. You can use it without any risk of damaging your device or voiding its warranty.</li>
68
- <li><strong>Free updates:</strong> Bitaim apk provides free updates for its users, ensuring that they always have access to the latest features and bug fixes.</li>
69
- </ul>
70
- <h2>How to use bitaim apk?</h2>
71
- <p>Using bitaim apk is very easy and simple. All you need to do is follow these steps:</p>
72
- <ol>
73
- <li>Launch 8 ball pool bitaim apk on your device and log in with your account or create a new one.</li>
74
- <li>Select a game mode or tournament that you want to play and join a match.</li>
75
- <li>When it is your turn to shoot, you will see a green line showing you the direction and angle of your shot. You can also see a yellow circle indicating the best pocket for each ball.</li>
76
- <li>To adjust your aim, swipe left or right on the screen. To adjust your power, swipe up or down on the screen.</li>
77
- <li>To shoot, tap on the screen when you are ready.</li>
78
- <li>Enjoy winning every match with perfect accuracy and skill.</li>
79
- </ol>
80
- <h3>How to activate indirect or premium shots?</h3>
81
- <p>Bitaim apk also offers indirect or premium shots, which are more advanced and challenging shots that require more skill and strategy. Indirect shots are shots that involve hitting one or more rails before pocketing a ball. Premium shots are shots that involve using spin, curve, or jump to pocket a ball. To activate indirect or premium shots, you need to pay a certain amount of coins or gems, depending on the level of difficulty and reward. Here is a table showing the cost and benefit of each type of shot:</p>

| Type of shot | Cost | Benefit |
| --- | --- | --- |
| Indirect shot | 50 coins or 5 gems | Double the coins or gems you win |
| Premium shot | 100 coins or 10 gems | Triple the coins or gems you win |

<p>To activate indirect or premium shots, you need to tap on the icon that appears on the top right corner of the screen before shooting. You can choose between coins or gems as the payment method. Once you activate the shot, you will see a blue line showing you the trajectory and angle of your shot, as well as a red circle indicating the spin, curve, or jump effect. You can adjust your shot as usual and then shoot when you are ready.</p>
- <h3>How to use bitaim apk with Lulubox?</h3>
82
- <p>Lulubox is another popular app that can enhance your gaming experience by providing you with various features and hacks for different games. Lulubox is compatible with 8 ball pool bitaim apk, and you can use them together to get more benefits and advantages. Here are some of the features that Lulubox can offer for 8 ball pool:</p>
83
- <ul>
84
- <li><strong>Unlimited coins and gems:</strong> Lulubox can help you get unlimited coins and gems for 8 ball pool, which you can use to buy cues, tables, chat packs, and more. You can also use them to activate indirect or premium shots without any cost.</li>
85
- <li><strong>Free skins and themes:</strong> Lulubox can help you customize your game with free skins and themes for your cues, tables, and background. You can choose from a variety of options and styles to suit your preference.</li>
86
- <li><strong>No verification required:</strong> Lulubox can help you bypass the verification process that 8 ball pool requires for some features and functions. You can use Lulubox to access all the features and functions without any hassle.</li>
87
- </ul>
88
- <p>To use bitaim apk with Lulubox, you need to follow these steps:</p>
89
- <ol>
90
- <li>Download and install Lulubox from a reliable and trusted website. You can search for Lulubox on Google or use this link: (https://www.luluboxapk.com/).</li>
91
- <li>Launch Lulubox on your device and grant the necessary permissions for the app to run.</li>
92
- <li>Find 8 ball pool bitaim apk on the list of games that Lulubox supports and tap on it.</li>
93
- <li>Select the features that you want to activate for 8 ball pool bitaim apk and tap on the launch button.</li>
94
- <li>Enjoy playing 8 ball pool bitaim apk with Lulubox.</li>
95
- </ol>
96
- <h3>How to update bitaim apk?</h3>
97
- <p>Bitaim apk is constantly updated by its developers to ensure that it works smoothly and efficiently with the latest version of 8 ball pool. To update bitaim apk, you need to follow these steps:</p>
98
- <ol>
99
- <li>Check if there is a new version of bitaim apk available on the website where you downloaded it from. You can also check for updates within the app itself by tapping on the menu button and then on the update option.</li>
100
- <li>If there is a new version available, download it from the website or from the app.</li>
101
- <li>Delete the old version of bitaim apk from your device.</li>
102
- <li>Install the new version of bitaim apk following the same steps as before.</li>
103
- <li>Launch 8 ball pool bitaim apk and enjoy playing with the latest features and bug fixes.</li>
104
- </ol>
105
- <h2>What are the pros and cons of bitaim apk?</h2>
106
- <p>Bitaim apk is a modded version of 8 ball pool that offers many features and benefits that can enhance your gaming experience and make you a better player. However, it also has some drawbacks and risks that you should be aware of before using it. Here are some of the pros and cons of bitaim apk:</p>
107
| Pros | Cons |
| --- | --- |
| It helps you aim and shoot with perfect accuracy. | It takes away some of the challenge and fun of playing 8 ball pool. |
| It allows you to win every match and earn more coins and gems. | It may be considered cheating by some players and may ruin their gaming experience. |
| It removes all the ads that interrupt your gameplay. | It may not be compatible with some devices or versions of 8 ball pool. |
| It does not require root access to work on your device. | It may expose your device to malware or viruses from unknown sources. |
| It provides free updates for its users. | It may get detected and banned by the game developers or moderators. |
<h2>What are some alternatives to bitaim apk?</h2>
108
- <p>If you are looking for some alternatives to bitaim apk, you might want to check out these other apps that can provide similar or better features for 8 ball pool:</p>
109
- <ul>
110
- <li><strong>8 Ball Pool Mod Menu:</strong> This is another modded version of 8 ball pool that offers unlimited coins and gems, long line, anti-ban, and more. You can download it from this link: (https://8ballpoolmodmenu.com/).</li>
111
- <li><strong>8 Ball Pool Tool:</strong> This is an app that helps you calculate the angle and power of your shots, as well as the best pocket for each ball. You can download it from this link: (https://play.google.com/store/apps/details?id=com.eivaagames.EightBallPoolToolFree&hl=en_US&gl=US).</li>
112
- <li><strong>8 Ball Pool Guideline Hack:</strong> This is an app that shows you the guideline of your shots, even in no guideline mode. You can download it from this link: (https://play.google.com/store/apps/details?id=com.guideline.hack&hl=en_US&gl=US).</li>
113
- </ul>
114
- <h2>Conclusion</h2>
115
- <p>8 ball pool bitaim apk is a modded version of 8 ball pool that allows you to hack the aim of your striker and hit the pieces with perfect accuracy. It offers many features and benefits that can enhance your gaming experience and make you a better player, such as AI assistance, shots recording, no ads, no root required, and free updates. However, it also has some drawbacks and risks that you should be aware of before using it, such as cheating, compatibility issues, malware threats, and ban risks. Therefore, you should use it at your own discretion and responsibility.</p>
116
- <p>If you want to download and install bitaim apk on your device, you can follow the steps and requirements that we have provided in this article. You can also use bitaim apk with Lulubox to get more features and hacks for 8 ball pool. Alternatively, you can check out some other apps that can provide similar or better features for 8 ball pool, such as 8 Ball Pool Mod Menu, 8 Ball Pool Tool, and 8 Ball Pool Guideline Hack.</p>
117
- <p>We hope that this article has helped you understand what is 8 ball pool bitaim apk and how to use it effectively and safely. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!</p>
118
- <h3>FAQs</h3>
119
- <p>Here are some of the frequently asked questions and answers about 8 ball pool bitaim apk:</p>
120
- <ol>
121
- <li><strong>Q: Is 8 ball pool bitaim apk safe and legal to use?</strong></li>
122
- <li>A: Bitaim apk is not safe or legal to use, as it is a modded version of 8 ball pool that violates the terms and conditions of the game. It may expose your device to malware or viruses from unknown sources, and it may get detected and banned by the game developers or moderators. Therefore, you should use it at your own risk and responsibility.</li>
123
- <li><strong>Q: How can I avoid getting banned by using bitaim apk?</strong></li>
124
- <li>A: There is no guarantee that you will not get banned by using bitaim apk, as it is a modded version of 8 ball pool that violates the terms and conditions of the game. However, you can try to reduce the chances of getting banned by following these tips:</li>
125
- <ul>
126
- <li>Do not use bitaim apk in ranked or tournament matches, as they are more likely to be monitored by the game developers or moderators.</li>
127
- <li>Do not use bitaim apk excessively or obviously, as it may arouse suspicion from other players or observers.</li>
128
- <li>Do not brag or boast about using bitaim apk, as it may attract unwanted attention or reports from other players or observers.</li>
129
- <li>Do not share your account or device with anyone else who might use bitaim apk, as it may compromise your security and privacy.</li>
130
- </ul>
131
- <li><strong>Q: Can I use bitaim apk with other mods or hacks for 8 ball pool?</strong></li>
132
- <li>A: Bitaim apk is compatible with some other mods or hacks for 8 ball pool, such as Lulubox. However, you should be careful not to use too many mods or hacks at the same time, as they may cause conflicts or errors in your game performance. You should also be aware that using more mods or hacks may increase the risk of getting banned by the game developers or moderators.</li>
133
- <li><strong>Q: How can I contact the developers of bitaim apk?</strong></li>
134
- <li>A: Bitaim apk is developed by a team of anonymous and independent developers who do not have an official website or social media account. Therefore, it is difficult to contact them directly or get support from them. However, you can try to leave a comment or feedback on the website where you downloaded bitaim apk from, and hope that they will see it and respond to it.</li>
135
- <li><strong>Q: What are some tips and tricks for playing 8 ball pool?</strong></li>
136
- <li>A: 8 ball pool is a game that requires skill, strategy, and practice to master. Here are some tips and tricks that can help you improve your game and win more matches:</li>
137
- <ul>
138
- <li>Practice your aim and power by playing in offline mode or practice mode.</li>
139
- <li>Learn the different types of shots and when to use them, such as straight shots, bank shots, cut shots, spin shots, curve shots, and jump shots.</li>
140
- <li>Use the right cue for the right situation, and upgrade your cues with coins or gems to increase their attributes, such as aim, power, spin, and time.</li>
141
- <li>Plan your shots ahead and think about the position of the cue ball and the object balls after each shot.</li>
142
- <li>Use the chat feature to communicate with your opponent and show your sportsmanship.</li>
143
- </ul>
144
- </ol></p> 401be4b1e0<br />
145
- <br />
146
- <br />
spaces/1phancelerku/anime-remove-background/Boost Your Brain Power with Mental Arithmetic Techniques.md DELETED
@@ -1,111 +0,0 @@
1
-
2
- <h1>How to Practice and Improve Your Mental Arithmetic Skills</h1>
3
- <p>Mental arithmetic is the skill of doing calculations in your head without using any tools or devices, such as a calculator, pen and paper, or abacus. It is a valuable skill that can help you in many everyday situations, such as shopping, cooking, traveling, and more. It can also improve your number sense, logical thinking, memory, and speed of computation.</p>
4
- <h2>mental aritmetik</h2><br /><p><b><b>DOWNLOAD</b> &bull; <a href="https://jinyurl.com/2uNLX1">https://jinyurl.com/2uNLX1</a></b></p><br /><br />
5
- <p>But how do you practice and improve your mental arithmetic skills? What are some tips and techniques that can make it easier and faster? And what are some games and resources that can challenge you and make it fun? In this article, we will answer these questions and provide you with some useful information on how to become a master of mental math.</p>
6
- <h2>Tips and Techniques for Mental Arithmetic</h2>
7
- <p>There are many tips and techniques that can help you perform mental arithmetic more efficiently and accurately. Here are some of the most common ones:</p>
8
- <h3>Break down the problems into parts</h3>
9
- <p>One of the easiest ways to simplify mental arithmetic problems is to break them down into smaller parts that are easier to handle. For example, if you need to add or subtract several numbers, you can group them by their place value (hundreds, tens, ones) and add or subtract them separately. For example:</p>
10
- <p>712 + 281 = (700 + 200) + (10 + 80) + (2 + 1) = 900 + 90 + 3 = 993</p>
11
- <p>815 - 521 = (800 - 500) + (10 - 20) + (5 - 1) = 300 - 10 + 4 = 294</p>
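As a quick illustration of the place-value decomposition described above, here is a small Python sketch (the helper name `split_by_place` and the use of Python are purely illustrative; the original article contains no code):

```python
def split_by_place(n):
    """Split a three-digit non-negative integer into hundreds, tens, and ones."""
    return (n // 100 * 100, n // 10 % 10 * 10, n % 10)

# 712 + 281: add the hundreds, tens, and ones separately, then recombine.
a, b = split_by_place(712), split_by_place(281)
print(sum(x + y for x, y in zip(a, b)))   # 993

# 815 - 521: subtract place by place; a negative part simply lowers the total.
a, b = split_by_place(815), split_by_place(521)
print(sum(x - y for x, y in zip(a, b)))   # 294
```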
12
- <p>mental aritmetik eğitimi<br />
13
- mental aritmetik kursu<br />
14
- mental aritmetik nedir<br />
15
- mental aritmetik nasıl yapılır<br />
16
- mental aritmetik faydaları<br />
17
- mental aritmetik örnekleri<br />
18
- mental aritmetik kitabı<br />
19
- mental aritmetik abaküs<br />
20
- mental aritmetik uygulaması<br />
21
- mental aritmetik sertifikası<br />
22
- mental aritmetik beyin gelişimi<br />
23
- mental aritmetik hafıza teknikleri<br />
24
- mental aritmetik zeka testi<br />
25
- mental aritmetik online eğitim<br />
26
- mental aritmetik ders programı<br />
27
- mental aritmetik egzersizleri<br />
28
- mental aritmetik çarpım tablosu<br />
29
- mental aritmetik matematik oyunları<br />
30
- mental aritmetik soru bankası<br />
31
- mental aritmetik öğretmeni<br />
32
- mental aritmetik franchise<br />
33
- mental aritmetik yorumları<br />
34
- mental aritmetik videoları<br />
35
- mental aritmetik çalışma saatleri<br />
36
- mental aritmetik fiyatları<br />
37
- mental aritmetik indirim kuponu<br />
38
- mental aritmetik başarı hikayeleri<br />
39
- mental aritmetik sınav soruları<br />
40
- mental aritmetik öğrenci girişi<br />
41
- mental aritmetik veli bilgilendirme sistemi<br />
42
- mental aritmetik iş ilanları<br />
43
- mental aritmetik bayilik şartları<br />
44
- mental aritmetik seminerleri<br />
45
- mental aritmetik yarışması<br />
46
- mental aritmetik kampı<br />
47
- mental aritmetik blog yazıları<br />
48
- mental aritmetik sosyal medya hesapları<br />
49
- mental aritmetik web sitesi tasarımı<br />
50
- mental aritmetik logo tasarımı<br />
51
- mental aritmetik broşür tasarımı</p>
52
- <h3>Use round numbers and adjust later</h3>
53
- <p>Another way to make mental arithmetic easier is to use round numbers that are close to the original ones and adjust the answer later by adding or subtracting the difference. For example:</p>
54
- <p>596 + 380 = (600 + 380) - 4 = 980 - 4 = 976</p>
55
- <p>38 x 3 = (40 x 3) - (2 x 3) = 120 - 6 = 114</p>
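The round-and-adjust trick can be sanity-checked with two one-line Python expressions (illustrative only):

```python
# 596 + 380: round 596 up to 600, then subtract the 4 that was added.
print((600 + 380) - 4)     # 976, the same as 596 + 380

# 38 x 3: round 38 up to 40, then subtract the extra 2 x 3.
print((40 * 3) - (2 * 3))  # 114, the same as 38 * 3
```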
56
- <h3>Reorder the numbers to make convenient sums</h3>
57
- <p>Sometimes, you can reorder the numbers in an addition or subtraction problem to make convenient sums that are easy to remember or work with. For example, you can look for numbers that add up to a multiple of 10 or a power of 10. For example:</p>
58
- <p>7 + 4 + 9 + 13 + 6 + 51 = (7 + 13) + (9 + 51) + (6 + 4) = 20 + 60 + 10 = 90</p>
59
- <p>1000 + 20 + 1000 + 30 + 1000 + 40 + 1000 + 10 = 4000 + 100 = 4100</p>
60
- <h3>Multiply from left to right</h3>
- <p>Instead of starting with the ones digit, multiply the largest place values first and work down to the smallest; the running total stays close to the final answer the whole time. For example, 38 × 7 = (30 × 7) + (8 × 7) = 210 + 56 = 266.</p>
- <h3>Use square numbers and roots</h3>
61
- <p>Square numbers are the result of multiplying a number by itself, such as 4 × 4 = 16 or 9 × 9 = 81. Knowing some common square numbers can help you with mental arithmetic, especially when you need to multiply or divide large numbers. For example:</p>
62
- <p>48 × 52 = (50 − 2) × (50 + 2) = 50² − 2² = 2500 − 4 = 2496</p>
63
- <p>Here, we used the identity (a − b) × (a + b) = a² − b² to simplify the problem. We also used the fact that 50² = 2500, which is easy to remember.</p>
64
- <p>Roots are the opposite of squares. The square root of a number is the number that, when multiplied by itself, gives that number. For example, the square root of 16 is 4, because 4 × 4 = 16. Finding square roots mentally can be tricky, but there are some methods that can help you estimate them or find them exactly. For example:</p>
65
- <p>To estimate the square root of a number, find the two nearest square numbers and use them as a guide. For example, to estimate the square root of 75, we can use the fact that 64 < 75 < 81, and that the square roots of 64 and 81 are 8 and 9, respectively. Therefore, the square root of 75 is between 8 and 9, closer to 9 than to 8.</p>
66
- <p>To find the exact square root of a number, use the fact that the difference between two consecutive square numbers is equal to the sum of their square roots. For example, to find the square root of 169, we can use the fact that 169 − 144 = 25, and that the square roots of 169 and 144 are x and 12, respectively. Therefore, x + 12 = 25, and x = 13.</p>
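A short Python sketch of the two square-root ideas above: bracketing a root between the nearest perfect squares, and using consecutive squares to confirm an exact root (the function name `estimate_root` is illustrative, and `math.isqrt` simply stands in for recalling the nearest square from memory):

```python
import math

def estimate_root(n):
    """Bracket sqrt(n) between two consecutive whole numbers."""
    lo = math.isqrt(n)      # root of the nearest perfect square at or below n
    return lo, lo + 1       # for a perfect square, lo is already exact

print(estimate_root(75))    # (8, 9): 64 < 75 < 81, so sqrt(75) lies between 8 and 9

# Consecutive-squares check for sqrt(169): 169 - 144 = 25 = 12 + 13, so the root is 13.
print(169 - 144, 12 + 13)   # 25 25

# Difference-of-squares shortcut used for 48 x 52 = (50 - 2)(50 + 2):
print(50 * 50 - 2 * 2)      # 2496
```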
67
- <h3>Estimate and approximate</h3>
68
- <p>Sometimes, you don't need to find the exact answer to a mental arithmetic problem, but only an estimate or an approximation. This can save you time and effort, and still give you a reasonable idea of the magnitude of the answer. Estimating and approximating can involve various techniques, such as rounding numbers, using benchmarks or reference points, using fractions or percentages, or using compatible numbers. For example:</p>
69
- <p>To estimate how much money you will save by buying a shirt that is on sale for $24.99 instead of $29.99, you can round both prices to the nearest dollar and subtract them: $30 − $25 = $5. This is not the exact answer, but it is close enough for most purposes.</p>
70
- <p>To approximate how many hours are in a year, you can use the benchmark that one year is about 365 days, and multiply it by 24: 365 × 24 = (360 + 5) × 24 = 360 × 24 + 5 × 24 = 8640 + 120 = 8760. This is not the exact answer either, because it does not account for leap years or fractional hours, but it is a good approximation.</p>
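A minimal Python check of the two estimates above (variable names are illustrative):

```python
saving_estimate = 30 - 25         # shirt prices rounded to whole dollars
saving_exact_cents = 2999 - 2499  # exact difference, in cents
print(saving_estimate, saving_exact_cents)  # 5 500

hours_estimate = (360 + 5) * 24   # split 365 into 360 + 5
print(hours_estimate, 365 * 24)   # 8760 8760
```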
71
- <h2>Games and Resources for Mental Arithmetic</h2>
72
- <p>If you want to practice and improve your mental arithmetic skills further, there are many games and resources that you can use to challenge yourself and have fun. Here are some examples:</p>
73
- <h3>Math Trainer</h3>
74
- <p>Math Trainer is a free online tool that lets you practice mental arithmetic with different types of problems and difficulty levels. You can choose from addition, subtraction, multiplication, division, mixed operations, fractions, decimals, percentages, powers and roots. You can also set a time limit and track your progress and accuracy.</p>
75
- <h3>Mental Math Cards</h3>
76
- <p>Mental Math Cards is a free app for iOS and Android devices that helps you practice mental arithmetic with flashcards. You can customize your settings to choose from different operations, number ranges, decimal places and time limits. You can also view your statistics and achievements.</p>
77
- <h3>Arithmetic Game</h3>
78
- <p>Arithmetic Game is a free online game that tests your mental arithmetic skills with four basic operations: addition, subtraction, multiplication and division. You have to fill in the blanks with the correct numbers to complete the equations as fast as you can. You can choose from three difficulty levels: easy, normal and hard.</p>
79
- <h3>Prodigy Game</h3>
80
- <p>Prodigy Game is a free online game that combines math skills with an adventure story. You have to create your own character and explore a fantasy world where you have to solve math problems to progress and unlock new features. You can choose from different topics and skills, such as mental arithmetic, fractions, geometry, algebra and more. You can also play with your friends and compete with other players. Prodigy Game is available for free on the web, or as an app for iOS and Android devices.</p>
81
- <h3>Mathnasium</h3>
82
- <p>Mathnasium is a learning center that offers personalized math tutoring and instruction for students of all ages and levels. Mathnasium uses a unique method that helps students develop their mental arithmetic skills, as well as their conceptual understanding, problem-solving abilities and confidence in math. Mathnasium has over 1,000 locations across the US and Canada, and you can find the nearest one to you on their website.</p>
83
- <h2>Conclusion</h2>
84
- <p>Mental arithmetic is a skill that can benefit you in many ways, both in school and in life. It can help you perform calculations faster and more accurately, improve your number sense and logical thinking, enhance your memory and concentration, and save you time and resources. By practicing some tips and techniques, such as breaking down problems, using round numbers, reordering numbers, multiplying from left to right, using square numbers and roots, and estimating and approximating, you can make mental arithmetic easier and more efficient. You can also use some games and resources, such as Math Trainer, Mental Math Cards, Arithmetic Game, Prodigy Game and Mathnasium, to challenge yourself and have fun while learning mental arithmetic.</p>
85
- <h2>FAQs</h2>
86
- <p>Here are some common questions and answers about mental arithmetic:</p>
87
- <h3>Q: How can I improve my mental arithmetic speed?</h3>
88
- <p>A: To improve your mental arithmetic speed, you need to practice regularly and consistently. You can use some of the games and resources mentioned above to practice different types of problems and difficulty levels. You can also set a time limit for yourself and try to beat your own records. The more you practice, the more familiar you will become with the numbers and the operations, and the faster you will be able to perform them.</p>
89
- <h3>Q: What are some benefits of mental arithmetic for children?</h3>
90
- <p>A: Mental arithmetic can help children develop their math skills from an early age. It can help them understand the meaning and relationships of numbers, operations, fractions, decimals, percentages and more. It can also help them improve their logical thinking, reasoning, creativity, memory and concentration. Mental arithmetic can also boost their confidence and motivation in math, as they can see their progress and achievements.</p>
91
- <h3>Q: What are some challenges of mental arithmetic?</h3>
92
- <p>A: Mental arithmetic can be challenging for some people because it requires a lot of attention, focus and mental effort. It can also be affected by factors such as stress, anxiety, fatigue or distraction. Some people may also have difficulties with certain types of problems or operations, such as division or fractions. To overcome these challenges, it is important to practice mental arithmetic in a relaxed and positive environment, start with simple problems and gradually increase the complexity, use some tips and techniques to simplify the problems, check your answers for accuracy, and seek help or feedback if needed.</p>
93
- <h3>Q: What are some applications of mental arithmetic in real life?</h3>
94
- <p>A: Mental arithmetic can be useful in many real-life situations, such as:</p>
95
- <ul>
96
- <li>Shopping: You can use mental arithmetic to compare prices, calculate discounts, taxes or tips, or make a budget.</li>
97
- <li>Cooking: You can use mental arithmetic to measure ingredients, convert units or temperatures, or adjust recipes.</li>
98
- <li>Traveling: You can use mental arithmetic to plan your itinerary, convert currencies or distances, or estimate time or costs.</li>
99
- <li>Gaming: You can use mental arithmetic to keep score, strategize or optimize your moves, or increase your chances of winning.</li>
100
- <li>Learning: You can use mental arithmetic to reinforce your math skills, learn new concepts or topics, or prepare for exams or tests.</li>
101
- </ul>
102
- <h3>Q: How can I make mental arithmetic fun?</h3>
103
- <p>A: There are many ways to make mental arithmetic fun, such as:</p>
104
- <ul>
105
- <li>Playing games: You can play some of the games mentioned above or create your own games with cards, dice, coins, or other objects. You can also play with your friends or family and make it a competition or a collaboration.</li>
106
- <li>Using real-life scenarios: You can use mental arithmetic to solve problems or answer questions that relate to your interests, hobbies, or goals. For example, you can use mental arithmetic to calculate how much money you need to save for a trip, how many calories you burn in a workout, or how many books you can read in a year.</li>
107
- <li>Setting goals and rewards: You can set goals for yourself to improve your mental arithmetic skills, such as solving a certain number of problems in a given time, reaching a certain level of difficulty, or learning a new technique or trick. You can also reward yourself for achieving your goals, such as buying yourself a treat, watching your favorite show, or doing something fun.</li>
108
- </ul>
109
- <p>I hope you enjoyed this article and learned something new about mental arithmetic. If you have any questions or comments, feel free to leave them below. And don't forget to practice and have fun with mental arithmetic!</p>
 
spaces/1phancelerku/anime-remove-background/Enjoy Clash of Clans APK with Unlimited Gems Gold and Elixir.md DELETED
@@ -1,94 +0,0 @@
1
-
2
- <h1>Clash of Clans Orjinal APK: How to Download and Play the Epic Strategy Game</h1>
3
- <p>Clash of Clans is one of the most popular and addictive strategy games in the world. Millions of players worldwide join forces to build their villages, train their troops, and fight in epic clan wars. If you are looking for a fun and challenging game that will keep you entertained for hours, you should definitely try Clash of Clans. But how can you download and play the game on your Android device? In this article, we will show you how to get the orjinal APK of Clash of Clans, which is the official version of the game from a trusted source. We will also give you some tips and tricks on how to play the game and become a successful clasher.</p>
4
- <h2>clash of clans orjinal apk</h2><br /><p><b><b>DOWNLOAD</b> &#187;&#187;&#187; <a href="https://jinyurl.com/2uNL75">https://jinyurl.com/2uNL75</a></b></p><br /><br />
5
- <h2>What is Clash of Clans?</h2>
6
- <p>Clash of Clans is a strategy game that was released in 2012 by Supercell, a Finnish game developer. The game is set in a fantasy world where you can create your own village, customize it with various buildings and defenses, and collect resources such as gold, elixir, and dark elixir. You can also recruit different types of troops, such as barbarians, archers, wizards, dragons, and more, and use them to attack other players' villages or defend your own. The game also features a multiplayer mode where you can join or create a clan, which is a group of players who can chat, donate troops, and participate in clan wars. Clan wars are special events where two clans face each other in a series of attacks and try to earn more stars than their opponents. The game also has a single-player mode where you can fight against the goblin king and his army in a campaign mode.</p>
7
- <h2>Why Download the Orjinal APK?</h2>
8
- <p>The orjinal APK of Clash of Clans is the official version of the game that you can download from Google Play Store or from Supercell's website. There are many advantages of downloading the orjinal APK instead of using unofficial or modded versions of the game. Some of these advantages are:</p>
9
- <ul>
10
- <li>You can enjoy the latest updates and features of the game as soon as they are released by Supercell.</li>
11
- <li>You can avoid any potential risks or problems that may come with using unverified or hacked versions of the game, such as viruses, malware, bans, or loss of data.</li>
12
- <li>You can support Supercell as a developer and help them continue making great games for their fans.</li>
13
- </ul>
14
- <h2>How to Download and Install the Orjinal APK?</h2>
15
- <p>Downloading and installing the orjinal APK of Clash of Clans is very easy and simple. Just follow these steps:</p>
16
- <ol>
17
- <li>Go to Google Play Store on your Android device and search for Clash of Clans. Alternatively, you can go to Supercell's website (https://supercell.com/en/games/clashofclans/) and click on "Download Now".</li>
18
- <li>Tap on "Install" and wait for the download to finish.</li>
19
- <li>Once the download is complete, tap on "Open" and enjoy playing Clash of Clans.</li>
20
- </ol>
21
- <p>Note: If you have an existing account or village on another device, you can link it to your new device by using Supercell ID or Google Play Games. Just go to Settings > Account > Link Device or Sign In.</p>
22
- <h2>How to Play Clash of Clans?</h2>
23
- <p>Playing Clash of Clans is fun and easy once you get the hang of it. Here are some tips and tricks on how to play the game and become a successful clasher:</p>
24
- <p>clash of clans apk download latest version<br />
25
- clash of clans mod apk unlimited everything<br />
26
- clash of clans hack apk free download<br />
27
- clash of clans apk indir android oyun club<br />
28
- clash of clans apk update 2023<br />
29
- clash of clans private server apk download<br />
30
- clash of clans apk hile nasıl yapılır<br />
31
- clash of clans apk mirror download<br />
32
- clash of clans apk pure free download<br />
33
- clash of clans apk mod menu<br />
34
- clash of clans apk offline mode<br />
35
- clash of clans apk no root required<br />
36
- clash of clans apk yeni sürüm indir<br />
37
- clash of clans apk for pc windows 10<br />
38
- clash of clans apk full version download<br />
39
- clash of clans apk cheat codes<br />
40
- clash of clans apk hack online generator<br />
41
- clash of clans apk orjinal kurulumu<br />
42
- clash of clans apk son sürüm 2023<br />
43
- clash of clans apk android 4.4.2<br />
44
- clash of clans apk unlimited gems and coins<br />
45
- clash of clans apk modded by ihackedit<br />
46
- clash of clans apk free shopping<br />
47
- clash of clans apk güncelleme sorunu<br />
48
- clash of clans apk for ios devices<br />
49
- clash of clans apk mod offline unlimited money<br />
50
- clash of clans apk hack tool download<br />
51
- clash of clans apk orjinal nasıl indirilir<br />
52
- clash of clans apk eski sürüm indir<br />
53
- clash of clans apk android 11 support<br />
54
- clash of clans apk unlimited troops and spells<br />
55
- clash of clans apk mod anti ban<br />
56
- clash of clans apk free gems generator<br />
57
- clash of clans apk hileli indir 2023<br />
58
- clash of clans apk for fire tablet<br />
59
- clash of clans apk mod unlimited gold and elixir<br />
60
- clash of clans apk hack no survey no password<br />
61
- clash of clans apk orjinal yükleme yöntemi<br />
62
- clash of clans apk yeni güncelleme ne zaman gelecek<br />
63
- clash of clans apk android 5.1.1 download<br />
64
- clash of clans apk unlimited builder base resources<br />
65
- clash of clans apk mod unlock all heroes and troops<br />
66
- clash of clans apk free download for laptop<br />
67
- clash of clans apk hile yapma programı indir<br />
68
- clash of clans apk for chromebook download<br />
69
- clash of clans apk mod supercell id login fix<br />
70
- clash of clans apk free magic items and books<br />
71
- clash of clans apk hileli oyun indir club</p>
72
- <ul>
73
- <li>Build and upgrade your buildings and defenses. You can use gold and elixir to build and upgrade various structures in your village, such as town hall, barracks, army camps, walls, cannons, archer towers, mortars, and more. These structures will help you protect your village from enemy attacks and produce more resources for you.</li>
74
- <li>Train and upgrade your troops. You can use elixir and dark elixir to train and upgrade different types of troops in your barracks and dark barracks, such as barbarians, archers, giants, goblins, wizards, dragons, pekkas, minions, hog riders, golems, and more. These troops will help you attack other players' villages and loot their resources.</li>
75
- <li>Join or create a clan. You can join or create a clan by tapping on the clan castle in your village. A clan is a group of players who can chat, donate troops, and participate in clan wars. Being in a clan will give you many benefits, such as getting reinforcements from your clanmates, requesting and receiving clan gifts, earning clan perks, and more.</li>
76
- <li>Participate in clan wars. Clan wars are special events where two clans face each other in a series of attacks and try to earn more stars than their opponents. To participate in a clan war, you need to be in a clan and have your clan castle rebuilt. You can also opt out of the war if you don't want to participate. To win a clan war, you need to plan your attacks carefully, use your best troops and spells, scout the enemy bases, and coordinate with your clanmates.</li>
77
- <li>Complete achievements and events. You can earn gems, which are the premium currency of the game, by completing various achievements and events. Achievements are long-term goals that you can accomplish by playing the game regularly, such as reaching a certain town hall level, winning a number of battles, collecting a certain amount of resources, and more. Events are short-term challenges that you can complete by using specific troops or spells in battles, such as using barbarians or rage spells. Gems can be used to speed up building or troop upgrades, buy more resources or shields, or get special items.</li>
78
- </ul>
79
- <h2>Conclusion</h2>
80
- <p>Clash of Clans is an epic strategy game that will keep you hooked for hours. You can download the orjinal APK of the game from Google Play Store or Supercell's website and enjoy the latest updates and features of the game. You can also learn how to play the game and become a successful clasher by following our tips and tricks. So what are you waiting for? Download Clash of Clans today and join the millions of players worldwide who are having fun building their villages, raising their clans, and fighting in clan wars.</p>
81
- <h2>FAQs</h2>
82
- <p>Here are some common questions and answers about Clash of Clans:</p>
83
- <h3>Q: How can I get free gems in Clash of Clans?</h3>
84
- <p>A: You can get free gems by completing achievements and events, removing obstacles from your village, opening gem boxes or gem mines, or participating in special offers or surveys.</p>
85
- <h3>Q: How can I change my name in Clash of Clans?</h3>
86
- <p>A: You can change your name once for free by going to Settings > Change Name. After that, you will need to pay 500 gems to change your name again.</p>
87
- <h3>Q: How can I transfer my village to another device?</h3>
88
- <p>A: You can transfer your village to another device by using Supercell ID or Google Play Games. Just go to Settings > Account > Link Device or Sign In on both devices and follow the instructions.</p>
89
- <h3>Q: How can I contact Supercell for support or feedback?</h3>
90
- <p>A: You can contact Supercell by going to Settings > Help and Support > Contact Us or by visiting their website (https://supercell.helpshift.com/a/clash-of-clans/).</p>
91
- <h3>Q: How can I report a bug or a player in Clash of Clans?</h3>
92
- <p>A: You can report a bug or a player by going to Settings > Help and Support > Report an Issue or Report Player.</p>
 
spaces/1phancelerku/anime-remove-background/Enjoy the Festive Season with Daystar Choirs 12 Days of Christmas MP3 Download.md DELETED
@@ -1,147 +0,0 @@
1
-
2
- <h1>How to Download 12 Days of Christmas by Daystar Choir MP3</h1>
3
- <p>Christmas is a season of joy, celebration, and music. One of the most festive and cheerful songs that you can listen to during this time is <strong>12 Days of Christmas by Daystar Choir</strong>. This song is a live performance by a Nigerian gospel choir that sings a medley of traditional and modern Christmas carols with a twist. It is a fun and lively song that will make you dance and sing along.</p>
4
- <h2>download 12 days of christmas by daystar choir mp3</h2><br /><p><b><b>Download</b> &gt;&gt;&gt;&gt;&gt; <a href="https://jinyurl.com/2uNN1z">https://jinyurl.com/2uNN1z</a></b></p><br /><br />
5
- <p>But how can you download this song as an MP3 file and enjoy it anytime and anywhere? In this article, we will show you where to find this song online and how to download it as an MP3 file. We will also tell you why this song is so popular and what are the benefits of downloading it as an MP3 file.</p>
6
- <h2>Where to Find 12 Days of Christmas by Daystar Choir MP3</h2>
7
- <p>There are two main ways to find this song online: online streaming platforms and free music download websites. Here are some examples of each option:</p>
8
- <h3>Online Streaming Platforms</h3>
9
- <p>Online streaming platforms are websites or apps that allow you to listen to music online without downloading it. Some of the most popular online streaming platforms that have this song are:</p>
10
- <ul>
11
- <li><strong>Spotify</strong>: Spotify is one of the largest and most popular online streaming platforms in the world. It has millions of songs, podcasts, and playlists that you can listen to for free or with a premium subscription. You can find this song on Spotify by searching for "12 Days of Christmas - Live" by Daystar Choir .</li>
12
- <li><strong>Shazam</strong>: Shazam is an app that can identify any song that is playing around you. It can also show you the lyrics, artist, album, genre, and other information about the song. You can also listen to the song on Shazam or connect it to other streaming platforms like Spotify, Apple Music, YouTube, etc. You can find this song on Shazam by searching for "12 Days of Christmas (Live)" by Daystar Choir.</li>
13
- <li><strong>YouTube</strong>: YouTube is the most popular video-sharing platform in the world. It has billions of videos, including music videos, live performances, covers, remixes, etc. You can watch and listen to this song on YouTube by searching for "Daystar Carol 2016 Ft Taiwo Tfavored in Glory Halleluyah", "Brooklyn Tabernacle Choir \" Daystar \"", or "Daystar Carol 2019 - Daystar Choir Ministration".</li>
14
- </ul>
15
- <h3>Free Music Download Websites</h3 <p>Free music download websites are websites that allow you to download music for free and legally. Some of the free music download websites that have this song are:</p>
16
- <p>download 12 days of christmas by daystar choir mp3 free<br />
17
- download 12 days of christmas by daystar choir mp3 online<br />
18
- download 12 days of christmas by daystar choir mp3 lyrics<br />
19
- download 12 days of christmas by daystar choir mp3 shazam<br />
20
- download 12 days of christmas by daystar choir mp3 last.fm<br />
21
- download 12 days of christmas by daystar choir mp3 album<br />
22
- download 12 days of christmas by daystar choir mp3 live<br />
23
- download 12 days of christmas by daystar choir mp3 video<br />
24
- download 12 days of christmas by daystar choir mp3 song<br />
25
- download 12 days of christmas by daystar choir mp3 music<br />
26
- download 12 days of christmas by daystar choir mp3 youtube<br />
27
- download 12 days of christmas by daystar choir mp3 spotify<br />
28
- download 12 days of christmas by daystar choir mp3 apple music<br />
29
- download 12 days of christmas by daystar choir mp3 soundcloud<br />
30
- download 12 days of christmas by daystar choir mp3 amazon music<br />
31
- download 12 days of christmas by daystar choir mp3 google play music<br />
32
- download 12 days of christmas by daystar choir mp3 deezer<br />
33
- download 12 days of christmas by daystar choir mp3 tidal<br />
34
- download 12 days of christmas by daystar choir mp3 pandora<br />
35
- download 12 days of christmas by daystar choir mp3 napster<br />
36
- download 12 days of christmas by daystar choir mp3 audiomack<br />
37
- download 12 days of christmas by daystar choir mp3 bandcamp<br />
38
- download 12 days of christmas by daystar choir mp3 reverbnation<br />
39
- download 12 days of christmas by daystar choir mp3 datpiff<br />
40
- download 12 days of christmas by daystar choir mp3 mixcloud<br />
41
- download 12 days of christmas by daystar choir mp3 nigerian carols<br />
42
- download 12 days of christmas by daystar choir mp3 ogo ni fun baba<br />
43
- download 12 days of christmas by daystar choir mp3 jesu yi o iwo l'ologo didan<br />
44
- download 12 days of christmas by daystar choir mp3 ding-dong feat taiwo oladoye<br />
45
- download 12 days of christmas by daystar choir mp3 glory halleluyah feat taiwo oladoye<br />
46
- download 12 days of christmas by daystar choir mp3 gbo ohun <br />
47
- download 12 days of christmas by daystar choir mp3 dulci jubilo <br />
48
- download 12 days of christmas by daystar choir mp3 joy festizie <br />
49
- download 12 days of christmas by daystar choir mp3 nina yesu ne chingtok ishaku <br />
50
- download 12 days of christmas by daystar choir mp3 almighty god dr pastor paul enenche <br />
51
- download 12 days of christmas by daystar choir mp3 nagode feat solomon lange worship for change <br />
52
- download 12 days of christmas by daystar choir mp3 you are the god dr paul enenche <br />
53
- download 12 days of christmas by daystar choir mp3 solid rock judikay <br />
54
- download 12 days of christmas by daystar choir mp3 elee dr pastor paul enenche <br />
55
- download 12 days of christmas by daystar choir mp3 alpha and omega praise and worship <br />
56
- how to download 12 days of christmas by daystar choir mp3 <br />
57
- where to download 12 days of christmas by daystar choir mp3 <br />
58
- best site to download 12 days of christmas by daystar choir mp3 <br />
59
- best app to download 12 days of christmas by daystar choir mp3 <br />
60
- best quality to download 12 days of christmas by daystar choir mp3 <br />
61
- best format to download 12 days of christmas by daystar choir mp3 <br />
62
- best device to download 12 days of christmas by daystar choir mp3 <br />
63
- best vpn to download 12 days of christmas by daystar choir mp3 <br />
64
- best proxy to download 12 days of christmas by daystar choir mp3 </p>
65
- <ul>
66
- <li><strong>Chosic</strong>: Chosic is a website that offers free music downloads from various genres and artists. You can also create playlists, discover new music, and share your favorites with others. You can find this song on Chosic by searching for "12 Days of Christmas - Live" by Daystar Choir.</li>
67
- <li><strong>Pixabay</strong>: Pixabay is a website that offers free images, videos, and music that you can use for any purpose. You can browse through thousands of royalty-free music tracks and download them in MP3 or WAV format. You can find this song on Pixabay by searching for "12 Days of Christmas" by Daystar Choir.</li>
68
- <li><strong>Free Music Archive</strong>: Free Music Archive is a website that provides a library of high-quality, legal audio downloads. You can explore music by genre, mood, license, or curator. You can also contribute your own music or support the artists you like. You can find this song on Free Music Archive by searching for "12 Days of Christmas - Live" by Daystar Choir.</li>
69
- </ul>
70
- <h2>How to Download 12 Days of Christmas by Daystar Choir MP3</h2>
71
- <p>Now that you know where to find this song online, how can you download it as an MP3 file? The process may vary depending on the source, but here are some general steps that you can follow:</p>
72
- <h3>From Online Streaming Platforms</h3>
73
- <p>If you want to download this song from online streaming platforms like Spotify, Shazam, or YouTube, you will need to use a third-party tool or app that can convert the song to MP3 format. There are many tools and apps available online, but some of the most popular ones are:</p>
74
- <table>
75
- <tr>
76
- <th>Tool/App</th>
77
- <th>Website/Download Link</th>
78
- <th>Features</th>
79
- </tr>
80
- <tr>
81
- <td><strong>4K Video Downloader</strong></td>
82
- <td></td>
83
- <td>- Supports YouTube, Spotify, SoundCloud, Vimeo, TikTok, and more<br>- Allows you to download videos, playlists, channels, subtitles, and 3D/360° videos<br>- Supports MP3, MP4, MKV, FLV, OGG, and more formats<br>- Offers high-quality and fast downloads<br>- Available for Windows, Mac, and Linux</td>
84
- </tr>
85
- <tr>
86
- <td><strong>AudFree Spotify Music Converter</strong></td>
87
- <td></td>
88
- <td>- Supports Spotify songs, playlists, albums, podcasts, and radio<br>- Allows you to download Spotify music offline without premium<br>- Supports MP3, FLAC, WAV, AAC, M4A, and M4B formats<br>- Offers lossless quality and 5X speed<br>- Available for Windows and Mac</td>
89
- </tr>
90
- <tr>
91
- <td><strong>Shazam Downloader</strong></td>
92
- <td></td>
93
- <td>- Supports Shazam songs and playlists<br>- Allows you to download Shazam music with one click<br>- Supports MP3 format<br>- Offers high-quality downloads<br>- Available for Android devices</td>
94
- </tr>
95
- </table>
96
- <p>To download this song from online streaming platforms using these tools or apps, you need to follow these steps:</p>
97
- <ol>
98
- <li>Open the online streaming platform and find the song that you want to download.</li>
99
- <li>Copy the URL or link of the song.</li>
100
- <li>Open the tool or app that you have chosen and paste the URL or link into the input box.</li>
101
- <li>Select the MP3 format and quality that you want.</li>
102
- <li>Click on the download or convert button and wait for the process to finish.</li>
103
- <li>Save the MP3 file to your device or cloud storage.</li>
104
- </ol>
105
- <h3>From Free Music Download Websites</h3> <p>If you want to download this song from free music download websites like Chosic, Pixabay, or Free Music Archive, you will not need any third-party tool or app; you can download the song directly from the website. To do so, follow these steps:</p>
106
- <ol>
107
- <li>Open the free music download website and search for the song that you want to download.</li>
108
- <li>Click on the song title or the download button or link.</li>
109
- <li>Select the MP3 format and quality that you want.</li>
110
- <li>Save the MP3 file to your device or cloud storage.</li>
111
- </ol>
112
- <h2>Conclusion</h2>
113
- <p>12 Days of Christmas by Daystar Choir is a wonderful song that will brighten up your Christmas season. It is a live performance by a talented gospel choir that sings a medley of classic and modern Christmas carols with a twist. It is a fun and lively song that will make you dance and sing along.</p>
114
- <p>You can download this song as an MP3 file and enjoy it anytime and anywhere. You can find this song online on various online streaming platforms and free music download websites. You can also use different tools and apps to convert and download the song as an MP3 file. All you need to do is follow the steps that we have shown you in this article.</p>
115
- <p>So what are you waiting for? Download 12 Days of Christmas by Daystar Choir MP3 today and have a merry Christmas!</p>
116
- <h2>FAQs</h2>
117
- <h4>What is Daystar Choir?</h4>
118
- <p>Daystar Choir is a gospel choir from Nigeria that is part of the Daystar Christian Centre. The choir is known for its annual Christmas carol concerts that feature various songs, dances, and performances. The choir has also released several albums and singles, such as "Glory Halleluyah", "Hark the Herald", and "Joy to the World".</p>
119
- <h4>What are some other songs by Daystar Choir?</h4>
120
- <p>Some other songs by Daystar Choir are:</p>
121
- <ul>
122
- <li>"O Come All Ye Faithful"</li>
123
- <li>"Silent Night"</li>
124
- <li>"We Wish You a Merry Christmas"</li>
125
- <li>"Jingle Bells"</li>
126
- <li>"Feliz Navidad"</li>
127
- </ul>
128
- <h4>How can I support Daystar Choir?</h4>
129
- <p>You can support Daystar Choir by:</p>
130
- <ul>
131
- <li>Following them on their social media accounts, such as Facebook, Twitter, Instagram, and YouTube</li>
132
- <li>Subscribing to their newsletter or blog</li>
133
- <li>Donating to their ministry or charity projects</li>
134
- <li>Purchasing their albums or merchandise</li>
135
- <li>Attending their concerts or events</li>
136
- </ul>
137
- <h4>What are some other Christmas songs that I can download for free?</h4> <p>Some other Christmas songs that you can download for free are:</p>
138
- <ul>
139
- <li><strong>"O Holy Night" by Josh Groban</strong>: This is a beautiful rendition of the classic Christmas hymn by the famous singer and songwriter. You can download this song for free from Chosic by searching for "O Holy Night" by Josh Groban.</li>
140
- <li><strong>"All I Want for Christmas Is You" by Mariah Carey</strong>: This is one of the most popular and catchy Christmas songs of all time. It is a love song that expresses the desire to be with someone special for Christmas. You can download this song for free from Pixabay by searching for "All I Want for Christmas Is You" by Mariah Carey.</li>
141
- <li><strong>"Jingle Bell Rock" by Bobby Helms</strong>: This is a fun and upbeat rock and roll version of the traditional Christmas song. It is a song that will make you want to dance and celebrate. You can download this song for free from Free Music Archive by searching for "Jingle Bell Rock" by Bobby Helms.</li>
142
- </ul>
143
- <p>These are just some examples of the many Christmas songs that you can download for free online. You can explore more options by browsing through the websites that we have mentioned or using other sources that you trust. Just make sure that the songs are legal and royalty-free before you download them.</p>
144
- <p>We hope that this article has helped you learn how to download 12 Days of Christmas by Daystar Choir MP3 and enjoy this wonderful song. We also hope that you have discovered some other Christmas songs that you can download for free and add to your holiday playlist. Have a merry Christmas and a happy new year!</p>
145
 
spaces/801artistry/RVC801/demucs/__main__.py DELETED
@@ -1,317 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- import json
8
- import math
9
- import os
10
- import sys
11
- import time
12
- from dataclasses import dataclass, field
13
-
14
- import torch as th
15
- from torch import distributed, nn
16
- from torch.nn.parallel.distributed import DistributedDataParallel
17
-
18
- from .augment import FlipChannels, FlipSign, Remix, Scale, Shift
19
- from .compressed import get_compressed_datasets
20
- from .model import Demucs
21
- from .parser import get_name, get_parser
22
- from .raw import Rawset
23
- from .repitch import RepitchedWrapper
24
- from .pretrained import load_pretrained, SOURCES
25
- from .tasnet import ConvTasNet
26
- from .test import evaluate
27
- from .train import train_model, validate_model
28
- from .utils import (human_seconds, load_model, save_model, get_state,
29
- save_state, sizeof_fmt, get_quantizer)
30
- from .wav import get_wav_datasets, get_musdb_wav_datasets
31
-
32
-
33
- @dataclass
34
- class SavedState:
35
- metrics: list = field(default_factory=list)
36
- last_state: dict = None
37
- best_state: dict = None
38
- optimizer: dict = None
39
-
40
-
41
- def main():
42
- parser = get_parser()
43
- args = parser.parse_args()
44
- name = get_name(parser, args)
45
- print(f"Experiment {name}")
46
-
47
- if args.musdb is None and args.rank == 0:
48
- print(
49
- "You must provide the path to the MusDB dataset with the --musdb flag. "
50
- "To download the MusDB dataset, see https://sigsep.github.io/datasets/musdb.html.",
51
- file=sys.stderr)
52
- sys.exit(1)
53
-
54
- eval_folder = args.evals / name
55
- eval_folder.mkdir(exist_ok=True, parents=True)
56
- args.logs.mkdir(exist_ok=True)
57
- metrics_path = args.logs / f"{name}.json"
58
- eval_folder.mkdir(exist_ok=True, parents=True)
59
- args.checkpoints.mkdir(exist_ok=True, parents=True)
60
- args.models.mkdir(exist_ok=True, parents=True)
61
-
62
- if args.device is None:
63
- device = "cpu"
64
- if th.cuda.is_available():
65
- device = "cuda"
66
- else:
67
- device = args.device
68
-
69
- th.manual_seed(args.seed)
70
- # Prevents too many threads from being started when running `museval`, as it can be quite
71
- # inefficient on NUMA architectures.
72
- os.environ["OMP_NUM_THREADS"] = "1"
73
- os.environ["MKL_NUM_THREADS"] = "1"
74
-
75
- if args.world_size > 1:
76
- if device != "cuda" and args.rank == 0:
77
- print("Error: distributed training is only available with cuda device", file=sys.stderr)
78
- sys.exit(1)
79
- th.cuda.set_device(args.rank % th.cuda.device_count())
80
- distributed.init_process_group(backend="nccl",
81
- init_method="tcp://" + args.master,
82
- rank=args.rank,
83
- world_size=args.world_size)
84
-
85
- checkpoint = args.checkpoints / f"{name}.th"
86
- checkpoint_tmp = args.checkpoints / f"{name}.th.tmp"
87
- if args.restart and checkpoint.exists() and args.rank == 0:
88
- checkpoint.unlink()
89
-
90
- if args.test or args.test_pretrained:
91
- args.epochs = 1
92
- args.repeat = 0
93
- if args.test:
94
- model = load_model(args.models / args.test)
95
- else:
96
- model = load_pretrained(args.test_pretrained)
97
- elif args.tasnet:
98
- model = ConvTasNet(audio_channels=args.audio_channels,
99
- samplerate=args.samplerate, X=args.X,
100
- segment_length=4 * args.samples,
101
- sources=SOURCES)
102
- else:
103
- model = Demucs(
104
- audio_channels=args.audio_channels,
105
- channels=args.channels,
106
- context=args.context,
107
- depth=args.depth,
108
- glu=args.glu,
109
- growth=args.growth,
110
- kernel_size=args.kernel_size,
111
- lstm_layers=args.lstm_layers,
112
- rescale=args.rescale,
113
- rewrite=args.rewrite,
114
- stride=args.conv_stride,
115
- resample=args.resample,
116
- normalize=args.normalize,
117
- samplerate=args.samplerate,
118
- segment_length=4 * args.samples,
119
- sources=SOURCES,
120
- )
121
- model.to(device)
122
- if args.init:
123
- model.load_state_dict(load_pretrained(args.init).state_dict())
124
-
125
- if args.show:
126
- print(model)
127
- size = sizeof_fmt(4 * sum(p.numel() for p in model.parameters()))
128
- print(f"Model size {size}")
129
- return
130
-
131
- try:
132
- saved = th.load(checkpoint, map_location='cpu')
133
- except IOError:
134
- saved = SavedState()
135
-
136
- optimizer = th.optim.Adam(model.parameters(), lr=args.lr)
137
-
138
- quantizer = None
139
- quantizer = get_quantizer(model, args, optimizer)
140
-
141
- if saved.last_state is not None:
142
- model.load_state_dict(saved.last_state, strict=False)
143
- if saved.optimizer is not None:
144
- optimizer.load_state_dict(saved.optimizer)
145
-
146
- model_name = f"{name}.th"
147
- if args.save_model:
148
- if args.rank == 0:
149
- model.to("cpu")
150
- model.load_state_dict(saved.best_state)
151
- save_model(model, quantizer, args, args.models / model_name)
152
- return
153
- elif args.save_state:
154
- model_name = f"{args.save_state}.th"
155
- if args.rank == 0:
156
- model.to("cpu")
157
- model.load_state_dict(saved.best_state)
158
- state = get_state(model, quantizer)
159
- save_state(state, args.models / model_name)
160
- return
161
-
162
- if args.rank == 0:
163
- done = args.logs / f"{name}.done"
164
- if done.exists():
165
- done.unlink()
166
-
167
- augment = [Shift(args.data_stride)]
168
- if args.augment:
169
- augment += [FlipSign(), FlipChannels(), Scale(),
170
- Remix(group_size=args.remix_group_size)]
171
- augment = nn.Sequential(*augment).to(device)
172
- print("Agumentation pipeline:", augment)
173
-
174
- if args.mse:
175
- criterion = nn.MSELoss()
176
- else:
177
- criterion = nn.L1Loss()
178
-
179
- # Setting number of samples so that all convolution windows are full.
180
- # Prevents hard to debug mistake with the prediction being shifted compared
181
- # to the input mixture.
182
- samples = model.valid_length(args.samples)
183
- print(f"Number of training samples adjusted to {samples}")
184
- samples = samples + args.data_stride
185
- if args.repitch:
186
- # We need a bit more audio samples, to account for potential
187
- # tempo change.
188
- samples = math.ceil(samples / (1 - 0.01 * args.max_tempo))
189
-
190
- args.metadata.mkdir(exist_ok=True, parents=True)
191
- if args.raw:
192
- train_set = Rawset(args.raw / "train",
193
- samples=samples,
194
- channels=args.audio_channels,
195
- streams=range(1, len(model.sources) + 1),
196
- stride=args.data_stride)
197
-
198
- valid_set = Rawset(args.raw / "valid", channels=args.audio_channels)
199
- elif args.wav:
200
- train_set, valid_set = get_wav_datasets(args, samples, model.sources)
201
- elif args.is_wav:
202
- train_set, valid_set = get_musdb_wav_datasets(args, samples, model.sources)
203
- else:
204
- train_set, valid_set = get_compressed_datasets(args, samples)
205
-
206
- if args.repitch:
207
- train_set = RepitchedWrapper(
208
- train_set,
209
- proba=args.repitch,
210
- max_tempo=args.max_tempo)
211
-
212
- best_loss = float("inf")
213
- for epoch, metrics in enumerate(saved.metrics):
214
- print(f"Epoch {epoch:03d}: "
215
- f"train={metrics['train']:.8f} "
216
- f"valid={metrics['valid']:.8f} "
217
- f"best={metrics['best']:.4f} "
218
- f"ms={metrics.get('true_model_size', 0):.2f}MB "
219
- f"cms={metrics.get('compressed_model_size', 0):.2f}MB "
220
- f"duration={human_seconds(metrics['duration'])}")
221
- best_loss = metrics['best']
222
-
223
- if args.world_size > 1:
224
- dmodel = DistributedDataParallel(model,
225
- device_ids=[th.cuda.current_device()],
226
- output_device=th.cuda.current_device())
227
- else:
228
- dmodel = model
229
-
230
- for epoch in range(len(saved.metrics), args.epochs):
231
- begin = time.time()
232
- model.train()
233
- train_loss, model_size = train_model(
234
- epoch, train_set, dmodel, criterion, optimizer, augment,
235
- quantizer=quantizer,
236
- batch_size=args.batch_size,
237
- device=device,
238
- repeat=args.repeat,
239
- seed=args.seed,
240
- diffq=args.diffq,
241
- workers=args.workers,
242
- world_size=args.world_size)
243
- model.eval()
244
- valid_loss = validate_model(
245
- epoch, valid_set, model, criterion,
246
- device=device,
247
- rank=args.rank,
248
- split=args.split_valid,
249
- overlap=args.overlap,
250
- world_size=args.world_size)
251
-
252
- ms = 0
253
- cms = 0
254
- if quantizer and args.rank == 0:
255
- ms = quantizer.true_model_size()
256
- cms = quantizer.compressed_model_size(num_workers=min(40, args.world_size * 10))
257
-
258
- duration = time.time() - begin
259
- if valid_loss < best_loss and ms <= args.ms_target:
260
- best_loss = valid_loss
261
- saved.best_state = {
262
- key: value.to("cpu").clone()
263
- for key, value in model.state_dict().items()
264
- }
265
-
266
- saved.metrics.append({
267
- "train": train_loss,
268
- "valid": valid_loss,
269
- "best": best_loss,
270
- "duration": duration,
271
- "model_size": model_size,
272
- "true_model_size": ms,
273
- "compressed_model_size": cms,
274
- })
275
- if args.rank == 0:
276
- json.dump(saved.metrics, open(metrics_path, "w"))
277
-
278
- saved.last_state = model.state_dict()
279
- saved.optimizer = optimizer.state_dict()
280
- if args.rank == 0 and not args.test:
281
- th.save(saved, checkpoint_tmp)
282
- checkpoint_tmp.rename(checkpoint)
283
-
284
- print(f"Epoch {epoch:03d}: "
285
- f"train={train_loss:.8f} valid={valid_loss:.8f} best={best_loss:.4f} ms={ms:.2f}MB "
286
- f"cms={cms:.2f}MB "
287
- f"duration={human_seconds(duration)}")
288
-
289
- if args.world_size > 1:
290
- distributed.barrier()
291
-
292
- del dmodel
293
- model.load_state_dict(saved.best_state)
294
- if args.eval_cpu:
295
- device = "cpu"
296
- model.to(device)
297
- model.eval()
298
- evaluate(model, args.musdb, eval_folder,
299
- is_wav=args.is_wav,
300
- rank=args.rank,
301
- world_size=args.world_size,
302
- device=device,
303
- save=args.save,
304
- split=args.split_valid,
305
- shifts=args.shifts,
306
- overlap=args.overlap,
307
- workers=args.eval_workers)
308
- model.to("cpu")
309
- if args.rank == 0:
310
- if not (args.test or args.test_pretrained):
311
- save_model(model, quantizer, args, args.models / model_name)
312
- print("done")
313
- done.write_text("done")
314
-
315
-
316
- if __name__ == "__main__":
317
- main()
 
spaces/A666sxr/Genshin_TTS/text/__init__.py DELETED
@@ -1,56 +0,0 @@
1
- """ from https://github.com/keithito/tacotron """
2
- from text import cleaners
3
- from text.symbols import symbols
4
-
5
-
6
- # Mappings from symbol to numeric ID and vice versa:
7
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
8
- _id_to_symbol = {i: s for i, s in enumerate(symbols)}
9
-
10
-
11
- def text_to_sequence(text, cleaner_names):
12
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
13
- Args:
14
- text: string to convert to a sequence
15
- cleaner_names: names of the cleaner functions to run the text through
16
- Returns:
17
- List of integers corresponding to the symbols in the text
18
- '''
19
- sequence = []
20
-
21
- clean_text = _clean_text(text, cleaner_names)
22
- for symbol in clean_text:
23
- if symbol not in _symbol_to_id.keys():
24
- continue
25
- symbol_id = _symbol_to_id[symbol]
26
- sequence += [symbol_id]
27
- return sequence
28
-
29
-
30
- def cleaned_text_to_sequence(cleaned_text):
31
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
32
- Args:
33
- cleaned_text: string of cleaned text to convert to a sequence
34
- Returns:
35
- List of integers corresponding to the symbols in the text
36
- '''
37
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
38
- return sequence
39
-
40
-
41
- def sequence_to_text(sequence):
42
- '''Converts a sequence of IDs back to a string'''
43
- result = ''
44
- for symbol_id in sequence:
45
- s = _id_to_symbol[symbol_id]
46
- result += s
47
- return result
48
-
49
-
50
- def _clean_text(text, cleaner_names):
51
- for name in cleaner_names:
52
- cleaner = getattr(cleaners, name)
53
- if not cleaner:
54
- raise Exception('Unknown cleaner: %s' % name)
55
- text = cleaner(text)
56
- return text
 
spaces/AI-Hobbyist/Hoyo-RVC/trainset_preprocess_pipeline_print.py DELETED
@@ -1,139 +0,0 @@
1
- import sys, os, multiprocessing
2
- from scipy import signal
3
-
4
- now_dir = os.getcwd()
5
- sys.path.append(now_dir)
6
-
7
- inp_root = sys.argv[1]
8
- sr = int(sys.argv[2])
9
- n_p = int(sys.argv[3])
10
- exp_dir = sys.argv[4]
11
- noparallel = sys.argv[5] == "True"
12
- import numpy as np, os, traceback
13
- from slicer2 import Slicer
14
- import librosa, traceback
15
- from scipy.io import wavfile
16
- import multiprocessing
17
- from my_utils import load_audio
18
-
19
- mutex = multiprocessing.Lock()
20
- f = open("%s/preprocess.log" % exp_dir, "a+")
21
-
22
-
23
- def println(strr):
24
- mutex.acquire()
25
- print(strr)
26
- f.write("%s\n" % strr)
27
- f.flush()
28
- mutex.release()
29
-
30
-
31
- class PreProcess:
32
- def __init__(self, sr, exp_dir):
33
- self.slicer = Slicer(
34
- sr=sr,
35
- threshold=-42,
36
- min_length=1500,
37
- min_interval=400,
38
- hop_size=15,
39
- max_sil_kept=500,
40
- )
41
- self.sr = sr
42
- self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr)
43
- self.per = 3.7
44
- self.overlap = 0.3
45
- self.tail = self.per + self.overlap
46
- self.max = 0.9
47
- self.alpha = 0.75
48
- self.exp_dir = exp_dir
49
- self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir
50
- self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir
51
- os.makedirs(self.exp_dir, exist_ok=True)
52
- os.makedirs(self.gt_wavs_dir, exist_ok=True)
53
- os.makedirs(self.wavs16k_dir, exist_ok=True)
54
-
55
- def norm_write(self, tmp_audio, idx0, idx1):
56
- tmp_max = np.abs(tmp_audio).max()
57
- if tmp_max > 2.5:
58
- print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max))
59
- return
60
- tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + (
61
- 1 - self.alpha
62
- ) * tmp_audio
63
- wavfile.write(
64
- "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
65
- self.sr,
66
- tmp_audio.astype(np.float32),
67
- )
68
- tmp_audio = librosa.resample(
69
- tmp_audio, orig_sr=self.sr, target_sr=16000
70
- ) # , res_type="soxr_vhq"
71
- wavfile.write(
72
- "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
73
- 16000,
74
- tmp_audio.astype(np.float32),
75
- )
76
-
77
- def pipeline(self, path, idx0):
78
- try:
79
- audio = load_audio(path, self.sr)
80
- # a zero-phase digital filter causes pre-ringing noise...
81
- # audio = signal.filtfilt(self.bh, self.ah, audio)
82
- audio = signal.lfilter(self.bh, self.ah, audio)
83
-
84
- idx1 = 0
85
- for audio in self.slicer.slice(audio):
86
- i = 0
87
- while 1:
88
- start = int(self.sr * (self.per - self.overlap) * i)
89
- i += 1
90
- if len(audio[start:]) > self.tail * self.sr:
91
- tmp_audio = audio[start : start + int(self.per * self.sr)]
92
- self.norm_write(tmp_audio, idx0, idx1)
93
- idx1 += 1
94
- else:
95
- tmp_audio = audio[start:]
96
- idx1 += 1
97
- break
98
- self.norm_write(tmp_audio, idx0, idx1)
99
- println("%s->Suc." % path)
100
- except:
101
- println("%s->%s" % (path, traceback.format_exc()))
102
-
103
- def pipeline_mp(self, infos):
104
- for path, idx0 in infos:
105
- self.pipeline(path, idx0)
106
-
107
- def pipeline_mp_inp_dir(self, inp_root, n_p):
108
- try:
109
- infos = [
110
- ("%s/%s" % (inp_root, name), idx)
111
- for idx, name in enumerate(sorted(list(os.listdir(inp_root))))
112
- ]
113
- if noparallel:
114
- for i in range(n_p):
115
- self.pipeline_mp(infos[i::n_p])
116
- else:
117
- ps = []
118
- for i in range(n_p):
119
- p = multiprocessing.Process(
120
- target=self.pipeline_mp, args=(infos[i::n_p],)
121
- )
122
- ps.append(p)
123
- p.start()
124
- for i in range(n_p):
125
- ps[i].join()
126
- except:
127
- println("Fail. %s" % traceback.format_exc())
128
-
129
-
130
- def preprocess_trainset(inp_root, sr, n_p, exp_dir):
131
- pp = PreProcess(sr, exp_dir)
132
- println("start preprocess")
133
- println(sys.argv)
134
- pp.pipeline_mp_inp_dir(inp_root, n_p)
135
- println("end preprocess")
136
-
137
-
138
- if __name__ == "__main__":
139
- preprocess_trainset(inp_root, sr, n_p, exp_dir)
 
spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py DELETED
@@ -1,99 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- """
8
- Evaluation with objective metrics for the pretrained MusicGen models.
9
- This grid takes signature from the training grid and runs evaluation-only stage.
10
-
11
- When running the grid for the first time, please use:
12
- REGEN=1 dora grid musicgen.musicgen_pretrained_32khz_eval
13
- and re-use the REGEN=1 option when the grid is changed to force regenerating it.
14
-
15
- Note that you need the proper metrics external libraries setup to use all
16
- the objective metrics activated in this grid. Refer to the README for more information.
17
- """
18
-
19
- import os
20
-
21
- from ._explorers import GenerationEvalExplorer
22
- from ...environment import AudioCraftEnvironment
23
- from ... import train
24
-
25
-
26
- def eval(launcher, batch_size: int = 32, eval_melody: bool = False):
27
- opts = {
28
- 'dset': 'audio/musiccaps_32khz',
29
- 'solver/musicgen/evaluation': 'objective_eval',
30
- 'execute_only': 'evaluate',
31
- '+dataset.evaluate.batch_size': batch_size,
32
- '+metrics.fad.tf.batch_size': 16,
33
- }
34
- # chroma-specific evaluation
35
- chroma_opts = {
36
- 'dset': 'internal/music_400k_32khz',
37
- 'dataset.evaluate.segment_duration': 30,
38
- 'dataset.evaluate.num_samples': 1000,
39
- 'evaluate.metrics.chroma_cosine': True,
40
- 'evaluate.metrics.fad': False,
41
- 'evaluate.metrics.kld': False,
42
- 'evaluate.metrics.text_consistency': False,
43
- }
44
- # binary for FAD computation: replace this path with your own path
45
- metrics_opts = {
46
- 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research'
47
- }
48
- opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.}
49
- opt2 = {'transformer_lm.two_step_cfg': True}
50
-
51
- sub = launcher.bind(opts)
52
- sub.bind_(metrics_opts)
53
-
54
- # base objective metrics
55
- sub(opt1, opt2)
56
-
57
- if eval_melody:
58
- # chroma-specific metrics
59
- sub(opt1, opt2, chroma_opts)
60
-
61
-
62
- @GenerationEvalExplorer
63
- def explorer(launcher):
64
- partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global'])
65
- launcher.slurm_(gpus=4, partition=partitions)
66
-
67
- if 'REGEN' not in os.environ:
68
- folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1]
69
- with launcher.job_array():
70
- for sig in folder.iterdir():
71
- if not sig.is_symlink():
72
- continue
73
- xp = train.main.get_xp_from_sig(sig.name)
74
- launcher(xp.argv)
75
- return
76
-
77
- with launcher.job_array():
78
- musicgen_base = launcher.bind(solver="musicgen/musicgen_base_32khz")
79
- musicgen_base.bind_({'autocast': False, 'fsdp.use': True})
80
-
81
- # base musicgen models
82
- musicgen_base_small = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-small'})
83
- eval(musicgen_base_small, batch_size=128)
84
-
85
- musicgen_base_medium = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-medium'})
86
- musicgen_base_medium.bind_({'model/lm/model_scale': 'medium'})
87
- eval(musicgen_base_medium, batch_size=128)
88
-
89
- musicgen_base_large = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-large'})
90
- musicgen_base_large.bind_({'model/lm/model_scale': 'large'})
91
- eval(musicgen_base_large, batch_size=128)
92
-
93
- # melody musicgen model
94
- musicgen_melody = launcher.bind(solver="musicgen/musicgen_melody_32khz")
95
- musicgen_melody.bind_({'autocast': False, 'fsdp.use': True})
96
-
97
- musicgen_melody_medium = musicgen_melody.bind({'continue_from': '//pretrained/facebook/musicgen-melody'})
98
- musicgen_melody_medium.bind_({'model/lm/model_scale': 'medium'})
99
- eval(musicgen_melody_medium, batch_size=128, eval_melody=True)
 
spaces/AIFILMS/StyleGANEX/models/bisenet/resnet.py DELETED
@@ -1,109 +0,0 @@
1
- #!/usr/bin/python
2
- # -*- encoding: utf-8 -*-
3
-
4
- import torch
5
- import torch.nn as nn
6
- import torch.nn.functional as F
7
- import torch.utils.model_zoo as modelzoo
8
-
9
- # from modules.bn import InPlaceABNSync as BatchNorm2d
10
-
11
- resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth'
12
-
13
-
14
- def conv3x3(in_planes, out_planes, stride=1):
15
- """3x3 convolution with padding"""
16
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
17
- padding=1, bias=False)
18
-
19
-
20
- class BasicBlock(nn.Module):
21
- def __init__(self, in_chan, out_chan, stride=1):
22
- super(BasicBlock, self).__init__()
23
- self.conv1 = conv3x3(in_chan, out_chan, stride)
24
- self.bn1 = nn.BatchNorm2d(out_chan)
25
- self.conv2 = conv3x3(out_chan, out_chan)
26
- self.bn2 = nn.BatchNorm2d(out_chan)
27
- self.relu = nn.ReLU(inplace=True)
28
- self.downsample = None
29
- if in_chan != out_chan or stride != 1:
30
- self.downsample = nn.Sequential(
31
- nn.Conv2d(in_chan, out_chan,
32
- kernel_size=1, stride=stride, bias=False),
33
- nn.BatchNorm2d(out_chan),
34
- )
35
-
36
- def forward(self, x):
37
- residual = self.conv1(x)
38
- residual = F.relu(self.bn1(residual))
39
- residual = self.conv2(residual)
40
- residual = self.bn2(residual)
41
-
42
- shortcut = x
43
- if self.downsample is not None:
44
- shortcut = self.downsample(x)
45
-
46
- out = shortcut + residual
47
- out = self.relu(out)
48
- return out
49
-
50
-
51
- def create_layer_basic(in_chan, out_chan, bnum, stride=1):
52
- layers = [BasicBlock(in_chan, out_chan, stride=stride)]
53
- for i in range(bnum-1):
54
- layers.append(BasicBlock(out_chan, out_chan, stride=1))
55
- return nn.Sequential(*layers)
56
-
57
-
58
- class Resnet18(nn.Module):
59
- def __init__(self):
60
- super(Resnet18, self).__init__()
61
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
62
- bias=False)
63
- self.bn1 = nn.BatchNorm2d(64)
64
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
65
- self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)
66
- self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)
67
- self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)
68
- self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)
69
- self.init_weight()
70
-
71
- def forward(self, x):
72
- x = self.conv1(x)
73
- x = F.relu(self.bn1(x))
74
- x = self.maxpool(x)
75
-
76
- x = self.layer1(x)
77
- feat8 = self.layer2(x) # 1/8
78
- feat16 = self.layer3(feat8) # 1/16
79
- feat32 = self.layer4(feat16) # 1/32
80
- return feat8, feat16, feat32
81
-
82
- def init_weight(self):
83
- state_dict = modelzoo.load_url(resnet18_url)
84
- self_state_dict = self.state_dict()
85
- for k, v in state_dict.items():
86
- if 'fc' in k: continue
87
- self_state_dict.update({k: v})
88
- self.load_state_dict(self_state_dict)
89
-
90
- def get_params(self):
91
- wd_params, nowd_params = [], []
92
- for name, module in self.named_modules():
93
- if isinstance(module, (nn.Linear, nn.Conv2d)):
94
- wd_params.append(module.weight)
95
- if not module.bias is None:
96
- nowd_params.append(module.bias)
97
- elif isinstance(module, nn.BatchNorm2d):
98
- nowd_params += list(module.parameters())
99
- return wd_params, nowd_params
100
-
101
-
102
- if __name__ == "__main__":
103
- net = Resnet18()
104
- x = torch.randn(16, 3, 224, 224)
105
- out = net(x)
106
- print(out[0].size())
107
- print(out[1].size())
108
- print(out[2].size())
109
- net.get_params()
 
spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm.py DELETED
@@ -1,1444 +0,0 @@
1
- """
2
- wild mixture of
3
- https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
4
- https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py
5
- https://github.com/CompVis/taming-transformers
6
- -- merci
7
- """
8
- import torch
9
- import torch.nn as nn
10
- import numpy as np
11
- import pytorch_lightning as pl
12
- from torch.optim.lr_scheduler import LambdaLR
13
- from einops import rearrange, repeat
14
- from contextlib import contextmanager
15
- from functools import partial
16
- from tqdm import tqdm
17
- from torchvision.utils import make_grid
18
- from pytorch_lightning.utilities.distributed import rank_zero_only
19
-
20
- from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
21
- from ldm.modules.ema import LitEma
22
- from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
23
- from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL  # VQModelInterface assumed available (as in upstream latent-diffusion); it is referenced by isinstance checks below
24
- from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
25
- from ldm.models.diffusion.ddim import DDIMSampler
26
-
27
-
28
- __conditioning_keys__ = {'concat': 'c_concat',
29
- 'crossattn': 'c_crossattn',
30
- 'adm': 'y'}
31
-
32
-
33
- def disabled_train(self, mode=True):
34
- """Overwrite model.train with this function to make sure train/eval mode
35
- does not change anymore."""
36
- return self
37
-
38
-
39
- def uniform_on_device(r1, r2, shape, device):
40
- return (r1 - r2) * torch.rand(*shape, device=device) + r2
41
-
42
-
43
- class DDPM(pl.LightningModule):
44
- # classic DDPM with Gaussian diffusion, in image space
45
- def __init__(self,
46
- unet_config,
47
- timesteps=1000,
48
- beta_schedule="linear",
49
- loss_type="l2",
50
- ckpt_path=None,
51
- ignore_keys=[],
52
- load_only_unet=False,
53
- monitor="val/loss",
54
- use_ema=True,
55
- first_stage_key="image",
56
- image_size=256,
57
- channels=3,
58
- log_every_t=100,
59
- clip_denoised=True,
60
- linear_start=1e-4,
61
- linear_end=2e-2,
62
- cosine_s=8e-3,
63
- given_betas=None,
64
- original_elbo_weight=0.,
65
- v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta
66
- l_simple_weight=1.,
67
- conditioning_key=None,
68
- parameterization="eps", # all config files use "eps"
69
- scheduler_config=None,
70
- use_positional_encodings=False,
71
- learn_logvar=False,
72
- logvar_init=0.,
73
- ):
74
- super().__init__()
75
- assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"'
76
- self.parameterization = parameterization
77
- print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode")
78
- self.cond_stage_model = None
79
- self.clip_denoised = clip_denoised
80
- self.log_every_t = log_every_t
81
- self.first_stage_key = first_stage_key
82
- self.image_size = image_size # try conv?
83
- self.channels = channels
84
- self.use_positional_encodings = use_positional_encodings
85
- self.model = DiffusionWrapper(unet_config, conditioning_key)
86
- count_params(self.model, verbose=True)
87
- self.use_ema = use_ema
88
- if self.use_ema:
89
- self.model_ema = LitEma(self.model)
90
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
91
-
92
- self.use_scheduler = scheduler_config is not None
93
- if self.use_scheduler:
94
- self.scheduler_config = scheduler_config
95
-
96
- self.v_posterior = v_posterior
97
- self.original_elbo_weight = original_elbo_weight
98
- self.l_simple_weight = l_simple_weight
99
-
100
- if monitor is not None:
101
- self.monitor = monitor
102
- if ckpt_path is not None:
103
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet)
104
-
105
- self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps,
106
- linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s)
107
-
108
- self.loss_type = loss_type
109
-
110
- self.learn_logvar = learn_logvar
111
- self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,))
112
- if self.learn_logvar:
113
- self.logvar = nn.Parameter(self.logvar, requires_grad=True)
114
-
115
- def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000,
116
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
117
- if exists(given_betas):
118
- betas = given_betas
119
- else:
120
- betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end,
121
- cosine_s=cosine_s)
122
- alphas = 1. - betas
123
- alphas_cumprod = np.cumprod(alphas, axis=0)
124
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
125
-
126
- timesteps, = betas.shape
127
- self.num_timesteps = int(timesteps)
128
- self.linear_start = linear_start
129
- self.linear_end = linear_end
130
- assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep'
131
-
132
- to_torch = partial(torch.tensor, dtype=torch.float32)
133
-
134
- self.register_buffer('betas', to_torch(betas))
135
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
136
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
137
-
138
- # calculations for diffusion q(x_t | x_{t-1}) and others
139
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
140
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
141
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
142
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
143
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
144
-
145
- # calculations for posterior q(x_{t-1} | x_t, x_0)
146
- posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / (
147
- 1. - alphas_cumprod) + self.v_posterior * betas
148
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
149
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
150
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
151
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
152
- self.register_buffer('posterior_mean_coef1', to_torch(
153
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
154
- self.register_buffer('posterior_mean_coef2', to_torch(
155
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
156
-
157
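- # per-timestep weights for the (optional) variational lower-bound term of the loss, derived from the noise schedule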
- if self.parameterization == "eps":
158
- lvlb_weights = self.betas ** 2 / (
159
- 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
160
- elif self.parameterization == "x0":
161
- lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))
162
- else:
163
- raise NotImplementedError("mu not supported")
164
- # TODO how to choose this term
165
- lvlb_weights[0] = lvlb_weights[1]
166
- self.register_buffer('lvlb_weights', lvlb_weights, persistent=False)
167
- assert not torch.isnan(self.lvlb_weights).all()
168
-
169
- @contextmanager
170
- def ema_scope(self, context=None):
171
- if self.use_ema:
172
- self.model_ema.store(self.model.parameters())
173
- self.model_ema.copy_to(self.model)
174
- if context is not None:
175
- print(f"{context}: Switched to EMA weights")
176
- try:
177
- yield None
178
- finally:
179
- if self.use_ema:
180
- self.model_ema.restore(self.model.parameters())
181
- if context is not None:
182
- print(f"{context}: Restored training weights")
183
-
184
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
185
- sd = torch.load(path, map_location="cpu")
186
- if "state_dict" in list(sd.keys()):
187
- sd = sd["state_dict"]
188
- keys = list(sd.keys())
189
- for k in keys:
190
- for ik in ignore_keys:
191
- if k.startswith(ik):
192
- print("Deleting key {} from state_dict.".format(k))
193
- del sd[k]
194
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
195
- sd, strict=False)
196
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
197
- if len(missing) > 0:
198
- print(f"Missing Keys: {missing}")
199
- if len(unexpected) > 0:
200
- print(f"Unexpected Keys: {unexpected}")
201
-
202
- def q_mean_variance(self, x_start, t):
203
- """
204
- Get the distribution q(x_t | x_0).
205
- :param x_start: the [N x C x ...] tensor of noiseless inputs.
206
- :param t: the number of diffusion steps (minus 1). Here, 0 means one step.
207
- :return: A tuple (mean, variance, log_variance), all of x_start's shape.
208
- """
209
- mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start)
210
- variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape)
211
- log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape)
212
- return mean, variance, log_variance
213
-
214
- def predict_start_from_noise(self, x_t, t, noise):
215
- return (
216
- extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
217
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
218
- )
219
-
220
- def q_posterior(self, x_start, x_t, t):
221
- posterior_mean = (
222
- extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start +
223
- extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t
224
- )
225
- posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape)
226
- posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape)
227
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
228
-
229
- def p_mean_variance(self, x, t, clip_denoised: bool):
230
- model_out = self.model(x, t)
231
- if self.parameterization == "eps":
232
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
233
- elif self.parameterization == "x0":
234
- x_recon = model_out
235
- if clip_denoised:
236
- x_recon.clamp_(-1., 1.)
237
-
238
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
239
- return model_mean, posterior_variance, posterior_log_variance
240
-
241
- @torch.no_grad()
242
- def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
243
- b, *_, device = *x.shape, x.device
244
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
245
- noise = noise_like(x.shape, device, repeat_noise)
246
- # no noise when t == 0
247
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
248
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
249
-
250
- @torch.no_grad()
251
- def p_sample_loop(self, shape, return_intermediates=False):
252
- device = self.betas.device
253
- b = shape[0]
254
- img = torch.randn(shape, device=device)
255
- intermediates = [img]
256
- for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
257
- img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
258
- clip_denoised=self.clip_denoised)
259
- if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
260
- intermediates.append(img)
261
- if return_intermediates:
262
- return img, intermediates
263
- return img
264
-
265
- @torch.no_grad()
266
- def sample(self, batch_size=16, return_intermediates=False):
267
- image_size = self.image_size
268
- channels = self.channels
269
- return self.p_sample_loop((batch_size, channels, image_size, image_size),
270
- return_intermediates=return_intermediates)
271
-
272
- def q_sample(self, x_start, t, noise=None):
273
- noise = default(noise, lambda: torch.randn_like(x_start))
274
- return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
275
- extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
276
-
277
- def get_loss(self, pred, target, mean=True):
278
- if self.loss_type == 'l1':
279
- loss = (target - pred).abs()
280
- if mean:
281
- loss = loss.mean()
282
- elif self.loss_type == 'l2':
283
- if mean:
284
- loss = torch.nn.functional.mse_loss(target, pred)
285
- else:
286
- loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
287
- else:
288
- raise NotImplementedError(f"unknown loss type '{self.loss_type}'")
289
-
290
- return loss
291
-
292
- def p_losses(self, x_start, t, noise=None):
293
- noise = default(noise, lambda: torch.randn_like(x_start))
294
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
295
- model_out = self.model(x_noisy, t)
296
-
297
- loss_dict = {}
298
- if self.parameterization == "eps":
299
- target = noise
300
- elif self.parameterization == "x0":
301
- target = x_start
302
- else:
303
- raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
304
-
305
- loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
306
-
307
- log_prefix = 'train' if self.training else 'val'
308
-
309
- loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
310
- loss_simple = loss.mean() * self.l_simple_weight
311
-
312
- loss_vlb = (self.lvlb_weights[t] * loss).mean()
313
- loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
314
-
315
- loss = loss_simple + self.original_elbo_weight * loss_vlb
316
-
317
- loss_dict.update({f'{log_prefix}/loss': loss})
318
-
319
- return loss, loss_dict
320
-
321
- def forward(self, x, *args, **kwargs):
322
- # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
323
- # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
324
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
325
- return self.p_losses(x, t, *args, **kwargs)
326
-
327
- def get_input(self, batch, k):
328
- x = batch[k]
329
- if len(x.shape) == 3:
330
- x = x[..., None]
331
- x = rearrange(x, 'b h w c -> b c h w')
332
- x = x.to(memory_format=torch.contiguous_format).float()
333
- return x
334
-
335
- def shared_step(self, batch):
336
- x = self.get_input(batch, self.first_stage_key)
337
- loss, loss_dict = self(x)
338
- return loss, loss_dict
339
-
340
- def training_step(self, batch, batch_idx):
341
- loss, loss_dict = self.shared_step(batch)
342
-
343
- self.log_dict(loss_dict, prog_bar=True,
344
- logger=True, on_step=True, on_epoch=True)
345
-
346
- self.log("global_step", self.global_step,
347
- prog_bar=True, logger=True, on_step=True, on_epoch=False)
348
-
349
- if self.use_scheduler:
350
- lr = self.optimizers().param_groups[0]['lr']
351
- self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False)
352
-
353
- return loss
354
-
355
- @torch.no_grad()
356
- def validation_step(self, batch, batch_idx):
357
- _, loss_dict_no_ema = self.shared_step(batch)
358
- with self.ema_scope():
359
- _, loss_dict_ema = self.shared_step(batch)
360
- loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
361
- self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
362
- self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True)
363
-
364
- def on_train_batch_end(self, *args, **kwargs):
365
- if self.use_ema:
366
- self.model_ema(self.model)
367
-
368
- def _get_rows_from_list(self, samples):
369
- n_imgs_per_row = len(samples)
370
- denoise_grid = rearrange(samples, 'n b c h w -> b n c h w')
371
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
372
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
373
- return denoise_grid
374
-
375
- @torch.no_grad()
376
- def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs):
377
- log = dict()
378
- x = self.get_input(batch, self.first_stage_key)
379
- N = min(x.shape[0], N)
380
- n_row = min(x.shape[0], n_row)
381
- x = x.to(self.device)[:N]
382
- log["inputs"] = x
383
-
384
- # get diffusion row
385
- diffusion_row = list()
386
- x_start = x[:n_row]
387
-
388
- for t in range(self.num_timesteps):
389
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
390
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
391
- t = t.to(self.device).long()
392
- noise = torch.randn_like(x_start)
393
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
394
- diffusion_row.append(x_noisy)
395
-
396
- log["diffusion_row"] = self._get_rows_from_list(diffusion_row)
397
-
398
- if sample:
399
- # get denoise row
400
- with self.ema_scope("Plotting"):
401
- samples, denoise_row = self.sample(batch_size=N, return_intermediates=True)
402
-
403
- log["samples"] = samples
404
- log["denoise_row"] = self._get_rows_from_list(denoise_row)
405
-
406
- if return_keys:
407
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
408
- return log
409
- else:
410
- return {key: log[key] for key in return_keys}
411
- return log
412
-
413
- def configure_optimizers(self):
414
- lr = self.learning_rate
415
- params = list(self.model.parameters())
416
- if self.learn_logvar:
417
- params = params + [self.logvar]
418
- opt = torch.optim.AdamW(params, lr=lr)
419
- return opt
420
-
421
-
422
- class LatentDiffusion(DDPM):
423
- """main class"""
424
- def __init__(self,
425
- first_stage_config,
426
- cond_stage_config,
427
- num_timesteps_cond=None,
428
- cond_stage_key="image",  # 'caption' for txt2image, 'masked_image' for inpainting
429
- cond_stage_trainable=False,
430
- concat_mode=True,# true for inpainting
431
- cond_stage_forward=None,
432
- conditioning_key=None, # 'crossattn' for txt2image, None for inpainting
433
- scale_factor=1.0,
434
- scale_by_std=False,
435
- *args, **kwargs):
436
- self.num_timesteps_cond = default(num_timesteps_cond, 1)
437
- self.scale_by_std = scale_by_std
438
- assert self.num_timesteps_cond <= kwargs['timesteps']
439
- # for backwards compatibility after implementation of DiffusionWrapper
440
- if conditioning_key is None:
441
- conditioning_key = 'concat' if concat_mode else 'crossattn'
442
- if cond_stage_config == '__is_unconditional__':
443
- conditioning_key = None
444
- ckpt_path = kwargs.pop("ckpt_path", None)
445
- ignore_keys = kwargs.pop("ignore_keys", [])
446
- super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
447
- self.concat_mode = concat_mode
448
- self.cond_stage_trainable = cond_stage_trainable
449
- self.cond_stage_key = cond_stage_key
450
- try:
451
- self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1
452
- except Exception:
453
- self.num_downs = 0
454
- if not scale_by_std:
455
- self.scale_factor = scale_factor
456
- else:
457
- self.register_buffer('scale_factor', torch.tensor(scale_factor))
458
- self.instantiate_first_stage(first_stage_config)
459
- self.instantiate_cond_stage(cond_stage_config)
460
- self.cond_stage_forward = cond_stage_forward
461
- self.clip_denoised = False
462
- self.bbox_tokenizer = None
463
-
464
- self.restarted_from_ckpt = False
465
- if ckpt_path is not None:
466
- self.init_from_ckpt(ckpt_path, ignore_keys)
467
- self.restarted_from_ckpt = True
468
-
469
- def make_cond_schedule(self):
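- # maps each diffusion timestep to one of num_timesteps_cond conditioning timesteps (used when the conditioning schedule is shortened)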
470
- self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long)
471
- ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long()
472
- self.cond_ids[:self.num_timesteps_cond] = ids
473
-
474
- @rank_zero_only
475
- @torch.no_grad()
476
- def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
477
- # only for very first batch
478
- if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt:
479
- assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously'
480
- # set rescale weight to 1./std of encodings
481
- print("### USING STD-RESCALING ###")
482
- x = super().get_input(batch, self.first_stage_key)
483
- x = x.to(self.device)
484
- encoder_posterior = self.encode_first_stage(x)
485
- z = self.get_first_stage_encoding(encoder_posterior).detach()
486
- del self.scale_factor
487
- self.register_buffer('scale_factor', 1. / z.flatten().std())
488
- print(f"setting self.scale_factor to {self.scale_factor}")
489
- print("### USING STD-RESCALING ###")
490
-
491
- def register_schedule(self,
492
- given_betas=None, beta_schedule="linear", timesteps=1000,
493
- linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
494
- super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s)
495
-
496
- self.shorten_cond_schedule = self.num_timesteps_cond > 1
497
- if self.shorten_cond_schedule:
498
- self.make_cond_schedule()
499
-
500
- def instantiate_first_stage(self, config):
501
- model = instantiate_from_config(config)
502
- self.first_stage_model = model.eval()
503
- self.first_stage_model.train = disabled_train
504
- for param in self.first_stage_model.parameters():
505
- param.requires_grad = False
506
-
507
- def instantiate_cond_stage(self, config):
508
- if not self.cond_stage_trainable:
509
- if config == "__is_first_stage__":# inpaint
510
- print("Using first stage also as cond stage.")
511
- self.cond_stage_model = self.first_stage_model
512
- elif config == "__is_unconditional__":
513
- print(f"Training {self.__class__.__name__} as an unconditional model.")
514
- self.cond_stage_model = None
515
- # self.be_unconditional = True
516
- else:
517
- model = instantiate_from_config(config)
518
- self.cond_stage_model = model.eval()
519
- self.cond_stage_model.train = disabled_train
520
- for param in self.cond_stage_model.parameters():
521
- param.requires_grad = False
522
- else:
523
- assert config != '__is_first_stage__'
524
- assert config != '__is_unconditional__'
525
- model = instantiate_from_config(config)
526
- self.cond_stage_model = model
527
-
528
- def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False):
529
- denoise_row = []
530
- for zd in tqdm(samples, desc=desc):
531
- denoise_row.append(self.decode_first_stage(zd.to(self.device),
532
- force_not_quantize=force_no_decoder_quantization))
533
- n_imgs_per_row = len(denoise_row)
534
- denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W
535
- denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w')
536
- denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w')
537
- denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row)
538
- return denoise_grid
539
-
540
- def get_first_stage_encoding(self, encoder_posterior):
541
- if isinstance(encoder_posterior, DiagonalGaussianDistribution):
542
- z = encoder_posterior.sample()
543
- elif isinstance(encoder_posterior, torch.Tensor):
544
- z = encoder_posterior
545
- else:
546
- raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented")
547
- return self.scale_factor * z
548
-
549
- def get_learned_conditioning(self, c):
550
- if self.cond_stage_forward is None:
551
- if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode):
552
- c = self.cond_stage_model.encode(c)
553
- if isinstance(c, DiagonalGaussianDistribution):
554
- c = c.mode()
555
- else:
556
- c = self.cond_stage_model(c)
557
- else:
558
- assert hasattr(self.cond_stage_model, self.cond_stage_forward)
559
- c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
560
- return c
561
-
562
- def meshgrid(self, h, w):
563
- y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1)
564
- x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1)
565
-
566
- arr = torch.cat([y, x], dim=-1)
567
- return arr
568
-
569
- def delta_border(self, h, w):
570
- """
571
- :param h: height
572
- :param w: width
573
- :return: normalized distance to image border,
574
- with min distance = 0 at border and max dist = 0.5 at image center
575
- """
576
- lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2)
577
- arr = self.meshgrid(h, w) / lower_right_corner
578
- dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0]
579
- dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0]
580
- edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0]
581
- return edge_dist
582
-
583
- def get_weighting(self, h, w, Ly, Lx, device):
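- # builds smooth per-pixel weights for blending overlapping patches: weights grow towards patch centers and are clipped to the configured range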
584
- weighting = self.delta_border(h, w)
585
- weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"],
586
- self.split_input_params["clip_max_weight"], )
587
- weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device)
588
-
589
- if self.split_input_params["tie_braker"]:
590
- L_weighting = self.delta_border(Ly, Lx)
591
- L_weighting = torch.clip(L_weighting,
592
- self.split_input_params["clip_min_tie_weight"],
593
- self.split_input_params["clip_max_tie_weight"])
594
-
595
- L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device)
596
- weighting = weighting * L_weighting
597
- return weighting
598
-
599
- def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code
600
- """
601
- :param x: img of size (bs, c, h, w)
602
- :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1])
603
- """
604
- bs, nc, h, w = x.shape
605
-
606
- # number of crops in image
607
- Ly = (h - kernel_size[0]) // stride[0] + 1
608
- Lx = (w - kernel_size[1]) // stride[1] + 1
609
-
610
- if uf == 1 and df == 1:
611
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
612
- unfold = torch.nn.Unfold(**fold_params)
613
-
614
- fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params)
615
-
616
- weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype)
617
- normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap
618
- weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx))
619
-
620
- elif uf > 1 and df == 1:
621
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
622
- unfold = torch.nn.Unfold(**fold_params)
623
-
624
- fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf),
625
- dilation=1, padding=0,
626
- stride=(stride[0] * uf, stride[1] * uf))
627
- fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2)
628
-
629
- weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype)
630
- normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap
631
- weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx))
632
-
633
- elif df > 1 and uf == 1:
634
- fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride)
635
- unfold = torch.nn.Unfold(**fold_params)
636
-
637
- fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df),
638
- dilation=1, padding=0,
639
- stride=(stride[0] // df, stride[1] // df))
640
- fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2)
641
-
642
- weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype)
643
- normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap
644
- weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx))
645
-
646
- else:
647
- raise NotImplementedError
648
-
649
- return fold, unfold, normalization, weighting
650
-
651
- @torch.no_grad()
652
- def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False,
653
- cond_key=None, return_original_cond=False, bs=None):
654
- x = super().get_input(batch, k)
655
- if bs is not None:
656
- x = x[:bs]
657
- x = x.to(self.device)
658
- encoder_posterior = self.encode_first_stage(x)
659
- z = self.get_first_stage_encoding(encoder_posterior).detach()
660
-
661
- if self.model.conditioning_key is not None:
662
- if cond_key is None:
663
- cond_key = self.cond_stage_key
664
- if cond_key != self.first_stage_key:  # cond_key is not image; for inpainting it's masked_img
665
- if cond_key in ['caption', 'coordinates_bbox']:
666
- xc = batch[cond_key]
667
- elif cond_key == 'class_label':
668
- xc = batch
669
- else:
670
- xc = super().get_input(batch, cond_key).to(self.device)
671
- else:
672
- xc = x
673
- if not self.cond_stage_trainable or force_c_encode:
674
- if isinstance(xc, dict) or isinstance(xc, list):
675
- # import pudb; pudb.set_trace()
676
- c = self.get_learned_conditioning(xc)
677
- else:
678
- c = self.get_learned_conditioning(xc.to(self.device))
679
- else:
680
- c = xc
681
- if bs is not None:
682
- c = c[:bs]
683
-
684
- if self.use_positional_encodings:
685
- pos_x, pos_y = self.compute_latent_shifts(batch)
686
- ckey = __conditioning_keys__[self.model.conditioning_key]
687
- c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y}
688
-
689
- else:
690
- c = None
691
- xc = None
692
- if self.use_positional_encodings:
693
- pos_x, pos_y = self.compute_latent_shifts(batch)
694
- c = {'pos_x': pos_x, 'pos_y': pos_y}
695
- out = [z, c]
696
- if return_first_stage_outputs:
697
- xrec = self.decode_first_stage(z)
698
- out.extend([x, xrec])
699
- if return_original_cond:
700
- out.append(xc)
701
- return out
702
-
703
- @torch.no_grad()
704
- def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
705
- if predict_cids:
706
- if z.dim() == 4:
707
- z = torch.argmax(z.exp(), dim=1).long()
708
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
709
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
710
-
711
- z = 1. / self.scale_factor * z
712
-
713
- if hasattr(self, "split_input_params"):
714
- if self.split_input_params["patch_distributed_vq"]:
715
- ks = self.split_input_params["ks"] # eg. (128, 128)
716
- stride = self.split_input_params["stride"] # eg. (64, 64)
717
- uf = self.split_input_params["vqf"]
718
- bs, nc, h, w = z.shape
719
- if ks[0] > h or ks[1] > w:
720
- ks = (min(ks[0], h), min(ks[1], w))
721
- print("reducing Kernel")
722
-
723
- if stride[0] > h or stride[1] > w:
724
- stride = (min(stride[0], h), min(stride[1], w))
725
- print("reducing stride")
726
-
727
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
728
-
729
- z = unfold(z) # (bn, nc * prod(**ks), L)
730
- # 1. Reshape to img shape
731
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
732
-
733
- # 2. apply model loop over last dim
734
- if isinstance(self.first_stage_model, VQModelInterface):
735
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
736
- force_not_quantize=predict_cids or force_not_quantize)
737
- for i in range(z.shape[-1])]
738
- else:
739
-
740
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
741
- for i in range(z.shape[-1])]
742
-
743
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
744
- o = o * weighting
745
- # Reverse 1. reshape to img shape
746
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
747
- # stitch crops together
748
- decoded = fold(o)
749
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
750
- return decoded
751
- else:
752
- if isinstance(self.first_stage_model, VQModelInterface):
753
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
754
- else:
755
- return self.first_stage_model.decode(z)
756
-
757
- else:
758
- if isinstance(self.first_stage_model, VQModelInterface):
759
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
760
- else:
761
- return self.first_stage_model.decode(z)
762
-
763
- # same as above but without decorator
764
- def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False):
765
- if predict_cids:
766
- if z.dim() == 4:
767
- z = torch.argmax(z.exp(), dim=1).long()
768
- z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None)
769
- z = rearrange(z, 'b h w c -> b c h w').contiguous()
770
-
771
- z = 1. / self.scale_factor * z
772
-
773
- if hasattr(self, "split_input_params"):
774
- if self.split_input_params["patch_distributed_vq"]:
775
- ks = self.split_input_params["ks"] # eg. (128, 128)
776
- stride = self.split_input_params["stride"] # eg. (64, 64)
777
- uf = self.split_input_params["vqf"]
778
- bs, nc, h, w = z.shape
779
- if ks[0] > h or ks[1] > w:
780
- ks = (min(ks[0], h), min(ks[1], w))
781
- print("reducing Kernel")
782
-
783
- if stride[0] > h or stride[1] > w:
784
- stride = (min(stride[0], h), min(stride[1], w))
785
- print("reducing stride")
786
-
787
- fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf)
788
-
789
- z = unfold(z) # (bn, nc * prod(**ks), L)
790
- # 1. Reshape to img shape
791
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
792
-
793
- # 2. apply model loop over last dim
794
- if isinstance(self.first_stage_model, VQModelInterface):
795
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i],
796
- force_not_quantize=predict_cids or force_not_quantize)
797
- for i in range(z.shape[-1])]
798
- else:
799
-
800
- output_list = [self.first_stage_model.decode(z[:, :, :, :, i])
801
- for i in range(z.shape[-1])]
802
-
803
- o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L)
804
- o = o * weighting
805
- # Reverse 1. reshape to img shape
806
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
807
- # stitch crops together
808
- decoded = fold(o)
809
- decoded = decoded / normalization # norm is shape (1, 1, h, w)
810
- return decoded
811
- else:
812
- if isinstance(self.first_stage_model, VQModelInterface):
813
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
814
- else:
815
- return self.first_stage_model.decode(z)
816
-
817
- else:
818
- if isinstance(self.first_stage_model, VQModelInterface):
819
- return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize)
820
- else:
821
- return self.first_stage_model.decode(z)
822
-
823
- @torch.no_grad()
824
- def encode_first_stage(self, x):
825
- if hasattr(self, "split_input_params"):
826
- if self.split_input_params["patch_distributed_vq"]:
827
- ks = self.split_input_params["ks"] # eg. (128, 128)
828
- stride = self.split_input_params["stride"] # eg. (64, 64)
829
- df = self.split_input_params["vqf"]
830
- self.split_input_params['original_image_size'] = x.shape[-2:]
831
- bs, nc, h, w = x.shape
832
- if ks[0] > h or ks[1] > w:
833
- ks = (min(ks[0], h), min(ks[1], w))
834
- print("reducing Kernel")
835
-
836
- if stride[0] > h or stride[1] > w:
837
- stride = (min(stride[0], h), min(stride[1], w))
838
- print("reducing stride")
839
-
840
- fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df)
841
- z = unfold(x) # (bn, nc * prod(**ks), L)
842
- # Reshape to img shape
843
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
844
-
845
- output_list = [self.first_stage_model.encode(z[:, :, :, :, i])
846
- for i in range(z.shape[-1])]
847
-
848
- o = torch.stack(output_list, axis=-1)
849
- o = o * weighting
850
-
851
- # Reverse reshape to img shape
852
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
853
- # stitch crops together
854
- decoded = fold(o)
855
- decoded = decoded / normalization
856
- return decoded
857
-
858
- else:
859
- return self.first_stage_model.encode(x)
860
- else:
861
- return self.first_stage_model.encode(x)
862
-
863
- def shared_step(self, batch, **kwargs):
864
- x, c = self.get_input(batch, self.first_stage_key)
865
- loss = self(x, c)
866
- return loss
867
-
868
- def forward(self, x, c, *args, **kwargs):
869
- t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
870
- if self.model.conditioning_key is not None:
871
- assert c is not None
872
- if self.cond_stage_trainable:# true when use text
873
- c = self.get_learned_conditioning(c) # c: string list -> [B, T, Context_dim]
874
- if self.shorten_cond_schedule: # TODO: drop this option
875
- tc = self.cond_ids[t].to(self.device)
876
- c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float()))
877
- return self.p_losses(x, c, t, *args, **kwargs)
878
-
879
- def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset
880
- def rescale_bbox(bbox):
881
- x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2])
882
- y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3])
883
- w = min(bbox[2] / crop_coordinates[2], 1 - x0)
884
- h = min(bbox[3] / crop_coordinates[3], 1 - y0)
885
- return x0, y0, w, h
886
-
887
- return [rescale_bbox(b) for b in bboxes]
888
-
889
- def apply_model(self, x_noisy, t, cond, return_ids=False):
890
-
891
- if isinstance(cond, dict):
892
- # hybrid case, cond is expected to be a dict
893
- pass
894
- else:
895
- if not isinstance(cond, list):
896
- cond = [cond]
897
- key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
898
- cond = {key: cond}
899
-
900
- if hasattr(self, "split_input_params"):
901
- assert len(cond) == 1 # todo can only deal with one conditioning atm
902
- assert not return_ids
903
- ks = self.split_input_params["ks"] # eg. (128, 128)
904
- stride = self.split_input_params["stride"] # eg. (64, 64)
905
-
906
- h, w = x_noisy.shape[-2:]
907
-
908
- fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride)
909
-
910
- z = unfold(x_noisy) # (bn, nc * prod(**ks), L)
911
- # Reshape to img shape
912
- z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L )
913
- z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])]
914
-
915
- if self.cond_stage_key in ["image", "LR_image", "segmentation",
916
- 'bbox_img'] and self.model.conditioning_key: # todo check for completeness
917
- c_key = next(iter(cond.keys())) # get key
918
- c = next(iter(cond.values())) # get value
919
- assert (len(c) == 1) # todo extend to list with more than one elem
920
- c = c[0] # get element
921
-
922
- c = unfold(c)
923
- c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L )
924
-
925
- cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
926
-
927
- elif self.cond_stage_key == 'coordinates_bbox':
928
- assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
929
-
930
- # assuming padding of unfold is always 0 and its dilation is always 1
931
- n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
932
- full_img_h, full_img_w = self.split_input_params['original_image_size']
933
- # as we are operating on latents, we need the factor from the original image size to the
934
- # spatial latent size to properly rescale the crops for regenerating the bbox annotations
935
- num_downs = self.first_stage_model.encoder.num_resolutions - 1
936
- rescale_latent = 2 ** (num_downs)
937
-
938
- # get top left positions of patches as expected by the bbox tokenizer, therefore we
939
- # need to rescale the tl patch coordinates to be in between (0,1)
940
- tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
941
- rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
942
- for patch_nr in range(z.shape[-1])]
943
-
944
- # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w)
945
- patch_limits = [(x_tl, y_tl,
946
- rescale_latent * ks[0] / full_img_w,
947
- rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates]
948
- # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates]
949
-
950
- # tokenize crop coordinates for the bounding boxes of the respective patches
951
- patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device)
952
- for bbox in patch_limits] # list of length l with tensors of shape (1, 2)
953
- print(patch_limits_tknzd[0].shape)
954
- # cut tknzd crop position from conditioning
955
- assert isinstance(cond, dict), 'cond must be dict to be fed into model'
956
- cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device)
957
- print(cut_cond.shape)
958
-
959
- adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd])
960
- adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n')
961
- print(adapted_cond.shape)
962
- adapted_cond = self.get_learned_conditioning(adapted_cond)
963
- print(adapted_cond.shape)
964
- adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1])
965
- print(adapted_cond.shape)
966
-
967
- cond_list = [{'c_crossattn': [e]} for e in adapted_cond]
968
-
969
- else:
970
- cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient
971
-
972
- # apply model by loop over crops
973
- output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
974
- assert not isinstance(output_list[0],
975
- tuple)  # TODO: can't deal with multiple model outputs; check this never happens
976
-
977
- o = torch.stack(output_list, axis=-1)
978
- o = o * weighting
979
- # Reverse reshape to img shape
980
- o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L)
981
- # stitch crops together
982
- x_recon = fold(o) / normalization
983
-
984
- else:
985
- x_recon = self.model(x_noisy, t, **cond)
986
-
987
- if isinstance(x_recon, tuple) and not return_ids:
988
- return x_recon[0]
989
- else:
990
- return x_recon
991
-
992
- def _predict_eps_from_xstart(self, x_t, t, pred_xstart):
993
- return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \
994
- extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape)
995
-
996
- def _prior_bpd(self, x_start):
997
- """
998
- Get the prior KL term for the variational lower-bound, measured in
999
- bits-per-dim.
1000
- This term can't be optimized, as it only depends on the encoder.
1001
- :param x_start: the [N x C x ...] tensor of inputs.
1002
- :return: a batch of [N] KL values (in bits), one per batch element.
1003
- """
1004
- batch_size = x_start.shape[0]
1005
- t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device)
1006
- qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t)
1007
- kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0)
1008
- return mean_flat(kl_prior) / np.log(2.0)
1009
-
1010
- def p_losses(self, x_start, cond, t, noise=None):
1011
- noise = default(noise, lambda: torch.randn_like(x_start))
1012
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
1013
- model_output = self.apply_model(x_noisy, t, cond)
1014
-
1015
- loss_dict = {}
1016
- prefix = 'train' if self.training else 'val'
1017
-
1018
- if self.parameterization == "x0":
1019
- target = x_start
1020
- elif self.parameterization == "eps":
1021
- target = noise
1022
- else:
1023
- raise NotImplementedError()
1024
-
1025
- loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3])
1026
- loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()})
1027
-
1028
- logvar_t = self.logvar[t].to(self.device)
1029
- loss = loss_simple / torch.exp(logvar_t) + logvar_t
1030
- # loss = loss_simple / torch.exp(self.logvar) + self.logvar
1031
- if self.learn_logvar:
1032
- loss_dict.update({f'{prefix}/loss_gamma': loss.mean()})
1033
- loss_dict.update({'logvar': self.logvar.data.mean()})
1034
-
1035
- loss = self.l_simple_weight * loss.mean()
1036
-
1037
- loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3))
1038
- loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean()
1039
- loss_dict.update({f'{prefix}/loss_vlb': loss_vlb})
1040
- loss += (self.original_elbo_weight * loss_vlb)
1041
- loss_dict.update({f'{prefix}/loss': loss})
1042
-
1043
- return loss, loss_dict
1044
-
1045
- def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False,
1046
- return_x0=False, score_corrector=None, corrector_kwargs=None):
1047
- t_in = t
1048
- model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids)
1049
-
1050
- if score_corrector is not None:
1051
- assert self.parameterization == "eps"
1052
- model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs)
1053
-
1054
- if return_codebook_ids:
1055
- model_out, logits = model_out
1056
-
1057
- if self.parameterization == "eps":
1058
- x_recon = self.predict_start_from_noise(x, t=t, noise=model_out)
1059
- elif self.parameterization == "x0":
1060
- x_recon = model_out
1061
- else:
1062
- raise NotImplementedError()
1063
-
1064
- if clip_denoised:
1065
- x_recon.clamp_(-1., 1.)
1066
- if quantize_denoised:
1067
- x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon)
1068
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
1069
- if return_codebook_ids:
1070
- return model_mean, posterior_variance, posterior_log_variance, logits
1071
- elif return_x0:
1072
- return model_mean, posterior_variance, posterior_log_variance, x_recon
1073
- else:
1074
- return model_mean, posterior_variance, posterior_log_variance
1075
-
1076
- @torch.no_grad()
1077
- def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False,
1078
- return_codebook_ids=False, quantize_denoised=False, return_x0=False,
1079
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None):
1080
- b, *_, device = *x.shape, x.device
1081
- outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised,
1082
- return_codebook_ids=return_codebook_ids,
1083
- quantize_denoised=quantize_denoised,
1084
- return_x0=return_x0,
1085
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
1086
- if return_codebook_ids:
1087
- raise DeprecationWarning("Support dropped.")
1088
- model_mean, _, model_log_variance, logits = outputs
1089
- elif return_x0:
1090
- model_mean, _, model_log_variance, x0 = outputs
1091
- else:
1092
- model_mean, _, model_log_variance = outputs
1093
-
1094
- noise = noise_like(x.shape, device, repeat_noise) * temperature
1095
- if noise_dropout > 0.:
1096
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
1097
- # no noise when t == 0
1098
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
1099
-
1100
- if return_codebook_ids:
1101
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1)
1102
- if return_x0:
1103
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0
1104
- else:
1105
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
1106
-
1107
- @torch.no_grad()
1108
- def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False,
1109
- img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0.,
1110
- score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None,
1111
- log_every_t=None):
1112
- if not log_every_t:
1113
- log_every_t = self.log_every_t
1114
- timesteps = self.num_timesteps
1115
- if batch_size is not None:
1116
- b = batch_size if batch_size is not None else shape[0]
1117
- shape = [batch_size] + list(shape)
1118
- else:
1119
- b = batch_size = shape[0]
1120
- if x_T is None:
1121
- img = torch.randn(shape, device=self.device)
1122
- else:
1123
- img = x_T
1124
- intermediates = []
1125
- if cond is not None:
1126
- if isinstance(cond, dict):
1127
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
1128
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
1129
- else:
1130
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
1131
-
1132
- if start_T is not None:
1133
- timesteps = min(timesteps, start_T)
1134
- iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation',
1135
- total=timesteps) if verbose else reversed(
1136
- range(0, timesteps))
1137
- if isinstance(temperature, float):
1138
- temperature = [temperature] * timesteps
1139
-
1140
- for i in iterator:
1141
- ts = torch.full((b,), i, device=self.device, dtype=torch.long)
1142
- if self.shorten_cond_schedule:
1143
- assert self.model.conditioning_key != 'hybrid'
1144
- tc = self.cond_ids[ts].to(cond.device)
1145
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
1146
-
1147
- img, x0_partial = self.p_sample(img, cond, ts,
1148
- clip_denoised=self.clip_denoised,
1149
- quantize_denoised=quantize_denoised, return_x0=True,
1150
- temperature=temperature[i], noise_dropout=noise_dropout,
1151
- score_corrector=score_corrector, corrector_kwargs=corrector_kwargs)
1152
- if mask is not None:
1153
- assert x0 is not None
1154
- img_orig = self.q_sample(x0, ts)
1155
- img = img_orig * mask + (1. - mask) * img
1156
-
1157
- if i % log_every_t == 0 or i == timesteps - 1:
1158
- intermediates.append(x0_partial)
1159
- if callback: callback(i)
1160
- if img_callback: img_callback(img, i)
1161
- return img, intermediates
1162
-
1163
- @torch.no_grad()
1164
- def p_sample_loop(self, cond, shape, return_intermediates=False,
1165
- x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False,
1166
- mask=None, x0=None, img_callback=None, start_T=None,
1167
- log_every_t=None):
1168
-
1169
- if not log_every_t:
1170
- log_every_t = self.log_every_t
1171
- device = self.betas.device
1172
- b = shape[0]
1173
- if x_T is None:
1174
- img = torch.randn(shape, device=device)
1175
- else:
1176
- img = x_T
1177
-
1178
- intermediates = [img]
1179
- if timesteps is None:
1180
- timesteps = self.num_timesteps
1181
-
1182
- if start_T is not None:
1183
- timesteps = min(timesteps, start_T)
1184
- iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed(
1185
- range(0, timesteps))
1186
-
1187
- if mask is not None:
1188
- assert x0 is not None
1189
- assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match
1190
-
1191
- for i in iterator:
1192
- ts = torch.full((b,), i, device=device, dtype=torch.long)
1193
- if self.shorten_cond_schedule:
1194
- assert self.model.conditioning_key != 'hybrid'
1195
- tc = self.cond_ids[ts].to(cond.device)
1196
- cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond))
1197
-
1198
- img = self.p_sample(img, cond, ts,
1199
- clip_denoised=self.clip_denoised,
1200
- quantize_denoised=quantize_denoised)
1201
- if mask is not None:
1202
- img_orig = self.q_sample(x0, ts)
1203
- img = img_orig * mask + (1. - mask) * img
1204
-
1205
- if i % log_every_t == 0 or i == timesteps - 1:
1206
- intermediates.append(img)
1207
- if callback: callback(i)
1208
- if img_callback: img_callback(img, i)
1209
-
1210
- if return_intermediates:
1211
- return img, intermediates
1212
- return img
1213
-
1214
- @torch.no_grad()
1215
- def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None,
1216
- verbose=True, timesteps=None, quantize_denoised=False,
1217
- mask=None, x0=None, shape=None,**kwargs):
1218
- if shape is None:
1219
- shape = (batch_size, self.channels, self.image_size, self.image_size)
1220
- if cond is not None:
1221
- if isinstance(cond, dict):
1222
- cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else
1223
- list(map(lambda x: x[:batch_size], cond[key])) for key in cond}
1224
- else:
1225
- cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size]
1226
- return self.p_sample_loop(cond,
1227
- shape,
1228
- return_intermediates=return_intermediates, x_T=x_T,
1229
- verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised,
1230
- mask=mask, x0=x0)
1231
-
1232
- @torch.no_grad()
1233
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
1234
-
1235
- if ddim:
1236
- ddim_sampler = DDIMSampler(self)
1237
- shape = (self.channels, self.image_size, self.image_size)
1238
- samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size,
1239
- shape,cond,verbose=False,**kwargs)
1240
-
1241
- else:
1242
- samples, intermediates = self.sample(cond=cond, batch_size=batch_size,
1243
- return_intermediates=True,**kwargs)
1244
-
1245
- return samples, intermediates
1246
-
1247
-
1248
- @torch.no_grad()
1249
- def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None,
1250
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
1251
- plot_diffusion_rows=True, **kwargs):
1252
-
1253
- use_ddim = ddim_steps is not None
1254
-
1255
- log = dict()
1256
- z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key,
1257
- return_first_stage_outputs=True,
1258
- force_c_encode=True,
1259
- return_original_cond=True,
1260
- bs=N)
1261
- N = min(x.shape[0], N)
1262
- n_row = min(x.shape[0], n_row)
1263
- log["inputs"] = x
1264
- log["reconstruction"] = xrec
1265
- if self.model.conditioning_key is not None:
1266
- if hasattr(self.cond_stage_model, "decode"):
1267
- xc = self.cond_stage_model.decode(c)
1268
- log["conditioning"] = xc
1269
- elif self.cond_stage_key in ["caption"]:
1270
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"])
1271
- log["conditioning"] = xc
1272
- elif self.cond_stage_key == 'class_label':
1273
- xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
1274
- log['conditioning'] = xc
1275
- elif isimage(xc):
1276
- log["conditioning"] = xc
1277
- if ismap(xc):
1278
- log["original_conditioning"] = self.to_rgb(xc)
1279
-
1280
- if plot_diffusion_rows:
1281
- # get diffusion row
1282
- diffusion_row = list()
1283
- z_start = z[:n_row]
1284
- for t in range(self.num_timesteps):
1285
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
1286
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
1287
- t = t.to(self.device).long()
1288
- noise = torch.randn_like(z_start)
1289
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
1290
- diffusion_row.append(self.decode_first_stage(z_noisy))
1291
-
1292
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
1293
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
1294
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
1295
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
1296
- log["diffusion_row"] = diffusion_grid
1297
-
1298
- if sample:
1299
- # get denoise row
1300
- with self.ema_scope("Plotting"):
1301
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
1302
- ddim_steps=ddim_steps,eta=ddim_eta)
1303
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True)
1304
- x_samples = self.decode_first_stage(samples)
1305
- log["samples"] = x_samples
1306
- if plot_denoise_rows:
1307
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
1308
- log["denoise_row"] = denoise_grid
1309
-
1310
- if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance(
1311
- self.first_stage_model, IdentityFirstStage):
1312
- # also display when quantizing x0 while sampling
1313
- with self.ema_scope("Plotting Quantized Denoised"):
1314
- samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim,
1315
- ddim_steps=ddim_steps,eta=ddim_eta,
1316
- quantize_denoised=True)
1317
- # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True,
1318
- # quantize_denoised=True)
1319
- x_samples = self.decode_first_stage(samples.to(self.device))
1320
- log["samples_x0_quantized"] = x_samples
1321
-
1322
- if inpaint:
1323
- # make a simple center square
1324
- b, h, w = z.shape[0], z.shape[2], z.shape[3]
1325
- mask = torch.ones(N, h, w).to(self.device)
1326
- # zeros will be filled in
1327
- mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0.
1328
- mask = mask[:, None, ...]
1329
- with self.ema_scope("Plotting Inpaint"):
1330
-
1331
- samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta,
1332
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
1333
- x_samples = self.decode_first_stage(samples.to(self.device))
1334
- log["samples_inpainting"] = x_samples
1335
- log["mask"] = mask
1336
-
1337
- # outpaint
1338
- with self.ema_scope("Plotting Outpaint"):
1339
- samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta,
1340
- ddim_steps=ddim_steps, x0=z[:N], mask=mask)
1341
- x_samples = self.decode_first_stage(samples.to(self.device))
1342
- log["samples_outpainting"] = x_samples
1343
-
1344
- if plot_progressive_rows:
1345
- with self.ema_scope("Plotting Progressives"):
1346
- img, progressives = self.progressive_denoising(c,
1347
- shape=(self.channels, self.image_size, self.image_size),
1348
- batch_size=N)
1349
- prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation")
1350
- log["progressive_row"] = prog_row
1351
-
1352
- if return_keys:
1353
- if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0:
1354
- return log
1355
- else:
1356
- return {key: log[key] for key in return_keys}
1357
- return log
1358
-
1359
- def configure_optimizers(self):
1360
- lr = self.learning_rate
1361
- params = list(self.model.parameters())
1362
- if self.cond_stage_trainable:
1363
- print(f"{self.__class__.__name__}: Also optimizing conditioner params!")
1364
- params = params + list(self.cond_stage_model.parameters())
1365
- if self.learn_logvar:
1366
- print('Diffusion model optimizing logvar')
1367
- params.append(self.logvar)
1368
- opt = torch.optim.AdamW(params, lr=lr)
1369
- if self.use_scheduler:
1370
- assert 'target' in self.scheduler_config
1371
- scheduler = instantiate_from_config(self.scheduler_config)
1372
-
1373
- print("Setting up LambdaLR scheduler...")
1374
- scheduler = [
1375
- {
1376
- 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule),
1377
- 'interval': 'step',
1378
- 'frequency': 1
1379
- }]
1380
- return [opt], scheduler
1381
- return opt
1382
-
1383
- @torch.no_grad()
1384
- def to_rgb(self, x):
1385
- x = x.float()
1386
- if not hasattr(self, "colorize"):
1387
- self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x)
1388
- x = nn.functional.conv2d(x, weight=self.colorize)
1389
- x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
1390
- return x
1391
-
1392
-
1393
- class DiffusionWrapper(pl.LightningModule):
1394
- def __init__(self, diff_model_config, conditioning_key):
1395
- super().__init__()
1396
- self.diffusion_model = instantiate_from_config(diff_model_config)
1397
- self.conditioning_key = conditioning_key # 'crossattn' for txt2image, concat for inpainting
1398
- assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm']
1399
-
1400
- def forward(self, x, t, c_concat: list = None, c_crossattn: list = None):
1401
- """param x: tensor with shape:[B,C,mel_len,T]"""
1402
- if self.conditioning_key is None:
1403
- out = self.diffusion_model(x, t)
1404
- elif self.conditioning_key == 'concat':
1405
- xc = torch.cat([x] + c_concat, dim=1)# channel dim,x shape (b,3,64,64) c_concat shape(b,4,64,64)
1406
- out = self.diffusion_model(xc, t)
1407
- elif self.conditioning_key == 'crossattn':
1408
- cc = torch.cat(c_crossattn, 1)# [b,seq_len,dim]
1409
- out = self.diffusion_model(x, t, context=cc)
1410
- elif self.conditioning_key == 'hybrid':# not implemented in the LatentDiffusion
1411
- xc = torch.cat([x] + c_concat, dim=1)
1412
- cc = torch.cat(c_crossattn, 1)
1413
- out = self.diffusion_model(xc, t, context=cc)
1414
- elif self.conditioning_key == 'adm':
1415
- cc = c_crossattn[0]
1416
- out = self.diffusion_model(x, t, y=cc)
1417
- else:
1418
- raise NotImplementedError()
1419
-
1420
- return out
1421
-
1422
-
1423
- class Layout2ImgDiffusion(LatentDiffusion):
1424
- # TODO: move all layout-specific hacks to this class
1425
- def __init__(self, cond_stage_key, *args, **kwargs):
1426
- assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"'
1427
- super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs)
1428
-
1429
- def log_images(self, batch, N=8, *args, **kwargs):
1430
- logs = super().log_images(batch=batch, N=N, *args, **kwargs)
1431
-
1432
- key = 'train' if self.training else 'validation'
1433
- dset = self.trainer.datamodule.datasets[key]
1434
- mapper = dset.conditional_builders[self.cond_stage_key]
1435
-
1436
- bbox_imgs = []
1437
- map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno))
1438
- for tknzd_bbox in batch[self.cond_stage_key][:N]:
1439
- bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256))
1440
- bbox_imgs.append(bboximg)
1441
-
1442
- cond_img = torch.stack(bbox_imgs, dim=0)
1443
- logs['bbox_image'] = cond_img
1444
- return logs
 
spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/audio.py DELETED
@@ -1,179 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
- from torchlibrosa.stft import Spectrogram, LogmelFilterBank
5
-
6
- def get_audio_encoder(name: str):
7
- if name == "Cnn14":
8
- return Cnn14
9
- else:
10
- raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))
11
-
12
-
13
- class ConvBlock(nn.Module):
14
- def __init__(self, in_channels, out_channels):
15
-
16
- super(ConvBlock, self).__init__()
17
-
18
- self.conv1 = nn.Conv2d(in_channels=in_channels,
19
- out_channels=out_channels,
20
- kernel_size=(3, 3), stride=(1, 1),
21
- padding=(1, 1), bias=False)
22
-
23
- self.conv2 = nn.Conv2d(in_channels=out_channels,
24
- out_channels=out_channels,
25
- kernel_size=(3, 3), stride=(1, 1),
26
- padding=(1, 1), bias=False)
27
-
28
- self.bn1 = nn.BatchNorm2d(out_channels)
29
- self.bn2 = nn.BatchNorm2d(out_channels)
30
-
31
-
32
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
33
-
34
- x = input
35
- x = F.relu_(self.bn1(self.conv1(x)))
36
- x = F.relu_(self.bn2(self.conv2(x)))
37
- if pool_type == 'max':
38
- x = F.max_pool2d(x, kernel_size=pool_size)
39
- elif pool_type == 'avg':
40
- x = F.avg_pool2d(x, kernel_size=pool_size)
41
- elif pool_type == 'avg+max':
42
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
43
- x2 = F.max_pool2d(x, kernel_size=pool_size)
44
- x = x1 + x2
45
- else:
46
- raise Exception('Incorrect argument!')
47
-
48
- return x
49
-
50
-
51
- class ConvBlock5x5(nn.Module):
52
- def __init__(self, in_channels, out_channels):
53
-
54
- super(ConvBlock5x5, self).__init__()
55
-
56
- self.conv1 = nn.Conv2d(in_channels=in_channels,
57
- out_channels=out_channels,
58
- kernel_size=(5, 5), stride=(1, 1),
59
- padding=(2, 2), bias=False)
60
-
61
- self.bn1 = nn.BatchNorm2d(out_channels)
62
-
63
-
64
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
65
-
66
- x = input
67
- x = F.relu_(self.bn1(self.conv1(x)))
68
- if pool_type == 'max':
69
- x = F.max_pool2d(x, kernel_size=pool_size)
70
- elif pool_type == 'avg':
71
- x = F.avg_pool2d(x, kernel_size=pool_size)
72
- elif pool_type == 'avg+max':
73
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
74
- x2 = F.max_pool2d(x, kernel_size=pool_size)
75
- x = x1 + x2
76
- else:
77
- raise Exception('Incorrect argument!')
78
-
79
- return x
80
-
81
-
82
- class AttBlock(nn.Module):
83
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
84
- super(AttBlock, self).__init__()
85
-
86
- self.activation = activation
87
- self.temperature = temperature
88
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
89
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
90
-
91
- self.bn_att = nn.BatchNorm1d(n_out)
92
-
93
- def forward(self, x):
94
- # x: (n_samples, n_in, n_time)
95
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
96
- cla = self.nonlinear_transform(self.cla(x))
97
- x = torch.sum(norm_att * cla, dim=2)
98
- return x, norm_att, cla
99
-
100
- def nonlinear_transform(self, x):
101
- if self.activation == 'linear':
102
- return x
103
- elif self.activation == 'sigmoid':
104
- return torch.sigmoid(x)
105
-
106
-
107
- class Cnn14(nn.Module):
108
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
109
- fmax, classes_num, out_emb):
110
-
111
- super(Cnn14, self).__init__()
112
-
113
- window = 'hann'
114
- center = True
115
- pad_mode = 'reflect'
116
- ref = 1.0
117
- amin = 1e-10
118
- top_db = None
119
-
120
- # Spectrogram extractor
121
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
122
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
123
- freeze_parameters=True)
124
-
125
- # Logmel feature extractor
126
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
127
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
128
- freeze_parameters=True)
129
-
130
- self.bn0 = nn.BatchNorm2d(64)
131
-
132
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
133
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
134
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
135
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
136
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
137
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
138
-
139
- # out_emb is 2048 for best Cnn14
140
- self.fc1 = nn.Linear(2048, out_emb, bias=True)
141
- self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)
142
-
143
- def forward(self, input, mixup_lambda=None):
144
- """
145
- Input: (batch_size, data_length)
146
- """
147
-
148
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
149
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
150
-
151
- x = x.transpose(1, 3)
152
- x = self.bn0(x)
153
- x = x.transpose(1, 3)
154
-
155
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
156
- x = F.dropout(x, p=0.2, training=self.training)
157
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
158
- x = F.dropout(x, p=0.2, training=self.training)
159
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
160
- x = F.dropout(x, p=0.2, training=self.training)
161
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
162
- x = F.dropout(x, p=0.2, training=self.training)
163
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
164
- x = F.dropout(x, p=0.2, training=self.training)
165
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
166
- x = F.dropout(x, p=0.2, training=self.training)
167
- x = torch.mean(x, dim=3)
168
-
169
- (x1, _) = torch.max(x, dim=2)
170
- x2 = torch.mean(x, dim=2)
171
- x = x1 + x2
172
- x = F.dropout(x, p=0.5, training=self.training)
173
- x = F.relu_(self.fc1(x))
174
- embedding = F.dropout(x, p=0.5, training=self.training)
175
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
176
-
177
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
178
-
179
- return output_dict
 
spaces/AILab-CVC/SEED-LLaMA/gradio_demo/conversation.py DELETED
@@ -1,190 +0,0 @@
1
- import dataclasses
2
- from enum import auto, Enum
3
- from typing import List, Tuple
4
-
5
- import io
6
- import base64
7
- import os
8
- from PIL import Image
9
- import copy
10
-
11
- IMG_FLAG = '<image>'
12
-
13
-
14
- class SeparatorStyle(Enum):
15
- """Different separator style."""
16
- SINGLE = auto()
17
- TWO = auto()
18
- MPT = auto()
19
- PLAIN = auto()
20
- LLAMA_2 = auto()
21
-
22
-
23
- def decode_image(encoded_image: str) -> Image:
24
- decoded_bytes = base64.b64decode(encoded_image.encode('utf-8'))
25
- buffer = io.BytesIO(decoded_bytes)
26
- image = Image.open(buffer)
27
- return image
28
-
29
-
30
- def encode_image(image: Image.Image, format: str = 'PNG') -> str:
31
- with io.BytesIO() as buffer:
32
- image.save(buffer, format=format)
33
- encoded_image = base64.b64encode(buffer.getvalue()).decode('utf-8')
34
- return encoded_image
35
-
36
-
37
- @dataclasses.dataclass
38
- class Conversation:
39
- """A class that keeps all conversation history."""
40
- system: str
41
- roles: List[str]
42
- messages: List[dict] # multi-turn -> user & assistant -> {'images': [PIL.Image,], 'text': str}
43
- offset: int
44
- sep_style: SeparatorStyle = SeparatorStyle.SINGLE
45
- sep: str = "###"
46
- sep2: str = None
47
- version: str = "Unknown"
48
-
49
- skip_next: bool = False
50
-
51
- def get_prompt(self):
52
- messages = copy.deepcopy(self.messages)
53
- if self.sep_style == SeparatorStyle.SINGLE:
54
- if self.system is None or self.system == '':
55
- text = ''
56
- else:
57
- text = self.system + self.sep
58
- images = []
59
- for message in messages:
60
- text += message['role'] + ": " + message['message']['text'] + self.sep
61
- for image_path, image_ids in zip(message['message']['images'], message['message']['images_ids']):
62
- if image_ids is not None:
63
- images.append(image_ids)
64
- else:
65
- image = Image.open(image_path).resize((256, 256))
66
- image_base64 = encode_image(image)
67
- images.append(image_base64)
68
-
69
- text += self.roles[1] + ":"
70
- elif self.sep_style == SeparatorStyle.LLAMA_2:
71
- b_token = "[INST] "
72
- e_token = " [/INST]"
73
- if self.system is None or self.system == '':
74
- text = ''
75
- else:
76
- text = f"<<SYS>>\n{self.system}\n<</SYS>>\n\n"
77
- images = []
78
- for idx, message in enumerate(messages):
79
- # text += message['role'] + ": " + message['message']['text'] + self.sep
80
- if idx % 2 == 0:
81
- text += b_token + message['message']['text'] + e_token + self.sep
82
- else:
83
- text += message['message']['text'] + self.sep
84
-
85
- for image_path, image_ids in zip(message['message']['images'], message['message']['images_ids']):
86
- if image_ids is not None:
87
- images.append(image_ids)
88
- else:
89
- image = Image.open(image_path).resize((256, 256))
90
- image_base64 = encode_image(image)
91
- images.append(image_base64)
92
- else:
93
- raise NotImplementedError
94
-
95
- return {'text': text, 'images': images}
96
-
97
- def update_image_ids(self, images_ids):
98
- image_count = 0
99
- for message in self.messages:
100
- for idx in range(len(message['message']['images_ids'])):
101
- if message['message']["images_ids"][idx] is None:
102
- message['message']["images_ids"][idx] = images_ids[image_count]
103
- image_count += 1
104
-
105
- assert len(images_ids) == image_count, print(len(images_ids), image_count)
106
-
107
- def append_message(self, role, message):
108
- self.messages.append([role, message])
109
-
110
- def to_gradio_chatbot(self):
111
- dialog = []
112
- for i, single_turn in enumerate(self.messages[self.offset:]):
113
- single_turn = single_turn['message']
114
- text_list = single_turn['text'].split(IMG_FLAG)
115
- assert len(text_list) == len(single_turn['images']) + 1, print(text_list, len(single_turn['images']))
116
- message = ''
117
- for image_idx in range(len(single_turn['images'])):
118
- # image = single_turn['images'][image_idx]
119
- # image_base64 = encode_image(image)
120
- # image_str = f'<img src="data:image/png;base64,{image_base64}" alt="user upload image" />'
121
- image_path = single_turn['images'][image_idx]
122
- if image_path == '':
123
- message += text_list[image_idx] + '<corrupt_image>'
124
- else:
125
- message += text_list[image_idx] + f'![](file={image_path})'
126
- message += text_list[-1]
127
-
128
- if i % 2 == 0:
129
- dialog.append([message, None])
130
- else:
131
- dialog[-1][-1] = message
132
-
133
- return dialog
134
-
135
- def copy(self):
136
- return Conversation(system=self.system,
137
- roles=self.roles,
138
- messages=copy.deepcopy(self.messages),
139
- offset=self.offset,
140
- sep_style=self.sep_style,
141
- sep=self.sep,
142
- sep2=self.sep2,
143
- version=self.version)
144
-
145
- def dict(self):
146
- messages = copy.deepcopy(self.messages)
147
- for message in messages:
148
- if 'images_ids' in message:
149
- message.pop('images_ids')
150
- for i in range(len(message['message']['images'])):
151
- message['message']['images'][i] = os.path.basename(message['message']['images'][i])
152
- return {
153
- "system": self.system,
154
- "roles": self.roles,
155
- "messages": messages,
156
- "offset": self.offset,
157
- "sep": self.sep,
158
- "sep2": self.sep2,
159
- }
160
-
161
-
162
- conv_seed_vicuna = Conversation(
163
- system="",
164
- roles=("USER", "ASSISTANT"),
165
- version="v2",
166
- messages=[],
167
- offset=0,
168
- sep_style=SeparatorStyle.SINGLE,
169
- sep='\n',
170
- )
171
-
172
- conv_seed_vicuna_system = Conversation(
173
- system="A chat between a curious user and an artificial intelligence assistant. ",
174
- roles=("USER", "ASSISTANT"),
175
- version="v2",
176
- messages=[],
177
- offset=0,
178
- sep_style=SeparatorStyle.SINGLE,
179
- sep='\n',
180
- )
181
-
182
- conv_seed_llama2 = Conversation(
183
- system="",
184
- roles=("[INST]", "[/INST]"),
185
- version="v2",
186
- messages=[],
187
- offset=0,
188
- sep_style=SeparatorStyle.LLAMA_2,
189
- sep='\n',
190
- )
 
spaces/Abhaykoul/HelpingAI-2.0/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: HelpingAI 2.0
3
- emoji: 👀
4
- colorFrom: blue
5
- colorTo: blue
6
- sdk: streamlit
7
- sdk_version: 1.28.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Aditya9790/yolo7-object-tracking/utils/torch_utils.py DELETED
@@ -1,374 +0,0 @@
1
- # YOLOR PyTorch utils
2
-
3
- import datetime
4
- import logging
5
- import math
6
- import os
7
- import platform
8
- import subprocess
9
- import time
10
- from contextlib import contextmanager
11
- from copy import deepcopy
12
- from pathlib import Path
13
-
14
- import torch
15
- import torch.backends.cudnn as cudnn
16
- import torch.nn as nn
17
- import torch.nn.functional as F
18
- import torchvision
19
-
20
- try:
21
- import thop # for FLOPS computation
22
- except ImportError:
23
- thop = None
24
- logger = logging.getLogger(__name__)
25
-
26
-
27
- @contextmanager
28
- def torch_distributed_zero_first(local_rank: int):
29
- """
30
- Decorator to make all processes in distributed training wait for each local_master to do something.
31
- """
32
- if local_rank not in [-1, 0]:
33
- torch.distributed.barrier()
34
- yield
35
- if local_rank == 0:
36
- torch.distributed.barrier()
37
-
38
-
39
- def init_torch_seeds(seed=0):
40
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
41
- torch.manual_seed(seed)
42
- if seed == 0: # slower, more reproducible
43
- cudnn.benchmark, cudnn.deterministic = False, True
44
- else: # faster, less reproducible
45
- cudnn.benchmark, cudnn.deterministic = True, False
46
-
47
-
48
- def date_modified(path=__file__):
49
- # return human-readable file modification date, i.e. '2021-3-26'
50
- t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
51
- return f'{t.year}-{t.month}-{t.day}'
52
-
53
-
54
- def git_describe(path=Path(__file__).parent): # path must be a directory
55
- # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
56
- s = f'git -C {path} describe --tags --long --always'
57
- try:
58
- return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
59
- except subprocess.CalledProcessError as e:
60
- return '' # not a git repository
61
-
62
-
63
- def select_device(device='', batch_size=None):
64
- # device = 'cpu' or '0' or '0,1,2,3'
65
- s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
66
- cpu = device.lower() == 'cpu'
67
- if cpu:
68
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
69
- elif device: # non-cpu device requested
70
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
71
- assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
72
-
73
- cuda = not cpu and torch.cuda.is_available()
74
- if cuda:
75
- n = torch.cuda.device_count()
76
- if n > 1 and batch_size: # check that batch_size is compatible with device_count
77
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
78
- space = ' ' * len(s)
79
- for i, d in enumerate(device.split(',') if device else range(n)):
80
- p = torch.cuda.get_device_properties(i)
81
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
82
- else:
83
- s += 'CPU\n'
84
-
85
- logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
86
- return torch.device('cuda:0' if cuda else 'cpu')
87
-
88
-
89
- def time_synchronized():
90
- # pytorch-accurate time
91
- if torch.cuda.is_available():
92
- torch.cuda.synchronize()
93
- return time.time()
94
-
95
-
96
- def profile(x, ops, n=100, device=None):
97
- # profile a pytorch module or list of modules. Example usage:
98
- # x = torch.randn(16, 3, 640, 640) # input
99
- # m1 = lambda x: x * torch.sigmoid(x)
100
- # m2 = nn.SiLU()
101
- # profile(x, [m1, m2], n=100) # profile speed over 100 iterations
102
-
103
- device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
104
- x = x.to(device)
105
- x.requires_grad = True
106
- print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
107
- print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}")
108
- for m in ops if isinstance(ops, list) else [ops]:
109
- m = m.to(device) if hasattr(m, 'to') else m # device
110
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type
111
- dtf, dtb, t = 0., 0., [0., 0., 0.] # dt forward, backward
112
- try:
113
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS
114
- except:
115
- flops = 0
116
-
117
- for _ in range(n):
118
- t[0] = time_synchronized()
119
- y = m(x)
120
- t[1] = time_synchronized()
121
- try:
122
- _ = y.sum().backward()
123
- t[2] = time_synchronized()
124
- except: # no backward method
125
- t[2] = float('nan')
126
- dtf += (t[1] - t[0]) * 1000 / n # ms per op forward
127
- dtb += (t[2] - t[1]) * 1000 / n # ms per op backward
128
-
129
- s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
130
- s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
131
- p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
132
- print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}')
133
-
134
-
135
- def is_parallel(model):
136
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
137
-
138
-
139
- def intersect_dicts(da, db, exclude=()):
140
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
141
- return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
142
-
143
-
144
- def initialize_weights(model):
145
- for m in model.modules():
146
- t = type(m)
147
- if t is nn.Conv2d:
148
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
149
- elif t is nn.BatchNorm2d:
150
- m.eps = 1e-3
151
- m.momentum = 0.03
152
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
153
- m.inplace = True
154
-
155
-
156
- def find_modules(model, mclass=nn.Conv2d):
157
- # Finds layer indices matching module class 'mclass'
158
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
159
-
160
-
161
- def sparsity(model):
162
- # Return global model sparsity
163
- a, b = 0., 0.
164
- for p in model.parameters():
165
- a += p.numel()
166
- b += (p == 0).sum()
167
- return b / a
168
-
169
-
170
- def prune(model, amount=0.3):
171
- # Prune model to requested global sparsity
172
- import torch.nn.utils.prune as prune
173
- print('Pruning model... ', end='')
174
- for name, m in model.named_modules():
175
- if isinstance(m, nn.Conv2d):
176
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
177
- prune.remove(m, 'weight') # make permanent
178
- print(' %.3g global sparsity' % sparsity(model))
179
-
180
-
181
- def fuse_conv_and_bn(conv, bn):
182
- # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
183
- fusedconv = nn.Conv2d(conv.in_channels,
184
- conv.out_channels,
185
- kernel_size=conv.kernel_size,
186
- stride=conv.stride,
187
- padding=conv.padding,
188
- groups=conv.groups,
189
- bias=True).requires_grad_(False).to(conv.weight.device)
190
-
191
- # prepare filters
192
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
193
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
194
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
195
-
196
- # prepare spatial bias
197
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
198
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
199
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
200
-
201
- return fusedconv
202
-
203
-
204
- def model_info(model, verbose=False, img_size=640):
205
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
206
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
207
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
208
- if verbose:
209
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
210
- for i, (name, p) in enumerate(model.named_parameters()):
211
- name = name.replace('module_list.', '')
212
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
213
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
214
-
215
- try: # FLOPS
216
- from thop import profile
217
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
218
- img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
219
- flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS
220
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
221
- fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS
222
- except (ImportError, Exception):
223
- fs = ''
224
-
225
- logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
226
-
227
-
228
- def load_classifier(name='resnet101', n=2):
229
- # Loads a pretrained model reshaped to n-class output
230
- model = torchvision.models.__dict__[name](pretrained=True)
231
-
232
- # ResNet model properties
233
- # input_size = [3, 224, 224]
234
- # input_space = 'RGB'
235
- # input_range = [0, 1]
236
- # mean = [0.485, 0.456, 0.406]
237
- # std = [0.229, 0.224, 0.225]
238
-
239
- # Reshape output to n classes
240
- filters = model.fc.weight.shape[1]
241
- model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
242
- model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
243
- model.fc.out_features = n
244
- return model
245
-
246
-
247
- def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
248
- # scales img(bs,3,y,x) by ratio constrained to gs-multiple
249
- if ratio == 1.0:
250
- return img
251
- else:
252
- h, w = img.shape[2:]
253
- s = (int(h * ratio), int(w * ratio)) # new size
254
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
255
- if not same_shape: # pad/crop img
256
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
257
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
258
-
259
-
260
- def copy_attr(a, b, include=(), exclude=()):
261
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
262
- for k, v in b.__dict__.items():
263
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
264
- continue
265
- else:
266
- setattr(a, k, v)
267
-
268
-
269
- class ModelEMA:
270
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
271
- Keep a moving average of everything in the model state_dict (parameters and buffers).
272
- This is intended to allow functionality like
273
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
274
- A smoothed version of the weights is necessary for some training schemes to perform well.
275
- This class is sensitive where it is initialized in the sequence of model init,
276
- GPU assignment and distributed training wrappers.
277
- """
278
-
279
- def __init__(self, model, decay=0.9999, updates=0):
280
- # Create EMA
281
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
282
- # if next(model.parameters()).device.type != 'cpu':
283
- # self.ema.half() # FP16 EMA
284
- self.updates = updates # number of EMA updates
285
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
286
- for p in self.ema.parameters():
287
- p.requires_grad_(False)
288
-
289
- def update(self, model):
290
- # Update EMA parameters
291
- with torch.no_grad():
292
- self.updates += 1
293
- d = self.decay(self.updates)
294
-
295
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
296
- for k, v in self.ema.state_dict().items():
297
- if v.dtype.is_floating_point:
298
- v *= d
299
- v += (1. - d) * msd[k].detach()
300
-
301
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
302
- # Update EMA attributes
303
- copy_attr(self.ema, model, include, exclude)
304
-
305
-
306
- class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
307
- def _check_input_dim(self, input):
308
- # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc
309
- # is this method that is overwritten by the sub-class
310
- # This original goal of this method was for tensor sanity checks
311
- # If you're ok bypassing those sanity checks (eg. if you trust your inference
312
- # to provide the right dimensional inputs), then you can just use this method
313
- # for easy conversion from SyncBatchNorm
314
- # (unfortunately, SyncBatchNorm does not store the original class - if it did
315
- # we could return the one that was originally created)
316
- return
317
-
318
- def revert_sync_batchnorm(module):
319
- # this is very similar to the function that it is trying to revert:
320
- # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679
321
- module_output = module
322
- if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm):
323
- new_cls = BatchNormXd
324
- module_output = BatchNormXd(module.num_features,
325
- module.eps, module.momentum,
326
- module.affine,
327
- module.track_running_stats)
328
- if module.affine:
329
- with torch.no_grad():
330
- module_output.weight = module.weight
331
- module_output.bias = module.bias
332
- module_output.running_mean = module.running_mean
333
- module_output.running_var = module.running_var
334
- module_output.num_batches_tracked = module.num_batches_tracked
335
- if hasattr(module, "qconfig"):
336
- module_output.qconfig = module.qconfig
337
- for name, child in module.named_children():
338
- module_output.add_module(name, revert_sync_batchnorm(child))
339
- del module
340
- return module_output
341
-
342
-
343
- class TracedModel(nn.Module):
344
-
345
- def __init__(self, model=None, device=None, img_size=(640,640)):
346
- super(TracedModel, self).__init__()
347
-
348
- print(" Convert model to Traced-model... ")
349
- self.stride = model.stride
350
- self.names = model.names
351
- self.model = model
352
-
353
- self.model = revert_sync_batchnorm(self.model)
354
- self.model.to('cpu')
355
- self.model.eval()
356
-
357
- self.detect_layer = self.model.model[-1]
358
- self.model.traced = True
359
-
360
- rand_example = torch.rand(1, 3, img_size, img_size)
361
-
362
- traced_script_module = torch.jit.trace(self.model, rand_example, strict=False)
363
- #traced_script_module = torch.jit.script(self.model)
364
- traced_script_module.save("traced_model.pt")
365
- print(" traced_script_module saved! ")
366
- self.model = traced_script_module
367
- self.model.to(device)
368
- self.detect_layer.to(device)
369
- print(" model is traced! \n")
370
-
371
- def forward(self, x, augment=False, profile=False):
372
- out = self.model(x)
373
- out = self.detect_layer(out)
374
- return out
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/effectlayer-plugin.js DELETED
@@ -1,23 +0,0 @@
1
- import Factory from './gameobjects/shader/effectlayer/effectlayer/Factory.js';
2
- import Creator from './gameobjects/shader/effectlayer/effectlayer/Creator.js';
3
- import EffectLayer from './gameobjects/shader/effectlayer/effectlayer/EffectLayer.js';
4
- import SetValue from './utils/object/SetValue.js';
5
-
6
- class EffectLayerPlugin extends Phaser.Plugins.BasePlugin {
7
-
8
- constructor(pluginManager) {
9
- super(pluginManager);
10
-
11
- // Register our new Game Object type
12
- pluginManager.registerGameObject('rexEffectLayer', Factory, Creator);
13
- }
14
-
15
- start() {
16
- var eventEmitter = this.game.events;
17
- eventEmitter.on('destroy', this.destroy, this);
18
- }
19
- }
20
-
21
- SetValue(window, 'RexPlugins.GameObjects.EffectLayer', EffectLayer);
22
-
23
- export default EffectLayerPlugin;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/AddChild.js DELETED
@@ -1,16 +0,0 @@
1
- import Container from '../../container/Container.js';
2
-
3
- const ContainerAdd = Container.prototype.add;
4
-
5
- var AddChild = function (gameObject) {
6
- ContainerAdd.call(this, gameObject);
7
-
8
- if (this.sizerEventsEnable) {
9
- gameObject.emit('sizer.add', gameObject, this);
10
- this.emit('add', gameObject, this);
11
- }
12
-
13
- return this;
14
- }
15
-
16
- export default AddChild;
 
spaces/Andy1621/uniformer_image_detection/configs/ssd/README.md DELETED
@@ -1,21 +0,0 @@
1
- # SSD: Single Shot MultiBox Detector
2
-
3
- ## Introduction
4
-
5
- [ALGORITHM]
6
-
7
- ```latex
8
- @article{Liu_2016,
9
- title={SSD: Single Shot MultiBox Detector},
10
- journal={ECCV},
11
- author={Liu, Wei and Anguelov, Dragomir and Erhan, Dumitru and Szegedy, Christian and Reed, Scott and Fu, Cheng-Yang and Berg, Alexander C.},
12
- year={2016},
13
- }
14
- ```
15
-
16
- ## Results and models
17
-
18
- | Backbone | Size | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
19
- | :------: | :---: | :---: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
20
- | VGG16 | 300 | caffe | 120e | 10.2 | 43.7 | 25.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd/ssd300_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd300_coco/ssd300_coco_20200307-a92d2092.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd300_coco/ssd300_coco_20200307_174216.log.json) |
21
- | VGG16 | 512 | caffe | 120e | 9.3 | 30.7 | 29.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd/ssd512_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd512_coco/ssd512_coco_20200308-038c5591.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd512_coco/ssd512_coco_20200308_134447.log.json) |
 
spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/res_layer.py DELETED
@@ -1,187 +0,0 @@
1
- from mmcv.cnn import build_conv_layer, build_norm_layer
2
- from torch import nn as nn
3
-
4
-
5
- class ResLayer(nn.Sequential):
6
- """ResLayer to build ResNet style backbone.
7
-
8
- Args:
9
- block (nn.Module): block used to build ResLayer.
10
- inplanes (int): inplanes of block.
11
- planes (int): planes of block.
12
- num_blocks (int): number of blocks.
13
- stride (int): stride of the first block. Default: 1
14
- avg_down (bool): Use AvgPool instead of stride conv when
15
- downsampling in the bottleneck. Default: False
16
- conv_cfg (dict): dictionary to construct and config conv layer.
17
- Default: None
18
- norm_cfg (dict): dictionary to construct and config norm layer.
19
- Default: dict(type='BN')
20
- downsample_first (bool): Downsample at the first block or last block.
21
- False for Hourglass, True for ResNet. Default: True
22
- """
23
-
24
- def __init__(self,
25
- block,
26
- inplanes,
27
- planes,
28
- num_blocks,
29
- stride=1,
30
- avg_down=False,
31
- conv_cfg=None,
32
- norm_cfg=dict(type='BN'),
33
- downsample_first=True,
34
- **kwargs):
35
- self.block = block
36
-
37
- downsample = None
38
- if stride != 1 or inplanes != planes * block.expansion:
39
- downsample = []
40
- conv_stride = stride
41
- if avg_down:
42
- conv_stride = 1
43
- downsample.append(
44
- nn.AvgPool2d(
45
- kernel_size=stride,
46
- stride=stride,
47
- ceil_mode=True,
48
- count_include_pad=False))
49
- downsample.extend([
50
- build_conv_layer(
51
- conv_cfg,
52
- inplanes,
53
- planes * block.expansion,
54
- kernel_size=1,
55
- stride=conv_stride,
56
- bias=False),
57
- build_norm_layer(norm_cfg, planes * block.expansion)[1]
58
- ])
59
- downsample = nn.Sequential(*downsample)
60
-
61
- layers = []
62
- if downsample_first:
63
- layers.append(
64
- block(
65
- inplanes=inplanes,
66
- planes=planes,
67
- stride=stride,
68
- downsample=downsample,
69
- conv_cfg=conv_cfg,
70
- norm_cfg=norm_cfg,
71
- **kwargs))
72
- inplanes = planes * block.expansion
73
- for _ in range(1, num_blocks):
74
- layers.append(
75
- block(
76
- inplanes=inplanes,
77
- planes=planes,
78
- stride=1,
79
- conv_cfg=conv_cfg,
80
- norm_cfg=norm_cfg,
81
- **kwargs))
82
-
83
- else: # downsample_first=False is for HourglassModule
84
- for _ in range(num_blocks - 1):
85
- layers.append(
86
- block(
87
- inplanes=inplanes,
88
- planes=inplanes,
89
- stride=1,
90
- conv_cfg=conv_cfg,
91
- norm_cfg=norm_cfg,
92
- **kwargs))
93
- layers.append(
94
- block(
95
- inplanes=inplanes,
96
- planes=planes,
97
- stride=stride,
98
- downsample=downsample,
99
- conv_cfg=conv_cfg,
100
- norm_cfg=norm_cfg,
101
- **kwargs))
102
- super(ResLayer, self).__init__(*layers)
103
-
104
-
105
- class SimplifiedBasicBlock(nn.Module):
106
- """Simplified version of original basic residual block. This is used in
107
- `SCNet <https://arxiv.org/abs/2012.10150>`_.
108
-
109
- - Norm layer is now optional
110
- - Last ReLU in forward function is removed
111
- """
112
- expansion = 1
113
-
114
- def __init__(self,
115
- inplanes,
116
- planes,
117
- stride=1,
118
- dilation=1,
119
- downsample=None,
120
- style='pytorch',
121
- with_cp=False,
122
- conv_cfg=None,
123
- norm_cfg=dict(type='BN'),
124
- dcn=None,
125
- plugins=None):
126
- super(SimplifiedBasicBlock, self).__init__()
127
- assert dcn is None, 'Not implemented yet.'
128
- assert plugins is None, 'Not implemented yet.'
129
- assert not with_cp, 'Not implemented yet.'
130
- self.with_norm = norm_cfg is not None
131
- with_bias = True if norm_cfg is None else False
132
- self.conv1 = build_conv_layer(
133
- conv_cfg,
134
- inplanes,
135
- planes,
136
- 3,
137
- stride=stride,
138
- padding=dilation,
139
- dilation=dilation,
140
- bias=with_bias)
141
- if self.with_norm:
142
- self.norm1_name, norm1 = build_norm_layer(
143
- norm_cfg, planes, postfix=1)
144
- self.add_module(self.norm1_name, norm1)
145
- self.conv2 = build_conv_layer(
146
- conv_cfg, planes, planes, 3, padding=1, bias=with_bias)
147
- if self.with_norm:
148
- self.norm2_name, norm2 = build_norm_layer(
149
- norm_cfg, planes, postfix=2)
150
- self.add_module(self.norm2_name, norm2)
151
-
152
- self.relu = nn.ReLU(inplace=True)
153
- self.downsample = downsample
154
- self.stride = stride
155
- self.dilation = dilation
156
- self.with_cp = with_cp
157
-
158
- @property
159
- def norm1(self):
160
- """nn.Module: normalization layer after the first convolution layer"""
161
- return getattr(self, self.norm1_name) if self.with_norm else None
162
-
163
- @property
164
- def norm2(self):
165
- """nn.Module: normalization layer after the second convolution layer"""
166
- return getattr(self, self.norm2_name) if self.with_norm else None
167
-
168
- def forward(self, x):
169
- """Forward function."""
170
-
171
- identity = x
172
-
173
- out = self.conv1(x)
174
- if self.with_norm:
175
- out = self.norm1(out)
176
- out = self.relu(out)
177
-
178
- out = self.conv2(out)
179
- if self.with_norm:
180
- out = self.norm2(out)
181
-
182
- if self.downsample is not None:
183
- identity = self.downsample(x)
184
-
185
- out += identity
186
-
187
- return out
 
spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: TehVenom MPT 7b Chat Instruct LongCTX Merge
3
- emoji: 📉
4
- colorFrom: gray
5
- colorTo: blue
6
- sdk: gradio
7
- sdk_version: 3.29.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Anthony7906/MengHuiMXD_GPT/Dockerfile DELETED
@@ -1,15 +0,0 @@
1
- FROM python:3.9 as builder
2
- RUN apt-get update && apt-get install -y build-essential
3
- COPY requirements.txt .
4
- COPY requirements_advanced.txt .
5
- RUN pip install --user -r requirements.txt
6
- # RUN pip install --user -r requirements_advanced.txt
7
-
8
- FROM python:3.9
9
- MAINTAINER iskoldt
10
- COPY --from=builder /root/.local /root/.local
11
- ENV PATH=/root/.local/bin:$PATH
12
- COPY . /app
13
- WORKDIR /app
14
- ENV dockerrun yes
15
- CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"]
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/core.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py DELETED
File without changes
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp DELETED
@@ -1,117 +0,0 @@
1
- // Copyright (c) Facebook, Inc. and its affiliates.
2
-
3
- #include <torch/extension.h>
4
- #include "ROIAlignRotated/ROIAlignRotated.h"
5
- #include "box_iou_rotated/box_iou_rotated.h"
6
- #include "cocoeval/cocoeval.h"
7
- #include "deformable/deform_conv.h"
8
- #include "nms_rotated/nms_rotated.h"
9
-
10
- namespace detectron2 {
11
-
12
- #if defined(WITH_CUDA) || defined(WITH_HIP)
13
- extern int get_cudart_version();
14
- #endif
15
-
16
- std::string get_cuda_version() {
17
- #if defined(WITH_CUDA) || defined(WITH_HIP)
18
- std::ostringstream oss;
19
-
20
- #if defined(WITH_CUDA)
21
- oss << "CUDA ";
22
- #else
23
- oss << "HIP ";
24
- #endif
25
-
26
- // copied from
27
- // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231
28
- auto printCudaStyleVersion = [&](int v) {
29
- oss << (v / 1000) << "." << (v / 10 % 100);
30
- if (v % 10 != 0) {
31
- oss << "." << (v % 10);
32
- }
33
- };
34
- printCudaStyleVersion(get_cudart_version());
35
- return oss.str();
36
- #else // neither CUDA nor HIP
37
- return std::string("not available");
38
- #endif
39
- }
40
-
41
- bool has_cuda() {
42
- #if defined(WITH_CUDA)
43
- return true;
44
- #else
45
- return false;
46
- #endif
47
- }
48
-
49
- // similar to
50
- // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp
51
- std::string get_compiler_version() {
52
- std::ostringstream ss;
53
- #if defined(__GNUC__)
54
- #ifndef __clang__
55
-
56
- #if ((__GNUC__ <= 4) && (__GNUC_MINOR__ <= 8))
57
- #error "GCC >= 4.9 is required!"
58
- #endif
59
-
60
- { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; }
61
- #endif
62
- #endif
63
-
64
- #if defined(__clang_major__)
65
- {
66
- ss << "clang " << __clang_major__ << "." << __clang_minor__ << "."
67
- << __clang_patchlevel__;
68
- }
69
- #endif
70
-
71
- #if defined(_MSC_VER)
72
- { ss << "MSVC " << _MSC_FULL_VER; }
73
- #endif
74
- return ss.str();
75
- }
76
-
77
- PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
78
- m.def("get_compiler_version", &get_compiler_version, "get_compiler_version");
79
- m.def("get_cuda_version", &get_cuda_version, "get_cuda_version");
80
- m.def("has_cuda", &has_cuda, "has_cuda");
81
-
82
- m.def("deform_conv_forward", &deform_conv_forward, "deform_conv_forward");
83
- m.def(
84
- "deform_conv_backward_input",
85
- &deform_conv_backward_input,
86
- "deform_conv_backward_input");
87
- m.def(
88
- "deform_conv_backward_filter",
89
- &deform_conv_backward_filter,
90
- "deform_conv_backward_filter");
91
- m.def(
92
- "modulated_deform_conv_forward",
93
- &modulated_deform_conv_forward,
94
- "modulated_deform_conv_forward");
95
- m.def(
96
- "modulated_deform_conv_backward",
97
- &modulated_deform_conv_backward,
98
- "modulated_deform_conv_backward");
99
-
100
- m.def("COCOevalAccumulate", &COCOeval::Accumulate, "COCOeval::Accumulate");
101
- m.def(
102
- "COCOevalEvaluateImages",
103
- &COCOeval::EvaluateImages,
104
- "COCOeval::EvaluateImages");
105
- pybind11::class_<COCOeval::InstanceAnnotation>(m, "InstanceAnnotation")
106
- .def(pybind11::init<uint64_t, double, double, bool, bool>());
107
- pybind11::class_<COCOeval::ImageEvaluation>(m, "ImageEvaluation")
108
- .def(pybind11::init<>());
109
- }
110
-
111
- TORCH_LIBRARY(detectron2, m) {
112
- m.def("nms_rotated", &nms_rotated);
113
- m.def("box_iou_rotated", &box_iou_rotated);
114
- m.def("roi_align_rotated_forward", &ROIAlignRotated_forward);
115
- m.def("roi_align_rotated_backward", &ROIAlignRotated_backward);
116
- }
117
- } // namespace detectron2
 
spaces/BAAI/vid2vid-zero/gradio_demo/app_running.py DELETED
@@ -1,169 +0,0 @@
1
- #!/usr/bin/env python
2
-
3
- from __future__ import annotations
4
-
5
- import os
6
-
7
- import gradio as gr
8
-
9
- from gradio_demo.runner import Runner
10
-
11
-
12
- def create_demo(runner: Runner,
13
- pipe: InferencePipeline | None = None) -> gr.Blocks:
14
- hf_token = os.getenv('HF_TOKEN')
15
- with gr.Blocks() as demo:
16
- with gr.Row():
17
- with gr.Column():
18
- with gr.Box():
19
- gr.Markdown('Input Data')
20
- input_video = gr.File(label='Input video')
21
- input_prompt = gr.Textbox(
22
- label='Input prompt',
23
- max_lines=1,
24
- placeholder='A car is moving on the road.')
25
- gr.Markdown('''
26
- - Upload a video and write a `Input Prompt` that describes the video.
27
- ''')
28
-
29
- with gr.Column():
30
- with gr.Box():
31
- gr.Markdown('Input Parameters')
32
- with gr.Row():
33
- model_path = gr.Text(
34
- label='Path to off-the-shelf model',
35
- value='CompVis/stable-diffusion-v1-4',
36
- max_lines=1)
37
- resolution = gr.Dropdown(choices=['512', '768'],
38
- value='512',
39
- label='Resolution',
40
- visible=False)
41
-
42
- with gr.Accordion('Advanced settings', open=False):
43
- sample_start_idx = gr.Number(
44
- label='Start Frame Index',value=0)
45
- sample_frame_rate = gr.Number(
46
- label='Frame Rate',value=1)
47
- n_sample_frames = gr.Number(
48
- label='Number of Frames',value=8)
49
- guidance_scale = gr.Number(
50
- label='Guidance Scale', value=7.5)
51
- seed = gr.Slider(label='Seed',
52
- minimum=0,
53
- maximum=100000,
54
- step=1,
55
- randomize=True,
56
- value=33)
57
- input_token = gr.Text(label='Hugging Face Write Token',
58
- placeholder='',
59
- visible=False if hf_token else True)
60
- gr.Markdown('''
61
- - Upload input video or choose an exmple blow
62
- - Set hyperparameters & click start
63
- - It takes a few minutes to download model first
64
- ''')
65
-
66
- with gr.Row():
67
- with gr.Column():
68
- validation_prompt = gr.Text(
69
- label='Validation Prompt',
70
- placeholder=
71
- 'prompt to test the model, e.g: a Lego man is surfing')
72
-
73
- remove_gpu_after_running = gr.Checkbox(
74
- label='Remove GPU after running',
75
- value=False,
76
- interactive=bool(os.getenv('SPACE_ID')),
77
- visible=False)
78
-
79
- with gr.Row():
80
- result = gr.Video(label='Result')
81
-
82
- # examples
83
- with gr.Row():
84
- examples = [
85
- [
86
- 'CompVis/stable-diffusion-v1-4',
87
- "data/car-moving.mp4",
88
- 'A car is moving on the road.',
89
- 8, 0, 1,
90
- 'A jeep car is moving on the desert.',
91
- 7.5, 512, 33,
92
- False, None,
93
- ],
94
-
95
- [
96
- 'CompVis/stable-diffusion-v1-4',
97
- "data/black-swan.mp4",
98
- 'A blackswan is swimming on the water.',
99
- 8, 0, 4,
100
- 'A white swan is swimming on the water.',
101
- 7.5, 512, 33,
102
- False, None,
103
- ],
104
-
105
- [
106
- 'CompVis/stable-diffusion-v1-4',
107
- "data/child-riding.mp4",
108
- 'A child is riding a bike on the road.',
109
- 8, 0, 1,
110
- 'A lego child is riding a bike on the road.',
111
- 7.5, 512, 33,
112
- False, None,
113
- ],
114
-
115
- [
116
- 'CompVis/stable-diffusion-v1-4',
117
- "data/car-turn.mp4",
118
- 'A jeep car is moving on the road.',
119
- 8, 0, 6,
120
- 'A jeep car is moving on the snow.',
121
- 7.5, 512, 33,
122
- False, None,
123
- ],
124
-
125
- [
126
- 'CompVis/stable-diffusion-v1-4',
127
- "data/rabbit-watermelon.mp4",
128
- 'A rabbit is eating a watermelon.',
129
- 8, 0, 6,
130
- 'A puppy is eating an orange.',
131
- 7.5, 512, 33,
132
- False, None,
133
- ],
134
-
135
- ]
136
- gr.Examples(examples=examples,
137
- fn=runner.run_vid2vid_zero,
138
- inputs=[
139
- model_path, input_video, input_prompt,
140
- n_sample_frames, sample_start_idx, sample_frame_rate,
141
- validation_prompt, guidance_scale, resolution, seed,
142
- remove_gpu_after_running,
143
- input_token,
144
- ],
145
- outputs=result,
146
- cache_examples=os.getenv('SYSTEM') == 'spaces'
147
- )
148
-
149
- # run
150
- run_button_vid2vid_zero = gr.Button('Start vid2vid-zero')
151
- run_button_vid2vid_zero.click(
152
- fn=runner.run_vid2vid_zero,
153
- inputs=[
154
- model_path, input_video, input_prompt,
155
- n_sample_frames, sample_start_idx, sample_frame_rate,
156
- validation_prompt, guidance_scale, resolution, seed,
157
- remove_gpu_after_running,
158
- input_token,
159
- ],
160
- outputs=result)
161
-
162
- return demo
163
-
164
-
165
- if __name__ == '__main__':
166
- hf_token = os.getenv('HF_TOKEN')
167
- runner = Runner(hf_token)
168
- demo = create_demo(runner)
169
- demo.queue(max_size=1).launch(share=False)
 
spaces/BFH/BKMotionsAI/app.py DELETED
@@ -1,86 +0,0 @@
1
- #!/usr/bin/env python
2
- # coding: utf-8
3
-
4
- import gradio as gr
5
- import numpy as np
6
- import requests
7
- from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline, pipeline
8
- from langdetect import detect
9
- from matplotlib import pyplot as plt
10
- import imageio
11
-
12
- # Load the model
13
- model = AutoModelForSequenceClassification.from_pretrained("saved_model")
14
- tokenizer = AutoTokenizer.from_pretrained("saved_model")
15
- pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
16
-
17
- # Function called by the UI
18
- def attribution(text):
19
-
20
- # Clean the plot
21
- plt.clf()
22
-
23
- # Detect the language
24
- language = detect(text)
25
-
26
- # Translate the input in german if necessary
27
- if language == 'fr':
28
- translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-de")
29
- translatedText = translator(text[0:1000])
30
- text = translatedText[0]["translation_text"]
31
- elif language != 'de':
32
- return "The language is not recognized, it must be either in German or in French.", None
33
-
34
- # Set the bars of the bar chart
35
- bars = ""
36
- if language == 'fr':
37
- bars = ("DDPS", "DFI", "AS-MPC", "DFJP", "DEFR", "DETEC", "DFAE", "Parl", "ChF", "DFF", "AF", "TF")
38
- else:
39
- bars = ("VBS", "EDI", "AB-BA", "EJPD", "WBF", "UVEK", "EDA", "Parl", "BK", "EFD", "BV", "BGer")
40
-
41
- # Make the prediction with the 1000 first characters
42
- results = pipe(text[0:1000], return_all_scores=True)
43
- rates = [row["score"] for row in results[0]]
44
-
45
- # Bar chart
46
- y_pos = np.arange(len(bars))
47
- plt.barh(y_pos, rates)
48
- plt.yticks(y_pos, bars)
49
-
50
- # Set the output text
51
- name = ""
52
- maxRate = np.max(rates)
53
- maxIndex = np.argmax(rates)
54
-
55
- # ML model not sure if highest probability < 60%
56
- if maxRate < 0.6:
57
- # de / fr
58
- if language == 'de':
59
- name = "Das ML-Modell ist nicht sicher. Das Departement könnte sein : \n\n"
60
- else:
61
- name = "Le modèle ML n'est pas sûr. Le département pourrait être : \n\n"
62
- i = 0
63
- # Show each department that has a probability > 10%
64
- while i == 0:
65
- if rates[maxIndex] >= 0.1:
66
- name = name + "\t" + str(rates[maxIndex])[2:4] + "%" + "\t\t\t\t\t" + bars[maxIndex] + "\n"
67
- rates[maxIndex] = 0
68
- maxIndex = np.argmax(rates)
69
- else:
70
- i = 1
71
- # ML model pretty sure, show only one department
72
- else:
73
- name = str(maxRate)[2:4] + "%" + "\t\t\t\t\t\t" + bars[maxIndex]
74
-
75
- # Save the bar chart as png and load it (enables better display)
76
- plt.savefig('rates.png')
77
- im = imageio.imread('rates.png')
78
-
79
- return name, im
80
-
81
-
82
- # display the UI
83
- interface = gr.Interface(fn=attribution,
84
- inputs=[gr.inputs.Textbox(lines=20, placeholder="Geben Sie bitte den Titel und den Submitted Text des Vorstoss ein.\nVeuillez entrer le titre et le Submitted Text de la requête.")],
85
- outputs=['text', 'image'])
86
- interface.launch()
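The deleted app loads a fine-tuned checkpoint from a local `saved_model` directory that is not part of this diff. A rough sketch of the same classify-and-rank logic against a public text-classification checkpoint (the model name below is only an example) could look like this:

```python
# Sketch of the score-ranking logic above, using a public checkpoint as a
# stand-in for the app's local "saved_model" directory (assumption).
import numpy as np
from transformers import pipeline

pipe = pipeline("text-classification",
                model="distilbert-base-uncased-finetuned-sst-2-english")

text = "Der Bundesrat wird beauftragt, die Strassenfinanzierung zu prüfen."
results = pipe(text[:1000], return_all_scores=True)      # scores for every label
rates = np.array([row["score"] for row in results[0]])
labels = [row["label"] for row in results[0]]

best = int(np.argmax(rates))
if rates[best] < 0.6:
    # Model is unsure: list every label above 10 %, like the original while-loop.
    for label, rate in sorted(zip(labels, rates), key=lambda pair: -pair[1]):
        if rate >= 0.1:
            print(f"{rate:.0%}\t{label}")
else:
    print(f"{rates[best]:.0%}\t{labels[best]}")
```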
spaces/Banbri/zcvzcv/src/app/interface/zoom/index.tsx DELETED
@@ -1,35 +0,0 @@
1
- import { useStore } from "@/app/store"
2
- import { VerticalSlider } from "@/components/ui/vertical-slider"
3
- import { cn } from "@/lib/utils"
4
-
5
- export function Zoom() {
6
- const zoomLevel = useStore((state) => state.zoomLevel)
7
- const setZoomLevel = useStore((state) => state.setZoomLevel)
8
- const isGeneratingStory = useStore((state) => state.isGeneratingStory)
9
-
10
- return (
11
- <div className={cn(
12
- `print:hidden`,
13
- // `fixed flex items-center justify-center bottom-8 top-32 right-8 z-10 h-screen`,
14
- `fixed flex flex-col items-center bottom-8 top-28 right-2 md:top-20 md:right-6 z-10`,
15
- `animation-all duration-300 ease-in-out`,
16
- isGeneratingStory ? `scale-0 opacity-0` : ``,
17
- )}>
18
- <div className="font-mono font-bold text-xs pb-2 text-stone-600 bg-stone-50 p-1 rounded-sm">
19
- Zoom
20
- </div>
21
- <div className="w-2">
22
- <VerticalSlider
23
- defaultValue={[zoomLevel]}
24
- min={30}
25
- max={250}
26
- step={1}
27
- onValueChange={value => setZoomLevel(value[0] || 10)}
28
- value={[zoomLevel]}
29
- className="h-64 md:h-80"
30
- orientation="vertical"
31
- />
32
- </div>
33
- </div>
34
- )
35
- }
spaces/Bart92/RVC_HF/colab_for_mdx.py DELETED
@@ -1,71 +0,0 @@
1
- import json
2
- import os
3
- import gc
4
- import psutil
5
- import requests
6
- import subprocess
7
- import time
8
- import logging
9
- import sys
10
- import shutil
11
- now_dir = os.getcwd()
12
- sys.path.append(now_dir)
13
- first_cell_executed = False
14
- file_folder = "Colab-for-MDX_B"
15
- def first_cell_ran():
16
- global first_cell_executed
17
- if first_cell_executed:
18
- #print("The 'first_cell_ran' function has already been executed.")
19
- return
20
-
21
-
22
-
23
- first_cell_executed = True
24
- os.makedirs("tmp_models", exist_ok=True)
25
-
26
-
27
-
28
- class hide_opt: # hide outputs
29
- def __enter__(self):
30
- self._original_stdout = sys.stdout
31
- sys.stdout = open(os.devnull, "w")
32
-
33
- def __exit__(self, exc_type, exc_val, exc_tb):
34
- sys.stdout.close()
35
- sys.stdout = self._original_stdout
36
-
37
- def get_size(bytes, suffix="B"): # read ram
38
- global svmem
39
- factor = 1024
40
- for unit in ["", "K", "M", "G", "T", "P"]:
41
- if bytes < factor:
42
- return f"{bytes:.2f}{unit}{suffix}"
43
- bytes /= factor
44
- svmem = psutil.virtual_memory()
45
-
46
-
47
- def use_uvr_without_saving():
48
- print("Notice: files won't be saved to personal drive.")
49
- print(f"Downloading {file_folder}...", end=" ")
50
- with hide_opt():
51
- #os.chdir(mounting_path)
52
- items_to_move = ["demucs", "diffq","julius","model","separated","tracks","mdx.py","MDX-Net_Colab.ipynb"]
53
- subprocess.run(["git", "clone", "https://github.com/NaJeongMo/Colab-for-MDX_B.git"])
54
- for item_name in items_to_move:
55
- item_path = os.path.join(file_folder, item_name)
56
- if os.path.exists(item_path):
57
- if os.path.isfile(item_path):
58
- shutil.move(item_path, now_dir)
59
- elif os.path.isdir(item_path):
60
- shutil.move(item_path, now_dir)
61
- try:
62
- shutil.rmtree(file_folder)
63
- except PermissionError:
64
- print(f"Could not delete the {file_folder} folder. It may be related to Git.")
65
-
66
-
67
- use_uvr_without_saving()
68
- print("done!")
69
- if not os.path.exists("tracks"):
70
- os.mkdir("tracks")
71
- first_cell_ran()
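`first_cell_ran()` clones the Colab-for-MDX_B repository at import time, so the script is hard to run outside Colab. Its two small utilities are reusable on their own, though; the sketch below restates them at module level (the original nests them inside `first_cell_ran`) and adds the fall-through return that `get_size` is missing:

```python
# Standalone sketch of the two helpers defined inside first_cell_ran() above:
# a context manager that silences stdout and a human-readable byte formatter.
import os
import sys
import psutil

class hide_opt:
    """Temporarily redirect stdout to os.devnull."""
    def __enter__(self):
        self._original_stdout = sys.stdout
        sys.stdout = open(os.devnull, "w")

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout.close()
        sys.stdout = self._original_stdout

def get_size(num_bytes: float, suffix: str = "B") -> str:
    factor = 1024
    for unit in ["", "K", "M", "G", "T", "P"]:
        if num_bytes < factor:
            return f"{num_bytes:.2f}{unit}{suffix}"
        num_bytes /= factor
    return f"{num_bytes:.2f}E{suffix}"    # fall-through missing in the original

with hide_opt():
    print("this line is swallowed")       # suppressed by the context manager

print("total RAM:", get_size(psutil.virtual_memory().total))
```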
spaces/BartPoint/VoiceChange_Beta/infer_pack/transforms.py DELETED
@@ -1,209 +0,0 @@
1
- import torch
2
- from torch.nn import functional as F
3
-
4
- import numpy as np
5
-
6
-
7
- DEFAULT_MIN_BIN_WIDTH = 1e-3
8
- DEFAULT_MIN_BIN_HEIGHT = 1e-3
9
- DEFAULT_MIN_DERIVATIVE = 1e-3
10
-
11
-
12
- def piecewise_rational_quadratic_transform(
13
- inputs,
14
- unnormalized_widths,
15
- unnormalized_heights,
16
- unnormalized_derivatives,
17
- inverse=False,
18
- tails=None,
19
- tail_bound=1.0,
20
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
21
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
22
- min_derivative=DEFAULT_MIN_DERIVATIVE,
23
- ):
24
- if tails is None:
25
- spline_fn = rational_quadratic_spline
26
- spline_kwargs = {}
27
- else:
28
- spline_fn = unconstrained_rational_quadratic_spline
29
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
30
-
31
- outputs, logabsdet = spline_fn(
32
- inputs=inputs,
33
- unnormalized_widths=unnormalized_widths,
34
- unnormalized_heights=unnormalized_heights,
35
- unnormalized_derivatives=unnormalized_derivatives,
36
- inverse=inverse,
37
- min_bin_width=min_bin_width,
38
- min_bin_height=min_bin_height,
39
- min_derivative=min_derivative,
40
- **spline_kwargs
41
- )
42
- return outputs, logabsdet
43
-
44
-
45
- def searchsorted(bin_locations, inputs, eps=1e-6):
46
- bin_locations[..., -1] += eps
47
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
48
-
49
-
50
- def unconstrained_rational_quadratic_spline(
51
- inputs,
52
- unnormalized_widths,
53
- unnormalized_heights,
54
- unnormalized_derivatives,
55
- inverse=False,
56
- tails="linear",
57
- tail_bound=1.0,
58
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
59
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
60
- min_derivative=DEFAULT_MIN_DERIVATIVE,
61
- ):
62
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
63
- outside_interval_mask = ~inside_interval_mask
64
-
65
- outputs = torch.zeros_like(inputs)
66
- logabsdet = torch.zeros_like(inputs)
67
-
68
- if tails == "linear":
69
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
70
- constant = np.log(np.exp(1 - min_derivative) - 1)
71
- unnormalized_derivatives[..., 0] = constant
72
- unnormalized_derivatives[..., -1] = constant
73
-
74
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
75
- logabsdet[outside_interval_mask] = 0
76
- else:
77
- raise RuntimeError("{} tails are not implemented.".format(tails))
78
-
79
- (
80
- outputs[inside_interval_mask],
81
- logabsdet[inside_interval_mask],
82
- ) = rational_quadratic_spline(
83
- inputs=inputs[inside_interval_mask],
84
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
85
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
86
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
87
- inverse=inverse,
88
- left=-tail_bound,
89
- right=tail_bound,
90
- bottom=-tail_bound,
91
- top=tail_bound,
92
- min_bin_width=min_bin_width,
93
- min_bin_height=min_bin_height,
94
- min_derivative=min_derivative,
95
- )
96
-
97
- return outputs, logabsdet
98
-
99
-
100
- def rational_quadratic_spline(
101
- inputs,
102
- unnormalized_widths,
103
- unnormalized_heights,
104
- unnormalized_derivatives,
105
- inverse=False,
106
- left=0.0,
107
- right=1.0,
108
- bottom=0.0,
109
- top=1.0,
110
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
111
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
112
- min_derivative=DEFAULT_MIN_DERIVATIVE,
113
- ):
114
- if torch.min(inputs) < left or torch.max(inputs) > right:
115
- raise ValueError("Input to a transform is not within its domain")
116
-
117
- num_bins = unnormalized_widths.shape[-1]
118
-
119
- if min_bin_width * num_bins > 1.0:
120
- raise ValueError("Minimal bin width too large for the number of bins")
121
- if min_bin_height * num_bins > 1.0:
122
- raise ValueError("Minimal bin height too large for the number of bins")
123
-
124
- widths = F.softmax(unnormalized_widths, dim=-1)
125
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
126
- cumwidths = torch.cumsum(widths, dim=-1)
127
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
128
- cumwidths = (right - left) * cumwidths + left
129
- cumwidths[..., 0] = left
130
- cumwidths[..., -1] = right
131
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
132
-
133
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
134
-
135
- heights = F.softmax(unnormalized_heights, dim=-1)
136
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
137
- cumheights = torch.cumsum(heights, dim=-1)
138
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
139
- cumheights = (top - bottom) * cumheights + bottom
140
- cumheights[..., 0] = bottom
141
- cumheights[..., -1] = top
142
- heights = cumheights[..., 1:] - cumheights[..., :-1]
143
-
144
- if inverse:
145
- bin_idx = searchsorted(cumheights, inputs)[..., None]
146
- else:
147
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
148
-
149
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
150
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
151
-
152
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
153
- delta = heights / widths
154
- input_delta = delta.gather(-1, bin_idx)[..., 0]
155
-
156
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
157
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
158
-
159
- input_heights = heights.gather(-1, bin_idx)[..., 0]
160
-
161
- if inverse:
162
- a = (inputs - input_cumheights) * (
163
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
164
- ) + input_heights * (input_delta - input_derivatives)
165
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
166
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
167
- )
168
- c = -input_delta * (inputs - input_cumheights)
169
-
170
- discriminant = b.pow(2) - 4 * a * c
171
- assert (discriminant >= 0).all()
172
-
173
- root = (2 * c) / (-b - torch.sqrt(discriminant))
174
- outputs = root * input_bin_widths + input_cumwidths
175
-
176
- theta_one_minus_theta = root * (1 - root)
177
- denominator = input_delta + (
178
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
179
- * theta_one_minus_theta
180
- )
181
- derivative_numerator = input_delta.pow(2) * (
182
- input_derivatives_plus_one * root.pow(2)
183
- + 2 * input_delta * theta_one_minus_theta
184
- + input_derivatives * (1 - root).pow(2)
185
- )
186
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
187
-
188
- return outputs, -logabsdet
189
- else:
190
- theta = (inputs - input_cumwidths) / input_bin_widths
191
- theta_one_minus_theta = theta * (1 - theta)
192
-
193
- numerator = input_heights * (
194
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
195
- )
196
- denominator = input_delta + (
197
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
198
- * theta_one_minus_theta
199
- )
200
- outputs = input_cumheights + numerator / denominator
201
-
202
- derivative_numerator = input_delta.pow(2) * (
203
- input_derivatives_plus_one * theta.pow(2)
204
- + 2 * input_delta * theta_one_minus_theta
205
- + input_derivatives * (1 - theta).pow(2)
206
- )
207
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
208
-
209
- return outputs, logabsdet
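A quick way to exercise the spline above is a forward/inverse round trip. The import path below is an assumption based on this file's location; with `tails="linear"` the derivative tensor carries `num_bins - 1` entries because the function pads it internally:

```python
# Round-trip sanity check for the piecewise rational-quadratic spline above.
# The import path is assumed from the file location in this repository.
import torch
from infer_pack.transforms import piecewise_rational_quadratic_transform

torch.manual_seed(0)
batch, length, num_bins = 2, 64, 10
x = torch.rand(batch, length) * 2 - 1                    # values inside the tail bound
widths = torch.randn(batch, length, num_bins)
heights = torch.randn(batch, length, num_bins)
derivs = torch.randn(batch, length, num_bins - 1)        # padded internally for linear tails

y, logdet = piecewise_rational_quadratic_transform(
    x, widths, heights, derivs, inverse=False, tails="linear", tail_bound=1.0)
x_back, neg_logdet = piecewise_rational_quadratic_transform(
    y, widths, heights, derivs, inverse=True, tails="linear", tail_bound=1.0)

assert torch.allclose(x, x_back, atol=1e-4)              # inverse undoes forward
assert torch.allclose(logdet, -neg_logdet, atol=1e-4)    # log-determinants are negated
```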
spaces/Benson/text-generation/Examples/Avakin Life Pc.md DELETED
@@ -1,50 +0,0 @@
1
-
2
- <br> - H2: Cómo descargar e instalar Avakin Life en PC <br> - H3: Cómo usar BlueStacks para jugar Avakin Life en PC <br> - H3: Cómo usar el sitio web oficial de Avakin para jugar Avakin Life en PC <br> - H2: Beneficios de jugar Avakin Life en PC <br-> PC H3: Mejores gráficos y rendimiento <br> - H3: Más control y personalización <br> - H3: Comunicación y traducción más fáciles <br> - H2: Conclusión: Comienza tu segunda vida en el PC hoy <br> - H4: Preguntas frecuentes | Tabla 2: Artículo con formato HTML <h1>Avakin Life PC: Cómo jugar al mundo virtual en 3D en tu ordenador</h1>
3
- <p>Si estás buscando un juego de rol que te permita crear tu propio avatar, explorar un mundo virtual y conocer nuevos amigos, entonces deberías echar un vistazo a Avakin Life. Avakin Life es un juego de mundo virtual en 3D de Lockwood Publishing que está disponible en dispositivos iOS y Android. Puede personalizar su apariencia, estilo y hogar, ir a aventuras, unirse a concursos de moda y socializar con millones de jugadores de todo el mundo. </p>
4
- <h2>avakin life pc</h2><br /><p><b><b>Download Zip</b> &#127383; <a href="https://bltlly.com/2v6Jwg">https://bltlly.com/2v6Jwg</a></b></p><br /><br />
5
- <p>Pero ¿sabías que también puedes jugar Avakin Life en tu PC? Sí, lo has oído bien. Puedes disfrutar de este increíble juego en una pantalla más grande, con mejores gráficos, rendimiento y control. En este artículo, le mostraremos cómo descargar e instalar Avakin Life en su computadora usando dos métodos diferentes. También te contaremos los beneficios de jugar a Avakin Life en PC y responderemos algunas preguntas frecuentes. ¡Así que, empecemos! </p>
6
- <h2>Cómo descargar e instalar Avakin Life en PC</h2>
7
- <p>Hay dos formas de jugar Avakin Life en tu PC. Uno es utilizar un software emulador como BlueStacks, que le permite ejecutar aplicaciones y juegos de Android en su ordenador. La otra es utilizar el sitio web oficial de Avakin, que ofrece una versión web del juego a la que puedes acceder a través de tu navegador. Estos son los pasos para cada método:</p>
8
- <h3>Cómo usar BlueStacks para jugar Avakin Life en PC</h3>
9
- <ol>
10
-
11
- <li>Inicie BlueStacks e inicie sesión en su cuenta de Google. Esto le permitirá acceder a la Google Play Store.</li>
12
- <li>Busque Avakin Life en Google Play Store y haga clic en el botón de instalación. Alternativamente, puede descargar el archivo APK de una fuente de confianza y arrastrarlo y soltarlo en BlueStacks.</li>
13
- <li>Una vez completada la instalación, haga clic en el icono de Avakin Life en la pantalla de inicio de BlueStacks para comenzar a jugar. </li>
14
- </ol>
15
- <h3>Cómo usar el sitio web oficial de Avakin para jugar Avakin Life en PC</h3>
16
- <ol>
17
- <li>Vaya al sitio web oficial de Avakin (<a href="( 2 )">https://avakin.com</a>) y haga clic en el botón "Descargar" en la esquina superior derecha. </li>
18
- <li>Seleccione su plataforma preferida entre las opciones disponibles. Puede elegir entre Windows, Mac, Linux o Web.</li>
19
- <li>Si elige Web, será redirigido a una página donde puede jugar Avakin Life directamente en su navegador. Tendrá que iniciar sesión con su cuenta de Facebook o crear una nueva cuenta con su dirección de correo electrónico. </li>
20
- <li>Si elige cualquiera de las otras plataformas, tendrá que descargar e instalar un pequeño archivo lanzador que le permitirá jugar Avakin Life en su computadora. Siga las instrucciones en la pantalla para completar el proceso. </li>
21
- <li>Una vez instalado el lanzador, ábralo e inicie sesión con su cuenta de Facebook o dirección de correo electrónico. A continuación, puede comenzar a jugar Avakin Life en su PC.</li>
22
- </ol>
23
- <h2>Beneficios de jugar Avakin Life en PC</h2>
24
- <p>Ahora que sabe cómo jugar Avakin Life en su PC, es posible que se pregunte por qué debe hacerlo. Bueno, hay muchas ventajas de jugar a este juego en un ordenador en lugar de en un dispositivo móvil. Estas son algunas de ellas:</p>
25
- <h3>Mejores gráficos y rendimiento</h3>
26
-
27
- <h3>Más control y personalización</h3>
28
- <p>Otro beneficio de jugar Avakin Life en PC es que puedes tener más opciones de control y personalización. Puede utilizar el teclado y el ratón para navegar por el juego, que puede ser más conveniente y preciso que el uso de una pantalla táctil. También puedes ajustar la configuración del juego según tus preferencias, como la resolución, el sonido y el idioma. Incluso puedes usar trucos y hacks para mejorar tu juego, como conseguir monedas, gemas o objetos ilimitados. Sin embargo, ten cuidado de no abusar de estas características o podrías ser expulsado del juego. </p>
29
- <h3>Comunicación y traducción más fáciles</h3>
30
- <p>Un tercer beneficio de jugar Avakin Life en PC es que puedes comunicarte y traducir más fácilmente con otros jugadores. Puede usar su teclado para escribir más rápido y cómodamente que usando un teclado virtual. También puedes usar chat de voz o video chat para hablar con tus amigos o hacer otros nuevos. También puedes usar herramientas de traducción para entender e interactuar con jugadores de diferentes países y culturas. Puedes aprender nuevos idiomas, intercambiar ideas y divertirte con gente de todo el mundo. </p>
31
- <h2>Conclusión: Comience su segunda vida en el PC hoy</h2>
32
- <p>Avakin Life es un fantástico juego que te permite crear tu propio avatar, explorar un mundo virtual y conocer nuevos amigos. Pero si quieres llevar tu experiencia de juego al siguiente nivel, deberías intentar jugar a Avakin Life en PC. Puede disfrutar de mejores gráficos, rendimiento, control y personalización. También puede comunicarse y traducir más fácilmente con otros jugadores. Jugar a Avakin Life en PC te hará sentir que estás viviendo una segunda vida en un mundo virtual en 3D. </p>
33
- <p></p>
34
-
35
- <p>Esperamos que este artículo te haya ayudado a aprender a jugar Avakin Life en PC y por qué deberías hacerlo. Si tiene alguna pregunta o comentario, háganoslo saber en los comentarios a continuación. Nos encantaría saber de usted. </p>
36
- <h4>Preguntas frecuentes</h4>
37
- <ul>
38
- <li>Q: ¿Avakin Life es libre de jugar? </li>
39
- <li>A: Sí, Avakin Life es gratis para jugar en dispositivos móviles y PC. Sin embargo, hay algunos elementos del juego y características que requieren dinero real para comprar. También puedes ver anuncios u ofertas completas para ganar monedas y gemas gratis. </li>
40
- <li>Q: ¿Es la vida de Avakin segura para los niños? </li>
41
- <li>A: Avakin Life está clasificado 12+ por la App Store y 13+ por la Google Play Store. Contiene violencia leve, contenido sexual, desnudez, blasfemia, alcohol, tabaco y drogas. También permite a los usuarios chatear con extraños en línea, lo que puede plantear algunos riesgos. Por lo tanto, se recomienda la orientación y supervisión de los padres para los jugadores más jóvenes. </li>
42
- <li>Q: ¿Cómo puedo actualizar Avakin Life en PC? </li>
43
- <li>A: Si está utilizando BlueStacks para jugar Avakin Life en PC, puede actualizar el juego yendo a la Google Play Store y haciendo clic en el botón de actualización. Si estás usando el sitio web oficial de Avakin para jugar a Avakin Life en PC, no necesitas actualizar el juego manualmente, ya que se actualizará automáticamente. </li>
44
- <li>Q: ¿Cómo puedo eliminar mi cuenta de Avakin Life? </li>
45
- <li>A: Si desea eliminar su cuenta de Avakin Life, debe ponerse en contacto con el equipo de atención al cliente a través de su sitio web (<a href="">https://avakin.com/ support/</a>) o correo electrónico ([email protected]). Deberá proporcionar su nombre de usuario, dirección de correo electrónico, ID de dispositivo y razón para eliminar su cuenta. Una vez procesada su solicitud, su cuenta será eliminada permanentemente. </li>
46
- <li>Q: ¿Cómo me comunico con el soporte de Avakin Life? </li>
47
-
48
- </ul></p> 64aa2da5cf<br />
49
- <br />
50
- <br />
spaces/Benson/text-generation/Examples/Descargar Bus De Conduccin De Telolet 3d Mod Apk V1.2. 4b.md DELETED
@@ -1,60 +0,0 @@
1
-
2
- <h1>Descargar Telolet autobús de conducción 3D Mod APK v1.2. 4b y disfrutar de la diversión de conducir un autobús realista en Indonesia</h1>
3
- <p>Si eres un fan de los juegos de conducción de autobuses, es posible que hayas oído hablar de Telolet Bus Driving 3D, un juego revolucionario en el género de la conducción árcade sin fin con gráficos y control realistas en 3D. En este juego, usted puede viajar a través de los coches de tráfico de la carretera de Indonesia con un autobús muy fresco y hacer que los niños felices tocando la bocina de su único autobús telolet. Pero lo que si quieres disfrutar del juego sin limitaciones o interrupciones? Bueno, puedes hacerlo descargando Telolet Bus Driving 3D Mod APK v1.2. 4b, que te da dinero ilimitado, todos los autobuses desbloqueados, y sin anuncios. En este artículo, te contaremos más sobre este juego, sus características y cómo descargarlo e instalarlo en tu dispositivo. </p>
4
- <h2>descargar bus de conducción de telolet 3d mod apk v1.2. 4b</h2><br /><p><b><b>DOWNLOAD</b> &#9658;&#9658;&#9658; <a href="https://bltlly.com/2v6JJz">https://bltlly.com/2v6JJz</a></b></p><br /><br />
5
- <h2>¿Qué es Telolet Bus Driving 3D? </h2>
6
- <p>Telolet Bus Driving 3D es un juego desarrollado por LOCOS, un estudio de juegos indonesio que tiene como objetivo crear juegos divertidos y atractivos para todos. El juego se inspiró en el fenómeno viral de "Om Telolet Om", que significa "Señor, toca la bocina, señor" en indonesio. Esta es una frase que los niños gritan a los conductores de autobús para pedirles que toquen sus distintivos cuernos telolet, que producen un sonido musical. El juego fue lanzado en diciembre de 2016 y desde entonces ha ganado más de 10 millones de descargas en Google Play Store.</p>
7
- <h3>Características de Telolet Bus Driving 3D</h3>
8
- <p>Telolet Bus Driving 3D no es solo un juego de conducción simple. Tiene muchas características que lo hacen destacar de otros juegos del mismo género. Estos son algunos de ellos:</p>
9
- <h4>Impresionantes gráficos 3D</h4>
10
- <p>El juego tiene increíbles gráficos en 3D que te hacen sentir como si estuvieras conduciendo un autobús real en Indonesia. Puedes ver los detalles del autobús, el tráfico, el medio ambiente y los niños que te animan cuando tocas la bocina. </p>
11
- <h4>Manejo del coche suave y realista</h4>
12
-
13
- <h4>Muchos autobuses para elegir</h4>
14
- <p>El juego tiene muchos autobuses para elegir, cada uno con su propio diseño, color, velocidad y melodía de cuerno telolet. Puedes desbloquear nuevos buses ganando monedas o usando la versión mod APK. </p>
15
- <h4>3 lugares famosos en Indonesia</h4>
16
- <p>El juego tiene 3 lugares famosos en Indonesia que puedes explorar: Pantura, Kampoeng y Cipali. Cada lugar tiene su propio paisaje, tráfico y desafíos. </p>
17
- <p></p>
18
- <h4>3 modos de juego</h4>
19
- <p>El juego tiene 3 modos de juego: One Way, Rush Hour y Two Way. En el modo One Way, conduce en una carretera de un solo sentido con tráfico moderado. En el modo de hora punta, se enfrenta a un atasco de tráfico pesado y tiene que evitar colisiones. En el modo de dos vías, se conduce en una carretera de dos vías con tráfico entrante y tiene que adelantar a otros vehículos. </p>
20
- <h4>Tipos ricos de tráfico NPC Indonesia</h4>
21
- <p>El juego tiene ricos tipos de tráfico NPC Indonesia que hacen el juego más realista y desafiante. Usted encontrará coches, camiones, motocicletas, autobuses y otros vehículos que tienen diferentes comportamientos y velocidades. También verás peatones, animales y obstáculos en la carretera. </p>
22
- <h4>Actualizaciones de atributos</h4>
23
- <p>El juego tiene actualizaciones de atributos que le permiten mejorar el rendimiento y la apariencia de su autobús. Puedes actualizar tu velocidad, freno, bocina y color usando las monedas que ganes del juego o la versión mod APK. </p>
24
- <h4>Misiones diarias difíciles</h4>
25
- <p>El juego tiene desafiantes misiones diarias que te dan recompensas y objetivos adicionales. Puede completar varias tareas, como conducir cierta distancia, tocar la bocina un cierto número de veces, adelantar un cierto número de vehículos y más. </p>
26
- <h4>Tablas de clasificación en línea y logros</h4>
27
- <p>El juego tiene tablas de clasificación en línea y logros que le permiten competir con otros jugadores y mostrar sus habilidades. Puedes posicionarte en las tablas de clasificación globales y regionales al ganar altas puntuaciones y monedas. También puedes desbloquear logros al completar varios desafíos e hitos. </p>
28
-
29
- <p>Telolet Bus Driving 3D es un juego divertido y adictivo que te mantendrá entretenido durante horas. Sin embargo, si quieres disfrutar del juego sin limitaciones ni interrupciones, debes descargar Telolet Bus Driving 3D Mod APK v1.2. 4b, que le da los siguientes beneficios:</p>
30
- <h4>Dinero ilimitado</h4>
31
- <p>Con la versión APK mod, usted tendrá dinero ilimitado que se puede utilizar para comprar y actualizar cualquier autobús que desee. No tienes que preocuparte por quedarte sin monedas o gastar dinero real para conseguir más. </p>
32
- <h4>Todos los autobuses desbloqueados</h4>
33
- <p>Con la versión mod APK, tendrás todos los buses desbloqueados desde el principio. No tienes que jugar durante horas o completar misiones para desbloquear nuevos autobuses. Puedes elegir el autobús que quieras y disfrutar de sus características únicas. </p>
34
- <h4>No hay anuncios</h4>
35
- <p>Con la versión mod APK, no tendrás anuncios que interrumpan tu juego o te molesten. No tienes que ver videos o hacer clic en banners para obtener monedas o recompensas adicionales. Puedes jugar el juego sin problemas y sin distracciones. </p>
36
- <h2>Cómo descargar e instalar Telolet Bus Driving 3D Mod APK v1.2. 4b? </h2>
37
- <p>Si está interesado en descargar e instalar Telolet Bus Driving 3D Mod APK v1.2. 4b en su dispositivo, puede seguir estos sencillos pasos:</p>
38
- <h3>Paso 1: Descargar el archivo APK de una fuente de confianza</h3>
39
- <p>El primer paso es descargar el archivo APK de una fuente de confianza que proporciona descargas seguras y libres de virus. Puede utilizar este enlace para descargar el archivo directamente a su dispositivo o transferirlo desde su PC.</p>
40
- <h3>Paso 2: Habilitar fuentes desconocidas en el dispositivo</h3>
41
- <p>El segundo paso es habilitar fuentes desconocidas en su dispositivo para que pueda instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo. </p>
42
- <h3>Paso 3: Instalar el archivo APK y disfrutar del juego</h3>
43
-
44
- <p>Esperamos que este artículo le haya ayudado a aprender más sobre Telolet Bus Driving 3D Mod APK v1.2. 4b y cómo descargarlo e instalarlo en su dispositivo. Este es un gran juego para los entusiastas de la conducción de autobuses que quieren experimentar la emoción de conducir un autobús realista en Indonesia con un cuerno musical. ¡Descárgalo ahora y diviértete! </p>
45
- <h2>Conclusión</h2>
46
- <p>Telolet Bus Driving 3D es un juego innovador en el género de la conducción árcade sin fin con gráficos y control 3D realistas. Fue inspirado por el fenómeno viral de "Om Telolet Om", que significa "Señor, toca la bocina, señor" en indonesio. El juego tiene muchas características que lo hacen destacar de otros juegos en el mismo género, tales como impresionantes gráficos en 3D, manejo de automóviles suave y realista, muchos autobuses para elegir, 3 lugares famosos en Indonesia, 3 modos de juego, ricos tipos de tráfico NPC Indonesia, actualizaciones de atributos, misiones diarias desafiantes, y tablas de clasificación en línea y logros. Sin embargo, si quieres disfrutar del juego sin limitaciones ni interrupciones, debes descargar Telolet Bus Driving 3D Mod APK v1.2. 4b, que le da dinero ilimitado, todos los autobuses desbloqueados, y sin anuncios. Para descargar e instalar la versión mod APK, solo tiene que seguir tres sencillos pasos: descargar el archivo APK de una fuente de confianza, habilitar fuentes desconocidas en su dispositivo, e instalar el archivo APK y disfrutar del juego. Este es un gran juego para los entusiastas de la conducción de autobuses que quieren experimentar la emoción de conducir un autobús realista en Indonesia con un cuerno musical. ¡Descárgalo ahora y diviértete! </p>
47
- <h2>Preguntas frecuentes</h2>
48
- <p>Aquí hay algunas preguntas frecuentes sobre Telolet Bus Driving 3D Mod APK v1.2. 4b:</p>
49
- <h4>Es Telolet autobús de conducción 3D Mod APK v1.2. 4b seguro para descargar e instalar? </h4>
50
- <p>Sí, Telolet autobús de conducción 3D Mod APK v1.2. 4b es seguro para descargar e instalar siempre y cuando utilice una fuente de confianza que proporciona descargas libres de virus. Puede utilizar este enlace para descargar el archivo de forma segura. </p>
51
-
52
- <p>No, no es necesario rootear el dispositivo para usar Telolet Bus Driving 3D Mod APK v1.2. 4b. Solo necesitas habilitar fuentes desconocidas en la configuración de tu dispositivo e instalar el archivo APK como de costumbre. </p>
53
- <h4>Será Telolet autobús de conducción 3D Mod APK v1.2. 4b afectar mi progreso original del juego? </h4>
54
- <p>No, Telolet Bus Driving 3D Mod APK v1.2. 4b no afectará su progreso original del juego. Puedes jugar ambas versiones por separado y cambiar entre ellas cuando quieras. </p>
55
- <h4>¿Puedo jugar Telolet autobús de conducción 3D Mod APK v1.2. 4b en línea con otros jugadores? </h4>
56
- <p>Sí, se puede jugar Telolet autobús de conducción 3D Mod APK v1.2. 4b en línea con otros jugadores y competir en las tablas de clasificación y logros. Sin embargo, es posible que encuentre algunos problemas de compatibilidad con los jugadores que utilizan la versión original del juego. </p>
57
- <h4>¿Cómo puedo contactar al desarrollador de Telolet Bus Driving 3D Mod APK v1.2. 4b si tengo alguna pregunta o comentario? </h4>
58
- <p>Puede ponerse en contacto con el desarrollador de Telolet Bus Driving 3D Mod APK v1.2. 4b enviando un correo electrónico a [email protected] o visitando su página de Facebook en https://www.facebook.com/locosgames/ Estarán encantados de saber de usted y responder a sus preguntas o comentarios. </p> 64aa2da5cf<br />
59
- <br />
60
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/json.py DELETED
@@ -1,140 +0,0 @@
1
- from pathlib import Path
2
- from json import loads, dumps
3
- from typing import Any, Callable, Optional, Union
4
-
5
- from .text import Text
6
- from .highlighter import JSONHighlighter, NullHighlighter
7
-
8
-
9
- class JSON:
10
- """A renderable which pretty prints JSON.
11
-
12
- Args:
13
- json (str): JSON encoded data.
14
- indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2.
15
- highlight (bool, optional): Enable highlighting. Defaults to True.
16
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
17
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
18
- check_circular (bool, optional): Check for circular references. Defaults to True.
19
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
20
- default (Callable, optional): A callable that converts values that can not be encoded
21
- in to something that can be JSON encoded. Defaults to None.
22
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
23
- """
24
-
25
- def __init__(
26
- self,
27
- json: str,
28
- indent: Union[None, int, str] = 2,
29
- highlight: bool = True,
30
- skip_keys: bool = False,
31
- ensure_ascii: bool = False,
32
- check_circular: bool = True,
33
- allow_nan: bool = True,
34
- default: Optional[Callable[[Any], Any]] = None,
35
- sort_keys: bool = False,
36
- ) -> None:
37
- data = loads(json)
38
- json = dumps(
39
- data,
40
- indent=indent,
41
- skipkeys=skip_keys,
42
- ensure_ascii=ensure_ascii,
43
- check_circular=check_circular,
44
- allow_nan=allow_nan,
45
- default=default,
46
- sort_keys=sort_keys,
47
- )
48
- highlighter = JSONHighlighter() if highlight else NullHighlighter()
49
- self.text = highlighter(json)
50
- self.text.no_wrap = True
51
- self.text.overflow = None
52
-
53
- @classmethod
54
- def from_data(
55
- cls,
56
- data: Any,
57
- indent: Union[None, int, str] = 2,
58
- highlight: bool = True,
59
- skip_keys: bool = False,
60
- ensure_ascii: bool = False,
61
- check_circular: bool = True,
62
- allow_nan: bool = True,
63
- default: Optional[Callable[[Any], Any]] = None,
64
- sort_keys: bool = False,
65
- ) -> "JSON":
66
- """Encodes a JSON object from arbitrary data.
67
-
68
- Args:
69
- data (Any): An object that may be encoded in to JSON
70
- indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2.
71
- highlight (bool, optional): Enable highlighting. Defaults to True.
72
- default (Callable, optional): Optional callable which will be called for objects that cannot be serialized. Defaults to None.
73
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
74
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
75
- check_circular (bool, optional): Check for circular references. Defaults to True.
76
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
77
- default (Callable, optional): A callable that converts values that can not be encoded
78
- in to something that can be JSON encoded. Defaults to None.
79
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
80
-
81
- Returns:
82
- JSON: New JSON object from the given data.
83
- """
84
- json_instance: "JSON" = cls.__new__(cls)
85
- json = dumps(
86
- data,
87
- indent=indent,
88
- skipkeys=skip_keys,
89
- ensure_ascii=ensure_ascii,
90
- check_circular=check_circular,
91
- allow_nan=allow_nan,
92
- default=default,
93
- sort_keys=sort_keys,
94
- )
95
- highlighter = JSONHighlighter() if highlight else NullHighlighter()
96
- json_instance.text = highlighter(json)
97
- json_instance.text.no_wrap = True
98
- json_instance.text.overflow = None
99
- return json_instance
100
-
101
- def __rich__(self) -> Text:
102
- return self.text
103
-
104
-
105
- if __name__ == "__main__":
106
-
107
- import argparse
108
- import sys
109
-
110
- parser = argparse.ArgumentParser(description="Pretty print json")
111
- parser.add_argument(
112
- "path",
113
- metavar="PATH",
114
- help="path to file, or - for stdin",
115
- )
116
- parser.add_argument(
117
- "-i",
118
- "--indent",
119
- metavar="SPACES",
120
- type=int,
121
- help="Number of spaces in an indent",
122
- default=2,
123
- )
124
- args = parser.parse_args()
125
-
126
- from pip._vendor.rich.console import Console
127
-
128
- console = Console()
129
- error_console = Console(stderr=True)
130
-
131
- try:
132
- if args.path == "-":
133
- json_data = sys.stdin.read()
134
- else:
135
- json_data = Path(args.path).read_text()
136
- except Exception as error:
137
- error_console.print(f"Unable to read {args.path!r}; {error}")
138
- sys.exit(-1)
139
-
140
- console.print(JSON(json_data, indent=args.indent), soft_wrap=True)
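The file above is pip's vendored copy of Rich's `JSON` renderable. The same class is available from the standalone `rich` package, so a minimal usage sketch looks like this:

```python
# Pretty-printing JSON with the renderable defined above, via the standalone
# `rich` package rather than pip's private vendored copy.
from rich.console import Console
from rich.json import JSON

console = Console()

# From an already-encoded JSON string:
console.print(JSON('{"name": "rich", "vendored_by": ["pip"], "stable": true}'))

# Or straight from a Python object, with sorted keys:
console.print(JSON.from_data({"b": 2, "a": [1, 2, 3]}, indent=4, sort_keys=True))
```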
spaces/CAMP-ViL/Xplainer/app.py DELETED
@@ -1,137 +0,0 @@
1
- from pathlib import Path
2
-
3
- import gradio as gr
4
- import numpy as np
5
- from matplotlib import pyplot as plt
6
-
7
- from descriptors import disease_descriptors_chexpert, disease_descriptors_chestxray14
8
- from model import InferenceModel
9
-
10
-
11
- def plot_bars(model_output):
12
- # sort model_output by overall_probability
13
- model_output = {k: v for k, v in sorted(model_output.items(), key=lambda item: item[1]['overall_probability'], reverse=True)}
14
-
15
- # Create a figure with as many subplots as there are diseases, arranged vertically
16
- fig, axs = plt.subplots(len(model_output), 1, figsize=(10, 5 * len(model_output)))
17
- # axs is not iterable if only one subplot is created, so make it a list
18
- if len(model_output) == 1:
19
- axs = [axs]
20
-
21
- for ax, (disease, data) in zip(axs, model_output.items()):
22
- desc_probs = list(data['descriptor_probabilities'].items())
23
- # sort descending
24
- desc_probs = sorted(desc_probs, key=lambda item: item[1], reverse=True)
25
-
26
- my_probs = [p[1] for p in desc_probs]
27
- min_prob = min(my_probs)
28
- max_prob = max(my_probs)
29
- my_labels = [p[0] for p in desc_probs]
30
-
31
- # Convert probabilities to differences from 0.5
32
- diffs = np.abs(np.array(my_probs) - 0.5)
33
-
34
- # Set colors based on sign of difference
35
- colors = ['red' if p < 0.5 else 'forestgreen' for p in my_probs]
36
-
37
- # Plot bars with appropriate colors and left offsets
38
- left = [p if p < 0.5 else 0.5 for p in my_probs]
39
- bars = ax.barh(my_labels, diffs, left=left, color=colors, alpha=0.3)
40
-
41
- for i, bar in enumerate(bars):
42
- ax.text(min_prob - 0.04, bar.get_y() + bar.get_height() / 2, my_labels[i], ha='left', va='center', color='black', fontsize=15)
43
-
44
- ax.set_xlim(min(min_prob - 0.05, 0.49), max(max_prob + 0.05, 0.51))
45
-
46
- # Invert the y-axis to show bars with values less than 0.5 to the left of the center
47
- ax.invert_yaxis()
48
-
49
- ax.set_yticks([])
50
-
51
- # Add a title for the disease
52
- if data['overall_probability'] >= 0.5:
53
- ax.set_title(f"{disease} : score of {data['overall_probability']:.2f}")
54
- else:
55
- ax.set_title(f"No {disease} : score of {data['overall_probability']:.2f}")
56
-
57
- # make title larger and bold
58
- ax.title.set_fontsize(15)
59
- ax.title.set_fontweight(600)
60
-
61
- # Save the plot
62
- plt.tight_layout() # Adjust subplot parameters to give specified padding
63
- file_path = 'plot.png'
64
- plt.savefig(file_path)
65
- plt.close(fig)
66
-
67
- return file_path
68
-
69
-
70
- def classify_image(inference_model, image_path, diseases_to_predict):
71
- descriptors_with_indication = [d + " indicating " + disease for disease, descriptors in diseases_to_predict.items() for d in descriptors]
72
- probs, negative_probs = inference_model.get_descriptor_probs(image_path=Path(image_path), descriptors=descriptors_with_indication,
73
- do_negative_prompting=True, demo=True)
74
-
75
- disease_probs, negative_disease_probs = inference_model.get_diseases_probs(diseases_to_predict, pos_probs=probs, negative_probs=negative_probs)
76
-
77
- model_output = {}
78
- for idx, disease in enumerate(diseases_to_predict.keys()):
79
- model_output[disease] = {
80
- 'overall_probability': disease_probs[disease],
81
- 'descriptor_probabilities': {descriptor: probs[f'{descriptor} indicating {disease}'].item() for descriptor in
82
- diseases_to_predict[disease]}
83
- }
84
-
85
- file_path = plot_bars(model_output)
86
- return file_path
87
-
88
-
89
- # Define the function you want to wrap
90
- def process_input(image_path, prompt_names: list, disease_name: str, descriptors: str):
91
- diseases_to_predict = {}
92
-
93
- for prompt in prompt_names:
94
- if prompt == 'Custom':
95
- diseases_to_predict[disease_name] = descriptors.split('\n')
96
- else:
97
- if prompt in disease_descriptors_chexpert:
98
- diseases_to_predict[prompt] = disease_descriptors_chexpert[prompt]
99
- else: # only chestxray14
100
- diseases_to_predict[prompt] = disease_descriptors_chestxray14[prompt]
101
-
102
- # classify
103
- model = InferenceModel()
104
- output = classify_image(model, image_path, diseases_to_predict)
105
-
106
- return output
107
-
108
- with open("article.md", "r") as f:
109
- article = f.read()
110
- with open("description.md", "r") as f:
111
- description = f.read()
112
-
113
- # Define the Gradio interface
114
- iface = gr.Interface(
115
- fn=process_input,
116
- examples = [['examples/enlarged_cardiomediastinum.jpg', ['Enlarged Cardiomediastinum'], '', ''],['examples/edema.jpg', ['Edema'], '', ''],
117
- ['examples/support_devices.jpg', ['Custom'], 'Pacemaker', 'metalic object\nimplant on the left side of the chest\nimplanted cardiac device']],
118
- inputs=[gr.inputs.Image(type="filepath"), gr.inputs.CheckboxGroup(
119
- choices=['Enlarged Cardiomediastinum', 'Cardiomegaly', 'Lung Opacity', 'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia',
120
- 'Atelectasis', 'Pneumothorax', 'Pleural Effusion', 'Pleural Other', 'Fracture', 'Support Devices',
121
- 'Infiltration', 'Mass', 'Nodule', 'Emphysema', 'Fibrosis', 'Pleural Thickening', 'Hernia',
122
- 'Custom'],
123
- default=['Enlarged Cardiomediastinum', 'Cardiomegaly', 'Lung Opacity', 'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia',
124
- 'Atelectasis', 'Pneumothorax', 'Pleural Effusion', 'Pleural Other', 'Fracture', 'Support Devices'],
125
- label='Select to use predefined disease descriptors. Select "Custom" to define your own observations.'),
126
- gr.inputs.Textbox(lines=2, placeholder="Name of pathology for which you want to define custom observations", label='Pathology:'),
127
- gr.inputs.Textbox(lines=2, placeholder="Add your custom (positive) observations separated by a new line"
128
- "\n Note: Each descriptor will automatically be embedded into our prompt format: There is/are (no) <observation> indicating <pathology>"
129
- "\n Example:\n\n Opacity\nPleural Effusion\nConsolidation"
130
- , label='Custom Observations:')],
131
- article=article,
132
- description=description,
133
- outputs=gr.outputs.Image(type="filepath")
134
- )
135
-
136
- # Launch the interface
137
- iface.launch()
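`classify_image` needs the Xplainer `InferenceModel` weights, but the `plot_bars` helper above can be exercised on its own inside the same module. The disease names and probabilities below are made up for illustration:

```python
# Feeding plot_bars (defined above) a hand-made result dict; values are
# illustrative only and do not come from a real model.
model_output = {
    "Edema": {
        "overall_probability": 0.74,
        "descriptor_probabilities": {
            "increased vascular markings": 0.81,
            "blurred hilar structures": 0.58,
        },
    },
    "Pneumothorax": {
        "overall_probability": 0.22,
        "descriptor_probabilities": {
            "visible visceral pleural line": 0.18,
            "absent peripheral lung markings": 0.27,
        },
    },
}

chart_path = plot_bars(model_output)   # renders the bars and returns 'plot.png'
print("bar chart saved to", chart_path)
```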
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_templates/layout.html DELETED
@@ -1,35 +0,0 @@
1
- {% extends "!layout.html" %}
2
-
3
- <link rel="canonical" href="{{ theme_canonical_url }}{{ pagename }}.html" />
4
- {% block menu %}
5
- <div>
6
- <a style="color:#F05732" href="{{ theme_canonical_url }}{{ pagename }}.html">
7
- You are viewing unstable developer preview docs.
8
- Click here to view docs for latest stable release.
9
- </a>
10
- </div>
11
- {{ super() }}
12
- {% endblock %}
13
-
14
- {% block footer %}
15
- {{ super() }}
16
- <script>
17
- (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
18
- (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
19
- m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
20
- })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
21
- ga('create', 'UA-90545585-1', 'auto');
22
- ga('send', 'pageview');
23
- </script>
24
-
25
- <script async src="https://www.googletagmanager.com/gtag/js?id=UA-117752657-2"></script>
26
-
27
- <script>
28
- window.dataLayer = window.dataLayer || [];
29
- function gtag(){dataLayer.push(arguments);}
30
- gtag('js', new Date());
31
- gtag('config', 'UA-117752657-2');
32
- </script>
33
-
34
- <img height="1" width="1" style="border-style:none;" alt="" src="https://www.googleadservices.com/pagead/conversion/795629140/?label=txkmCPmdtosBENSssfsC&amp;guid=ON&amp;script=0"/>
35
- {% endblock %}
spaces/CVPR/LIVE/cuda_utils.h DELETED
@@ -1,53 +0,0 @@
1
- #pragma once
2
-
3
- #ifdef __CUDACC__
4
- #include <cuda.h>
5
- #include <cuda_runtime.h>
6
- #endif
7
- #include <cstdio>
8
- #include <cassert>
9
- #include <limits>
10
-
11
- #ifdef __CUDACC__
12
- #define checkCuda(x) do { if((x)!=cudaSuccess) { \
13
- printf("CUDA Runtime Error: %s at %s:%d\n",\
14
- cudaGetErrorString(x),__FILE__,__LINE__);\
15
- exit(1);}} while(0)
16
- #endif
17
-
18
- template <typename T>
19
- DEVICE
20
- inline T infinity() {
21
- #ifdef __CUDA_ARCH__
22
- const unsigned long long ieee754inf = 0x7ff0000000000000;
23
- return __longlong_as_double(ieee754inf);
24
- #else
25
- return std::numeric_limits<T>::infinity();
26
- #endif
27
- }
28
-
29
- template <>
30
- DEVICE
31
- inline double infinity() {
32
- #ifdef __CUDA_ARCH__
33
- return __longlong_as_double(0x7ff0000000000000ULL);
34
- #else
35
- return std::numeric_limits<double>::infinity();
36
- #endif
37
- }
38
-
39
- template <>
40
- DEVICE
41
- inline float infinity() {
42
- #ifdef __CUDA_ARCH__
43
- return __int_as_float(0x7f800000);
44
- #else
45
- return std::numeric_limits<float>::infinity();
46
- #endif
47
- }
48
-
49
- inline void cuda_synchronize() {
50
- #ifdef __CUDACC__
51
- checkCuda(cudaDeviceSynchronize());
52
- #endif
53
- }
spaces/CVPR/LIVE/pybind11/tests/test_eigen.cpp DELETED
@@ -1,327 +0,0 @@
1
- /*
2
- tests/eigen.cpp -- automatic conversion of Eigen types
3
-
4
- Copyright (c) 2016 Wenzel Jakob <[email protected]>
5
-
6
- All rights reserved. Use of this source code is governed by a
7
- BSD-style license that can be found in the LICENSE file.
8
- */
9
-
10
- #include "pybind11_tests.h"
11
- #include "constructor_stats.h"
12
- #include <pybind11/eigen.h>
13
- #include <pybind11/stl.h>
14
-
15
- #if defined(_MSC_VER)
16
- # pragma warning(disable: 4996) // C4996: std::unary_negation is deprecated
17
- #endif
18
-
19
- #include <Eigen/Cholesky>
20
-
21
- using MatrixXdR = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
22
-
23
-
24
-
25
- // Sets/resets a testing reference matrix to have values of 10*r + c, where r and c are the
26
- // (1-based) row/column number.
27
- template <typename M> void reset_ref(M &x) {
28
- for (int i = 0; i < x.rows(); i++) for (int j = 0; j < x.cols(); j++)
29
- x(i, j) = 11 + 10*i + j;
30
- }
31
-
32
- // Returns a static, column-major matrix
33
- Eigen::MatrixXd &get_cm() {
34
- static Eigen::MatrixXd *x;
35
- if (!x) {
36
- x = new Eigen::MatrixXd(3, 3);
37
- reset_ref(*x);
38
- }
39
- return *x;
40
- }
41
- // Likewise, but row-major
42
- MatrixXdR &get_rm() {
43
- static MatrixXdR *x;
44
- if (!x) {
45
- x = new MatrixXdR(3, 3);
46
- reset_ref(*x);
47
- }
48
- return *x;
49
- }
50
- // Resets the values of the static matrices returned by get_cm()/get_rm()
51
- void reset_refs() {
52
- reset_ref(get_cm());
53
- reset_ref(get_rm());
54
- }
55
-
56
- // Returns element 2,1 from a matrix (used to test copy/nocopy)
57
- double get_elem(Eigen::Ref<const Eigen::MatrixXd> m) { return m(2, 1); };
58
-
59
-
60
- // Returns a matrix with 10*r + 100*c added to each matrix element (to help test that the matrix
61
- // reference is referencing rows/columns correctly).
62
- template <typename MatrixArgType> Eigen::MatrixXd adjust_matrix(MatrixArgType m) {
63
- Eigen::MatrixXd ret(m);
64
- for (int c = 0; c < m.cols(); c++) for (int r = 0; r < m.rows(); r++)
65
- ret(r, c) += 10*r + 100*c;
66
- return ret;
67
- }
68
-
69
- struct CustomOperatorNew {
70
- CustomOperatorNew() = default;
71
-
72
- Eigen::Matrix4d a = Eigen::Matrix4d::Zero();
73
- Eigen::Matrix4d b = Eigen::Matrix4d::Identity();
74
-
75
- EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
76
- };
77
-
78
- TEST_SUBMODULE(eigen, m) {
79
- using FixedMatrixR = Eigen::Matrix<float, 5, 6, Eigen::RowMajor>;
80
- using FixedMatrixC = Eigen::Matrix<float, 5, 6>;
81
- using DenseMatrixR = Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
82
- using DenseMatrixC = Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic>;
83
- using FourRowMatrixC = Eigen::Matrix<float, 4, Eigen::Dynamic>;
84
- using FourColMatrixC = Eigen::Matrix<float, Eigen::Dynamic, 4>;
85
- using FourRowMatrixR = Eigen::Matrix<float, 4, Eigen::Dynamic>;
86
- using FourColMatrixR = Eigen::Matrix<float, Eigen::Dynamic, 4>;
87
- using SparseMatrixR = Eigen::SparseMatrix<float, Eigen::RowMajor>;
88
- using SparseMatrixC = Eigen::SparseMatrix<float>;
89
-
90
- // various tests
91
- m.def("double_col", [](const Eigen::VectorXf &x) -> Eigen::VectorXf { return 2.0f * x; });
92
- m.def("double_row", [](const Eigen::RowVectorXf &x) -> Eigen::RowVectorXf { return 2.0f * x; });
93
- m.def("double_complex", [](const Eigen::VectorXcf &x) -> Eigen::VectorXcf { return 2.0f * x; });
94
- m.def("double_threec", [](py::EigenDRef<Eigen::Vector3f> x) { x *= 2; });
95
- m.def("double_threer", [](py::EigenDRef<Eigen::RowVector3f> x) { x *= 2; });
96
- m.def("double_mat_cm", [](Eigen::MatrixXf x) -> Eigen::MatrixXf { return 2.0f * x; });
97
- m.def("double_mat_rm", [](DenseMatrixR x) -> DenseMatrixR { return 2.0f * x; });
98
-
99
- // test_eigen_ref_to_python
100
- // Different ways of passing via Eigen::Ref; the first and second are the Eigen-recommended
101
- m.def("cholesky1", [](Eigen::Ref<MatrixXdR> x) -> Eigen::MatrixXd { return x.llt().matrixL(); });
102
- m.def("cholesky2", [](const Eigen::Ref<const MatrixXdR> &x) -> Eigen::MatrixXd { return x.llt().matrixL(); });
103
- m.def("cholesky3", [](const Eigen::Ref<MatrixXdR> &x) -> Eigen::MatrixXd { return x.llt().matrixL(); });
104
- m.def("cholesky4", [](Eigen::Ref<const MatrixXdR> x) -> Eigen::MatrixXd { return x.llt().matrixL(); });
105
-
106
- // test_eigen_ref_mutators
107
- // Mutators: these add some value to the given element using Eigen, but Eigen should be mapping into
108
- // the numpy array data and so the result should show up there. There are three versions: one that
109
- // works on a contiguous-row matrix (numpy's default), one for a contiguous-column matrix, and one
110
- // for any matrix.
111
- auto add_rm = [](Eigen::Ref<MatrixXdR> x, int r, int c, double v) { x(r,c) += v; };
112
- auto add_cm = [](Eigen::Ref<Eigen::MatrixXd> x, int r, int c, double v) { x(r,c) += v; };
113
-
114
- // Mutators (Eigen maps into numpy variables):
115
- m.def("add_rm", add_rm); // Only takes row-contiguous
116
- m.def("add_cm", add_cm); // Only takes column-contiguous
117
- // Overloaded versions that will accept either row or column contiguous:
118
- m.def("add1", add_rm);
119
- m.def("add1", add_cm);
120
- m.def("add2", add_cm);
121
- m.def("add2", add_rm);
122
- // This one accepts a matrix of any stride:
123
- m.def("add_any", [](py::EigenDRef<Eigen::MatrixXd> x, int r, int c, double v) { x(r,c) += v; });
124
-
125
- // Return mutable references (numpy maps into eigen variables)
126
- m.def("get_cm_ref", []() { return Eigen::Ref<Eigen::MatrixXd>(get_cm()); });
127
- m.def("get_rm_ref", []() { return Eigen::Ref<MatrixXdR>(get_rm()); });
128
- // The same references, but non-mutable (numpy maps into eigen variables, but is !writeable)
129
- m.def("get_cm_const_ref", []() { return Eigen::Ref<const Eigen::MatrixXd>(get_cm()); });
130
- m.def("get_rm_const_ref", []() { return Eigen::Ref<const MatrixXdR>(get_rm()); });
131
-
132
- m.def("reset_refs", reset_refs); // Restores get_{cm,rm}_ref to original values
133
-
134
- // Increments and returns ref to (same) matrix
135
- m.def("incr_matrix", [](Eigen::Ref<Eigen::MatrixXd> m, double v) {
136
- m += Eigen::MatrixXd::Constant(m.rows(), m.cols(), v);
137
- return m;
138
- }, py::return_value_policy::reference);
139
-
140
- // Same, but accepts a matrix of any strides
141
- m.def("incr_matrix_any", [](py::EigenDRef<Eigen::MatrixXd> m, double v) {
142
- m += Eigen::MatrixXd::Constant(m.rows(), m.cols(), v);
143
- return m;
144
- }, py::return_value_policy::reference);
145
-
146
- // Returns an eigen slice of even rows
147
- m.def("even_rows", [](py::EigenDRef<Eigen::MatrixXd> m) {
148
- return py::EigenDMap<Eigen::MatrixXd>(
149
- m.data(), (m.rows() + 1) / 2, m.cols(),
150
- py::EigenDStride(m.outerStride(), 2 * m.innerStride()));
151
- }, py::return_value_policy::reference);
152
-
153
- // Returns an eigen slice of even columns
154
- m.def("even_cols", [](py::EigenDRef<Eigen::MatrixXd> m) {
155
- return py::EigenDMap<Eigen::MatrixXd>(
156
- m.data(), m.rows(), (m.cols() + 1) / 2,
157
- py::EigenDStride(2 * m.outerStride(), m.innerStride()));
158
- }, py::return_value_policy::reference);
159
-
160
- // Returns diagonals: a vector-like object with an inner stride != 1
161
- m.def("diagonal", [](const Eigen::Ref<const Eigen::MatrixXd> &x) { return x.diagonal(); });
162
- m.def("diagonal_1", [](const Eigen::Ref<const Eigen::MatrixXd> &x) { return x.diagonal<1>(); });
163
- m.def("diagonal_n", [](const Eigen::Ref<const Eigen::MatrixXd> &x, int index) { return x.diagonal(index); });
164
-
165
- // Return a block of a matrix (gives non-standard strides)
166
- m.def("block", [](const Eigen::Ref<const Eigen::MatrixXd> &x, int start_row, int start_col, int block_rows, int block_cols) {
167
- return x.block(start_row, start_col, block_rows, block_cols);
168
- });
169
-
170
- // test_eigen_return_references, test_eigen_keepalive
171
- // return value referencing/copying tests:
172
- class ReturnTester {
173
- Eigen::MatrixXd mat = create();
174
- public:
175
- ReturnTester() { print_created(this); }
176
- ~ReturnTester() { print_destroyed(this); }
177
- static Eigen::MatrixXd create() { return Eigen::MatrixXd::Ones(10, 10); }
178
- static const Eigen::MatrixXd createConst() { return Eigen::MatrixXd::Ones(10, 10); }
179
- Eigen::MatrixXd &get() { return mat; }
180
- Eigen::MatrixXd *getPtr() { return &mat; }
181
- const Eigen::MatrixXd &view() { return mat; }
182
- const Eigen::MatrixXd *viewPtr() { return &mat; }
183
- Eigen::Ref<Eigen::MatrixXd> ref() { return mat; }
184
- Eigen::Ref<const Eigen::MatrixXd> refConst() { return mat; }
185
- Eigen::Block<Eigen::MatrixXd> block(int r, int c, int nrow, int ncol) { return mat.block(r, c, nrow, ncol); }
186
- Eigen::Block<const Eigen::MatrixXd> blockConst(int r, int c, int nrow, int ncol) const { return mat.block(r, c, nrow, ncol); }
187
- py::EigenDMap<Eigen::Matrix2d> corners() { return py::EigenDMap<Eigen::Matrix2d>(mat.data(),
188
- py::EigenDStride(mat.outerStride() * (mat.outerSize()-1), mat.innerStride() * (mat.innerSize()-1))); }
189
- py::EigenDMap<const Eigen::Matrix2d> cornersConst() const { return py::EigenDMap<const Eigen::Matrix2d>(mat.data(),
190
- py::EigenDStride(mat.outerStride() * (mat.outerSize()-1), mat.innerStride() * (mat.innerSize()-1))); }
191
- };
192
- using rvp = py::return_value_policy;
193
- py::class_<ReturnTester>(m, "ReturnTester")
194
- .def(py::init<>())
195
- .def_static("create", &ReturnTester::create)
196
- .def_static("create_const", &ReturnTester::createConst)
197
- .def("get", &ReturnTester::get, rvp::reference_internal)
198
- .def("get_ptr", &ReturnTester::getPtr, rvp::reference_internal)
199
- .def("view", &ReturnTester::view, rvp::reference_internal)
200
- .def("view_ptr", &ReturnTester::view, rvp::reference_internal)
201
- .def("copy_get", &ReturnTester::get) // Default rvp: copy
202
- .def("copy_view", &ReturnTester::view) // "
203
- .def("ref", &ReturnTester::ref) // Default for Ref is to reference
204
- .def("ref_const", &ReturnTester::refConst) // Likewise, but const
205
- .def("ref_safe", &ReturnTester::ref, rvp::reference_internal)
206
- .def("ref_const_safe", &ReturnTester::refConst, rvp::reference_internal)
207
- .def("copy_ref", &ReturnTester::ref, rvp::copy)
208
- .def("copy_ref_const", &ReturnTester::refConst, rvp::copy)
209
- .def("block", &ReturnTester::block)
210
- .def("block_safe", &ReturnTester::block, rvp::reference_internal)
211
- .def("block_const", &ReturnTester::blockConst, rvp::reference_internal)
212
- .def("copy_block", &ReturnTester::block, rvp::copy)
213
- .def("corners", &ReturnTester::corners, rvp::reference_internal)
214
- .def("corners_const", &ReturnTester::cornersConst, rvp::reference_internal)
215
- ;
216
-
217
- // test_special_matrix_objects
218
- // Returns a DiagonalMatrix with diagonal (1,2,3,...)
219
- m.def("incr_diag", [](int k) {
220
- Eigen::DiagonalMatrix<int, Eigen::Dynamic> m(k);
221
- for (int i = 0; i < k; i++) m.diagonal()[i] = i+1;
222
- return m;
223
- });
224
-
225
- // Returns a SelfAdjointView referencing the lower triangle of m
226
- m.def("symmetric_lower", [](const Eigen::MatrixXi &m) {
227
- return m.selfadjointView<Eigen::Lower>();
228
- });
229
- // Returns a SelfAdjointView referencing the upper triangle of m
230
- m.def("symmetric_upper", [](const Eigen::MatrixXi &m) {
231
- return m.selfadjointView<Eigen::Upper>();
232
- });
233
-
234
- // Test matrix for various functions below.
235
- Eigen::MatrixXf mat(5, 6);
236
- mat << 0, 3, 0, 0, 0, 11,
237
- 22, 0, 0, 0, 17, 11,
238
- 7, 5, 0, 1, 0, 11,
239
- 0, 0, 0, 0, 0, 11,
240
- 0, 0, 14, 0, 8, 11;
241
-
242
- // test_fixed, and various other tests
243
- m.def("fixed_r", [mat]() -> FixedMatrixR { return FixedMatrixR(mat); });
244
- m.def("fixed_r_const", [mat]() -> const FixedMatrixR { return FixedMatrixR(mat); });
245
- m.def("fixed_c", [mat]() -> FixedMatrixC { return FixedMatrixC(mat); });
246
- m.def("fixed_copy_r", [](const FixedMatrixR &m) -> FixedMatrixR { return m; });
247
- m.def("fixed_copy_c", [](const FixedMatrixC &m) -> FixedMatrixC { return m; });
248
- // test_mutator_descriptors
249
- m.def("fixed_mutator_r", [](Eigen::Ref<FixedMatrixR>) {});
250
- m.def("fixed_mutator_c", [](Eigen::Ref<FixedMatrixC>) {});
251
- m.def("fixed_mutator_a", [](py::EigenDRef<FixedMatrixC>) {});
252
- // test_dense
253
- m.def("dense_r", [mat]() -> DenseMatrixR { return DenseMatrixR(mat); });
254
- m.def("dense_c", [mat]() -> DenseMatrixC { return DenseMatrixC(mat); });
255
- m.def("dense_copy_r", [](const DenseMatrixR &m) -> DenseMatrixR { return m; });
256
- m.def("dense_copy_c", [](const DenseMatrixC &m) -> DenseMatrixC { return m; });
257
- // test_sparse, test_sparse_signature
258
- m.def("sparse_r", [mat]() -> SparseMatrixR { return Eigen::SparseView<Eigen::MatrixXf>(mat); });
259
- m.def("sparse_c", [mat]() -> SparseMatrixC { return Eigen::SparseView<Eigen::MatrixXf>(mat); });
260
- m.def("sparse_copy_r", [](const SparseMatrixR &m) -> SparseMatrixR { return m; });
261
- m.def("sparse_copy_c", [](const SparseMatrixC &m) -> SparseMatrixC { return m; });
262
- // test_partially_fixed
263
- m.def("partial_copy_four_rm_r", [](const FourRowMatrixR &m) -> FourRowMatrixR { return m; });
264
- m.def("partial_copy_four_rm_c", [](const FourColMatrixR &m) -> FourColMatrixR { return m; });
265
- m.def("partial_copy_four_cm_r", [](const FourRowMatrixC &m) -> FourRowMatrixC { return m; });
266
- m.def("partial_copy_four_cm_c", [](const FourColMatrixC &m) -> FourColMatrixC { return m; });
267
-
268
- // test_cpp_casting
269
- // Test that we can cast a numpy object to an Eigen::MatrixXd explicitly
270
- m.def("cpp_copy", [](py::handle m) { return m.cast<Eigen::MatrixXd>()(1, 0); });
271
- m.def("cpp_ref_c", [](py::handle m) { return m.cast<Eigen::Ref<Eigen::MatrixXd>>()(1, 0); });
272
- m.def("cpp_ref_r", [](py::handle m) { return m.cast<Eigen::Ref<MatrixXdR>>()(1, 0); });
273
- m.def("cpp_ref_any", [](py::handle m) { return m.cast<py::EigenDRef<Eigen::MatrixXd>>()(1, 0); });
274
-
275
-
276
- // test_nocopy_wrapper
277
- // Test that we can prevent copying into an argument that would normally copy: First a version
278
- // that would allow copying (if types or strides don't match) for comparison:
279
- m.def("get_elem", &get_elem);
280
- // Now an alternative that tells pybind11 to fail rather than copy:
281
- m.def("get_elem_nocopy", [](Eigen::Ref<const Eigen::MatrixXd> m) -> double { return get_elem(m); },
282
- py::arg().noconvert());
283
- // Also test a row-major-only no-copy const ref:
284
- m.def("get_elem_rm_nocopy", [](Eigen::Ref<const Eigen::Matrix<long, -1, -1, Eigen::RowMajor>> &m) -> long { return m(2, 1); },
285
- py::arg().noconvert());
286
-
287
- // test_issue738
288
- // Issue #738: 1xN or Nx1 2D matrices were neither accepted nor properly copied with an
289
- // incompatible stride value on the length-1 dimension--but that should be allowed (without
290
- // requiring a copy!) because the stride value can be safely ignored on a size-1 dimension.
291
- m.def("iss738_f1", &adjust_matrix<const Eigen::Ref<const Eigen::MatrixXd> &>, py::arg().noconvert());
292
- m.def("iss738_f2", &adjust_matrix<const Eigen::Ref<const Eigen::Matrix<double, -1, -1, Eigen::RowMajor>> &>, py::arg().noconvert());
293
-
294
- // test_issue1105
295
- // Issue #1105: when converting from a numpy two-dimensional (Nx1) or (1xN) value into a dense
296
- // eigen Vector or RowVector, the argument would fail to load because the numpy copy would fail:
297
- // numpy won't broadcast a Nx1 into a 1-dimensional vector.
298
- m.def("iss1105_col", [](Eigen::VectorXd) { return true; });
299
- m.def("iss1105_row", [](Eigen::RowVectorXd) { return true; });
300
-
301
- // test_named_arguments
302
- // Make sure named arguments are working properly:
303
- m.def("matrix_multiply", [](const py::EigenDRef<const Eigen::MatrixXd> A, const py::EigenDRef<const Eigen::MatrixXd> B)
304
- -> Eigen::MatrixXd {
305
- if (A.cols() != B.rows()) throw std::domain_error("Nonconformable matrices!");
306
- return A * B;
307
- }, py::arg("A"), py::arg("B"));
308
-
309
- // test_custom_operator_new
310
- py::class_<CustomOperatorNew>(m, "CustomOperatorNew")
311
- .def(py::init<>())
312
- .def_readonly("a", &CustomOperatorNew::a)
313
- .def_readonly("b", &CustomOperatorNew::b);
314
-
315
- // test_eigen_ref_life_support
316
- // In case of a failure (the caster's temp array does not live long enough), creating
317
- // a new array (np.ones(10)) increases the chances that the temp array will be garbage
318
- // collected and/or that its memory will be overwritten with different values.
319
- m.def("get_elem_direct", [](Eigen::Ref<const Eigen::VectorXd> v) {
320
- py::module::import("numpy").attr("ones")(10);
321
- return v(5);
322
- });
323
- m.def("get_elem_indirect", [](std::vector<Eigen::Ref<const Eigen::VectorXd>> v) {
324
- py::module::import("numpy").attr("ones")(10);
325
- return v[0](5);
326
- });
327
- }
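For readers unfamiliar with the stride trick used by `even_rows`/`even_cols` above, the same mapping can be reproduced in plain Eigen. The following is a minimal, hypothetical sketch (standalone C++, assuming Eigen 3 is available; it is not part of the deleted test file) showing how a dynamic-stride `Map` exposes every other row of a matrix without copying:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        Eigen::MatrixXd m(4, 3);
        m << 0, 1,  2,
             3, 4,  5,
             6, 7,  8,
             9, 10, 11;

        // Same data pointer, doubled inner stride: in a column-major matrix the
        // inner stride walks down a column, so doubling it skips every other row.
        using DynStride = Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic>;
        Eigen::Map<Eigen::MatrixXd, 0, DynStride> even_rows(
            m.data(), (m.rows() + 1) / 2, m.cols(),
            DynStride(m.outerStride(), 2 * m.innerStride()));

        std::cout << even_rows << "\n";  // prints rows 0 and 2 of m
        return 0;
    }

This mirrors what the `even_rows` binding above returns to Python as a writable, non-copying view.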
 
spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h DELETED
@@ -1,41 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- namespace thrust
20
- {
21
-
22
- // define Boost's traversal tags
23
- struct no_traversal_tag {};
24
-
25
- struct incrementable_traversal_tag
26
- : no_traversal_tag {};
27
-
28
- struct single_pass_traversal_tag
29
- : incrementable_traversal_tag {};
30
-
31
- struct forward_traversal_tag
32
- : single_pass_traversal_tag {};
33
-
34
- struct bidirectional_traversal_tag
35
- : forward_traversal_tag {};
36
-
37
- struct random_access_traversal_tag
38
- : bidirectional_traversal_tag {};
39
-
40
- } // end thrust
41
-
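Each tag above derives from the next weaker one, which is what lets an algorithm overloaded on a weak traversal tag also accept every stronger tag. A minimal, hypothetical sketch of that dispatch pattern (standalone C++ reproducing the same hierarchy rather than including Thrust's headers):

    #include <iostream>

    // Same shape as the hierarchy defined in the deleted header.
    struct no_traversal_tag {};
    struct incrementable_traversal_tag : no_traversal_tag {};
    struct single_pass_traversal_tag : incrementable_traversal_tag {};
    struct forward_traversal_tag : single_pass_traversal_tag {};
    struct bidirectional_traversal_tag : forward_traversal_tag {};
    struct random_access_traversal_tag : bidirectional_traversal_tag {};

    // Generic fallback: anything with at least forward traversal can step n times.
    void advance_impl(long n, forward_traversal_tag) {
        std::cout << "stepping " << n << " times\n";
    }

    // Fast path: random-access traversal can jump directly.
    void advance_impl(long n, random_access_traversal_tag) {
        std::cout << "jumping by " << n << "\n";
    }

    int main() {
        advance_impl(5, bidirectional_traversal_tag{});   // resolves to the forward overload
        advance_impl(5, random_access_traversal_tag{});   // exact match wins: fast path
        return 0;
    }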
 
spaces/CVPR/LIVE/thrust/thrust/iterator/detail/reverse_iterator_base.h DELETED
@@ -1,42 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/iterator/iterator_adaptor.h>
20
- #include <thrust/iterator/iterator_traits.h>
21
-
22
- namespace thrust
23
- {
24
-
25
- template <typename> class reverse_iterator;
26
-
27
- namespace detail
28
- {
29
-
30
- template<typename BidirectionalIterator>
31
- struct reverse_iterator_base
32
- {
33
- typedef thrust::iterator_adaptor<
34
- thrust::reverse_iterator<BidirectionalIterator>,
35
- BidirectionalIterator
36
- > type;
37
- }; // end reverse_iterator_base
38
-
39
- } // end detail
40
-
41
- } // end thrust
42
-
 
spaces/CVPR/LIVE/thrust/thrust/system/cuda/pointer.h DELETED
@@ -1,321 +0,0 @@
1
- /*
2
- * Copyright 2008-2018 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in ccudaliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
- #include <thrust/system/cuda/detail/execution_policy.h>
21
- #include <thrust/detail/type_traits.h>
22
- #include <thrust/detail/pointer.h>
23
- #include <thrust/detail/reference.h>
24
-
25
- namespace thrust
26
- {
27
- namespace cuda_cub
28
- {
29
-
30
- template <typename>
31
- class pointer;
32
-
33
- } // end cuda_cub
34
- } // end thrust
35
-
36
-
37
- // specialize thrust::iterator_traits to avoid problems with the name of
38
- // pointer's constructor shadowing its nested pointer type
39
- // do this before pointer is defined so the specialization is correctly
40
- // used inside the definition
41
- namespace thrust
42
- {
43
-
44
- template <typename Element>
45
- struct iterator_traits<thrust::cuda_cub::pointer<Element> >
46
- {
47
- private:
48
- typedef thrust::cuda_cub::pointer<Element> ptr;
49
-
50
- public:
51
- typedef typename ptr::iterator_category iterator_category;
52
- typedef typename ptr::value_type value_type;
53
- typedef typename ptr::difference_type difference_type;
54
- typedef ptr pointer;
55
- typedef typename ptr::reference reference;
56
- }; // end iterator_traits
57
-
58
- namespace cuda_cub {
59
-
60
- // forward declaration of reference for pointer
61
- template <typename Element>
62
- class reference;
63
-
64
- // XXX nvcc + msvc have trouble instantiating reference below
65
- // this is a workaround
66
- template <typename Element>
67
- struct reference_msvc_workaround
68
- {
69
- typedef thrust::cuda_cub::reference<Element> type;
70
- }; // end reference_msvc_workaround
71
-
72
-
73
- /*! \p pointer stores a pointer to an object allocated in memory available to the cuda system.
74
- * This type provides type safety when dispatching standard algorithms on ranges resident
75
- * in cuda memory.
76
- *
77
- * \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic.
78
- *
79
- * \p pointer can be created with the function \p cuda::malloc, or by explicitly calling its constructor
80
- * with a raw pointer.
81
- *
82
- * The raw pointer encapsulated by a \p pointer may be obtained by eiter its <tt>get</tt> member function
83
- * or the \p raw_pointer_cast function.
84
- *
85
- * \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory
86
- * pointed to by \p pointer.
87
- *
88
- * \tparam T specifies the type of the pointee.
89
- *
90
- * \see cuda::malloc
91
- * \see cuda::free
92
- * \see raw_pointer_cast
93
- */
94
- template <typename T>
95
- class pointer
96
- : public thrust::pointer<
97
- T,
98
- thrust::cuda_cub::tag,
99
- thrust::cuda_cub::reference<T>,
100
- thrust::cuda_cub::pointer<T> >
101
- {
102
-
103
- private:
104
- typedef thrust::pointer<
105
- T,
106
- thrust::cuda_cub::tag,
107
- typename reference_msvc_workaround<T>::type,
108
- thrust::cuda_cub::pointer<T> >
109
- super_t;
110
-
111
- public:
112
- /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0.
113
- */
114
- __host__ __device__
115
- pointer() : super_t() {}
116
-
117
- #if THRUST_CPP_DIALECT >= 2011
118
- // NOTE: This is needed so that Thrust smart pointers can be used in
119
- // `std::unique_ptr`.
120
- __host__ __device__
121
- pointer(decltype(nullptr)) : super_t(nullptr) {}
122
- #endif
123
-
124
- /*! This constructor allows construction of a <tt>pointer<const T></tt> from a <tt>T*</tt>.
125
- *
126
- * \param ptr A raw pointer to copy from, presumed to point to a location in memory
127
- * accessible by the \p cuda system.
128
- * \tparam OtherT \p OtherT shall be convertible to \p T.
129
- */
130
- template <typename OtherT>
131
- __host__ __device__ explicit pointer(OtherT *ptr) : super_t(ptr)
132
- {
133
- }
134
-
135
- /*! This constructor allows construction from another pointer-like object with related type.
136
- *
137
- * \param other The \p OtherPointer to copy.
138
- * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
139
- * to \p thrust::system::cuda::tag and its element type shall be convertible to \p T.
140
- */
141
- template <typename OtherPointer>
142
- __host__ __device__
143
- pointer(const OtherPointer &other,
144
- typename thrust::detail::enable_if_pointer_is_convertible<
145
- OtherPointer,
146
- pointer>::type * = 0) : super_t(other)
147
- {
148
- }
149
-
150
- /*! This constructor allows construction from another pointer-like object with \p void type.
151
- *
152
- * \param other The \p OtherPointer to copy.
153
- * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
154
- * to \p thrust::system::cuda::tag and its element type shall be \p void.
155
- */
156
- template <typename OtherPointer>
157
- __host__ __device__
158
- explicit
159
- pointer(const OtherPointer &other,
160
- typename thrust::detail::enable_if_void_pointer_is_system_convertible<
161
- OtherPointer,
162
- pointer>::type * = 0) : super_t(other)
163
- {
164
- }
165
-
166
- /*! Assignment operator allows assigning from another pointer-like object with related type.
167
- *
168
- * \param other The other pointer-like object to assign from.
169
- * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible
170
- * to \p thrust::system::cuda::tag and its element type shall be convertible to \p T.
171
- */
172
- template <typename OtherPointer>
173
- __host__ __device__
174
- typename thrust::detail::enable_if_pointer_is_convertible<
175
- OtherPointer,
176
- pointer,
177
- pointer &>::type
178
- operator=(const OtherPointer &other)
179
- {
180
- return super_t::operator=(other);
181
- }
182
-
183
- #if THRUST_CPP_DIALECT >= 2011
184
- // NOTE: This is needed so that Thrust smart pointers can be used in
185
- // `std::unique_ptr`.
186
- __host__ __device__
187
- pointer& operator=(decltype(nullptr))
188
- {
189
- super_t::operator=(nullptr);
190
- return *this;
191
- }
192
- #endif
193
- }; // struct pointer
194
-
195
- /*! \p reference is a wrapped reference to an object stored in memory available to the \p cuda system.
196
- * \p reference is the type of the result of dereferencing a \p cuda::pointer.
197
- *
198
- * \tparam T Specifies the type of the referenced object.
199
- */
200
- template <typename T>
201
- class reference
202
- : public thrust::reference<
203
- T,
204
- thrust::cuda_cub::pointer<T>,
205
- thrust::cuda_cub::reference<T> >
206
- {
207
-
208
- private:
209
- typedef thrust::reference<
210
- T,
211
- thrust::cuda_cub::pointer<T>,
212
- thrust::cuda_cub::reference<T> >
213
- super_t;
214
-
215
- public:
216
- /*! \cond
217
- */
218
-
219
- typedef typename super_t::value_type value_type;
220
- typedef typename super_t::pointer pointer;
221
-
222
- /*! \endcond
223
- */
224
-
225
- /*! This constructor initializes this \p reference to refer to an object
226
- * pointed to by the given \p pointer. After this \p reference is constructed,
227
- * it shall refer to the object pointed to by \p ptr.
228
- *
229
- * \param ptr A \p pointer to copy from.
230
- */
231
- __host__ __device__ explicit reference(const pointer &ptr)
232
- : super_t(ptr)
233
- {
234
- }
235
-
236
- /*! This constructor accepts a const reference to another \p reference of related type.
237
- * After this \p reference is constructed, it shall refer to the same object as \p other.
238
- *
239
- * \param other A \p reference to copy from.
240
- * \tparam OtherT The element type of the other \p reference.
241
- *
242
- * \note This constructor is templated primarily to allow initialization of <tt>reference<const T></tt>
243
- * from <tt>reference<T></tt>.
244
- */
245
- template <typename OtherT>
246
- __host__ __device__
247
- reference(const reference<OtherT> &other,
248
- typename thrust::detail::enable_if_convertible<
249
- typename reference<OtherT>::pointer,
250
- pointer>::type * = 0)
251
- : super_t(other)
252
- {
253
- }
254
-
255
- /*! Copy assignment operator copy assigns from another \p reference of related type.
256
- *
257
- * \param other The other \p reference to assign from.
258
- * \return <tt>*this</tt>
259
- * \tparam OtherT The element type of the other \p reference.
260
- */
261
- template <typename OtherT>
262
- __host__ __device__
263
- reference &
264
- operator=(const reference<OtherT> &other);
265
-
266
- /*! Assignment operator assigns from a \p value_type.
267
- *
268
- * \param x The \p value_type to assign from.
269
- * \return <tt>*this</tt>
270
- */
271
- __host__ __device__
272
- reference &
273
- operator=(const value_type &x);
274
- }; // struct reference
275
-
276
- /*! Exchanges the values of two objects referred to by \p reference.
277
- * \p x The first \p reference of interest.
278
- * \p y The second \p reference of interest.
279
- */
280
- template <typename T>
281
- __host__ __device__ void swap(reference<T> x, reference<T> y);
282
-
283
- } // end cuda_cub
284
-
285
- namespace system {
286
-
287
-
288
- /*! \addtogroup system_backends Systems
289
- * \ingroup system
290
- * \{
291
- */
292
-
293
- /*! \namespace thrust::system::cuda
294
- * \brief \p thrust::system::cuda is the namespace containing functionality for allocating, manipulating,
295
- * and deallocating memory available to Thrust's CUDA backend system.
296
- * The identifiers are provided in a separate namespace underneath <tt>thrust::system</tt>
297
- * for import convenience but are also aliased in the top-level <tt>thrust::cuda</tt>
298
- * namespace for easy access.
299
- *
300
- */
301
-
302
- namespace cuda {
303
- using thrust::cuda_cub::pointer;
304
- using thrust::cuda_cub::reference;
305
- } // end cuda
306
-
307
- /*! \}
308
- */
309
-
310
- } // end system
311
-
312
- /*! \namespace thrust::cuda
313
- * \brief \p thrust::cuda is a top-level alias for \p thrust::system::cuda. */
314
- namespace cuda {
315
- using thrust::cuda_cub::pointer;
316
- using thrust::cuda_cub::reference;
317
- } // end cuda
318
-
319
- } // end thrust
320
-
321
- #include <thrust/system/cuda/detail/pointer.inl>
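The deleted header's documentation describes the two intended ways to obtain a `thrust::cuda::pointer`: from `cuda::malloc`, or by wrapping a raw device pointer. A minimal, hypothetical usage sketch (standalone C++/CUDA; it assumes `<thrust/system/cuda/memory.h>` exposes this pointer type, and the caller manages the allocation, since the class is documented as not being a smart pointer):

    #include <thrust/system/cuda/memory.h>  // thrust::cuda::pointer
    #include <thrust/memory.h>              // thrust::raw_pointer_cast
    #include <thrust/fill.h>
    #include <cuda_runtime.h>

    int main() {
        int *raw = nullptr;
        cudaMalloc(&raw, 10 * sizeof(int));

        // Wrap the raw device pointer; the tag tells Thrust algorithms to
        // dispatch to the CUDA backend without an explicit execution policy.
        thrust::cuda::pointer<int> p(raw);
        thrust::fill(p, p + 10, 42);

        // Recover the raw pointer for hand-written kernels or CUDA APIs.
        int *again = thrust::raw_pointer_cast(p);  // equivalent to p.get()
        (void)again;

        cudaFree(raw);  // the caller still owns and frees the allocation
        return 0;
    }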
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/generate.h DELETED
@@ -1,22 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
-
21
- // this system has no special generate functions
22
-
 
spaces/CVPR/WALT/mmdet/datasets/pipelines/transforms.py DELETED
@@ -1,1812 +0,0 @@
1
- import copy
2
- import inspect
3
-
4
- import mmcv
5
- import numpy as np
6
- from numpy import random
7
-
8
- from mmdet.core import PolygonMasks
9
- from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps
10
- from ..builder import PIPELINES
11
-
12
- try:
13
- from imagecorruptions import corrupt
14
- except ImportError:
15
- corrupt = None
16
-
17
- try:
18
- import albumentations
19
- from albumentations import Compose
20
- except ImportError:
21
- albumentations = None
22
- Compose = None
23
-
24
-
25
- @PIPELINES.register_module()
26
- class Resize(object):
27
- """Resize images & bbox & mask.
28
-
29
- This transform resizes the input image to some scale. Bboxes and masks are
30
- then resized with the same scale factor. If the input dict contains the key
31
- "scale", then the scale in the input dict is used, otherwise the specified
32
- scale in the init method is used. If the input dict contains the key
33
- "scale_factor" (if MultiScaleFlipAug does not give img_scale but
34
- scale_factor), the actual scale will be computed by image shape and
35
- scale_factor.
36
-
37
- `img_scale` can either be a tuple (single-scale) or a list of tuple
38
- (multi-scale). There are 3 multiscale modes:
39
-
40
- - ``ratio_range is not None``: randomly sample a ratio from the ratio \
41
- range and multiply it with the image scale.
42
- - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \
43
- sample a scale from the multiscale range.
44
- - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \
45
- sample a scale from multiple scales.
46
-
47
- Args:
48
- img_scale (tuple or list[tuple]): Images scales for resizing.
49
- multiscale_mode (str): Either "range" or "value".
50
- ratio_range (tuple[float]): (min_ratio, max_ratio)
51
- keep_ratio (bool): Whether to keep the aspect ratio when resizing the
52
- image.
53
- bbox_clip_border (bool, optional): Whether to clip the objects outside
54
- the border of the image. Defaults to True.
55
- backend (str): Image resize backend, choices are 'cv2' and 'pillow'.
56
- These two backends generate slightly different results. Defaults
57
- to 'cv2'.
58
- override (bool, optional): Whether to override `scale` and
59
- `scale_factor` so as to call resize twice. Default False. If True,
60
- after the first resizing, the existed `scale` and `scale_factor`
61
- will be ignored so the second resizing can be allowed.
62
- This option is a work-around for multiple times of resize in DETR.
63
- Defaults to False.
64
- """
65
-
66
- def __init__(self,
67
- img_scale=None,
68
- multiscale_mode='range',
69
- ratio_range=None,
70
- keep_ratio=True,
71
- bbox_clip_border=True,
72
- backend='cv2',
73
- override=False):
74
- if img_scale is None:
75
- self.img_scale = None
76
- else:
77
- if isinstance(img_scale, list):
78
- self.img_scale = img_scale
79
- else:
80
- self.img_scale = [img_scale]
81
- assert mmcv.is_list_of(self.img_scale, tuple)
82
-
83
- if ratio_range is not None:
84
- # mode 1: given a scale and a range of image ratio
85
- assert len(self.img_scale) == 1
86
- else:
87
- # mode 2: given multiple scales or a range of scales
88
- assert multiscale_mode in ['value', 'range']
89
-
90
- self.backend = backend
91
- self.multiscale_mode = multiscale_mode
92
- self.ratio_range = ratio_range
93
- self.keep_ratio = keep_ratio
94
- # TODO: refactor the override option in Resize
95
- self.override = override
96
- self.bbox_clip_border = bbox_clip_border
97
-
98
- @staticmethod
99
- def random_select(img_scales):
100
- """Randomly select an img_scale from given candidates.
101
-
102
- Args:
103
- img_scales (list[tuple]): Images scales for selection.
104
-
105
- Returns:
106
- (tuple, int): Returns a tuple ``(img_scale, scale_idx)``, \
107
- where ``img_scale`` is the selected image scale and \
108
- ``scale_idx`` is the selected index in the given candidates.
109
- """
110
-
111
- assert mmcv.is_list_of(img_scales, tuple)
112
- scale_idx = np.random.randint(len(img_scales))
113
- img_scale = img_scales[scale_idx]
114
- return img_scale, scale_idx
115
-
116
- @staticmethod
117
- def random_sample(img_scales):
118
- """Randomly sample an img_scale when ``multiscale_mode=='range'``.
119
-
120
- Args:
121
- img_scales (list[tuple]): Images scale range for sampling.
122
- There must be two tuples in img_scales, which specify the lower
123
- and upper bound of image scales.
124
-
125
- Returns:
126
- (tuple, None): Returns a tuple ``(img_scale, None)``, where \
127
- ``img_scale`` is sampled scale and None is just a placeholder \
128
- to be consistent with :func:`random_select`.
129
- """
130
-
131
- assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2
132
- img_scale_long = [max(s) for s in img_scales]
133
- img_scale_short = [min(s) for s in img_scales]
134
- long_edge = np.random.randint(
135
- min(img_scale_long),
136
- max(img_scale_long) + 1)
137
- short_edge = np.random.randint(
138
- min(img_scale_short),
139
- max(img_scale_short) + 1)
140
- img_scale = (long_edge, short_edge)
141
- return img_scale, None
142
-
143
- @staticmethod
144
- def random_sample_ratio(img_scale, ratio_range):
145
- """Randomly sample an img_scale when ``ratio_range`` is specified.
146
-
147
- A ratio will be randomly sampled from the range specified by
148
- ``ratio_range``. Then it would be multiplied with ``img_scale`` to
149
- generate sampled scale.
150
-
151
- Args:
152
- img_scale (tuple): Images scale base to multiply with ratio.
153
- ratio_range (tuple[float]): The minimum and maximum ratio to scale
154
- the ``img_scale``.
155
-
156
- Returns:
157
- (tuple, None): Returns a tuple ``(scale, None)``, where \
158
- ``scale`` is sampled ratio multiplied with ``img_scale`` and \
159
- None is just a placeholder to be consistent with \
160
- :func:`random_select`.
161
- """
162
-
163
- assert isinstance(img_scale, tuple) and len(img_scale) == 2
164
- min_ratio, max_ratio = ratio_range
165
- assert min_ratio <= max_ratio
166
- ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio
167
- scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio)
168
- return scale, None
169
-
170
- def _random_scale(self, results):
171
- """Randomly sample an img_scale according to ``ratio_range`` and
172
- ``multiscale_mode``.
173
-
174
- If ``ratio_range`` is specified, a ratio will be sampled and be
175
- multiplied with ``img_scale``.
176
- If multiple scales are specified by ``img_scale``, a scale will be
177
- sampled according to ``multiscale_mode``.
178
- Otherwise, single scale will be used.
179
-
180
- Args:
181
- results (dict): Result dict from :obj:`dataset`.
182
-
183
- Returns:
184
- dict: Two new keys 'scale` and 'scale_idx` are added into \
185
- ``results``, which would be used by subsequent pipelines.
186
- """
187
-
188
- if self.ratio_range is not None:
189
- scale, scale_idx = self.random_sample_ratio(
190
- self.img_scale[0], self.ratio_range)
191
- elif len(self.img_scale) == 1:
192
- scale, scale_idx = self.img_scale[0], 0
193
- elif self.multiscale_mode == 'range':
194
- scale, scale_idx = self.random_sample(self.img_scale)
195
- elif self.multiscale_mode == 'value':
196
- scale, scale_idx = self.random_select(self.img_scale)
197
- else:
198
- raise NotImplementedError
199
-
200
- results['scale'] = scale
201
- results['scale_idx'] = scale_idx
202
-
203
- def _resize_img(self, results):
204
- """Resize images with ``results['scale']``."""
205
- for key in results.get('img_fields', ['img']):
206
- if self.keep_ratio:
207
- img, scale_factor = mmcv.imrescale(
208
- results[key],
209
- results['scale'],
210
- return_scale=True,
211
- backend=self.backend)
212
- # the w_scale and h_scale has minor difference
213
- # a real fix should be done in the mmcv.imrescale in the future
214
- new_h, new_w = img.shape[:2]
215
- h, w = results[key].shape[:2]
216
- w_scale = new_w / w
217
- h_scale = new_h / h
218
- else:
219
- img, w_scale, h_scale = mmcv.imresize(
220
- results[key],
221
- results['scale'],
222
- return_scale=True,
223
- backend=self.backend)
224
- results[key] = img
225
-
226
- scale_factor = np.array([w_scale, h_scale, w_scale, h_scale],
227
- dtype=np.float32)
228
- results['img_shape'] = img.shape
229
- # in case that there is no padding
230
- results['pad_shape'] = img.shape
231
- results['scale_factor'] = scale_factor
232
- results['keep_ratio'] = self.keep_ratio
233
-
234
- def _resize_bboxes(self, results):
235
- """Resize bounding boxes with ``results['scale_factor']``."""
236
- for key in results.get('bbox_fields', []):
237
- bboxes = results[key] * results['scale_factor']
238
- if self.bbox_clip_border:
239
- img_shape = results['img_shape']
240
- bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
241
- bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
242
- results[key] = bboxes
243
-
244
- def _resize_masks(self, results):
245
- """Resize masks with ``results['scale']``"""
246
- for key in results.get('mask_fields', []):
247
- if results[key] is None:
248
- continue
249
- if self.keep_ratio:
250
- results[key] = results[key].rescale(results['scale'])
251
- else:
252
- results[key] = results[key].resize(results['img_shape'][:2])
253
-
254
- def _resize_seg(self, results):
255
- """Resize semantic segmentation map with ``results['scale']``."""
256
- for key in results.get('seg_fields', []):
257
- if self.keep_ratio:
258
- gt_seg = mmcv.imrescale(
259
- results[key],
260
- results['scale'],
261
- interpolation='nearest',
262
- backend=self.backend)
263
- else:
264
- gt_seg = mmcv.imresize(
265
- results[key],
266
- results['scale'],
267
- interpolation='nearest',
268
- backend=self.backend)
269
- results[key] = gt_seg
270
-
271
- def __call__(self, results):
272
- """Call function to resize images, bounding boxes, masks, semantic
273
- segmentation map.
274
-
275
- Args:
276
- results (dict): Result dict from loading pipeline.
277
-
278
- Returns:
279
- dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \
280
- 'keep_ratio' keys are added into result dict.
281
- """
282
-
283
- if 'scale' not in results:
284
- if 'scale_factor' in results:
285
- img_shape = results['img'].shape[:2]
286
- scale_factor = results['scale_factor']
287
- assert isinstance(scale_factor, float)
288
- results['scale'] = tuple(
289
- [int(x * scale_factor) for x in img_shape][::-1])
290
- else:
291
- self._random_scale(results)
292
- else:
293
- if not self.override:
294
- assert 'scale_factor' not in results, (
295
- 'scale and scale_factor cannot be both set.')
296
- else:
297
- results.pop('scale')
298
- if 'scale_factor' in results:
299
- results.pop('scale_factor')
300
- self._random_scale(results)
301
-
302
- self._resize_img(results)
303
- self._resize_bboxes(results)
304
- self._resize_masks(results)
305
- self._resize_seg(results)
306
- return results
307
-
308
- def __repr__(self):
309
- repr_str = self.__class__.__name__
310
- repr_str += f'(img_scale={self.img_scale}, '
311
- repr_str += f'multiscale_mode={self.multiscale_mode}, '
312
- repr_str += f'ratio_range={self.ratio_range}, '
313
- repr_str += f'keep_ratio={self.keep_ratio}, '
314
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
315
- return repr_str
316
-
317
-
318
- @PIPELINES.register_module()
319
- class RandomFlip(object):
320
- """Flip the image & bbox & mask.
321
-
322
- If the input dict contains the key "flip", then the flag will be used,
323
- otherwise it will be randomly decided by a ratio specified in the init
324
- method.
325
-
326
- When random flip is enabled, ``flip_ratio``/``direction`` can either be a
327
- float/string or tuple of float/string. There are 3 flip modes:
328
-
329
- - ``flip_ratio`` is float, ``direction`` is string: the image will be
330
- ``direction``ly flipped with probability of ``flip_ratio`` .
331
- E.g., ``flip_ratio=0.5``, ``direction='horizontal'``,
332
- then image will be horizontally flipped with probability of 0.5.
333
- - ``flip_ratio`` is float, ``direction`` is list of string: the image will
334
- be ``direction[i]``ly flipped with probability of
335
- ``flip_ratio/len(direction)``.
336
- E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``,
337
- then image will be horizontally flipped with probability of 0.25,
338
- vertically with probability of 0.25.
339
- - ``flip_ratio`` is list of float, ``direction`` is list of string:
340
- given ``len(flip_ratio) == len(direction)``, the image will
341
- be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``.
342
- E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal',
343
- 'vertical']``, then image will be horizontally flipped with probability
344
- of 0.3, vertically with probability of 0.5
345
-
346
- Args:
347
- flip_ratio (float | list[float], optional): The flipping probability.
348
- Default: None.
349
- direction(str | list[str], optional): The flipping direction. Options
350
- are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'.
351
- If input is a list, the length must equal ``flip_ratio``. Each
352
- element in ``flip_ratio`` indicates the flip probability of
353
- corresponding direction.
354
- """
355
-
356
- def __init__(self, flip_ratio=None, direction='horizontal'):
357
- if isinstance(flip_ratio, list):
358
- assert mmcv.is_list_of(flip_ratio, float)
359
- assert 0 <= sum(flip_ratio) <= 1
360
- elif isinstance(flip_ratio, float):
361
- assert 0 <= flip_ratio <= 1
362
- elif flip_ratio is None:
363
- pass
364
- else:
365
- raise ValueError('flip_ratios must be None, float, '
366
- 'or list of float')
367
- self.flip_ratio = flip_ratio
368
-
369
- valid_directions = ['horizontal', 'vertical', 'diagonal']
370
- if isinstance(direction, str):
371
- assert direction in valid_directions
372
- elif isinstance(direction, list):
373
- assert mmcv.is_list_of(direction, str)
374
- assert set(direction).issubset(set(valid_directions))
375
- else:
376
- raise ValueError('direction must be either str or list of str')
377
- self.direction = direction
378
-
379
- if isinstance(flip_ratio, list):
380
- assert len(self.flip_ratio) == len(self.direction)
381
-
382
- def bbox_flip(self, bboxes, img_shape, direction):
383
- """Flip bboxes horizontally.
384
-
385
- Args:
386
- bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k)
387
- img_shape (tuple[int]): Image shape (height, width)
388
- direction (str): Flip direction. Options are 'horizontal',
389
- 'vertical'.
390
-
391
- Returns:
392
- numpy.ndarray: Flipped bounding boxes.
393
- """
394
-
395
- assert bboxes.shape[-1] % 4 == 0
396
- flipped = bboxes.copy()
397
- if direction == 'horizontal':
398
- w = img_shape[1]
399
- flipped[..., 0::4] = w - bboxes[..., 2::4]
400
- flipped[..., 2::4] = w - bboxes[..., 0::4]
401
- elif direction == 'vertical':
402
- h = img_shape[0]
403
- flipped[..., 1::4] = h - bboxes[..., 3::4]
404
- flipped[..., 3::4] = h - bboxes[..., 1::4]
405
- elif direction == 'diagonal':
406
- w = img_shape[1]
407
- h = img_shape[0]
408
- flipped[..., 0::4] = w - bboxes[..., 2::4]
409
- flipped[..., 1::4] = h - bboxes[..., 3::4]
410
- flipped[..., 2::4] = w - bboxes[..., 0::4]
411
- flipped[..., 3::4] = h - bboxes[..., 1::4]
412
- else:
413
- raise ValueError(f"Invalid flipping direction '{direction}'")
414
- return flipped
415
-
416
- def __call__(self, results):
417
- """Call function to flip bounding boxes, masks, semantic segmentation
418
- maps.
419
-
420
- Args:
421
- results (dict): Result dict from loading pipeline.
422
-
423
- Returns:
424
- dict: Flipped results, 'flip', 'flip_direction' keys are added \
425
- into result dict.
426
- """
427
-
428
- if 'flip' not in results:
429
- if isinstance(self.direction, list):
430
- # None means non-flip
431
- direction_list = self.direction + [None]
432
- else:
433
- # None means non-flip
434
- direction_list = [self.direction, None]
435
-
436
- if isinstance(self.flip_ratio, list):
437
- non_flip_ratio = 1 - sum(self.flip_ratio)
438
- flip_ratio_list = self.flip_ratio + [non_flip_ratio]
439
- else:
440
- non_flip_ratio = 1 - self.flip_ratio
441
- # exclude non-flip
442
- single_ratio = self.flip_ratio / (len(direction_list) - 1)
443
- flip_ratio_list = [single_ratio] * (len(direction_list) -
444
- 1) + [non_flip_ratio]
445
-
446
- cur_dir = np.random.choice(direction_list, p=flip_ratio_list)
447
-
448
- results['flip'] = cur_dir is not None
449
- if 'flip_direction' not in results:
450
- results['flip_direction'] = cur_dir
451
- if results['flip']:
452
- # flip image
453
- for key in results.get('img_fields', ['img']):
454
- results[key] = mmcv.imflip(
455
- results[key], direction=results['flip_direction'])
456
- # flip bboxes
457
- for key in results.get('bbox_fields', []):
458
- results[key] = self.bbox_flip(results[key],
459
- results['img_shape'],
460
- results['flip_direction'])
461
- # flip masks
462
- for key in results.get('mask_fields', []):
463
- results[key] = results[key].flip(results['flip_direction'])
464
-
465
- # flip segs
466
- for key in results.get('seg_fields', []):
467
- results[key] = mmcv.imflip(
468
- results[key], direction=results['flip_direction'])
469
- return results
470
-
471
- def __repr__(self):
472
- return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})'
473
-
474
-
475
- @PIPELINES.register_module()
476
- class Pad(object):
477
- """Pad the image & mask.
478
-
479
- There are two padding modes: (1) pad to a fixed size and (2) pad to the
480
- minimum size that is divisible by some number.
481
- Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor",
482
-
483
- Args:
484
- size (tuple, optional): Fixed padding size.
485
- size_divisor (int, optional): The divisor of padded size.
486
- pad_val (float, optional): Padding value, 0 by default.
487
- """
488
-
489
- def __init__(self, size=None, size_divisor=None, pad_val=0):
490
- self.size = size
491
- self.size_divisor = size_divisor
492
- self.pad_val = pad_val
493
- # only one of size and size_divisor should be valid
494
- assert size is not None or size_divisor is not None
495
- assert size is None or size_divisor is None
496
-
497
- def _pad_img(self, results):
498
- """Pad images according to ``self.size``."""
499
- for key in results.get('img_fields', ['img']):
500
- if self.size is not None:
501
- padded_img = mmcv.impad(
502
- results[key], shape=self.size, pad_val=self.pad_val)
503
- elif self.size_divisor is not None:
504
- padded_img = mmcv.impad_to_multiple(
505
- results[key], self.size_divisor, pad_val=self.pad_val)
506
- results[key] = padded_img
507
- results['pad_shape'] = padded_img.shape
508
- results['pad_fixed_size'] = self.size
509
- results['pad_size_divisor'] = self.size_divisor
510
-
511
- def _pad_masks(self, results):
512
- """Pad masks according to ``results['pad_shape']``."""
513
- pad_shape = results['pad_shape'][:2]
514
- for key in results.get('mask_fields', []):
515
- results[key] = results[key].pad(pad_shape, pad_val=self.pad_val)
516
-
517
- def _pad_seg(self, results):
518
- """Pad semantic segmentation map according to
519
- ``results['pad_shape']``."""
520
- for key in results.get('seg_fields', []):
521
- results[key] = mmcv.impad(
522
- results[key], shape=results['pad_shape'][:2])
523
-
524
- def __call__(self, results):
525
- """Call function to pad images, masks, semantic segmentation maps.
526
-
527
- Args:
528
- results (dict): Result dict from loading pipeline.
529
-
530
- Returns:
531
- dict: Updated result dict.
532
- """
533
- self._pad_img(results)
534
- self._pad_masks(results)
535
- self._pad_seg(results)
536
- return results
537
-
538
- def __repr__(self):
539
- repr_str = self.__class__.__name__
540
- repr_str += f'(size={self.size}, '
541
- repr_str += f'size_divisor={self.size_divisor}, '
542
- repr_str += f'pad_val={self.pad_val})'
543
- return repr_str
544
-
545
-
546
- @PIPELINES.register_module()
547
- class Normalize(object):
548
- """Normalize the image.
549
-
550
- Added key is "img_norm_cfg".
551
-
552
- Args:
553
- mean (sequence): Mean values of 3 channels.
554
- std (sequence): Std values of 3 channels.
555
- to_rgb (bool): Whether to convert the image from BGR to RGB,
556
- default is true.
557
- """
558
-
559
- def __init__(self, mean, std, to_rgb=True):
560
- self.mean = np.array(mean, dtype=np.float32)
561
- self.std = np.array(std, dtype=np.float32)
562
- self.to_rgb = to_rgb
563
-
564
- def __call__(self, results):
565
- """Call function to normalize images.
566
-
567
- Args:
568
- results (dict): Result dict from loading pipeline.
569
-
570
- Returns:
571
- dict: Normalized results, 'img_norm_cfg' key is added into
572
- result dict.
573
- """
574
- for key in results.get('img_fields', ['img']):
575
- results[key] = mmcv.imnormalize(results[key], self.mean, self.std,
576
- self.to_rgb)
577
- results['img_norm_cfg'] = dict(
578
- mean=self.mean, std=self.std, to_rgb=self.to_rgb)
579
- return results
580
-
581
- def __repr__(self):
582
- repr_str = self.__class__.__name__
583
- repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})'
584
- return repr_str
585
-
586
-
587
- @PIPELINES.register_module()
588
- class RandomCrop(object):
589
- """Random crop the image & bboxes & masks.
590
-
591
- The absolute `crop_size` is sampled based on `crop_type` and `image_size`,
592
- then the cropped results are generated.
593
-
594
- Args:
595
- crop_size (tuple): The relative ratio or absolute pixels of
596
- height and width.
597
- crop_type (str, optional): one of "relative_range", "relative",
598
- "absolute", "absolute_range". "relative" randomly crops
599
- (h * crop_size[0], w * crop_size[1]) part from an input of size
600
- (h, w). "relative_range" uniformly samples relative crop size from
601
- range [crop_size[0], 1] and [crop_size[1], 1] for height and width
602
- respectively. "absolute" crops from an input with absolute size
603
- (crop_size[0], crop_size[1]). "absolute_range" uniformly samples
604
- crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w
605
- in range [crop_size[0], min(w, crop_size[1])]. Default "absolute".
606
- allow_negative_crop (bool, optional): Whether to allow a crop that does
607
- not contain any bbox area. Default False.
608
- bbox_clip_border (bool, optional): Whether to clip the objects outside
609
- the border of the image. Defaults to True.
610
-
611
- Note:
612
- - If the image is smaller than the absolute crop size, return the
613
- original image.
614
- - The keys for bboxes, labels and masks must be aligned. That is,
615
- `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and
616
- `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and
617
- `gt_masks_ignore`.
618
- - If the crop does not contain any gt-bbox region and
619
- `allow_negative_crop` is set to False, skip this image.
620
- """
621
-
622
- def __init__(self,
623
- crop_size,
624
- crop_type='absolute',
625
- allow_negative_crop=False,
626
- bbox_clip_border=True):
627
- if crop_type not in [
628
- 'relative_range', 'relative', 'absolute', 'absolute_range'
629
- ]:
630
- raise ValueError(f'Invalid crop_type {crop_type}.')
631
- if crop_type in ['absolute', 'absolute_range']:
632
- assert crop_size[0] > 0 and crop_size[1] > 0
633
- assert isinstance(crop_size[0], int) and isinstance(
634
- crop_size[1], int)
635
- else:
636
- assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1
637
- self.crop_size = crop_size
638
- self.crop_type = crop_type
639
- self.allow_negative_crop = allow_negative_crop
640
- self.bbox_clip_border = bbox_clip_border
641
- # The key correspondence from bboxes to labels and masks.
642
- self.bbox2label = {
643
- 'gt_bboxes': 'gt_labels',
644
- 'gt_bboxes_ignore': 'gt_labels_ignore'
645
- }
646
- self.bbox2mask = {
647
- 'gt_bboxes': 'gt_masks',
648
- 'gt_bboxes_ignore': 'gt_masks_ignore'
649
- }
650
-
651
- def _crop_data(self, results, crop_size, allow_negative_crop):
652
- """Function to randomly crop images, bounding boxes, masks, semantic
653
- segmentation maps.
654
-
655
- Args:
656
- results (dict): Result dict from loading pipeline.
657
- crop_size (tuple): Expected absolute size after cropping, (h, w).
658
- allow_negative_crop (bool): Whether to allow a crop that does not
659
- contain any bbox area. Default to False.
660
-
661
- Returns:
662
- dict: Randomly cropped results, 'img_shape' key in result dict is
663
- updated according to crop size.
664
- """
665
- assert crop_size[0] > 0 and crop_size[1] > 0
666
- for key in results.get('img_fields', ['img']):
667
- img = results[key]
668
- margin_h = max(img.shape[0] - crop_size[0], 0)
669
- margin_w = max(img.shape[1] - crop_size[1], 0)
670
- offset_h = np.random.randint(0, margin_h + 1)
671
- offset_w = np.random.randint(0, margin_w + 1)
672
- crop_y1, crop_y2 = offset_h, offset_h + crop_size[0]
673
- crop_x1, crop_x2 = offset_w, offset_w + crop_size[1]
674
-
675
- # crop the image
676
- img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...]
677
- img_shape = img.shape
678
- results[key] = img
679
- results['img_shape'] = img_shape
680
-
681
- # crop bboxes accordingly and clip to the image boundary
682
- for key in results.get('bbox_fields', []):
683
- # e.g. gt_bboxes and gt_bboxes_ignore
684
- bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h],
685
- dtype=np.float32)
686
- bboxes = results[key] - bbox_offset
687
- if self.bbox_clip_border:
688
- bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1])
689
- bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0])
690
- valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & (
691
- bboxes[:, 3] > bboxes[:, 1])
692
- # If the crop does not contain any gt-bbox area and
693
- # allow_negative_crop is False, skip this image.
694
- if (key == 'gt_bboxes' and not valid_inds.any()
695
- and not allow_negative_crop):
696
- return None
697
- results[key] = bboxes[valid_inds, :]
698
- # label fields. e.g. gt_labels and gt_labels_ignore
699
- label_key = self.bbox2label.get(key)
700
- if label_key in results:
701
- results[label_key] = results[label_key][valid_inds]
702
-
703
- # mask fields, e.g. gt_masks and gt_masks_ignore
704
- mask_key = self.bbox2mask.get(key)
705
- if mask_key in results:
706
- results[mask_key] = results[mask_key][
707
- valid_inds.nonzero()[0]].crop(
708
- np.asarray([crop_x1, crop_y1, crop_x2, crop_y2]))
709
-
710
-
711
- # crop semantic seg
712
- for key in results.get('seg_fields', []):
713
- results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2]
714
-
715
- return results
716
-
717
- def _get_crop_size(self, image_size):
718
- """Randomly generates the absolute crop size based on `crop_type` and
719
- `image_size`.
720
-
721
- Args:
722
- image_size (tuple): (h, w).
723
-
724
- Returns:
725
- crop_size (tuple): (crop_h, crop_w) in absolute pixels.
726
- """
727
- h, w = image_size
728
- if self.crop_type == 'absolute':
729
- return (min(self.crop_size[0], h), min(self.crop_size[1], w))
730
- elif self.crop_type == 'absolute_range':
731
- assert self.crop_size[0] <= self.crop_size[1]
732
- crop_h = np.random.randint(
733
- min(h, self.crop_size[0]),
734
- min(h, self.crop_size[1]) + 1)
735
- crop_w = np.random.randint(
736
- min(w, self.crop_size[0]),
737
- min(w, self.crop_size[1]) + 1)
738
- return crop_h, crop_w
739
- elif self.crop_type == 'relative':
740
- crop_h, crop_w = self.crop_size
741
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
742
- elif self.crop_type == 'relative_range':
743
- crop_size = np.asarray(self.crop_size, dtype=np.float32)
744
- crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size)
745
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
746
-
747
- def __call__(self, results):
748
- """Call function to randomly crop images, bounding boxes, masks,
749
- semantic segmentation maps.
750
-
751
- Args:
752
- results (dict): Result dict from loading pipeline.
753
-
754
- Returns:
755
- dict: Randomly cropped results, 'img_shape' key in result dict is
756
- updated according to crop size.
757
- """
758
- image_size = results['img'].shape[:2]
759
- crop_size = self._get_crop_size(image_size)
760
- results = self._crop_data(results, crop_size, self.allow_negative_crop)
761
- return results
762
-
763
- def __repr__(self):
764
- repr_str = self.__class__.__name__
765
- repr_str += f'(crop_size={self.crop_size}, '
766
- repr_str += f'crop_type={self.crop_type}, '
767
- repr_str += f'allow_negative_crop={self.allow_negative_crop}, '
768
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
769
- return repr_str
770
-
771
-
772
- @PIPELINES.register_module()
773
- class SegRescale(object):
774
- """Rescale semantic segmentation maps.
775
-
776
- Args:
777
- scale_factor (float): The scale factor of the final output.
778
- backend (str): Image rescale backend, choices are 'cv2' and 'pillow'.
779
- These two backends generate slightly different results. Defaults
780
- to 'cv2'.
781
- """
782
-
783
- def __init__(self, scale_factor=1, backend='cv2'):
784
- self.scale_factor = scale_factor
785
- self.backend = backend
786
-
787
- def __call__(self, results):
788
- """Call function to scale the semantic segmentation map.
789
-
790
- Args:
791
- results (dict): Result dict from loading pipeline.
792
-
793
- Returns:
794
- dict: Result dict with semantic segmentation map scaled.
795
- """
796
-
797
- for key in results.get('seg_fields', []):
798
- if self.scale_factor != 1:
799
- results[key] = mmcv.imrescale(
800
- results[key],
801
- self.scale_factor,
802
- interpolation='nearest',
803
- backend=self.backend)
804
- return results
805
-
806
- def __repr__(self):
807
- return self.__class__.__name__ + f'(scale_factor={self.scale_factor})'
808
-
809
-
810
- @PIPELINES.register_module()
811
- class PhotoMetricDistortion(object):
812
- """Apply photometric distortion to image sequentially, every transformation
813
- is applied with a probability of 0.5. The position of random contrast is in
814
- second or second to last.
815
-
816
- 1. random brightness
817
- 2. random contrast (mode 0)
818
- 3. convert color from BGR to HSV
819
- 4. random saturation
820
- 5. random hue
821
- 6. convert color from HSV to BGR
822
- 7. random contrast (mode 1)
823
- 8. randomly swap channels
824
-
825
- Args:
826
- brightness_delta (int): delta of brightness.
827
- contrast_range (tuple): range of contrast.
828
- saturation_range (tuple): range of saturation.
829
- hue_delta (int): delta of hue.
830
- """
831
-
832
- def __init__(self,
833
- brightness_delta=32,
834
- contrast_range=(0.5, 1.5),
835
- saturation_range=(0.5, 1.5),
836
- hue_delta=18):
837
- self.brightness_delta = brightness_delta
838
- self.contrast_lower, self.contrast_upper = contrast_range
839
- self.saturation_lower, self.saturation_upper = saturation_range
840
- self.hue_delta = hue_delta
841
-
842
- def __call__(self, results):
843
- """Call function to perform photometric distortion on images.
844
-
845
- Args:
846
- results (dict): Result dict from loading pipeline.
847
-
848
- Returns:
849
- dict: Result dict with images distorted.
850
- """
851
-
852
- if 'img_fields' in results:
853
- assert results['img_fields'] == ['img'], \
854
- 'Only single img_fields is allowed'
855
- img = results['img']
856
- assert img.dtype == np.float32, \
857
- 'PhotoMetricDistortion needs the input image of dtype np.float32,'\
858
- ' please set "to_float32=True" in "LoadImageFromFile" pipeline'
859
- # random brightness
860
- if random.randint(2):
861
- delta = random.uniform(-self.brightness_delta,
862
- self.brightness_delta)
863
- img += delta
864
-
865
- # mode == 0 --> do random contrast first
866
- # mode == 1 --> do random contrast last
867
- mode = random.randint(2)
868
- if mode == 1:
869
- if random.randint(2):
870
- alpha = random.uniform(self.contrast_lower,
871
- self.contrast_upper)
872
- img *= alpha
873
-
874
- # convert color from BGR to HSV
875
- img = mmcv.bgr2hsv(img)
876
-
877
- # random saturation
878
- if random.randint(2):
879
- img[..., 1] *= random.uniform(self.saturation_lower,
880
- self.saturation_upper)
881
-
882
- # random hue
883
- if random.randint(2):
884
- img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta)
885
- img[..., 0][img[..., 0] > 360] -= 360
886
- img[..., 0][img[..., 0] < 0] += 360
887
-
888
- # convert color from HSV to BGR
889
- img = mmcv.hsv2bgr(img)
890
-
891
- # random contrast
892
- if mode == 0:
893
- if random.randint(2):
894
- alpha = random.uniform(self.contrast_lower,
895
- self.contrast_upper)
896
- img *= alpha
897
-
898
- # randomly swap channels
899
- if random.randint(2):
900
- img = img[..., random.permutation(3)]
901
-
902
- results['img'] = img
903
- return results
904
-
905
- def __repr__(self):
906
- repr_str = self.__class__.__name__
907
- repr_str += f'(\nbrightness_delta={self.brightness_delta},\n'
908
- repr_str += 'contrast_range='
909
- repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n'
910
- repr_str += 'saturation_range='
911
- repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n'
912
- repr_str += f'hue_delta={self.hue_delta})'
913
- return repr_str
914
-
915
-
916
- @PIPELINES.register_module()
917
- class Expand(object):
918
- """Random expand the image & bboxes.
919
-
920
- Randomly place the original image on a canvas of 'ratio' x original image
921
- size filled with mean values. The ratio is in the range of ratio_range.
922
-
923
- Args:
924
- mean (tuple): mean value of dataset.
925
- to_rgb (bool): if need to convert the order of mean to align with RGB.
926
- ratio_range (tuple): range of expand ratio.
927
- prob (float): probability of applying this transformation
928
- """
929
-
930
- def __init__(self,
931
- mean=(0, 0, 0),
932
- to_rgb=True,
933
- ratio_range=(1, 4),
934
- seg_ignore_label=None,
935
- prob=0.5):
936
- self.to_rgb = to_rgb
937
- self.ratio_range = ratio_range
938
- if to_rgb:
939
- self.mean = mean[::-1]
940
- else:
941
- self.mean = mean
942
- self.min_ratio, self.max_ratio = ratio_range
943
- self.seg_ignore_label = seg_ignore_label
944
- self.prob = prob
945
-
946
- def __call__(self, results):
947
- """Call function to expand images, bounding boxes.
948
-
949
- Args:
950
- results (dict): Result dict from loading pipeline.
951
-
952
- Returns:
953
- dict: Result dict with images, bounding boxes expanded
954
- """
955
-
956
- if random.uniform(0, 1) > self.prob:
957
- return results
958
-
959
- if 'img_fields' in results:
960
- assert results['img_fields'] == ['img'], \
961
- 'Only single img_fields is allowed'
962
- img = results['img']
963
-
964
- h, w, c = img.shape
965
- ratio = random.uniform(self.min_ratio, self.max_ratio)
966
- # speed up Expand when the image is large
967
- if np.all(self.mean == self.mean[0]):
968
- expand_img = np.empty((int(h * ratio), int(w * ratio), c),
969
- img.dtype)
970
- expand_img.fill(self.mean[0])
971
- else:
972
- expand_img = np.full((int(h * ratio), int(w * ratio), c),
973
- self.mean,
974
- dtype=img.dtype)
975
- left = int(random.uniform(0, w * ratio - w))
976
- top = int(random.uniform(0, h * ratio - h))
977
- expand_img[top:top + h, left:left + w] = img
978
-
979
- results['img'] = expand_img
980
- # expand bboxes
981
- for key in results.get('bbox_fields', []):
982
- results[key] = results[key] + np.tile(
983
- (left, top), 2).astype(results[key].dtype)
984
-
985
- # expand masks
986
- for key in results.get('mask_fields', []):
987
- results[key] = results[key].expand(
988
- int(h * ratio), int(w * ratio), top, left)
989
-
990
- # expand segs
991
- for key in results.get('seg_fields', []):
992
- gt_seg = results[key]
993
- expand_gt_seg = np.full((int(h * ratio), int(w * ratio)),
994
- self.seg_ignore_label,
995
- dtype=gt_seg.dtype)
996
- expand_gt_seg[top:top + h, left:left + w] = gt_seg
997
- results[key] = expand_gt_seg
998
- return results
999
-
1000
- def __repr__(self):
1001
- repr_str = self.__class__.__name__
1002
- repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, '
1003
- repr_str += f'ratio_range={self.ratio_range}, '
1004
- repr_str += f'seg_ignore_label={self.seg_ignore_label})'
1005
- return repr_str
1006
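# --- Illustrative sketch (editorial addition, not part of the deleted file) ---
# Expand pastes the original image onto a mean-filled canvas that is `ratio`
# times larger and shifts every bbox by the paste offset, as implemented above.
# The numbers here are arbitrary.
import numpy as np

h, w, ratio = 100, 100, 2.0
left, top = 30, 50                                   # random paste position
canvas = np.full((int(h * ratio), int(w * ratio), 3), 123.675, dtype=np.float32)
canvas[top:top + h, left:left + w] = np.zeros((h, w, 3), dtype=np.float32)
bbox = np.array([10, 20, 40, 60], dtype=np.float32)  # x1, y1, x2, y2
bbox += np.tile((left, top), 2)                      # same shift as above
print(bbox)                                          # -> [ 40.  70.  70. 110.]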
-
1007
-
1008
- @PIPELINES.register_module()
1009
- class MinIoURandomCrop(object):
1010
- """Random crop the image & bboxes, the cropped patches have minimum IoU
1011
- requirement with the original image & bboxes; the IoU threshold is randomly
1012
- selected from ``min_ious``.
1013
-
1014
- Args:
1015
- min_ious (tuple): minimum IoU threshold for all intersections with
1016
- bounding boxes
1017
- min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w,
1018
- where a >= min_crop_size).
1019
- bbox_clip_border (bool, optional): Whether clip the objects outside
1020
- the border of the image. Defaults to True.
1021
-
1022
- Note:
1023
- The keys for bboxes, labels and masks should be paired. That is, \
1024
- `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \
1025
- `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`.
1026
- """
1027
-
1028
- def __init__(self,
1029
- min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
1030
- min_crop_size=0.3,
1031
- bbox_clip_border=True):
1032
- # 1: return ori img
1033
- self.min_ious = min_ious
1034
- self.sample_mode = (1, *min_ious, 0)
1035
- self.min_crop_size = min_crop_size
1036
- self.bbox_clip_border = bbox_clip_border
1037
- self.bbox2label = {
1038
- 'gt_bboxes': 'gt_labels',
1039
- 'gt_bboxes_ignore': 'gt_labels_ignore'
1040
- }
1041
- self.bbox2mask = {
1042
- 'gt_bboxes': 'gt_masks',
1043
- 'gt_bboxes_ignore': 'gt_masks_ignore'
1044
- }
1045
-
1046
- def __call__(self, results):
1047
- """Call function to crop images and bounding boxes with minimum IoU
1048
- constraint.
1049
-
1050
- Args:
1051
- results (dict): Result dict from loading pipeline.
1052
-
1053
- Returns:
1054
- dict: Result dict with images and bounding boxes cropped, \
1055
- 'img_shape' key is updated.
1056
- """
1057
-
1058
- if 'img_fields' in results:
1059
- assert results['img_fields'] == ['img'], \
1060
- 'Only single img_fields is allowed'
1061
- img = results['img']
1062
- assert 'bbox_fields' in results
1063
- boxes = [results[key] for key in results['bbox_fields']]
1064
- boxes = np.concatenate(boxes, 0)
1065
- h, w, c = img.shape
1066
- while True:
1067
- mode = random.choice(self.sample_mode)
1068
- self.mode = mode
1069
- if mode == 1:
1070
- return results
1071
-
1072
- min_iou = mode
1073
- for i in range(50):
1074
- new_w = random.uniform(self.min_crop_size * w, w)
1075
- new_h = random.uniform(self.min_crop_size * h, h)
1076
-
1077
- # h / w in [0.5, 2]
1078
- if new_h / new_w < 0.5 or new_h / new_w > 2:
1079
- continue
1080
-
1081
- left = random.uniform(w - new_w)
1082
- top = random.uniform(h - new_h)
1083
-
1084
- patch = np.array(
1085
- (int(left), int(top), int(left + new_w), int(top + new_h)))
1086
- # Line or point crop is not allowed
1087
- if patch[2] == patch[0] or patch[3] == patch[1]:
1088
- continue
1089
- overlaps = bbox_overlaps(
1090
- patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1)
1091
- if len(overlaps) > 0 and overlaps.min() < min_iou:
1092
- continue
1093
-
1094
- # centers of boxes should be inside the cropped img
1095
- # only adjust boxes and instance masks when the gt is not empty
1096
- if len(overlaps) > 0:
1097
- # adjust boxes
1098
- def is_center_of_bboxes_in_patch(boxes, patch):
1099
- center = (boxes[:, :2] + boxes[:, 2:]) / 2
1100
- mask = ((center[:, 0] > patch[0]) *
1101
- (center[:, 1] > patch[1]) *
1102
- (center[:, 0] < patch[2]) *
1103
- (center[:, 1] < patch[3]))
1104
- return mask
1105
-
1106
- mask = is_center_of_bboxes_in_patch(boxes, patch)
1107
- if not mask.any():
1108
- continue
1109
- for key in results.get('bbox_fields', []):
1110
- boxes = results[key].copy()
1111
- mask = is_center_of_bboxes_in_patch(boxes, patch)
1112
- boxes = boxes[mask]
1113
- if self.bbox_clip_border:
1114
- boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:])
1115
- boxes[:, :2] = boxes[:, :2].clip(min=patch[:2])
1116
- boxes -= np.tile(patch[:2], 2)
1117
-
1118
- results[key] = boxes
1119
- # labels
1120
- label_key = self.bbox2label.get(key)
1121
- if label_key in results:
1122
- results[label_key] = results[label_key][mask]
1123
-
1124
- # mask fields
1125
- mask_key = self.bbox2mask.get(key)
1126
- if mask_key in results:
1127
- results[mask_key] = results[mask_key][
1128
- mask.nonzero()[0]].crop(patch)
1129
- # adjust the img no matter whether the gt is empty before crop
1130
- img = img[patch[1]:patch[3], patch[0]:patch[2]]
1131
- results['img'] = img
1132
- results['img_shape'] = img.shape
1133
-
1134
- # seg fields
1135
- for key in results.get('seg_fields', []):
1136
- results[key] = results[key][patch[1]:patch[3],
1137
- patch[0]:patch[2]]
1138
- return results
1139
-
1140
- def __repr__(self):
1141
- repr_str = self.__class__.__name__
1142
- repr_str += f'(min_ious={self.min_ious}, '
1143
- repr_str += f'min_crop_size={self.min_crop_size}, '
1144
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
1145
- return repr_str
1146
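# --- Illustrative sketch (editorial addition, not part of the deleted file) ---
# The sampler above draws a mode from (1, *min_ious, 0) on each call: mode 1
# keeps the image untouched, mode 0 imposes no IoU constraint, and any other
# value is the minimum IoU every remaining gt box must have with the crop
# patch.  Kept boxes are those whose centers fall inside the patch; they are
# then clipped and translated into patch coordinates, e.g.:
import numpy as np

patch = np.array([50, 50, 250, 250])
box = np.array([[40.0, 60.0, 120.0, 200.0]])         # center (80, 130) is inside
box[:, 2:] = box[:, 2:].clip(max=patch[2:])
box[:, :2] = box[:, :2].clip(min=patch[:2])
box -= np.tile(patch[:2], 2)
print(box)                                           # -> [[  0.  10.  70. 150.]]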
-
1147
-
1148
- @PIPELINES.register_module()
1149
- class Corrupt(object):
1150
- """Corruption augmentation.
1151
-
1152
- Corruption transforms implemented based on
1153
- `imagecorruptions <https://github.com/bethgelab/imagecorruptions>`_.
1154
-
1155
- Args:
1156
- corruption (str): Corruption name.
1157
- severity (int, optional): The severity of corruption. Default: 1.
1158
- """
1159
-
1160
- def __init__(self, corruption, severity=1):
1161
- self.corruption = corruption
1162
- self.severity = severity
1163
-
1164
- def __call__(self, results):
1165
- """Call function to corrupt image.
1166
-
1167
- Args:
1168
- results (dict): Result dict from loading pipeline.
1169
-
1170
- Returns:
1171
- dict: Result dict with images corrupted.
1172
- """
1173
-
1174
- if corrupt is None:
1175
- raise RuntimeError('imagecorruptions is not installed')
1176
- if 'img_fields' in results:
1177
- assert results['img_fields'] == ['img'], \
1178
- 'Only single img_fields is allowed'
1179
- results['img'] = corrupt(
1180
- results['img'].astype(np.uint8),
1181
- corruption_name=self.corruption,
1182
- severity=self.severity)
1183
- return results
1184
-
1185
- def __repr__(self):
1186
- repr_str = self.__class__.__name__
1187
- repr_str += f'(corruption={self.corruption}, '
1188
- repr_str += f'severity={self.severity})'
1189
- return repr_str
1190
-
1191
-
1192
- @PIPELINES.register_module()
1193
- class Albu(object):
1194
- """Albumentation augmentation.
1195
-
1196
- Adds custom transformations from Albumentations library.
1197
- Please visit `https://albumentations.readthedocs.io`
1198
- to get more information.
1199
-
1200
- An example of ``transforms`` is as follows:
1201
-
1202
- .. code-block::
1203
-
1204
- [
1205
- dict(
1206
- type='ShiftScaleRotate',
1207
- shift_limit=0.0625,
1208
- scale_limit=0.0,
1209
- rotate_limit=0,
1210
- interpolation=1,
1211
- p=0.5),
1212
- dict(
1213
- type='RandomBrightnessContrast',
1214
- brightness_limit=[0.1, 0.3],
1215
- contrast_limit=[0.1, 0.3],
1216
- p=0.2),
1217
- dict(type='ChannelShuffle', p=0.1),
1218
- dict(
1219
- type='OneOf',
1220
- transforms=[
1221
- dict(type='Blur', blur_limit=3, p=1.0),
1222
- dict(type='MedianBlur', blur_limit=3, p=1.0)
1223
- ],
1224
- p=0.1),
1225
- ]
1226
-
1227
- Args:
1228
- transforms (list[dict]): A list of albu transformations
1229
- bbox_params (dict): Bbox_params for albumentation `Compose`
1230
- keymap (dict): Contains {'input key':'albumentation-style key'}
1231
- skip_img_without_anno (bool): Whether to skip the image if no ann left
1232
- after aug
1233
- """
1234
-
1235
- def __init__(self,
1236
- transforms,
1237
- bbox_params=None,
1238
- keymap=None,
1239
- update_pad_shape=False,
1240
- skip_img_without_anno=False):
1241
- if Compose is None:
1242
- raise RuntimeError('albumentations is not installed')
1243
-
1244
- # Args will be modified later, copying it will be safer
1245
- transforms = copy.deepcopy(transforms)
1246
- if bbox_params is not None:
1247
- bbox_params = copy.deepcopy(bbox_params)
1248
- if keymap is not None:
1249
- keymap = copy.deepcopy(keymap)
1250
- self.transforms = transforms
1251
- self.filter_lost_elements = False
1252
- self.update_pad_shape = update_pad_shape
1253
- self.skip_img_without_anno = skip_img_without_anno
1254
-
1255
- # A simple workaround to remove masks without boxes
1256
- if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params
1257
- and 'filter_lost_elements' in bbox_params):
1258
- self.filter_lost_elements = True
1259
- self.origin_label_fields = bbox_params['label_fields']
1260
- bbox_params['label_fields'] = ['idx_mapper']
1261
- del bbox_params['filter_lost_elements']
1262
-
1263
- self.bbox_params = (
1264
- self.albu_builder(bbox_params) if bbox_params else None)
1265
- self.aug = Compose([self.albu_builder(t) for t in self.transforms],
1266
- bbox_params=self.bbox_params)
1267
-
1268
- if not keymap:
1269
- self.keymap_to_albu = {
1270
- 'img': 'image',
1271
- 'gt_masks': 'masks',
1272
- 'gt_bboxes': 'bboxes'
1273
- }
1274
- else:
1275
- self.keymap_to_albu = keymap
1276
- self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()}
1277
-
1278
- def albu_builder(self, cfg):
1279
- """Import a module from albumentations.
1280
-
1281
- It inherits some of :func:`build_from_cfg` logic.
1282
-
1283
- Args:
1284
- cfg (dict): Config dict. It should at least contain the key "type".
1285
-
1286
- Returns:
1287
- obj: The constructed object.
1288
- """
1289
-
1290
- assert isinstance(cfg, dict) and 'type' in cfg
1291
- args = cfg.copy()
1292
-
1293
- obj_type = args.pop('type')
1294
- if mmcv.is_str(obj_type):
1295
- if albumentations is None:
1296
- raise RuntimeError('albumentations is not installed')
1297
- obj_cls = getattr(albumentations, obj_type)
1298
- elif inspect.isclass(obj_type):
1299
- obj_cls = obj_type
1300
- else:
1301
- raise TypeError(
1302
- f'type must be a str or valid type, but got {type(obj_type)}')
1303
-
1304
- if 'transforms' in args:
1305
- args['transforms'] = [
1306
- self.albu_builder(transform)
1307
- for transform in args['transforms']
1308
- ]
1309
-
1310
- return obj_cls(**args)
1311
-
1312
- @staticmethod
1313
- def mapper(d, keymap):
1314
- """Dictionary mapper. Renames keys according to keymap provided.
1315
-
1316
- Args:
1317
- d (dict): old dict
1318
- keymap (dict): {'old_key':'new_key'}
1319
- Returns:
1320
- dict: new dict.
1321
- """
1322
-
1323
- updated_dict = {}
1324
- for k, v in zip(d.keys(), d.values()):
1325
- new_k = keymap.get(k, k)
1326
- updated_dict[new_k] = d[k]
1327
- return updated_dict
1328
-
1329
- def __call__(self, results):
1330
- # dict to albumentations format
1331
- results = self.mapper(results, self.keymap_to_albu)
1332
- # TODO: add bbox_fields
1333
- if 'bboxes' in results:
1334
- # to list of boxes
1335
- if isinstance(results['bboxes'], np.ndarray):
1336
- results['bboxes'] = [x for x in results['bboxes']]
1337
- # add pseudo-field for filtration
1338
- if self.filter_lost_elements:
1339
- results['idx_mapper'] = np.arange(len(results['bboxes']))
1340
-
1341
- # TODO: Support mask structure in albu
1342
- if 'masks' in results:
1343
- if isinstance(results['masks'], PolygonMasks):
1344
- raise NotImplementedError(
1345
- 'Albu only supports BitMap masks now')
1346
- ori_masks = results['masks']
1347
- if albumentations.__version__ < '0.5':
1348
- results['masks'] = results['masks'].masks
1349
- else:
1350
- results['masks'] = [mask for mask in results['masks'].masks]
1351
-
1352
- results = self.aug(**results)
1353
-
1354
- if 'bboxes' in results:
1355
- if isinstance(results['bboxes'], list):
1356
- results['bboxes'] = np.array(
1357
- results['bboxes'], dtype=np.float32)
1358
- results['bboxes'] = results['bboxes'].reshape(-1, 4)
1359
-
1360
- # filter label_fields
1361
- if self.filter_lost_elements:
1362
-
1363
- for label in self.origin_label_fields:
1364
- results[label] = np.array(
1365
- [results[label][i] for i in results['idx_mapper']])
1366
- if 'masks' in results:
1367
- results['masks'] = np.array(
1368
- [results['masks'][i] for i in results['idx_mapper']])
1369
- results['masks'] = ori_masks.__class__(
1370
- results['masks'], results['image'].shape[0],
1371
- results['image'].shape[1])
1372
-
1373
- if (not len(results['idx_mapper'])
1374
- and self.skip_img_without_anno):
1375
- return None
1376
-
1377
- if 'gt_labels' in results:
1378
- if isinstance(results['gt_labels'], list):
1379
- results['gt_labels'] = np.array(results['gt_labels'])
1380
- results['gt_labels'] = results['gt_labels'].astype(np.int64)
1381
-
1382
- # back to the original format
1383
- results = self.mapper(results, self.keymap_back)
1384
-
1385
- # update final shape
1386
- if self.update_pad_shape:
1387
- results['pad_shape'] = results['img'].shape
1388
-
1389
- return results
1390
-
1391
- def __repr__(self):
1392
- repr_str = self.__class__.__name__ + f'(transforms={self.transforms})'
1393
- return repr_str
1394
-
1395
-
1396
- @PIPELINES.register_module()
1397
- class RandomCenterCropPad(object):
1398
- """Random center crop and random around padding for CornerNet.
1399
-
1400
- This operation generates a randomly cropped image from the original image and
1401
- pads it simultaneously. Different from :class:`RandomCrop`, the output
1402
- shape may not equal to ``crop_size`` strictly. We choose a random value
1403
- from ``ratios`` and the output shape could be larger or smaller than
1404
- ``crop_size``. The padding operation is also different from :class:`Pad`,
1405
- here we use around padding instead of right-bottom padding.
1406
-
1407
- The relation between output image (padding image) and original image:
1408
-
1409
- .. code:: text
1410
-
1411
- output image
1412
-
1413
- +----------------------------+
1414
- | padded area |
1415
- +------|----------------------------|----------+
1416
- | | cropped area | |
1417
- | | +---------------+ | |
1418
- | | | . center | | | original image
1419
- | | | range | | |
1420
- | | +---------------+ | |
1421
- +------|----------------------------|----------+
1422
- | padded area |
1423
- +----------------------------+
1424
-
1425
- There are 5 main areas in the figure:
1426
-
1427
- - output image: output image of this operation, also called padding
1428
- image in the following instructions.
1429
- - original image: input image of this operation.
1430
- - padded area: non-intersect area of output image and original image.
1431
- - cropped area: the overlap of output image and original image.
1432
- - center range: a smaller area from which the random center is chosen.
1433
- center range is computed from ``border`` and the original image's shape
1434
- so that the random center is not too close to the original image's border.
1435
-
1436
- Also, this operation acts differently in train and test mode; the summary
1437
- pipeline is listed below.
1438
-
1439
- Train pipeline:
1440
-
1441
- 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image
1442
- will be ``random_ratio * crop_size``.
1443
- 2. Choose a ``random_center`` in center range.
1444
- 3. Generate padding image with center matches the ``random_center``.
1445
- 4. Initialize the padding image with pixel value equals to ``mean``.
1446
- 5. Copy the cropped area to padding image.
1447
- 6. Refine annotations.
1448
-
1449
- Test pipeline:
1450
-
1451
- 1. Compute output shape according to ``test_pad_mode``.
1452
- 2. Generate padding image with center matches the original image
1453
- center.
1454
- 3. Initialize the padding image with pixel value equals to ``mean``.
1455
- 4. Copy the ``cropped area`` to padding image.
1456
-
1457
- Args:
1458
- crop_size (tuple | None): expected size after crop, final size will
1459
- computed according to ratio. Requires (h, w) in train mode, and
1460
- None in test mode.
1461
- ratios (tuple): random select a ratio from tuple and crop image to
1462
- (crop_size[0] * ratio) * (crop_size[1] * ratio).
1463
- Only available in train mode.
1464
- border (int): max distance from center select area to image border.
1465
- Only available in train mode.
1466
- mean (sequence): Mean values of 3 channels.
1467
- std (sequence): Std values of 3 channels.
1468
- to_rgb (bool): Whether to convert the image from BGR to RGB.
1469
- test_mode (bool): whether involve random variables in transform.
1470
- In train mode, crop_size is fixed, center coords and ratio is
1471
- random selected from predefined lists. In test mode, crop_size
1472
- is image's original shape, center coords and ratio is fixed.
1473
- test_pad_mode (tuple): padding method and padding shape value, only
1474
- available in test mode. Default is using 'logical_or' with
1475
- 127 as padding shape value.
1476
-
1477
- - 'logical_or': final_shape = input_shape | padding_shape_value
1478
- - 'size_divisor': final_shape = int(
1479
- ceil(input_shape / padding_shape_value) * padding_shape_value)
1480
- bbox_clip_border (bool, optional): Whether clip the objects outside
1481
- the border of the image. Defaults to True.
1482
- """
1483
-
1484
- def __init__(self,
1485
- crop_size=None,
1486
- ratios=(0.9, 1.0, 1.1),
1487
- border=128,
1488
- mean=None,
1489
- std=None,
1490
- to_rgb=None,
1491
- test_mode=False,
1492
- test_pad_mode=('logical_or', 127),
1493
- bbox_clip_border=True):
1494
- if test_mode:
1495
- assert crop_size is None, 'crop_size must be None in test mode'
1496
- assert ratios is None, 'ratios must be None in test mode'
1497
- assert border is None, 'border must be None in test mode'
1498
- assert isinstance(test_pad_mode, (list, tuple))
1499
- assert test_pad_mode[0] in ['logical_or', 'size_divisor']
1500
- else:
1501
- assert isinstance(crop_size, (list, tuple))
1502
- assert crop_size[0] > 0 and crop_size[1] > 0, (
1503
- 'crop_size must be > 0 in train mode')
1504
- assert isinstance(ratios, (list, tuple))
1505
- assert test_pad_mode is None, (
1506
- 'test_pad_mode must be None in train mode')
1507
-
1508
- self.crop_size = crop_size
1509
- self.ratios = ratios
1510
- self.border = border
1511
- # We do not set default value to mean, std and to_rgb because these
1512
- # hyper-parameters are easy to forget but could affect the performance.
1513
- # Please use the same setting as Normalize for performance assurance.
1514
- assert mean is not None and std is not None and to_rgb is not None
1515
- self.to_rgb = to_rgb
1516
- self.input_mean = mean
1517
- self.input_std = std
1518
- if to_rgb:
1519
- self.mean = mean[::-1]
1520
- self.std = std[::-1]
1521
- else:
1522
- self.mean = mean
1523
- self.std = std
1524
- self.test_mode = test_mode
1525
- self.test_pad_mode = test_pad_mode
1526
- self.bbox_clip_border = bbox_clip_border
1527
-
1528
- def _get_border(self, border, size):
1529
- """Get final border for the target size.
1530
-
1531
- This function generates a ``final_border`` according to image's shape.
1532
- The area between ``final_border`` and ``size - final_border`` is the
1533
- ``center range``. We randomly choose the center from the ``center range``
1535
- so that the random center is not too close to the original image's border.
1535
- Also ``center range`` should be larger than 0.
1536
-
1537
- Args:
1538
- border (int): The initial border, default is 128.
1539
- size (int): The width or height of original image.
1540
- Returns:
1541
- int: The final border.
1542
- """
1543
- k = 2 * border / size
1544
- i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k)))
1545
- return border // i
1546
-
1547
- def _filter_boxes(self, patch, boxes):
1548
- """Check whether the center of each box is in the patch.
1549
-
1550
- Args:
1551
- patch (list[int]): The cropped area, [left, top, right, bottom].
1552
- boxes (numpy array, (N x 4)): Ground truth boxes.
1553
-
1554
- Returns:
1555
- mask (numpy array, (N,)): Each box is inside or outside the patch.
1556
- """
1557
- center = (boxes[:, :2] + boxes[:, 2:]) / 2
1558
- mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * (
1559
- center[:, 0] < patch[2]) * (
1560
- center[:, 1] < patch[3])
1561
- return mask
1562
-
1563
- def _crop_image_and_paste(self, image, center, size):
1564
- """Crop image with a given center and size, then paste the cropped
1565
- image to a blank image with two centers align.
1566
-
1567
- This function is equivalent to generating a blank image with ``size``
1568
- as its shape. Then cover it on the original image with two centers (
1569
- the center of blank image and the random center of original image)
1570
- aligned. The overlap area is paste from the original image and the
1571
- outside area is filled with ``mean pixel``.
1572
-
1573
- Args:
1574
- image (np array, H x W x C): Original image.
1575
- center (list[int]): Target crop center coord.
1576
- size (list[int]): Target crop size. [target_h, target_w]
1577
-
1578
- Returns:
1579
- cropped_img (np array, target_h x target_w x C): Cropped image.
1580
- border (np array, 4): The distance of four border of
1581
- ``cropped_img`` to the original image area, [top, bottom,
1582
- left, right]
1583
- patch (list[int]): The cropped area, [left, top, right, bottom].
1584
- """
1585
- center_y, center_x = center
1586
- target_h, target_w = size
1587
- img_h, img_w, img_c = image.shape
1588
-
1589
- x0 = max(0, center_x - target_w // 2)
1590
- x1 = min(center_x + target_w // 2, img_w)
1591
- y0 = max(0, center_y - target_h // 2)
1592
- y1 = min(center_y + target_h // 2, img_h)
1593
- patch = np.array((int(x0), int(y0), int(x1), int(y1)))
1594
-
1595
- left, right = center_x - x0, x1 - center_x
1596
- top, bottom = center_y - y0, y1 - center_y
1597
-
1598
- cropped_center_y, cropped_center_x = target_h // 2, target_w // 2
1599
- cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype)
1600
- for i in range(img_c):
1601
- cropped_img[:, :, i] += self.mean[i]
1602
- y_slice = slice(cropped_center_y - top, cropped_center_y + bottom)
1603
- x_slice = slice(cropped_center_x - left, cropped_center_x + right)
1604
- cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :]
1605
-
1606
- border = np.array([
1607
- cropped_center_y - top, cropped_center_y + bottom,
1608
- cropped_center_x - left, cropped_center_x + right
1609
- ],
1610
- dtype=np.float32)
1611
-
1612
- return cropped_img, border, patch
1613
-
1614
- def _train_aug(self, results):
1615
- """Random crop and around padding the original image.
1616
-
1617
- Args:
1618
- results (dict): Image information in the augmentation pipeline.
1619
-
1620
- Returns:
1621
- results (dict): The updated dict.
1622
- """
1623
- img = results['img']
1624
- h, w, c = img.shape
1625
- boxes = results['gt_bboxes']
1626
- while True:
1627
- scale = random.choice(self.ratios)
1628
- new_h = int(self.crop_size[0] * scale)
1629
- new_w = int(self.crop_size[1] * scale)
1630
- h_border = self._get_border(self.border, h)
1631
- w_border = self._get_border(self.border, w)
1632
-
1633
- for i in range(50):
1634
- center_x = random.randint(low=w_border, high=w - w_border)
1635
- center_y = random.randint(low=h_border, high=h - h_border)
1636
-
1637
- cropped_img, border, patch = self._crop_image_and_paste(
1638
- img, [center_y, center_x], [new_h, new_w])
1639
-
1640
- mask = self._filter_boxes(patch, boxes)
1641
- # if the image does not have any valid bbox, any crop patch is valid.
1642
- if not mask.any() and len(boxes) > 0:
1643
- continue
1644
-
1645
- results['img'] = cropped_img
1646
- results['img_shape'] = cropped_img.shape
1647
- results['pad_shape'] = cropped_img.shape
1648
-
1649
- x0, y0, x1, y1 = patch
1650
-
1651
- left_w, top_h = center_x - x0, center_y - y0
1652
- cropped_center_x, cropped_center_y = new_w // 2, new_h // 2
1653
-
1654
- # crop bboxes accordingly and clip to the image boundary
1655
- for key in results.get('bbox_fields', []):
1656
- mask = self._filter_boxes(patch, results[key])
1657
- bboxes = results[key][mask]
1658
- bboxes[:, 0:4:2] += cropped_center_x - left_w - x0
1659
- bboxes[:, 1:4:2] += cropped_center_y - top_h - y0
1660
- if self.bbox_clip_border:
1661
- bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w)
1662
- bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h)
1663
- keep = (bboxes[:, 2] > bboxes[:, 0]) & (
1664
- bboxes[:, 3] > bboxes[:, 1])
1665
- bboxes = bboxes[keep]
1666
- results[key] = bboxes
1667
- if key in ['gt_bboxes']:
1668
- if 'gt_labels' in results:
1669
- labels = results['gt_labels'][mask]
1670
- labels = labels[keep]
1671
- results['gt_labels'] = labels
1672
- if 'gt_masks' in results:
1673
- raise NotImplementedError(
1674
- 'RandomCenterCropPad only supports bbox.')
1675
-
1676
- # crop semantic seg
1677
- for key in results.get('seg_fields', []):
1678
- raise NotImplementedError(
1679
- 'RandomCenterCropPad only supports bbox.')
1680
- return results
1681
-
1682
- def _test_aug(self, results):
1683
- """Around padding the original image without cropping.
1684
-
1685
- The padding mode and value are from ``test_pad_mode``.
1686
-
1687
- Args:
1688
- results (dict): Image information in the augmentation pipeline.
1689
-
1690
- Returns:
1691
- results (dict): The updated dict.
1692
- """
1693
- img = results['img']
1694
- h, w, c = img.shape
1695
- results['img_shape'] = img.shape
1696
- if self.test_pad_mode[0] in ['logical_or']:
1697
- target_h = h | self.test_pad_mode[1]
1698
- target_w = w | self.test_pad_mode[1]
1699
- elif self.test_pad_mode[0] in ['size_divisor']:
1700
- divisor = self.test_pad_mode[1]
1701
- target_h = int(np.ceil(h / divisor)) * divisor
1702
- target_w = int(np.ceil(w / divisor)) * divisor
1703
- else:
1704
- raise NotImplementedError(
1705
- 'RandomCenterCropPad only supports two testing pad modes: '
1706
- 'logical_or and size_divisor.')
1707
-
1708
- cropped_img, border, _ = self._crop_image_and_paste(
1709
- img, [h // 2, w // 2], [target_h, target_w])
1710
- results['img'] = cropped_img
1711
- results['pad_shape'] = cropped_img.shape
1712
- results['border'] = border
1713
- return results
1714
-
1715
- def __call__(self, results):
1716
- img = results['img']
1717
- assert img.dtype == np.float32, (
1718
- 'RandomCenterCropPad needs the input image of dtype np.float32,'
1719
- ' please set "to_float32=True" in "LoadImageFromFile" pipeline')
1720
- h, w, c = img.shape
1721
- assert c == len(self.mean)
1722
- if self.test_mode:
1723
- return self._test_aug(results)
1724
- else:
1725
- return self._train_aug(results)
1726
-
1727
- def __repr__(self):
1728
- repr_str = self.__class__.__name__
1729
- repr_str += f'(crop_size={self.crop_size}, '
1730
- repr_str += f'ratios={self.ratios}, '
1731
- repr_str += f'border={self.border}, '
1732
- repr_str += f'mean={self.input_mean}, '
1733
- repr_str += f'std={self.input_std}, '
1734
- repr_str += f'to_rgb={self.to_rgb}, '
1735
- repr_str += f'test_mode={self.test_mode}, '
1736
- repr_str += f'test_pad_mode={self.test_pad_mode}, '
1737
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
1738
- return repr_str
1739
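# --- Illustrative sketch (editorial addition, not part of the deleted file) ---
# The two test-time padding modes above reduce to simple integer arithmetic;
# for an input height of 500 with the default pad value 127:
import numpy as np

h, pad_value = 500, 127
target_logical_or = h | pad_value                              # 500 | 127 -> 511
target_size_divisor = int(np.ceil(h / pad_value)) * pad_value  # 4 * 127   -> 508
print(target_logical_or, target_size_divisor)                  # -> 511 508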
-
1740
-
1741
- @PIPELINES.register_module()
1742
- class CutOut(object):
1743
- """CutOut operation.
1744
-
1745
- Randomly drop some regions of image used in
1746
- `Cutout <https://arxiv.org/abs/1708.04552>`_.
1747
-
1748
- Args:
1749
- n_holes (int | tuple[int, int]): Number of regions to be dropped.
1750
- If it is given as a tuple, the number of holes will be randomly
1751
- selected from the closed interval [`n_holes[0]`, `n_holes[1]`].
1752
- cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate
1753
- shape of dropped regions. It can be `tuple[int, int]` to use a
1754
- fixed cutout shape, or `list[tuple[int, int]]` to randomly choose
1755
- shape from the list.
1756
- cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The
1757
- candidate ratio of dropped regions. It can be `tuple[float, float]`
1758
- to use a fixed ratio or `list[tuple[float, float]]` to randomly
1759
- choose ratio from the list. Please note that `cutout_shape`
1760
- and `cutout_ratio` cannot be both given at the same time.
1761
- fill_in (tuple[float, float, float] | tuple[int, int, int]): The value
1762
- of pixel to fill in the dropped regions. Default: (0, 0, 0).
1763
- """
1764
-
1765
- def __init__(self,
1766
- n_holes,
1767
- cutout_shape=None,
1768
- cutout_ratio=None,
1769
- fill_in=(0, 0, 0)):
1770
-
1771
- assert (cutout_shape is None) ^ (cutout_ratio is None), \
1772
- 'Either cutout_shape or cutout_ratio should be specified.'
1773
- assert (isinstance(cutout_shape, (list, tuple))
1774
- or isinstance(cutout_ratio, (list, tuple)))
1775
- if isinstance(n_holes, tuple):
1776
- assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1]
1777
- else:
1778
- n_holes = (n_holes, n_holes)
1779
- self.n_holes = n_holes
1780
- self.fill_in = fill_in
1781
- self.with_ratio = cutout_ratio is not None
1782
- self.candidates = cutout_ratio if self.with_ratio else cutout_shape
1783
- if not isinstance(self.candidates, list):
1784
- self.candidates = [self.candidates]
1785
-
1786
- def __call__(self, results):
1787
- """Call function to drop some regions of image."""
1788
- h, w, c = results['img'].shape
1789
- n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1)
1790
- for _ in range(n_holes):
1791
- x1 = np.random.randint(0, w)
1792
- y1 = np.random.randint(0, h)
1793
- index = np.random.randint(0, len(self.candidates))
1794
- if not self.with_ratio:
1795
- cutout_w, cutout_h = self.candidates[index]
1796
- else:
1797
- cutout_w = int(self.candidates[index][0] * w)
1798
- cutout_h = int(self.candidates[index][1] * h)
1799
-
1800
- x2 = np.clip(x1 + cutout_w, 0, w)
1801
- y2 = np.clip(y1 + cutout_h, 0, h)
1802
- results['img'][y1:y2, x1:x2, :] = self.fill_in
1803
-
1804
- return results
1805
-
1806
- def __repr__(self):
1807
- repr_str = self.__class__.__name__
1808
- repr_str += f'(n_holes={self.n_holes}, '
1809
- repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio
1810
- else f'cutout_shape={self.candidates}, ')
1811
- repr_str += f'fill_in={self.fill_in})'
1812
- return repr_str
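# --- Illustrative sketch (editorial addition, not part of the deleted file) ---
# One typical way the transforms defined above are composed in an mmdet-style
# train pipeline (an SSD-like recipe).  The exact values are assumptions for
# illustration only; `to_float32=True` is required by PhotoMetricDistortion.
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PhotoMetricDistortion',
        brightness_delta=32,
        contrast_range=(0.5, 1.5),
        saturation_range=(0.5, 1.5),
        hue_delta=18),
    dict(
        type='Expand',
        mean=img_norm_cfg['mean'],
        to_rgb=img_norm_cfg['to_rgb'],
        ratio_range=(1, 4)),
    dict(
        type='MinIoURandomCrop',
        min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
        min_crop_size=0.3),
    dict(type='CutOut', n_holes=(1, 3), cutout_ratio=[(0.05, 0.05), (0.1, 0.1)]),
]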
 
spaces/CVPR/WALT/mmdet/models/utils/transformer.py DELETED
@@ -1,860 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- from mmcv.cnn import (Linear, build_activation_layer, build_norm_layer,
4
- xavier_init)
5
-
6
- from .builder import TRANSFORMER
7
-
8
-
9
- class MultiheadAttention(nn.Module):
10
- """A wrapper for torch.nn.MultiheadAttention.
11
-
12
- This module implements MultiheadAttention with residual connection,
13
- and positional encoding used in DETR is also passed as input.
14
-
15
- Args:
16
- embed_dims (int): The embedding dimension.
17
- num_heads (int): Parallel attention heads. Same as
18
- `nn.MultiheadAttention`.
19
- dropout (float): A Dropout layer on attn_output_weights. Default 0.0.
20
- """
21
-
22
- def __init__(self, embed_dims, num_heads, dropout=0.0):
23
- super(MultiheadAttention, self).__init__()
24
- assert embed_dims % num_heads == 0, 'embed_dims must be ' \
25
- f'divisible by num_heads. got {embed_dims} and {num_heads}.'
26
- self.embed_dims = embed_dims
27
- self.num_heads = num_heads
28
- self.dropout = dropout
29
- self.attn = nn.MultiheadAttention(embed_dims, num_heads, dropout)
30
- self.dropout = nn.Dropout(dropout)
31
-
32
- def forward(self,
33
- x,
34
- key=None,
35
- value=None,
36
- residual=None,
37
- query_pos=None,
38
- key_pos=None,
39
- attn_mask=None,
40
- key_padding_mask=None):
41
- """Forward function for `MultiheadAttention`.
42
-
43
- Args:
44
- x (Tensor): The input query with shape [num_query, bs,
45
- embed_dims]. Same in `nn.MultiheadAttention.forward`.
46
- key (Tensor): The key tensor with shape [num_key, bs,
47
- embed_dims]. Same in `nn.MultiheadAttention.forward`.
48
- Default None. If None, the `query` will be used.
49
- value (Tensor): The value tensor with same shape as `key`.
50
- Same in `nn.MultiheadAttention.forward`. Default None.
51
- If None, the `key` will be used.
52
- residual (Tensor): The tensor used for addition, with the
53
- same shape as `x`. Default None. If None, `x` will be used.
54
- query_pos (Tensor): The positional encoding for query, with
55
- the same shape as `x`. Default None. If not None, it will
56
- be added to `x` before forward function.
57
- key_pos (Tensor): The positional encoding for `key`, with the
58
- same shape as `key`. Default None. If not None, it will
59
- be added to `key` before forward function. If None, and
60
- `query_pos` has the same shape as `key`, then `query_pos`
61
- will be used for `key_pos`.
62
- attn_mask (Tensor): ByteTensor mask with shape [num_query,
63
- num_key]. Same in `nn.MultiheadAttention.forward`.
64
- Default None.
65
- key_padding_mask (Tensor): ByteTensor with shape [bs, num_key].
66
- Same in `nn.MultiheadAttention.forward`. Default None.
67
-
68
- Returns:
69
- Tensor: forwarded results with shape [num_query, bs, embed_dims].
70
- """
71
- query = x
72
- if key is None:
73
- key = query
74
- if value is None:
75
- value = key
76
- if residual is None:
77
- residual = x
78
- if key_pos is None:
79
- if query_pos is not None and key is not None:
80
- if query_pos.shape == key.shape:
81
- key_pos = query_pos
82
- if query_pos is not None:
83
- query = query + query_pos
84
- if key_pos is not None:
85
- key = key + key_pos
86
- out = self.attn(
87
- query,
88
- key,
89
- value=value,
90
- attn_mask=attn_mask,
91
- key_padding_mask=key_padding_mask)[0]
92
-
93
- return residual + self.dropout(out)
94
-
95
- def __repr__(self):
96
- """str: a string that describes the module"""
97
- repr_str = self.__class__.__name__
98
- repr_str += f'(embed_dims={self.embed_dims}, '
99
- repr_str += f'num_heads={self.num_heads}, '
100
- repr_str += f'dropout={self.dropout})'
101
- return repr_str
102
-
103
-
104
- class FFN(nn.Module):
105
- """Implements feed-forward networks (FFNs) with residual connection.
106
-
107
- Args:
108
- embed_dims (int): The feature dimension. Same as
109
- `MultiheadAttention`.
110
- feedforward_channels (int): The hidden dimension of FFNs.
111
- num_fcs (int, optional): The number of fully-connected layers in
112
- FFNs. Defaults to 2.
113
- act_cfg (dict, optional): The activation config for FFNs.
114
- dropout (float, optional): Probability of an element to be
115
- zeroed. Default 0.0.
116
- add_residual (bool, optional): Add resudual connection.
117
- Defaults to True.
118
- """
119
-
120
- def __init__(self,
121
- embed_dims,
122
- feedforward_channels,
123
- num_fcs=2,
124
- act_cfg=dict(type='ReLU', inplace=True),
125
- dropout=0.0,
126
- add_residual=True):
127
- super(FFN, self).__init__()
128
- assert num_fcs >= 2, 'num_fcs should be no less ' \
129
- f'than 2. got {num_fcs}.'
130
- self.embed_dims = embed_dims
131
- self.feedforward_channels = feedforward_channels
132
- self.num_fcs = num_fcs
133
- self.act_cfg = act_cfg
134
- self.dropout = dropout
135
- self.activate = build_activation_layer(act_cfg)
136
-
137
- layers = nn.ModuleList()
138
- in_channels = embed_dims
139
- for _ in range(num_fcs - 1):
140
- layers.append(
141
- nn.Sequential(
142
- Linear(in_channels, feedforward_channels), self.activate,
143
- nn.Dropout(dropout)))
144
- in_channels = feedforward_channels
145
- layers.append(Linear(feedforward_channels, embed_dims))
146
- self.layers = nn.Sequential(*layers)
147
- self.dropout = nn.Dropout(dropout)
148
- self.add_residual = add_residual
149
-
150
- def forward(self, x, residual=None):
151
- """Forward function for `FFN`."""
152
- out = self.layers(x)
153
- if not self.add_residual:
154
- return out
155
- if residual is None:
156
- residual = x
157
- return residual + self.dropout(out)
158
-
159
- def __repr__(self):
160
- """str: a string that describes the module"""
161
- repr_str = self.__class__.__name__
162
- repr_str += f'(embed_dims={self.embed_dims}, '
163
- repr_str += f'feedforward_channels={self.feedforward_channels}, '
164
- repr_str += f'num_fcs={self.num_fcs}, '
165
- repr_str += f'act_cfg={self.act_cfg}, '
166
- repr_str += f'dropout={self.dropout}, '
167
- repr_str += f'add_residual={self.add_residual})'
168
- return repr_str
169
-
170
-
171
- class TransformerEncoderLayer(nn.Module):
172
- """Implements one encoder layer in DETR transformer.
173
-
174
- Args:
175
- embed_dims (int): The feature dimension. Same as `FFN`.
176
- num_heads (int): Parallel attention heads.
177
- feedforward_channels (int): The hidden dimension for FFNs.
178
- dropout (float): Probability of an element to be zeroed. Default 0.0.
179
- order (tuple[str]): The order for encoder layer. Valid examples are
180
- ('selfattn', 'norm', 'ffn', 'norm') and ('norm', 'selfattn',
181
- 'norm', 'ffn'). Default ('selfattn', 'norm', 'ffn', 'norm').
182
- act_cfg (dict): The activation config for FFNs. Default ReLU.
183
- norm_cfg (dict): Config dict for normalization layer. Default
184
- layer normalization.
185
- num_fcs (int): The number of fully-connected layers for FFNs.
186
- Default 2.
187
- """
188
-
189
- def __init__(self,
190
- embed_dims,
191
- num_heads,
192
- feedforward_channels,
193
- dropout=0.0,
194
- order=('selfattn', 'norm', 'ffn', 'norm'),
195
- act_cfg=dict(type='ReLU', inplace=True),
196
- norm_cfg=dict(type='LN'),
197
- num_fcs=2):
198
- super(TransformerEncoderLayer, self).__init__()
199
- assert isinstance(order, tuple) and len(order) == 4
200
- assert set(order) == set(['selfattn', 'norm', 'ffn'])
201
- self.embed_dims = embed_dims
202
- self.num_heads = num_heads
203
- self.feedforward_channels = feedforward_channels
204
- self.dropout = dropout
205
- self.order = order
206
- self.act_cfg = act_cfg
207
- self.norm_cfg = norm_cfg
208
- self.num_fcs = num_fcs
209
- self.pre_norm = order[0] == 'norm'
210
- self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout)
211
- self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg,
212
- dropout)
213
- self.norms = nn.ModuleList()
214
- self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1])
215
- self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1])
216
-
217
- def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None):
218
- """Forward function for `TransformerEncoderLayer`.
219
-
220
- Args:
221
- x (Tensor): The input query with shape [num_key, bs,
222
- embed_dims]. Same in `MultiheadAttention.forward`.
223
- pos (Tensor): The positional encoding for query. Default None.
224
- Same as `query_pos` in `MultiheadAttention.forward`.
225
- attn_mask (Tensor): ByteTensor mask with shape [num_key,
226
- num_key]. Same in `MultiheadAttention.forward`. Default None.
227
- key_padding_mask (Tensor): ByteTensor with shape [bs, num_key].
228
- Same in `MultiheadAttention.forward`. Default None.
229
-
230
- Returns:
231
- Tensor: forwarded results with shape [num_key, bs, embed_dims].
232
- """
233
- norm_cnt = 0
234
- inp_residual = x
235
- for layer in self.order:
236
- if layer == 'selfattn':
237
- # self attention
238
- query = key = value = x
239
- x = self.self_attn(
240
- query,
241
- key,
242
- value,
243
- inp_residual if self.pre_norm else None,
244
- query_pos=pos,
245
- key_pos=pos,
246
- attn_mask=attn_mask,
247
- key_padding_mask=key_padding_mask)
248
- inp_residual = x
249
- elif layer == 'norm':
250
- x = self.norms[norm_cnt](x)
251
- norm_cnt += 1
252
- elif layer == 'ffn':
253
- x = self.ffn(x, inp_residual if self.pre_norm else None)
254
- return x
255
-
256
- def __repr__(self):
257
- """str: a string that describes the module"""
258
- repr_str = self.__class__.__name__
259
- repr_str += f'(embed_dims={self.embed_dims}, '
260
- repr_str += f'num_heads={self.num_heads}, '
261
- repr_str += f'feedforward_channels={self.feedforward_channels}, '
262
- repr_str += f'dropout={self.dropout}, '
263
- repr_str += f'order={self.order}, '
264
- repr_str += f'act_cfg={self.act_cfg}, '
265
- repr_str += f'norm_cfg={self.norm_cfg}, '
266
- repr_str += f'num_fcs={self.num_fcs})'
267
- return repr_str
268
-
269
-
270
- class TransformerDecoderLayer(nn.Module):
271
- """Implements one decoder layer in DETR transformer.
272
-
273
- Args:
274
- embed_dims (int): The feature dimension. Same as
275
- `TransformerEncoderLayer`.
276
- num_heads (int): Parallel attention heads.
277
- feedforward_channels (int): Same as `TransformerEncoderLayer`.
278
- dropout (float): Same as `TransformerEncoderLayer`. Default 0.0.
279
- order (tuple[str]): The order for decoder layer. Valid examples are
280
- ('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', 'norm') and
281
- ('norm', 'selfattn', 'norm', 'multiheadattn', 'norm', 'ffn').
282
- Default the former.
283
- act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU.
284
- norm_cfg (dict): Config dict for normalization layer. Default
285
- layer normalization.
286
- num_fcs (int): The number of fully-connected layers in FFNs.
287
- """
288
-
289
- def __init__(self,
290
- embed_dims,
291
- num_heads,
292
- feedforward_channels,
293
- dropout=0.0,
294
- order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn',
295
- 'norm'),
296
- act_cfg=dict(type='ReLU', inplace=True),
297
- norm_cfg=dict(type='LN'),
298
- num_fcs=2):
299
- super(TransformerDecoderLayer, self).__init__()
300
- assert isinstance(order, tuple) and len(order) == 6
301
- assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn'])
302
- self.embed_dims = embed_dims
303
- self.num_heads = num_heads
304
- self.feedforward_channels = feedforward_channels
305
- self.dropout = dropout
306
- self.order = order
307
- self.act_cfg = act_cfg
308
- self.norm_cfg = norm_cfg
309
- self.num_fcs = num_fcs
310
- self.pre_norm = order[0] == 'norm'
311
- self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout)
312
- self.multihead_attn = MultiheadAttention(embed_dims, num_heads,
313
- dropout)
314
- self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg,
315
- dropout)
316
- self.norms = nn.ModuleList()
317
- # 3 norm layers in official DETR's TransformerDecoderLayer
318
- for _ in range(3):
319
- self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1])
320
-
321
- def forward(self,
322
- x,
323
- memory,
324
- memory_pos=None,
325
- query_pos=None,
326
- memory_attn_mask=None,
327
- target_attn_mask=None,
328
- memory_key_padding_mask=None,
329
- target_key_padding_mask=None):
330
- """Forward function for `TransformerDecoderLayer`.
331
-
332
- Args:
333
- x (Tensor): Input query with shape [num_query, bs, embed_dims].
334
- memory (Tensor): Tensor got from `TransformerEncoder`, with shape
335
- [num_key, bs, embed_dims].
336
- memory_pos (Tensor): The positional encoding for `memory`. Default
337
- None. Same as `key_pos` in `MultiheadAttention.forward`.
338
- query_pos (Tensor): The positional encoding for `query`. Default
339
- None. Same as `query_pos` in `MultiheadAttention.forward`.
340
- memory_attn_mask (Tensor): ByteTensor mask for `memory`, with
341
- shape [num_key, num_key]. Same as `attn_mask` in
342
- `MultiheadAttention.forward`. Default None.
343
- target_attn_mask (Tensor): ByteTensor mask for `x`, with shape
344
- [num_query, num_query]. Same as `attn_mask` in
345
- `MultiheadAttention.forward`. Default None.
346
- memory_key_padding_mask (Tensor): ByteTensor for `memory`, with
347
- shape [bs, num_key]. Same as `key_padding_mask` in
348
- `MultiheadAttention.forward`. Default None.
349
- target_key_padding_mask (Tensor): ByteTensor for `x`, with shape
350
- [bs, num_query]. Same as `key_padding_mask` in
351
- `MultiheadAttention.forward`. Default None.
352
-
353
- Returns:
354
- Tensor: forwarded results with shape [num_query, bs, embed_dims].
355
- """
356
- norm_cnt = 0
357
- inp_residual = x
358
- for layer in self.order:
359
- if layer == 'selfattn':
360
- query = key = value = x
361
- x = self.self_attn(
362
- query,
363
- key,
364
- value,
365
- inp_residual if self.pre_norm else None,
366
- query_pos,
367
- key_pos=query_pos,
368
- attn_mask=target_attn_mask,
369
- key_padding_mask=target_key_padding_mask)
370
- inp_residual = x
371
- elif layer == 'norm':
372
- x = self.norms[norm_cnt](x)
373
- norm_cnt += 1
374
- elif layer == 'multiheadattn':
375
- query = x
376
- key = value = memory
377
- x = self.multihead_attn(
378
- query,
379
- key,
380
- value,
381
- inp_residual if self.pre_norm else None,
382
- query_pos,
383
- key_pos=memory_pos,
384
- attn_mask=memory_attn_mask,
385
- key_padding_mask=memory_key_padding_mask)
386
- inp_residual = x
387
- elif layer == 'ffn':
388
- x = self.ffn(x, inp_residual if self.pre_norm else None)
389
- return x
390
-
391
- def __repr__(self):
392
- """str: a string that describes the module"""
393
- repr_str = self.__class__.__name__
394
- repr_str += f'(embed_dims={self.embed_dims}, '
395
- repr_str += f'num_heads={self.num_heads}, '
396
- repr_str += f'feedforward_channels={self.feedforward_channels}, '
397
- repr_str += f'dropout={self.dropout}, '
398
- repr_str += f'order={self.order}, '
399
- repr_str += f'act_cfg={self.act_cfg}, '
400
- repr_str += f'norm_cfg={self.norm_cfg}, '
401
- repr_str += f'num_fcs={self.num_fcs})'
402
- return repr_str
403
-
404
-
405
- class TransformerEncoder(nn.Module):
406
- """Implements the encoder in DETR transformer.
407
-
408
- Args:
409
- num_layers (int): The number of `TransformerEncoderLayer`.
410
- embed_dims (int): Same as `TransformerEncoderLayer`.
411
- num_heads (int): Same as `TransformerEncoderLayer`.
412
- feedforward_channels (int): Same as `TransformerEncoderLayer`.
413
- dropout (float): Same as `TransformerEncoderLayer`. Default 0.0.
414
- order (tuple[str]): Same as `TransformerEncoderLayer`.
415
- act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU.
416
- norm_cfg (dict): Same as `TransformerEncoderLayer`. Default
417
- layer normalization.
418
- num_fcs (int): Same as `TransformerEncoderLayer`. Default 2.
419
- """
420
-
421
- def __init__(self,
422
- num_layers,
423
- embed_dims,
424
- num_heads,
425
- feedforward_channels,
426
- dropout=0.0,
427
- order=('selfattn', 'norm', 'ffn', 'norm'),
428
- act_cfg=dict(type='ReLU', inplace=True),
429
- norm_cfg=dict(type='LN'),
430
- num_fcs=2):
431
- super(TransformerEncoder, self).__init__()
432
- assert isinstance(order, tuple) and len(order) == 4
433
- assert set(order) == set(['selfattn', 'norm', 'ffn'])
434
- self.num_layers = num_layers
435
- self.embed_dims = embed_dims
436
- self.num_heads = num_heads
437
- self.feedforward_channels = feedforward_channels
438
- self.dropout = dropout
439
- self.order = order
440
- self.act_cfg = act_cfg
441
- self.norm_cfg = norm_cfg
442
- self.num_fcs = num_fcs
443
- self.pre_norm = order[0] == 'norm'
444
- self.layers = nn.ModuleList()
445
- for _ in range(num_layers):
446
- self.layers.append(
447
- TransformerEncoderLayer(embed_dims, num_heads,
448
- feedforward_channels, dropout, order,
449
- act_cfg, norm_cfg, num_fcs))
450
- self.norm = build_norm_layer(norm_cfg,
451
- embed_dims)[1] if self.pre_norm else None
452
-
453
- def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None):
454
- """Forward function for `TransformerEncoder`.
455
-
456
- Args:
457
- x (Tensor): Input query. Same in `TransformerEncoderLayer.forward`.
458
- pos (Tensor): Positional encoding for query. Default None.
459
- Same in `TransformerEncoderLayer.forward`.
460
- attn_mask (Tensor): ByteTensor attention mask. Default None.
461
- Same in `TransformerEncoderLayer.forward`.
462
- key_padding_mask (Tensor): Same in
463
- `TransformerEncoderLayer.forward`. Default None.
464
-
465
- Returns:
466
- Tensor: Results with shape [num_key, bs, embed_dims].
467
- """
468
- for layer in self.layers:
469
- x = layer(x, pos, attn_mask, key_padding_mask)
470
- if self.norm is not None:
471
- x = self.norm(x)
472
- return x
473
-
474
- def __repr__(self):
475
- """str: a string that describes the module"""
476
- repr_str = self.__class__.__name__
477
- repr_str += f'(num_layers={self.num_layers}, '
478
- repr_str += f'embed_dims={self.embed_dims}, '
479
-         repr_str += f'num_heads={self.num_heads}, '
-         repr_str += f'feedforward_channels={self.feedforward_channels}, '
-         repr_str += f'dropout={self.dropout}, '
-         repr_str += f'order={self.order}, '
-         repr_str += f'act_cfg={self.act_cfg}, '
-         repr_str += f'norm_cfg={self.norm_cfg}, '
-         repr_str += f'num_fcs={self.num_fcs})'
-         return repr_str
-
-
- class TransformerDecoder(nn.Module):
-     """Implements the decoder in DETR transformer.
-
-     Args:
-         num_layers (int): The number of `TransformerDecoderLayer`.
-         embed_dims (int): Same as `TransformerDecoderLayer`.
-         num_heads (int): Same as `TransformerDecoderLayer`.
-         feedforward_channels (int): Same as `TransformerDecoderLayer`.
-         dropout (float): Same as `TransformerDecoderLayer`. Default 0.0.
-         order (tuple[str]): Same as `TransformerDecoderLayer`.
-         act_cfg (dict): Same as `TransformerDecoderLayer`. Default ReLU.
-         norm_cfg (dict): Same as `TransformerDecoderLayer`. Default
-             layer normalization.
-         num_fcs (int): Same as `TransformerDecoderLayer`. Default 2.
-     """
-
-     def __init__(self,
-                  num_layers,
-                  embed_dims,
-                  num_heads,
-                  feedforward_channels,
-                  dropout=0.0,
-                  order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn',
-                         'norm'),
-                  act_cfg=dict(type='ReLU', inplace=True),
-                  norm_cfg=dict(type='LN'),
-                  num_fcs=2,
-                  return_intermediate=False):
-         super(TransformerDecoder, self).__init__()
-         assert isinstance(order, tuple) and len(order) == 6
-         assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn'])
-         self.num_layers = num_layers
-         self.embed_dims = embed_dims
-         self.num_heads = num_heads
-         self.feedforward_channels = feedforward_channels
-         self.dropout = dropout
-         self.order = order
-         self.act_cfg = act_cfg
-         self.norm_cfg = norm_cfg
-         self.num_fcs = num_fcs
-         self.return_intermediate = return_intermediate
-         self.layers = nn.ModuleList()
-         for _ in range(num_layers):
-             self.layers.append(
-                 TransformerDecoderLayer(embed_dims, num_heads,
-                                         feedforward_channels, dropout, order,
-                                         act_cfg, norm_cfg, num_fcs))
-         self.norm = build_norm_layer(norm_cfg, embed_dims)[1]
-
-     def forward(self,
-                 x,
-                 memory,
-                 memory_pos=None,
-                 query_pos=None,
-                 memory_attn_mask=None,
-                 target_attn_mask=None,
-                 memory_key_padding_mask=None,
-                 target_key_padding_mask=None):
-         """Forward function for `TransformerDecoder`.
-
-         Args:
-             x (Tensor): Input query. Same in `TransformerDecoderLayer.forward`.
-             memory (Tensor): Same in `TransformerDecoderLayer.forward`.
-             memory_pos (Tensor): Same in `TransformerDecoderLayer.forward`.
-                 Default None.
-             query_pos (Tensor): Same in `TransformerDecoderLayer.forward`.
-                 Default None.
-             memory_attn_mask (Tensor): Same in
-                 `TransformerDecoderLayer.forward`. Default None.
-             target_attn_mask (Tensor): Same in
-                 `TransformerDecoderLayer.forward`. Default None.
-             memory_key_padding_mask (Tensor): Same in
-                 `TransformerDecoderLayer.forward`. Default None.
-             target_key_padding_mask (Tensor): Same in
-                 `TransformerDecoderLayer.forward`. Default None.
-
-         Returns:
-             Tensor: Results with shape [num_query, bs, embed_dims].
-         """
-         intermediate = []
-         for layer in self.layers:
-             x = layer(x, memory, memory_pos, query_pos, memory_attn_mask,
-                       target_attn_mask, memory_key_padding_mask,
-                       target_key_padding_mask)
-             if self.return_intermediate:
-                 intermediate.append(self.norm(x))
-         if self.norm is not None:
-             x = self.norm(x)
-             if self.return_intermediate:
-                 intermediate.pop()
-                 intermediate.append(x)
-         if self.return_intermediate:
-             return torch.stack(intermediate)
-         return x.unsqueeze(0)
-
-     def __repr__(self):
-         """str: a string that describes the module"""
-         repr_str = self.__class__.__name__
-         repr_str += f'(num_layers={self.num_layers}, '
-         repr_str += f'embed_dims={self.embed_dims}, '
-         repr_str += f'num_heads={self.num_heads}, '
-         repr_str += f'feedforward_channels={self.feedforward_channels}, '
-         repr_str += f'dropout={self.dropout}, '
-         repr_str += f'order={self.order}, '
-         repr_str += f'act_cfg={self.act_cfg}, '
-         repr_str += f'norm_cfg={self.norm_cfg}, '
-         repr_str += f'num_fcs={self.num_fcs}, '
-         repr_str += f'return_intermediate={self.return_intermediate})'
-         return repr_str
-
-
- @TRANSFORMER.register_module()
- class Transformer(nn.Module):
-     """Implements the DETR transformer.
-
-     Following the official DETR implementation, this module is copy-pasted
-     from torch.nn.Transformer with modifications:
-
-         * positional encodings are passed in MultiheadAttention
-         * extra LN at the end of encoder is removed
-         * decoder returns a stack of activations from all decoding layers
-
-     See `paper: End-to-End Object Detection with Transformers
-     <https://arxiv.org/pdf/2005.12872>`_ for details.
-
-     Args:
-         embed_dims (int): The feature dimension.
-         num_heads (int): Parallel attention heads. Same as
-             `nn.MultiheadAttention`.
-         num_encoder_layers (int): Number of `TransformerEncoderLayer`.
-         num_decoder_layers (int): Number of `TransformerDecoderLayer`.
-         feedforward_channels (int): The hidden dimension for FFNs used in both
-             encoder and decoder.
-         dropout (float): Probability of an element to be zeroed. Default 0.0.
-         act_cfg (dict): Activation config for FFNs used in both encoder
-             and decoder. Default ReLU.
-         norm_cfg (dict): Config dict for normalization used in both encoder
-             and decoder. Default layer normalization.
-         num_fcs (int): The number of fully-connected layers in FFNs, which is
-             used for both encoder and decoder.
-         pre_norm (bool): Whether the normalization layer is ordered
-             first in the encoder and decoder. Default False.
-         return_intermediate_dec (bool): Whether to return the intermediate
-             output from each TransformerDecoderLayer or only the last
-             TransformerDecoderLayer. Default False. If True, the returned
-             `hs` has shape [num_decoder_layers, bs, num_query, embed_dims].
-             If False, the returned `hs` will have shape [1, bs, num_query,
-             embed_dims].
-     """
-
-     def __init__(self,
-                  embed_dims=512,
-                  num_heads=8,
-                  num_encoder_layers=6,
-                  num_decoder_layers=6,
-                  feedforward_channels=2048,
-                  dropout=0.0,
-                  act_cfg=dict(type='ReLU', inplace=True),
-                  norm_cfg=dict(type='LN'),
-                  num_fcs=2,
-                  pre_norm=False,
-                  return_intermediate_dec=False):
-         super(Transformer, self).__init__()
-         self.embed_dims = embed_dims
-         self.num_heads = num_heads
-         self.num_encoder_layers = num_encoder_layers
-         self.num_decoder_layers = num_decoder_layers
-         self.feedforward_channels = feedforward_channels
-         self.dropout = dropout
-         self.act_cfg = act_cfg
-         self.norm_cfg = norm_cfg
-         self.num_fcs = num_fcs
-         self.pre_norm = pre_norm
-         self.return_intermediate_dec = return_intermediate_dec
-         if self.pre_norm:
-             encoder_order = ('norm', 'selfattn', 'norm', 'ffn')
-             decoder_order = ('norm', 'selfattn', 'norm', 'multiheadattn',
-                              'norm', 'ffn')
-         else:
-             encoder_order = ('selfattn', 'norm', 'ffn', 'norm')
-             decoder_order = ('selfattn', 'norm', 'multiheadattn', 'norm',
-                              'ffn', 'norm')
-         self.encoder = TransformerEncoder(num_encoder_layers, embed_dims,
-                                           num_heads, feedforward_channels,
-                                           dropout, encoder_order, act_cfg,
-                                           norm_cfg, num_fcs)
-         self.decoder = TransformerDecoder(num_decoder_layers, embed_dims,
-                                           num_heads, feedforward_channels,
-                                           dropout, decoder_order, act_cfg,
-                                           norm_cfg, num_fcs,
-                                           return_intermediate_dec)
-
-     def init_weights(self, distribution='uniform'):
-         """Initialize the transformer weights."""
-         # follow the official DETR to init parameters
-         for m in self.modules():
-             if hasattr(m, 'weight') and m.weight.dim() > 1:
-                 xavier_init(m, distribution=distribution)
-
-     def forward(self, x, mask, query_embed, pos_embed):
-         """Forward function for `Transformer`.
-
-         Args:
-             x (Tensor): Input query with shape [bs, c, h, w] where
-                 c = embed_dims.
-             mask (Tensor): The key_padding_mask used for encoder and decoder,
-                 with shape [bs, h, w].
-             query_embed (Tensor): The query embedding for decoder, with shape
-                 [num_query, c].
-             pos_embed (Tensor): The positional encoding for encoder and
-                 decoder, with the same shape as `x`.
-
-         Returns:
-             tuple[Tensor]: results of decoder containing the following tensor.
-
-                 - out_dec: Output from decoder. If return_intermediate_dec \
-                       is True output has shape [num_dec_layers, bs,
-                       num_query, embed_dims], else has shape [1, bs, \
-                       num_query, embed_dims].
-                 - memory: Output results from encoder, with shape \
-                       [bs, embed_dims, h, w].
-         """
-         bs, c, h, w = x.shape
-         x = x.flatten(2).permute(2, 0, 1)  # [bs, c, h, w] -> [h*w, bs, c]
-         pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
-         query_embed = query_embed.unsqueeze(1).repeat(
-             1, bs, 1)  # [num_query, dim] -> [num_query, bs, dim]
-         mask = mask.flatten(1)  # [bs, h, w] -> [bs, h*w]
-         memory = self.encoder(
-             x, pos=pos_embed, attn_mask=None, key_padding_mask=mask)
-         target = torch.zeros_like(query_embed)
-         # out_dec: [num_layers, num_query, bs, dim]
-         out_dec = self.decoder(
-             target,
-             memory,
-             memory_pos=pos_embed,
-             query_pos=query_embed,
-             memory_attn_mask=None,
-             target_attn_mask=None,
-             memory_key_padding_mask=mask,
-             target_key_padding_mask=None)
-         out_dec = out_dec.transpose(1, 2)
-         memory = memory.permute(1, 2, 0).reshape(bs, c, h, w)
-         return out_dec, memory
-
-     def __repr__(self):
-         """str: a string that describes the module"""
-         repr_str = self.__class__.__name__
-         repr_str += f'(embed_dims={self.embed_dims}, '
-         repr_str += f'num_heads={self.num_heads}, '
-         repr_str += f'num_encoder_layers={self.num_encoder_layers}, '
-         repr_str += f'num_decoder_layers={self.num_decoder_layers}, '
-         repr_str += f'feedforward_channels={self.feedforward_channels}, '
-         repr_str += f'dropout={self.dropout}, '
-         repr_str += f'act_cfg={self.act_cfg}, '
-         repr_str += f'norm_cfg={self.norm_cfg}, '
-         repr_str += f'num_fcs={self.num_fcs}, '
-         repr_str += f'pre_norm={self.pre_norm}, '
-         repr_str += f'return_intermediate_dec={self.return_intermediate_dec})'
-         return repr_str
-
-
- @TRANSFORMER.register_module()
- class DynamicConv(nn.Module):
-     """Implements Dynamic Convolution.
-
-     This module generates parameters for each sample and
-     uses bmm to implement 1*1 convolution. Code is modified
-     from the `official github repo <https://github.com/PeizeSun/
-     SparseR-CNN/blob/main/projects/SparseRCNN/sparsercnn/head.py#L258>`_ .
-
-     Args:
-         in_channels (int): The input feature channel.
-             Defaults to 256.
-         feat_channels (int): The inner feature channel.
-             Defaults to 64.
-         out_channels (int, optional): The output feature channel.
-             When not specified, it will be set to `in_channels`
-             by default.
-         input_feat_shape (int): The shape of input feature.
-             Defaults to 7.
-         act_cfg (dict): The activation config for DynamicConv.
-         norm_cfg (dict): Config dict for normalization layer. Default
-             layer normalization.
-     """
-
-     def __init__(self,
-                  in_channels=256,
-                  feat_channels=64,
-                  out_channels=None,
-                  input_feat_shape=7,
-                  act_cfg=dict(type='ReLU', inplace=True),
-                  norm_cfg=dict(type='LN')):
-         super(DynamicConv, self).__init__()
-         self.in_channels = in_channels
-         self.feat_channels = feat_channels
-         self.out_channels_raw = out_channels
-         self.input_feat_shape = input_feat_shape
-         self.act_cfg = act_cfg
-         self.norm_cfg = norm_cfg
-         self.out_channels = out_channels if out_channels else in_channels
-
-         self.num_params_in = self.in_channels * self.feat_channels
-         self.num_params_out = self.out_channels * self.feat_channels
-         self.dynamic_layer = nn.Linear(
-             self.in_channels, self.num_params_in + self.num_params_out)
-
-         self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1]
-         self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1]
-
-         self.activation = build_activation_layer(act_cfg)
-
-         num_output = self.out_channels * input_feat_shape**2
-         self.fc_layer = nn.Linear(num_output, self.out_channels)
-         self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1]
-
-     def forward(self, param_feature, input_feature):
-         """Forward function for `DynamicConv`.
-
-         Args:
-             param_feature (Tensor): The feature can be used
-                 to generate the parameter, has shape
-                 (num_all_proposals, in_channels).
-             input_feature (Tensor): Feature that
-                 interact with parameters, has shape
-                 (num_all_proposals, in_channels, H, W).
-
-         Returns:
-             Tensor: The output feature has shape
-                 (num_all_proposals, out_channels).
-         """
-         num_proposals = param_feature.size(0)
-         input_feature = input_feature.view(num_proposals, self.in_channels,
-                                            -1).permute(2, 0, 1)
-
-         input_feature = input_feature.permute(1, 0, 2)
-         parameters = self.dynamic_layer(param_feature)
-
-         param_in = parameters[:, :self.num_params_in].view(
-             -1, self.in_channels, self.feat_channels)
-         param_out = parameters[:, -self.num_params_out:].view(
-             -1, self.feat_channels, self.out_channels)
-
-         # input_feature has shape (num_all_proposals, H*W, in_channels)
-         # param_in has shape (num_all_proposals, in_channels, feat_channels)
-         # feature has shape (num_all_proposals, H*W, feat_channels)
-         features = torch.bmm(input_feature, param_in)
-         features = self.norm_in(features)
-         features = self.activation(features)
-
-         # param_out has shape (batch_size, feat_channels, out_channels)
-         features = torch.bmm(features, param_out)
-         features = self.norm_out(features)
-         features = self.activation(features)
-
-         features = features.flatten(1)
-         features = self.fc_layer(features)
-         features = self.fc_norm(features)
-         features = self.activation(features)
-
-         return features
-
-     def __repr__(self):
-         """str: a string that describes the module"""
-         repr_str = self.__class__.__name__
-         repr_str += f'(in_channels={self.in_channels}, '
-         repr_str += f'feat_channels={self.feat_channels}, '
-         repr_str += f'out_channels={self.out_channels_raw}, '
-         repr_str += f'input_feat_shape={self.input_feat_shape}, '
-         repr_str += f'act_cfg={self.act_cfg}, '
-         repr_str += f'norm_cfg={self.norm_cfg})'
-         return repr_str
 
 
spaces/Chintan-Donda/KKMS-KSSW-HF/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: KKMS KSSW
- emoji: 🔥
- colorFrom: blue
- colorTo: pink
- sdk: gradio
- sdk_version: 3.24.1
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/CikeyQI/Yunzai/Yunzai/lib/config/redis.js DELETED
@@ -1,76 +0,0 @@
- import cfg from "./config.js"
- import common from "../common/common.js"
- import { createClient } from "redis"
- import { exec } from "node:child_process"
-
- /**
-  * Initialize the global redis client
-  */
- export default async function redisInit() {
-   const rc = cfg.redis
-   const redisUn = rc.username || ""
-   let redisPw = rc.password ? `:${rc.password}` : ""
-   if (rc.username || rc.password)
-     redisPw += "@"
-   const redisUrl = `redis://${redisUn}${redisPw}${rc.host}:${rc.port}/${rc.db}`
-   let client = createClient({ url: redisUrl })
-
-   try {
-     logger.info(`正在连接 ${logger.blue(redisUrl)}`)
-     await client.connect()
-   } catch (err) {
-     logger.error(`Redis 错误:${logger.red(err)}`)
-
-     const cmd = "redis-server --save 900 1 --save 300 10 --daemonize yes" + await aarch64()
-     logger.info("正在启动 Redis...")
-     await execSync(cmd)
-     await common.sleep(1000)
-
-     try {
-       client = createClient({ url: redisUrl })
-       await client.connect()
-     } catch (err) {
-       logger.error(`Redis 错误:${logger.red(err)}`)
-       logger.error(`请先启动 Redis:${logger.blue(cmd)}`)
-       process.exit()
-     }
-   }
-
-   client.on("error", async err => {
-     logger.error(`Redis 错误:${logger.red(err)}`)
-     const cmd = "redis-server --save 900 1 --save 300 10 --daemonize yes" + await aarch64()
-     logger.error(`请先启动 Redis:${cmd}`)
-     process.exit()
-   })
-
-   /** Global variable redis */
-   global.redis = client
-   logger.info("Redis 连接成功")
-   return client
- }
-
- async function aarch64() {
-   if (process.platform == "win32")
-     return ""
-   /** Check the CPU architecture */
-   const arch = await execSync("uname -m")
-   if (arch.stdout && arch.stdout.includes("aarch64")) {
-     /** Check the redis version */
-     let v = await execSync("redis-server -v")
-     if (v.stdout) {
-       v = v.stdout.match(/v=(\d)./)
-       /** Ignore the ARM warning */
-       if (v && v[1] >= 6)
-         return " --ignore-warnings ARM64-COW-BUG"
-     }
-   }
-   return ""
- }
-
- function execSync (cmd) {
-   return new Promise((resolve, reject) => {
-     exec(cmd, (error, stdout, stderr) => {
-       resolve({ error, stdout, stderr })
-     })
-   })
- }
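The deleted redis.js above follows a connect, launch-redis-server, then retry flow. As a rough illustration of the same pattern (written in Python with redis-py only to keep the examples in this document in one language; the function name and connection defaults are made up for the sketch):

```python
import subprocess
import time

import redis  # redis-py, assumed available for this illustration


def redis_init(host="127.0.0.1", port=6379, db=0):
    """Connect to Redis; on failure, try to start redis-server once and retry."""
    client = redis.Redis(host=host, port=port, db=db)
    try:
        client.ping()
    except redis.exceptions.ConnectionError:
        # Same flags the deleted script passes: periodic saves, run as a daemon.
        subprocess.run(["redis-server", "--save", "900", "1",
                        "--save", "300", "10", "--daemonize", "yes"], check=False)
        time.sleep(1)
        client = redis.Redis(host=host, port=port, db=db)
        client.ping()  # raises again if Redis is still unreachable
    return client
```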
 
 
spaces/Cletrason/Cletrason-toad-mario-movie/config.py DELETED
@@ -1 +0,0 @@
- save_memory = False
 
 
spaces/Crow34/Comicdraw/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Comicdraw
- emoji: 💻
- colorFrom: red
- colorTo: green
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- license: openrail
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/DESUCLUB/BLLAMA/README.md DELETED
@@ -1,20 +0,0 @@
- ---
- license: apache-2.0
- title: 'BLLAMA: ALPACA with BLIP 2'
- sdk: gradio
- emoji: 🔥
- colorFrom: red
- colorTo: purple
- pinned: true
- app_file: generate.py
- ---
- ## 🦙🌲🤏 BLLAMA: A BLIP2 + ALPACA-LORA Pipeline
-
- # Training
- This is just a pipeline involving the use of both ALPACA and BLIP-2, without any prior finetuning. You can refer to the details in ALPACA_LORA's repo [here](https://github.com/tloen/alpaca-lora) and the BLIP-2 training details on their GitHub page [here](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). For the pipeline, I have used the BLIP-2 model found on HuggingSpace [here](https://huggingface.co/spaces/Salesforce/BLIP2)
-
-
-
- ## Acknowledgements
- Once again, I would like to credit the Salesforce team for creating BLIP2, as well as tloen, the original creator of alpaca-lora. I would also like to credit Meta, the original
- creators of LLAMA, as well as the people behind the HuggingFace implementation of ALPACA