parquet-converter committed
Commit b83aaa1 · 1 Parent(s): 16dd93d

Update parquet files (step 43 of 296)

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md +0 -19
  2. spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md +0 -114
  3. spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md +0 -12
  4. spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md +0 -40
  5. spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md +0 -129
  6. spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md +0 -142
  7. spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md +0 -100
  8. spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md +0 -117
  9. spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md +0 -146
  10. spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md +0 -100
  11. spaces/1toTree/lora_test/ppdiffusers/ppnlp_patch_utils.py +0 -509
  12. spaces/4Taps/SadTalker/src/audio2pose_models/cvae.py +0 -149
  13. spaces/812vaishnavi/gradio-land-cover-mapping/app.py +0 -63
  14. spaces/A00001/bingothoo/src/components/markdown.tsx +0 -9
  15. spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130.md +0 -30
  16. spaces/ADobrovsky/Plant_Disease_Classification_Project/README.md +0 -12
  17. spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_537227KB.py +0 -126
  18. spaces/AIFILMS/StyleGANEX/models/encoders/psp_encoders.py +0 -357
  19. spaces/AIFILMS/StyleGANEX/scripts/calc_losses_on_images.py +0 -84
  20. spaces/AIGC-Audio/Make_An_Audio/ldm/lr_scheduler.py +0 -98
  21. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py +0 -2861
  22. spaces/AUBMC-AIM/MammoGANesis/app.py +0 -31
  23. spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py +0 -280
  24. spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js +0 -29
  25. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py +0 -62
  26. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js +0 -8
  27. spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py +0 -3
  28. spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py +0 -68
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py +0 -495
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md +0 -118
  31. spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md +0 -25
  32. spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py +0 -390
  33. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh +0 -24
  34. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py +0 -107
  35. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py +0 -27
  36. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py +0 -916
  37. spaces/Anonymous-sub/Rerender/src/video_util.py +0 -100
  38. spaces/Antonpy/stable-diffusion-license/app.py +0 -14
  39. spaces/ArnePan/German-LLM-leaderboard/README.md +0 -13
  40. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py +0 -331
  41. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py +0 -330
  42. spaces/Ayaka2022/anime-aesthetic-predict/app.py +0 -28
  43. spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py +0 -126
  44. spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md +0 -71
  45. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py +0 -207
  46. spaces/BilalSardar/Black-N-White-To-Color/README.md +0 -13
  47. spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h +0 -84
  48. spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h +0 -154
  49. spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py +0 -470
  50. spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py +0 -14
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md DELETED
@@ -1,19 +0,0 @@
- <br />
- <h1>Crack Microsoft Excel for iPad: How to Download and Use the Spreadsheet App for Free</h1>
- <p>Microsoft Excel is one of the most popular and powerful spreadsheet applications that can help you to create, edit and analyze data, charts, graphs and more. Excel is part of the Microsoft Office suite that also includes Word, PowerPoint and Outlook.</p>
- <h2>microsoft excel for ipad free download full version crack</h2><br /><p><b><b>Download</b> &#10026; <a href="https://byltly.com/2uKzSG">https://byltly.com/2uKzSG</a></b></p><br /><br />
- <p>Microsoft Excel is available for iPad and iPhone users as a free download from the App Store. However, the free version of Excel has some limitations and restrictions. You can only view and print Excel files, but you cannot create or edit them. You also cannot access some of the advanced features and functions of Excel.</p>
- <p>If you want to use Excel on your iPad without any limitations, you have to buy a subscription to Microsoft 365 (formerly Office 365), which is a cloud-based service that gives you access to the full versions of the Office apps on multiple devices. The price of Microsoft 365 varies depending on the plan you choose, but it starts from $6.99 per month or $69.99 per year for a personal plan.</p>
- <p>But what if you don't want to pay for Microsoft 365? Is there a way to download and use Excel on your iPad for free? The answer is yes, but it is not legal or ethical. Some people have managed to crack Microsoft Excel for iPad and make it available for free download on the internet. A crack is a program that modifies or bypasses the security features of a software to make it work without a license or activation.</p>
- <p>Cracking Microsoft Excel for iPad is not only illegal but also risky. You may face legal consequences if you are caught using a cracked software. You may also expose your iPad to viruses, malware, spyware and other threats that may harm your data and privacy. Moreover, you may not get the full functionality and reliability of Excel if you use a cracked version.</p>
- <p></p>
- <p>Therefore, we do not recommend or endorse cracking Microsoft Excel for iPad or any other software. It is better to use a legitimate and authorized version of Excel that can guarantee you quality, accuracy and security. If you cannot afford to buy Microsoft 365, you can try some of the free or cheaper alternatives that are available online.</p>
- <p>Some of the free or cheaper alternatives to Microsoft Excel for iPad are:</p>
- <ul>
- <li><b>Google Sheets</b>: This is a web-based spreadsheet app that is part of the Google Workspace suite that also includes Docs, Slides and Gmail. You can create, edit and share spreadsheets online with Google Sheets. You can also access your spreadsheets offline with the Google Sheets app for iOS.</li>
- <li><b>Apple Numbers</b>: This is a spreadsheet app that is part of the iWork suite that also includes Pages and Keynote. You can create, edit and share spreadsheets with Apple Numbers. You can also sync your spreadsheets across your devices with iCloud.</li>
- <li><b>Zoho Sheet</b>: This is a web-based spreadsheet app that is part of the Zoho Office suite that also includes Writer, Show and Mail. You can create, edit and share spreadsheets with Zoho Sheet. You can also collaborate with others in real-time with Zoho Sheet.</li>
- </ul>
- <p>These are some of the free or cheaper alternatives to Microsoft Excel for iPad that you can use for creating and editing spreadsheets on your iPad. However, they may not have all the features and capabilities of Excel and they may require an internet connection to work.</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md DELETED
@@ -1,114 +0,0 @@
-
- <h1>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]: A Must-Have App for Your Eyes</h1>
- <p>If you are looking for an app that can protect your eyes from the harmful blue light emitted by your smartphone or tablet, you should try <strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong>. This app is designed to adjust your screen color to reduce the blue light and help your eyes relax, making it easier for you to fall asleep at night.</p>
- <p>In this article, we will tell you why you need this app, what features it offers, and how to download and install it on your device.</p>
- <h2>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2><br /><p><b><b>Download File</b> &#9889; <a href="https://imgfil.com/2uy1Du">https://imgfil.com/2uy1Du</a></b></p><br /><br />
-
- <h2>Why You Need Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p>Blue light is a type of light that has a short wavelength and high energy. It is present in natural sunlight, but also in artificial sources such as LED lights, computer screens, and mobile devices. While blue light has some benefits, such as boosting alertness and mood, it also has some drawbacks, especially when exposed to it for long periods.</p>
- <p>Studies have shown that blue light can cause eye strain, headaches, blurred vision, dry eyes, and even damage the retina. It can also disrupt the natural circadian rhythm of the body, which regulates the sleep-wake cycle. This can lead to insomnia, fatigue, mood swings, and impaired cognitive function.</p>
- <p>That's why you need <strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong>, an app that can filter out the blue light from your screen and make it more comfortable for your eyes. By using this app, you can prevent eye problems, improve your sleep quality, and enhance your overall well-being.</p>
-
- <h2>What Features Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Offers</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> is a simple but effective app that has many features to suit your needs. Here are some of them:</p>
- <ul>
- <li><strong>Free Screen Filter App to Protect Your Eyes:</strong> You can reduce the strain on your eyes easily with this app. It is free to download and use, and it doesn't drain your battery or memory.</li>
- <li><strong>Screen Filter with Natural Color:</strong> This app's filter has a natural color so you can read news, emails, and websites clearly. It doesn't dim the screen but adjusts the screen color to reduce blue light which causes strain on your eyes.</li>
- <li><strong>Auto Mode:</strong> This mode automatically adjusts the screen color according to the external light to protect your eyes. You don't have to worry about changing the settings manually.</li>
- <li><strong>Schedule Mode:</strong> This mode allows you to turn on or off the screen filter according to a specific time. You can set it up according to your preference and routine.</li>
- <li><strong>Screenshots without Screen Filter:</strong> This feature removes the screen filter from the screenshots with the image processing AI technology. You can take clear screenshots without any distortion.</li>
- <li><strong>Easy Operation:</strong> It is easy to turn on or off the screen filter with just one tap. You can also adjust the opacity of the filter and choose from 7 different filter colors.</li>
- <li><strong>Startup Automatically:</strong> You can choose to launch this app on startup so you don't have to open it every time you use your device.</li>
- <li><strong>Reliable App:</strong> This app's developer has been registered as an official developer by an independent organization in Japan. You can trust this app's quality and safety.</li>
- </ul>
-
- <h2>How to Download and Install Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p>If you want to download and install <strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong>, you can follow these simple steps:</p>
- <ol>
- <li>Click on the download link below to get the APK file of this app.</li>
- <li>Allow unknown sources on your device by going to Settings > Security > Unknown Sources.</li>
- <li>Locate the downloaded APK file on your device and tap on it to start the installation process.</li>
- <li>Follow the instructions on the screen to complete the installation.</li>
- <li>Launch the app and enjoy its benefits.</li>
- </ol>
-
- <h2>Conclusion</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> is a must-have app for anyone who uses their smartphone or tablet frequently. It can protect your eyes from blue light, reduce eye strain, improve sleep quality, and enhance your overall well-being.</p>
- <p>You can download this app for free from the link below and start using it right away. You will notice the difference in your eyes and your mood after using this app.</p>
- <p></p>
- <h2>How Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Works</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> works by applying a screen filter that changes the color temperature of your screen. The color temperature is a measure of how warm or cool the light is, and it affects how your eyes perceive the colors on the screen.</p>
- <p>The app allows you to choose from different color temperatures, ranging from 1700K to 2500K. The lower the color temperature, the warmer and redder the light is, and the more blue light it filters out. The higher the color temperature, the cooler and bluer the light is, and the less blue light it filters out.</p>
- <p>You can also customize the intensity of the filter by adjusting the opacity of the filter. The higher the opacity, the stronger the filter is, and the more blue light it blocks. The lower the opacity, the weaker the filter is, and the less blue light it blocks.</p>
- <p>The app also has an auto mode that automatically adjusts the color temperature and opacity of the filter according to the ambient light. This way, you don't have to manually change the settings every time you move to a different environment.</p>
-
- <h2>What Users Say About Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> has received many positive reviews from users who have tried it. Here are some of their testimonials:</p>
- <ul>
- <li>"I love this app! It really helps me sleep better at night and reduces my eye strain during the day. I can feel the difference when I use it and when I don't."</li>
- <li>"This app is amazing! It has so many options to choose from and it's very easy to use. I like how it automatically adjusts to the light around me. It makes my screen look more natural and comfortable."</li>
- <li>"This app is a lifesaver! I have sensitive eyes and I often get headaches from staring at my screen for too long. This app helps me prevent that and makes my eyes feel more relaxed."</li>
- <li>"This app is awesome! It's very effective and simple to use. I can read and browse without any problems with this app on. It also helps me fall asleep faster at night."</li>
- <li>"This app is great! It's very user-friendly and customizable. I can choose the color and intensity of the filter that suits me best. It also doesn't affect my screenshots or other apps."</li>
- </ul>
- <h2>How to Use Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> is very easy to use and has a user-friendly interface. Here are some steps to use this app:</p>
- <ol>
- <li>Download and install the app from the link below or from the Google Play Store.</li>
- <li>Open the app and grant the necessary permissions for it to work properly.</li>
- <li>Select the filter color and opacity that you prefer from the main screen.</li>
- <li>Tap on the switch button to turn on or off the filter.</li>
- <li>You can also access the app settings from the menu icon on the top right corner of the screen.</li>
- <li>From there, you can enable or disable the auto mode, schedule mode, startup mode, notification icon, and other options.</li>
- <li>You can also check your eye health status and get some tips on how to take care of your eyes.</li>
- </ol>
-
- <h2>Pros and Cons of Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> is a great app that has many benefits for your eyes and your health. However, it also has some drawbacks that you should be aware of. Here are some pros and cons of this app:</p>
- <ul>
- <li><strong>Pros:</strong></li>
- <ul>
- <li>It can protect your eyes from blue light and reduce eye strain.</li>
- <li>It can improve your sleep quality and prevent insomnia.</li>
- <li>It can enhance your mood and productivity.</li>
- <li>It has a natural color filter that doesn't affect the readability of the screen.</li>
- <li>It has an auto mode that adjusts the filter according to the ambient light.</li>
- <li>It has a schedule mode that allows you to set up a specific time for the filter.</li>
- <li>It has a screenshot feature that removes the filter from the screenshots.</li>
- <li>It has a simple and easy operation with just one tap.</li>
- <li>It has a reliable and safe developer.</li>
- </ul>
- <li><strong>Cons:</strong></li>
- <ul>
- <li>It may not be compatible with some devices or apps.</li>
- <li>It may cause some color distortion or flickering on some screens.</li>
- <li>It may not be effective for everyone or for every situation.</li>
- </ul>
- </ul>
-
- <h2>Frequently Asked Questions about Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</h2>
- <p>If you have any questions or doubts about <strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong>, you can check out some of these frequently asked questions and their answers:</p>
- <ul>
- <li><strong>Q: Is this app safe to use?</strong></li>
- <li>A: Yes, this app is safe to use and doesn't contain any viruses or malware. It also doesn't collect any personal data or interfere with other apps.</li>
-
- <li><strong>Q: Does this app affect my battery life?</strong></li>
- <li>A: No, this app doesn't affect your battery life significantly. It only adjusts the color temperature of your screen and doesn't consume much power or memory.</li>
-
- <li><strong>Q: Does this app work on all devices?</strong></li>
- <li>A: This app works on most devices that run on Android 4.4 or higher. However, some devices or apps may not support this app or may have some compatibility issues.</li>
-
- <li><strong>Q: Can I use this app with other apps?</strong></li>
- <li>A: Yes, you can use this app with most apps that don't have their own screen filters or brightness settings. However, some apps may override this app's filter or cause some conflicts.</li>
-
- <li><strong>Q: How can I contact the developer of this app?</strong></li>
- <li>A: You can contact the developer of this app by sending an email to [email protected] or by visiting their website at https://hardy-infinity.com/</li>
-
- </ul>
- <h2>Conclusion</h2>
- <p><strong>Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]</strong> is an app that can help you protect your eyes from the harmful blue light emitted by your smartphone or tablet. It can adjust your screen color to reduce the blue light and help your eyes relax, making it easier to fall asleep at night.</p>
- <p>This app has many features to suit your needs, such as a natural color filter, an auto mode, a schedule mode, a screenshot feature, and an easy operation. It is also free to download and use, and it doesn't affect your battery life or memory.</p>
- <p>This app is a must-have for anyone who uses their device frequently and wants to prevent eye problems, improve sleep quality, and enhance their overall well-being. You can download this app from the link below or from the Google Play Store and start using it right away.</p>
- <p>You will notice the difference in your eyes and your mood after using this app. Try it now and see for yourself!</p> 3cee63e6c2<br />
- <br />
- <br />
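The "How It Works" paragraphs in the deleted file above boil down to one computation: map a colour temperature (1700K to 2500K) plus an opacity slider to a tinted full-screen overlay. As a rough illustration of how such an overlay colour can be derived, here is a minimal Python sketch using Tanner Helland's published Kelvin-to-RGB curve fit; the function names and the 60% example opacity are our own assumptions, not anything taken from the app:

```python
import math

def kelvin_to_rgb(temp_k: float) -> tuple[int, int, int]:
    """Approximate sRGB colour of a black body at temp_k Kelvin
    (Tanner Helland's curve fit, usable from ~1000K to ~40000K)."""
    t = temp_k / 100.0
    r = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

def filter_overlay(temp_k: float, opacity: float) -> tuple[int, int, int, int]:
    """RGBA for a full-screen tint layer: warm colour plus user-chosen alpha."""
    r, g, b = kelvin_to_rgb(temp_k)
    return r, g, b, round(opacity * 255)

# Warmest setting the article mentions, at 60% strength:
print(filter_overlay(1700, 0.60))  # -> (255, 121, 0, 153)
```

Note that at 1700K the blue channel is driven to zero, which is exactly the "warmer filters block more blue light" behaviour the article describes.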
spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md DELETED
@@ -1,12 +0,0 @@
- <h2>download film 300 spartan sub indonesia 720p</h2><br /><p><b><b>Download File</b> ->>->>->> <a href="https://imgfil.com/2uy0W7">https://imgfil.com/2uy0W7</a></b></p><br /><br />
- <br />
- Free Download Movie 300 ( 2006) BluRay 720p+ Subtitle Indonesia Link Download 300 (2006) BluRay 720p 750MB Via Google Drive | Via Acefile BluRay 1080p 1.5GB. Film 300 (300: The Last Storm) (2006) - watch online, download free - Cinema.
- Download 300 (2006) for free.
- Category: Download Movies.
- Title: 300 (2006) Genre: Military, Action, Drama, Adventure Year of release: 2006 Director: Rob Cohen Cast: Tom Cruise.
- Film 300 (2006) - watch online, download torrent.
- Film 300 (2006) - watch online, download torrent / torrent.
- Download movie 300 - 300: The Last Assault (2006) torrent in good. 8a78ff9644<br />
- <br />
- <br />
- <p></p>
spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md DELETED
@@ -1,40 +0,0 @@
- <!-- ⚠️ At the moment any non-essential commands are not being merged.
- If you want to add non-essential commands to Auto-GPT, please create a plugin instead.
- We are expecting to ship plugin support within the week (PR #757).
- Resources:
- * https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template
- -->
-
- <!-- 📢 Announcement
- We've recently noticed an increase in pull requests focusing on combining multiple changes. While the intentions behind these PRs are appreciated, it's essential to maintain a clean and manageable git history. To ensure the quality of our repository, we kindly ask you to adhere to the following guidelines when submitting PRs:
-
- Focus on a single, specific change.
- Do not include any unrelated or "extra" modifications.
- Provide clear documentation and explanations of the changes made.
- Ensure diffs are limited to the intended lines — no applying preferred formatting styles or line endings (unless that's what the PR is about).
- For guidance on committing only the specific lines you have changed, refer to this helpful video: https://youtu.be/8-hSNHHbiZg
-
- By following these guidelines, your PRs are more likely to be merged quickly after testing, as long as they align with the project's overall direction. -->
-
- ### Background
- <!-- Provide a concise overview of the rationale behind this change. Include relevant context, prior discussions, or links to related issues. Ensure that the change aligns with the project's overall direction. -->
-
- ### Changes
- <!-- Describe the specific, focused change made in this pull request. Detail the modifications clearly and avoid any unrelated or "extra" changes. -->
-
- ### Documentation
- <!-- Explain how your changes are documented, such as in-code comments or external documentation. Ensure that the documentation is clear, concise, and easy to understand. -->
-
- ### Test Plan
- <!-- Describe how you tested this functionality. Include steps to reproduce, relevant test cases, and any other pertinent information. -->
-
- ### PR Quality Checklist
- - [ ] My pull request is atomic and focuses on a single change.
- - [ ] I have thoroughly tested my changes with multiple different prompts.
- - [ ] I have considered potential risks and mitigations for my changes.
- - [ ] I have documented my changes clearly and comprehensively.
- - [ ] I have not snuck in any "extra" small tweaks changes <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->
-
- <!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->
-
- <!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guidelines. -->
spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md DELETED
@@ -1,129 +0,0 @@
- <br />
- <h1>Bhop Pro Apkfun: A Fun and Challenging Game for Android Users</h1>
- <p>Do you love jumping games? Do you want to test your skills and reflexes in a fast-paced and realistic environment? Do you want to customize your character with cool skins and accessories? If you answered yes to any of these questions, then you should try Bhop Pro Apkfun.</p>
- <p>Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. Bhop Pro is a game mode where players have to jump on blocks and use air strafing to gain more speed and complete the map as fast as possible. It is inspired by the bhop style of jumping in games like Counter-Strike and Half-Life.</p>
- <h2>bhop pro apkfun</h2><br /><p><b><b>Download</b> &#187; <a href="https://jinyurl.com/2uNN9y">https://jinyurl.com/2uNN9y</a></b></p><br /><br />
- <h2>What is Bhop Pro?</h2>
- <p>Bhop Pro is a portable mobile bhop style jumping game that allows you to enjoy the realistic bunny hop experience on your android device. You can choose from multiple game modes, such as speedrun, freestyle, practice, and multiplayer, and try out various maps with different layouts and obstacles. You can also compete with other players and increase your ranks, or just have fun jumping around and exploring the maps.</p>
- <h3>A game mode where players have to jump on blocks</h3>
- <p>Bhop Pro is based on a game mode that originated in games like Counter-Strike and Half-Life, where players have to jump on blocks and use air strafing to gain more speed and momentum. Air strafing is a technique where players move their mouse left or right while holding the corresponding movement key (A or D) in the air, which allows them to change their direction and velocity without losing speed. This way, players can jump faster and farther than normal, and also perform tricks and stunts.</p>
- <h3>A portable mobile bhop style jumping game</h3>
- <p>Bhop Pro is designed to be a mobile-friendly version of the bhop game mode, which means you can play it anytime and anywhere on your android device. You don't need a keyboard or a mouse to play Bhop Pro, as it has simple and accessible touch controls that let you jump and turn with ease. You can also adjust the sensitivity and the layout of the buttons according to your preference.</p>
- <h3>A realistic bunny hop game for android</h3>
- <p>Bhop Pro is not just a simple jumping game, but a realistic bunny hop simulator that uses advanced in-game physics to create dynamic movements and animations. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.</p>
- <h2>What are the features of Bhop Pro?</h2>
- <p>Bhop Pro has many features that make it an enjoyable and challenging game for android users. Here are some of them:</p>
- <h3>Simple and accessible touch controls</h3>
- <p>Bhop Pro has easy-to-use touch controls that let you jump and turn with just a tap or a swipe on the screen. You can also customize the size, position, and opacity of the buttons to suit your liking. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.</p>
- <h3>Dynamic movements with realistic in-game physics</h3>
- <p>Bhop Pro has realistic in-game physics that create dynamic movements and animations for your character. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.</p>
- <p>bhop pro apk download latest version<br />
- bhop pro mod apk unlimited money<br />
- bhop pro online multiplayer mode<br />
- bhop pro game tips and tricks<br />
- bhop pro apk for pc windows 10<br />
- bhop pro simulator free download<br />
- bhop pro hack apk no root<br />
- bhop pro best maps and skins<br />
- bhop pro gameplay video review<br />
- bhop pro app store ios<br />
- bhop pro cheats and codes<br />
- bhop pro android game requirements<br />
- bhop pro bunny hop fps mode<br />
- bhop pro update new features<br />
- bhop pro guide how to play<br />
- bhop pro apk mirror link<br />
- bhop pro premium apk unlocked<br />
- bhop pro reddit community forum<br />
- bhop pro wiki information page<br />
- bhop pro support contact email<br />
- bhop pro alternatives similar games<br />
- bhop pro ranking leaderboard system<br />
- bhop pro training mode practice<br />
- bhop pro apk pure safe download<br />
- bhop pro feedback and suggestions</p>
- <h3>Multiple game modes to try out</h3>
- <p>Bhop Pro has multiple game modes that offer different challenges and experiences for you. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. In speedrun mode, you have to complete the map as fast as possible and earn points and rewards. In freestyle mode, you can jump around freely without any time limit or pressure. In practice mode, you can learn how to bhop better by using checkpoints and guides. In multiplayer mode, you can join online servers and play with other players from around the world.</p>
- <h3>Various maps with interesting setups</h3>
- <p>Bhop Pro has various maps with different layouts and obstacles that test your skills and reflexes. You can find maps with different themes, such as city, desert, forest, space, etc., each with its own unique design and atmosphere. You can also find maps with different difficulty levels, ranging from easy to hard, depending on how confident you are in your bhop abilities.</p>
- <h3>Compete and increase your ranks</h3>
- <p>Bhop Pro has a ranking system that lets you compete with other players and increase your ranks. You can see your rank and stats on the leaderboard and compare them with other players. You can also earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.</p>
- <h3>Feel free to customize your characters with interesting outfits and accessories</h3>
- <p>Bhop Pro has a customization system that lets you personalize your character with cool skins and accessories. You can choose from different outfits, such as hoodies, jackets, shirts, pants, shoes, etc., each with different colors and styles. You can also choose from different accessories, such as hats, glasses, masks, headphones, etc., each with different effects and animations. You can mix and match different items to create your own unique look.</p>
- <h3>Awesome boost case and unlockable items</h3>
- <p>Bhop Pro has a boost case system that lets you get more items and rewards by opening cases. You can get cases by playing the game, completing missions, or watching ads. You can also buy cases with real money if you want to. Each case contains a random item, such as a skin, an accessory, a booster, or a coin. You can use these items to enhance your gameplay or customize your character.</p>
- <h3>Have fun sharing your awesome in-game moments</h3>
- <p>Bhop Pro has a sharing feature that lets you record and share your awesome in-game moments with your friends or the world. You can capture screenshots or videos of your best jumps, tricks, stunts, or fails, and save them to your device or upload them to social media platforms. You can also watch videos of other players and learn from their skills or laugh at their mistakes.</p>
- <h2>How to download and install Bhop Pro Apkfun?</h2>
- <p>Bhop Pro Apkfun is a modified version of Bhop Pro that allows you to enjoy the game without any limitations or restrictions. You can download and install Bhop Pro Apkfun easily by following these steps:</p>
- <h3>Visit the official website of Apkfun or use the link </h3>
- <p>The first step is to visit the official website of Apkfun, which is a trusted source for downloading apk files for android games and apps. You can also use the link to go directly to the download page of Bhop Pro Apkfun.</p>
- <h3>Click on the download button and wait for the file to be downloaded</h3>
- <p>The next step is to click on the download button on the website and wait for the file to be downloaded to your device. The file size is about 100 MB, so it may take some time depending on your internet speed.</p>
- <h3>Enable unknown sources in your device settings</h3>
- <p>The third step is to enable unknown sources in your device settings, which will allow you to install apk files from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.</p>
- <h3>Locate the downloaded file and tap on it to install it</h3>
- <p>The final step is to locate the downloaded file on your device and tap on it to install it. You may see a warning message asking you to confirm the installation, just tap on yes or install. The installation process may take a few seconds or minutes depending on your device.</p>
- <h3>Enjoy playing Bhop Pro on your android device</h3>
- <p>Congratulations! You have successfully downloaded and installed Bhop Pro Apkfun on your android device. Now you can enjoy playing Bhop Pro without any limitations or restrictions.</p>
- <h2>How to play Bhop Pro?</h2>
- <p>Bhop Pro is easy to play but hard to master. Here are some basic steps on how to play Bhop Pro:</p>
- <h3>Choose a game mode and a map from the menu</h3>
- <p>The first thing you need to do is choose a game mode and a map from the menu. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. You can also choose from various maps with different themes, layouts, and difficulty levels.</p>
- <h3>Tap on the screen to jump and swipe left or right to turn</h3>
- <p>The next thing you need to do is tap on the screen to jump and swipe left or right to turn. You can also customize the size, position, and opacity of the buttons according to your preference. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.</p>
- <h3>Use air strafing to gain more speed and avoid losing control</h3>
- <p>The most important thing you need to do is use air strafing to gain more speed and avoid losing control. Air strafing is a technique where you move your mouse left or right while holding the corresponding movement key (A or D) in the air, which allows you to change your direction and velocity without losing speed. This way, you can jump faster and farther than normal, and also perform tricks and stunts.</p>
- <h3>Complete the map as fast as possible and earn points and rewards</h3>
- <p>The final thing you need to do is complete the map as fast as possible and earn points and rewards. You can see your time, speed, and score on the top of the screen. You can also see your rank and level on the bottom of the screen. You can earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.</p>
- <h2>What are some tips and tricks for Bhop Pro?</h2>
- <p>Bhop Pro is a fun and challenging game that requires skill and practice to master. Here are some tips and tricks that can help you improve your bhop performance:</p>
- <h3>Practice on easy maps before moving on to harder ones</h3>
- <p>One of the best ways to learn how to bhop is to practice on easy maps before moving on to harder ones. Easy maps have fewer obstacles, wider blocks, and simpler layouts, which make them ideal for beginners. You can use these maps to get familiar with the controls, the physics, and the techniques of bhop. You can also use the practice mode to use checkpoints and guides to help you along the way.</p>
- <h3>Watch videos of other players and learn from their techniques</h3>
- <p>Another way to learn how to bhop is to watch videos of other players and learn from their techniques. You can find videos of bhop pro players on YouTube or other platforms, where they showcase their skills and tricks on different maps and modes. You can watch how they jump, turn, strafe, boost, and complete the map in record time. You can also try to replicate their moves or create your own style.</p>
- <h3>Use portals to skip some parts of the map or reach hidden areas</h3>
- <p>A useful tip for bhop is to use portals to skip some parts of the map or reach hidden areas. Portals are blue or orange circles that teleport you to another location on the map. You can find portals on some maps, usually near walls or corners. You can use portals to save time, avoid obstacles, or discover secrets.</p>
- <h3>Use boosters wisely to get an extra speed boost or jump higher</h3>
- <p>A helpful tip for bhop is to use boosters wisely to get an extra speed boost or jump higher. Boosters are green or yellow arrows that give you a temporary boost when you touch them. You can find boosters on some maps, usually near ramps or gaps. You can use boosters to increase your speed, jump higher, or perform stunts.</p>
- <h3>Experiment with different skins and accessories to find your favorite style</h3>
- <p>A fun tip for bhop is to experiment with different skins and accessories to find your favorite style. Skins are outfits that change the appearance of your character, such as hoodies, jackets, shirts, pants, shoes, etc. Accessories are items that add effects or animations to your character, such as hats, glasses, masks, headphones, etc. You can mix and match different items to create your own unique look.</p>
- <h2>What are some reviews of Bhop Pro?</h2>
- <p>Bhop Pro has received mixed reviews from users who have played it on different platforms. Here are some examples of positive and negative reviews from Google Play Store and Steam:</p>
- <h3>Positive reviews from Google Play Store</h3>
- <table>
- <tr><th>User</th><th>Rating</th><th>Review</th></tr>
- <tr><td>Mohammed Alshamsi</td><td>5 stars</td><td>"I think it is the best game for bhop on android or iOS because it is like csgo surfing but on phone or iPad.U can also unlock skins." </td></tr>
- <tr><td>Jayden Lee</td><td>5 stars</td><td>"This game is amazing. It has great graphics, gameplay, and controls. It is very addictive and fun. I recommend this game to anyone who likes parkour or bhop."</td></tr>
- <tr><td>Alexander Smith</td><td>5 stars</td><td>"This is a very good game for people who want to learn how to bhop or just have fun. The maps are well designed and challenging. The customization options are also cool."</td></tr>
- </table>
- <h3>Negative reviews from Steam</h3>
- <table>
- <tr><th>User</th><th>Rating</th><th>Review</th></tr>
- <tr><td>Mr. Potato</td><td>1 star</td><td>"This game is a scam. It is a copy of another game called bhop GO. It has no originality, no updates, no support, no multiplayer, no nothing. Do not buy this game."</td></tr>
- <tr><td>Bob the Builder</td><td>1 star</td><td>"This game is terrible. It has bad graphics, bad physics, bad controls, bad maps, bad everything. It is a waste of money and time. Do not play this game."</td></tr>
- <tr><td>John Doe</td><td>1 star</td><td>"This game is buggy. It crashes all the time, it lags, it freezes, it glitches. It is unplayable and frustrating. Do not download this game."</td></tr>
- </table>
- <h2>Conclusion</h2>
- <p>Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. It has many features that make it an enjoyable and realistic game, such as simple and accessible touch controls, dynamic movements with realistic in-game physics, multiple game modes to try out, various maps with interesting setups, compete and increase your ranks, feel free to customize your characters with interesting outfits and accessories, awesome boost case and unlockable items, and have fun sharing your awesome in-game moments. You can download and install Bhop Pro Apkfun easily by following the steps mentioned above. You can also improve your bhop performance by following the tips and tricks mentioned above. Bhop Pro Apkfun has received mixed reviews from users who have played it on different platforms, so you may want to check them out before playing the game.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Bhop Pro Apkfun:</p>
- <h3>Q: Is Bhop Pro Apkfun safe to download and install?</h3>
- <p>A: Bhop Pro Apkfun is safe to download and install as long as you use the official website of Apkfun or the link provided above. Apkfun is a trusted source for downloading apk files for android games and apps. However, you should always be careful when downloading and installing apk files from unknown sources, as they may contain viruses or malware that can harm your device.</p>
- <h3>Q: Is Bhop Pro Apkfun free to play?</h3>
- <p>A: Bhop Pro Apkfun is free to play, but it contains ads and in-app purchases that can enhance your gameplay or customize your character. You can disable ads by turning off your internet connection or by buying the premium version of the game. You can also buy cases with real money if you want to get more items and rewards.</p>
- <h3>Q: How can I play Bhop Pro with my friends?</h3>
- <p>A: You can play Bhop Pro with your friends by joining the multiplayer mode of the game. You can either create your own server or join an existing one from the server list. You can also invite your friends to join your server by sending them a link or a code. You can chat with your friends and other players in the game using the chat feature.</p>
- <h3>Q: How can I contact the developers of Bhop Pro?</h3>
- <p>A: You can contact the developers of Bhop Pro by sending them an email at [email protected] or by visiting their Facebook page at https://www.facebook.com/bhoppro/. You can also leave feedback or report bugs on their Google Play Store page or their Steam page.</p>
- <h3>Q: What are some other games like Bhop Pro?</h3>
- <p>A: Some other games like Bhop Pro are:</p>
- <ul>
- <li>Bhop GO - A similar game that also features bhop style jumping on android devices.</li>
- <li>KZ - A game mode in Counter-Strike that focuses on climbing maps using advanced movement techniques.</li>
- <li>Surf - A game mode in Counter-Strike that involves sliding on ramps and flying through the air.</li>
- <li>Parkour Simulator 3D - A game that simulates parkour movements and stunts on android devices.</li>
- <li>Mirrors Edge - A game that combines parkour and action in a futuristic setting.</li>
- </ul></p> 401be4b1e0<br />
- <br />
- <br />
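Of everything in the deleted file above, the air-strafing passages are the genuinely technical part. In Quake-lineage movement code, only the velocity component along the strafe ("wish") direction is capped while airborne, so acceleration applied perpendicular to the current velocity keeps adding speed. The Python sketch below illustrates that rule; all constants are assumed values for illustration, and this is not Bhop Pro's actual source:

```python
import math

AIR_ACCEL = 10.0     # air acceleration constant (assumed)
AIR_WISH_CAP = 0.7   # cap on speed along the wish direction while airborne (assumed)
TICK = 1.0 / 64      # physics tick length: 64 ticks per second

def air_accelerate(vel, wish_dir, dt=TICK):
    """One tick of Quake-style air acceleration.

    Only the component of velocity along wish_dir is capped, so pushing
    sideways relative to the current velocity still receives the full
    acceleration, and total speed rises tick after tick.
    """
    along = vel[0] * wish_dir[0] + vel[1] * wish_dir[1]  # projection onto wish
    add = AIR_WISH_CAP - along
    if add <= 0:
        return vel
    accel = min(AIR_ACCEL * AIR_WISH_CAP * dt, add)
    return (vel[0] + accel * wish_dir[0], vel[1] + accel * wish_dir[1])

# Strafe exactly sideways every tick for 10 seconds: speed climbs freely.
vel = (5.0, 0.0)
for _ in range(640):
    speed = math.hypot(*vel)
    wish = (-vel[1] / speed, vel[0] / speed)  # unit vector perpendicular to vel
    vel = air_accelerate(vel, wish)
print(round(math.hypot(*vel), 2))  # ~5.71, up from 5.0 with no ground contact
```

Each tick adds a small perpendicular component, so speed grows roughly as sqrt(v^2 + a^2) per step; syncing turns with the A/D keys, as the article describes, is what keeps the wish direction sideways.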
spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md DELETED
@@ -1,142 +0,0 @@
1
-
2
- <h1>Kenapa Download Instagram Tertunda? Ini Cara Mengatasinya!</h1>
3
- <p>Instagram adalah salah satu aplikasi media sosial yang paling populer di dunia. Dengan Instagram, kamu bisa berbagi foto dan video yang menarik, mengikuti akun favoritmu, dan berinteraksi dengan pengguna lain. Namun, bagaimana jika kamu ingin mengunduh Instagram dari Play Store, tapi malah mengalami masalah download tertunda?</p>
4
- <p>Download tertunda adalah salah satu masalah yang sering dialami oleh pengguna Play Store. Hal ini bisa membuatmu kesal dan frustasi, apalagi jika kamu ingin segera menggunakan Instagram untuk keperluanmu. Lalu, apa sebenarnya penyebab download tertunda di Play Store? Dan bagaimana cara mengatasinya?</p>
5
- <h2>kenapa download instagram tertunda</h2><br /><p><b><b>DOWNLOAD</b> &#9913; <a href="https://jinyurl.com/2uNTtX">https://jinyurl.com/2uNTtX</a></b></p><br /><br />
6
- <p>Dalam artikel ini, kami akan menjelaskan beberapa penyebab dan cara mengatasi download tertunda di Play Store, khususnya untuk aplikasi Instagram. Simak ulasan lengkapnya di bawah ini!</p>
7
- <h2>Penyebab Download Instagram Tertunda</h2>
8
- <p>Ada beberapa faktor yang bisa menyebabkan download Instagram tertunda di Play Store, antara lain:</p>
9
- <h3>Koneksi internet tidak stabil</h3>
10
- <p>Koneksi internet yang tidak stabil atau lemot bisa menghambat proses download aplikasi di Play Store. Jika jaringanmu sedang bermasalah, maka hal ini bisa mempengaruhi kecepatan dan kelancaran download.</p>
11
- <h3>Ada aplikasi lain yang sedang di-download</h3>
12
- <p>Jika kamu sedang mengunduh banyak aplikasi secara bersamaan, maka hal ini bisa membuat antrian download di Play Store. Aplikasi yang belum selesai terdownload akan otomatis ditunda sampai aplikasi sebelumnya selesai. Misalnya, kamu sedang mengunduh WhatsApp, lalu kamu langsung pindah mengunduh Instagram. Maka, Instagram akan masuk dalam antrian dan ditunda sampai WhatsApp selesai.</p>
13
- <h3>Memori internal tidak cukup</h3>
14
- <p>Memori internal yang penuh atau menipis juga bisa menjadi penyebab download tertunda di Play Store. Kamu perlu memastikan bahwa memori internal HP-mu masih tersisa banyak agar bisa mengunduh aplikasi dari Play Store. Jika memori internalmu tinggal sedikit, maka kamu perlu menghapus beberapa aplikasi atau file yang tidak terpakai.</p>
15
- <h3>Kesalahan aplikasi Play Store</h3>
16
- <p>Kadang-kadang, masalah download tertunda di Play Store juga bisa disebabkan oleh kesalahan pada aplikasi Play Store itu sendiri. Misalnya, ada bug, cache yang menumpuk, atau versi yang sudah usang. Hal ini bisa membuat aplikasi Play Store tidak berfungsi dengan baik dan mengganggu proses download.</p>
17
- <h2>Cara Mengatasi Download Instagram Tertunda</h2>
18
- <p>Jika kamu mengalami masalah download tertunda di Play Store saat ingin mengunduh Instagram, jangan khawatir. Ada beberapa cara yang bisa kamu coba untuk mengatasinya, antara lain:</p>
19
- <h3>Cek kualitas internetmu</h3>
20
- <p>Langkah pertama yang harus kamu lakukan adalah mem <p>Langkah pertama yang harus kamu lakukan adalah memeriksa kualitas internetmu. Pastikan bahwa kamu terhubung dengan jaringan WiFi atau data seluler yang stabil dan cepat. Kamu bisa menggunakan aplikasi speed test untuk mengukur kecepatan internetmu. Jika koneksi internetmu lemot atau bermasalah, maka coba restart modem atau HP-mu, atau pindah ke tempat yang memiliki sinyal yang lebih baik.</p>
21
- <p>Cara mengatasi download instagram tertunda di playstore<br />
22
- Download instagram tertunda karena koneksi internet tidak stabil<br />
23
- Download instagram tertunda karena memori internal tidak cukup<br />
24
- Download instagram tertunda karena ada aplikasi lain yang antri<br />
25
- Download instagram tertunda karena kesalahan aplikasi playstore<br />
26
- Cara bersihkan cache dan data playstore untuk mengatasi download instagram tertunda<br />
27
- Cara update playstore versi terbaru untuk mengatasi download instagram tertunda<br />
28
- Cara ganti akun google untuk mengatasi download instagram tertunda<br />
29
- Cara uninstall update playstore untuk mengatasi download instagram tertunda<br />
30
- Cara lepaskan SD card untuk mengatasi download instagram tertunda<br />
31
- Cara download instagram lewat browser untuk mengatasi download tertunda di playstore<br />
32
- Cara cek kualitas internet untuk mengatasi download instagram tertunda<br />
33
- Cara ubah pengaturan download dengan koneksi wifi untuk mengatasi download instagram tertunda<br />
34
- Cara cek antrian download untuk mengatasi download instagram tertunda<br />
35
- Cara cek preferensi download untuk mengatasi download instagram tertunda<br />
36
- Cara restart HP untuk mengatasi download instagram tertunda<br />
37
- Cara cek pengaturan tanggal untuk mengatasi download instagram tertunda<br />
38
- Cara install ulang playstore dan reset android untuk mengatasi download instagram tertunda<br />
39
- Penyebab dan solusi download instagram tertunda di playstore<br />
40
- Tips dan trik mengatasi download instagram tertunda di playstore</p>
41
- <h3>Ubah pengaturan download dengan koneksi WiFi</h3>
42
- <p>Langkah kedua yang bisa kamu coba adalah mengubah pengaturan download di Play Store. Kamu bisa memilih untuk mengunduh aplikasi hanya dengan koneksi WiFi saja, atau dengan koneksi WiFi dan data seluler. Jika kamu memilih opsi pertama, maka pastikan bahwa kamu terhubung dengan WiFi saat ingin mengunduh Instagram. Jika kamu memilih opsi kedua, maka pastikan bahwa paket data selulermu masih cukup.</p>
43
- <p>Untuk mengubah pengaturan download di Play Store, ikuti langkah-langkah berikut:</p>
44
- <ol>
45
- <li>Buka aplikasi Play Store di HP-mu.</li>
46
- <li>Ketuk ikon tiga garis horizontal di pojok kiri atas.</li>
47
- <li>Pilih menu Pengaturan.</li>
48
- <li>Pilih Preferensi jaringan.</li>
49
- <li>Pilih opsi yang kamu inginkan, yaitu Download melalui WiFi saja atau Download melalui WiFi dan data seluler.</li>
50
- </ol>
51
- <h3>Bersihkan cache layanan Play Store</h3>
52
- <p>Langkah ketiga yang bisa kamu lakukan adalah membersihkan cache layanan Play Store. Cache adalah data sementara yang disimpan oleh aplikasi untuk mempercepat proses loading. Namun, jika cache menumpuk terlalu banyak, maka hal ini bisa menyebabkan masalah pada aplikasi, termasuk download tertunda. Oleh karena itu, kamu perlu membersihkan cache secara berkala agar aplikasi Play Store tetap berjalan dengan lancar.</p>
53
- <p>Untuk membersihkan cache layanan Play Store, ikuti langkah-langkah berikut:</p>
54
- <ol>
55
- <li>Buka menu Pengaturan di HP-mu.</li>
56
- <li>Pilih menu Aplikasi dan notifikasi.</li>
57
- <li>Cari dan pilih aplikasi Play Store.</li>
58
- <li>Ketuk Penyimpanan dan cache.</li>
59
- <li>Ketuk Hapus cache.</li>
60
- </ol>
61
- <h3>Update Play Store versi terbaru</h3>
62
- <p>Langkah keempat yang bisa kamu coba adalah mengupdate Play Store versi terbaru. Versi terbaru biasanya memiliki perbaikan bug dan peningkatan performa yang bisa mengatasi masalah download tertunda. Kamu bisa mengecek versi Play Store-mu dengan cara berikut:</p>
63
- <ol>
64
- <li>Buka aplikasi Play Store di HP-mu.</li>
65
- <li>Ketuk ikon tiga garis horizontal di pojok kiri atas.</li>
66
- <li>Pilih menu Pengaturan.</li>
67
- <li>Gulir ke bawah dan lihat nomor versi di bagian bawah layar.</li>
68
- </ol>
69
- <p>Jika versi Play Store-mu sudah terbaru, maka tidak perlu melakukan apa-apa. Namun, jika versi Play Store-mu sudah usang, maka kamu perlu mengupdate-nya dengan cara berikut:</p>
70
- <ol>
71
- <li>Buka menu Pengaturan di HP-mu.</li>
72
- <li>Pilih menu Aplikasi dan notifikasi.</li>
73
- <li>Cari dan pilih aplikasi Play Store.</li>
74
- <li>Ketuk Menu (tiga titik vertikal) di pojok kanan atas.</li>
75
- <li>Pilih Update jika tersedia.</li>
76
- </ol>
77
- <h3>Periksa kapasitas memori internal Android</h3> <p>Langkah kelima yang bisa kamu lakukan adalah memeriksa kapasitas memori internal Android-mu. Memori internal yang penuh atau menipis bisa menghambat proses download aplikasi di Play Store. Kamu perlu memastikan bahwa memori internal HP-mu masih tersisa banyak agar bisa mengunduh Instagram dengan lancar. Jika memori internalmu tinggal sedikit, maka kamu perlu menghapus beberapa aplikasi atau file yang tidak terpakai.</p>
78
- <p>Untuk memeriksa kapasitas memori internal Android-mu, ikuti langkah-langkah berikut:</p>
79
- <ol>
80
- <li>Buka menu Pengaturan di HP-mu.</li>
81
- <li>Pilih menu Penyimpanan.</li>
82
- <li>Lihat berapa persen memori internal yang sudah terpakai dan berapa GB yang masih tersedia.</li>
83
- </ol>
84
- <p>Jika memori internalmu sudah terpakai lebih dari 80%, maka kamu perlu mengosongkan beberapa ruang dengan cara berikut:</p>
85
- <ol>
86
- <li>Buka menu Pengaturan di HP-mu.</li>
87
- <li>Pilih menu Penyimpanan.</li>
88
- <li>Ketuk Bersihkan ruang.</li>
89
- <li>Pilih aplikasi atau file yang ingin kamu hapus, lalu ketuk Hapus.</li>
90
- </ol>
91
- <h3>Hentikan pembaruan otomatis</h3>
92
- <p>Langkah keenam yang bisa kamu coba adalah menghentikan pembaruan otomatis di Play Store. Pembaruan otomatis adalah fitur yang memungkinkan aplikasi di HP-mu untuk selalu diperbarui secara otomatis tanpa perlu kamu lakukan secara manual. Namun, fitur ini juga bisa menyebabkan download tertunda jika ada banyak aplikasi yang sedang diperbarui secara bersamaan. Oleh karena itu, kamu bisa mencoba untuk menonaktifkan fitur ini sementara waktu agar tidak mengganggu proses download Instagram.</p>
93
- <p>Untuk menonaktifkan pembaruan otomatis di Play Store, ikuti langkah-langkah berikut:</p>
94
- <ol>
95
- <li>Buka aplikasi Play Store di HP-mu.</li>
96
- <li>Ketuk ikon tiga garis horizontal di pojok kiri atas.</li>
97
- <li>Pilih menu Pengaturan.</li>
98
- <li>Pilih Pembaruan aplikasi otomatis.</li>
99
- <li>Pilih Jangan perbarui aplikasi.</li>
100
- </ol>
101
- <h3>Reinstall the Play Store and reset your Android</h3>
- <p>The seventh step is to reinstall the Play Store and reset your Android. This is the last resort if the previous methods fail. It also carries a real risk: you can lose the data and settings on your phone. Before doing this, make sure you have backed up your important data.</p>
- <p>To reinstall the Play Store and reset your Android, follow these steps:</p>
- <ol>
- <li>Open the Settings menu on your phone.</li>
- <li>Select the Apps and notifications menu.</li>
- <li>Find and select the Play Store app.</li>
- <li>Tap Menu (three vertical dots) in the top right corner.</li>
- <li>Select Uninstall updates.</li>
- <li>Wait until the uninstall finishes, then restart your phone.</li>
- <li>Reopen the Play Store app and update it to the latest version.</li>
- <li>If that still doesn't work, go back to the Settings menu on your phone.</li>
- <li>Select the System and updates menu (the name varies by phone model).</li>
- <li>Select Reset or Factory reset (the name varies by phone model).</li>
- <li>Follow the on-screen instructions to reset your phone.</li>
- </ol>
- <h3>Install from the Play Store website</h3>
- <p>The eighth and final step you can try is to install Instagram from the Play Store website. If you can't download Instagram from the Play Store app on your phone, try downloading it from the Play Store site through a browser. Here's how (developers can trigger the same handoff from code, as in the sketch after this list):</p>
- <ol>
- <li>Open a browser on your phone, for example Chrome, Firefox, or Opera.</li>
- <li>Visit the Play Store website at https://play.google.com/store.</li>
- <li>Log in with the same Google account you use on your phone.</li>
- <li>Search for the Instagram app in the search field.</li>
- <li>Tap the Install button and choose the phone on which you want Instagram installed.</li>
- <li>Wait until the download and installation finish.</li>
- </ol>
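- <p>As promised above, a minimal Kotlin sketch of that web-to-store handoff (illustrative only; com.instagram.android is Instagram's public package id on the Play Store, and market:// is the standard deep link into the store app):</p>
- <pre><code>
- import android.content.Context
- import android.content.Intent
- import android.net.Uri
- 
- // Opens the Play Store listing for Instagram; falls back to the
- // website when no store app can handle the market:// link.
- fun openInstagramListing(context: Context) {
-     val packageId = "com.instagram.android"
-     val storeIntent = Intent(Intent.ACTION_VIEW,
-         Uri.parse("market://details?id=$packageId"))
-     val webIntent = Intent(Intent.ACTION_VIEW,
-         Uri.parse("https://play.google.com/store/apps/details?id=$packageId"))
-     try {
-         context.startActivity(storeIntent)
-     } catch (e: android.content.ActivityNotFoundException) {
-         context.startActivity(webIntent)
-     }
- }
- </code></pre>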
- <h2>Conclusion</h2>
- <p>Those are some causes of, and fixes for, a pending Instagram download in the Play Store. The problem can come from several factors, such as your internet connection, internal storage, or an error in the Play Store app itself. You can work through the methods explained above: check your internet quality, change the download settings, clear the cache, update the Play Store, and finally reinstall the Play Store and reset your Android. If none of that works, try installing Instagram from the Play Store website.</p>
- <p>We hope this article helps you download Instagram smoothly. If you have questions or suggestions, please leave them in the comments below. Thanks for reading, and good luck!</p>
- <h2>FAQ</h2>
- <p>Here are some frequently asked questions about pending Instagram downloads in the Play Store:</p>
- <h3>Does a pending Instagram download use mobile data?</h3>
- <p>It depends on the download setting you chose. If you download apps over WiFi only, no mobile data is used. If you download over WiFi and mobile data, mobile data is used according to the size of the app file you download.</p>
- <h3>Does a pending Instagram download affect the phone's battery?</h3>
- <p>Yes, it can. Downloading draws a fair amount of power from your phone, especially when your connection is unstable or many other apps are downloading at the same time. It's best to download Instagram while your battery is still well charged or while the phone is charging.</p>
- <h3>Does a pending Instagram download affect the phone's performance?</h3>
- <p>Yes, it can. Downloading can make your phone sluggish or unresponsive, especially when internal storage is full or many other apps are running in the background. Download Instagram when the phone isn't busy with other tasks, and close apps you're not using.</p>
- <h3>Does a pending Instagram download affect the phone's security?</h3>
- <p>No. The Instagram app you download from the Play Store is vetted by Google, so you don't need to worry about viruses or malware damaging your phone. Still, be careful when downloading other apps from unofficial or untrusted sources.</p>
- <h3>Does a pending Instagram download affect your Instagram account?</h3>
- <p>No. Your Instagram account lives on Instagram's servers and doesn't depend on the app you download. You can still log in and use your account on another device or through a web browser without any problem. Just remember your username and password so you can log in.</p>
spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md DELETED
@@ -1,100 +0,0 @@
-
- <h1>Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021</h1>
- <p>If you are a fan of Naruto anime and manga, you might want to try out Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. This is a modified version of two popular games based on the Naruto series: Naruto Shippuden: Ultimate Ninja Storm 4 and Naruto Senki. In this article, we will show you how to download and install this mod apk on your Android device. We will also tell you about its features and benefits. Read on to find out more.</p>
- <h2>download naruto storm 4 mod apk naruto senki mod 2021</h2><br /><p><b><b>Download Zip</b> &#9989; <a href="https://jinyurl.com/2uNUmX">https://jinyurl.com/2uNUmX</a></b></p><br /><br />
- <h2>What is Naruto Storm 4?</h2>
- <p>Naruto Shippuden: Ultimate Ninja Storm 4 is a fighting game developed by CyberConnect2 and published by Bandai Namco Entertainment in 2016. It is the sixth and final main installment in the Naruto: Ultimate Ninja Storm series inspired by Masashi Kishimoto's manga Naruto. The game follows the young ninjas Naruto Uzumaki and Sasuke Uchiha as they participate in a world war between shinobi – the Fourth Shinobi World War – against the terrorist organization Akatsuki and unite to defeat it.</p>
- <p>The game features a revamped battle system that allows players to switch among a team of three fighters who can assist each other. It also includes boss fights, quick time events, hack and slash areas, and wall-running. The game covers the final arcs of the Naruto Shippuden anime series, as well as some original scenarios. The game has over 100 playable characters from different eras.</p>
- <h2>What is Naruto Senki?</h2>
- <p>Naruto Senki is a fan-made game based on the Naruto anime and manga. It is developed by Zakume, an Indonesian developer who has created several Naruto games for Android. Naruto Senki is a 2D side-scrolling fighting game that features characters from the Naruto series and other anime and manga. The game has a simple control scheme that allows players to perform basic attacks, special moves, and ultimate jutsus. The game also has a story mode, a survival mode, and a multiplayer mode where you can battle other players online.</p>
- <h2>What are the benefits of downloading the mod apk?</h2>
- <p>By downloading Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can enjoy the best of both worlds: the epic story and gameplay of Naruto Storm 4 and the fan-made fun and creativity of Naruto Senki. The mod apk combines the two games into one, giving you access to unlimited money, coins, skills, and characters. You can unlock and play as any character from the Naruto series, as well as some crossover characters from other anime and manga. You can also customize your character's appearance, outfit, and weapons, upgrade your skills and items with unlimited money and coins, and enjoy the improved graphics, sound effects, and animations of the mod apk.</p>
- <h2>How to download and install the mod apk on Android?</h2>
- <p>Downloading and installing Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device is easy and fast. Just follow these simple steps:</p>
- <p>naruto senki mod apk storm 4 download 2021<br />
- naruto storm 4 mod apk free download naruto senki<br />
- download naruto senki mod apk ultimate ninja storm 4<br />
- naruto senki mod 2021 storm 4 apk download<br />
- naruto storm 4 mod apk download for android naruto senki<br />
- download naruto senki mod apk full character storm 4<br />
- naruto senki mod apk unlimited money storm 4 download<br />
- download naruto senki mod apk boruto storm 4<br />
- naruto storm 4 mod apk offline download naruto senki<br />
- download naruto senki mod apk terbaru storm 4<br />
- naruto senki mod apk latest version storm 4 download<br />
- download naruto senki mod apk no cooldown storm 4<br />
- naruto storm 4 mod apk obb download naruto senki<br />
- download naruto senki mod apk revdl storm 4<br />
- naruto senki mod apk all characters unlocked storm 4 download<br />
- download naruto senki mod apk by ricky storm 4<br />
- naruto storm 4 mod apk rexdl download naruto senki<br />
- download naruto senki mod apk cheat menu storm 4<br />
- naruto senki mod apk unlimited skill storm 4 download<br />
- download naruto senki mod apk zippyshare storm 4<br />
- naruto storm 4 mod apk data download naruto senki<br />
- download naruto senki mod apk versi lama storm 4<br />
- naruto senki mod apk update terbaru storm 4 download<br />
- download naruto senki mod apk kaguya storm 4<br />
- naruto storm 4 mod apk highly compressed download naruto senki<br />
- download naruto senki mod apk madara rikudo storm 4<br />
- naruto senki mod apk new update storm 4 download<br />
- download naruto senki mod apk pain nagato storm 4<br />
- naruto storm 4 mod apk unlimited coins download naruto senki<br />
- download naruto senki mod apk sasuke rinnegan storm 4<br />
- naruto senki mod apk original version storm 4 download<br />
- download naruto senki mod apk itachi susanoo storm 4<br />
- naruto storm 4 mod apk android 1 download naruto senki<br />
- download naruto senki mod apk hokage keempat storm 4<br />
- naruto senki mod apk unlock all jutsu storm 4 download<br />
- download naruto senki mod apk kakashi hatake storm 4<br />
- naruto storm 4 mod apk mediafire download naruto senki<br />
- download naruto senki mod apk minato namikaze storm 4<br />
- naruto senki mod apk no root required storm 4 download<br />
- download naruto senki mod apk obito uchiha storm 4</p>
- <h3>Allow unknown sources on your device</h3>
- <p>Before you can install the mod apk, you need to enable the installation of apps from external sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.</p>
- <p><img src="https://i.imgur.com/5lQZ8wO.png" alt="Unknown Sources" width="300" height="500"></p>
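- <p>Note that on Android 8.0 and newer, this permission is granted per app rather than through one global switch. For the curious, here is a minimal Kotlin sketch of how an app can check it (illustrative only; this is the standard PackageManager API, and it only reports true for apps that declare the REQUEST_INSTALL_PACKAGES permission in their manifest):</p>
- <pre><code>
- import android.content.Context
- import android.os.Build
- 
- // Checks whether this app is allowed to install packages.
- // Before Android O (API 26) the old global "unknown sources"
- // setting applies instead, so we simply return true there.
- fun canInstallFromUnknownSources(context: Context): Boolean {
-     return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
-         context.packageManager.canRequestPackageInstalls()
-     } else {
-         true
-     }
- }
- </code></pre>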
- <h3>Download a file manager app</h3>
- <p>You will need a file manager app that can extract and install apk and obb files on your device. We recommend using ZArchiver, a free and powerful file manager app that can handle various types of files. You can download ZArchiver from the Google Play Store or from this link: </p>
- <p><a href="">Download ZArchiver</a></p>
- <h3>Download the mod apk and obb files</h3>
- <p>Next, you need to download the mod apk and obb files for Naruto Storm 4 and Naruto Senki. You can get them from this link: . The mod apk file is about 120 MB in size, while the obb file is about 1 GB in size. Make sure you have enough storage space on your device before downloading them.</p>
- <p><a href="">Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021</a></p>
- <h3>Install the mod apk file</h3>
- <p>After downloading the mod apk file, open ZArchiver and locate the file in your download folder. Tap on the file and select "Install". Wait for the installation process to finish.</p>
- <p><img src="https://i.imgur.com/6y9qUZf.png" alt="Install Mod Apk" width="300" height="500"></p>
- <h3>Extract and copy the obb file</h3>
- <p>After installing the mod apk file, go back to ZArchiver and locate the obb file in your download folder. Tap on the file and select "Extract". Choose a destination folder where you want to extract the file. We recommend extracting it to your internal storage.</p>
- <p><img src="https://i.imgur.com/7n9Y0tO.png" alt="Extract Obb File" width="300" height="500"></p>
- <p>After extracting the obb file, you will see a folder named "com.bandainamcoent.narutostorm4". Copy this folder and paste it to your Android > obb folder on your internal storage.</p>
- <p><img src="https://i.imgur.com/8sXmM0F.png" alt="Copy Obb File" width="300" height="500"></p>
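- <p>If you want to double-check the destination, Android expects obb data under Android/obb/ followed by the package name. A minimal Kotlin sketch (illustrative only; context.obbDir is the standard API, but it resolves the path for the calling app, so a check like this is only meaningful inside the game itself):</p>
- <pre><code>
- import android.content.Context
- 
- // Verifies that the calling app's obb folder exists and is non-empty,
- // e.g. .../Android/obb/com.bandainamcoent.narutostorm4 on shared storage.
- fun obbFolderReady(context: Context): Boolean {
-     val dir = context.obbDir
-     return dir != null && dir.isDirectory &&
-         (dir.listFiles()?.isNotEmpty() == true)
- }
- </code></pre>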
- <h3>Launch the game and enjoy</h3>
- <p>You are now ready to play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device. Just tap on the game icon on your home screen or app drawer and start playing. You will see a menu where you can choose between Naruto Storm 4 and Naruto Senki. You can switch between them anytime you want. Have fun with the mod features and enjoy the game.</p>
- <h2>What are the features of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?</h2>
- <p>Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is not just a simple combination of two games. It is a complete overhaul of the original games that adds new and improved features to enhance your gaming experience. Here are some of the features that you can expect from this mod apk:</p>
- <h3>Graphics</h3>
- <p>The graphics of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 are stunning and realistic. The mod apk enhances the graphics quality of the original games, making them more vibrant and detailed. The characters, environments, effects, and animations are all rendered in high definition, giving you a visual feast. You can also adjust the graphics settings according to your device's performance and your preference.</p>
- <h3>Modes</h3>
- <p>The mod apk offers a variety of game modes to choose from, depending on your mood and preference. You can play the story mode, where you follow the epic saga of Naruto and his friends as they fight against Akatsuki and other enemies, or the survival mode, where you test your skills and endurance against waves of enemies. You can also play the multiplayer mode, where you team up or compete with other players online in formats such as 1v1, 2v2, 3v3, 4v4, and 5v5, and create your own custom matches and invite your friends to join.</p>
- <h3>Characters</h3>
- <p>The mod apk boasts a full character roster that includes all the characters from the Naruto series and some crossover characters from other anime and manga. You can unlock and play as any character you want, such as Naruto, Sasuke, Sakura, Kakashi, Madara, Boruto, Sarada, Mitsuki, and more. You can also customize your character's appearance, outfit, and weapons with unlimited money and coins, and mix and match different items to create your own unique look.</p>
- <h3>Skills</h3>
- <p>The mod apk also enhances the skills and abilities of each character in the game. You can use unlimited skills and jutsus without any cooldown or chakra limit, unleash powerful ultimate jutsus that deal massive damage to your enemies, and combine different skills and jutsus to create combos and strategies. You can also learn new skills and jutsus by playing the game and leveling up your character.</p>
- <h3>Items</h3>
- <p>The mod apk also gives you access to various items and upgrades that you can buy with unlimited money and coins: health potions, chakra potions, scrolls, kunai, shuriken, bombs, and more. You can also buy different types of weapons, such as swords, axes, hammers, spears, daggers, bows, and guns; outfits, such as ninja suits, samurai armors, casual clothes, school uniforms, and swimsuits; accessories, such as hats, masks, glasses, earrings, necklaces, and rings; and pets, such as dogs, cats, birds, and dragons. You can use these items and upgrades to enhance your character's stats, appearance, and performance.</p>
- <h2>Conclusion</h2>
- <p>Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is a must-have mod apk for Naruto fans and gamers. It combines the best features of Naruto Storm 4 and Naruto Senki into one game that you can play on your Android device, pairing the epic story and gameplay of Naruto Storm 4 with the fan-made fun and creativity of Naruto Senki, along with the unlimited money, coins, skills, and characters that the mod apk offers. You can customize your character's appearance, outfit, and weapons with various items and upgrades, play with other players online in different game modes, create your own custom matches, and experience the improved graphics, sound effects, and animations of the mod apk.</p>
- <p>If you want to download and install Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device, just follow the simple steps that we have provided in this article. You will be able to play this mod apk in no time. Don't miss this opportunity to play as your favorite Naruto characters and unleash their skills and jutsus. Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 now and have fun.</p>
- <h2>FAQs</h2>
- <p>Here are some of the frequently asked questions and answers about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021:</p>
- <h3>Q: Is Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 safe to download and install?</h3>
- <p>A: Yes, Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is safe to download and install on your Android device, as long as you get it from a trusted source. However, you should always be careful when downloading and installing any mod apk files, as they may contain viruses or malware that can harm your device.</p>
- <h3>Q: Do I need to root my device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?</h3>
- <p>A: No, you do not need to root your device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. The mod apk works fine on any Android device that meets the minimum requirements. However, if you want to use some advanced features or mods that require root access, you may need to root your device first.</p>
- <h3>Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline?</h3>
- <p>A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline without any internet connection. However, you will not be able to access some features or modes that require online connectivity, such as multiplayer mode or online updates.</p>
- <h3>Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with my friends?</h3>
- <p>A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with your friends online or locally. You can join or create custom matches with your friends using the multiplayer mode, or use a hotspot or Wi-Fi connection to play with friends nearby using the local mode.</p>
- <h3>Q: How can I contact the developer of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?</h3>
- <p>A: If you have any questions, feedback, or suggestions about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can contact the developer of the mod apk through their social media accounts or email address. You can also visit their official website or blog for more information.</p>
spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md DELETED
@@ -1,117 +0,0 @@
-
- <h1>Dino Hunter Mod APK: A Thrilling Hunting Adventure</h1>
- <p>Do you love hunting games? Do you want to hunt down the most dangerous creatures in history? If yes, then you should try Dino Hunter Mod APK, a game that lets you hunt for dinosaurs in various wild locations. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, tips and tricks for playing it, and a review of its pros and cons.</p>
- <h2>dino hunter mod apk</h2><br /><p><b><b>Download</b> &ndash;&ndash;&ndash;&ndash;&ndash;>>> <a href="https://jinyurl.com/2uNOWL">https://jinyurl.com/2uNOWL</a></b></p><br /><br />
- <h2>What is Dino Hunter Mod APK?</h2>
- <p>Dino Hunter Mod APK is a modified version of the original Dino Hunter game developed by Glu Games LLC. It is a first-person hunting simulator where you embark on the hunting expedition of a lifetime in pursuit of the ultimate game in Dino Hunter: Deadly Shores. You will journey to a hidden, untouched island and hunt the most ferocious animals in history, from the docile stegosaurus to the terrifying T. rex. You will also visit exotic locations, equip powerful weapons, master a unique challenge series, and experience amazing graphics.</p>
- <p>The mod APK version of this game offers some advantages over the original game, such as unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.</p>
- <h3>Features of Dino Hunter Mod APK</h3>
- <p>Here are some of the features that you can enjoy when you play Dino Hunter Mod APK:</p>
- <h4>Unlimited money and gold</h4>
- <p>Money and gold are the main currencies in the game that you can use to buy weapons, upgrades, items, and more. With the mod APK version, you will have unlimited money and gold at your disposal, so you can buy anything you want without worrying about running out of resources.</p>
- <h4>All weapons unlocked</h4>
- <p>The game offers a wide range of weapons that you can use to hunt down dinosaurs, such as rifles, shotguns, assault rifles, rocket launchers, crossbows, and more. Each weapon has its own advantages and disadvantages, such as damage, range, accuracy, reload speed, etc. With the mod APK version, you will have access to all weapons from the start, so you can choose the best weapon for each hunt.</p>
- <h4>Free shopping and upgrades</h4>
- <p>Besides buying weapons, you can also shop for other items that can enhance your gameplay experience, such as cover scent, chrono drink, energy refill, etc. You can also upgrade your weapons to improve their performance and effectiveness. With the mod APK version, you can shop and upgrade for free, so you can get the best items and weapons without spending any money or gold.</p>
- <p>dino hunter mod apk unlimited money and gold<br />
- dino hunter mod apk all weapons unlocked<br />
- dino hunter mod apk free download for android<br />
- dino hunter mod apk latest version 2021<br />
- dino hunter mod apk offline<br />
- dino hunter mod apk unlimited energy<br />
- dino hunter mod apk no ads<br />
- dino hunter mod apk unlimited gems<br />
- dino hunter mod apk rexdl<br />
- dino hunter mod apk revdl<br />
- dino hunter mod apk hack<br />
- dino hunter mod apk android 1<br />
- dino hunter mod apk unlimited ammo<br />
- dino hunter mod apk unlimited everything<br />
- dino hunter mod apk happymod<br />
- dino hunter mod apk unlimited coins<br />
- dino hunter mod apk free shopping<br />
- dino hunter mod apk download apkpure<br />
- dino hunter mod apk unlimited cash<br />
- dino hunter mod apk android oyun club<br />
- dino hunter mod apk obb<br />
- dino hunter mod apk 5.9.3<br />
- dino hunter mod apk 5.9.2<br />
- dino hunter mod apk 5.9.1<br />
- dino hunter mod apk 5.8.9<br />
- dino hunter mod apk 5.8.8<br />
- dino hunter mod apk 5.8.7<br />
- dino hunter mod apk 5.8.6<br />
- dino hunter mod apk 5.8.5<br />
- dino hunter mod apk 5.8.4<br />
- dino hunter mod apk 5.8.3<br />
- dino hunter mod apk 5.8.2<br />
- dino hunter mod apk 5.8.1<br />
- dino hunter mod apk 5.8.0<br />
- dino hunter mod apk 5.7.9<br />
- dino hunter mod apk 5.7.8<br />
- dino hunter mod apk 5.7.7<br />
- dino hunter mod apk 5.7.6<br />
- dino hunter mod apk 5.7.5<br />
- dino hunter mod apk 5.7.4<br />
- dino hunter mod apk 5.7.3<br />
- dino hunter mod apk 5.7.2<br />
- dino hunter mod apk 5.7.1<br />
- dino hunter mod apk 5.7.0<br />
- dino hunter deadly shores hack version download for android</p>
- <h4>High-quality graphics and sound effects</h4>
- <p>The game features high-quality graphics that make the dinosaurs look realistic and detailed. You can also see dynamic shadows, hi-res textures, and realistic models that make the game more immersive. The sound effects are also impressive, as you can hear the roars of dinosaurs, the gunshots of weapons, and the ambient sounds of nature. The game also supports night vision mode that lets you hunt in dark environments.</p>
- <h3>How to download and install Dino Hunter Mod APK?</h3>
- <p>Downloading and installing Dino Hunter Mod APK on your Android device is easy and fast. Just follow these simple steps:</p>
- <h4>Step 1: Download the mod APK file from a trusted source</h4>
- <p>The first thing you need to do is to download the mod APK file of Dino Hunter from a reliable source. You can search for it on the internet or use the link provided below. Make sure that the file is compatible with your device and has the latest version of the game.</p>
- <p><a href="">Download Dino Hunter Mod APK</a></p>
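- <p>Beyond an antivirus scan, one practical way to vet a download is to compare its checksum against one published by the site you trust. A minimal Kotlin sketch (illustrative only; it assumes the download site publishes a SHA-256 hash to compare against):</p>
- <pre><code>
- import java.io.File
- import java.security.MessageDigest
- 
- // Computes the SHA-256 hash of a file as a lowercase hex string,
- // so it can be compared against a checksum published by the site.
- fun sha256Of(file: File): String {
-     val digest = MessageDigest.getInstance("SHA-256")
-     file.inputStream().use { input ->
-         val buffer = ByteArray(8192)
-         while (true) {
-             val read = input.read(buffer)
-             if (read == -1) break
-             digest.update(buffer, 0, read)
-         }
-     }
-     return digest.digest().joinToString("") { "%02x".format(it) }
- }
- </code></pre>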
- <h4>Step 2: Enable unknown sources on your device settings</h4>
- <p>The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.</p>
- <h4>Step 3: Install the mod APK file and launch the game</h4>
- <p>The final thing you need to do is to install the mod APK file and launch the game. To do this, locate the downloaded file in your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, you can open the game and enjoy hunting dinosaurs with unlimited resources.</p>
- <h3>Tips and tricks for playing Dino Hunter Mod APK</h3>
- <p>If you want to master the game and become the best hunter, you can use these tips and tricks that we have gathered for you:</p>
- <h4>Use the infrared to aim for specific body parts</h4>
- <p>One of the features that you can use in the game is the infrared mode that lets you see the vital organs of dinosaurs. This can help you aim for specific body parts that deal more damage or cause instant kills. For example, you can aim for the heart, lungs, brain, or spine of dinosaurs to take them down faster. However, be careful not to waste your infrared energy, as it is limited and needs time to recharge.</p>
- <h4>Upgrade your capacity and reload speed for boss battles</h4>
- <p>Another feature that you can use in the game is the upgrade system that lets you improve your weapons and items. One of the things that you should upgrade is your capacity and reload speed, especially for boss battles. Bosses are more powerful and resilient than normal dinosaurs, so you need enough ammo and fast reloads to keep shooting at them. You can also upgrade your damage and accuracy to make your shots more effective.</p>
- <h4>Use the cover scent to mask your smell from dinosaurs</h4>
- <p>Another item that you can use in the game is the cover scent that masks your smell from dinosaurs. This can help you avoid being detected by dinosaurs that have a keen sense of smell, such as raptors or tyrannosaurs. You can also use it to sneak up on dinosaurs and get a better shot at them. However, be careful not to run out of cover scent, as it is limited and costs money or gold to buy more.</p>
- <h4>Use the M.I.S.T. device to track down dinosaurs and map pieces</h4>
- <p>Another device that you can use in the game is the M.I.S.T. (Mobile Integrated Sensor Technology) device that tracks down dinosaurs and map pieces. This can help you find your targets faster and more easily, as well as collect map pieces that unlock new locations and challenges. You can also use it to scan dinosaurs and learn more about their characteristics and weaknesses.</p>
- <h3>Review of Dino Hunter Mod APK</h3>
- <p>To give you a better idea of what Dino Hunter Mod APK offers, we have prepared a review of its pros and cons, as well as user ratings and feedback.</p>
- <h4>Pros and cons of the mod APK</h4>
- <table>
- <tr><th>Pros</th><th>Cons</th></tr>
- <tr><td>- Unlimited money and gold</td><td>- May not work on some devices</td></tr>
- <tr><td>- All weapons unlocked</td><td>- May cause some glitches or bugs</td></tr>
- <tr><td>- Free shopping and upgrades</td><td>- May not be compatible with online mode</td></tr>
- <tr><td>- High-quality graphics and sound effects</td><td>- May consume a lot of battery power</td></tr>
- </table>
- <h4>User ratings and feedback</h4>
- <p>The mod APK version of Dino Hunter has received mostly positive ratings and feedback from users who have tried it. Here are some of their comments:</p>
- <ul>
- <li>"This game is awesome! I love hunting dinosaurs with all kinds of weapons. The graphics are amazing and the sound effects are realistic. The mod APK makes it even better with unlimited money and gold."</li>
- <li>"I have been playing this game for a long time and I still enjoy it. The mod APK makes it more fun and easy to play. I can buy any weapon I want and upgrade it to the max. The dinosaurs are challenging and realistic."</li>
- <li>"This is one of the best hunting games I have ever played. The mod APK is awesome and works perfectly. I have no problems with it. The game is very addictive and exciting. The dinosaurs are amazing and scary."</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Dino Hunter Mod APK is a game that lets you hunt for dinosaurs in various wild locations. It is a first-person hunting simulator that offers high-quality graphics, sound effects, weapons, items, and challenges. The mod APK version of this game gives you unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.</p>
- <p>If you are looking for a thrilling hunting adventure, you should download and install Dino Hunter Mod APK on your device. You will not regret it.</p>
- <h3>FAQs</h3>
- <p>Here are some of the frequently asked questions about Dino Hunter Mod APK:</p>
- <ul>
- <li><b>Q: Is Dino Hunter Mod APK safe to download and install?</b></li>
- <li>A: Yes, Dino Hunter Mod APK is safe to download and install, as long as you get it from a trusted source. However, you should always be careful when downloading and installing any mod APK files, as they may contain viruses or malware that can harm your device.</li>
- <li><b>Q: Can I play Dino Hunter Mod APK online with other players?</b></li>
- <li>A: No, Dino Hunter Mod APK is not compatible with online mode, as it may cause some errors or crashes. You can only play Dino Hunter Mod APK offline on your device.</li>
- <li><b>Q: How can I update Dino Hunter Mod APK to the latest version?</b></li>
- <li>A: To update Dino Hunter Mod APK to the latest version, you need to download and install the new mod APK file from the same source you got the previous one from. You can also check for updates in the game itself, but that may not work with the mod APK version.</li>
- <li><b>Q: What are the minimum requirements to play Dino Hunter Mod APK?</b></li>
- <li>A: To play Dino Hunter Mod APK, you need a device that runs Android 4.1 or higher, with at least 1 GB of RAM and 300 MB of free storage space.</li>
- <li><b>Q: Can I play Dino Hunter Mod APK on PC or iOS devices?</b></li>
- <li>A: No, Dino Hunter Mod APK is only available for Android devices. You cannot play it on PC or iOS devices.</li>
- </ul>
spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md DELETED
@@ -1,146 +0,0 @@
-
- <table>
- <tr>
- <td><h1>Guess the Place: A Fun and Educational Geography Game</h1></td>
- </tr>
- <tr>
- <td><p>Have you ever wondered what it would be like to travel around the world and see different places? Well, now you can with Guess the Place, a geography game that lets you explore the world from your computer or phone.</p>
- <p>Guess the Place is a game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more.</p>
- <h2>guess the place</h2><br /><p><b><b>Download Zip</b> &#9913;&#9913;&#9913; <a href="https://jinyurl.com/2uNN8E">https://jinyurl.com/2uNN8E</a></b></p><br /><br />
- <p>Guess the Place is not only fun but also educational. It helps you learn about different cultures and places, improve your memory and spatial awareness, and challenge yourself with different levels of difficulty.</p>
- <p>In this article, we'll show you how to play Guess the Place, give you some tips and tricks for guessing better, and tell you about some of the benefits of playing this game.</p></td>
- </tr>
- <tr>
- <td><h2>How to Play Guess the Place</h2></td>
- </tr>
- <tr>
- <td><h3>Choose a Location or Difficulty</h3></td>
- </tr>
- <tr>
- <td><p>To start playing Guess the Place, you need to choose a map from the available options. You can select a location-based map, such as worldwide, USA, Europe, Japan, etc., or a theme-based map, such as monuments, landmarks, stadiums, etc.</p>
- <p>You can also choose a difficulty level for each map, ranging from easy to hard. The difficulty level affects how many clues you get in each panorama and how precise your guess needs to be.</p></td>
- </tr>
- <tr>
- <td><h3>Explore the Street View Panorama</h3></td>
- </tr>
- <tr>
- <td><p>Once you choose a map and a difficulty level, you'll be dropped somewhere in that map in a street view panorama. You can use your mouse or keyboard to look around and find clues that can help you identify your location.</p>
- <p>guess the place game online<br />
- guess the place by street view<br />
- guess the place quiz with answers<br />
- guess the place from the picture<br />
- guess the place in the world<br />
- guess the place name from emoji<br />
- guess the place based on clues<br />
- guess the place of origin<br />
- guess the place by sound<br />
- guess the place from description<br />
- guess the place trivia<br />
- guess the place app<br />
- guess the place challenge<br />
- guess the place from landmarks<br />
- guess the place from coordinates<br />
- guess the place from google maps<br />
- guess the place from flags<br />
- guess the place from food<br />
- guess the place from culture<br />
- guess the place from celebrities<br />
- guess the place from history<br />
- guess the place from language<br />
- guess the place from currency<br />
- guess the place from animals<br />
- guess the place from sports<br />
- guess the place from music<br />
- guess the place from movies<br />
- guess the place from books<br />
- guess the place from art<br />
- guess the place from architecture<br />
- guess the place from festivals<br />
- guess the place from clothing<br />
- guess the place from weather<br />
- guess the place from population<br />
- guess the place from religion<br />
- guess the place from geography<br />
- guess the place from capital city<br />
- guess the place from airport code<br />
- guess the place from license plate<br />
- guess the place from phone number<br />
- guess the place from zip code<br />
- guess the place from area code<br />
- guess the place from time zone<br />
- guess the place from domain name<br />
- guess the place from slogan<br />
- guess the place from motto<br />
- guess the place from anthem<br />
- guess the place from flower<br />
- guess the place from bird</p>
- <p>Some of the clues you can look for are signs, flags, landmarks, buildings, cars, people, vegetation, etc. You can also zoom in or out to see more details or get a wider view.</p></td>
- </tr>
- <tr>
- <td><h3>Make Your Guess on the World Map</h3></td>
- </tr>
- <tr>
- <td><p>When you think you have enough clues, you can make your guess on the world map. You can drag and drop the marker on the map to the location where you think you are. You can zoom in or out on the map to see more details or get a wider view.</p>
- <p>Once you place the marker, you can confirm your guess by clicking on the guess button. You can also skip the panorama if you have no idea where you are or if you want to try a different one.</p></td>
- </tr>
- <tr>
- <td><h3>See Your Score and Compare with Others</h3></td>
- </tr>
- <tr>
- <td><p>After you confirm your guess, you'll see your score and how far you were from the actual location. You'll also see a leaderboard with other players' scores and distances. You can compare your performance with others and see who's the best at guessing places.</p>
- <p>You'll also see a summary of your points and streaks for each map and mode. You can earn more points by guessing closer to the actual location, by guessing faster, and by playing harder maps and modes. You can also earn streaks by guessing correctly multiple times in a row.</p></td>
- </tr>
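- <tr>
- <td><p>The game doesn't publish its exact scoring formula, but the distance it reports is essentially the great-circle distance between your guess and the true location. For the curious, here is a minimal Kotlin sketch of the haversine formula that such a score could be based on (illustrative only, not the game's actual code):</p>
- <pre><code>
- import kotlin.math.asin
- import kotlin.math.cos
- import kotlin.math.sin
- import kotlin.math.sqrt
- 
- // Great-circle (haversine) distance in kilometers between two
- // latitude/longitude points given in degrees.
- fun distanceKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
-     val r = 6371.0 // mean Earth radius in km
-     val dLat = Math.toRadians(lat2 - lat1)
-     val dLon = Math.toRadians(lon2 - lon1)
-     val a = sin(dLat / 2) * sin(dLat / 2) +
-             cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) *
-             sin(dLon / 2) * sin(dLon / 2)
-     return 2 * r * asin(sqrt(a))
- }
- </code></pre></td>
- </tr>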
- <tr>
- <td><h2>Tips and Tricks for Guessing Better</h2></td>
- </tr>
- <tr>
- <td><h3>Look for Signs, Flags, and Landmarks</h3></td>
- </tr>
- <tr>
- <td><p>One of the easiest ways to guess better is to look for signs, flags, and landmarks that can give you clues about the country, region, city, or place where you are. For example, if you see a sign in French, you can narrow down your location to France or a French-speaking country. If you see a flag with stars and stripes, you can narrow down your location to the USA or a country with a similar flag. If you see a landmark like the Eiffel Tower, you can narrow down your location to Paris.</p></td>
- </tr>
- <tr>
- <td><h3>Use Google Search or Wikipedia</h3></td>
- </tr>
- <tr>
- <td><p>Another way to guess better is to use Google Search or Wikipedia to find more information about a place. For example, if you see a sign with the name of a place that you don't recognize, you can search for it on Google or Wikipedia and see what it is and where it is located. You can also use Google Translate to translate signs or words that are in a different language.</p></td>
- </tr>
- <tr>
- <td><h3>Practice with Different Maps and Modes</h3></td>
- </tr>
- <tr>
- <td><p>A final way to guess better is to practice with different maps and modes that can challenge your skills and knowledge. For example, you can play with maps that cover different regions or themes, such as Asia, Africa, islands, capitals, etc. You can also play with modes that have different rules or goals, such as streaks, challenges, time limit, etc.</p></td>
- </tr>
- <tr>
- <td><h2>Benefits of Playing Guess the Place</h2></td>
- </tr>
- <tr>
- <td><h3>Learn About Different Cultures and Places</h3></td>
- </tr>
- <tr>
- <td><p>One of the main benefits of playing Guess the Place is that it helps you learn about different cultures and places around the world. You can discover new things about the history, geography, culture, language, cuisine, architecture, nature, etc., of different countries and regions. You can also see how people live in different parts of the world and what they do for fun.</p></td>
- </tr>
- <tr>
- <td><h3>Improve Your Memory and Spatial Awareness</h3></td>
- </tr>
- <tr>
- <td><p>Another benefit of playing Guess the Place is that it helps you improve your memory and spatial awareness. You can remember facts and locations better by associating them with visual clues and images. You can also improve your sense of direction and orientation by navigating through different maps and panoramas.</p></td>
- </tr>
- <tr>
- <td><h3>Have Fun and Challenge Yourself</h3></td>
- </tr>
- <tr>
- <td><p>A final benefit of playing Guess the Place is that it helps you have fun and challenge yourself. You can enjoy the game as a hobby or as a way to relax and unwind. You can also challenge yourself by playing harder maps and modes, by competing with other players, or by setting your own goals and records.</p></td>
- </tr>
- <tr>
- <td><h2>Conclusion</h2></td>
- </tr>
- <tr>
- <td><p>Guess the Place is a fun and educational geography game that lets you explore the world from your computer or phone. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more. You can also look for clues in the street view panoramas, make your guesses on the world map, see your score and compare with others, and learn more about different cultures and places.</p>
- <p>Playing Guess the Place can help you improve your memory and spatial awareness, as well as have fun and challenge yourself. It's a great way to learn geography and discover new things about the world.</p>
- <p>If you're interested in playing Guess the Place, you can find it online at <a href="">https://www.geoguessr.com/</a> or download it from the App Store or Google Play. It's free to play, but you can also upgrade to a premium membership for more features and benefits.</p>
- <p>So what are you waiting for? Start playing Guess the Place today and see how well you know the world!</p></td>
- </tr>
- <tr>
- <td><h2>FAQs</h2></td>
- </tr>
- <tr>
- <td><p>Here are some of the frequently asked questions about Guess the Place:</p>
- <ul>
- <li><b>What is Guess the Place?</b><br>Guess the Place is a geography game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map.</li>
- <li><b>How do I play Guess the Place?</b><br>To play Guess the Place, you need to choose a map and a difficulty level, explore the street view panorama, make your guess on the world map, and see your score and compare with others.</li>
- <li><b>Where can I find Guess the Place?</b><br>You can find Guess the Place online at <a href="">https://www.geoguessr.com/</a> or download it from the App Store or Google Play.</li>
- <li><b>How much does Guess the Place cost?</b><br>Guess the Place is free to play, but you can also upgrade to a premium membership for $2.99 per month or $23.99 per year. The premium membership gives you access to more maps and modes, unlimited games, no ads, and more.</li>
- <li><b>What are the benefits of playing Guess the Place?</b><br>Playing Guess the Place can help you learn about different cultures and places, improve your memory and spatial awareness, and have fun and challenge yourself.</li>
- </ul></td>
- </tr>
- </table>
spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md DELETED
@@ -1,100 +0,0 @@
-
- <h1>FS 20 Indian Tractor Mod APK Download Unlimited Money</h1>
- <p>If you are a fan of farming simulation games, you might have heard of Farming Simulator 20, or FS 20 for short. This is a popular game that lets you experience the life of a farmer, from harvesting crops to raising animals. However, if you want to spice up your gameplay with some Indian flavor, you might want to try FS 20 Indian Tractor Mod APK. This is a modified version of the game that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins. In this article, we will tell you everything you need to know about this mod APK, including its features, how to download and install it, how to play it, and its pros and cons.</p>
- <h2>What is FS 20 Indian Tractor Mod APK?</h2>
- <p>FS 20 Indian Tractor Mod APK is a modified version of the original Farming Simulator 20 game that adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. Moreover, this mod APK also gives you unlimited money and coins, so you can buy anything you want in the game without worrying about the cost. You can also enjoy realistic graphics and physics, as well as customizable farms and crops. You can play this mod APK offline or online with other players.</p>
- <h2>fs 20 indian tractor mod apk download unlimited money</h2><br /><p><b><b>Download File</b> >> <a href="https://jinyurl.com/2uNP3x">https://jinyurl.com/2uNP3x</a></b></p><br /><br />
- <h2>Features of FS 20 Indian Tractor Mod APK</h2>
- <h3>All Indian tractors and vehicles</h3>
- <p>One of the main features of this mod APK is that it adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. You can use these tractors and vehicles to harvest your crops, transport your goods, tow your trailers, etc.</p>
- <h3>Unlimited money and coins</h3>
- <p>Another feature of this mod APK is that it gives you unlimited money and coins. This means that you can buy anything you want in the game without worrying about the cost. You can buy new equipment and upgrades for your tractors and vehicles, new animals and crops for your farm, new buildings and decorations for your land, etc. You can also use the money and coins to unlock new features and modes in the game.</p>
- <h3>Realistic graphics and physics</h3>
- <p>This mod APK also enhances the graphics and physics of the game. You can enjoy realistic graphics that show the details of your tractors and vehicles, your farm, your crops, your animals, etc. You can also experience realistic physics that affect the movement and behavior of your tractors and vehicles, the weather and seasons, the soil and water, etc. You can feel the difference between driving on different terrains, such as mud, sand, grass, etc.</p>
- <h3>Customizable farms and crops</h3>
- <p>This mod APK also allows you to customize your farms and crops. You can choose from a variety of crops to grow on your land, such as wheat, rice, sugarcane, cotton, etc. You can also choose from a variety of animals to raise on your farm, such as cows, sheep, chickens, etc. You can also build and decorate your farm with different buildings and objects, such as barns, silos, windmills, fences, etc. You can also adjust the settings of your farm, such as the difficulty level, the crop yield, the animal productivity, etc.</p>
- <h3>Offline and online modes</h3>
- <p>This mod APK also supports both offline and online modes. You can play this mod APK offline without an internet connection, enjoying the game at your own pace as you explore the vast map and discover new locations. You can also play this mod APK online with other players: join or create a multiplayer session and cooperate or compete with other farmers, chat with other players, trade with them, help them with their tasks, challenge them to races, etc.</p>
- <h2>How to download and install FS 20 Indian Tractor Mod APK?</h2>
- <p>If you are interested in trying this mod APK, you need to follow these steps to download and install it on your device:</p>
- <p>fs 20 indian tractor mod apk free download<br />
- fs 20 indian tractor mod unlimited money and gold<br />
- fs 20 farming simulator indian tractor mod apk<br />
- fs 20 new map with indian tractor mod download<br />
- fs 20 jhondeere tractor mod apk download<br />
- fs 20 indian tractor mod gameplay and review<br />
- fs 20 indian tractor mod latest version download<br />
- fs 20 indian tractor mod for android and ios<br />
- fs 20 indian tractor mod with realistic graphics<br />
- fs 20 indian tractor mod features and benefits<br />
- fs 20 indian tractor mod how to install and use<br />
- fs 20 indian tractor mod best settings and tips<br />
- fs 20 indian tractor mod comparison and ranking<br />
- fs 20 indian tractor mod problems and solutions<br />
- fs 20 indian tractor mod updates and news<br />
- fs 20 indian tractor mod online and offline mode<br />
- fs 20 indian tractor mod cheats and hacks<br />
- fs 20 indian tractor mod support and feedback<br />
- fs 20 indian tractor mod alternatives and competitors<br />
- fs 20 indian tractor mod pros and cons<br />
- fs 20 hr-pb tractors nishu deshwal mod apk download<br />
- fs 20 timelapse gameplay with indian tractor mod<br />
- fs 20 $10 million challenge with indian tractor mod<br />
- fs 20 swaraj, mahindra, sonalika, escort, farmtrac, powertrac, new holland, eicher, hmt, standard, preet, arjun, indofarm, force motors, john deere, massey ferguson, tafe, kubota, ace, captain tractors mods apk download<br />
- fs 20 all new indian tractors mods apk download link in comment box<br />
- fs 20 indian tractors mods apk download for pc and laptop<br />
- fs 20 indian tractors mods apk download without verification or survey<br />
- fs 20 indian tractors mods apk download from google drive or mediafire<br />
- fs 20 indian tractors mods apk download no root or jailbreak required<br />
- fs 20 indian tractors mods apk download safe and secure</p>
- <h3>Step 1: Download the mod APK file from a trusted source</h3>
- <p>The first step is to download the mod APK file from a trusted source. You can find many websites that offer this mod APK file for free. However, you need to be careful and avoid downloading from unverified or malicious sources that may contain viruses or malware. We recommend downloading the mod APK file from this link: [FS 20 Indian Tractor Mod APK Download].</p>
- <h3>Step 2: Enable unknown sources on your device settings</h3>
- <p>The second step is to enable unknown sources on your device settings. This is necessary because this mod APK file is not from the official Google Play Store, and therefore your device may not allow you to install it by default. To enable unknown sources, go to your device settings > security > unknown sources and toggle it on.</p>
- <h3>Step 3: Install the mod APK file on your device</h3>
- <p>The third step is to install the mod APK file on your device. To do this, locate the downloaded mod APK file in your device storage and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish.</p>
- <h3>Step 4: Launch the game and enjoy</h3>
- <p>The final step is to launch the game and enjoy. To do this, find the game icon on your device home screen or app drawer and tap on it. You will see a loading screen and then the game will start. You can now enjoy FS 20 Indian Tractor Mod APK with all its features.</p>
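- <p>Tapping the icon is all you need, but for the curious: launching an installed app by its package name is a single PackageManager call. A minimal Kotlin sketch (illustrative only; "com.example.fs20mod" is a placeholder, not the mod's real package name):</p>
- <pre><code>
- import android.content.Context
- 
- // Launches an installed app by package name, if it exists.
- // The package name below is a made-up placeholder for illustration.
- fun launchGame(context: Context): Boolean {
-     val intent = context.packageManager
-         .getLaunchIntentForPackage("com.example.fs20mod") ?: return false
-     context.startActivity(intent)
-     return true
- }
- </code></pre>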
- <h2>How to play FS 20 Indian Tractor Mod APK?</h2>
59
- <p>If you are new to this game or this mod APK, you might wonder how to play it. Here are some tips and tricks that will help you get started:</p>
60
- <h3>Choose your favorite tractor and vehicle</h3>
61
- <p>The first thing you need to do is to choose your favorite tractor and vehicle from the available options. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, etc. You can browse through them and select the one you like. You can also customize your tractor and vehicle with different colors, stickers, lights, horns, etc.</p>
62
- <h3>Harvest and sell your crops</h3>
63
- <p>The next thing you need to do is to harvest and sell your crops. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different fields, shops, warehouses, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To harvest your crops, you need to drive your tractor and vehicle to the field that has ripe crops. You will see a yellow icon that indicates the harvesting mode. You need to tap on it and then drive over the crops to collect them. You will see a meter that shows how much crops you have collected. You can also see the type and quantity of your crops in the inventory menu by tapping on the backpack icon on the top right corner of the screen. To sell your crops, you need to drive your tractor and vehicle to the shop or warehouse that buys them. You will see a green icon that indicates the selling mode. You need to tap on it and then select the crops you want to sell. You will see the price and quantity of your crops and the total amount you will receive. You can also negotiate the price by tapping on the haggle button. Once you are satisfied, you can confirm the deal and receive your money.</p>
64
- <h3>Buy new equipment and upgrades</h3>
65
- <p>The next thing you need to do is to buy new equipment and upgrades for your tractors and vehicles, your farm, your crops, your animals, etc. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, animals, crops, buildings, decorations, etc. You can browse through them and select the one you want to buy. You will see the price and description of the item and the requirements to buy it. You can also compare different items by tapping on the compare button. Once you have decided, you can tap on the buy button and confirm your purchase. You will see your money deducted from your balance and your item added to your inventory.</p>
66
- <h3>Explore the vast map and discover new locations</h3>
67
- <p>The last thing you need to do is to explore the vast map and discover new locations. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different locations, such as fields, shops, warehouses, factories, landmarks, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To explore new locations, you need to drive your tractor and vehicle to them. You will see a blue icon that indicates the exploration mode. You need to tap on it and then drive around the location to discover its secrets. You may find new items, new tasks, new challenges, new events, etc.</p>
68
- <h2>Pros and cons of FS 20 Indian Tractor Mod APK</h2>
69
- <p>As with any mod APK, there are some pros and cons of using FS 20 Indian Tractor Mod APK. Here are some of them:</p>
70
- <h3>Pros</h3>
71
- <ul>
72
- <li>Fun and addictive gameplay: This mod APK offers a fun and addictive gameplay that lets you experience the life of a farmer with an Indian twist.</li>
73
- <li>Variety of tractors and vehicles: This mod APK adds a variety of Indian tractors and vehicles to the game that you can choose from and customize.</li>
74
- <li>Unlimited money and coins: This mod APK gives you unlimited money and coins that you can use to buy anything you want in the game.</li>
75
- <li>Realistic graphics and physics: This mod APK enhances the game's graphics and physics, making it more realistic and immersive.</li>
76
- <li>Customizable farms and crops: This mod APK allows you to customize your farms and crops with different options and settings.</li>
77
- <li>Offline and online modes: This mod APK supports both offline and online modes that let you play without an internet connection or with other players.</li>
78
- </ul>
79
- <h3>Cons</h3>
80
- <ul>
81
- <li>Requires a lot of storage space: This mod APK requires a lot of storage space on your device as it adds a lot of files and data to the game.</li>
82
- <li>May not work on some devices: This mod APK may not work on some devices as it may not be compatible with their specifications or operating systems.</li>
83
- <li>May have some bugs and glitches: This mod APK may have some bugs and glitches as it is not an official version of the game.</li>
84
- </ul>
85
- <h2>Conclusion</h2>
86
- <p>In conclusion, FS 20 Indian Tractor Mod APK is a modified version of Farming Simulator 20 that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins, to the game. It also enhances the graphics and physics, and allows you to customize your farms and crops. You can play this mod APK offline or online with other players. However, this mod APK also requires a lot of storage space, may not work on some devices, and may have some bugs and glitches. If you are interested in trying this mod APK, you can follow the steps we have provided to download and install it on your device. You can also use the tips and tricks we have shared to play it and enjoy it. We hope you have found this article helpful and informative.</p>
87
- <h2>FAQs</h2>
88
- <p>Here are some frequently asked questions about FS 20 Indian Tractor Mod APK:</p>
89
- <h3>Q: Is FS 20 Indian Tractor Mod APK safe to use?</h3>
90
- <p>A: FS 20 Indian Tractor Mod APK is safe to use as long as you download it from a trusted source and enable unknown sources in your device settings. However, you should always be careful and scan the file for viruses or malware before installing it.</p>
91
- <h3>Q: Is FS 20 Indian Tractor Mod APK legal to use?</h3>
92
- <p>A: FS 20 Indian Tractor Mod APK is not legal to use as it violates the terms and conditions of the original Farming Simulator 20 game. You may face some legal consequences if you use this mod APK. Therefore, we do not recommend or endorse the use of this mod APK.</p>
93
- <h3>Q: Can I update FS 20 Indian Tractor Mod APK?</h3>
94
- <p>A: FS 20 Indian Tractor Mod APK may not be compatible with the latest updates of the original Farming Simulator 20 game. You may lose some features or run into errors if you update it, so we suggest you avoid updating this mod APK.</p>
95
- <h3>Q: Can I uninstall FS 20 Indian Tractor Mod APK?</h3>
96
- <p>A: Yes, you can uninstall FS 20 Indian Tractor Mod APK anytime you want. You just need to go to your device settings > apps > FS 20 Indian Tractor Mod APK and tap on uninstall. You will see a confirmation message and then the mod APK will be removed from your device.</p>
97
- <h3>Q: Can I play FS 20 Indian Tractor Mod APK with my friends?</h3>
98
- <p>A: Yes, you can play FS 20 Indian Tractor Mod APK with your friends online. You just need an internet connection to join or create a multiplayer session, where you can chat with your friends, trade with them, help them with their tasks, and challenge them to races.</p>
 
spaces/1toTree/lora_test/ppdiffusers/ppnlp_patch_utils.py DELETED
@@ -1,509 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- import builtins
16
- import contextlib
17
- import copy
18
- import functools
19
- import time
20
- import weakref
21
- from collections import OrderedDict
22
- from types import FunctionType, MethodType
23
- from typing import Any, Callable, Dict, List, Optional, Tuple
24
-
25
- from .utils import is_paddle_available, is_paddlenlp_available
26
-
27
-
28
- def copy_func(f):
29
- "Copy a non-builtin function (NB `copy.copy` does not work for this)"
30
- if not isinstance(f, FunctionType):
31
- return copy.copy(f)
32
- fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__)
33
- fn.__kwdefaults__ = f.__kwdefaults__
34
- fn.__dict__.update(f.__dict__)
35
- fn.__annotations__.update(f.__annotations__)
36
- fn.__qualname__ = f.__qualname__
37
- return fn
38
-
39
-
40
- # copied from https://github.com/fastai/fastcore/blob/c9b4c088d3706569c076e7c197c724730be190ab/fastcore/basics.py#L938-L954
41
- def patch_to(cls, as_prop=False, cls_method=False):
42
- "Decorator: add `f` to `cls`"
43
- if not isinstance(cls, (tuple, list)):
44
- cls = (cls,)
45
-
46
- def _inner(f):
47
- for c_ in cls:
48
- nf = copy_func(f)
49
- nm = f.__name__
50
- # `functools.update_wrapper` does not work when passing the patched function to `Pipeline`, so we do it manually
51
- for o in functools.WRAPPER_ASSIGNMENTS:
52
- setattr(nf, o, getattr(f, o))
53
- nf.__qualname__ = f"{c_.__name__}.{nm}"
54
- if cls_method:
55
- setattr(c_, nm, MethodType(nf, c_))
56
- else:
57
- setattr(c_, nm, property(nf) if as_prop else nf)
58
- # Avoid clobbering existing functions
59
- return globals().get(nm, builtins.__dict__.get(nm, None))
60
-
61
- return _inner
62
-
63
-
64
- if is_paddle_available():
65
- import paddle
66
- import paddle.nn as nn
67
-
68
- @contextlib.contextmanager
69
- def device_scope(device="cpu"):
70
- new_device = device.replace("cuda", "gpu")
71
- old_device = paddle.get_device()
72
- if str(new_device) == str(old_device):
73
- yield
74
- else:
75
- try:
76
- paddle.set_device(new_device)
77
- yield
78
- finally:
79
- paddle.set_device(old_device)
80
-
81
- paddle.device_scope = device_scope
82
-
83
- class RNGStatesTracker:
84
- def __init__(self):
85
- self.states_ = {}
86
-
87
- def reset(self):
88
- self.states_ = {}
89
-
90
- def remove(self, generator_name=None):
91
- if generator_name is not None:
92
- del self.states_[generator_name]
93
-
94
- def manual_seed(self, seed, generator_name=None):
95
- if generator_name is None:
96
- generator_name = str(time.time())
97
- if generator_name in self.states_:
98
- raise ValueError("state {} already exists".format(generator_name))
99
- orig_rng_state = paddle.get_cuda_rng_state()
100
- paddle.seed(seed)
101
- self.states_[generator_name] = paddle.get_cuda_rng_state()
102
- paddle.set_cuda_rng_state(orig_rng_state)
103
- return generator_name
104
-
105
- @contextlib.contextmanager
106
- def rng_state(self, generator_name=None):
107
- if generator_name is not None:
108
- if generator_name not in self.states_:
109
- raise ValueError("state {} does not exist".format(generator_name))
110
- orig_cuda_rng_state = paddle.get_cuda_rng_state()
111
- paddle.set_cuda_rng_state(self.states_[generator_name])
112
- try:
113
- yield
114
- finally:
115
- self.states_[generator_name] = paddle.get_cuda_rng_state()
116
- paddle.set_cuda_rng_state(orig_cuda_rng_state)
117
- else:
118
- yield
119
-
120
- RNG_STATE_TRACKER = RNGStatesTracker()
121
-
122
- def get_rng_state_tracker(*args, **kwargs):
123
- return RNG_STATE_TRACKER
124
-
125
- paddle.Generator = get_rng_state_tracker
126
- randn = paddle.randn
127
-
128
- def randn_pt(shape, dtype=None, name=None, **kwargs):
129
- generator = kwargs.get("generator", None)
130
- if generator is None:
131
- return randn(shape, dtype=dtype, name=name)
132
- else:
133
- with get_rng_state_tracker().rng_state(generator):
134
- return randn(shape, dtype=dtype, name=name)
135
-
136
- paddle.randn = randn_pt
137
-
138
- rand = paddle.rand
139
-
140
- def rand_pt(shape, dtype=None, name=None, **kwargs):
141
- generator = kwargs.get("generator", None)
142
- if generator is None:
143
- return rand(shape, dtype=dtype, name=name)
144
- else:
145
- with get_rng_state_tracker().rng_state(generator):
146
- return rand(shape, dtype=dtype, name=name)
147
-
148
- paddle.rand = rand_pt
149
-
150
- @patch_to(nn.Layer)
151
- def get_sublayer(self, target: str):
152
- if target == "":
153
- return self
154
-
155
- atoms: List[str] = target.split(".")
156
- mod: nn.Layer = self
157
-
158
- for item in atoms:
159
- if not hasattr(mod, item):
160
- raise AttributeError(mod.__class__.__name__ + " has no " "attribute `" + item + "`")
161
-
162
- mod = getattr(mod, item)
163
-
164
- if not isinstance(mod, nn.Layer):
165
- raise AttributeError("`" + item + "` is not " "an nn.Layer")
166
- return mod
167
-
168
- class _WrappedHook:
169
- def __init__(self, hook: Callable, module: Optional["nn.Layer"] = None):
170
- self.hook: Callable = hook
171
- functools.update_wrapper(self, hook)
172
-
173
- self.with_module: bool = False
174
-
175
- if module is not None:
176
- self.module: weakref.ReferenceType["nn.Layer"] = weakref.ref(module)
177
- self.with_module = True
178
-
179
- def __call__(self, *args: Any, **kwargs: Any) -> Any:
180
- if self.with_module:
181
- module = self.module()
182
- if module is None:
183
- raise RuntimeError("You are trying to call the hook of a dead Module!")
184
- return self.hook(module, *args, **kwargs)
185
- return self.hook(*args, **kwargs)
186
-
187
- def __getstate__(self) -> Dict:
188
- result = {"hook": self.hook, "with_module": self.with_module}
189
- if self.with_module:
190
- result["module"] = self.module()
191
-
192
- return result
193
-
194
- def __setstate__(self, state: Dict):
195
- self.hook = state["hook"]
196
- self.with_module = state["with_module"]
197
-
198
- if self.with_module:
199
- if state["module"] is None:
200
- raise RuntimeError("You are trying to revive the hook of a dead Module!")
201
- self.module = weakref.ref(state["module"])
202
-
203
- from paddle.fluid.dygraph.layers import HookRemoveHelper
204
-
205
- @patch_to(nn.Layer)
206
- def register_load_state_dict_pre_hook(self, hook, with_module=False):
207
- handle = HookRemoveHelper(self.load_state_dict_pre_hooks)
208
- self.load_state_dict_pre_hooks[handle._hook_id] = _WrappedHook(hook, self if with_module else None)
209
- return handle
210
-
211
- raw_set_state_dict = nn.Layer.set_state_dict
212
-
213
- @patch_to(nn.Layer)
214
- def set_state_dict(self, state_dict, use_structured_name: bool = True):
215
- for hook in self.load_state_dict_pre_hooks.values():
216
- hook(state_dict)
217
- return raw_set_state_dict(self, state_dict, use_structured_name=use_structured_name)
218
-
219
- nn.Layer.load_dict = nn.Layer.set_state_dict
220
- nn.Layer.set_dict = nn.Layer.set_state_dict
221
-
222
- raw_init = nn.Layer.__init__
223
-
224
- @patch_to(nn.Layer)
225
- def __init__(self, name_scope=None, dtype="float32"):
226
- raw_init(self, name_scope=name_scope, dtype=dtype)
227
- self.load_state_dict_pre_hooks = OrderedDict()
228
-
229
-
230
- if is_paddle_available() and is_paddlenlp_available():
231
- import paddle
232
-
233
- import paddlenlp.transformers
234
- from paddlenlp.transformers import PretrainedModel
235
-
236
- @patch_to(PretrainedModel, as_prop=True)
237
- def dtype(self):
238
- try:
239
- return next(self.named_parameters())[1].dtype
240
- except StopIteration:
241
- return paddle.get_default_dtype()
242
-
243
- @patch_to(PretrainedModel, as_prop=True)
244
- def device(self):
245
- try:
246
- return next(self.named_parameters())[1].place
247
- except StopIteration:
248
- return paddle.get_device()
249
-
250
- try:
251
- from paddlenlp.transformers import XLMRobertaTokenizer
252
- except ImportError:
253
- # patch xlm-roberta tokenizer
254
- """Tokenization classes for XLM-RoBERTa model."""
255
- import os
256
- from shutil import copyfile
257
-
258
- import sentencepiece as spm
259
-
260
- from paddlenlp.transformers.tokenizer_utils import (
261
- AddedToken,
262
- PretrainedTokenizer,
263
- )
264
- from paddlenlp.utils.log import logger
265
-
266
- SPIECE_UNDERLINE = "▁"
267
-
268
- class XLMRobertaTokenizer(PretrainedTokenizer):
269
-
270
- resource_files_names = {"vocab_file": "sentencepiece.bpe.model"}
271
- pretrained_resource_files_map = {}
272
- pretrained_init_configuration = {}
273
- max_model_input_sizes = {
274
- "xlm-roberta-base": 512,
275
- "xlm-roberta-large": 512,
276
- "xlm-roberta-large-finetuned-conll02-dutch": 512,
277
- "xlm-roberta-large-finetuned-conll02-spanish": 512,
278
- "xlm-roberta-large-finetuned-conll03-english": 512,
279
- "xlm-roberta-large-finetuned-conll03-german": 512,
280
- }
281
- model_input_names = ["input_ids", "attention_mask"]
282
-
283
- def __init__(
284
- self,
285
- vocab_file,
286
- bos_token="<s>",
287
- eos_token="</s>",
288
- sep_token="</s>",
289
- cls_token="<s>",
290
- unk_token="<unk>",
291
- pad_token="<pad>",
292
- mask_token="<mask>",
293
- sp_model_kwargs: Optional[Dict[str, Any]] = None,
294
- **kwargs
295
- ) -> None:
296
- # Mask token behave like a normal word, i.e. include the space before it
297
- mask_token = (
298
- AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token
299
- )
300
-
301
- self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
302
-
303
- super().__init__(
304
- bos_token=bos_token,
305
- eos_token=eos_token,
306
- unk_token=unk_token,
307
- sep_token=sep_token,
308
- cls_token=cls_token,
309
- pad_token=pad_token,
310
- mask_token=mask_token,
311
- sp_model_kwargs=self.sp_model_kwargs,
312
- **kwargs,
313
- )
314
-
315
- self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
316
- self.sp_model.Load(str(vocab_file))
317
- self.vocab_file = vocab_file
318
-
319
- # Original fairseq vocab and spm vocab must be "aligned":
320
- # Vocab | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
321
- # -------- | ------- | ------- | ------ | ------- | --- | --- | --- | ----- | ----- | ----
322
- # fairseq | '<s>' | '<pad>' | '</s>' | '<unk>' | ',' | '.' | '▁' | 's' | '▁de' | '-'
323
- # spm | '<unk>' | '<s>' | '</s>' | ',' | '.' | '▁' | 's' | '▁de' | '-' | '▁a'
324
-
325
- # Mimic fairseq token-to-id alignment for the first 4 token
326
- self.fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}
327
-
328
- # The first "real" token "," has position 4 in the original fairseq vocab and position 3 in the spm vocab
329
- self.fairseq_offset = 1
330
-
331
- self.fairseq_tokens_to_ids["<mask>"] = len(self.sp_model) + self.fairseq_offset
332
- self.fairseq_ids_to_tokens = {v: k for k, v in self.fairseq_tokens_to_ids.items()}
333
-
334
- def __getstate__(self):
335
- state = self.__dict__.copy()
336
- state["sp_model"] = None
337
- state["sp_model_proto"] = self.sp_model.serialized_model_proto()
338
- return state
339
-
340
- def __setstate__(self, d):
341
- self.__dict__ = d
342
-
343
- # for backward compatibility
344
- if not hasattr(self, "sp_model_kwargs"):
345
- self.sp_model_kwargs = {}
346
-
347
- self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
348
- self.sp_model.LoadFromSerializedProto(self.sp_model_proto)
349
-
350
- def build_inputs_with_special_tokens(
351
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
352
- ) -> List[int]:
353
- """
354
- Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
355
- adding special tokens. An XLM-RoBERTa sequence has the following format:
356
- - single sequence: `<s> X </s>`
357
- - pair of sequences: `<s> A </s></s> B </s>`
358
- Args:
359
- token_ids_0 (`List[int]`):
360
- List of IDs to which the special tokens will be added.
361
- token_ids_1 (`List[int]`, *optional*):
362
- Optional second list of IDs for sequence pairs.
363
- Returns:
364
- `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
365
- """
366
-
367
- if token_ids_1 is None:
368
- return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
369
- cls = [self.cls_token_id]
370
- sep = [self.sep_token_id]
371
- return cls + token_ids_0 + sep + sep + token_ids_1 + sep
372
-
373
- def get_special_tokens_mask(
374
- self,
375
- token_ids_0: List[int],
376
- token_ids_1: Optional[List[int]] = None,
377
- already_has_special_tokens: bool = False,
378
- ) -> List[int]:
379
- """
380
- Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
381
- special tokens using the tokenizer `prepare_for_model` method.
382
- Args:
383
- token_ids_0 (`List[int]`):
384
- List of IDs.
385
- token_ids_1 (`List[int]`, *optional*):
386
- Optional second list of IDs for sequence pairs.
387
- already_has_special_tokens (`bool`, *optional*, defaults to `False`):
388
- Whether or not the token list is already formatted with special tokens for the model.
389
- Returns:
390
- `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
391
- """
392
-
393
- if already_has_special_tokens:
394
- return super().get_special_tokens_mask(
395
- token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
396
- )
397
-
398
- if token_ids_1 is None:
399
- return [1] + ([0] * len(token_ids_0)) + [1]
400
- return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
401
-
402
- def create_token_type_ids_from_sequences(
403
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
404
- ) -> List[int]:
405
- """
406
- Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
407
- not make use of token type ids, therefore a list of zeros is returned.
408
- Args:
409
- token_ids_0 (`List[int]`):
410
- List of IDs.
411
- token_ids_1 (`List[int]`, *optional*):
412
- Optional second list of IDs for sequence pairs.
413
- Returns:
414
- `List[int]`: List of zeros.
415
- """
416
-
417
- sep = [self.sep_token_id]
418
- cls = [self.cls_token_id]
419
-
420
- if token_ids_1 is None:
421
- return len(cls + token_ids_0 + sep) * [0]
422
- return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
423
-
424
- @property
425
- def vocab_size(self):
426
- return len(self.sp_model) + self.fairseq_offset + 1 # Add the <mask> token
427
-
428
- def get_vocab(self):
429
- vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
430
- vocab.update(self.added_tokens_encoder)
431
- return vocab
432
-
433
- def _tokenize(self, text: str) -> List[str]:
434
- return self.sp_model.encode(text, out_type=str)
435
-
436
- def _convert_token_to_id(self, token):
437
- """Converts a token (str) in an id using the vocab."""
438
- if token in self.fairseq_tokens_to_ids:
439
- return self.fairseq_tokens_to_ids[token]
440
- spm_id = self.sp_model.PieceToId(token)
441
-
442
- # Need to return unknown token if the SP model returned 0
443
- return spm_id + self.fairseq_offset if spm_id else self.unk_token_id
444
-
445
- def _convert_id_to_token(self, index):
446
- """Converts an index (integer) in a token (str) using the vocab."""
447
- if index in self.fairseq_ids_to_tokens:
448
- return self.fairseq_ids_to_tokens[index]
449
- return self.sp_model.IdToPiece(index - self.fairseq_offset)
450
-
451
- def convert_tokens_to_string(self, tokens):
452
- """Converts a sequence of tokens (strings for sub-words) in a single string."""
453
- out_string = "".join(tokens).replace(SPIECE_UNDERLINE, " ").strip()
454
- return out_string
455
-
456
- def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
457
- if not os.path.isdir(save_directory):
458
- logger.error(f"Vocabulary path ({save_directory}) should be a directory")
459
- return
460
- out_vocab_file = os.path.join(
461
- save_directory,
462
- (filename_prefix + "-" if filename_prefix else "") + self.resource_files_names["vocab_file"],
463
- )
464
-
465
- if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(
466
- self.vocab_file
467
- ):
468
- copyfile(self.vocab_file, out_vocab_file)
469
- elif not os.path.isfile(self.vocab_file):
470
- with open(out_vocab_file, "wb") as fi:
471
- content_spiece_model = self.sp_model.serialized_model_proto()
472
- fi.write(content_spiece_model)
473
-
474
- return (out_vocab_file,)
475
-
476
- paddlenlp.transformers.XLMRobertaTokenizer = XLMRobertaTokenizer
477
-
478
- # patch BertModel forward
479
- from paddlenlp.transformers import BertModel
480
-
481
- raw_forward = BertModel.forward
482
-
483
- @patch_to(BertModel)
484
- def forward(
485
- self,
486
- input_ids: paddle.Tensor,
487
- token_type_ids: Optional[paddle.Tensor] = None,
488
- position_ids: Optional[paddle.Tensor] = None,
489
- attention_mask: Optional[paddle.Tensor] = None,
490
- past_key_values: Optional[Tuple[Tuple[paddle.Tensor]]] = None,
491
- use_cache: Optional[bool] = None,
492
- output_hidden_states: Optional[bool] = None,
493
- output_attentions: Optional[bool] = None,
494
- return_dict: Optional[bool] = None,
495
- ):
496
- if attention_mask is None:
497
- attention_mask = paddle.ones_like(input_ids)
498
- return raw_forward(
499
- self,
500
- input_ids,
501
- token_type_ids,
502
- position_ids,
503
- attention_mask,
504
- past_key_values,
505
- use_cache,
506
- output_hidden_states,
507
- output_attentions,
508
- return_dict,
509
- )
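As a usage note on the `patch_to` decorator defined at the top of this file: it installs the decorated function on the target class and rebinds the module-level name to any pre-existing global or builtin of the same name (often `None`). A minimal sketch, with `Foo` and `greet` as hypothetical names:

    class Foo:  # hypothetical target class
        pass

    @patch_to(Foo)
    def greet(self):
        return f"hello from {type(self).__name__}"

    print(Foo().greet())  # -> "hello from Foo"
    # note: the module-level name `greet` is now None, per `_inner`'s return value

The seeded-generator patches work together with this; a sketch assuming a GPU build of paddle (the tracker manipulates CUDA RNG state) and that the patches above have been applied:

    import paddle

    tracker = paddle.Generator()              # returns the global RNGStatesTracker
    name = tracker.manual_seed(1234)          # registers a named CUDA RNG state
    x = paddle.randn([2, 3], generator=name)  # samples using the tracked state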
 
spaces/4Taps/SadTalker/src/audio2pose_models/cvae.py DELETED
@@ -1,149 +0,0 @@
1
- import torch
2
- import torch.nn.functional as F
3
- from torch import nn
4
- from src.audio2pose_models.res_unet import ResUnet
5
-
6
- def class2onehot(idx, class_num):
7
-
8
- assert torch.max(idx).item() < class_num
9
- onehot = torch.zeros(idx.size(0), class_num).to(idx.device)
10
- onehot.scatter_(1, idx, 1)
11
- return onehot
12
-
13
- class CVAE(nn.Module):
14
- def __init__(self, cfg):
15
- super().__init__()
16
- encoder_layer_sizes = cfg.MODEL.CVAE.ENCODER_LAYER_SIZES
17
- decoder_layer_sizes = cfg.MODEL.CVAE.DECODER_LAYER_SIZES
18
- latent_size = cfg.MODEL.CVAE.LATENT_SIZE
19
- num_classes = cfg.DATASET.NUM_CLASSES
20
- audio_emb_in_size = cfg.MODEL.CVAE.AUDIO_EMB_IN_SIZE
21
- audio_emb_out_size = cfg.MODEL.CVAE.AUDIO_EMB_OUT_SIZE
22
- seq_len = cfg.MODEL.CVAE.SEQ_LEN
23
-
24
- self.latent_size = latent_size
25
-
26
- self.encoder = ENCODER(encoder_layer_sizes, latent_size, num_classes,
27
- audio_emb_in_size, audio_emb_out_size, seq_len)
28
- self.decoder = DECODER(decoder_layer_sizes, latent_size, num_classes,
29
- audio_emb_in_size, audio_emb_out_size, seq_len)
30
- def reparameterize(self, mu, logvar):
31
- std = torch.exp(0.5 * logvar)
32
- eps = torch.randn_like(std)
33
- return mu + eps * std
34
-
35
- def forward(self, batch):
36
- batch = self.encoder(batch)
37
- mu = batch['mu']
38
- logvar = batch['logvar']
39
- z = self.reparameterize(mu, logvar)
40
- batch['z'] = z
41
- return self.decoder(batch)
42
-
43
- def test(self, batch):
44
- '''
45
- class_id = batch['class']
46
- z = torch.randn([class_id.size(0), self.latent_size]).to(class_id.device)
47
- batch['z'] = z
48
- '''
49
- return self.decoder(batch)
50
-
51
- class ENCODER(nn.Module):
52
- def __init__(self, layer_sizes, latent_size, num_classes,
53
- audio_emb_in_size, audio_emb_out_size, seq_len):
54
- super().__init__()
55
-
56
- self.resunet = ResUnet()
57
- self.num_classes = num_classes
58
- self.seq_len = seq_len
59
-
60
- self.MLP = nn.Sequential()
61
- layer_sizes[0] += latent_size + seq_len*audio_emb_out_size + 6
62
- for i, (in_size, out_size) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
63
- self.MLP.add_module(
64
- name="L{:d}".format(i), module=nn.Linear(in_size, out_size))
65
- self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU())
66
-
67
- self.linear_means = nn.Linear(layer_sizes[-1], latent_size)
68
- self.linear_logvar = nn.Linear(layer_sizes[-1], latent_size)
69
- self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size)
70
-
71
- self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size))
72
-
73
- def forward(self, batch):
74
- class_id = batch['class']
75
- pose_motion_gt = batch['pose_motion_gt'] #bs seq_len 6
76
- ref = batch['ref'] #bs 6
77
- bs = pose_motion_gt.shape[0]
78
- audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size
79
-
80
- #pose encode
81
- pose_emb = self.resunet(pose_motion_gt.unsqueeze(1)) #bs 1 seq_len 6
82
- pose_emb = pose_emb.reshape(bs, -1) #bs seq_len*6
83
-
84
- #audio mapping
85
- #print(audio_in.shape)
86
- audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size
87
- audio_out = audio_out.reshape(bs, -1)
88
-
89
- class_bias = self.classbias[class_id] #bs latent_size
90
- x_in = torch.cat([ref, pose_emb, audio_out, class_bias], dim=-1) #bs seq_len*(audio_emb_out_size+6)+latent_size
91
- x_out = self.MLP(x_in)
92
-
93
- mu = self.linear_means(x_out)
94
- logvar = self.linear_logvar(x_out) #bs latent_size
95
-
96
- batch.update({'mu':mu, 'logvar':logvar})
97
- return batch
98
-
99
- class DECODER(nn.Module):
100
- def __init__(self, layer_sizes, latent_size, num_classes,
101
- audio_emb_in_size, audio_emb_out_size, seq_len):
102
- super().__init__()
103
-
104
- self.resunet = ResUnet()
105
- self.num_classes = num_classes
106
- self.seq_len = seq_len
107
-
108
- self.MLP = nn.Sequential()
109
- input_size = latent_size + seq_len*audio_emb_out_size + 6
110
- for i, (in_size, out_size) in enumerate(zip([input_size]+layer_sizes[:-1], layer_sizes)):
111
- self.MLP.add_module(
112
- name="L{:d}".format(i), module=nn.Linear(in_size, out_size))
113
- if i+1 < len(layer_sizes):
114
- self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU())
115
- else:
116
- self.MLP.add_module(name="sigmoid", module=nn.Sigmoid())
117
-
118
- self.pose_linear = nn.Linear(6, 6)
119
- self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size)
120
-
121
- self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size))
122
-
123
- def forward(self, batch):
124
-
125
- z = batch['z'] #bs latent_size
126
- bs = z.shape[0]
127
- class_id = batch['class']
128
- ref = batch['ref'] #bs 6
129
- audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size
130
- #print('audio_in: ', audio_in[:, :, :10])
131
-
132
- audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size
133
- #print('audio_out: ', audio_out[:, :, :10])
134
- audio_out = audio_out.reshape([bs, -1]) # bs seq_len*audio_emb_out_size
135
- class_bias = self.classbias[class_id] #bs latent_size
136
-
137
- z = z + class_bias
138
- x_in = torch.cat([ref, z, audio_out], dim=-1)
139
- x_out = self.MLP(x_in) # bs layer_sizes[-1]
140
- x_out = x_out.reshape((bs, self.seq_len, -1))
141
-
142
- #print('x_out: ', x_out)
143
-
144
- pose_emb = self.resunet(x_out.unsqueeze(1)) #bs 1 seq_len 6
145
-
146
- pose_motion_pred = self.pose_linear(pose_emb.squeeze(1)) #bs seq_len 6
147
-
148
- batch.update({'pose_motion_pred':pose_motion_pred})
149
- return batch
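As an aside, `reparameterize` above is the standard VAE reparameterization trick, which keeps sampling differentiable with respect to `mu` and `logvar`. A minimal standalone sketch with made-up shapes:

    import torch

    mu = torch.zeros(2, 64)                # predicted mean, bs x latent_size
    logvar = torch.zeros(2, 64)            # predicted log-variance
    std = torch.exp(0.5 * logvar)          # std = exp(logvar / 2)
    z = mu + torch.randn_like(std) * std   # differentiable sample of N(mu, std^2)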
 
spaces/812vaishnavi/gradio-land-cover-mapping/app.py DELETED
@@ -1,63 +0,0 @@
1
- import gradio as gr
2
- import PIL
3
-
4
- from tensorflow.keras.models import load_model
5
- #import segmentation_models as sm
6
- #import efficientnet.keras as efn
7
- import matplotlib.pyplot as plt
8
- import tensorflow as tf
9
- import numpy as np
10
- import cv2
11
-
12
- lr=1e-5
13
-
14
- #iou_score = [sm.metrics.IOUScore(threshold=0.5)]
15
-
16
- def iou_loss(y_true, y_pred):
17
- y_true = tf.reshape(y_true, [-1])
18
- y_pred = tf.reshape(y_pred, [-1])
19
- intersection = tf.reduce_sum(tf.cast(y_true, tf.float32) * tf.cast(y_pred, tf.float32))
20
- score = (intersection + 1.) / (tf.reduce_sum(tf.cast(y_true, tf.float32)) +
21
- tf.reduce_sum(tf.cast(y_pred, tf.float32)) - intersection + 1.)
22
- return 1 - score
23
-
24
- def mean_iou(y_true, y_pred):
25
- y_pred = tf.round(tf.cast(y_pred, tf.int32))
26
- intersect = tf.reduce_sum(tf.cast(y_true, tf.float32) * tf.cast(y_pred, tf.float32), axis=[1])
27
- union = tf.reduce_sum(tf.cast(y_true, tf.float32),axis=[1]) + tf.reduce_sum(tf.cast(y_pred, tf.float32),axis=[1])
28
- smooth = tf.ones(tf.shape(intersect))
29
- return tf.reduce_mean((intersect + smooth) / (union - intersect + smooth))
30
-
31
- model1 = load_model('UNET[Scratch].h5', compile=False)
32
-
33
- model1.compile(optimizer = tf.keras.optimizers.Adam(lr),
34
- loss=iou_loss, metrics=[mean_iou],)
35
-
36
- class_names = ['urban_land', 'agriculture_land', 'rangeland', 'forest_land', 'water','barren_land','unknown']
37
-
38
- def Unet(img):
39
- img_1=img.reshape(-1, 256, 256, 3)
40
- prediction=model1.predict(img_1).flatten()
41
- return {class_names[i]: float(prediction[i]) for i in range(7)}
42
- iface1 = gr.Interface(fn=Unet, inputs = gr.inputs.Image(shape = (256, 256)), outputs = gr.outputs.Label(num_top_classes=7), title="Unet",
43
- description="""Segmenting land from an image using a deep learning model.
44
- This application aims to provide a user-friendly interface for segmenting land areas in images.
45
- First, we get an intermediate output as a segmented image of the land cover, which is later converted into the percentage of each land class.
46
- Overall, we aim to make land segmentation accessible to a wide range of users and to facilitate further analysis and decision-making based on the segmented land regions.""")
47
-
48
- '''
49
- def fpn(img):
50
- img_2=img.reshape(-1,256, 256, 3)
51
- prediction=model2.predict(img_2).flatten()
52
- return {class_names[i]: float(prediction[i]) for i in range(7)}
53
- iface2 = gr.Interface(fn=fpn, inputs = gr.inputs.Image(shape = (256, 256)), outputs = gr.outputs.Label(num_top_classes=7), title="FPN",)
54
-
55
- # Combine both interfaces into a single Parallel interface
56
- gr.Parallel(iface1, iface2, title="Land Segmentation: Unet vs FPN",
57
- description="""Segmenting land from an image using a deep learning model.
58
- This application aims to provide a user-friendly interface for segmenting land areas in images.
59
- Firstly we get an intermediate output as a segmented image of the land cover, which is later converted into the percentage of the respective land classes.
60
- Overall, we aim to make land segmentation accessible to a wide range of users and facilitating further analysis and decision-making based on the segmented land regions.""",
61
- ).launch(share=True, debug=True, auth=("admin", "pass1234"))
62
- '''
63
- iface1.launch(inline=False)
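As a quick sanity check of the `iou_loss` defined above, a prediction identical to the ground truth should yield a loss of zero; a minimal sketch assuming the function is in scope:

    import tensorflow as tf

    y = tf.ones([1, 256, 256, 1])   # dummy binary mask
    print(float(iou_loss(y, y)))    # -> 0.0, since intersection equals union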
 
spaces/A00001/bingothoo/src/components/markdown.tsx DELETED
@@ -1,9 +0,0 @@
1
- import { FC, memo } from 'react'
2
- import ReactMarkdown, { Options } from 'react-markdown'
3
-
4
- export const MemoizedReactMarkdown: FC<Options> = memo(
5
- ReactMarkdown,
6
- (prevProps, nextProps) =>
7
- prevProps.children === nextProps.children &&
8
- prevProps.className === nextProps.className
9
- )
 
spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130.md DELETED
@@ -1,30 +0,0 @@
1
- # Engineering Interviews
2
-
3
- Last edited time: March 31, 2023 1:49 PM
4
- Owner: Anonymous
5
- Tags: Guides and Processes
6
-
7
- <aside>
8
- 💡 Use this template to document your approach to interviewing engineers!
9
-
10
- </aside>
11
-
12
- # Philosophy
13
-
14
- Create a quote by typing `/quote` and pressing `enter`.
15
-
16
- > Before you build a better mousetrap, it helps to know if there are any mice out there. —Yogi Berra
17
- >
18
-
19
- # Interview Question Database
20
-
21
- <aside>
22
- 💡 [Inline databases](https://www.notion.so/c523297c17634873a52317dd7a3e0b77) can be added to any page as tables, boards, calendars, lists or galleries. Just type `/database` to see your options.
23
-
24
- </aside>
25
-
26
- [Questions](Engineering%20Interviews%204be8039581d04456b0151f2cc4b22130/Questions%20ede8818b3a0e447f80145905690eb3f6.md)
27
-
28
- # Further Reading
29
-
30
- For more on databases, check out this [Notion guide](https://www.notion.so/fd8cd2d212f74c50954c11086d85997e).
 
spaces/ADobrovsky/Plant_Disease_Classification_Project/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Plant Disease Classification Project
3
- emoji: 💩
4
- colorFrom: indigo
5
- colorTo: purple
6
- sdk: gradio
7
- sdk_version: 3.15.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers_537227KB.py DELETED
@@ -1,126 +0,0 @@
1
- import torch
2
- from torch import nn
3
- import torch.nn.functional as F
4
-
5
- from uvr5_pack.lib_v5 import spec_utils
6
-
7
-
8
- class Conv2DBNActiv(nn.Module):
9
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
10
- super(Conv2DBNActiv, self).__init__()
11
- self.conv = nn.Sequential(
12
- nn.Conv2d(
13
- nin,
14
- nout,
15
- kernel_size=ksize,
16
- stride=stride,
17
- padding=pad,
18
- dilation=dilation,
19
- bias=False,
20
- ),
21
- nn.BatchNorm2d(nout),
22
- activ(),
23
- )
24
-
25
- def __call__(self, x):
26
- return self.conv(x)
27
-
28
-
29
- class SeperableConv2DBNActiv(nn.Module):
30
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
31
- super(SeperableConv2DBNActiv, self).__init__()
32
- self.conv = nn.Sequential(
33
- nn.Conv2d(
34
- nin,
35
- nin,
36
- kernel_size=ksize,
37
- stride=stride,
38
- padding=pad,
39
- dilation=dilation,
40
- groups=nin,
41
- bias=False,
42
- ),
43
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
44
- nn.BatchNorm2d(nout),
45
- activ(),
46
- )
47
-
48
- def __call__(self, x):
49
- return self.conv(x)
50
-
51
-
52
- class Encoder(nn.Module):
53
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
54
- super(Encoder, self).__init__()
55
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
56
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
57
-
58
- def __call__(self, x):
59
- skip = self.conv1(x)
60
- h = self.conv2(skip)
61
-
62
- return h, skip
63
-
64
-
65
- class Decoder(nn.Module):
66
- def __init__(
67
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
68
- ):
69
- super(Decoder, self).__init__()
70
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
71
- self.dropout = nn.Dropout2d(0.1) if dropout else None
72
-
73
- def __call__(self, x, skip=None):
74
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
75
- if skip is not None:
76
- skip = spec_utils.crop_center(skip, x)
77
- x = torch.cat([x, skip], dim=1)
78
- h = self.conv(x)
79
-
80
- if self.dropout is not None:
81
- h = self.dropout(h)
82
-
83
- return h
84
-
85
-
86
- class ASPPModule(nn.Module):
87
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
88
- super(ASPPModule, self).__init__()
89
- self.conv1 = nn.Sequential(
90
- nn.AdaptiveAvgPool2d((1, None)),
91
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
92
- )
93
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
94
- self.conv3 = SeperableConv2DBNActiv(
95
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
96
- )
97
- self.conv4 = SeperableConv2DBNActiv(
98
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
99
- )
100
- self.conv5 = SeperableConv2DBNActiv(
101
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
102
- )
103
- self.conv6 = SeperableConv2DBNActiv(
104
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
105
- )
106
- self.conv7 = SeperableConv2DBNActiv(
107
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
108
- )
109
- self.bottleneck = nn.Sequential(
110
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
111
- )
112
-
113
- def forward(self, x):
114
- _, _, h, w = x.size()
115
- feat1 = F.interpolate(
116
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
117
- )
118
- feat2 = self.conv2(x)
119
- feat3 = self.conv3(x)
120
- feat4 = self.conv4(x)
121
- feat5 = self.conv5(x)
122
- feat6 = self.conv6(x)
123
- feat7 = self.conv7(x)
124
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
125
- bottle = self.bottleneck(out)
126
- return bottle
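For orientation, the `ASPPModule` above preserves the spatial size and maps `nin` input channels to `nout` output channels; a minimal sketch with made-up shapes, assuming the classes above are in scope:

    import torch

    aspp = ASPPModule(nin=64, nout=128)
    x = torch.randn(1, 64, 32, 128)   # bs x nin x h x w
    out = aspp(x)                     # -> [1, 128, 32, 128]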
 
spaces/AIFILMS/StyleGANEX/models/encoders/psp_encoders.py DELETED
@@ -1,357 +0,0 @@
1
- import numpy as np
2
- import torch
3
- import torch.nn.functional as F
4
- from torch import nn
5
- from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module
6
-
7
- from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE
8
- from models.stylegan2.model import EqualLinear
9
-
10
-
11
- class GradualStyleBlock(Module):
12
- def __init__(self, in_c, out_c, spatial, max_pooling=False):
13
- super(GradualStyleBlock, self).__init__()
14
- self.out_c = out_c
15
- self.spatial = spatial
16
- self.max_pooling = max_pooling
17
- num_pools = int(np.log2(spatial))
18
- modules = []
19
- modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
20
- nn.LeakyReLU()]
21
- for i in range(num_pools - 1):
22
- modules += [
23
- Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
24
- nn.LeakyReLU()
25
- ]
26
- self.convs = nn.Sequential(*modules)
27
- self.linear = EqualLinear(out_c, out_c, lr_mul=1)
28
-
29
- def forward(self, x):
30
- x = self.convs(x)
31
- # To make E accept more general H*W images, we add global average pooling to
32
- # resize all features to 1*1*512 before mapping to latent codes
33
- if self.max_pooling:
34
- x = F.adaptive_max_pool2d(x, 1) ##### modified
35
- else:
36
- x = F.adaptive_avg_pool2d(x, 1) ##### modified
37
- x = x.view(-1, self.out_c)
38
- x = self.linear(x)
39
- return x
40
-
41
- class AdaptiveInstanceNorm(nn.Module):
42
- def __init__(self, fin, style_dim=512):
43
- super().__init__()
44
-
45
- self.norm = nn.InstanceNorm2d(fin, affine=False)
46
- self.style = nn.Linear(style_dim, fin * 2)
47
-
48
- self.style.bias.data[:fin] = 1
49
- self.style.bias.data[fin:] = 0
50
-
51
- def forward(self, input, style):
52
- style = self.style(style).unsqueeze(2).unsqueeze(3)
53
- gamma, beta = style.chunk(2, 1)
54
- out = self.norm(input)
55
- out = gamma * out + beta
56
- return out
57
-
58
-
59
- class FusionLayer(Module): ##### modified
60
- def __init__(self, inchannel, outchannel, use_skip_torgb=True, use_att=0):
61
- super(FusionLayer, self).__init__()
62
-
63
- self.transform = nn.Sequential(nn.Conv2d(inchannel, outchannel, kernel_size=3, stride=1, padding=1),
64
- nn.LeakyReLU())
65
- self.fusion_out = nn.Conv2d(outchannel*2, outchannel, kernel_size=3, stride=1, padding=1)
66
- self.fusion_out.weight.data *= 0.01
67
- self.fusion_out.weight[:,0:outchannel,1,1].data += torch.eye(outchannel)
68
-
69
- self.use_skip_torgb = use_skip_torgb
70
- if use_skip_torgb:
71
- self.fusion_skip = nn.Conv2d(3+outchannel, 3, kernel_size=3, stride=1, padding=1)
72
- self.fusion_skip.weight.data *= 0.01
73
- self.fusion_skip.weight[:,0:3,1,1].data += torch.eye(3)
74
-
75
- self.use_att = use_att
76
- if use_att:
77
- modules = []
78
- modules.append(nn.Linear(512, outchannel))
79
- for _ in range(use_att):
80
- modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True))
81
- modules.append(nn.Linear(outchannel, outchannel))
82
- modules.append(nn.LeakyReLU(negative_slope=0.2, inplace=True))
83
- self.linear = Sequential(*modules)
84
- self.norm = AdaptiveInstanceNorm(outchannel*2, outchannel)
85
- self.conv = nn.Conv2d(outchannel*2, 1, 3, 1, 1, bias=True)
86
-
87
- def forward(self, feat, out, skip, editing_w=None):
88
- x = self.transform(feat)
89
- # similar to VToonify, use editing vector as condition
90
- # fuse encoder feature and decoder feature with a predicted attention mask m_E
91
- # if self.use_att = False, just fuse them with a simple conv layer
92
- if self.use_att and editing_w is not None:
93
- label = self.linear(editing_w)
94
- m_E = (F.relu(self.conv(self.norm(torch.cat([out, abs(out-x)], dim=1), label)))).tanh()
95
- x = x * m_E
96
- out = self.fusion_out(torch.cat((out, x), dim=1))
97
- if self.use_skip_torgb:
98
- skip = self.fusion_skip(torch.cat((skip, x), dim=1))
99
- return out, skip
100
-
101
-
102
- class ResnetBlock(nn.Module):
103
- def __init__(self, dim):
104
- super(ResnetBlock, self).__init__()
105
-
106
- self.conv_block = nn.Sequential(Conv2d(dim, dim, 3, 1, 1),
107
- nn.LeakyReLU(),
108
- Conv2d(dim, dim, 3, 1, 1))
109
- self.relu = nn.LeakyReLU()
110
-
111
- def forward(self, x):
112
- out = x + self.conv_block(x)
113
- return self.relu(out)
114
-
115
- # trainable light-weight translation network T
116
- # for sketch/mask-to-face translation,
117
- # we add a trainable T to map y to an intermediate domain where E can more easily extract features.
118
- class ResnetGenerator(nn.Module):
119
- def __init__(self, in_channel=19, res_num=2):
120
- super(ResnetGenerator, self).__init__()
121
-
122
- modules = []
123
- modules.append(Conv2d(in_channel, 16, 3, 2, 1))
124
- modules.append(nn.LeakyReLU())
125
- modules.append(Conv2d(16, 16, 3, 2, 1))
126
- modules.append(nn.LeakyReLU())
127
- for _ in range(res_num):
128
- modules.append(ResnetBlock(16))
129
- for _ in range(2):
130
- modules.append(nn.ConvTranspose2d(16, 16, 3, 2, 1, output_padding=1))
131
- modules.append(nn.LeakyReLU())
132
- modules.append(Conv2d(16, 64, 3, 1, 1, bias=False))
133
- modules.append(BatchNorm2d(64))
134
- modules.append(PReLU(64))
135
- self.model = Sequential(*modules)
136
-
137
- def forward(self, input):
138
- return self.model(input)
139
-
140
- class GradualStyleEncoder(Module):
141
- def __init__(self, num_layers, mode='ir', opts=None):
142
- super(GradualStyleEncoder, self).__init__()
143
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
144
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
145
- blocks = get_blocks(num_layers)
146
- if mode == 'ir':
147
- unit_module = bottleneck_IR
148
- elif mode == 'ir_se':
149
- unit_module = bottleneck_IR_SE
150
-
151
- # for sketch/mask-to-face translation, add a new network T
152
- if opts.input_nc != 3:
153
- self.input_label_layer = ResnetGenerator(opts.input_nc, opts.res_num)
154
-
155
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
156
- BatchNorm2d(64),
157
- PReLU(64))
158
- modules = []
159
- for block in blocks:
160
- for bottleneck in block:
161
- modules.append(unit_module(bottleneck.in_channel,
162
- bottleneck.depth,
163
- bottleneck.stride))
164
- self.body = Sequential(*modules)
165
-
166
- self.styles = nn.ModuleList()
167
- self.style_count = opts.n_styles
168
- self.coarse_ind = 3
169
- self.middle_ind = 7
170
- for i in range(self.style_count):
171
- if i < self.coarse_ind:
172
- style = GradualStyleBlock(512, 512, 16, 'max_pooling' in opts and opts.max_pooling)
173
- elif i < self.middle_ind:
174
- style = GradualStyleBlock(512, 512, 32, 'max_pooling' in opts and opts.max_pooling)
175
- else:
176
- style = GradualStyleBlock(512, 512, 64, 'max_pooling' in opts and opts.max_pooling)
177
- self.styles.append(style)
178
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
179
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
180
-
181
- # we concatenate pSp features in the middle layers and
182
- # add a convolution layer to map the concatenated features to the first-layer input feature f of G.
183
- self.featlayer = nn.Conv2d(768, 512, kernel_size=1, stride=1, padding=0) ##### modified
184
- self.skiplayer = nn.Conv2d(768, 3, kernel_size=1, stride=1, padding=0) ##### modified
185
-
186
- # skip connection
187
- if 'use_skip' in opts and opts.use_skip: ##### modified
188
- self.fusion = nn.ModuleList()
189
- channels = [[256,512], [256,512], [256,512], [256,512], [128,512], [64,256], [64,128]]
190
- # opts.skip_max_layer: how many layers are skipped to the decoder
191
- for inc, outc in channels[:max(1, min(7, opts.skip_max_layer))]: # from 4 to 256
192
- self.fusion.append(FusionLayer(inc, outc, opts.use_skip_torgb, opts.use_att))
193
-
194
- def _upsample_add(self, x, y):
195
- '''Upsample and add two feature maps.
196
- Args:
197
- x: (Variable) top feature map to be upsampled.
198
- y: (Variable) lateral feature map.
199
- Returns:
200
- (Variable) added feature map.
201
- Note in PyTorch, when input size is odd, the upsampled feature map
202
- with `F.upsample(..., scale_factor=2, mode='nearest')`
203
- maybe not equal to the lateral feature map size.
204
- e.g.
205
- original input size: [N,_,15,15] ->
206
- conv2d feature map size: [N,_,8,8] ->
207
- upsampled feature map size: [N,_,16,16]
208
- So we choose bilinear upsample which supports arbitrary output sizes.
209
- '''
210
- _, _, H, W = y.size()
211
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
212
-
213
- # return_feat: return f
214
- # return_full: return f and the skipped encoder features
215
- # return [out, feats]
216
- # out is the style latent code w+
217
- # feats[0] is f for the 1st conv layer, feats[1] is f for the 1st torgb layer
218
- # feats[2-8] is the skipped encoder features
219
- def forward(self, x, return_feat=False, return_full=False): ##### modified
220
- if x.shape[1] != 3:
221
- x = self.input_label_layer(x)
222
- else:
223
- x = self.input_layer(x)
224
- c256 = x ##### modified
225
-
226
- latents = []
227
- modulelist = list(self.body._modules.values())
228
- for i, l in enumerate(modulelist):
229
- x = l(x)
230
- if i == 2: ##### modified
231
- c128 = x
232
- elif i == 6:
233
- c1 = x
234
- elif i == 10: ##### modified
235
- c21 = x ##### modified
236
- elif i == 15: ##### modified
237
- c22 = x ##### modified
238
- elif i == 20:
239
- c2 = x
240
- elif i == 23:
241
- c3 = x
242
-
243
- for j in range(self.coarse_ind):
244
- latents.append(self.styles[j](c3))
245
-
246
- p2 = self._upsample_add(c3, self.latlayer1(c2))
247
- for j in range(self.coarse_ind, self.middle_ind):
248
- latents.append(self.styles[j](p2))
249
-
250
- p1 = self._upsample_add(p2, self.latlayer2(c1))
251
- for j in range(self.middle_ind, self.style_count):
252
- latents.append(self.styles[j](p1))
253
-
254
- out = torch.stack(latents, dim=1)
255
-
256
- if not return_feat:
257
- return out
258
-
259
- feats = [self.featlayer(torch.cat((c21, c22, c2), dim=1)), self.skiplayer(torch.cat((c21, c22, c2), dim=1))]
260
-
261
- if return_full: ##### modified
262
- feats += [c2, c2, c22, c21, c1, c128, c256]
263
-
264
- return out, feats
265
-
266
-
267
- # only compute the first-layer feature f
268
- # E_F in the paper
269
- def get_feat(self, x): ##### modified
270
- # for sketch/mask-to-face translation
271
- # use a trainable light-weight translation network T
272
- if x.shape[1] != 3:
273
- x = self.input_label_layer(x)
274
- else:
275
- x = self.input_layer(x)
276
-
277
- latents = []
278
- modulelist = list(self.body._modules.values())
279
- for i, l in enumerate(modulelist):
280
- x = l(x)
281
- if i == 10: ##### modified
282
- c21 = x ##### modified
283
- elif i == 15: ##### modified
284
- c22 = x ##### modified
285
- elif i == 20:
286
- c2 = x
287
- break
288
- return self.featlayer(torch.cat((c21, c22, c2), dim=1))
289
-
290
- class BackboneEncoderUsingLastLayerIntoW(Module):
291
- def __init__(self, num_layers, mode='ir', opts=None):
292
- super(BackboneEncoderUsingLastLayerIntoW, self).__init__()
293
- print('Using BackboneEncoderUsingLastLayerIntoW')
294
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
295
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
296
- blocks = get_blocks(num_layers)
297
- if mode == 'ir':
298
- unit_module = bottleneck_IR
299
- elif mode == 'ir_se':
300
- unit_module = bottleneck_IR_SE
301
- self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False),
302
- BatchNorm2d(64),
303
- PReLU(64))
304
- self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1))
305
- self.linear = EqualLinear(512, 512, lr_mul=1)
306
- modules = []
307
- for block in blocks:
308
- for bottleneck in block:
309
- modules.append(unit_module(bottleneck.in_channel,
310
- bottleneck.depth,
311
- bottleneck.stride))
312
- self.body = Sequential(*modules)
313
-
314
- def forward(self, x):
315
- x = self.input_layer(x)
316
- x = self.body(x)
317
- x = self.output_pool(x)
318
- x = x.view(-1, 512)
319
- x = self.linear(x)
320
- return x
321
-
322
-
323
- class BackboneEncoderUsingLastLayerIntoWPlus(Module):
324
- def __init__(self, num_layers, mode='ir', opts=None):
325
- super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__()
326
- print('Using BackboneEncoderUsingLastLayerIntoWPlus')
327
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
328
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
329
- blocks = get_blocks(num_layers)
330
- if mode == 'ir':
331
- unit_module = bottleneck_IR
332
- elif mode == 'ir_se':
333
- unit_module = bottleneck_IR_SE
334
- self.n_styles = opts.n_styles
335
- self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False),
336
- BatchNorm2d(64),
337
- PReLU(64))
338
- self.output_layer_2 = Sequential(BatchNorm2d(512),
339
- torch.nn.AdaptiveAvgPool2d((7, 7)),
340
- Flatten(),
341
- Linear(512 * 7 * 7, 512))
342
- self.linear = EqualLinear(512, 512 * self.n_styles, lr_mul=1)
343
- modules = []
344
- for block in blocks:
345
- for bottleneck in block:
346
- modules.append(unit_module(bottleneck.in_channel,
347
- bottleneck.depth,
348
- bottleneck.stride))
349
- self.body = Sequential(*modules)
350
-
351
- def forward(self, x):
352
- x = self.input_layer(x)
353
- x = self.body(x)
354
- x = self.output_layer_2(x)
355
- x = self.linear(x)
356
- x = x.view(-1, self.n_styles, 512)
357
- return x
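As a small usage sketch, `GradualStyleBlock` above maps a spatial feature map to a single 512-dim style vector; shapes are made up, and this assumes the module and its imports (e.g. `EqualLinear`) are available:

    import torch

    block = GradualStyleBlock(in_c=512, out_c=512, spatial=16)
    feat = torch.randn(2, 512, 16, 16)   # bs x in_c x spatial x spatial
    w = block(feat)                      # -> [2, 512], one latent per image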
 
spaces/AIFILMS/StyleGANEX/scripts/calc_losses_on_images.py DELETED
@@ -1,84 +0,0 @@
1
- from argparse import ArgumentParser
2
- import os
3
- import json
4
- import sys
5
- from tqdm import tqdm
6
- import numpy as np
7
- import torch
8
- from torch.utils.data import DataLoader
9
- import torchvision.transforms as transforms
10
-
11
- sys.path.append(".")
12
- sys.path.append("..")
13
-
14
- from criteria.lpips.lpips import LPIPS
15
- from datasets.gt_res_dataset import GTResDataset
16
-
17
-
18
- def parse_args():
19
- parser = ArgumentParser(add_help=False)
20
- parser.add_argument('--mode', type=str, default='lpips', choices=['lpips', 'l2'])
21
- parser.add_argument('--data_path', type=str, default='results')
22
- parser.add_argument('--gt_path', type=str, default='gt_images')
23
- parser.add_argument('--workers', type=int, default=4)
24
- parser.add_argument('--batch_size', type=int, default=4)
25
- args = parser.parse_args()
26
- return args
27
-
28
-
29
- def run(args):
30
-
31
- transform = transforms.Compose([transforms.Resize((256, 256)),
32
- transforms.ToTensor(),
33
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
34
-
35
- print('Loading dataset')
36
- dataset = GTResDataset(root_path=args.data_path,
37
- gt_dir=args.gt_path,
38
- transform=transform)
39
-
40
- dataloader = DataLoader(dataset,
41
- batch_size=args.batch_size,
42
- shuffle=False,
43
- num_workers=int(args.workers),
44
- drop_last=True)
45
-
46
- if args.mode == 'lpips':
47
- loss_func = LPIPS(net_type='alex')
48
- elif args.mode == 'l2':
49
- loss_func = torch.nn.MSELoss()
50
- else:
51
- raise Exception('Not a valid mode!')
52
- loss_func.cuda()
53
-
54
- global_i = 0
55
- scores_dict = {}
56
- all_scores = []
57
- for result_batch, gt_batch in tqdm(dataloader):
58
- for i in range(args.batch_size):
59
- loss = float(loss_func(result_batch[i:i+1].cuda(), gt_batch[i:i+1].cuda()))
60
- all_scores.append(loss)
61
- im_path = dataset.pairs[global_i][0]
62
- scores_dict[os.path.basename(im_path)] = loss
63
- global_i += 1
64
-
65
- all_scores = list(scores_dict.values())
66
- mean = np.mean(all_scores)
67
- std = np.std(all_scores)
68
- result_str = 'Average loss is {:.2f}+-{:.2f}'.format(mean, std)
69
- print('Finished with ', args.data_path)
70
- print(result_str)
71
-
72
- out_path = os.path.join(os.path.dirname(args.data_path), 'inference_metrics')
73
- if not os.path.exists(out_path):
74
- os.makedirs(out_path)
75
-
76
- with open(os.path.join(out_path, 'stat_{}.txt'.format(args.mode)), 'w') as f:
77
- f.write(result_str)
78
- with open(os.path.join(out_path, 'scores_{}.json'.format(args.mode)), 'w') as f:
79
- json.dump(scores_dict, f)
80
-
81
-
82
- if __name__ == '__main__':
83
- args = parse_args()
84
- run(args)
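The script above is driven by argparse, e.g. `python scripts/calc_losses_on_images.py --mode lpips --data_path results --gt_path gt_images` from the shell. An equivalent in-process sketch, assuming the data paths exist and a GPU is available (the loss module is moved to CUDA):

    import sys

    sys.argv = ["calc_losses_on_images.py", "--mode", "lpips",
                "--data_path", "results", "--gt_path", "gt_images"]
    args = parse_args()
    run(args)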
 
spaces/AIGC-Audio/Make_An_Audio/ldm/lr_scheduler.py DELETED
@@ -1,98 +0,0 @@
- import numpy as np
-
-
- class LambdaWarmUpCosineScheduler:
-     """
-     note: use with a base_lr of 1.0
-     """
-     def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0):
-         self.lr_warm_up_steps = warm_up_steps
-         self.lr_start = lr_start
-         self.lr_min = lr_min
-         self.lr_max = lr_max
-         self.lr_max_decay_steps = max_decay_steps
-         self.last_lr = 0.
-         self.verbosity_interval = verbosity_interval
-
-     def schedule(self, n, **kwargs):
-         if self.verbosity_interval > 0:
-             if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}")
-         if n < self.lr_warm_up_steps:
-             lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start
-             self.last_lr = lr
-             return lr
-         else:
-             t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps)
-             t = min(t, 1.0)
-             lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * (
-                     1 + np.cos(t * np.pi))
-             self.last_lr = lr
-             return lr
-
-     def __call__(self, n, **kwargs):
-         return self.schedule(n, **kwargs)
-
-
- class LambdaWarmUpCosineScheduler2:
-     """
-     supports repeated iterations, configurable via lists
-     note: use with a base_lr of 1.0.
-     """
-     def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0):
-         assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths)
-         self.lr_warm_up_steps = warm_up_steps
-         self.f_start = f_start
-         self.f_min = f_min
-         self.f_max = f_max
-         self.cycle_lengths = cycle_lengths
-         self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths))
-         self.last_f = 0.
-         self.verbosity_interval = verbosity_interval
-
-     def find_in_interval(self, n):
-         interval = 0
-         for cl in self.cum_cycles[1:]:
-             if n <= cl:
-                 return interval
-             interval += 1
-
-     def schedule(self, n, **kwargs):
-         cycle = self.find_in_interval(n)
-         n = n - self.cum_cycles[cycle]
-         if self.verbosity_interval > 0:
-             if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                        f"current cycle {cycle}")
-         if n < self.lr_warm_up_steps[cycle]:
-             f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-             self.last_f = f
-             return f
-         else:
-             t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
-             t = min(t, 1.0)
-             f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
-                     1 + np.cos(t * np.pi))
-             self.last_f = f
-             return f
-
-     def __call__(self, n, **kwargs):
-         return self.schedule(n, **kwargs)
-
-
- class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2):
-
-     def schedule(self, n, **kwargs):
-         cycle = self.find_in_interval(n)
-         n = n - self.cum_cycles[cycle]
-         if self.verbosity_interval > 0:
-             if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                        f"current cycle {cycle}")
-
-         if n < self.lr_warm_up_steps[cycle]:
-             f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-             self.last_f = f
-             return f
-         else:
-             f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
-             self.last_f = f
-             return f
-
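
For context, a minimal sketch of how these schedule classes are typically wired into PyTorch: each instance is a step-indexed LR multiplier, so (per the docstrings above) the optimizer's base learning rate should be 1.0 and the schedule's output becomes the effective LR. The model, optimizer, and hyperparameter values below are placeholders:

```python
# Sketch: driving torch.optim.lr_scheduler.LambdaLR with the deleted
# LambdaWarmUpCosineScheduler. Hyperparameters are illustrative only.
import torch

from ldm.lr_scheduler import LambdaWarmUpCosineScheduler  # the module deleted above

model = torch.nn.Linear(8, 8)  # placeholder module
optimizer = torch.optim.AdamW(model.parameters(), lr=1.0)  # base_lr of 1.0

schedule = LambdaWarmUpCosineScheduler(
    warm_up_steps=1000, lr_min=1e-6, lr_max=1e-4,
    lr_start=1e-6, max_decay_steps=100_000)
lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=schedule)

for step in range(3):     # stand-in for the real training loop
    optimizer.step()      # (loss computation / backward omitted)
    lr_scheduler.step()   # advances n, updating the LR multiplier
```
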
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py DELETED
@@ -1,2861 +0,0 @@
1
- default_scope = 'mmpose'
2
- default_hooks = dict(
3
- timer=dict(type='IterTimerHook'),
4
- logger=dict(type='LoggerHook', interval=50),
5
- param_scheduler=dict(type='ParamSchedulerHook'),
6
- checkpoint=dict(
7
- type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
8
- sampler_seed=dict(type='DistSamplerSeedHook'),
9
- visualization=dict(type='PoseVisualizationHook', enable=False))
10
- custom_hooks = [dict(type='SyncBuffersHook')]
11
- env_cfg = dict(
12
- cudnn_benchmark=False,
13
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
14
- dist_cfg=dict(backend='nccl'))
15
- vis_backends = [dict(type='LocalVisBackend')]
16
- visualizer = dict(
17
- type='PoseLocalVisualizer',
18
- vis_backends=[dict(type='LocalVisBackend'),
19
- dict(type='WandbVisBackend')],
20
- name='visualizer')
21
- log_processor = dict(
22
- type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
23
- log_level = 'INFO'
24
- load_from = None
25
- resume = False
26
- backend_args = dict(backend='local')
27
- train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10)
28
- val_cfg = dict()
29
- test_cfg = dict()
30
- colors = dict(
31
- sss=[255, 128, 0],
32
- lss=[255, 0, 128],
33
- sso=[128, 0, 255],
34
- lso=[0, 128, 255],
35
- vest=[0, 128, 128],
36
- sling=[0, 0, 128],
37
- shorts=[128, 128, 128],
38
- trousers=[128, 0, 128],
39
- skirt=[64, 128, 128],
40
- ssd=[64, 64, 128],
41
- lsd=[128, 64, 0],
42
- vd=[128, 64, 255],
43
- sd=[128, 64, 0])
44
- dataset_info = dict(
45
- dataset_name='deepfashion2',
46
- paper_info=dict(
47
- author=
48
- 'Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
49
- title=
50
- 'DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
51
- container=
52
- 'Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
53
- year='2019',
54
- homepage='https://github.com/switchablenorms/DeepFashion2'),
55
- keypoint_info=dict({
56
- 0:
57
- dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
58
- 1:
59
- dict(
60
- name='sss_kpt2',
61
- id=1,
62
- color=[255, 128, 0],
63
- type='',
64
- swap='sss_kpt6'),
65
- 2:
66
- dict(
67
- name='sss_kpt3',
68
- id=2,
69
- color=[255, 128, 0],
70
- type='',
71
- swap='sss_kpt5'),
72
- 3:
73
- dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
74
- 4:
75
- dict(
76
- name='sss_kpt5',
77
- id=4,
78
- color=[255, 128, 0],
79
- type='',
80
- swap='sss_kpt3'),
81
- 5:
82
- dict(
83
- name='sss_kpt6',
84
- id=5,
85
- color=[255, 128, 0],
86
- type='',
87
- swap='sss_kpt2'),
88
- 6:
89
- dict(
90
- name='sss_kpt7',
91
- id=6,
92
- color=[255, 128, 0],
93
- type='',
94
- swap='sss_kpt25'),
95
- 7:
96
- dict(
97
- name='sss_kpt8',
98
- id=7,
99
- color=[255, 128, 0],
100
- type='',
101
- swap='sss_kpt24'),
102
- 8:
103
- dict(
104
- name='sss_kpt9',
105
- id=8,
106
- color=[255, 128, 0],
107
- type='',
108
- swap='sss_kpt23'),
109
- 9:
110
- dict(
111
- name='sss_kpt10',
112
- id=9,
113
- color=[255, 128, 0],
114
- type='',
115
- swap='sss_kpt22'),
116
- 10:
117
- dict(
118
- name='sss_kpt11',
119
- id=10,
120
- color=[255, 128, 0],
121
- type='',
122
- swap='sss_kpt21'),
123
- 11:
124
- dict(
125
- name='sss_kpt12',
126
- id=11,
127
- color=[255, 128, 0],
128
- type='',
129
- swap='sss_kpt20'),
130
- 12:
131
- dict(
132
- name='sss_kpt13',
133
- id=12,
134
- color=[255, 128, 0],
135
- type='',
136
- swap='sss_kpt19'),
137
- 13:
138
- dict(
139
- name='sss_kpt14',
140
- id=13,
141
- color=[255, 128, 0],
142
- type='',
143
- swap='sss_kpt18'),
144
- 14:
145
- dict(
146
- name='sss_kpt15',
147
- id=14,
148
- color=[255, 128, 0],
149
- type='',
150
- swap='sss_kpt17'),
151
- 15:
152
- dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
153
- 16:
154
- dict(
155
- name='sss_kpt17',
156
- id=16,
157
- color=[255, 128, 0],
158
- type='',
159
- swap='sss_kpt15'),
160
- 17:
161
- dict(
162
- name='sss_kpt18',
163
- id=17,
164
- color=[255, 128, 0],
165
- type='',
166
- swap='sss_kpt14'),
167
- 18:
168
- dict(
169
- name='sss_kpt19',
170
- id=18,
171
- color=[255, 128, 0],
172
- type='',
173
- swap='sss_kpt13'),
174
- 19:
175
- dict(
176
- name='sss_kpt20',
177
- id=19,
178
- color=[255, 128, 0],
179
- type='',
180
- swap='sss_kpt12'),
181
- 20:
182
- dict(
183
- name='sss_kpt21',
184
- id=20,
185
- color=[255, 128, 0],
186
- type='',
187
- swap='sss_kpt11'),
188
- 21:
189
- dict(
190
- name='sss_kpt22',
191
- id=21,
192
- color=[255, 128, 0],
193
- type='',
194
- swap='sss_kpt10'),
195
- 22:
196
- dict(
197
- name='sss_kpt23',
198
- id=22,
199
- color=[255, 128, 0],
200
- type='',
201
- swap='sss_kpt9'),
202
- 23:
203
- dict(
204
- name='sss_kpt24',
205
- id=23,
206
- color=[255, 128, 0],
207
- type='',
208
- swap='sss_kpt8'),
209
- 24:
210
- dict(
211
- name='sss_kpt25',
212
- id=24,
213
- color=[255, 128, 0],
214
- type='',
215
- swap='sss_kpt7'),
216
- 25:
217
- dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
218
- 26:
219
- dict(
220
- name='lss_kpt2',
221
- id=26,
222
- color=[255, 0, 128],
223
- type='',
224
- swap='lss_kpt6'),
225
- 27:
226
- dict(
227
- name='lss_kpt3',
228
- id=27,
229
- color=[255, 0, 128],
230
- type='',
231
- swap='lss_kpt5'),
232
- 28:
233
- dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
234
- 29:
235
- dict(
236
- name='lss_kpt5',
237
- id=29,
238
- color=[255, 0, 128],
239
- type='',
240
- swap='lss_kpt3'),
241
- 30:
242
- dict(
243
- name='lss_kpt6',
244
- id=30,
245
- color=[255, 0, 128],
246
- type='',
247
- swap='lss_kpt2'),
248
- 31:
249
- dict(
250
- name='lss_kpt7',
251
- id=31,
252
- color=[255, 0, 128],
253
- type='',
254
- swap='lss_kpt33'),
255
- 32:
256
- dict(
257
- name='lss_kpt8',
258
- id=32,
259
- color=[255, 0, 128],
260
- type='',
261
- swap='lss_kpt32'),
262
- 33:
263
- dict(
264
- name='lss_kpt9',
265
- id=33,
266
- color=[255, 0, 128],
267
- type='',
268
- swap='lss_kpt31'),
269
- 34:
270
- dict(
271
- name='lss_kpt10',
272
- id=34,
273
- color=[255, 0, 128],
274
- type='',
275
- swap='lss_kpt30'),
276
- 35:
277
- dict(
278
- name='lss_kpt11',
279
- id=35,
280
- color=[255, 0, 128],
281
- type='',
282
- swap='lss_kpt29'),
283
- 36:
284
- dict(
285
- name='lss_kpt12',
286
- id=36,
287
- color=[255, 0, 128],
288
- type='',
289
- swap='lss_kpt28'),
290
- 37:
291
- dict(
292
- name='lss_kpt13',
293
- id=37,
294
- color=[255, 0, 128],
295
- type='',
296
- swap='lss_kpt27'),
297
- 38:
298
- dict(
299
- name='lss_kpt14',
300
- id=38,
301
- color=[255, 0, 128],
302
- type='',
303
- swap='lss_kpt26'),
304
- 39:
305
- dict(
306
- name='lss_kpt15',
307
- id=39,
308
- color=[255, 0, 128],
309
- type='',
310
- swap='lss_kpt25'),
311
- 40:
312
- dict(
313
- name='lss_kpt16',
314
- id=40,
315
- color=[255, 0, 128],
316
- type='',
317
- swap='lss_kpt24'),
318
- 41:
319
- dict(
320
- name='lss_kpt17',
321
- id=41,
322
- color=[255, 0, 128],
323
- type='',
324
- swap='lss_kpt23'),
325
- 42:
326
- dict(
327
- name='lss_kpt18',
328
- id=42,
329
- color=[255, 0, 128],
330
- type='',
331
- swap='lss_kpt22'),
332
- 43:
333
- dict(
334
- name='lss_kpt19',
335
- id=43,
336
- color=[255, 0, 128],
337
- type='',
338
- swap='lss_kpt21'),
339
- 44:
340
- dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
341
- 45:
342
- dict(
343
- name='lss_kpt21',
344
- id=45,
345
- color=[255, 0, 128],
346
- type='',
347
- swap='lss_kpt19'),
348
- 46:
349
- dict(
350
- name='lss_kpt22',
351
- id=46,
352
- color=[255, 0, 128],
353
- type='',
354
- swap='lss_kpt18'),
355
- 47:
356
- dict(
357
- name='lss_kpt23',
358
- id=47,
359
- color=[255, 0, 128],
360
- type='',
361
- swap='lss_kpt17'),
362
- 48:
363
- dict(
364
- name='lss_kpt24',
365
- id=48,
366
- color=[255, 0, 128],
367
- type='',
368
- swap='lss_kpt16'),
369
- 49:
370
- dict(
371
- name='lss_kpt25',
372
- id=49,
373
- color=[255, 0, 128],
374
- type='',
375
- swap='lss_kpt15'),
376
- 50:
377
- dict(
378
- name='lss_kpt26',
379
- id=50,
380
- color=[255, 0, 128],
381
- type='',
382
- swap='lss_kpt14'),
383
- 51:
384
- dict(
385
- name='lss_kpt27',
386
- id=51,
387
- color=[255, 0, 128],
388
- type='',
389
- swap='lss_kpt13'),
390
- 52:
391
- dict(
392
- name='lss_kpt28',
393
- id=52,
394
- color=[255, 0, 128],
395
- type='',
396
- swap='lss_kpt12'),
397
- 53:
398
- dict(
399
- name='lss_kpt29',
400
- id=53,
401
- color=[255, 0, 128],
402
- type='',
403
- swap='lss_kpt11'),
404
- 54:
405
- dict(
406
- name='lss_kpt30',
407
- id=54,
408
- color=[255, 0, 128],
409
- type='',
410
- swap='lss_kpt10'),
411
- 55:
412
- dict(
413
- name='lss_kpt31',
414
- id=55,
415
- color=[255, 0, 128],
416
- type='',
417
- swap='lss_kpt9'),
418
- 56:
419
- dict(
420
- name='lss_kpt32',
421
- id=56,
422
- color=[255, 0, 128],
423
- type='',
424
- swap='lss_kpt8'),
425
- 57:
426
- dict(
427
- name='lss_kpt33',
428
- id=57,
429
- color=[255, 0, 128],
430
- type='',
431
- swap='lss_kpt7'),
432
- 58:
433
- dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
434
- 59:
435
- dict(
436
- name='sso_kpt2',
437
- id=59,
438
- color=[128, 0, 255],
439
- type='',
440
- swap='sso_kpt26'),
441
- 60:
442
- dict(
443
- name='sso_kpt3',
444
- id=60,
445
- color=[128, 0, 255],
446
- type='',
447
- swap='sso_kpt5'),
448
- 61:
449
- dict(
450
- name='sso_kpt4',
451
- id=61,
452
- color=[128, 0, 255],
453
- type='',
454
- swap='sso_kpt6'),
455
- 62:
456
- dict(
457
- name='sso_kpt5',
458
- id=62,
459
- color=[128, 0, 255],
460
- type='',
461
- swap='sso_kpt3'),
462
- 63:
463
- dict(
464
- name='sso_kpt6',
465
- id=63,
466
- color=[128, 0, 255],
467
- type='',
468
- swap='sso_kpt4'),
469
- 64:
470
- dict(
471
- name='sso_kpt7',
472
- id=64,
473
- color=[128, 0, 255],
474
- type='',
475
- swap='sso_kpt25'),
476
- 65:
477
- dict(
478
- name='sso_kpt8',
479
- id=65,
480
- color=[128, 0, 255],
481
- type='',
482
- swap='sso_kpt24'),
483
- 66:
484
- dict(
485
- name='sso_kpt9',
486
- id=66,
487
- color=[128, 0, 255],
488
- type='',
489
- swap='sso_kpt23'),
490
- 67:
491
- dict(
492
- name='sso_kpt10',
493
- id=67,
494
- color=[128, 0, 255],
495
- type='',
496
- swap='sso_kpt22'),
497
- 68:
498
- dict(
499
- name='sso_kpt11',
500
- id=68,
501
- color=[128, 0, 255],
502
- type='',
503
- swap='sso_kpt21'),
504
- 69:
505
- dict(
506
- name='sso_kpt12',
507
- id=69,
508
- color=[128, 0, 255],
509
- type='',
510
- swap='sso_kpt20'),
511
- 70:
512
- dict(
513
- name='sso_kpt13',
514
- id=70,
515
- color=[128, 0, 255],
516
- type='',
517
- swap='sso_kpt19'),
518
- 71:
519
- dict(
520
- name='sso_kpt14',
521
- id=71,
522
- color=[128, 0, 255],
523
- type='',
524
- swap='sso_kpt18'),
525
- 72:
526
- dict(
527
- name='sso_kpt15',
528
- id=72,
529
- color=[128, 0, 255],
530
- type='',
531
- swap='sso_kpt17'),
532
- 73:
533
- dict(
534
- name='sso_kpt16',
535
- id=73,
536
- color=[128, 0, 255],
537
- type='',
538
- swap='sso_kpt29'),
539
- 74:
540
- dict(
541
- name='sso_kpt17',
542
- id=74,
543
- color=[128, 0, 255],
544
- type='',
545
- swap='sso_kpt15'),
546
- 75:
547
- dict(
548
- name='sso_kpt18',
549
- id=75,
550
- color=[128, 0, 255],
551
- type='',
552
- swap='sso_kpt14'),
553
- 76:
554
- dict(
555
- name='sso_kpt19',
556
- id=76,
557
- color=[128, 0, 255],
558
- type='',
559
- swap='sso_kpt13'),
560
- 77:
561
- dict(
562
- name='sso_kpt20',
563
- id=77,
564
- color=[128, 0, 255],
565
- type='',
566
- swap='sso_kpt12'),
567
- 78:
568
- dict(
569
- name='sso_kpt21',
570
- id=78,
571
- color=[128, 0, 255],
572
- type='',
573
- swap='sso_kpt11'),
574
- 79:
575
- dict(
576
- name='sso_kpt22',
577
- id=79,
578
- color=[128, 0, 255],
579
- type='',
580
- swap='sso_kpt10'),
581
- 80:
582
- dict(
583
- name='sso_kpt23',
584
- id=80,
585
- color=[128, 0, 255],
586
- type='',
587
- swap='sso_kpt9'),
588
- 81:
589
- dict(
590
- name='sso_kpt24',
591
- id=81,
592
- color=[128, 0, 255],
593
- type='',
594
- swap='sso_kpt8'),
595
- 82:
596
- dict(
597
- name='sso_kpt25',
598
- id=82,
599
- color=[128, 0, 255],
600
- type='',
601
- swap='sso_kpt7'),
602
- 83:
603
- dict(
604
- name='sso_kpt26',
605
- id=83,
606
- color=[128, 0, 255],
607
- type='',
608
- swap='sso_kpt2'),
609
- 84:
610
- dict(
611
- name='sso_kpt27',
612
- id=84,
613
- color=[128, 0, 255],
614
- type='',
615
- swap='sso_kpt30'),
616
- 85:
617
- dict(
618
- name='sso_kpt28',
619
- id=85,
620
- color=[128, 0, 255],
621
- type='',
622
- swap='sso_kpt31'),
623
- 86:
624
- dict(
625
- name='sso_kpt29',
626
- id=86,
627
- color=[128, 0, 255],
628
- type='',
629
- swap='sso_kpt16'),
630
- 87:
631
- dict(
632
- name='sso_kpt30',
633
- id=87,
634
- color=[128, 0, 255],
635
- type='',
636
- swap='sso_kpt27'),
637
- 88:
638
- dict(
639
- name='sso_kpt31',
640
- id=88,
641
- color=[128, 0, 255],
642
- type='',
643
- swap='sso_kpt28'),
644
- 89:
645
- dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
646
- 90:
647
- dict(
648
- name='lso_kpt2',
649
- id=90,
650
- color=[0, 128, 255],
651
- type='',
652
- swap='lso_kpt6'),
653
- 91:
654
- dict(
655
- name='lso_kpt3',
656
- id=91,
657
- color=[0, 128, 255],
658
- type='',
659
- swap='lso_kpt5'),
660
- 92:
661
- dict(
662
- name='lso_kpt4',
663
- id=92,
664
- color=[0, 128, 255],
665
- type='',
666
- swap='lso_kpt34'),
667
- 93:
668
- dict(
669
- name='lso_kpt5',
670
- id=93,
671
- color=[0, 128, 255],
672
- type='',
673
- swap='lso_kpt3'),
674
- 94:
675
- dict(
676
- name='lso_kpt6',
677
- id=94,
678
- color=[0, 128, 255],
679
- type='',
680
- swap='lso_kpt2'),
681
- 95:
682
- dict(
683
- name='lso_kpt7',
684
- id=95,
685
- color=[0, 128, 255],
686
- type='',
687
- swap='lso_kpt33'),
688
- 96:
689
- dict(
690
- name='lso_kpt8',
691
- id=96,
692
- color=[0, 128, 255],
693
- type='',
694
- swap='lso_kpt32'),
695
- 97:
696
- dict(
697
- name='lso_kpt9',
698
- id=97,
699
- color=[0, 128, 255],
700
- type='',
701
- swap='lso_kpt31'),
702
- 98:
703
- dict(
704
- name='lso_kpt10',
705
- id=98,
706
- color=[0, 128, 255],
707
- type='',
708
- swap='lso_kpt30'),
709
- 99:
710
- dict(
711
- name='lso_kpt11',
712
- id=99,
713
- color=[0, 128, 255],
714
- type='',
715
- swap='lso_kpt29'),
716
- 100:
717
- dict(
718
- name='lso_kpt12',
719
- id=100,
720
- color=[0, 128, 255],
721
- type='',
722
- swap='lso_kpt28'),
723
- 101:
724
- dict(
725
- name='lso_kpt13',
726
- id=101,
727
- color=[0, 128, 255],
728
- type='',
729
- swap='lso_kpt27'),
730
- 102:
731
- dict(
732
- name='lso_kpt14',
733
- id=102,
734
- color=[0, 128, 255],
735
- type='',
736
- swap='lso_kpt26'),
737
- 103:
738
- dict(
739
- name='lso_kpt15',
740
- id=103,
741
- color=[0, 128, 255],
742
- type='',
743
- swap='lso_kpt25'),
744
- 104:
745
- dict(
746
- name='lso_kpt16',
747
- id=104,
748
- color=[0, 128, 255],
749
- type='',
750
- swap='lso_kpt24'),
751
- 105:
752
- dict(
753
- name='lso_kpt17',
754
- id=105,
755
- color=[0, 128, 255],
756
- type='',
757
- swap='lso_kpt23'),
758
- 106:
759
- dict(
760
- name='lso_kpt18',
761
- id=106,
762
- color=[0, 128, 255],
763
- type='',
764
- swap='lso_kpt22'),
765
- 107:
766
- dict(
767
- name='lso_kpt19',
768
- id=107,
769
- color=[0, 128, 255],
770
- type='',
771
- swap='lso_kpt21'),
772
- 108:
773
- dict(
774
- name='lso_kpt20',
775
- id=108,
776
- color=[0, 128, 255],
777
- type='',
778
- swap='lso_kpt37'),
779
- 109:
780
- dict(
781
- name='lso_kpt21',
782
- id=109,
783
- color=[0, 128, 255],
784
- type='',
785
- swap='lso_kpt19'),
786
- 110:
787
- dict(
788
- name='lso_kpt22',
789
- id=110,
790
- color=[0, 128, 255],
791
- type='',
792
- swap='lso_kpt18'),
793
- 111:
794
- dict(
795
- name='lso_kpt23',
796
- id=111,
797
- color=[0, 128, 255],
798
- type='',
799
- swap='lso_kpt17'),
800
- 112:
801
- dict(
802
- name='lso_kpt24',
803
- id=112,
804
- color=[0, 128, 255],
805
- type='',
806
- swap='lso_kpt16'),
807
- 113:
808
- dict(
809
- name='lso_kpt25',
810
- id=113,
811
- color=[0, 128, 255],
812
- type='',
813
- swap='lso_kpt15'),
814
- 114:
815
- dict(
816
- name='lso_kpt26',
817
- id=114,
818
- color=[0, 128, 255],
819
- type='',
820
- swap='lso_kpt14'),
821
- 115:
822
- dict(
823
- name='lso_kpt27',
824
- id=115,
825
- color=[0, 128, 255],
826
- type='',
827
- swap='lso_kpt13'),
828
- 116:
829
- dict(
830
- name='lso_kpt28',
831
- id=116,
832
- color=[0, 128, 255],
833
- type='',
834
- swap='lso_kpt12'),
835
- 117:
836
- dict(
837
- name='lso_kpt29',
838
- id=117,
839
- color=[0, 128, 255],
840
- type='',
841
- swap='lso_kpt11'),
842
- 118:
843
- dict(
844
- name='lso_kpt30',
845
- id=118,
846
- color=[0, 128, 255],
847
- type='',
848
- swap='lso_kpt10'),
849
- 119:
850
- dict(
851
- name='lso_kpt31',
852
- id=119,
853
- color=[0, 128, 255],
854
- type='',
855
- swap='lso_kpt9'),
856
- 120:
857
- dict(
858
- name='lso_kpt32',
859
- id=120,
860
- color=[0, 128, 255],
861
- type='',
862
- swap='lso_kpt8'),
863
- 121:
864
- dict(
865
- name='lso_kpt33',
866
- id=121,
867
- color=[0, 128, 255],
868
- type='',
869
- swap='lso_kpt7'),
870
- 122:
871
- dict(
872
- name='lso_kpt34',
873
- id=122,
874
- color=[0, 128, 255],
875
- type='',
876
- swap='lso_kpt4'),
877
- 123:
878
- dict(
879
- name='lso_kpt35',
880
- id=123,
881
- color=[0, 128, 255],
882
- type='',
883
- swap='lso_kpt38'),
884
- 124:
885
- dict(
886
- name='lso_kpt36',
887
- id=124,
888
- color=[0, 128, 255],
889
- type='',
890
- swap='lso_kpt39'),
891
- 125:
892
- dict(
893
- name='lso_kpt37',
894
- id=125,
895
- color=[0, 128, 255],
896
- type='',
897
- swap='lso_kpt20'),
898
- 126:
899
- dict(
900
- name='lso_kpt38',
901
- id=126,
902
- color=[0, 128, 255],
903
- type='',
904
- swap='lso_kpt35'),
905
- 127:
906
- dict(
907
- name='lso_kpt39',
908
- id=127,
909
- color=[0, 128, 255],
910
- type='',
911
- swap='lso_kpt36'),
912
- 128:
913
- dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
914
- 129:
915
- dict(
916
- name='vest_kpt2',
917
- id=129,
918
- color=[0, 128, 128],
919
- type='',
920
- swap='vest_kpt6'),
921
- 130:
922
- dict(
923
- name='vest_kpt3',
924
- id=130,
925
- color=[0, 128, 128],
926
- type='',
927
- swap='vest_kpt5'),
928
- 131:
929
- dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
930
- 132:
931
- dict(
932
- name='vest_kpt5',
933
- id=132,
934
- color=[0, 128, 128],
935
- type='',
936
- swap='vest_kpt3'),
937
- 133:
938
- dict(
939
- name='vest_kpt6',
940
- id=133,
941
- color=[0, 128, 128],
942
- type='',
943
- swap='vest_kpt2'),
944
- 134:
945
- dict(
946
- name='vest_kpt7',
947
- id=134,
948
- color=[0, 128, 128],
949
- type='',
950
- swap='vest_kpt15'),
951
- 135:
952
- dict(
953
- name='vest_kpt8',
954
- id=135,
955
- color=[0, 128, 128],
956
- type='',
957
- swap='vest_kpt14'),
958
- 136:
959
- dict(
960
- name='vest_kpt9',
961
- id=136,
962
- color=[0, 128, 128],
963
- type='',
964
- swap='vest_kpt13'),
965
- 137:
966
- dict(
967
- name='vest_kpt10',
968
- id=137,
969
- color=[0, 128, 128],
970
- type='',
971
- swap='vest_kpt12'),
972
- 138:
973
- dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
974
- 139:
975
- dict(
976
- name='vest_kpt12',
977
- id=139,
978
- color=[0, 128, 128],
979
- type='',
980
- swap='vest_kpt10'),
981
- 140:
982
- dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
983
- 141:
984
- dict(
985
- name='vest_kpt14',
986
- id=141,
987
- color=[0, 128, 128],
988
- type='',
989
- swap='vest_kpt8'),
990
- 142:
991
- dict(
992
- name='vest_kpt15',
993
- id=142,
994
- color=[0, 128, 128],
995
- type='',
996
- swap='vest_kpt7'),
997
- 143:
998
- dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
999
- 144:
1000
- dict(
1001
- name='sling_kpt2',
1002
- id=144,
1003
- color=[0, 0, 128],
1004
- type='',
1005
- swap='sling_kpt6'),
1006
- 145:
1007
- dict(
1008
- name='sling_kpt3',
1009
- id=145,
1010
- color=[0, 0, 128],
1011
- type='',
1012
- swap='sling_kpt5'),
1013
- 146:
1014
- dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
1015
- 147:
1016
- dict(
1017
- name='sling_kpt5',
1018
- id=147,
1019
- color=[0, 0, 128],
1020
- type='',
1021
- swap='sling_kpt3'),
1022
- 148:
1023
- dict(
1024
- name='sling_kpt6',
1025
- id=148,
1026
- color=[0, 0, 128],
1027
- type='',
1028
- swap='sling_kpt2'),
1029
- 149:
1030
- dict(
1031
- name='sling_kpt7',
1032
- id=149,
1033
- color=[0, 0, 128],
1034
- type='',
1035
- swap='sling_kpt15'),
1036
- 150:
1037
- dict(
1038
- name='sling_kpt8',
1039
- id=150,
1040
- color=[0, 0, 128],
1041
- type='',
1042
- swap='sling_kpt14'),
1043
- 151:
1044
- dict(
1045
- name='sling_kpt9',
1046
- id=151,
1047
- color=[0, 0, 128],
1048
- type='',
1049
- swap='sling_kpt13'),
1050
- 152:
1051
- dict(
1052
- name='sling_kpt10',
1053
- id=152,
1054
- color=[0, 0, 128],
1055
- type='',
1056
- swap='sling_kpt12'),
1057
- 153:
1058
- dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
1059
- 154:
1060
- dict(
1061
- name='sling_kpt12',
1062
- id=154,
1063
- color=[0, 0, 128],
1064
- type='',
1065
- swap='sling_kpt10'),
1066
- 155:
1067
- dict(
1068
- name='sling_kpt13',
1069
- id=155,
1070
- color=[0, 0, 128],
1071
- type='',
1072
- swap='sling_kpt9'),
1073
- 156:
1074
- dict(
1075
- name='sling_kpt14',
1076
- id=156,
1077
- color=[0, 0, 128],
1078
- type='',
1079
- swap='sling_kpt8'),
1080
- 157:
1081
- dict(
1082
- name='sling_kpt15',
1083
- id=157,
1084
- color=[0, 0, 128],
1085
- type='',
1086
- swap='sling_kpt7'),
1087
- 158:
1088
- dict(
1089
- name='shorts_kpt1',
1090
- id=158,
1091
- color=[128, 128, 128],
1092
- type='',
1093
- swap='shorts_kpt3'),
1094
- 159:
1095
- dict(
1096
- name='shorts_kpt2',
1097
- id=159,
1098
- color=[128, 128, 128],
1099
- type='',
1100
- swap=''),
1101
- 160:
1102
- dict(
1103
- name='shorts_kpt3',
1104
- id=160,
1105
- color=[128, 128, 128],
1106
- type='',
1107
- swap='shorts_kpt1'),
1108
- 161:
1109
- dict(
1110
- name='shorts_kpt4',
1111
- id=161,
1112
- color=[128, 128, 128],
1113
- type='',
1114
- swap='shorts_kpt10'),
1115
- 162:
1116
- dict(
1117
- name='shorts_kpt5',
1118
- id=162,
1119
- color=[128, 128, 128],
1120
- type='',
1121
- swap='shorts_kpt9'),
1122
- 163:
1123
- dict(
1124
- name='shorts_kpt6',
1125
- id=163,
1126
- color=[128, 128, 128],
1127
- type='',
1128
- swap='shorts_kpt8'),
1129
- 164:
1130
- dict(
1131
- name='shorts_kpt7',
1132
- id=164,
1133
- color=[128, 128, 128],
1134
- type='',
1135
- swap=''),
1136
- 165:
1137
- dict(
1138
- name='shorts_kpt8',
1139
- id=165,
1140
- color=[128, 128, 128],
1141
- type='',
1142
- swap='shorts_kpt6'),
1143
- 166:
1144
- dict(
1145
- name='shorts_kpt9',
1146
- id=166,
1147
- color=[128, 128, 128],
1148
- type='',
1149
- swap='shorts_kpt5'),
1150
- 167:
1151
- dict(
1152
- name='shorts_kpt10',
1153
- id=167,
1154
- color=[128, 128, 128],
1155
- type='',
1156
- swap='shorts_kpt4'),
1157
- 168:
1158
- dict(
1159
- name='trousers_kpt1',
1160
- id=168,
1161
- color=[128, 0, 128],
1162
- type='',
1163
- swap='trousers_kpt3'),
1164
- 169:
1165
- dict(
1166
- name='trousers_kpt2',
1167
- id=169,
1168
- color=[128, 0, 128],
1169
- type='',
1170
- swap=''),
1171
- 170:
1172
- dict(
1173
- name='trousers_kpt3',
1174
- id=170,
1175
- color=[128, 0, 128],
1176
- type='',
1177
- swap='trousers_kpt1'),
1178
- 171:
1179
- dict(
1180
- name='trousers_kpt4',
1181
- id=171,
1182
- color=[128, 0, 128],
1183
- type='',
1184
- swap='trousers_kpt14'),
1185
- 172:
1186
- dict(
1187
- name='trousers_kpt5',
1188
- id=172,
1189
- color=[128, 0, 128],
1190
- type='',
1191
- swap='trousers_kpt13'),
1192
- 173:
1193
- dict(
1194
- name='trousers_kpt6',
1195
- id=173,
1196
- color=[128, 0, 128],
1197
- type='',
1198
- swap='trousers_kpt12'),
1199
- 174:
1200
- dict(
1201
- name='trousers_kpt7',
1202
- id=174,
1203
- color=[128, 0, 128],
1204
- type='',
1205
- swap='trousers_kpt11'),
1206
- 175:
1207
- dict(
1208
- name='trousers_kpt8',
1209
- id=175,
1210
- color=[128, 0, 128],
1211
- type='',
1212
- swap='trousers_kpt10'),
1213
- 176:
1214
- dict(
1215
- name='trousers_kpt9',
1216
- id=176,
1217
- color=[128, 0, 128],
1218
- type='',
1219
- swap=''),
1220
- 177:
1221
- dict(
1222
- name='trousers_kpt10',
1223
- id=177,
1224
- color=[128, 0, 128],
1225
- type='',
1226
- swap='trousers_kpt8'),
1227
- 178:
1228
- dict(
1229
- name='trousers_kpt11',
1230
- id=178,
1231
- color=[128, 0, 128],
1232
- type='',
1233
- swap='trousers_kpt7'),
1234
- 179:
1235
- dict(
1236
- name='trousers_kpt12',
1237
- id=179,
1238
- color=[128, 0, 128],
1239
- type='',
1240
- swap='trousers_kpt6'),
1241
- 180:
1242
- dict(
1243
- name='trousers_kpt13',
1244
- id=180,
1245
- color=[128, 0, 128],
1246
- type='',
1247
- swap='trousers_kpt5'),
1248
- 181:
1249
- dict(
1250
- name='trousers_kpt14',
1251
- id=181,
1252
- color=[128, 0, 128],
1253
- type='',
1254
- swap='trousers_kpt4'),
1255
- 182:
1256
- dict(
1257
- name='skirt_kpt1',
1258
- id=182,
1259
- color=[64, 128, 128],
1260
- type='',
1261
- swap='skirt_kpt3'),
1262
- 183:
1263
- dict(
1264
- name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
1265
- 184:
1266
- dict(
1267
- name='skirt_kpt3',
1268
- id=184,
1269
- color=[64, 128, 128],
1270
- type='',
1271
- swap='skirt_kpt1'),
1272
- 185:
1273
- dict(
1274
- name='skirt_kpt4',
1275
- id=185,
1276
- color=[64, 128, 128],
1277
- type='',
1278
- swap='skirt_kpt8'),
1279
- 186:
1280
- dict(
1281
- name='skirt_kpt5',
1282
- id=186,
1283
- color=[64, 128, 128],
1284
- type='',
1285
- swap='skirt_kpt7'),
1286
- 187:
1287
- dict(
1288
- name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
1289
- 188:
1290
- dict(
1291
- name='skirt_kpt7',
1292
- id=188,
1293
- color=[64, 128, 128],
1294
- type='',
1295
- swap='skirt_kpt5'),
1296
- 189:
1297
- dict(
1298
- name='skirt_kpt8',
1299
- id=189,
1300
- color=[64, 128, 128],
1301
- type='',
1302
- swap='skirt_kpt4'),
1303
- 190:
1304
- dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
1305
- 191:
1306
- dict(
1307
- name='ssd_kpt2',
1308
- id=191,
1309
- color=[64, 64, 128],
1310
- type='',
1311
- swap='ssd_kpt6'),
1312
- 192:
1313
- dict(
1314
- name='ssd_kpt3',
1315
- id=192,
1316
- color=[64, 64, 128],
1317
- type='',
1318
- swap='ssd_kpt5'),
1319
- 193:
1320
- dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
1321
- 194:
1322
- dict(
1323
- name='ssd_kpt5',
1324
- id=194,
1325
- color=[64, 64, 128],
1326
- type='',
1327
- swap='ssd_kpt3'),
1328
- 195:
1329
- dict(
1330
- name='ssd_kpt6',
1331
- id=195,
1332
- color=[64, 64, 128],
1333
- type='',
1334
- swap='ssd_kpt2'),
1335
- 196:
1336
- dict(
1337
- name='ssd_kpt7',
1338
- id=196,
1339
- color=[64, 64, 128],
1340
- type='',
1341
- swap='ssd_kpt29'),
1342
- 197:
1343
- dict(
1344
- name='ssd_kpt8',
1345
- id=197,
1346
- color=[64, 64, 128],
1347
- type='',
1348
- swap='ssd_kpt28'),
1349
- 198:
1350
- dict(
1351
- name='ssd_kpt9',
1352
- id=198,
1353
- color=[64, 64, 128],
1354
- type='',
1355
- swap='ssd_kpt27'),
1356
- 199:
1357
- dict(
1358
- name='ssd_kpt10',
1359
- id=199,
1360
- color=[64, 64, 128],
1361
- type='',
1362
- swap='ssd_kpt26'),
1363
- 200:
1364
- dict(
1365
- name='ssd_kpt11',
1366
- id=200,
1367
- color=[64, 64, 128],
1368
- type='',
1369
- swap='ssd_kpt25'),
1370
- 201:
1371
- dict(
1372
- name='ssd_kpt12',
1373
- id=201,
1374
- color=[64, 64, 128],
1375
- type='',
1376
- swap='ssd_kpt24'),
1377
- 202:
1378
- dict(
1379
- name='ssd_kpt13',
1380
- id=202,
1381
- color=[64, 64, 128],
1382
- type='',
1383
- swap='ssd_kpt23'),
1384
- 203:
1385
- dict(
1386
- name='ssd_kpt14',
1387
- id=203,
1388
- color=[64, 64, 128],
1389
- type='',
1390
- swap='ssd_kpt22'),
1391
- 204:
1392
- dict(
1393
- name='ssd_kpt15',
1394
- id=204,
1395
- color=[64, 64, 128],
1396
- type='',
1397
- swap='ssd_kpt21'),
1398
- 205:
1399
- dict(
1400
- name='ssd_kpt16',
1401
- id=205,
1402
- color=[64, 64, 128],
1403
- type='',
1404
- swap='ssd_kpt20'),
1405
- 206:
1406
- dict(
1407
- name='ssd_kpt17',
1408
- id=206,
1409
- color=[64, 64, 128],
1410
- type='',
1411
- swap='ssd_kpt19'),
1412
- 207:
1413
- dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
1414
- 208:
1415
- dict(
1416
- name='ssd_kpt19',
1417
- id=208,
1418
- color=[64, 64, 128],
1419
- type='',
1420
- swap='ssd_kpt17'),
1421
- 209:
1422
- dict(
1423
- name='ssd_kpt20',
1424
- id=209,
1425
- color=[64, 64, 128],
1426
- type='',
1427
- swap='ssd_kpt16'),
1428
- 210:
1429
- dict(
1430
- name='ssd_kpt21',
1431
- id=210,
1432
- color=[64, 64, 128],
1433
- type='',
1434
- swap='ssd_kpt15'),
1435
- 211:
1436
- dict(
1437
- name='ssd_kpt22',
1438
- id=211,
1439
- color=[64, 64, 128],
1440
- type='',
1441
- swap='ssd_kpt14'),
1442
- 212:
1443
- dict(
1444
- name='ssd_kpt23',
1445
- id=212,
1446
- color=[64, 64, 128],
1447
- type='',
1448
- swap='ssd_kpt13'),
1449
- 213:
1450
- dict(
1451
- name='ssd_kpt24',
1452
- id=213,
1453
- color=[64, 64, 128],
1454
- type='',
1455
- swap='ssd_kpt12'),
1456
- 214:
1457
- dict(
1458
- name='ssd_kpt25',
1459
- id=214,
1460
- color=[64, 64, 128],
1461
- type='',
1462
- swap='ssd_kpt11'),
1463
- 215:
1464
- dict(
1465
- name='ssd_kpt26',
1466
- id=215,
1467
- color=[64, 64, 128],
1468
- type='',
1469
- swap='ssd_kpt10'),
1470
- 216:
1471
- dict(
1472
- name='ssd_kpt27',
1473
- id=216,
1474
- color=[64, 64, 128],
1475
- type='',
1476
- swap='ssd_kpt9'),
1477
- 217:
1478
- dict(
1479
- name='ssd_kpt28',
1480
- id=217,
1481
- color=[64, 64, 128],
1482
- type='',
1483
- swap='ssd_kpt8'),
1484
- 218:
1485
- dict(
1486
- name='ssd_kpt29',
1487
- id=218,
1488
- color=[64, 64, 128],
1489
- type='',
1490
- swap='ssd_kpt7'),
1491
- 219:
1492
- dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
1493
- 220:
1494
- dict(
1495
- name='lsd_kpt2',
1496
- id=220,
1497
- color=[128, 64, 0],
1498
- type='',
1499
- swap='lsd_kpt6'),
1500
- 221:
1501
- dict(
1502
- name='lsd_kpt3',
1503
- id=221,
1504
- color=[128, 64, 0],
1505
- type='',
1506
- swap='lsd_kpt5'),
1507
- 222:
1508
- dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
1509
- 223:
1510
- dict(
1511
- name='lsd_kpt5',
1512
- id=223,
1513
- color=[128, 64, 0],
1514
- type='',
1515
- swap='lsd_kpt3'),
1516
- 224:
1517
- dict(
1518
- name='lsd_kpt6',
1519
- id=224,
1520
- color=[128, 64, 0],
1521
- type='',
1522
- swap='lsd_kpt2'),
1523
- 225:
1524
- dict(
1525
- name='lsd_kpt7',
1526
- id=225,
1527
- color=[128, 64, 0],
1528
- type='',
1529
- swap='lsd_kpt37'),
1530
- 226:
1531
- dict(
1532
- name='lsd_kpt8',
1533
- id=226,
1534
- color=[128, 64, 0],
1535
- type='',
1536
- swap='lsd_kpt36'),
1537
- 227:
1538
- dict(
1539
- name='lsd_kpt9',
1540
- id=227,
1541
- color=[128, 64, 0],
1542
- type='',
1543
- swap='lsd_kpt35'),
1544
- 228:
1545
- dict(
1546
- name='lsd_kpt10',
1547
- id=228,
1548
- color=[128, 64, 0],
1549
- type='',
1550
- swap='lsd_kpt34'),
1551
- 229:
1552
- dict(
1553
- name='lsd_kpt11',
1554
- id=229,
1555
- color=[128, 64, 0],
1556
- type='',
1557
- swap='lsd_kpt33'),
1558
- 230:
1559
- dict(
1560
- name='lsd_kpt12',
1561
- id=230,
1562
- color=[128, 64, 0],
1563
- type='',
1564
- swap='lsd_kpt32'),
1565
- 231:
1566
- dict(
1567
- name='lsd_kpt13',
1568
- id=231,
1569
- color=[128, 64, 0],
1570
- type='',
1571
- swap='lsd_kpt31'),
1572
- 232:
1573
- dict(
1574
- name='lsd_kpt14',
1575
- id=232,
1576
- color=[128, 64, 0],
1577
- type='',
1578
- swap='lsd_kpt30'),
1579
- 233:
1580
- dict(
1581
- name='lsd_kpt15',
1582
- id=233,
1583
- color=[128, 64, 0],
1584
- type='',
1585
- swap='lsd_kpt29'),
1586
- 234:
1587
- dict(
1588
- name='lsd_kpt16',
1589
- id=234,
1590
- color=[128, 64, 0],
1591
- type='',
1592
- swap='lsd_kpt28'),
1593
- 235:
1594
- dict(
1595
- name='lsd_kpt17',
1596
- id=235,
1597
- color=[128, 64, 0],
1598
- type='',
1599
- swap='lsd_kpt27'),
1600
- 236:
1601
- dict(
1602
- name='lsd_kpt18',
1603
- id=236,
1604
- color=[128, 64, 0],
1605
- type='',
1606
- swap='lsd_kpt26'),
1607
- 237:
1608
- dict(
1609
- name='lsd_kpt19',
1610
- id=237,
1611
- color=[128, 64, 0],
1612
- type='',
1613
- swap='lsd_kpt25'),
1614
- 238:
1615
- dict(
1616
- name='lsd_kpt20',
1617
- id=238,
1618
- color=[128, 64, 0],
1619
- type='',
1620
- swap='lsd_kpt24'),
1621
- 239:
1622
- dict(
1623
- name='lsd_kpt21',
1624
- id=239,
1625
- color=[128, 64, 0],
1626
- type='',
1627
- swap='lsd_kpt23'),
1628
- 240:
1629
- dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
1630
- 241:
1631
- dict(
1632
- name='lsd_kpt23',
1633
- id=241,
1634
- color=[128, 64, 0],
1635
- type='',
1636
- swap='lsd_kpt21'),
1637
- 242:
1638
- dict(
1639
- name='lsd_kpt24',
1640
- id=242,
1641
- color=[128, 64, 0],
1642
- type='',
1643
- swap='lsd_kpt20'),
1644
- 243:
1645
- dict(
1646
- name='lsd_kpt25',
1647
- id=243,
1648
- color=[128, 64, 0],
1649
- type='',
1650
- swap='lsd_kpt19'),
1651
- 244:
1652
- dict(
1653
- name='lsd_kpt26',
1654
- id=244,
1655
- color=[128, 64, 0],
1656
- type='',
1657
- swap='lsd_kpt18'),
1658
- 245:
1659
- dict(
1660
- name='lsd_kpt27',
1661
- id=245,
1662
- color=[128, 64, 0],
1663
- type='',
1664
- swap='lsd_kpt17'),
1665
- 246:
1666
- dict(
1667
- name='lsd_kpt28',
1668
- id=246,
1669
- color=[128, 64, 0],
1670
- type='',
1671
- swap='lsd_kpt16'),
1672
- 247:
1673
- dict(
1674
- name='lsd_kpt29',
1675
- id=247,
1676
- color=[128, 64, 0],
1677
- type='',
1678
- swap='lsd_kpt15'),
1679
- 248:
1680
- dict(
1681
- name='lsd_kpt30',
1682
- id=248,
1683
- color=[128, 64, 0],
1684
- type='',
1685
- swap='lsd_kpt14'),
1686
- 249:
1687
- dict(
1688
- name='lsd_kpt31',
1689
- id=249,
1690
- color=[128, 64, 0],
1691
- type='',
1692
- swap='lsd_kpt13'),
1693
- 250:
1694
- dict(
1695
- name='lsd_kpt32',
1696
- id=250,
1697
- color=[128, 64, 0],
1698
- type='',
1699
- swap='lsd_kpt12'),
1700
- 251:
1701
- dict(
1702
- name='lsd_kpt33',
1703
- id=251,
1704
- color=[128, 64, 0],
1705
- type='',
1706
- swap='lsd_kpt11'),
1707
- 252:
1708
- dict(
1709
- name='lsd_kpt34',
1710
- id=252,
1711
- color=[128, 64, 0],
1712
- type='',
1713
- swap='lsd_kpt10'),
1714
- 253:
1715
- dict(
1716
- name='lsd_kpt35',
1717
- id=253,
1718
- color=[128, 64, 0],
1719
- type='',
1720
- swap='lsd_kpt9'),
1721
- 254:
1722
- dict(
1723
- name='lsd_kpt36',
1724
- id=254,
1725
- color=[128, 64, 0],
1726
- type='',
1727
- swap='lsd_kpt8'),
1728
- 255:
1729
- dict(
1730
- name='lsd_kpt37',
1731
- id=255,
1732
- color=[128, 64, 0],
1733
- type='',
1734
- swap='lsd_kpt7'),
1735
- 256:
1736
- dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
1737
- 257:
1738
- dict(
1739
- name='vd_kpt2',
1740
- id=257,
1741
- color=[128, 64, 255],
1742
- type='',
1743
- swap='vd_kpt6'),
1744
- 258:
1745
- dict(
1746
- name='vd_kpt3',
1747
- id=258,
1748
- color=[128, 64, 255],
1749
- type='',
1750
- swap='vd_kpt5'),
1751
- 259:
1752
- dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
1753
- 260:
1754
- dict(
1755
- name='vd_kpt5',
1756
- id=260,
1757
- color=[128, 64, 255],
1758
- type='',
1759
- swap='vd_kpt3'),
1760
- 261:
1761
- dict(
1762
- name='vd_kpt6',
1763
- id=261,
1764
- color=[128, 64, 255],
1765
- type='',
1766
- swap='vd_kpt2'),
1767
- 262:
1768
- dict(
1769
- name='vd_kpt7',
1770
- id=262,
1771
- color=[128, 64, 255],
1772
- type='',
1773
- swap='vd_kpt19'),
1774
- 263:
1775
- dict(
1776
- name='vd_kpt8',
1777
- id=263,
1778
- color=[128, 64, 255],
1779
- type='',
1780
- swap='vd_kpt18'),
1781
- 264:
1782
- dict(
1783
- name='vd_kpt9',
1784
- id=264,
1785
- color=[128, 64, 255],
1786
- type='',
1787
- swap='vd_kpt17'),
1788
- 265:
1789
- dict(
1790
- name='vd_kpt10',
1791
- id=265,
1792
- color=[128, 64, 255],
1793
- type='',
1794
- swap='vd_kpt16'),
1795
- 266:
1796
- dict(
1797
- name='vd_kpt11',
1798
- id=266,
1799
- color=[128, 64, 255],
1800
- type='',
1801
- swap='vd_kpt15'),
1802
- 267:
1803
- dict(
1804
- name='vd_kpt12',
1805
- id=267,
1806
- color=[128, 64, 255],
1807
- type='',
1808
- swap='vd_kpt14'),
1809
- 268:
1810
- dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
1811
- 269:
1812
- dict(
1813
- name='vd_kpt14',
1814
- id=269,
1815
- color=[128, 64, 255],
1816
- type='',
1817
- swap='vd_kpt12'),
1818
- 270:
1819
- dict(
1820
- name='vd_kpt15',
1821
- id=270,
1822
- color=[128, 64, 255],
1823
- type='',
1824
- swap='vd_kpt11'),
1825
- 271:
1826
- dict(
1827
- name='vd_kpt16',
1828
- id=271,
1829
- color=[128, 64, 255],
1830
- type='',
1831
- swap='vd_kpt10'),
1832
- 272:
1833
- dict(
1834
- name='vd_kpt17',
1835
- id=272,
1836
- color=[128, 64, 255],
1837
- type='',
1838
- swap='vd_kpt9'),
1839
- 273:
1840
- dict(
1841
- name='vd_kpt18',
1842
- id=273,
1843
- color=[128, 64, 255],
1844
- type='',
1845
- swap='vd_kpt8'),
1846
- 274:
1847
- dict(
1848
- name='vd_kpt19',
1849
- id=274,
1850
- color=[128, 64, 255],
1851
- type='',
1852
- swap='vd_kpt7'),
1853
- 275:
1854
- dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
1855
- 276:
1856
- dict(
1857
- name='sd_kpt2',
1858
- id=276,
1859
- color=[128, 64, 0],
1860
- type='',
1861
- swap='sd_kpt6'),
1862
- 277:
1863
- dict(
1864
- name='sd_kpt3',
1865
- id=277,
1866
- color=[128, 64, 0],
1867
- type='',
1868
- swap='sd_kpt5'),
1869
- 278:
1870
- dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
1871
- 279:
1872
- dict(
1873
- name='sd_kpt5',
1874
- id=279,
1875
- color=[128, 64, 0],
1876
- type='',
1877
- swap='sd_kpt3'),
1878
- 280:
1879
- dict(
1880
- name='sd_kpt6',
1881
- id=280,
1882
- color=[128, 64, 0],
1883
- type='',
1884
- swap='sd_kpt2'),
1885
- 281:
1886
- dict(
1887
- name='sd_kpt7',
1888
- id=281,
1889
- color=[128, 64, 0],
1890
- type='',
1891
- swap='sd_kpt19'),
1892
- 282:
1893
- dict(
1894
- name='sd_kpt8',
1895
- id=282,
1896
- color=[128, 64, 0],
1897
- type='',
1898
- swap='sd_kpt18'),
1899
- 283:
1900
- dict(
1901
- name='sd_kpt9',
1902
- id=283,
1903
- color=[128, 64, 0],
1904
- type='',
1905
- swap='sd_kpt17'),
1906
- 284:
1907
- dict(
1908
- name='sd_kpt10',
1909
- id=284,
1910
- color=[128, 64, 0],
1911
- type='',
1912
- swap='sd_kpt16'),
1913
- 285:
1914
- dict(
1915
- name='sd_kpt11',
1916
- id=285,
1917
- color=[128, 64, 0],
1918
- type='',
1919
- swap='sd_kpt15'),
1920
- 286:
1921
- dict(
1922
- name='sd_kpt12',
1923
- id=286,
1924
- color=[128, 64, 0],
1925
- type='',
1926
- swap='sd_kpt14'),
1927
- 287:
1928
- dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
1929
- 288:
1930
- dict(
1931
- name='sd_kpt14',
1932
- id=288,
1933
- color=[128, 64, 0],
1934
- type='',
1935
- swap='sd_kpt12'),
1936
- 289:
1937
- dict(
1938
- name='sd_kpt15',
1939
- id=289,
1940
- color=[128, 64, 0],
1941
- type='',
1942
- swap='sd_kpt11'),
1943
- 290:
1944
- dict(
1945
- name='sd_kpt16',
1946
- id=290,
1947
- color=[128, 64, 0],
1948
- type='',
1949
- swap='sd_kpt10'),
1950
- 291:
1951
- dict(
1952
- name='sd_kpt17',
1953
- id=291,
1954
- color=[128, 64, 0],
1955
- type='',
1956
- swap='sd_kpt9'),
1957
- 292:
1958
- dict(
1959
- name='sd_kpt18',
1960
- id=292,
1961
- color=[128, 64, 0],
1962
- type='',
1963
- swap='sd_kpt8'),
1964
- 293:
1965
- dict(
1966
- name='sd_kpt19',
1967
- id=293,
1968
- color=[128, 64, 0],
1969
- type='',
1970
- swap='sd_kpt7')
1971
- }),
1972
- skeleton_info=dict({
1973
- 0:
1974
- dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
1975
- 1:
1976
- dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
1977
- 2:
1978
- dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
1979
- 3:
1980
- dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
1981
- 4:
1982
- dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
1983
- 5:
1984
- dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
1985
- 6:
1986
- dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
1987
- 7:
1988
- dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
1989
- 8:
1990
- dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
1991
- 9:
1992
- dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
1993
- 10:
1994
- dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
1995
- 11:
1996
- dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
1997
- 12:
1998
- dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
1999
- 13:
2000
- dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
2001
- 14:
2002
- dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
2003
- 15:
2004
- dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
2005
- 16:
2006
- dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
2007
- 17:
2008
- dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
2009
- 18:
2010
- dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
2011
- 19:
2012
- dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
2013
- 20:
2014
- dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
2015
- 21:
2016
- dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
2017
- 22:
2018
- dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
2019
- 23:
2020
- dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
2021
- 24:
2022
- dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
2023
- 25:
2024
- dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
2025
- 26:
2026
- dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
2027
- 27:
2028
- dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
2029
- 28:
2030
- dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
2031
- 29:
2032
- dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
2033
- 30:
2034
- dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
2035
- 31:
2036
- dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
2037
- 32:
2038
- dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
2039
- 33:
2040
- dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
2041
- 34:
2042
- dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
2043
- 35:
2044
- dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
2045
- 36:
2046
- dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
2047
- 37:
2048
- dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
2049
- 38:
2050
- dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
2051
- 39:
2052
- dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
2053
- 40:
2054
- dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
2055
- 41:
2056
- dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
2057
- 42:
2058
- dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
2059
- 43:
2060
- dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
2061
- 44:
2062
- dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
2063
- 45:
2064
- dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
2065
- 46:
2066
- dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
2067
- 47:
2068
- dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
2069
- 48:
2070
- dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
2071
- 49:
2072
- dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
2073
- 50:
2074
- dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
2075
- 51:
2076
- dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
2077
- 52:
2078
- dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
2079
- 53:
2080
- dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
2081
- 54:
2082
- dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
2083
- 55:
2084
- dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
2085
- 56:
2086
- dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
2087
- 57:
2088
- dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
2089
- 58:
2090
- dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
2091
- 59:
2092
- dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
2093
- 60:
2094
- dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
2095
- 61:
2096
- dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
2097
- 62:
2098
- dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
2099
- 63:
2100
- dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
2101
- 64:
2102
- dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
2103
- 65:
2104
- dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
2105
- 66:
2106
- dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
2107
- 67:
2108
- dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
2109
- 68:
2110
- dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
2111
- 69:
2112
- dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
2113
- 70:
2114
- dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
2115
- 71:
2116
- dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
2117
- 72:
2118
- dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
2119
- 73:
2120
- dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
2121
- 74:
2122
- dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
2123
- 75:
2124
- dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
2125
- 76:
2126
- dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
2127
- 77:
2128
- dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
2129
- 78:
2130
- dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
2131
- 79:
2132
- dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
2133
- 80:
2134
- dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
2135
- 81:
2136
- dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
2137
- 82:
2138
- dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
2139
- 83:
2140
- dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
2141
- 84:
2142
- dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
2143
- 85:
2144
- dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
2145
- 86:
2146
- dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
2147
- 87:
2148
- dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
2149
- 88:
2150
- dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
2151
- 89:
2152
- dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
2153
- 90:
2154
- dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
2155
- 91:
2156
- dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
2157
- 92:
2158
- dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
2159
- 93:
2160
- dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
2161
- 94:
2162
- dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
2163
- 95:
2164
- dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
2165
- 96:
2166
- dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
2167
- 97:
2168
- dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
2169
- 98:
2170
- dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
2171
- 99:
2172
- dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
2173
- 100:
2174
- dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
2175
- 101:
2176
- dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
2177
- 102:
2178
- dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
2179
- 103:
2180
- dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
2181
- 104:
2182
- dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
2183
- 105:
2184
- dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
2185
- 106:
2186
- dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
2187
- 107:
2188
- dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
2189
- 108:
2190
- dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
2191
- 109:
2192
- dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
2193
- 110:
2194
- dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
2195
- 111:
2196
- dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
2197
- 112:
2198
- dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
2199
- 113:
2200
- dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
2201
- 114:
2202
- dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
2203
- 115:
2204
- dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
2205
- 116:
2206
- dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
2207
- 117:
2208
- dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
2209
- 118:
2210
- dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
2211
- 119:
2212
- dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
2213
- 120:
2214
- dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
2215
- 121:
2216
- dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
2217
- 122:
2218
- dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
2219
- 123:
2220
- dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
2221
- 124:
2222
- dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
2223
- 125:
2224
- dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
2225
- 126:
2226
- dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
2227
- 127:
2228
- dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
2229
- 128:
- dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
- 129:
- dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
- 130:
- dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
- 131:
- dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
- 132:
- dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
- 133:
- dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
- 134:
- dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
- 135:
- dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
- 136:
- dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
- 137:
- dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
- 138:
- dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
- 139:
- dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
- 140:
- dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
- 141:
- dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
- 142:
- dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
- 143:
- dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
- 144:
- dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
- 145:
- dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
- 146:
- dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
- 147:
- dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
- 148:
- dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
- 149:
- dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
- 150:
- dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
- 151:
- dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
- 152:
- dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
- 153:
- dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
- 154:
- dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
- 155:
- dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
- 156:
- dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
- 157:
- dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
- 158:
- dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
- 159:
- dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
- 160:
- dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
- 161:
- dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
- 162:
- dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
- 163:
- dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
- 164:
- dict(
- link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128,
- 128]),
- 165:
- dict(
- link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128,
- 128]),
- 166:
- dict(
- link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128,
- 128]),
- 167:
- dict(
- link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128,
- 128]),
- 168:
- dict(
- link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128,
- 128]),
- 169:
- dict(
- link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128,
- 128]),
- 170:
- dict(
- link=('shorts_kpt9', 'shorts_kpt10'),
- id=170,
- color=[128, 128, 128]),
- 171:
- dict(
- link=('shorts_kpt10', 'shorts_kpt3'),
- id=171,
- color=[128, 128, 128]),
- 172:
- dict(
- link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128,
- 128]),
- 173:
- dict(
- link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128,
- 128]),
- 174:
- dict(
- link=('trousers_kpt1', 'trousers_kpt4'),
- id=174,
- color=[128, 0, 128]),
- 175:
- dict(
- link=('trousers_kpt4', 'trousers_kpt5'),
- id=175,
- color=[128, 0, 128]),
- 176:
- dict(
- link=('trousers_kpt5', 'trousers_kpt6'),
- id=176,
- color=[128, 0, 128]),
- 177:
- dict(
- link=('trousers_kpt6', 'trousers_kpt7'),
- id=177,
- color=[128, 0, 128]),
- 178:
- dict(
- link=('trousers_kpt7', 'trousers_kpt8'),
- id=178,
- color=[128, 0, 128]),
- 179:
- dict(
- link=('trousers_kpt8', 'trousers_kpt9'),
- id=179,
- color=[128, 0, 128]),
- 180:
- dict(
- link=('trousers_kpt9', 'trousers_kpt10'),
- id=180,
- color=[128, 0, 128]),
- 181:
- dict(
- link=('trousers_kpt10', 'trousers_kpt11'),
- id=181,
- color=[128, 0, 128]),
- 182:
- dict(
- link=('trousers_kpt11', 'trousers_kpt12'),
- id=182,
- color=[128, 0, 128]),
- 183:
- dict(
- link=('trousers_kpt12', 'trousers_kpt13'),
- id=183,
- color=[128, 0, 128]),
- 184:
- dict(
- link=('trousers_kpt13', 'trousers_kpt14'),
- id=184,
- color=[128, 0, 128]),
- 185:
- dict(
- link=('trousers_kpt14', 'trousers_kpt3'),
- id=185,
- color=[128, 0, 128]),
- 186:
- dict(
- link=('trousers_kpt3', 'trousers_kpt2'),
- id=186,
- color=[128, 0, 128]),
- 187:
- dict(
- link=('trousers_kpt2', 'trousers_kpt1'),
- id=187,
- color=[128, 0, 128]),
- 188:
- dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
- 189:
- dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
- 190:
- dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
- 191:
- dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
- 192:
- dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
- 193:
- dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
- 194:
- dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
- 195:
- dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
- 196:
- dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
- 197:
- dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
- 198:
- dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
- 199:
- dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
- 200:
- dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
- 201:
- dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
- 202:
- dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
- 203:
- dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
- 204:
- dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
- 205:
- dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
- 206:
- dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
- 207:
- dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
- 208:
- dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
- 209:
- dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
- 210:
- dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
- 211:
- dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
- 212:
- dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
- 213:
- dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
- 214:
- dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
- 215:
- dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
- 216:
- dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
- 217:
- dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
- 218:
- dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
- 219:
- dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
- 220:
- dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
- 221:
- dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
- 222:
- dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
- 223:
- dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
- 224:
- dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
- 225:
- dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
- 226:
- dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
- 227:
- dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
- 228:
- dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
- 229:
- dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
- 230:
- dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
- 231:
- dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
- 232:
- dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
- 233:
- dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
- 234:
- dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
- 235:
- dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
- 236:
- dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
- 237:
- dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
- 238:
- dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
- 239:
- dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
- 240:
- dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
- 241:
- dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
- 242:
- dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
- 243:
- dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
- 244:
- dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
- 245:
- dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
- 246:
- dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
- 247:
- dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
- 248:
- dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
- 249:
- dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
- 250:
- dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
- 251:
- dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
- 252:
- dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
- 253:
- dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
- 254:
- dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
- 255:
- dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
- 256:
- dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
- 257:
- dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
- 258:
- dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
- 259:
- dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
- 260:
- dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
- 261:
- dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
- 262:
- dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
- 263:
- dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
- 264:
- dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
- 265:
- dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
- 266:
- dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
- 267:
- dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
- 268:
- dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
- 269:
- dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
- 270:
- dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
- 271:
- dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
- 272:
- dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
- 273:
- dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
- 274:
- dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
- 275:
- dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
- 276:
- dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
- 277:
- dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
- 278:
- dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
- 279:
- dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
- 280:
- dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
- 281:
- dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
- 282:
- dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
- 283:
- dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
- 284:
- dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
- 285:
- dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
- 286:
- dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
- 287:
- dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
- 288:
- dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
- 289:
- dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
- 290:
- dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
- 291:
- dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
- 292:
- dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
- 293:
- dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
- 294:
- dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
- 295:
- dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
- 296:
- dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
- 297:
- dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
- 298:
- dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
- 299:
- dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
- 300:
- dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
- 301:
- dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
- 302:
- dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
- 303:
- dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
- }),
- joint_weights=[
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
- 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
- ],
- sigmas=[])
- param_scheduler = [
- dict(
- type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
- dict(
- type='MultiStepLR',
- begin=0,
- end=60,
- milestones=[20, 40],
- gamma=0.1,
- by_epoch=True)
- ]
- optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
- auto_scale_lr = dict(base_batch_size=512)
- dataset_type = 'DeepFashion2Dataset'
- data_mode = 'topdown'
- data_root = 'data/deepfashion2/'
- codec = dict(
- type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
- train_pipeline = [
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
- ]
- val_pipeline = [
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]
- train_dataloader = dict(
- batch_size=16,
- num_workers=6,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='train/deepfashion2_long_sleeved_outwear.json',
- data_prefix=dict(img='train/image/'),
- pipeline=[
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(
- type='GenerateTarget',
- encoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- dict(type='PackPoseInputs')
- ]))
- val_dataloader = dict(
- batch_size=16,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_long_sleeved_outwear.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
- test_dataloader = dict(
- batch_size=16,
- num_workers=6,
- persistent_workers=True,
- drop_last=False,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=dict(
- type='DeepFashion2Dataset',
- data_root='data/deepfashion2/',
- data_mode='topdown',
- ann_file='validation/deepfashion2_long_sleeved_outwear.json',
- data_prefix=dict(img='validation/image/'),
- test_mode=True,
- pipeline=[
- dict(type='LoadImage', backend_args=dict(backend='local')),
- dict(type='GetBBoxCenterScale'),
- dict(type='TopdownAffine', input_size=(192, 256)),
- dict(type='PackPoseInputs')
- ]))
- channel_cfg = dict(
- num_output_channels=294,
- dataset_joints=294,
- dataset_channel=[[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ]],
- inference_channel=[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ])
- model = dict(
- type='TopdownPoseEstimator',
- data_preprocessor=dict(
- type='PoseDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True),
- backbone=dict(
- type='ResNet',
- depth=50,
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
- head=dict(
- type='HeatmapHead',
- in_channels=2048,
- out_channels=294,
- loss=dict(type='KeypointMSELoss', use_target_weight=True),
- decoder=dict(
- type='MSRAHeatmap',
- input_size=(192, 256),
- heatmap_size=(48, 64),
- sigma=2)),
- test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
- val_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
- ]
- test_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE')
- ]
- launcher = 'pytorch'
- work_dir = './work_dirs/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192'
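
Note: the file above is an auto-dumped MMEngine-style config. A minimal sketch (not part of this commit) of how such a config is typically consumed; the config filename is an assumption inferred from the deleted work_dir and may differ locally.

# sketch: launching training from an MMEngine/MMPose 1.x config file
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py')  # assumed path
cfg.work_dir = './work_dirs/demo'  # override the dumped work_dir if desired
runner = Runner.from_cfg(cfg)  # builds model, dataloaders and schedulers from the fields above
runner.train()
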
spaces/AUBMC-AIM/MammoGANesis/app.py DELETED
@@ -1,31 +0,0 @@
- import os
- import gradio as gr
- from PIL import Image
- from huggingface_hub import hf_hub_url, cached_download
-
-
- os.system("git clone https://github.com/AK391/stylegan2-ada-pytorch")
-
-
- os.chdir("stylegan2-ada-pytorch")
-
- os.mkdir("outputs")
- os.mkdir("outputs/images")
-
- config_file_url = hf_hub_url("AUBMC-AIM/MammoGANesis", filename="mammoGANesis.pkl")
- cached_file = cached_download(config_file_url)
-
- def inference(truncation,seeds):
- os.system("python generate.py --outdir=./outputs/images/ --trunc="+str(truncation)+" --seeds="+str(int(seeds))+" --network="+cached_file)
- seeds = int(seeds)
- image = Image.open(f"./outputs/images/seed{seeds:04d}.png")
- return image
-
- title = "MammoGANesis"
- description = "Gradio demo for MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education. This paper demonstrates the model’s ability to generate anatomically and medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study on four expert mammography radiologists to distinguish between generated and real images, ascribing to the high visual quality of the synthesized and edited mammograms, and to their potential use in advancing and facilitating medical education. To use it, add seed and truncation, or click one of the examples to load them. Read more at the links below."
-
- article = "<p style='text-align: center'><a href='https://cyrilzakka.github.io/radiology/2020/10/13/mammogenesis.html' target='_blank'>MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education</a><center></a></center></p><center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_mammogan' alt='visitor badge'></center>"
-
- gr.Interface(inference,[gr.inputs.Slider(label="truncation",minimum=0, maximum=5, step=0.1, default=0.8),gr.inputs.Slider(label="Seed",minimum=0, maximum=1000, step=1, default=0)],"pil",title=title,description=description,article=article, examples=[
- [0.8,0]
- ]).launch(enable_queue=True,cache_examples=True)
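
Note: the deleted Space above relies on the long-deprecated hf_hub_url + cached_download pair. A minimal sketch of the modern equivalent in current huggingface_hub (an illustration, not part of this commit):

from huggingface_hub import hf_hub_download

# one call resolves the URL and caches the file locally
cached_file = hf_hub_download(repo_id="AUBMC-AIM/MammoGANesis", filename="mammoGANesis.pkl")
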
spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py DELETED
@@ -1,280 +0,0 @@
- import argparse
- import json
- import logging
- import os
- import sys
- from pathlib import Path
-
- import comet_ml
-
- logger = logging.getLogger(__name__)
-
- FILE = Path(__file__).resolve()
- ROOT = FILE.parents[3] # YOLOv5 root directory
- if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
- from train import train
- from utils.callbacks import Callbacks
- from utils.general import increment_path
- from utils.torch_utils import select_device
-
- # Project Configuration
- config = comet_ml.config.get_config()
- COMET_PROJECT_NAME = config.get_string(
- os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5"
- )
-
-
- def get_args(known=False):
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--weights",
- type=str,
- default=ROOT / "yolov5s.pt",
- help="initial weights path",
- )
- parser.add_argument("--cfg", type=str, default="", help="model.yaml path")
- parser.add_argument(
- "--data",
- type=str,
- default=ROOT / "data/coco128.yaml",
- help="dataset.yaml path",
- )
- parser.add_argument(
- "--hyp",
- type=str,
- default=ROOT / "data/hyps/hyp.scratch-low.yaml",
- help="hyperparameters path",
- )
- parser.add_argument(
- "--epochs", type=int, default=300, help="total training epochs"
- )
- parser.add_argument(
- "--batch-size",
- type=int,
- default=16,
- help="total batch size for all GPUs, -1 for autobatch",
- )
- parser.add_argument(
- "--imgsz",
- "--img",
- "--img-size",
- type=int,
- default=640,
- help="train, val image size (pixels)",
- )
- parser.add_argument(
- "--rect", action="store_true", help="rectangular training"
- )
- parser.add_argument(
- "--resume",
- nargs="?",
- const=True,
- default=False,
- help="resume most recent training",
- )
- parser.add_argument(
- "--nosave", action="store_true", help="only save final checkpoint"
- )
- parser.add_argument(
- "--noval", action="store_true", help="only validate final epoch"
- )
- parser.add_argument(
- "--noautoanchor", action="store_true", help="disable AutoAnchor"
- )
- parser.add_argument(
- "--noplots", action="store_true", help="save no plot files"
- )
- parser.add_argument(
- "--evolve",
- type=int,
- nargs="?",
- const=300,
- help="evolve hyperparameters for x generations",
- )
- parser.add_argument("--bucket", type=str, default="", help="gsutil bucket")
- parser.add_argument(
- "--cache",
- type=str,
- nargs="?",
- const="ram",
- help='--cache images in "ram" (default) or "disk"',
- )
- parser.add_argument(
- "--image-weights",
- action="store_true",
- help="use weighted image selection for training",
- )
- parser.add_argument(
- "--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu"
- )
- parser.add_argument(
- "--multi-scale", action="store_true", help="vary img-size +/- 50%%"
- )
- parser.add_argument(
- "--single-cls",
- action="store_true",
- help="train multi-class data as single-class",
- )
- parser.add_argument(
- "--optimizer",
- type=str,
- choices=["SGD", "Adam", "AdamW"],
- default="SGD",
- help="optimizer",
- )
- parser.add_argument(
- "--sync-bn",
- action="store_true",
- help="use SyncBatchNorm, only available in DDP mode",
- )
- parser.add_argument(
- "--workers",
- type=int,
- default=8,
- help="max dataloader workers (per RANK in DDP mode)",
- )
- parser.add_argument(
- "--project", default=ROOT / "runs/train", help="save to project/name"
- )
- parser.add_argument("--name", default="exp", help="save to project/name")
- parser.add_argument(
- "--exist-ok",
- action="store_true",
- help="existing project/name ok, do not increment",
- )
- parser.add_argument("--quad", action="store_true", help="quad dataloader")
- parser.add_argument(
- "--cos-lr", action="store_true", help="cosine LR scheduler"
- )
- parser.add_argument(
- "--label-smoothing",
- type=float,
- default=0.0,
- help="Label smoothing epsilon",
- )
- parser.add_argument(
- "--patience",
- type=int,
- default=100,
- help="EarlyStopping patience (epochs without improvement)",
- )
- parser.add_argument(
- "--freeze",
- nargs="+",
- type=int,
- default=[0],
- help="Freeze layers: backbone=10, first3=0 1 2",
- )
- parser.add_argument(
- "--save-period",
- type=int,
- default=-1,
- help="Save checkpoint every x epochs (disabled if < 1)",
- )
- parser.add_argument(
- "--seed", type=int, default=0, help="Global training seed"
- )
- parser.add_argument(
- "--local_rank",
- type=int,
- default=-1,
- help="Automatic DDP Multi-GPU argument, do not modify",
- )
-
- # Weights & Biases arguments
- parser.add_argument("--entity", default=None, help="W&B: Entity")
- parser.add_argument(
- "--upload_dataset",
- nargs="?",
- const=True,
- default=False,
- help='W&B: Upload data, "val" option',
- )
- parser.add_argument(
- "--bbox_interval",
- type=int,
- default=-1,
- help="W&B: Set bounding-box image logging interval",
- )
- parser.add_argument(
- "--artifact_alias",
- type=str,
- default="latest",
- help="W&B: Version of dataset artifact to use",
- )
-
- # Comet Arguments
- parser.add_argument(
- "--comet_optimizer_config",
- type=str,
- help="Comet: Path to a Comet Optimizer Config File.",
- )
- parser.add_argument(
- "--comet_optimizer_id",
- type=str,
- help="Comet: ID of the Comet Optimizer sweep.",
- )
- parser.add_argument(
- "--comet_optimizer_objective",
- type=str,
- help="Comet: Set to 'minimize' or 'maximize'.",
- )
- parser.add_argument(
- "--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize."
- )
- parser.add_argument(
- "--comet_optimizer_workers",
- type=int,
- default=1,
- help="Comet: Number of Parallel Workers to use with the Comet Optimizer.",
- )
-
- return parser.parse_known_args()[0] if known else parser.parse_args()
-
-
- def run(parameters, opt):
- hyp_dict = {
- k: v
- for k, v in parameters.items()
- if k not in ["epochs", "batch_size"]
- }
-
- opt.save_dir = str(
- increment_path(
- Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve
- )
- )
- opt.batch_size = parameters.get("batch_size")
- opt.epochs = parameters.get("epochs")
-
- device = select_device(opt.device, batch_size=opt.batch_size)
- train(hyp_dict, opt, device, callbacks=Callbacks())
-
-
- if __name__ == "__main__":
- opt = get_args(known=True)
-
- opt.weights = str(opt.weights)
- opt.cfg = str(opt.cfg)
- opt.data = str(opt.data)
- opt.project = str(opt.project)
-
- optimizer_id = os.getenv("COMET_OPTIMIZER_ID")
- if optimizer_id is None:
- with open(opt.comet_optimizer_config) as f:
- optimizer_config = json.load(f)
- optimizer = comet_ml.Optimizer(optimizer_config)
- else:
- optimizer = comet_ml.Optimizer(optimizer_id)
-
- opt.comet_optimizer_id = optimizer.id
- status = optimizer.status()
-
- opt.comet_optimizer_objective = status["spec"]["objective"]
- opt.comet_optimizer_metric = status["spec"]["metric"]
-
- logger.info("COMET INFO: Starting Hyperparameter Sweep")
- for parameter in optimizer.get_parameters():
- run(parameter["parameters"], opt)
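
Note: the script above expects --comet_optimizer_config to point at a Comet Optimizer config whose status exposes spec.objective and spec.metric. A hedged sketch of such a file written from Python; the algorithm name, metric name and parameter ranges below are illustrative placeholders, not values taken from this repository:

import json

optimizer_config = {
    "algorithm": "bayes",  # Comet also supports e.g. "grid" and "random"
    "spec": {"objective": "minimize", "metric": "loss", "maxCombo": 10},
    "parameters": {
        "lr0": {"type": "float", "min": 1e-5, "max": 1e-1, "scalingType": "loguniform"},
        "epochs": {"type": "discrete", "values": [100, 300]},
        "batch_size": {"type": "discrete", "values": [16, 32]},
    },
}
with open("optimizer.config", "w") as f:
    json.dump(optimizer_config, f)
# then: python hpo.py --comet_optimizer_config optimizer.config
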
spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js DELETED
@@ -1,29 +0,0 @@
- import adapter from "@sveltejs/adapter-node";
- import { vitePreprocess } from "@sveltejs/kit/vite";
- import dotenv from "dotenv";
-
- dotenv.config({ path: "./.env.local" });
- dotenv.config({ path: "./.env" });
-
- process.env.PUBLIC_VERSION = process.env.npm_package_version;
-
- /** @type {import('@sveltejs/kit').Config} */
- const config = {
- // Consult https://kit.svelte.dev/docs/integrations#preprocessors
- // for more information about preprocessors
- preprocess: vitePreprocess(),
-
- kit: {
- adapter: adapter(),
-
- paths: {
- base: process.env.APP_BASE || "",
- },
- csrf: {
- // handled in hooks.server.ts, because we can have multiple valid origins
- checkOrigin: false,
- },
- },
- };
-
- export default config;
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py DELETED
@@ -1,62 +0,0 @@
- from __future__ import annotations
-
- import os
- import subprocess
- import multiprocessing
- from typing import TYPE_CHECKING, Any, List, Tuple
-
- from agentverse.agents import ExecutorAgent
- from agentverse.logging import logger
- from agentverse.message import ExecutorMessage, SolverMessage
-
- from . import BaseExecutor, executor_registry
-
-
- def execute_command(command: str, result_list) -> str:
- # TODO: make it more secure
- result = subprocess.run(command, capture_output=True, shell=True, encoding="utf-8")
- result_list.append(f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}")
- # return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
-
- @executor_registry.register("coverage-test")
- class CoverageTestExecutor(BaseExecutor):
- def step(
- self,
- agent: ExecutorAgent,
- task_description: str,
- solution: List[SolverMessage],
- *args,
- **kwargs,
- ) -> Any:
- from scripts.evaluate_commongen import scoring
-
- coverage, missing_tokens = scoring(
- [s.content for s in solution], [task_description]
- )
- if len(missing_tokens[0]) == 0:
- missing_tokens = "No missing tokens."
- else:
- missing_tokens = ", ".join(missing_tokens[0])
- result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}"
- return [ExecutorMessage(content=result)]
-
- async def astep(
- self,
- agent: ExecutorAgent,
- task_description: str,
- solution: List[SolverMessage],
- *args,
- **kwargs,
- ) -> Any:
- from scripts.evaluate_commongen import scoring
-
- coverage, missing_tokens = scoring(
- [s.content for s in solution], [task_description]
- )
- if len(missing_tokens[0]) == 0:
- missing_tokens = "No missing tokens."
- else:
- missing_tokens = ", ".join(missing_tokens[0])
- result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}"
- return [ExecutorMessage(content=result)]
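
Note: execute_command above appends to a result_list instead of returning, which suggests it is meant to run in a separate process. A minimal sketch (an assumption, not code from this commit) of wrapping it with a timeout via multiprocessing:

import multiprocessing

def run_with_timeout(command: str, timeout: float = 10.0) -> str:
    # share a list with the worker so execute_command can append its output
    with multiprocessing.Manager() as manager:
        result_list = manager.list()
        proc = multiprocessing.Process(target=execute_command, args=(command, result_list))
        proc.start()
        proc.join(timeout)  # the 10-second default is an arbitrary placeholder
        if proc.is_alive():
            proc.terminate()
            proc.join()
            return "Execution timed out."
        return result_list[0] if result_list else ""
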
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js DELETED
@@ -1,8 +0,0 @@
- import CreateAnySizer from './utils/CreateAnySizer.js';
- import FixWidthSizer from '../../fixwidthsizer/FixWidthSizer.js';
-
- var CreateFixWidthSizer = function (scene, data, view, styles, customBuilders) {
- return CreateAnySizer(scene, data, view, styles, customBuilders, FixWidthSizer);
- }
-
- export default CreateFixWidthSizer;
spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/WizardLM/WizardCoder-Python-34B-V1.0").launch()
spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py DELETED
@@ -1,68 +0,0 @@
- import torch
- import streamlit as st
- from transformers import RobertaTokenizer, RobertaForSequenceClassification
- import re
- import string
-
- def tokenize_sentences(sentence):
- encoded_dict = tokenizer.encode_plus(
- sentence,
- add_special_tokens=True,
- max_length=128,
- padding='max_length',
- truncation=True,
- return_attention_mask=True,
- return_tensors='pt'
- )
- return torch.cat([encoded_dict['input_ids']], dim=0), torch.cat([encoded_dict['attention_mask']], dim=0)
-
-
-
- def preprocess_query(query):
- query = str(query).lower()
- query = query.strip()
- query=query.translate(str.maketrans("", "", string.punctuation))
- return query
-
- def predict_category(sentence, threshold):
- input_ids, attention_mask = tokenize_sentences(sentence)
- with torch.no_grad():
- outputs = categories_model(input_ids, attention_mask=attention_mask)
- logits = outputs.logits
- predicted_categories = torch.sigmoid(logits).squeeze().tolist()
- results = dict()
- for label, prediction in zip(LABEL_COLUMNS_CATEGORIES, predicted_categories):
- if prediction < threshold:
- continue
- precentage = round(float(prediction) * 100, 2)
- results[label] = precentage
- return results
-
- # Load tokenizer and model
- BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION = 'roberta-large'
- tokenizer = RobertaTokenizer.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, do_lower_case=True)
-
- LABEL_COLUMNS_CATEGORIES = ['AMBIENCE', 'DRINK', 'FOOD', 'GENERAL', 'RESTAURANT', 'SERVICE', 'STAFF']
-
- categories_model = RobertaForSequenceClassification.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, num_labels=len(LABEL_COLUMNS_CATEGORIES))
- categories_model.load_state_dict(torch.load('./Categories_Classification_Model_updated.pth',map_location=torch.device('cpu') ))
- categories_model.eval()
-
- # Streamlit App
- st.title("Review/Sentence Classification")
- st.write("Multilable/Multiclass Sentence classification under 7 Defined Categories. ")
-
- sentence = st.text_input("Enter a sentence:")
- threshold = st.slider("Threshold", min_value=0.0, max_value=1.0, step=0.01, value=0.5)
-
- if sentence:
- processed_sentence = preprocess_query(sentence)
- results = predict_category(processed_sentence, threshold)
- if len(results) > 0:
- st.write("Predicted Aspects:")
- table_data = [["Category", "Probability"]]
- for category, percentage in results.items():
- table_data.append([category, f"{percentage}%"])
- st.table(table_data)
- else:
- st.write("No Categories above the threshold.")
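
Note: the prediction path above is a standard multi-label sigmoid threshold. A self-contained sketch of just that decision rule (the example logits and labels are made up, not taken from the model):

import torch

logits = torch.tensor([2.1, -0.3, 0.8])  # one raw score per label
labels = ["FOOD", "STAFF", "SERVICE"]
probs = torch.sigmoid(logits)  # independent per-label probabilities, unlike softmax
picked = {l: round(float(p) * 100, 2) for l, p in zip(labels, probs) if p >= 0.5}
print(picked)  # labels whose probability clears the 0.5 threshold
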
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py DELETED
@@ -1,495 +0,0 @@
1
- import inspect
2
- from typing import List, Optional, Union
3
-
4
- import PIL
5
- import torch
6
- from torch.nn import functional as F
7
- from transformers import (
8
- CLIPImageProcessor,
9
- CLIPTextModelWithProjection,
10
- CLIPTokenizer,
11
- CLIPVisionModelWithProjection,
12
- )
13
-
14
- from diffusers import (
15
- DiffusionPipeline,
16
- ImagePipelineOutput,
17
- UnCLIPScheduler,
18
- UNet2DConditionModel,
19
- UNet2DModel,
20
- )
21
- from diffusers.pipelines.unclip import UnCLIPTextProjModel
22
- from diffusers.utils import is_accelerate_available, logging, randn_tensor
23
-
24
-
25
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
26
-
27
-
28
- def slerp(val, low, high):
29
- """
30
- Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic.
31
- """
32
- low_norm = low / torch.norm(low)
33
- high_norm = high / torch.norm(high)
34
- omega = torch.acos((low_norm * high_norm))
35
- so = torch.sin(omega)
36
- res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high
37
- return res
38
-
39
-
40
- class UnCLIPImageInterpolationPipeline(DiffusionPipeline):
41
- """
42
- Pipeline to generate variations from an input image using unCLIP
43
-
44
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
45
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
46
-
47
- Args:
48
- text_encoder ([`CLIPTextModelWithProjection`]):
49
- Frozen text-encoder.
50
- tokenizer (`CLIPTokenizer`):
51
- Tokenizer of class
52
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
53
- feature_extractor ([`CLIPImageProcessor`]):
54
- Model that extracts features from generated images to be used as inputs for the `image_encoder`.
55
- image_encoder ([`CLIPVisionModelWithProjection`]):
56
- Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of
57
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection),
58
- specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
59
- text_proj ([`UnCLIPTextProjModel`]):
60
- Utility class to prepare and combine the embeddings before they are passed to the decoder.
61
- decoder ([`UNet2DConditionModel`]):
62
- The decoder to invert the image embedding into an image.
63
- super_res_first ([`UNet2DModel`]):
64
- Super resolution unet. Used in all but the last step of the super resolution diffusion process.
65
- super_res_last ([`UNet2DModel`]):
66
- Super resolution unet. Used in the last step of the super resolution diffusion process.
67
- decoder_scheduler ([`UnCLIPScheduler`]):
68
- Scheduler used in the decoder denoising process. Just a modified DDPMScheduler.
69
- super_res_scheduler ([`UnCLIPScheduler`]):
70
- Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler.
71
-
72
- """
73
-
74
- decoder: UNet2DConditionModel
75
- text_proj: UnCLIPTextProjModel
76
- text_encoder: CLIPTextModelWithProjection
77
- tokenizer: CLIPTokenizer
78
- feature_extractor: CLIPImageProcessor
79
- image_encoder: CLIPVisionModelWithProjection
80
- super_res_first: UNet2DModel
81
- super_res_last: UNet2DModel
82
-
83
- decoder_scheduler: UnCLIPScheduler
84
- super_res_scheduler: UnCLIPScheduler
85
-
86
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__
87
- def __init__(
88
- self,
89
- decoder: UNet2DConditionModel,
90
- text_encoder: CLIPTextModelWithProjection,
91
- tokenizer: CLIPTokenizer,
92
- text_proj: UnCLIPTextProjModel,
93
- feature_extractor: CLIPImageProcessor,
94
- image_encoder: CLIPVisionModelWithProjection,
95
- super_res_first: UNet2DModel,
96
- super_res_last: UNet2DModel,
97
- decoder_scheduler: UnCLIPScheduler,
98
- super_res_scheduler: UnCLIPScheduler,
99
- ):
100
- super().__init__()
101
-
102
- self.register_modules(
103
- decoder=decoder,
104
- text_encoder=text_encoder,
105
- tokenizer=tokenizer,
106
- text_proj=text_proj,
107
- feature_extractor=feature_extractor,
108
- image_encoder=image_encoder,
109
- super_res_first=super_res_first,
110
- super_res_last=super_res_last,
111
- decoder_scheduler=decoder_scheduler,
112
- super_res_scheduler=super_res_scheduler,
113
- )
114
-
115
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
116
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
117
- if latents is None:
118
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
119
- else:
120
- if latents.shape != shape:
121
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
122
- latents = latents.to(device)
123
-
124
- latents = latents * scheduler.init_noise_sigma
125
- return latents
126
-
127
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt
128
- def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
129
- batch_size = len(prompt) if isinstance(prompt, list) else 1
130
-
131
- # get prompt text embeddings
132
- text_inputs = self.tokenizer(
133
- prompt,
134
- padding="max_length",
135
- max_length=self.tokenizer.model_max_length,
136
- return_tensors="pt",
137
- )
138
- text_input_ids = text_inputs.input_ids
139
- text_mask = text_inputs.attention_mask.bool().to(device)
140
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
141
-
142
- prompt_embeds = text_encoder_output.text_embeds
143
- text_encoder_hidden_states = text_encoder_output.last_hidden_state
144
-
145
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
146
- text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
147
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
148
-
149
- if do_classifier_free_guidance:
150
- uncond_tokens = [""] * batch_size
151
-
152
- max_length = text_input_ids.shape[-1]
153
- uncond_input = self.tokenizer(
154
- uncond_tokens,
155
- padding="max_length",
156
- max_length=max_length,
157
- truncation=True,
158
- return_tensors="pt",
159
- )
160
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
161
- negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
162
-
163
- negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
164
- uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
165
-
166
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
167
-
168
- seq_len = negative_prompt_embeds.shape[1]
169
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
170
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
171
-
172
- seq_len = uncond_text_encoder_hidden_states.shape[1]
173
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
174
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
175
- batch_size * num_images_per_prompt, seq_len, -1
176
- )
177
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
178
-
179
- # done duplicates
180
-
181
- # For classifier free guidance, we need to do two forward passes.
182
- # Here we concatenate the unconditional and text embeddings into a single batch
183
- # to avoid doing two forward passes
184
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
185
- text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
186
-
187
- text_mask = torch.cat([uncond_text_mask, text_mask])
188
-
189
- return prompt_embeds, text_encoder_hidden_states, text_mask
190
-
191
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image
192
- def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None):
193
- dtype = next(self.image_encoder.parameters()).dtype
194
-
195
- if image_embeddings is None:
196
- if not isinstance(image, torch.Tensor):
197
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
198
-
199
- image = image.to(device=device, dtype=dtype)
200
- image_embeddings = self.image_encoder(image).image_embeds
201
-
202
- image_embeddings = image_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
203
-
204
- return image_embeddings
205
-
206
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.enable_sequential_cpu_offload
207
- def enable_sequential_cpu_offload(self, gpu_id=0):
208
- r"""
209
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
210
- models have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only
211
- when their specific submodule has its `forward` method called.
212
- """
213
- if is_accelerate_available():
214
- from accelerate import cpu_offload
215
- else:
216
- raise ImportError("Please install accelerate via `pip install accelerate`")
217
-
218
- device = torch.device(f"cuda:{gpu_id}")
219
-
220
- models = [
221
- self.decoder,
222
- self.text_proj,
223
- self.text_encoder,
224
- self.super_res_first,
225
- self.super_res_last,
226
- ]
227
- for cpu_offloaded_model in models:
228
- if cpu_offloaded_model is not None:
229
- cpu_offload(cpu_offloaded_model, device)
230
-
231
- @property
232
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device
233
- def _execution_device(self):
234
- r"""
235
- Returns the device on which the pipeline's models will be executed. After calling
236
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
237
- hooks.
238
- """
239
- if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"):
240
- return self.device
241
- for module in self.decoder.modules():
242
- if (
243
- hasattr(module, "_hf_hook")
244
- and hasattr(module._hf_hook, "execution_device")
245
- and module._hf_hook.execution_device is not None
246
- ):
247
- return torch.device(module._hf_hook.execution_device)
248
- return self.device
249
-
250
- @torch.no_grad()
251
- def __call__(
252
- self,
253
- image: Optional[Union[List[PIL.Image.Image], torch.FloatTensor]] = None,
254
- steps: int = 5,
255
- decoder_num_inference_steps: int = 25,
256
- super_res_num_inference_steps: int = 7,
257
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
258
- image_embeddings: Optional[torch.Tensor] = None,
259
- decoder_latents: Optional[torch.FloatTensor] = None,
260
- super_res_latents: Optional[torch.FloatTensor] = None,
261
- decoder_guidance_scale: float = 8.0,
262
- output_type: Optional[str] = "pil",
263
- return_dict: bool = True,
264
- ):
265
- """
266
- Function invoked when calling the pipeline for generation.
267
-
268
- Args:
269
- image (`List[PIL.Image.Image]` or `torch.FloatTensor`):
270
- The images to use for the image interpolation. Accepts either a list of exactly two PIL images, or a tensor that complies with the
271
- configuration of
272
- [this](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json)
273
- `CLIPImageProcessor` and has a size of two in the 0th dimension. Can be left as `None` only when `image_embeddings` is passed.
274
- steps (`int`, *optional*, defaults to 5):
275
- The number of interpolation images to generate.
276
- decoder_num_inference_steps (`int`, *optional*, defaults to 25):
277
- The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
278
- image at the expense of slower inference.
279
- super_res_num_inference_steps (`int`, *optional*, defaults to 7):
280
- The number of denoising steps for super resolution. More denoising steps usually lead to a higher
281
- quality image at the expense of slower inference.
282
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
283
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
284
- to make generation deterministic.
285
- image_embeddings (`torch.Tensor`, *optional*):
286
- Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings
287
- can be passed for tasks like image interpolation. `image` can then be left as `None`.
288
- decoder_latents (`torch.FloatTensor` of shape (batch size, channels, height, width), *optional*):
289
- Pre-generated noisy latents to be used as inputs for the decoder.
290
- super_res_latents (`torch.FloatTensor` of shape (batch size, channels, super res height, super res width), *optional*):
291
- Pre-generated noisy latents to be used as inputs for the super resolution.
292
- decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
293
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
294
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
295
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
296
- 1`. A higher guidance scale encourages the model to generate images closely linked to the text `prompt`,
297
- usually at the expense of lower image quality.
298
- output_type (`str`, *optional*, defaults to `"pil"`):
299
- The output format of the generated image. Choose between
300
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
301
- return_dict (`bool`, *optional*, defaults to `True`):
302
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
303
- """
304
-
305
- batch_size = steps
306
-
307
- device = self._execution_device
308
-
309
- if isinstance(image, List):
310
- if len(image) != 2:
311
- raise AssertionError(
312
- f"Expected 'image' List to be of size 2, but passed 'image' length is {len(image)}"
313
- )
314
- elif not (isinstance(image[0], PIL.Image.Image) and isinstance(image[1], PIL.Image.Image)):
315
- raise AssertionError(
316
- f"Expected 'image' List to contain PIL.Image.Image, but passed 'image' contents are {type(image[0])} and {type(image[1])}"
317
- )
318
- elif isinstance(image, torch.FloatTensor):
319
- if image.shape[0] != 2:
320
- raise AssertionError(
321
- f"Expected 'image' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image' size is {image.shape[0]}"
322
- )
323
- elif isinstance(image_embeddings, torch.Tensor):
324
- if image_embeddings.shape[0] != 2:
325
- raise AssertionError(
326
- f"Expected 'image_embeddings' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image_embeddings' shape is {image_embeddings.shape[0]}"
327
- )
328
- else:
329
- raise AssertionError(
330
- f"Expected 'image' or 'image_embeddings' to be not None with types List[PIL.Image] or Torch.FloatTensor respectively. Received {type(image)} and {type(image_embeddings)} repsectively"
331
- )
332
-
333
- original_image_embeddings = self._encode_image(
334
- image=image, device=device, num_images_per_prompt=1, image_embeddings=image_embeddings
335
- )
336
-
337
- image_embeddings = []
338
-
339
- for interp_step in torch.linspace(0, 1, steps):
340
- temp_image_embeddings = slerp(
341
- interp_step, original_image_embeddings[0], original_image_embeddings[1]
342
- ).unsqueeze(0)
343
- image_embeddings.append(temp_image_embeddings)
344
-
345
- image_embeddings = torch.cat(image_embeddings).to(device)
346
-
347
- do_classifier_free_guidance = decoder_guidance_scale > 1.0
348
-
349
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
350
- prompt=["" for i in range(steps)],
351
- device=device,
352
- num_images_per_prompt=1,
353
- do_classifier_free_guidance=do_classifier_free_guidance,
354
- )
355
-
356
- text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
357
- image_embeddings=image_embeddings,
358
- prompt_embeds=prompt_embeds,
359
- text_encoder_hidden_states=text_encoder_hidden_states,
360
- do_classifier_free_guidance=do_classifier_free_guidance,
361
- )
362
-
363
- if device.type == "mps":
364
- # HACK: MPS: There is a panic when padding bool tensors,
365
- # so cast to int tensor for the pad and back to bool afterwards
366
- text_mask = text_mask.type(torch.int)
367
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
368
- decoder_text_mask = decoder_text_mask.type(torch.bool)
369
- else:
370
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
371
-
372
- self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
373
- decoder_timesteps_tensor = self.decoder_scheduler.timesteps
374
-
375
- num_channels_latents = self.decoder.config.in_channels
376
- height = self.decoder.config.sample_size
377
- width = self.decoder.config.sample_size
378
-
379
- # Get the decoder latents for 1 step and then repeat the same tensor for the entire batch to keep same noise across all interpolation steps.
380
- decoder_latents = self.prepare_latents(
381
- (1, num_channels_latents, height, width),
382
- text_encoder_hidden_states.dtype,
383
- device,
384
- generator,
385
- decoder_latents,
386
- self.decoder_scheduler,
387
- )
388
- decoder_latents = decoder_latents.repeat((batch_size, 1, 1, 1))
389
-
390
- for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
391
- # expand the latents if we are doing classifier free guidance
392
- latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
393
-
394
- noise_pred = self.decoder(
395
- sample=latent_model_input,
396
- timestep=t,
397
- encoder_hidden_states=text_encoder_hidden_states,
398
- class_labels=additive_clip_time_embeddings,
399
- attention_mask=decoder_text_mask,
400
- ).sample
401
-
402
- if do_classifier_free_guidance:
403
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
404
- noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
405
- noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
406
- noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
407
- noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
408
-
409
- if i + 1 == decoder_timesteps_tensor.shape[0]:
410
- prev_timestep = None
411
- else:
412
- prev_timestep = decoder_timesteps_tensor[i + 1]
413
-
414
- # compute the previous noisy sample x_t -> x_t-1
415
- decoder_latents = self.decoder_scheduler.step(
416
- noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
417
- ).prev_sample
418
-
419
- decoder_latents = decoder_latents.clamp(-1, 1)
420
-
421
- image_small = decoder_latents
422
-
423
- # done decoder
424
-
425
- # super res
426
-
427
- self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
428
- super_res_timesteps_tensor = self.super_res_scheduler.timesteps
429
-
430
- channels = self.super_res_first.config.in_channels // 2
431
- height = self.super_res_first.config.sample_size
432
- width = self.super_res_first.config.sample_size
433
-
434
- super_res_latents = self.prepare_latents(
435
- (batch_size, channels, height, width),
436
- image_small.dtype,
437
- device,
438
- generator,
439
- super_res_latents,
440
- self.super_res_scheduler,
441
- )
442
-
443
- if device.type == "mps":
444
- # MPS does not support many interpolations
445
- image_upscaled = F.interpolate(image_small, size=[height, width])
446
- else:
447
- interpolate_antialias = {}
448
- if "antialias" in inspect.signature(F.interpolate).parameters:
449
- interpolate_antialias["antialias"] = True
450
-
451
- image_upscaled = F.interpolate(
452
- image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
453
- )
454
-
455
- for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
456
- # no classifier free guidance
457
-
458
- if i == super_res_timesteps_tensor.shape[0] - 1:
459
- unet = self.super_res_last
460
- else:
461
- unet = self.super_res_first
462
-
463
- latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
464
-
465
- noise_pred = unet(
466
- sample=latent_model_input,
467
- timestep=t,
468
- ).sample
469
-
470
- if i + 1 == super_res_timesteps_tensor.shape[0]:
471
- prev_timestep = None
472
- else:
473
- prev_timestep = super_res_timesteps_tensor[i + 1]
474
-
475
- # compute the previous noisy sample x_t -> x_t-1
476
- super_res_latents = self.super_res_scheduler.step(
477
- noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
478
- ).prev_sample
479
-
480
- image = super_res_latents
481
- # done super res
482
-
483
- # post processing
484
-
485
- image = image * 0.5 + 0.5
486
- image = image.clamp(0, 1)
487
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
488
-
489
- if output_type == "pil":
490
- image = self.numpy_to_pil(image)
491
-
492
- if not return_dict:
493
- return (image,)
494
-
495
- return ImagePipelineOutput(images=image)
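
Note: the pipeline above interpolates image embeddings with a `slerp` helper that is defined earlier in this community pipeline file (outside the excerpt shown here). As an illustration of spherical linear interpolation over torch tensors — a sketch, not necessarily the file's exact implementation — it behaves like this:

```python
import torch

def slerp(t, v0, v1, dot_threshold=0.9995):
    """Spherical linear interpolation between two embedding vectors.

    t is the interpolation weight in [0, 1]; v0 and v1 are 1-D tensors.
    Falls back to linear interpolation when the inputs are nearly collinear.
    """
    dot = torch.sum(v0 * v1) / (torch.norm(v0) * torch.norm(v1))
    if torch.abs(dot) > dot_threshold:
        return (1 - t) * v0 + t * v1  # lerp fallback: angle too small for stable slerp
    theta_0 = torch.acos(dot)   # angle between v0 and v1
    theta_t = theta_0 * t       # angle of the interpolated vector
    sin_theta_0 = torch.sin(theta_0)
    s0 = torch.sin(theta_0 - theta_t) / sin_theta_0
    s1 = torch.sin(theta_t) / sin_theta_0
    return s0 * v0 + s1 * v1
```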
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md DELETED
@@ -1,118 +0,0 @@
1
- # Dreambooth for the inpainting model
2
-
3
- This script was added by @thedarkzeno .
4
-
5
- Please note that this script is not actively maintained; you can, however, open an issue and tag @thedarkzeno or @patil-suraj.
6
-
7
- ```bash
8
- export MODEL_NAME="runwayml/stable-diffusion-inpainting"
9
- export INSTANCE_DIR="path-to-instance-images"
10
- export OUTPUT_DIR="path-to-save-model"
11
-
12
- accelerate launch train_dreambooth_inpaint.py \
13
- --pretrained_model_name_or_path=$MODEL_NAME \
14
- --instance_data_dir=$INSTANCE_DIR \
15
- --output_dir=$OUTPUT_DIR \
16
- --instance_prompt="a photo of sks dog" \
17
- --resolution=512 \
18
- --train_batch_size=1 \
19
- --gradient_accumulation_steps=1 \
20
- --learning_rate=5e-6 \
21
- --lr_scheduler="constant" \
22
- --lr_warmup_steps=0 \
23
- --max_train_steps=400
24
- ```
25
-
26
- ### Training with prior-preservation loss
27
-
28
- Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data.
29
- According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 images work well for most cases.
30
-
31
- ```bash
32
- export MODEL_NAME="runwayml/stable-diffusion-inpainting"
33
- export INSTANCE_DIR="path-to-instance-images"
34
- export CLASS_DIR="path-to-class-images"
35
- export OUTPUT_DIR="path-to-save-model"
36
-
37
- accelerate launch train_dreambooth_inpaint.py \
38
- --pretrained_model_name_or_path=$MODEL_NAME \
39
- --instance_data_dir=$INSTANCE_DIR \
40
- --class_data_dir=$CLASS_DIR \
41
- --output_dir=$OUTPUT_DIR \
42
- --with_prior_preservation --prior_loss_weight=1.0 \
43
- --instance_prompt="a photo of sks dog" \
44
- --class_prompt="a photo of dog" \
45
- --resolution=512 \
46
- --train_batch_size=1 \
47
- --gradient_accumulation_steps=1 \
48
- --learning_rate=5e-6 \
49
- --lr_scheduler="constant" \
50
- --lr_warmup_steps=0 \
51
- --num_class_images=200 \
52
- --max_train_steps=800
53
- ```
54
-
55
-
56
- ### Training with gradient checkpointing and 8-bit optimizer:
57
-
58
- With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it's possible to train DreamBooth on a 16GB GPU.
59
-
60
- To install `bitsandbytes`, please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation).
61
-
62
- ```bash
63
- export MODEL_NAME="runwayml/stable-diffusion-inpainting"
64
- export INSTANCE_DIR="path-to-instance-images"
65
- export CLASS_DIR="path-to-class-images"
66
- export OUTPUT_DIR="path-to-save-model"
67
-
68
- accelerate launch train_dreambooth_inpaint.py \
69
- --pretrained_model_name_or_path=$MODEL_NAME \
70
- --instance_data_dir=$INSTANCE_DIR \
71
- --class_data_dir=$CLASS_DIR \
72
- --output_dir=$OUTPUT_DIR \
73
- --with_prior_preservation --prior_loss_weight=1.0 \
74
- --instance_prompt="a photo of sks dog" \
75
- --class_prompt="a photo of dog" \
76
- --resolution=512 \
77
- --train_batch_size=1 \
78
- --gradient_accumulation_steps=2 --gradient_checkpointing \
79
- --use_8bit_adam \
80
- --learning_rate=5e-6 \
81
- --lr_scheduler="constant" \
82
- --lr_warmup_steps=0 \
83
- --num_class_images=200 \
84
- --max_train_steps=800
85
- ```
86
-
87
- ### Fine-tune text encoder with the UNet.
88
-
89
- The script also allows you to fine-tune the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces.
90
- Pass the `--train_text_encoder` argument to the script to enable training `text_encoder`.
91
-
92
- ___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.___
93
-
94
- ```bash
95
- export MODEL_NAME="runwayml/stable-diffusion-inpainting"
96
- export INSTANCE_DIR="path-to-instance-images"
97
- export CLASS_DIR="path-to-class-images"
98
- export OUTPUT_DIR="path-to-save-model"
99
-
100
- accelerate launch train_dreambooth_inpaint.py \
101
- --pretrained_model_name_or_path=$MODEL_NAME \
102
- --train_text_encoder \
103
- --instance_data_dir=$INSTANCE_DIR \
104
- --class_data_dir=$CLASS_DIR \
105
- --output_dir=$OUTPUT_DIR \
106
- --with_prior_preservation --prior_loss_weight=1.0 \
107
- --instance_prompt="a photo of sks dog" \
108
- --class_prompt="a photo of dog" \
109
- --resolution=512 \
110
- --train_batch_size=1 \
111
- --use_8bit_adam \
112
- --gradient_checkpointing \
113
- --learning_rate=2e-6 \
114
- --lr_scheduler="constant" \
115
- --lr_warmup_steps=0 \
116
- --num_class_images=200 \
117
- --max_train_steps=800
118
- ```
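
After any of the runs above, the fine-tuned weights saved to `--output_dir` can be loaded back with the regular inpainting pipeline. A minimal inference sketch follows; the image/mask file names and the output path are illustrative assumptions, not part of the training script:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load the DreamBooth-tuned inpainting weights saved by the training run.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "path-to-save-model", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("dog.png").convert("RGB").resize((512, 512))
mask_image = Image.open("dog_mask.png").convert("RGB").resize((512, 512))  # white = region to repaint

image = pipe(prompt="a photo of sks dog", image=init_image, mask_image=mask_image).images[0]
image.save("inpainted.png")
```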
spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md DELETED
@@ -1,25 +0,0 @@
1
- # NAS-FCOS: Fast Neural Architecture Search for Object Detection
2
-
3
- ## Introduction
4
-
5
- [ALGORITHM]
6
-
7
- ```latex
8
- @article{wang2019fcos,
9
- title={Nas-fcos: Fast neural architecture search for object detection},
10
- author={Wang, Ning and Gao, Yang and Chen, Hao and Wang, Peng and Tian, Zhi and Shen, Chunhua},
11
- journal={arXiv preprint arXiv:1906.04423},
12
- year={2019}
13
- }
14
- ```
15
-
16
- ## Results and Models
17
-
18
- | Head | Backbone | Style | GN-head | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
19
- |:---------:|:---------:|:-------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
20
- | NAS-FCOSHead | R-50 | caffe | Y | 1x | | | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520.log.json) |
21
- | FCOSHead | R-50 | caffe | Y | 1x | | | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521-7fdcbce0.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521.log.json) |
22
-
23
- **Notes:**
24
-
25
- - To be consistent with the author's implementation, we use 4 GPUs with 4 images/GPU.
spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py DELETED
@@ -1,390 +0,0 @@
1
- import argparse
2
- import copy
3
- import os
4
- import os.path as osp
5
-
6
- import mmcv
7
- import torch
8
- from mmcv import DictAction
9
- from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
10
- from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
11
- wrap_fp16_model)
12
- from pycocotools.coco import COCO
13
- from pycocotools.cocoeval import COCOeval
14
- from tools.analysis_tools.robustness_eval import get_results
15
-
16
- from mmdet import datasets
17
- from mmdet.apis import multi_gpu_test, set_random_seed, single_gpu_test
18
- from mmdet.core import eval_map
19
- from mmdet.datasets import build_dataloader, build_dataset
20
- from mmdet.models import build_detector
21
-
22
-
23
- def coco_eval_with_return(result_files,
24
- result_types,
25
- coco,
26
- max_dets=(100, 300, 1000)):
27
- for res_type in result_types:
28
- assert res_type in ['proposal', 'bbox', 'segm', 'keypoints']
29
-
30
- if mmcv.is_str(coco):
31
- coco = COCO(coco)
32
- assert isinstance(coco, COCO)
33
-
34
- eval_results = {}
35
- for res_type in result_types:
36
- result_file = result_files[res_type]
37
- assert result_file.endswith('.json')
38
-
39
- coco_dets = coco.loadRes(result_file)
40
- img_ids = coco.getImgIds()
41
- iou_type = 'bbox' if res_type == 'proposal' else res_type
42
- cocoEval = COCOeval(coco, coco_dets, iou_type)
43
- cocoEval.params.imgIds = img_ids
44
- if res_type == 'proposal':
45
- cocoEval.params.useCats = 0
46
- cocoEval.params.maxDets = list(max_dets)
47
- cocoEval.evaluate()
48
- cocoEval.accumulate()
49
- cocoEval.summarize()
50
- if res_type == 'segm' or res_type == 'bbox':
51
- metric_names = [
52
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10',
53
- 'AR100', 'ARs', 'ARm', 'ARl'
54
- ]
55
- eval_results[res_type] = {
56
- metric_names[i]: cocoEval.stats[i]
57
- for i in range(len(metric_names))
58
- }
59
- else:
60
- eval_results[res_type] = cocoEval.stats
61
-
62
- return eval_results
63
-
64
-
65
- def voc_eval_with_return(result_file,
66
- dataset,
67
- iou_thr=0.5,
68
- logger='print',
69
- only_ap=True):
70
- det_results = mmcv.load(result_file)
71
- annotations = [dataset.get_ann_info(i) for i in range(len(dataset))]
72
- if hasattr(dataset, 'year') and dataset.year == 2007:
73
- dataset_name = 'voc07'
74
- else:
75
- dataset_name = dataset.CLASSES
76
- mean_ap, eval_results = eval_map(
77
- det_results,
78
- annotations,
79
- scale_ranges=None,
80
- iou_thr=iou_thr,
81
- dataset=dataset_name,
82
- logger=logger)
83
-
84
- if only_ap:
85
- eval_results = [{
86
- 'ap': eval_results[i]['ap']
87
- } for i in range(len(eval_results))]
88
-
89
- return mean_ap, eval_results
90
-
91
-
92
- def parse_args():
93
- parser = argparse.ArgumentParser(description='MMDet test detector')
94
- parser.add_argument('config', help='test config file path')
95
- parser.add_argument('checkpoint', help='checkpoint file')
96
- parser.add_argument('--out', help='output result file')
97
- parser.add_argument(
98
- '--corruptions',
99
- type=str,
100
- nargs='+',
101
- default='benchmark',
102
- choices=[
103
- 'all', 'benchmark', 'noise', 'blur', 'weather', 'digital',
104
- 'holdout', 'None', 'gaussian_noise', 'shot_noise', 'impulse_noise',
105
- 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur', 'snow',
106
- 'frost', 'fog', 'brightness', 'contrast', 'elastic_transform',
107
- 'pixelate', 'jpeg_compression', 'speckle_noise', 'gaussian_blur',
108
- 'spatter', 'saturate'
109
- ],
110
- help='corruptions')
111
- parser.add_argument(
112
- '--severities',
113
- type=int,
114
- nargs='+',
115
- default=[0, 1, 2, 3, 4, 5],
116
- help='corruption severity levels')
117
- parser.add_argument(
118
- '--eval',
119
- type=str,
120
- nargs='+',
121
- choices=['proposal', 'proposal_fast', 'bbox', 'segm', 'keypoints'],
122
- help='eval types')
123
- parser.add_argument(
124
- '--iou-thr',
125
- type=float,
126
- default=0.5,
127
- help='IoU threshold for pascal voc evaluation')
128
- parser.add_argument(
129
- '--summaries',
130
- type=bool,
131
- default=False,
132
- help='Print summaries for every corruption and severity')
133
- parser.add_argument(
134
- '--workers', type=int, default=32, help='workers per gpu')
135
- parser.add_argument('--show', action='store_true', help='show results')
136
- parser.add_argument(
137
- '--show-dir', help='directory where painted images will be saved')
138
- parser.add_argument(
139
- '--show-score-thr',
140
- type=float,
141
- default=0.3,
142
- help='score threshold (default: 0.3)')
143
- parser.add_argument('--tmpdir', help='tmp dir for writing some results')
144
- parser.add_argument('--seed', type=int, default=None, help='random seed')
145
- parser.add_argument(
146
- '--launcher',
147
- choices=['none', 'pytorch', 'slurm', 'mpi'],
148
- default='none',
149
- help='job launcher')
150
- parser.add_argument('--local_rank', type=int, default=0)
151
- parser.add_argument(
152
- '--final-prints',
153
- type=str,
154
- nargs='+',
155
- choices=['P', 'mPC', 'rPC'],
156
- default='mPC',
157
- help='corruption benchmark metric to print at the end')
158
- parser.add_argument(
159
- '--final-prints-aggregate',
160
- type=str,
161
- choices=['all', 'benchmark'],
162
- default='benchmark',
163
- help='aggregate all results or only those for benchmark corruptions')
164
- parser.add_argument(
165
- '--cfg-options',
166
- nargs='+',
167
- action=DictAction,
168
- help='override some settings in the used config, the key-value pair '
169
- 'in xxx=yyy format will be merged into config file. If the value to '
170
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
171
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
172
- 'Note that the quotation marks are necessary and that no white space '
173
- 'is allowed.')
174
- args = parser.parse_args()
175
- if 'LOCAL_RANK' not in os.environ:
176
- os.environ['LOCAL_RANK'] = str(args.local_rank)
177
- return args
178
-
179
-
180
- def main():
181
- args = parse_args()
182
-
183
- assert args.out or args.show or args.show_dir, \
184
- ('Please specify at least one operation (save or show the results) '
185
- 'with the argument "--out", "--show" or "--show-dir"')
186
-
187
- if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
188
- raise ValueError('The output file must be a pkl file.')
189
-
190
- cfg = mmcv.Config.fromfile(args.config)
191
- if args.cfg_options is not None:
192
- cfg.merge_from_dict(args.cfg_options)
193
- # import modules from string list.
194
- if cfg.get('custom_imports', None):
195
- from mmcv.utils import import_modules_from_strings
196
- import_modules_from_strings(**cfg['custom_imports'])
197
- # set cudnn_benchmark
198
- if cfg.get('cudnn_benchmark', False):
199
- torch.backends.cudnn.benchmark = True
200
- cfg.model.pretrained = None
201
- cfg.data.test.test_mode = True
202
- if args.workers == 0:
203
- args.workers = cfg.data.workers_per_gpu
204
-
205
- # init distributed env first, since logger depends on the dist info.
206
- if args.launcher == 'none':
207
- distributed = False
208
- else:
209
- distributed = True
210
- init_dist(args.launcher, **cfg.dist_params)
211
-
212
- # set random seeds
213
- if args.seed is not None:
214
- set_random_seed(args.seed)
215
-
216
- if 'all' in args.corruptions:
217
- corruptions = [
218
- 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',
219
- 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',
220
- 'brightness', 'contrast', 'elastic_transform', 'pixelate',
221
- 'jpeg_compression', 'speckle_noise', 'gaussian_blur', 'spatter',
222
- 'saturate'
223
- ]
224
- elif 'benchmark' in args.corruptions:
225
- corruptions = [
226
- 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',
227
- 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',
228
- 'brightness', 'contrast', 'elastic_transform', 'pixelate',
229
- 'jpeg_compression'
230
- ]
231
- elif 'noise' in args.corruptions:
232
- corruptions = ['gaussian_noise', 'shot_noise', 'impulse_noise']
233
- elif 'blur' in args.corruptions:
234
- corruptions = [
235
- 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur'
236
- ]
237
- elif 'weather' in args.corruptions:
238
- corruptions = ['snow', 'frost', 'fog', 'brightness']
239
- elif 'digital' in args.corruptions:
240
- corruptions = [
241
- 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression'
242
- ]
243
- elif 'holdout' in args.corruptions:
244
- corruptions = ['speckle_noise', 'gaussian_blur', 'spatter', 'saturate']
245
- elif 'None' in args.corruptions:
246
- corruptions = ['None']
247
- args.severities = [0]
248
- else:
249
- corruptions = args.corruptions
250
-
251
- rank, _ = get_dist_info()
252
- aggregated_results = {}
253
- for corr_i, corruption in enumerate(corruptions):
254
- aggregated_results[corruption] = {}
255
- for sev_i, corruption_severity in enumerate(args.severities):
256
- # evaluate severity 0 (= no corruption) only once
257
- if corr_i > 0 and corruption_severity == 0:
258
- aggregated_results[corruption][0] = \
259
- aggregated_results[corruptions[0]][0]
260
- continue
261
-
262
- test_data_cfg = copy.deepcopy(cfg.data.test)
263
- # assign corruption and severity
264
- if corruption_severity > 0:
265
- corruption_trans = dict(
266
- type='Corrupt',
267
- corruption=corruption,
268
- severity=corruption_severity)
269
- # TODO: hard coded "1", we assume that the first step is
270
- # loading images, which needs to be fixed in the future
271
- test_data_cfg['pipeline'].insert(1, corruption_trans)
272
-
273
- # print info
274
- print(f'\nTesting {corruption} at severity {corruption_severity}')
275
-
276
- # build the dataloader
277
- # TODO: support multiple images per gpu
278
- # (only minor changes are needed)
279
- dataset = build_dataset(test_data_cfg)
280
- data_loader = build_dataloader(
281
- dataset,
282
- samples_per_gpu=1,
283
- workers_per_gpu=args.workers,
284
- dist=distributed,
285
- shuffle=False)
286
-
287
- # build the model and load checkpoint
288
- cfg.model.train_cfg = None
289
- model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
290
- fp16_cfg = cfg.get('fp16', None)
291
- if fp16_cfg is not None:
292
- wrap_fp16_model(model)
293
- checkpoint = load_checkpoint(
294
- model, args.checkpoint, map_location='cpu')
295
- # old versions did not save class info in checkpoints,
296
- # this workaround is for backward compatibility
297
- if 'CLASSES' in checkpoint.get('meta', {}):
298
- model.CLASSES = checkpoint['meta']['CLASSES']
299
- else:
300
- model.CLASSES = dataset.CLASSES
301
-
302
- if not distributed:
303
- model = MMDataParallel(model, device_ids=[0])
304
- show_dir = args.show_dir
305
- if show_dir is not None:
306
- show_dir = osp.join(show_dir, corruption)
307
- show_dir = osp.join(show_dir, str(corruption_severity))
308
- if not osp.exists(show_dir):
309
- os.makedirs(show_dir)
310
- outputs = single_gpu_test(model, data_loader, args.show,
311
- show_dir, args.show_score_thr)
312
- else:
313
- model = MMDistributedDataParallel(
314
- model.cuda(),
315
- device_ids=[torch.cuda.current_device()],
316
- broadcast_buffers=False)
317
- outputs = multi_gpu_test(model, data_loader, args.tmpdir)
318
-
319
- if args.out and rank == 0:
320
- eval_results_filename = (
321
- osp.splitext(args.out)[0] + '_results' +
322
- osp.splitext(args.out)[1])
323
- mmcv.dump(outputs, args.out)
324
- eval_types = args.eval
325
- if cfg.dataset_type == 'VOCDataset':
326
- if eval_types:
327
- for eval_type in eval_types:
328
- if eval_type == 'bbox':
329
- test_dataset = mmcv.runner.obj_from_dict(
330
- cfg.data.test, datasets)
331
- logger = 'print' if args.summaries else None
332
- mean_ap, eval_results = \
333
- voc_eval_with_return(
334
- args.out, test_dataset,
335
- args.iou_thr, logger)
336
- aggregated_results[corruption][
337
- corruption_severity] = eval_results
338
- else:
339
- print('\nOnly "bbox" evaluation is supported for pascal voc')
341
- else:
342
- if eval_types:
343
- print(f'Starting evaluate {" and ".join(eval_types)}')
344
- if eval_types == ['proposal_fast']:
345
- result_file = args.out
346
- else:
347
- if not isinstance(outputs[0], dict):
348
- result_files = dataset.results2json(
349
- outputs, args.out)
350
- else:
351
- for name in outputs[0]:
352
- print(f'\nEvaluating {name}')
353
- outputs_ = [out[name] for out in outputs]
354
- result_file = args.out + f'.{name}'
356
- result_files = dataset.results2json(
357
- outputs_, result_file)
358
- eval_results = coco_eval_with_return(
359
- result_files, eval_types, dataset.coco)
360
- aggregated_results[corruption][
361
- corruption_severity] = eval_results
362
- else:
363
- print('\nNo task was selected for evaluation;'
364
- '\nUse --eval to select a task')
365
-
366
- # save results after each evaluation
367
- mmcv.dump(aggregated_results, eval_results_filename)
368
-
369
- if rank == 0:
370
- # print final results
371
- print('\nAggregated results:')
372
- prints = args.final_prints
373
- aggregate = args.final_prints_aggregate
374
-
375
- if cfg.dataset_type == 'VOCDataset':
376
- get_results(
377
- eval_results_filename,
378
- dataset='voc',
379
- prints=prints,
380
- aggregate=aggregate)
381
- else:
382
- get_results(
383
- eval_results_filename,
384
- dataset='coco',
385
- prints=prints,
386
- aggregate=aggregate)
387
-
388
-
389
- if __name__ == '__main__':
390
- main()
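
For context, the `P`/`mPC`/`rPC` choices exposed by `--final-prints` follow the usual corruption-benchmark convention: `P` is performance on clean data, `mPC` is mean performance averaged over corruptions and severity levels, and `rPC = mPC / P`. A rough sketch of that aggregation, assuming a flat `{corruption: {severity: AP}}` mapping like the one this script accumulates (the real `get_results` handles the full nested evaluation dicts):

```python
import numpy as np

def summarize_robustness(results):
    """results: {corruption: {severity: AP}}, where severity 0 means no corruption."""
    clean_scores = [sev[0] for sev in results.values() if 0 in sev]
    p = float(np.mean(clean_scores))  # clean performance P
    corrupted = [ap for sev in results.values() for s, ap in sev.items() if s > 0]
    mpc = float(np.mean(corrupted))   # mean performance under corruption
    return {'P': p, 'mPC': mpc, 'rPC': mpc / p}
```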
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh DELETED
@@ -1,24 +0,0 @@
1
- #!/bin/bash
2
-
3
- cd "$(dirname "${BASH_SOURCE[0]}")"
4
-
5
- if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi
6
-
7
- # deactivate existing conda envs as needed to avoid conflicts
8
- { conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null
9
-
10
- # config
11
- CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda"
12
- INSTALL_ENV_DIR="$(pwd)/installer_files/env"
13
-
14
- # environment isolation
15
- export PYTHONNOUSERSITE=1
16
- unset PYTHONPATH
17
- unset PYTHONHOME
18
- export CUDA_PATH="$INSTALL_ENV_DIR"
19
- export CUDA_HOME="$CUDA_PATH"
20
-
21
- # activate env
22
- source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh"
23
- conda activate "$INSTALL_ENV_DIR"
24
- exec bash --norc
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py DELETED
@@ -1,107 +0,0 @@
1
- # Copyright (c) OpenMMLab. All rights reserved.
2
- from functools import partial
3
-
4
- import torch
5
-
6
- TORCH_VERSION = torch.__version__
7
-
8
-
9
- def is_rocm_pytorch() -> bool:
10
- is_rocm = False
11
- if TORCH_VERSION != 'parrots':
12
- try:
13
- from torch.utils.cpp_extension import ROCM_HOME
14
- is_rocm = True if ((torch.version.hip is not None) and
15
- (ROCM_HOME is not None)) else False
16
- except ImportError:
17
- pass
18
- return is_rocm
19
-
20
-
21
- def _get_cuda_home():
22
- if TORCH_VERSION == 'parrots':
23
- from parrots.utils.build_extension import CUDA_HOME
24
- else:
25
- if is_rocm_pytorch():
26
- from torch.utils.cpp_extension import ROCM_HOME
27
- CUDA_HOME = ROCM_HOME
28
- else:
29
- from torch.utils.cpp_extension import CUDA_HOME
30
- return CUDA_HOME
31
-
32
-
33
- def get_build_config():
34
- if TORCH_VERSION == 'parrots':
35
- from parrots.config import get_build_info
36
- return get_build_info()
37
- else:
38
- return torch.__config__.show()
39
-
40
-
41
- def _get_conv():
42
- if TORCH_VERSION == 'parrots':
43
- from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin
44
- else:
45
- from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin
46
- return _ConvNd, _ConvTransposeMixin
47
-
48
-
49
- def _get_dataloader():
50
- if TORCH_VERSION == 'parrots':
51
- from torch.utils.data import DataLoader, PoolDataLoader
52
- else:
53
- from torch.utils.data import DataLoader
54
- PoolDataLoader = DataLoader
55
- return DataLoader, PoolDataLoader
56
-
57
-
58
- def _get_extension():
59
- if TORCH_VERSION == 'parrots':
60
- from parrots.utils.build_extension import BuildExtension, Extension
61
- CppExtension = partial(Extension, cuda=False)
62
- CUDAExtension = partial(Extension, cuda=True)
63
- else:
64
- from torch.utils.cpp_extension import (BuildExtension, CppExtension,
65
- CUDAExtension)
66
- return BuildExtension, CppExtension, CUDAExtension
67
-
68
-
69
- def _get_pool():
70
- if TORCH_VERSION == 'parrots':
71
- from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd,
72
- _AdaptiveMaxPoolNd, _AvgPoolNd,
73
- _MaxPoolNd)
74
- else:
75
- from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd,
76
- _AdaptiveMaxPoolNd, _AvgPoolNd,
77
- _MaxPoolNd)
78
- return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd
79
-
80
-
81
- def _get_norm():
82
- if TORCH_VERSION == 'parrots':
83
- from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm
84
- SyncBatchNorm_ = torch.nn.SyncBatchNorm2d
85
- else:
86
- from torch.nn.modules.instancenorm import _InstanceNorm
87
- from torch.nn.modules.batchnorm import _BatchNorm
88
- SyncBatchNorm_ = torch.nn.SyncBatchNorm
89
- return _BatchNorm, _InstanceNorm, SyncBatchNorm_
90
-
91
-
92
- _ConvNd, _ConvTransposeMixin = _get_conv()
93
- DataLoader, PoolDataLoader = _get_dataloader()
94
- BuildExtension, CppExtension, CUDAExtension = _get_extension()
95
- _BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm()
96
- _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool()
97
-
98
-
99
- class SyncBatchNorm(SyncBatchNorm_):
100
-
101
- def _check_input_dim(self, input):
102
- if TORCH_VERSION == 'parrots':
103
- if input.dim() < 2:
104
- raise ValueError(
105
- f'expected at least 2D input (got {input.dim()}D input)')
106
- else:
107
- super()._check_input_dim(input)
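
A brief usage sketch for this wrapper (illustrative, not taken from the file): downstream mmcv code imports the backend-neutral symbols instead of reaching into `torch` or `parrots` directly, so the same call site works on either backend. The helper name below is hypothetical.

```python
from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm, _InstanceNorm

def is_norm(module):
    # _BatchNorm/_InstanceNorm resolve to the right base classes for the
    # active backend, so this check is backend-agnostic.
    return isinstance(module, (_BatchNorm, _InstanceNorm))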
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py DELETED
@@ -1,27 +0,0 @@
1
- import os.path as osp
2
-
3
- from .builder import DATASETS
4
- from .custom import CustomDataset
5
-
6
-
7
- @DATASETS.register_module()
8
- class DRIVEDataset(CustomDataset):
9
- """DRIVE dataset.
10
-
11
- In segmentation map annotation for DRIVE, 0 stands for background, which is
12
- included in 2 categories. ``reduce_zero_label`` is fixed to False. The
13
- ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to
14
- '_manual1.png'.
15
- """
16
-
17
- CLASSES = ('background', 'vessel')
18
-
19
- PALETTE = [[120, 120, 120], [6, 230, 230]]
20
-
21
- def __init__(self, **kwargs):
22
- super(DRIVEDataset, self).__init__(
23
- img_suffix='.png',
24
- seg_map_suffix='_manual1.png',
25
- reduce_zero_label=False,
26
- **kwargs)
27
- assert osp.exists(self.img_dir)
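
For reference, a segmentation config would typically point the mmseg registry at this class. A hedged sketch with illustrative paths (the directory layout and pipeline below are assumptions, not part of this file):

```python
# The registry resolves type='DRIVEDataset' to the class above.
data = dict(
    train=dict(
        type='DRIVEDataset',
        data_root='data/DRIVE',
        img_dir='images/training',
        ann_dir='annotations/training',
        pipeline=[],  # fill in the usual mmseg training pipeline
    ))
```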
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py DELETED
@@ -1,916 +0,0 @@
1
- import os
2
- import math
3
- import random
4
- import numpy as np
5
- import torch
6
- import cv2
7
- from torchvision.utils import make_grid
8
- from datetime import datetime
9
- try:  # `plt` is needed by imshow() and surf() below; keep the import optional
-     import matplotlib.pyplot as plt  # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
- except ImportError:
-     plt = None
10
-
11
-
12
- os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
13
-
14
-
15
- '''
16
- # --------------------------------------------
17
- # Kai Zhang (github: https://github.com/cszn)
18
- # 03/Mar/2019
19
- # --------------------------------------------
20
- # https://github.com/twhui/SRGAN-pyTorch
21
- # https://github.com/xinntao/BasicSR
22
- # --------------------------------------------
23
- '''
24
-
25
-
26
- IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
27
-
28
-
29
- def is_image_file(filename):
30
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
31
-
32
-
33
- def get_timestamp():
34
- return datetime.now().strftime('%y%m%d-%H%M%S')
35
-
36
-
37
- def imshow(x, title=None, cbar=False, figsize=None):
38
- plt.figure(figsize=figsize)
39
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
40
- if title:
41
- plt.title(title)
42
- if cbar:
43
- plt.colorbar()
44
- plt.show()
45
-
46
-
47
- def surf(Z, cmap='rainbow', figsize=None):
48
- plt.figure(figsize=figsize)
49
- ax3 = plt.axes(projection='3d')
50
-
51
- w, h = Z.shape[:2]
52
- xx = np.arange(0,w,1)
53
- yy = np.arange(0,h,1)
54
- X, Y = np.meshgrid(xx, yy)
55
- ax3.plot_surface(X,Y,Z,cmap=cmap)
56
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
57
- plt.show()
58
-
59
-
60
- '''
61
- # --------------------------------------------
62
- # get image pathes
63
- # --------------------------------------------
64
- '''
65
-
66
-
67
- def get_image_paths(dataroot):
68
- paths = None # return None if dataroot is None
69
- if dataroot is not None:
70
- paths = sorted(_get_paths_from_images(dataroot))
71
- return paths
72
-
73
-
74
- def _get_paths_from_images(path):
75
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
76
- images = []
77
- for dirpath, _, fnames in sorted(os.walk(path)):
78
- for fname in sorted(fnames):
79
- if is_image_file(fname):
80
- img_path = os.path.join(dirpath, fname)
81
- images.append(img_path)
82
- assert images, '{:s} has no valid image file'.format(path)
83
- return images
84
-
85
-
86
- '''
87
- # --------------------------------------------
88
- # split large images into small images
89
- # --------------------------------------------
90
- '''
91
-
92
-
93
- def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
94
- w, h = img.shape[:2]
95
- patches = []
96
- if w > p_max and h > p_max:
97
- w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int))
98
- h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int))
99
- w1.append(w-p_size)
100
- h1.append(h-p_size)
101
- # print(w1)
102
- # print(h1)
103
- for i in w1:
104
- for j in h1:
105
- patches.append(img[i:i+p_size, j:j+p_size,:])
106
- else:
107
- patches.append(img)
108
-
109
- return patches
110
-
111
-
112
- def imssave(imgs, img_path):
113
- """
114
- imgs: list, N images of size WxHxC
115
- """
116
- img_name, ext = os.path.splitext(os.path.basename(img_path))
117
-
118
- for i, img in enumerate(imgs):
119
- if img.ndim == 3:
120
- img = img[:, :, [2, 1, 0]]
121
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
122
- cv2.imwrite(new_path, img)
123
-
124
-
125
- def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
126
- """
127
- split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),
128
- and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)
129
- will be split.
130
- Args:
131
- original_dataroot:
132
- taget_dataroot:
133
- p_size: size of small images
134
- p_overlap: overlap between adjacent patches; the patch size used in training is a good choice
135
- p_max: images with smaller size than (p_max)x(p_max) keep unchanged.
136
- """
137
- paths = get_image_paths(original_dataroot)
138
- for img_path in paths:
139
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
140
- img = imread_uint(img_path, n_channels=n_channels)
141
- patches = patches_from_image(img, p_size, p_overlap, p_max)
142
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
143
- #if original_dataroot == taget_dataroot:
144
- #del img_path
145
-
146
- '''
147
- # --------------------------------------------
148
- # makedir
149
- # --------------------------------------------
150
- '''
151
-
152
-
153
- def mkdir(path):
154
- if not os.path.exists(path):
155
- os.makedirs(path)
156
-
157
-
158
- def mkdirs(paths):
159
- if isinstance(paths, str):
160
- mkdir(paths)
161
- else:
162
- for path in paths:
163
- mkdir(path)
164
-
165
-
166
- def mkdir_and_rename(path):
167
- if os.path.exists(path):
168
- new_name = path + '_archived_' + get_timestamp()
169
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
170
- os.rename(path, new_name)
171
- os.makedirs(path)
172
-
173
-
174
- '''
175
- # --------------------------------------------
176
- # read image from path
177
- # opencv is fast, but read BGR numpy image
178
- # --------------------------------------------
179
- '''
180
-
181
-
182
- # --------------------------------------------
183
- # get uint8 image of size HxWxn_channels (RGB)
184
- # --------------------------------------------
185
- def imread_uint(path, n_channels=3):
186
- # input: path
187
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
188
- if n_channels == 1:
189
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
190
- img = np.expand_dims(img, axis=2) # HxWx1
191
- elif n_channels == 3:
192
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
193
- if img.ndim == 2:
194
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
195
- else:
196
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
197
- return img
198
-
199
-
200
- # --------------------------------------------
201
- # matlab's imwrite
202
- # --------------------------------------------
203
- def imsave(img, img_path):
204
- img = np.squeeze(img)
205
- if img.ndim == 3:
206
- img = img[:, :, [2, 1, 0]]
207
- cv2.imwrite(img_path, img)
208
-
209
- def imwrite(img, img_path):
210
- img = np.squeeze(img)
211
- if img.ndim == 3:
212
- img = img[:, :, [2, 1, 0]]
213
- cv2.imwrite(img_path, img)
214
-
215
-
216
-
217
- # --------------------------------------------
218
- # get single image of size HxWxn_channels (BGR)
219
- # --------------------------------------------
220
- def read_img(path):
221
- # read image by cv2
222
- # return: Numpy float32, HWC, BGR, [0,1]
223
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
224
- img = img.astype(np.float32) / 255.
225
- if img.ndim == 2:
226
- img = np.expand_dims(img, axis=2)
227
- # some images have 4 channels
228
- if img.shape[2] > 3:
229
- img = img[:, :, :3]
230
- return img
231
-
232
-
233
- '''
234
- # --------------------------------------------
235
- # image format conversion
236
- # --------------------------------------------
237
- # numpy(single) <---> numpy(uint)
238
- # numpy(single) <---> tensor
239
- # numpy(uint) <---> tensor
240
- # --------------------------------------------
241
- '''
242
-
243
-
244
- # --------------------------------------------
245
- # numpy(single) [0, 1] <---> numpy(uint)
246
- # --------------------------------------------
247
-
248
-
249
- def uint2single(img):
250
-
251
- return np.float32(img/255.)
252
-
253
-
254
- def single2uint(img):
255
-
256
- return np.uint8((img.clip(0, 1)*255.).round())
257
-
258
-
259
- def uint162single(img):
260
-
261
- return np.float32(img/65535.)
262
-
263
-
264
- def single2uint16(img):
265
-
266
- return np.uint16((img.clip(0, 1)*65535.).round())
267
-
268
-
269
- # --------------------------------------------
270
- # numpy(uint) (HxWxC or HxW) <---> tensor
271
- # --------------------------------------------
272
-
273
-
274
- # convert uint to 4-dimensional torch tensor
275
- def uint2tensor4(img):
276
- if img.ndim == 2:
277
- img = np.expand_dims(img, axis=2)
278
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
279
-
280
-
281
- # convert uint to 3-dimensional torch tensor
282
- def uint2tensor3(img):
283
- if img.ndim == 2:
284
- img = np.expand_dims(img, axis=2)
285
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
286
-
287
-
288
- # convert 2/3/4-dimensional torch tensor to uint
289
- def tensor2uint(img):
290
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
291
- if img.ndim == 3:
292
- img = np.transpose(img, (1, 2, 0))
293
- return np.uint8((img*255.0).round())
294
-
295
-
296
- # --------------------------------------------
297
- # numpy(single) (HxWxC) <---> tensor
298
- # --------------------------------------------
299
-
300
-
301
- # convert single (HxWxC) to 3-dimensional torch tensor
302
- def single2tensor3(img):
303
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
304
-
305
-
306
- # convert single (HxWxC) to 4-dimensional torch tensor
307
- def single2tensor4(img):
308
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
309
-
310
-
311
- # convert torch tensor to single
312
- def tensor2single(img):
313
- img = img.data.squeeze().float().cpu().numpy()
314
- if img.ndim == 3:
315
- img = np.transpose(img, (1, 2, 0))
316
-
317
- return img
318
-
319
- # convert torch tensor to single
320
- def tensor2single3(img):
321
- img = img.data.squeeze().float().cpu().numpy()
322
- if img.ndim == 3:
323
- img = np.transpose(img, (1, 2, 0))
324
- elif img.ndim == 2:
325
- img = np.expand_dims(img, axis=2)
326
- return img
327
-
328
-
329
- def single2tensor5(img):
330
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
331
-
332
-
333
- def single32tensor5(img):
334
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
335
-
336
-
337
- def single42tensor4(img):
338
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
339
-
340
-
341
- # from skimage.io import imread, imsave
342
- def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
343
- '''
344
- Converts a torch Tensor into an image Numpy array of BGR channel order
345
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
346
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
347
- '''
348
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
349
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
350
- n_dim = tensor.dim()
351
- if n_dim == 4:
352
- n_img = len(tensor)
353
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
354
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
355
- elif n_dim == 3:
356
- img_np = tensor.numpy()
357
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
358
- elif n_dim == 2:
359
- img_np = tensor.numpy()
360
- else:
361
- raise TypeError(
362
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
363
- if out_type == np.uint8:
364
- img_np = (img_np * 255.0).round()
365
- # Important. Unlike matlab, numpy.uint8() WILL NOT round by default.
366
- return img_np.astype(out_type)
367
-
368
-
369
- '''
370
- # --------------------------------------------
371
- # Augmentation, flip and/or rotate
372
- # --------------------------------------------
373
- # The following two are enough.
374
- # (1) augment_img: numpy image of WxHxC or WxH
375
- # (2) augment_img_tensor4: tensor image 1xCxWxH
376
- # --------------------------------------------
377
- '''
378
-
379
-
380
- def augment_img(img, mode=0):
381
- '''Kai Zhang (github: https://github.com/cszn)
382
- '''
383
- if mode == 0:
384
- return img
385
- elif mode == 1:
386
- return np.flipud(np.rot90(img))
387
- elif mode == 2:
388
- return np.flipud(img)
389
- elif mode == 3:
390
- return np.rot90(img, k=3)
391
- elif mode == 4:
392
- return np.flipud(np.rot90(img, k=2))
393
- elif mode == 5:
394
- return np.rot90(img)
395
- elif mode == 6:
396
- return np.rot90(img, k=2)
397
- elif mode == 7:
398
- return np.flipud(np.rot90(img, k=3))
399
-
400
-
401
- def augment_img_tensor4(img, mode=0):
402
- '''Kai Zhang (github: https://github.com/cszn)
403
- '''
404
- if mode == 0:
405
- return img
406
- elif mode == 1:
407
- return img.rot90(1, [2, 3]).flip([2])
408
- elif mode == 2:
409
- return img.flip([2])
410
- elif mode == 3:
411
- return img.rot90(3, [2, 3])
412
- elif mode == 4:
413
- return img.rot90(2, [2, 3]).flip([2])
414
- elif mode == 5:
415
- return img.rot90(1, [2, 3])
416
- elif mode == 6:
417
- return img.rot90(2, [2, 3])
418
- elif mode == 7:
419
- return img.rot90(3, [2, 3]).flip([2])
420
-
421
-
422
- def augment_img_tensor(img, mode=0):
423
- '''Kai Zhang (github: https://github.com/cszn)
424
- '''
425
- img_size = img.size()
426
- img_np = img.data.cpu().numpy()
427
- if len(img_size) == 3:
428
- img_np = np.transpose(img_np, (1, 2, 0))
429
- elif len(img_size) == 4:
430
- img_np = np.transpose(img_np, (2, 3, 1, 0))
431
- img_np = augment_img(img_np, mode=mode)
432
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
433
- if len(img_size) == 3:
434
- img_tensor = img_tensor.permute(2, 0, 1)
435
- elif len(img_size) == 4:
436
- img_tensor = img_tensor.permute(3, 2, 0, 1)
437
-
438
- return img_tensor.type_as(img)
439
-
440
-
441
- def augment_img_np3(img, mode=0):
442
- if mode == 0:
443
- return img
444
- elif mode == 1:
445
- return img.transpose(1, 0, 2)
446
- elif mode == 2:
447
- return img[::-1, :, :]
448
- elif mode == 3:
449
- img = img[::-1, :, :]
450
- img = img.transpose(1, 0, 2)
451
- return img
452
- elif mode == 4:
453
- return img[:, ::-1, :]
454
- elif mode == 5:
455
- img = img[:, ::-1, :]
456
- img = img.transpose(1, 0, 2)
457
- return img
458
- elif mode == 6:
459
- img = img[:, ::-1, :]
460
- img = img[::-1, :, :]
461
- return img
462
- elif mode == 7:
463
- img = img[:, ::-1, :]
464
- img = img[::-1, :, :]
465
- img = img.transpose(1, 0, 2)
466
- return img
467
-
468
-
469
- def augment_imgs(img_list, hflip=True, rot=True):
470
- # horizontal flip OR rotate
471
- hflip = hflip and random.random() < 0.5
472
- vflip = rot and random.random() < 0.5
473
- rot90 = rot and random.random() < 0.5
474
-
475
- def _augment(img):
476
- if hflip:
477
- img = img[:, ::-1, :]
478
- if vflip:
479
- img = img[::-1, :, :]
480
- if rot90:
481
- img = img.transpose(1, 0, 2)
482
- return img
483
-
484
- return [_augment(img) for img in img_list]
485
-
486
-
487
- '''
488
- # --------------------------------------------
489
- # modcrop and shave
490
- # --------------------------------------------
491
- '''
492
-
493
-
494
- def modcrop(img_in, scale):
495
- # img_in: Numpy, HWC or HW
496
- img = np.copy(img_in)
497
- if img.ndim == 2:
498
- H, W = img.shape
499
- H_r, W_r = H % scale, W % scale
500
- img = img[:H - H_r, :W - W_r]
501
- elif img.ndim == 3:
502
- H, W, C = img.shape
503
- H_r, W_r = H % scale, W % scale
504
- img = img[:H - H_r, :W - W_r, :]
505
- else:
506
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
507
- return img
508
-
509
-
510
- def shave(img_in, border=0):
511
- # img_in: Numpy, HWC or HW
512
- img = np.copy(img_in)
513
- h, w = img.shape[:2]
514
- img = img[border:h-border, border:w-border]
515
- return img
516
-
-
- '''
- # --------------------------------------------
- # image processing process on numpy image
- # channel_convert(in_c, tar_type, img_list):
- # rgb2ycbcr(img, only_y=True):
- # bgr2ycbcr(img, only_y=True):
- # ycbcr2rgb(img):
- # --------------------------------------------
- '''
-
-
- def rgb2ycbcr(img, only_y=True):
-     '''same as matlab rgb2ycbcr
-     only_y: only return Y channel
-     Input:
-         uint8, [0, 255]
-         float, [0, 1]
-     '''
-     in_img_type = img.dtype
-     img = img.astype(np.float32)  # astype returns a copy; assigning it back avoids mutating the caller's array
-     if in_img_type != np.uint8:
-         img *= 255.
-     # convert
-     if only_y:
-         rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
-     else:
-         rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
-                               [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
-     if in_img_type == np.uint8:
-         rlt = rlt.round()
-     else:
-         rlt /= 255.
-     return rlt.astype(in_img_type)
-
-
- def ycbcr2rgb(img):
-     '''same as matlab ycbcr2rgb
-     Input:
-         uint8, [0, 255]
-         float, [0, 1]
-     '''
-     in_img_type = img.dtype
-     img = img.astype(np.float32)  # same fix as in rgb2ycbcr
-     if in_img_type != np.uint8:
-         img *= 255.
-     # convert
-     rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
-                           [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
-     if in_img_type == np.uint8:
-         rlt = rlt.round()
-     else:
-         rlt /= 255.
-     return rlt.astype(in_img_type)
-
-
- def bgr2ycbcr(img, only_y=True):
-     '''bgr version of rgb2ycbcr
-     only_y: only return Y channel
-     Input:
-         uint8, [0, 255]
-         float, [0, 1]
-     '''
-     in_img_type = img.dtype
-     img = img.astype(np.float32)  # same fix as in rgb2ycbcr
-     if in_img_type != np.uint8:
-         img *= 255.
-     # convert
-     if only_y:
-         rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
-     else:
-         rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
-                               [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
-     if in_img_type == np.uint8:
-         rlt = rlt.round()
-     else:
-         rlt /= 255.
-     return rlt.astype(in_img_type)
-
-
- def channel_convert(in_c, tar_type, img_list):
-     # conversion among BGR, gray and y
-     if in_c == 3 and tar_type == 'gray':  # BGR to gray
-         gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
-         return [np.expand_dims(img, axis=2) for img in gray_list]
-     elif in_c == 3 and tar_type == 'y':  # BGR to y
-         y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
-         return [np.expand_dims(img, axis=2) for img in y_list]
-     elif in_c == 1 and tar_type == 'RGB':  # gray/y to BGR
-         return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
-     else:
-         return img_list
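Note: a minimal round-trip sketch of the colour-conversion helpers (not part of the deleted file; assumes the `astype` fix above):

    import numpy as np

    rgb = np.random.rand(32, 32, 3).astype(np.float32)   # float image in [0, 1]
    ycbcr = rgb2ycbcr(rgb, only_y=False)                 # YCbCr, still float in [0, 1]
    rgb_back = ycbcr2rgb(ycbcr)                          # approximate inverse
    print(np.abs(rgb - rgb_back).max())                  # small round-trip error expected
    y = bgr2ycbcr(rgb[..., ::-1], only_y=True)           # Y channel from a BGR view of the same data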
-
-
- '''
- # --------------------------------------------
- # metric, PSNR and SSIM
- # --------------------------------------------
- '''
-
-
- # --------------------------------------------
- # PSNR
- # --------------------------------------------
- def calculate_psnr(img1, img2, border=0):
-     # img1 and img2 have range [0, 255]
-     #img1 = img1.squeeze()
-     #img2 = img2.squeeze()
-     if not img1.shape == img2.shape:
-         raise ValueError('Input images must have the same dimensions.')
-     h, w = img1.shape[:2]
-     img1 = img1[border:h-border, border:w-border]
-     img2 = img2[border:h-border, border:w-border]
-
-     img1 = img1.astype(np.float64)
-     img2 = img2.astype(np.float64)
-     mse = np.mean((img1 - img2)**2)
-     if mse == 0:
-         return float('inf')
-     return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
- # --------------------------------------------
- # SSIM
- # --------------------------------------------
- def calculate_ssim(img1, img2, border=0):
-     '''calculate SSIM
-     the same outputs as MATLAB's
-     img1, img2: [0, 255]
-     '''
-     #img1 = img1.squeeze()
-     #img2 = img2.squeeze()
-     if not img1.shape == img2.shape:
-         raise ValueError('Input images must have the same dimensions.')
-     h, w = img1.shape[:2]
-     img1 = img1[border:h-border, border:w-border]
-     img2 = img2[border:h-border, border:w-border]
-
-     if img1.ndim == 2:
-         return ssim(img1, img2)
-     elif img1.ndim == 3:
-         if img1.shape[2] == 3:
-             ssims = []
-             for i in range(3):
-                 ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
-             return np.array(ssims).mean()
-         elif img1.shape[2] == 1:
-             return ssim(np.squeeze(img1), np.squeeze(img2))
-     else:
-         raise ValueError('Wrong input image dimensions.')
-
-
- def ssim(img1, img2):
-     C1 = (0.01 * 255)**2
-     C2 = (0.03 * 255)**2
-
-     img1 = img1.astype(np.float64)
-     img2 = img2.astype(np.float64)
-     kernel = cv2.getGaussianKernel(11, 1.5)
-     window = np.outer(kernel, kernel.transpose())
-
-     mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]  # valid
-     mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
-     mu1_sq = mu1**2
-     mu2_sq = mu2**2
-     mu1_mu2 = mu1 * mu2
-     sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
-     sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
-     sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
-     ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
-                                                             (sigma1_sq + sigma2_sq + C2))
-     return ssim_map.mean()
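Note: both metrics expect images in [0, 255]; `border` trims scale-sensitive edge pixels before comparison. A minimal sketch (not part of the deleted file):

    import numpy as np

    gt = np.random.randint(0, 256, (128, 128, 3)).astype(np.float64)   # reference in [0, 255]
    noisy = np.clip(gt + np.random.normal(0, 5, gt.shape), 0, 255)     # distorted copy
    print(calculate_psnr(gt, noisy, border=4))   # in dB; higher is better, inf for identical inputs
    print(calculate_ssim(gt, noisy, border=4))   # in (0, 1]; 1.0 for identical inputs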
-
-
- '''
- # --------------------------------------------
- # matlab's bicubic imresize (numpy and torch) [0, 1]
- # --------------------------------------------
- '''
-
-
- # matlab 'imresize' function, now only support 'bicubic'
- def cubic(x):
-     absx = torch.abs(x)
-     absx2 = absx**2
-     absx3 = absx**3
-     return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
-         (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
-
- def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
-     if (scale < 1) and (antialiasing):
-         # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width
-         kernel_width = kernel_width / scale
-
-     # Output-space coordinates
-     x = torch.linspace(1, out_length, out_length)
-
-     # Input-space coordinates. Calculate the inverse mapping such that 0.5
-     # in output space maps to 0.5 in input space, and 0.5+scale in output
-     # space maps to 1.5 in input space.
-     u = x / scale + 0.5 * (1 - 1 / scale)
-
-     # What is the left-most pixel that can be involved in the computation?
-     left = torch.floor(u - kernel_width / 2)
-
-     # What is the maximum number of pixels that can be involved in the
-     # computation?  Note: it's OK to use an extra pixel here; if the
-     # corresponding weights are all zero, it will be eliminated at the end
-     # of this function.
-     P = math.ceil(kernel_width) + 2
-
-     # The indices of the input pixels involved in computing the k-th output
-     # pixel are in row k of the indices matrix.
-     indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
-         1, P).expand(out_length, P)
-
-     # The weights used to compute the k-th output pixel are in row k of the
-     # weights matrix.
-     distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
-     # apply cubic kernel
-     if (scale < 1) and (antialiasing):
-         weights = scale * cubic(distance_to_center * scale)
-     else:
-         weights = cubic(distance_to_center)
-     # Normalize the weights matrix so that each row sums to 1.
-     weights_sum = torch.sum(weights, 1).view(out_length, 1)
-     weights = weights / weights_sum.expand(out_length, P)
-
-     # If a column in weights is all zero, get rid of it. only consider the first and last column.
-     weights_zero_tmp = torch.sum((weights == 0), 0)
-     if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
-         indices = indices.narrow(1, 1, P - 2)
-         weights = weights.narrow(1, 1, P - 2)
-     if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
-         indices = indices.narrow(1, 0, P - 2)
-         weights = weights.narrow(1, 0, P - 2)
-     weights = weights.contiguous()
-     indices = indices.contiguous()
-     sym_len_s = -indices.min() + 1
-     sym_len_e = indices.max() - in_length
-     indices = indices + sym_len_s - 1
-     return weights, indices, int(sym_len_s), int(sym_len_e)
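For reference, `cubic` above is the Keys bicubic kernel with a = -0.5, the same kernel MATLAB's imresize uses:

    W(x) = 1.5|x|^3 - 2.5|x|^2 + 1            for |x| <= 1
    W(x) = -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2    for 1 < |x| <= 2
    W(x) = 0                                  otherwise

When downscaling with `antialiasing`, `calculate_weights_indices` stretches this kernel by 1/scale and then renormalizes so that each row of `weights` still sums to 1.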
-
-
- # --------------------------------------------
- # imresize for tensor image [0, 1]
- # --------------------------------------------
- def imresize(img, scale, antialiasing=True):
-     # Now the scale should be the same for H and W
-     # input: img: pytorch tensor, CHW or HW [0,1]
-     # output: CHW or HW [0,1] w/o round
-     need_squeeze = True if img.dim() == 2 else False
-     if need_squeeze:
-         img.unsqueeze_(0)
-     in_C, in_H, in_W = img.size()
-     out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
-     kernel_width = 4
-     kernel = 'cubic'
-
-     # Return the desired dimension order for performing the resize.  The
-     # strategy is to perform the resize first along the dimension with the
-     # smallest scale factor.
-     # Now we do not support this.
-
-     # get weights and indices
-     weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
-         in_H, out_H, scale, kernel, kernel_width, antialiasing)
-     weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
-         in_W, out_W, scale, kernel, kernel_width, antialiasing)
-     # process H dimension
-     # symmetric copying
-     img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
-     img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
-     sym_patch = img[:, :sym_len_Hs, :]
-     inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(1, inv_idx)
-     img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
-     sym_patch = img[:, -sym_len_He:, :]
-     inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(1, inv_idx)
-     img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
-     out_1 = torch.FloatTensor(in_C, out_H, in_W)
-     kernel_width = weights_H.size(1)
-     for i in range(out_H):
-         idx = int(indices_H[i][0])
-         for j in range(out_C):
-             out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
-     # process W dimension
-     # symmetric copying
-     out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
-     out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
-     sym_patch = out_1[:, :, :sym_len_Ws]
-     inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(2, inv_idx)
-     out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
-     sym_patch = out_1[:, :, -sym_len_We:]
-     inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(2, inv_idx)
-     out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
-     out_2 = torch.FloatTensor(in_C, out_H, out_W)
-     kernel_width = weights_W.size(1)
-     for i in range(out_W):
-         idx = int(indices_W[i][0])
-         for j in range(out_C):
-             out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
-     if need_squeeze:
-         out_2.squeeze_()
-     return out_2
-
-
- # --------------------------------------------
- # imresize for numpy image [0, 1]
- # --------------------------------------------
- def imresize_np(img, scale, antialiasing=True):
-     # Now the scale should be the same for H and W
-     # input: img: Numpy, HWC or HW [0,1]
-     # output: HWC or HW [0,1] w/o round
-     img = torch.from_numpy(img)
-     need_squeeze = True if img.dim() == 2 else False
-     if need_squeeze:
-         img.unsqueeze_(2)
-
-     in_H, in_W, in_C = img.size()
-     out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
-     kernel_width = 4
-     kernel = 'cubic'
-
-     # Return the desired dimension order for performing the resize.  The
-     # strategy is to perform the resize first along the dimension with the
-     # smallest scale factor.
-     # Now we do not support this.
-
-     # get weights and indices
-     weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
-         in_H, out_H, scale, kernel, kernel_width, antialiasing)
-     weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
-         in_W, out_W, scale, kernel, kernel_width, antialiasing)
-     # process H dimension
-     # symmetric copying
-     img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
-     img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
-     sym_patch = img[:sym_len_Hs, :, :]
-     inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(0, inv_idx)
-     img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
-     sym_patch = img[-sym_len_He:, :, :]
-     inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(0, inv_idx)
-     img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
-     out_1 = torch.FloatTensor(out_H, in_W, in_C)
-     kernel_width = weights_H.size(1)
-     for i in range(out_H):
-         idx = int(indices_H[i][0])
-         for j in range(out_C):
-             out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
-     # process W dimension
-     # symmetric copying
-     out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
-     out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
-     sym_patch = out_1[:, :sym_len_Ws, :]
-     inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(1, inv_idx)
-     out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
-     sym_patch = out_1[:, -sym_len_We:, :]
-     inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
-     sym_patch_inv = sym_patch.index_select(1, inv_idx)
-     out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
-     out_2 = torch.FloatTensor(out_H, out_W, in_C)
-     kernel_width = weights_W.size(1)
-     for i in range(out_W):
-         idx = int(indices_W[i][0])
-         for j in range(out_C):
-             out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
-     if need_squeeze:
-         out_2.squeeze_()
-
-     return out_2.numpy()
-
-
- if __name__ == '__main__':
-     print('---')
-     # img = imread_uint('test.bmp', 3)
-     # img = uint2single(img)
-     # img_bicubic = imresize_np(img, 1/4)
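Note: a minimal sketch of the MATLAB-style resize on a numpy image (not part of the deleted file; inputs are expected as [0, 1] floats):

    import numpy as np

    hr = np.random.rand(256, 256, 3).astype(np.float32)   # HWC float image in [0, 1]
    lr = imresize_np(hr, 1 / 4)                           # -> (64, 64, 3), antialiased bicubic downscale
    sr = imresize_np(lr, 4)                               # naive bicubic upscale back to (256, 256, 3)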
spaces/Anonymous-sub/Rerender/src/video_util.py DELETED
@@ -1,100 +0,0 @@
- import os
-
- import cv2
- import imageio
- import numpy as np
-
-
- def video_to_frame(video_path: str,
-                    frame_dir: str,
-                    filename_pattern: str = 'frame%03d.jpg',
-                    log: bool = True,
-                    frame_edit_func=None):
-     os.makedirs(frame_dir, exist_ok=True)
-
-     vidcap = cv2.VideoCapture(video_path)
-     success, image = vidcap.read()
-
-     if log:
-         print('img shape: ', image.shape[0:2])
-
-     count = 0
-     while success:
-         if frame_edit_func is not None:
-             image = frame_edit_func(image)
-
-         cv2.imwrite(os.path.join(frame_dir, filename_pattern % count), image)
-         success, image = vidcap.read()
-         if log:
-             print('Read a new frame: ', success, count)
-         count += 1
-
-     vidcap.release()
-
-
- def frame_to_video(video_path: str, frame_dir: str, fps=30, log=True):
-
-     first_img = True
-     writer = imageio.get_writer(video_path, fps=fps)
-
-     file_list = sorted(os.listdir(frame_dir))
-     for file_name in file_list:
-         if not (file_name.endswith('jpg') or file_name.endswith('png')):
-             continue
-
-         fn = os.path.join(frame_dir, file_name)
-         curImg = imageio.imread(fn)
-
-         if first_img:
-             H, W = curImg.shape[0:2]
-             if log:
-                 print('img shape', (H, W))
-             first_img = False
-
-         writer.append_data(curImg)
-
-     writer.close()
-
-
- def get_fps(video_path: str):
-     video = cv2.VideoCapture(video_path)
-     fps = video.get(cv2.CAP_PROP_FPS)
-     video.release()
-     return fps
-
-
- def get_frame_count(video_path: str):
-     video = cv2.VideoCapture(video_path)
-     frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
-     video.release()
-     return frame_count
-
-
- def resize_image(input_image, resolution):
-     H, W, C = input_image.shape
-     H = float(H)
-     W = float(W)
-     k = min(float(resolution) / min(H, W), float(768) / max(H, W))
-     H *= k
-     W *= k
-     H = int(np.round(H / 64.0)) * 64
-     W = int(np.round(W / 64.0)) * 64
-     img = cv2.resize(
-         input_image, (W, H),
-         interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
-     return img
-
-
- def prepare_frames(input_path: str, output_dir: str, resolution: int, crop):
-     l, r, t, b = crop
-
-     def crop_func(frame):
-         H, W, C = frame.shape
-         left = np.clip(l, 0, W)
-         right = np.clip(W - r, left, W)
-         top = np.clip(t, 0, H)
-         bottom = np.clip(H - b, top, H)
-         frame = frame[top:bottom, left:right]
-         return resize_image(frame, resolution)
-
-     video_to_frame(input_path, output_dir, '%04d.png', False, crop_func)
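Note: a minimal end-to-end sketch of this module (not part of the deleted file; 'input.mp4' and the 'frames' directory are hypothetical):

    fps = get_fps('input.mp4')
    prepare_frames('input.mp4', 'frames', resolution=512, crop=(0, 0, 0, 0))   # writes frames/0000.png, ...
    frame_to_video('output.mp4', 'frames', fps=fps)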
spaces/Antonpy/stable-diffusion-license/app.py DELETED
@@ -1,14 +0,0 @@
- import streamlit as st
-
- txt_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.txt"
- html_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.html"
-
- st.sidebar.title("Stable Diffusion")
- st.sidebar.markdown("## Stable Diffusion RAIL License v1.0")
- st.sidebar.markdown(f"This is the home of the Stable Diffusion RAIL License v1.0.\
-     If you would like to download the license you can get it as [.txt]({txt_link}), or [.html]({html_link}) file.")
-
- with open("license.txt", "r") as f:
-     license_html = f.read()
-
- st.markdown(license_html, unsafe_allow_html=True)
spaces/ArnePan/German-LLM-leaderboard/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: German-LLM-leaderboard
- emoji: 🇩🇪
- colorFrom: yellow
- colorTo: red
- sdk: gradio
- sdk_version: 3.46.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py DELETED
@@ -1,331 +0,0 @@
- # module pyparsing.py
- #
- # Copyright (c) 2003-2022  Paul T. McGuire
- #
- # Permission is hereby granted, free of charge, to any person obtaining
- # a copy of this software and associated documentation files (the
- # "Software"), to deal in the Software without restriction, including
- # without limitation the rights to use, copy, modify, merge, publish,
- # distribute, sublicense, and/or sell copies of the Software, and to
- # permit persons to whom the Software is furnished to do so, subject to
- # the following conditions:
- #
- # The above copyright notice and this permission notice shall be
- # included in all copies or substantial portions of the Software.
- #
- # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- # IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
- # CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
- # TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
- # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- #
-
- __doc__ = """
- pyparsing module - Classes and methods to define and execute parsing grammars
- =============================================================================
-
- The pyparsing module is an alternative approach to creating and
- executing simple grammars, vs. the traditional lex/yacc approach, or the
- use of regular expressions.  With pyparsing, you don't need to learn
- a new syntax for defining grammars or matching expressions - the parsing
- module provides a library of classes that you use to construct the
- grammar directly in Python.
-
- Here is a program to parse "Hello, World!" (or any greeting of the form
- ``"<salutation>, <addressee>!"``), built up using :class:`Word`,
- :class:`Literal`, and :class:`And` elements
- (the :meth:`'+'<ParserElement.__add__>` operators create :class:`And` expressions,
- and the strings are auto-converted to :class:`Literal` expressions)::
-
-     from pyparsing import Word, alphas
-
-     # define grammar of a greeting
-     greet = Word(alphas) + "," + Word(alphas) + "!"
-
-     hello = "Hello, World!"
-     print(hello, "->", greet.parse_string(hello))
-
- The program outputs the following::
-
-     Hello, World! -> ['Hello', ',', 'World', '!']
-
- The Python representation of the grammar is quite readable, owing to the
- self-explanatory class names, and the use of :class:`'+'<And>`,
- :class:`'|'<MatchFirst>`, :class:`'^'<Or>` and :class:`'&'<Each>` operators.
-
- The :class:`ParseResults` object returned from
- :class:`ParserElement.parseString` can be
- accessed as a nested list, a dictionary, or an object with named
- attributes.
-
- The pyparsing module handles some of the problems that are typically
- vexing when writing text parsers:
-
-   - extra or missing whitespace (the above program will also handle
-     "Hello,World!", "Hello  ,  World  !", etc.)
-   - quoted strings
-   - embedded comments
-
-
- Getting Started -
- -----------------
- Visit the classes :class:`ParserElement` and :class:`ParseResults` to
- see the base classes that most other pyparsing
- classes inherit from. Use the docstrings for examples of how to:
-
-  - construct literal match expressions from :class:`Literal` and
-    :class:`CaselessLiteral` classes
-  - construct character word-group expressions using the :class:`Word`
-    class
-  - see how to create repetitive expressions using :class:`ZeroOrMore`
-    and :class:`OneOrMore` classes
-  - use :class:`'+'<And>`, :class:`'|'<MatchFirst>`, :class:`'^'<Or>`,
-    and :class:`'&'<Each>` operators to combine simple expressions into
-    more complex ones
-  - associate names with your parsed results using
-    :class:`ParserElement.setResultsName`
-  - access the parsed data, which is returned as a :class:`ParseResults`
-    object
-  - find some helpful expression short-cuts like :class:`delimitedList`
-    and :class:`oneOf`
-  - find more useful common expressions in the :class:`pyparsing_common`
-    namespace class
- """
- from typing import NamedTuple
-
-
- class version_info(NamedTuple):
-     major: int
-     minor: int
-     micro: int
-     releaselevel: str
-     serial: int
-
-     @property
-     def __version__(self):
-         return (
-             "{}.{}.{}".format(self.major, self.minor, self.micro)
-             + (
-                 "{}{}{}".format(
-                     "r" if self.releaselevel[0] == "c" else "",
-                     self.releaselevel[0],
-                     self.serial,
-                 ),
-                 "",
-             )[self.releaselevel == "final"]
-         )
-
-     def __str__(self):
-         return "{} {} / {}".format(__name__, self.__version__, __version_time__)
-
-     def __repr__(self):
-         return "{}.{}({})".format(
-             __name__,
-             type(self).__name__,
-             ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)),
-         )
-
-
- __version_info__ = version_info(3, 0, 9, "final", 0)
- __version_time__ = "05 May 2022 07:02 UTC"
- __version__ = __version_info__.__version__
- __versionTime__ = __version_time__
- __author__ = "Paul McGuire <[email protected]>"
-
- from .util import *
- from .exceptions import *
- from .actions import *
- from .core import __diag__, __compat__
- from .results import *
- from .core import *
- from .core import _builtin_exprs as core_builtin_exprs
- from .helpers import *
- from .helpers import _builtin_exprs as helper_builtin_exprs
-
- from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode
- from .testing import pyparsing_test as testing
- from .common import (
-     pyparsing_common as common,
-     _builtin_exprs as common_builtin_exprs,
- )
-
- # define backward compat synonyms
- if "pyparsing_unicode" not in globals():
-     pyparsing_unicode = unicode
- if "pyparsing_common" not in globals():
-     pyparsing_common = common
- if "pyparsing_test" not in globals():
-     pyparsing_test = testing
-
- core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs
-
-
- __all__ = [
-     "__version__",
-     "__version_time__",
-     "__author__",
-     "__compat__",
-     "__diag__",
-     "And",
-     "AtLineStart",
-     "AtStringStart",
-     "CaselessKeyword",
-     "CaselessLiteral",
-     "CharsNotIn",
-     "Combine",
-     "Dict",
-     "Each",
-     "Empty",
-     "FollowedBy",
-     "Forward",
-     "GoToColumn",
-     "Group",
-     "IndentedBlock",
-     "Keyword",
-     "LineEnd",
-     "LineStart",
-     "Literal",
-     "Located",
-     "PrecededBy",
-     "MatchFirst",
-     "NoMatch",
-     "NotAny",
-     "OneOrMore",
-     "OnlyOnce",
-     "OpAssoc",
-     "Opt",
-     "Optional",
-     "Or",
-     "ParseBaseException",
-     "ParseElementEnhance",
-     "ParseException",
-     "ParseExpression",
-     "ParseFatalException",
-     "ParseResults",
-     "ParseSyntaxException",
-     "ParserElement",
-     "PositionToken",
-     "QuotedString",
-     "RecursiveGrammarException",
-     "Regex",
-     "SkipTo",
-     "StringEnd",
-     "StringStart",
-     "Suppress",
-     "Token",
-     "TokenConverter",
-     "White",
-     "Word",
-     "WordEnd",
-     "WordStart",
-     "ZeroOrMore",
-     "Char",
-     "alphanums",
-     "alphas",
-     "alphas8bit",
-     "any_close_tag",
-     "any_open_tag",
-     "c_style_comment",
-     "col",
-     "common_html_entity",
-     "counted_array",
-     "cpp_style_comment",
-     "dbl_quoted_string",
-     "dbl_slash_comment",
-     "delimited_list",
-     "dict_of",
-     "empty",
-     "hexnums",
-     "html_comment",
-     "identchars",
-     "identbodychars",
-     "java_style_comment",
-     "line",
-     "line_end",
-     "line_start",
-     "lineno",
-     "make_html_tags",
-     "make_xml_tags",
-     "match_only_at_col",
-     "match_previous_expr",
-     "match_previous_literal",
-     "nested_expr",
-     "null_debug_action",
-     "nums",
-     "one_of",
-     "printables",
-     "punc8bit",
-     "python_style_comment",
-     "quoted_string",
-     "remove_quotes",
-     "replace_with",
-     "replace_html_entity",
-     "rest_of_line",
-     "sgl_quoted_string",
-     "srange",
-     "string_end",
-     "string_start",
-     "trace_parse_action",
-     "unicode_string",
-     "with_attribute",
-     "indentedBlock",
-     "original_text_for",
-     "ungroup",
-     "infix_notation",
-     "locatedExpr",
-     "with_class",
-     "CloseMatch",
-     "token_map",
-     "pyparsing_common",
-     "pyparsing_unicode",
-     "unicode_set",
-     "condition_as_parse_action",
-     "pyparsing_test",
-     # pre-PEP8 compatibility names
-     "__versionTime__",
-     "anyCloseTag",
-     "anyOpenTag",
-     "cStyleComment",
-     "commonHTMLEntity",
-     "countedArray",
-     "cppStyleComment",
-     "dblQuotedString",
-     "dblSlashComment",
-     "delimitedList",
-     "dictOf",
-     "htmlComment",
-     "javaStyleComment",
-     "lineEnd",
-     "lineStart",
-     "makeHTMLTags",
-     "makeXMLTags",
-     "matchOnlyAtCol",
-     "matchPreviousExpr",
-     "matchPreviousLiteral",
-     "nestedExpr",
-     "nullDebugAction",
-     "oneOf",
-     "opAssoc",
-     "pythonStyleComment",
-     "quotedString",
-     "removeQuotes",
-     "replaceHTMLEntity",
-     "replaceWith",
-     "restOfLine",
-     "sglQuotedString",
-     "stringEnd",
-     "stringStart",
-     "traceParseAction",
-     "unicodeString",
-     "withAttribute",
-     "indentedBlock",
-     "originalTextFor",
-     "infixNotation",
-     "locatedExpr",
-     "withClass",
-     "tokenMap",
-     "conditionAsParseAction",
-     "autoname_elements",
- ]
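Note: the expression short-cuts mentioned in the docstring above can be exercised like this (a minimal sketch, not part of the deleted file):

    import pyparsing as pp

    value = pp.Word(pp.alphanums)
    row = pp.delimited_list(value)                 # comma-delimited by default
    print(row.parse_string("a, b, c").as_list())   # -> ['a', 'b', 'c']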
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py DELETED
@@ -1,330 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import collections
- from dataclasses import dataclass
- from typing import Callable, List, Optional, Tuple
- import torch
- from torch import nn
-
- from detectron2.structures import Boxes, Instances, ROIMasks
- from detectron2.utils.registry import _convert_target_to_string, locate
-
- from .torchscript_patch import patch_builtin_len
-
-
- @dataclass
- class Schema:
-     """
-     A Schema defines how to flatten a possibly hierarchical object into tuple of
-     primitive objects, so it can be used as inputs/outputs of PyTorch's tracing.
-
-     PyTorch does not support tracing a function that produces rich output
-     structures (e.g. dict, Instances, Boxes). To trace such a function, we
-     flatten the rich object into tuple of tensors, and return this tuple of tensors
-     instead. Meanwhile, we also need to know how to "rebuild" the original object
-     from the flattened results, so we can evaluate the flattened results.
-     A Schema defines how to flatten an object, and while flattening it, it records
-     necessary schemas so that the object can be rebuilt using the flattened outputs.
-
-     The flattened object and the schema object is returned by ``.flatten`` classmethod.
-     Then the original object can be rebuilt with the ``__call__`` method of schema.
-
-     A Schema is a dataclass that can be serialized easily.
-     """
-
-     # inspired by FetchMapper in tensorflow/python/client/session.py
-
-     @classmethod
-     def flatten(cls, obj):
-         raise NotImplementedError
-
-     def __call__(self, values):
-         raise NotImplementedError
-
-     @staticmethod
-     def _concat(values):
-         ret = ()
-         sizes = []
-         for v in values:
-             assert isinstance(v, tuple), "Flattened results must be a tuple"
-             ret = ret + v
-             sizes.append(len(v))
-         return ret, sizes
-
-     @staticmethod
-     def _split(values, sizes):
-         if len(sizes):
-             expected_len = sum(sizes)
-             assert (
-                 len(values) == expected_len
-             ), f"Values has length {len(values)} but expect length {expected_len}."
-         ret = []
-         for k in range(len(sizes)):
-             begin, end = sum(sizes[:k]), sum(sizes[: k + 1])
-             ret.append(values[begin:end])
-         return ret
-
-
- @dataclass
- class ListSchema(Schema):
-     schemas: List[Schema]  # the schemas that define how to flatten each element in the list
-     sizes: List[int]  # the flattened length of each element
-
-     def __call__(self, values):
-         values = self._split(values, self.sizes)
-         if len(values) != len(self.schemas):
-             raise ValueError(
-                 f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!"
-             )
-         values = [m(v) for m, v in zip(self.schemas, values)]
-         return list(values)
-
-     @classmethod
-     def flatten(cls, obj):
-         res = [flatten_to_tuple(k) for k in obj]
-         values, sizes = cls._concat([k[0] for k in res])
-         return values, cls([k[1] for k in res], sizes)
-
-
- @dataclass
- class TupleSchema(ListSchema):
-     def __call__(self, values):
-         return tuple(super().__call__(values))
-
-
- @dataclass
- class IdentitySchema(Schema):
-     def __call__(self, values):
-         return values[0]
-
-     @classmethod
-     def flatten(cls, obj):
-         return (obj,), cls()
-
-
- @dataclass
- class DictSchema(ListSchema):
-     keys: List[str]
-
-     def __call__(self, values):
-         values = super().__call__(values)
-         return dict(zip(self.keys, values))
-
-     @classmethod
-     def flatten(cls, obj):
-         for k in obj.keys():
-             if not isinstance(k, str):
-                 raise KeyError("Only support flattening dictionaries if keys are str.")
-         keys = sorted(obj.keys())
-         values = [obj[k] for k in keys]
-         ret, schema = ListSchema.flatten(values)
-         return ret, cls(schema.schemas, schema.sizes, keys)
-
-
- @dataclass
- class InstancesSchema(DictSchema):
-     def __call__(self, values):
-         image_size, fields = values[-1], values[:-1]
-         fields = super().__call__(fields)
-         return Instances(image_size, **fields)
-
-     @classmethod
-     def flatten(cls, obj):
-         ret, schema = super().flatten(obj.get_fields())
-         size = obj.image_size
-         if not isinstance(size, torch.Tensor):
-             size = torch.tensor(size)
-         return ret + (size,), schema
-
-
- @dataclass
- class TensorWrapSchema(Schema):
-     """
-     For classes that are simple wrapper of tensors, e.g.
-     Boxes, RotatedBoxes, BitMasks
-     """
-
-     class_name: str
-
-     def __call__(self, values):
-         return locate(self.class_name)(values[0])
-
-     @classmethod
-     def flatten(cls, obj):
-         return (obj.tensor,), cls(_convert_target_to_string(type(obj)))
-
-
- # if more custom structures needed in the future, can allow
- # passing in extra schemas for custom types
- def flatten_to_tuple(obj):
-     """
-     Flatten an object so it can be used for PyTorch tracing.
-     Also returns how to rebuild the original object from the flattened outputs.
-
-     Returns:
-         res (tuple): the flattened results that can be used as tracing outputs
-         schema: an object with a ``__call__`` method such that ``schema(res) == obj``.
-             It is a pure dataclass that can be serialized.
-     """
-     schemas = [
-         ((str, bytes), IdentitySchema),
-         (list, ListSchema),
-         (tuple, TupleSchema),
-         (collections.abc.Mapping, DictSchema),
-         (Instances, InstancesSchema),
-         ((Boxes, ROIMasks), TensorWrapSchema),
-     ]
-     for klass, schema in schemas:
-         if isinstance(obj, klass):
-             F = schema
-             break
-     else:
-         F = IdentitySchema
-
-     return F.flatten(obj)
-
-
- class TracingAdapter(nn.Module):
-     """
-     A model may take rich input/output format (e.g. dict or custom classes),
-     but `torch.jit.trace` requires tuple of tensors as input/output.
-     This adapter flattens input/output format of a model so it becomes traceable.
-
-     It also records the necessary schema to rebuild model's inputs/outputs from flattened
-     inputs/outputs.
-
-     Example:
-     ::
-         outputs = model(inputs)  # inputs/outputs may be rich structure
-         adapter = TracingAdapter(model, inputs)
-
-         # can now trace the model, with adapter.flattened_inputs, or another
-         # tuple of tensors with the same length and meaning
-         traced = torch.jit.trace(adapter, adapter.flattened_inputs)
-
-         # traced model can only produce flattened outputs (tuple of tensors)
-         flattened_outputs = traced(*adapter.flattened_inputs)
-         # adapter knows the schema to convert it back (new_outputs == outputs)
-         new_outputs = adapter.outputs_schema(flattened_outputs)
-     """
-
-     flattened_inputs: Tuple[torch.Tensor] = None
-     """
-     Flattened version of inputs given to this class's constructor.
-     """
-
-     inputs_schema: Schema = None
-     """
-     Schema of the inputs given to this class's constructor.
-     """
-
-     outputs_schema: Schema = None
-     """
-     Schema of the output produced by calling the given model with inputs.
-     """
-
-     def __init__(
-         self,
-         model: nn.Module,
-         inputs,
-         inference_func: Optional[Callable] = None,
-         allow_non_tensor: bool = False,
-     ):
-         """
-         Args:
-             model: an nn.Module
-             inputs: An input argument or a tuple of input arguments used to call model.
-                 After flattening, it has to only consist of tensors.
-             inference_func: a callable that takes (model, *inputs), calls the
-                 model with inputs, and return outputs. By default it
-                 is ``lambda model, *inputs: model(*inputs)``. Can be override
-                 if you need to call the model differently.
-             allow_non_tensor: allow inputs/outputs to contain non-tensor objects.
-                 This option will filter out non-tensor objects to make the
-                 model traceable, but ``inputs_schema``/``outputs_schema`` cannot be
-                 used anymore because inputs/outputs cannot be rebuilt from pure tensors.
-                 This is useful when you're only interested in the single trace of
-                 execution (e.g. for flop count), but not interested in
-                 generalizing the traced graph to new inputs.
-         """
-         super().__init__()
-         if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)):
-             model = model.module
-         self.model = model
-         if not isinstance(inputs, tuple):
-             inputs = (inputs,)
-         self.inputs = inputs
-         self.allow_non_tensor = allow_non_tensor
-
-         if inference_func is None:
-             inference_func = lambda model, *inputs: model(*inputs)  # noqa
-         self.inference_func = inference_func
-
-         self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs)
-
-         if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs):
-             return
-         if self.allow_non_tensor:
-             self.flattened_inputs = tuple(
-                 [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)]
-             )
-             self.inputs_schema = None
-         else:
-             for input in self.flattened_inputs:
-                 if not isinstance(input, torch.Tensor):
-                     raise ValueError(
-                         "Inputs for tracing must only contain tensors. "
-                         f"Got a {type(input)} instead."
-                     )
-
-     def forward(self, *args: torch.Tensor):
-         with torch.no_grad(), patch_builtin_len():
-             if self.inputs_schema is not None:
-                 inputs_orig_format = self.inputs_schema(args)
-             else:
-                 if len(args) != len(self.flattened_inputs) or any(
-                     x is not y for x, y in zip(args, self.flattened_inputs)
-                 ):
-                     raise ValueError(
-                         "TracingAdapter does not contain valid inputs_schema."
-                         " So it cannot generalize to other inputs and must be"
-                         " traced with `.flattened_inputs`."
-                     )
-                 inputs_orig_format = self.inputs
-
-             outputs = self.inference_func(self.model, *inputs_orig_format)
-             flattened_outputs, schema = flatten_to_tuple(outputs)
-
-             flattened_output_tensors = tuple(
-                 [x for x in flattened_outputs if isinstance(x, torch.Tensor)]
-             )
-             if len(flattened_output_tensors) < len(flattened_outputs):
-                 if self.allow_non_tensor:
-                     flattened_outputs = flattened_output_tensors
-                     self.outputs_schema = None
-                 else:
-                     raise ValueError(
-                         "Model cannot be traced because some model outputs "
-                         "cannot flatten to tensors."
-                     )
-             else:  # schema is valid
-                 if self.outputs_schema is None:
-                     self.outputs_schema = schema
-                 else:
-                     assert self.outputs_schema == schema, (
-                         "Model should always return outputs with the same "
-                         "structure so it can be traced!"
-                     )
-             return flattened_outputs
-
-     def _create_wrapper(self, traced_model):
-         """
-         Return a function that has an input/output interface the same as the
-         original model, but it calls the given traced model under the hood.
-         """
-
-         def forward(*args):
-             flattened_inputs, _ = flatten_to_tuple(args)
-             flattened_outputs = traced_model(*flattened_inputs)
-             return self.outputs_schema(flattened_outputs)
-
-         return forward
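Note: a minimal sketch of the flatten/rebuild round trip on a plain dict (not part of the deleted file):

    import torch

    obj = {"boxes": torch.rand(3, 4), "scores": torch.rand(3)}
    flat, schema = flatten_to_tuple(obj)      # tuple of two tensors + a serializable DictSchema
    rebuilt = schema(flat)                    # rebuilds the dict from the flattened tensors
    assert torch.equal(rebuilt["scores"], obj["scores"])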
spaces/Ayaka2022/anime-aesthetic-predict/app.py DELETED
@@ -1,28 +0,0 @@
- import cv2
- import numpy as np
- import gradio as gr
- import onnxruntime as rt
- from huggingface_hub import hf_hub_download
-
-
- def predict(img):
-     img = img.astype(np.float32) / 255
-     s = 768
-     h, w = img.shape[:-1]
-     h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
-     ph, pw = s - h, s - w
-     img_input = np.zeros([s, s, 3], dtype=np.float32)
-     img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
-     img_input = np.transpose(img_input, (2, 0, 1))
-     img_input = img_input[np.newaxis, :]
-     pred = model.run(None, {"img": img_input})[0].item()
-     return pred
-
-
- if __name__ == "__main__":
-     model_path = hf_hub_download(repo_id="skytnt/anime-aesthetic", filename="model.onnx")
-     model = rt.InferenceSession(model_path, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
-     examples = [[f"examples/{x:02d}.jpg"] for x in range(0, 2)]
-     app = gr.Interface(predict, gr.Image(label="input image"), gr.Number(label="score"), title="Anime Aesthetic Predict",
-                        allow_flagging="never", examples=examples, cache_examples=False)
-     app.launch()
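Note: `predict` letterboxes the image into a 768x768 square before inference, and can be called outside the Gradio app once `model` is loaded (a minimal sketch, not part of the deleted file; assumes the example image exists):

    import cv2

    img = cv2.imread("examples/00.jpg")           # BGR uint8, as OpenCV loads it
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # the Gradio Image input hands predict() RGB arrays
    print(predict(img))                           # scalar aesthetic score from the ONNX model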
spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py DELETED
@@ -1,126 +0,0 @@
- import torch
- from torch import nn
- import torch.nn.functional as F
-
- from . import spec_utils
-
-
- class Conv2DBNActiv(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-         super(Conv2DBNActiv, self).__init__()
-         self.conv = nn.Sequential(
-             nn.Conv2d(
-                 nin,
-                 nout,
-                 kernel_size=ksize,
-                 stride=stride,
-                 padding=pad,
-                 dilation=dilation,
-                 bias=False,
-             ),
-             nn.BatchNorm2d(nout),
-             activ(),
-         )
-
-     def __call__(self, x):
-         return self.conv(x)
-
-
- class SeperableConv2DBNActiv(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-         super(SeperableConv2DBNActiv, self).__init__()
-         self.conv = nn.Sequential(
-             nn.Conv2d(
-                 nin,
-                 nin,
-                 kernel_size=ksize,
-                 stride=stride,
-                 padding=pad,
-                 dilation=dilation,
-                 groups=nin,
-                 bias=False,
-             ),
-             nn.Conv2d(nin, nout, kernel_size=1, bias=False),
-             nn.BatchNorm2d(nout),
-             activ(),
-         )
-
-     def __call__(self, x):
-         return self.conv(x)
-
-
- class Encoder(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
-         super(Encoder, self).__init__()
-         self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-         self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
-     def __call__(self, x):
-         skip = self.conv1(x)
-         h = self.conv2(skip)
-
-         return h, skip
-
-
- class Decoder(nn.Module):
-     def __init__(
-         self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
-     ):
-         super(Decoder, self).__init__()
-         self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-         self.dropout = nn.Dropout2d(0.1) if dropout else None
-
-     def __call__(self, x, skip=None):
-         x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-         if skip is not None:
-             skip = spec_utils.crop_center(skip, x)
-             x = torch.cat([x, skip], dim=1)
-         h = self.conv(x)
-
-         if self.dropout is not None:
-             h = self.dropout(h)
-
-         return h
-
-
- class ASPPModule(nn.Module):
-     def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
-         super(ASPPModule, self).__init__()
-         self.conv1 = nn.Sequential(
-             nn.AdaptiveAvgPool2d((1, None)),
-             Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
-         )
-         self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
-         self.conv3 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
-         )
-         self.conv4 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
-         )
-         self.conv5 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.conv6 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.conv7 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.bottleneck = nn.Sequential(
-             Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
-         )
-
-     def forward(self, x):
-         _, _, h, w = x.size()
-         feat1 = F.interpolate(
-             self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
-         )
-         feat2 = self.conv2(x)
-         feat3 = self.conv3(x)
-         feat4 = self.conv4(x)
-         feat5 = self.conv5(x)
-         feat6 = self.conv6(x)
-         feat7 = self.conv7(x)
-         out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
-         bottle = self.bottleneck(out)
-         return bottle
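Note: all ASPP branches preserve spatial size (padding equals dilation for the 3x3 branches), so the module maps (N, nin, H, W) to (N, nout, H, W). A minimal smoke test (not part of the deleted file; assumes the package imports):

    import torch

    aspp = ASPPModule(nin=16, nout=32).eval()     # eval() so BatchNorm uses running stats
    x = torch.rand(1, 16, 32, 128)
    with torch.no_grad():
        print(aspp(x).shape)                      # -> torch.Size([1, 32, 32, 128])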
spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md DELETED
@@ -1,71 +0,0 @@
-
- <h1>Cheto Hack 8BP APK Download 5.4 5: Everything You Need to Know</h1>
- <p>If you are a fan of 8 Ball Pool, you may have heard of Cheto Hack 8BP, a tool that can help you improve your game and win more matches. But what exactly is Cheto Hack 8BP, and how can you download and use it? In this article, we will answer these questions and more, so you can decide whether or not Cheto Hack 8BP is worth trying.</p>
- <h2>cheto hack 8bp apk download 5.4 5</h2><br /><p><b><b>DOWNLOAD</b> &middot;&middot;&middot; <a href="https://bltlly.com/2v6LbK">https://bltlly.com/2v6LbK</a></b></p><br /><br />
- <h2>What Is Cheto Hack 8BP?</h2>
- <p>Cheto Hack 8BP is a hack tool for 8 Ball Pool that uses AI image recognition to extend the guideline, support cushion shots, and draw the ball's trajectory and shot state. It can also predict the outcome of the game and play automatically for you. Unlike some other hack tools, Cheto Hack 8BP requires no root access or modifications to the game files. It works on Gameloop PC, an emulator that lets you play Android games on your computer.</p>
- <h3>Features of Cheto Hack 8BP</h3>
- <p>Some of the features Cheto Hack 8BP offers are:</p>
- <ul>
- <li>Auto-extended guideline: You can see the full length of the guideline, even beyond the table, to help you aim better.</li>
- <li>Cushion-shot support: You can see the guideline for cushion shots, which are shots that bounce off the rails before hitting the target ball.</li>
- <li>Ball trajectory drawing: You can see the path of the ball after you hit it, including any spin or curve.</li>
- <li>Shot-state drawing: You can see the power, angle, and spin of your shot, as well as the position and direction of the cue ball.</li>
- <li>Prediction: You can see the probability of winning or losing the game based on the current situation.</li>
- <li>Auto-play: You can let the hack tool play for you automatically, using the best possible moves.</li>
- </ul>
- <h3>How to Download and Install Cheto Hack 8BP APK</h3>
- <p>To download and install Cheto Hack 8BP APK, you need to follow these steps:</p>
- <ol>
- <li>Download Gameloop PC from its official website and install it on your computer.</li>
-
- <li>Open Gameloop PC and launch 8 Ball Pool from its game center.</li>
- <li>Open Cheto Hack 8BP APK and enter the password (autoplay or cheto).</li>
- <li>Select the features you want to use and click Start.</li>
- <li>Enjoy playing 8 Ball Pool with Cheto Hack 8BP!</li>
- </ol>
- <h2>Why Use Cheto Hack 8BP?</h2>
- <p>You may wonder why you should use Cheto Hack 8BP instead of playing normally. Here are some reasons why you might want to try it:</p>
- <h3>Benefits of Cheto Hack 8BP</h3>
- <ul>
- <li>You can improve your skills and learn new tricks by watching how the hack tool plays.</li>
- <li>You can win more matches and earn more coins and rewards by using the hack tool's features.</li>
- <li>You can have more fun and challenge yourself by playing against stronger opponents or using different modes.</li>
- <li>You can save time and effort by letting the hack tool play for you automatically.</li>
- </ul>
- <h3>Risks of Cheto Hack 8BP</h3>
- <ul> <li>You can get banned or reported by other players or the game's developers for using the hack tool.</li>
- <li>You can lose the fun and satisfaction of playing the game fairly and honestly.</li>
- <li>You can damage your device or compromise your security by downloading a fake or malicious APK file.</li>
- </ul>
- <h2>Alternatives to Cheto Hack 8BP</h2>
- <p>If you are not convinced by Cheto Hack 8BP, or if you want to try something different, there are some alternatives you can use to hack 8 Ball Pool. Here are two of them:</p>
- <p></p>
- <h3>Aim Pool - Guideline 8BP</h3>
- <p>Aim Pool - Guideline 8BP is a hack tool that extends the guideline and shows the ball's trajectory for 8 Ball Pool. It works on both Android and iOS devices and requires no root or jailbreak. It also has a simple, easy-to-use interface and supports several languages. You can download Aim Pool - Guideline 8BP from its official website or from the Google Play Store.</p>
- <h3>Game Guardian</h3>
-
- <h2>Conclusion</h2>
- <p>In this article, we have covered everything you need to know about Cheto Hack 8BP APK Download 5.4 5, a hack tool for 8 Ball Pool that uses AI image recognition to improve your game. We have explained what it is, how it works, how to download and install it, why you might use it, and what some alternatives are. We hope you found this article helpful and informative.</p>
- <h3>Article Summary</h3>
- <ul>
- <li>Cheto Hack 8BP is a hack tool for 8 Ball Pool that uses AI image recognition to extend the guideline, support cushion shots, draw the ball's trajectory and shot state, predict the outcome, and play automatically.</li>
- <li>It works on Gameloop PC, an emulator that lets you play Android games on your computer.</li>
- <li>It has many features and benefits, but also some risks and drawbacks.</li>
- <li>There are some alternatives to Cheto Hack 8BP, such as Aim Pool - Guideline 8BP and Game Guardian.</li>
- </ul>
- <h3>FAQs</h3>
- <ol>
- <li>Is Cheto Hack 8BP free?</li>
- <p>No, Cheto Hack 8BP is not free. You need to pay a subscription fee to use it. The fee varies depending on the duration and the features you choose.</p>
- <li>Is Cheto Hack 8BP safe?</li>
- <p>Cheto Hack 8BP is safe if you download it from its official website or from a trusted source. However, there is always a risk of being banned or reported by other players or the game's developers for using a hack tool.</p>
- <li>Is Cheto Hack 8BP legal?</li>
- <p>Cheto Hack 8BP is not legal in some countries or regions where hacking is prohibited or regulated by law. You should check your local laws before using it.</p>
- <li>Does Cheto Hack 8BP work on mobile devices?</li>
- <p>No, Cheto Hack 8BP does not work on mobile devices. It only works on Gameloop PC, an emulator that lets you play Android games on your computer.</p>
- <li>Can I use Cheto Hack 8BP with other hack tools?</li>
-
- </ol></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py DELETED
@@ -1,207 +0,0 @@
- # actions.py
-
- from .exceptions import ParseException
- from .util import col
-
-
- class OnlyOnce:
-     """
-     Wrapper for parse actions, to ensure they are only called once.
-     """
-
-     def __init__(self, method_call):
-         from .core import _trim_arity
-
-         self.callable = _trim_arity(method_call)
-         self.called = False
-
-     def __call__(self, s, l, t):
-         if not self.called:
-             results = self.callable(s, l, t)
-             self.called = True
-             return results
-         raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset")
-
-     def reset(self):
-         """
-         Allow the associated parse action to be called once more.
-         """
-
-         self.called = False
-
-
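Note: `OnlyOnce` has no usage example in its docstring; a minimal sketch (not part of the deleted file):

    from pyparsing import OnlyOnce, Word, nums

    action = OnlyOnce(lambda t: [int(t[0])])
    num = Word(nums).set_parse_action(action)
    print(num.parse_string("42"))    # -> [42]
    # a second num.parse_string(...) now raises ParseException ...
    action.reset()                   # ... until the wrapper is reset
    print(num.parse_string("43"))    # -> [43]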
- def match_only_at_col(n):
-     """
-     Helper method for defining parse actions that require matching at
-     a specific column in the input text.
-     """
-
-     def verify_col(strg, locn, toks):
-         if col(locn, strg) != n:
-             raise ParseException(strg, locn, "matched token not at column {}".format(n))
-
-     return verify_col
-
-
- def replace_with(repl_str):
-     """
-     Helper method for common parse actions that simply return
-     a literal value.  Especially useful when used with
-     :class:`transform_string<ParserElement.transform_string>` ().
-
-     Example::
-
-         num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
-         na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
-         term = na | num
-
-         term[1, ...].parse_string("324 234 N/A 234")  # -> [324, 234, nan, 234]
-     """
-     return lambda s, l, t: [repl_str]
-
-
- def remove_quotes(s, l, t):
-     """
-     Helper parse action for removing quotation marks from parsed
-     quoted strings.
-
-     Example::
-
-         # by default, quotation marks are included in parsed results
-         quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"]
-
-         # use remove_quotes to strip quotation marks from parsed results
-         quoted_string.set_parse_action(remove_quotes)
-         quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"]
-     """
-     return t[0][1:-1]
-
-
- def with_attribute(*args, **attr_dict):
-     """
-     Helper to create a validating parse action to be used with start
-     tags created with :class:`make_xml_tags` or
-     :class:`make_html_tags`. Use ``with_attribute`` to qualify
-     a starting tag with a required attribute value, to avoid false
-     matches on common tags such as ``<TD>`` or ``<DIV>``.
-
-     Call ``with_attribute`` with a series of attribute names and
-     values. Specify the list of filter attributes names and values as:
-
-     - keyword arguments, as in ``(align="right")``, or
-     - as an explicit dict with ``**`` operator, when an attribute
-       name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}``
-     - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))``
-
-     For attribute names with a namespace prefix, you must use the second
-     form.  Attribute names are matched insensitive to upper/lower case.
-
-     If just testing for ``class`` (with or without a namespace), use
-     :class:`with_class`.
-
-     To verify that the attribute exists, but without specifying a value,
-     pass ``with_attribute.ANY_VALUE`` as the value.
-
-     Example::
-
-         html = '''
-             <div>
-             Some text
-             <div type="grid">1 4 0 1 0</div>
-             <div type="graph">1,3 2,3 1,1</div>
-             <div>this has no type</div>
-             </div>
-
-         '''
-         div,div_end = make_html_tags("div")
-
-         # only match div tag having a type attribute with value "grid"
-         div_grid = div().set_parse_action(with_attribute(type="grid"))
-         grid_expr = div_grid + SkipTo(div | div_end)("body")
-         for grid_header in grid_expr.search_string(html):
-             print(grid_header.body)
-
-         # construct a match with any div tag having a type attribute, regardless of the value
-         div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE))
-         div_expr = div_any_type + SkipTo(div | div_end)("body")
-         for div_header in div_expr.search_string(html):
-             print(div_header.body)
-
-     prints::
-
-         1 4 0 1 0
-
-         1 4 0 1 0
-         1,3 2,3 1,1
-     """
-     if args:
-         attrs = args[:]
-     else:
-         attrs = attr_dict.items()
-     attrs = [(k, v) for k, v in attrs]
-
-     def pa(s, l, tokens):
-         for attrName, attrValue in attrs:
-             if attrName not in tokens:
-                 raise ParseException(s, l, "no matching attribute " + attrName)
-             if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue:
-                 raise ParseException(
-                     s,
-                     l,
-                     "attribute {!r} has value {!r}, must be {!r}".format(
-                         attrName, tokens[attrName], attrValue
-                     ),
-                 )
-
-     return pa
-
-
- with_attribute.ANY_VALUE = object()
-
-
- def with_class(classname, namespace=""):
-     """
-     Simplified version of :class:`with_attribute` when
-     matching on a div class - made difficult because ``class`` is
-     a reserved word in Python.
-
-     Example::
-
-         html = '''
-             <div>
-             Some text
-             <div class="grid">1 4 0 1 0</div>
-             <div class="graph">1,3 2,3 1,1</div>
-             <div>this &lt;div&gt; has no class</div>
-             </div>
-
-         '''
-         div,div_end = make_html_tags("div")
-         div_grid = div().set_parse_action(with_class("grid"))
-
-         grid_expr = div_grid + SkipTo(div | div_end)("body")
-         for grid_header in grid_expr.search_string(html):
-             print(grid_header.body)
-
-         div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE))
-         div_expr = div_any_type + SkipTo(div | div_end)("body")
-         for div_header in div_expr.search_string(html):
-             print(div_header.body)
-
-     prints::
-
-         1 4 0 1 0
-
-         1 4 0 1 0
-         1,3 2,3 1,1
-     """
-     classattr = "{}:class".format(namespace) if namespace else "class"
-     return with_attribute(**{classattr: classname})
-
-
- # pre-PEP8 compatibility symbols
- replaceWith = replace_with
- removeQuotes = remove_quotes
- withAttribute = with_attribute
206
- withClass = with_class
207
- matchOnlyAtCol = match_only_at_col
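
For reference, the parse actions deleted above are meant to be attached to pyparsing expressions rather than called directly. A minimal usage sketch assembled only from the docstrings shown in this diff (assuming pyparsing 3.x, where the snake_case names are exported at package level):

    import math
    from pyparsing import Word, nums, one_of, quoted_string, replace_with, remove_quotes

    # replace_with: return a fixed literal whenever the expression matches.
    num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
    na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
    term = na | num
    print(term[1, ...].parse_string("324 234 N/A 234"))  # [324, 234, nan, 234]

    # remove_quotes: strip the surrounding quotation marks from a quoted string.
    qs = quoted_string.copy().add_parse_action(remove_quotes)
    print(qs.parse_string("'Now is the Winter of our Discontent'"))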
spaces/BilalSardar/Black-N-White-To-Color/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Black N White To Color
- emoji: 🦀
- colorFrom: pink
- colorTo: yellow
- sdk: gradio
- sdk_version: 3.20.1
- app_file: app.py
- pinned: false
- license: openrail
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h DELETED
@@ -1,84 +0,0 @@
- /******************************************************************************
-  * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
-  *
-  * Redistribution and use in source and binary forms, with or without
-  * modification, are permitted provided that the following conditions are met:
-  *     * Redistributions of source code must retain the above copyright
-  *       notice, this list of conditions and the following disclaimer.
-  *     * Redistributions in binary form must reproduce the above copyright
-  *       notice, this list of conditions and the following disclaimer in the
-  *       documentation and/or other materials provided with the distribution.
-  *     * Neither the name of the NVIDIA CORPORATION nor the
-  *       names of its contributors may be used to endorse or promote products
-  *       derived from this software without specific prior written permission.
-  *
-  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-  * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-  * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-  * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-  * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-  * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-  * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-  * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-  *
-  ******************************************************************************/
- #pragma once
-
- // histogram
- // sort (radix-sort, merge-sort)
-
- #include <thrust/detail/config.h>
- #include <thrust/system/cuda/detail/execution_policy.h>
- #include <thrust/system/cuda/detail/par.h>
-
- // pass
- // ----------------
- #include <thrust/system/cuda/detail/adjacent_difference.h>
- #include <thrust/system/cuda/detail/copy.h>
- #include <thrust/system/cuda/detail/copy_if.h>
- #include <thrust/system/cuda/detail/count.h>
- #include <thrust/system/cuda/detail/equal.h>
- #include <thrust/system/cuda/detail/extrema.h>
- #include <thrust/system/cuda/detail/fill.h>
- #include <thrust/system/cuda/detail/find.h>
- #include <thrust/system/cuda/detail/for_each.h>
- #include <thrust/system/cuda/detail/gather.h>
- #include <thrust/system/cuda/detail/generate.h>
- #include <thrust/system/cuda/detail/inner_product.h>
- #include <thrust/system/cuda/detail/mismatch.h>
- #include <thrust/system/cuda/detail/partition.h>
- #include <thrust/system/cuda/detail/reduce_by_key.h>
- #include <thrust/system/cuda/detail/remove.h>
- #include <thrust/system/cuda/detail/replace.h>
- #include <thrust/system/cuda/detail/reverse.h>
- #include <thrust/system/cuda/detail/scatter.h>
- #include <thrust/system/cuda/detail/swap_ranges.h>
- #include <thrust/system/cuda/detail/tabulate.h>
- #include <thrust/system/cuda/detail/transform.h>
- #include <thrust/system/cuda/detail/transform_reduce.h>
- #include <thrust/system/cuda/detail/transform_scan.h>
- #include <thrust/system/cuda/detail/uninitialized_copy.h>
- #include <thrust/system/cuda/detail/uninitialized_fill.h>
- #include <thrust/system/cuda/detail/unique.h>
- #include <thrust/system/cuda/detail/unique_by_key.h>
-
- // fail
- // ----------------
- // fails with mixed types
- #include <thrust/system/cuda/detail/reduce.h>
-
- // mixed types are not compiling, commented in testing/scan.cu
- #include <thrust/system/cuda/detail/scan.h>
-
- // stubs passed
- // ----------------
- #include <thrust/system/cuda/detail/binary_search.h>
- #include <thrust/system/cuda/detail/merge.h>
- #include <thrust/system/cuda/detail/scan_by_key.h>
- #include <thrust/system/cuda/detail/set_operations.h>
- #include <thrust/system/cuda/detail/sort.h>
-
- // work in progress
-
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h DELETED
@@ -1,154 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/system/detail/generic/tag.h>
-
- namespace thrust
- {
- namespace system
- {
- namespace detail
- {
- namespace generic
- {
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator>
- __host__ __device__
-   void sort(thrust::execution_policy<DerivedPolicy> &exec,
-             RandomAccessIterator first,
-             RandomAccessIterator last);
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator,
-          typename StrictWeakOrdering>
- __host__ __device__
-   void sort(thrust::execution_policy<DerivedPolicy> &exec,
-             RandomAccessIterator first,
-             RandomAccessIterator last,
-             StrictWeakOrdering comp);
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator1,
-          typename RandomAccessIterator2>
- __host__ __device__
-   void sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                    RandomAccessIterator1 keys_first,
-                    RandomAccessIterator1 keys_last,
-                    RandomAccessIterator2 values_first);
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator1,
-          typename RandomAccessIterator2,
-          typename StrictWeakOrdering>
- __host__ __device__
-   void sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                    RandomAccessIterator1 keys_first,
-                    RandomAccessIterator1 keys_last,
-                    RandomAccessIterator2 values_first,
-                    StrictWeakOrdering comp);
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator>
- __host__ __device__
-   void stable_sort(thrust::execution_policy<DerivedPolicy> &exec,
-                    RandomAccessIterator first,
-                    RandomAccessIterator last);
-
-
- // XXX it is an error to call this function; it has no implementation
- template<typename DerivedPolicy,
-          typename RandomAccessIterator,
-          typename StrictWeakOrdering>
- __host__ __device__
-   void stable_sort(thrust::execution_policy<DerivedPolicy> &exec,
-                    RandomAccessIterator first,
-                    RandomAccessIterator last,
-                    StrictWeakOrdering comp);
-
-
- template<typename DerivedPolicy,
-          typename RandomAccessIterator1,
-          typename RandomAccessIterator2>
- __host__ __device__
-   void stable_sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                           RandomAccessIterator1 keys_first,
-                           RandomAccessIterator1 keys_last,
-                           RandomAccessIterator2 values_first);
-
-
- // XXX it is an error to call this function; it has no implementation
- template<typename DerivedPolicy,
-          typename RandomAccessIterator1,
-          typename RandomAccessIterator2,
-          typename StrictWeakOrdering>
- __host__ __device__
-   void stable_sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                           RandomAccessIterator1 keys_first,
-                           RandomAccessIterator1 keys_last,
-                           RandomAccessIterator2 values_first,
-                           StrictWeakOrdering comp);
-
-
- template<typename DerivedPolicy, typename ForwardIterator>
- __host__ __device__
-   bool is_sorted(thrust::execution_policy<DerivedPolicy> &exec,
-                  ForwardIterator first,
-                  ForwardIterator last);
-
-
- template<typename DerivedPolicy,
-          typename ForwardIterator,
-          typename Compare>
- __host__ __device__
-   bool is_sorted(thrust::execution_policy<DerivedPolicy> &exec,
-                  ForwardIterator first,
-                  ForwardIterator last,
-                  Compare comp);
-
-
- template<typename DerivedPolicy, typename ForwardIterator>
- __host__ __device__
-   ForwardIterator is_sorted_until(thrust::execution_policy<DerivedPolicy> &exec,
-                                   ForwardIterator first,
-                                   ForwardIterator last);
-
-
- template<typename DerivedPolicy,
-          typename ForwardIterator,
-          typename Compare>
- __host__ __device__
-   ForwardIterator is_sorted_until(thrust::execution_policy<DerivedPolicy> &exec,
-                                   ForwardIterator first,
-                                   ForwardIterator last,
-                                   Compare comp);
-
-
- } // end generic
- } // end detail
- } // end system
- } // end thrust
-
- #include <thrust/system/detail/generic/sort.inl>
-
spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py DELETED
@@ -1,470 +0,0 @@
- import os.path as osp
-
- import mmcv
- import numpy as np
- import pycocotools.mask as maskUtils
-
- from mmdet.core import BitmapMasks, PolygonMasks
- from ..builder import PIPELINES
-
-
- @PIPELINES.register_module()
- class LoadImageFromFile(object):
-     """Load an image from file.
-
-     Required keys are "img_prefix" and "img_info" (a dict that must contain the
-     key "filename"). Added or updated keys are "filename", "img", "img_shape",
-     "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
-     "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
-     Args:
-         to_float32 (bool): Whether to convert the loaded image to a float32
-             numpy array. If set to False, the loaded image is an uint8 array.
-             Defaults to False.
-         color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
-             Defaults to 'color'.
-         file_client_args (dict): Arguments to instantiate a FileClient.
-             See :class:`mmcv.fileio.FileClient` for details.
-             Defaults to ``dict(backend='disk')``.
-     """
-
-     def __init__(self,
-                  to_float32=False,
-                  color_type='color',
-                  file_client_args=dict(backend='disk')):
-         self.to_float32 = to_float32
-         self.color_type = color_type
-         self.file_client_args = file_client_args.copy()
-         self.file_client = None
-
-     def __call__(self, results):
-         """Call functions to load image and get image meta information.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded image and meta information.
-         """
-
-         if self.file_client is None:
-             self.file_client = mmcv.FileClient(**self.file_client_args)
-
-         if results['img_prefix'] is not None:
-             filename = osp.join(results['img_prefix'],
-                                 results['img_info']['filename'])
-         else:
-             filename = results['img_info']['filename']
-
-         img_bytes = self.file_client.get(filename)
-         img = mmcv.imfrombytes(img_bytes, flag=self.color_type)
-         if self.to_float32:
-             img = img.astype(np.float32)
-
-         results['filename'] = filename
-         results['ori_filename'] = results['img_info']['filename']
-         results['img'] = img
-         results['img_shape'] = img.shape
-         results['ori_shape'] = img.shape
-         results['img_fields'] = ['img']
-         return results
-
-     def __repr__(self):
-         repr_str = (f'{self.__class__.__name__}('
-                     f'to_float32={self.to_float32}, '
-                     f"color_type='{self.color_type}', "
-                     f'file_client_args={self.file_client_args})')
-         return repr_str
-
-
- @PIPELINES.register_module()
- class LoadImageFromWebcam(LoadImageFromFile):
-     """Load an image from webcam.
-
-     Similar to :obj:`LoadImageFromFile`, but the image read from webcam is in
-     ``results['img']``.
-     """
-
-     def __call__(self, results):
-         """Call functions to add image meta information.
-
-         Args:
-             results (dict): Result dict with Webcam read image in
-                 ``results['img']``.
-
-         Returns:
-             dict: The dict contains loaded image and meta information.
-         """
-
-         img = results['img']
-         if self.to_float32:
-             img = img.astype(np.float32)
-
-         results['filename'] = None
-         results['ori_filename'] = None
-         results['img'] = img
-         results['img_shape'] = img.shape
-         results['ori_shape'] = img.shape
-         results['img_fields'] = ['img']
-         return results
-
-
- @PIPELINES.register_module()
- class LoadMultiChannelImageFromFiles(object):
-     """Load multi-channel images from a list of separate channel files.
-
-     Required keys are "img_prefix" and "img_info" (a dict that must contain the
-     key "filename", which is expected to be a list of filenames).
-     Added or updated keys are "filename", "img", "img_shape",
-     "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
-     "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
-     Args:
-         to_float32 (bool): Whether to convert the loaded image to a float32
-             numpy array. If set to False, the loaded image is an uint8 array.
-             Defaults to False.
-         color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
-             Defaults to 'unchanged'.
-         file_client_args (dict): Arguments to instantiate a FileClient.
-             See :class:`mmcv.fileio.FileClient` for details.
-             Defaults to ``dict(backend='disk')``.
-     """
-
-     def __init__(self,
-                  to_float32=False,
-                  color_type='unchanged',
-                  file_client_args=dict(backend='disk')):
-         self.to_float32 = to_float32
-         self.color_type = color_type
-         self.file_client_args = file_client_args.copy()
-         self.file_client = None
-
-     def __call__(self, results):
-         """Call functions to load multiple images and get images meta
-         information.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded images and meta information.
-         """
-
-         if self.file_client is None:
-             self.file_client = mmcv.FileClient(**self.file_client_args)
-
-         if results['img_prefix'] is not None:
-             filename = [
-                 osp.join(results['img_prefix'], fname)
-                 for fname in results['img_info']['filename']
-             ]
-         else:
-             filename = results['img_info']['filename']
-
-         img = []
-         for name in filename:
-             img_bytes = self.file_client.get(name)
-             img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type))
-         img = np.stack(img, axis=-1)
-         if self.to_float32:
-             img = img.astype(np.float32)
-
-         results['filename'] = filename
-         results['ori_filename'] = results['img_info']['filename']
-         results['img'] = img
-         results['img_shape'] = img.shape
-         results['ori_shape'] = img.shape
-         # Set initial values for default meta_keys
-         results['pad_shape'] = img.shape
-         results['scale_factor'] = 1.0
-         num_channels = 1 if len(img.shape) < 3 else img.shape[2]
-         results['img_norm_cfg'] = dict(
-             mean=np.zeros(num_channels, dtype=np.float32),
-             std=np.ones(num_channels, dtype=np.float32),
-             to_rgb=False)
-         return results
-
-     def __repr__(self):
-         repr_str = (f'{self.__class__.__name__}('
-                     f'to_float32={self.to_float32}, '
-                     f"color_type='{self.color_type}', "
-                     f'file_client_args={self.file_client_args})')
-         return repr_str
-
-
- @PIPELINES.register_module()
- class LoadAnnotations(object):
-     """Load multiple types of annotations.
-
-     Args:
-         with_bbox (bool): Whether to parse and load the bbox annotation.
-             Default: True.
-         with_label (bool): Whether to parse and load the label annotation.
-             Default: True.
-         with_mask (bool): Whether to parse and load the mask annotation.
-             Default: False.
-         with_seg (bool): Whether to parse and load the semantic segmentation
-             annotation. Default: False.
-         poly2mask (bool): Whether to convert the instance masks from polygons
-             to bitmaps. Default: True.
-         file_client_args (dict): Arguments to instantiate a FileClient.
-             See :class:`mmcv.fileio.FileClient` for details.
-             Defaults to ``dict(backend='disk')``.
-     """
-
-     def __init__(self,
-                  with_bbox=True,
-                  with_label=True,
-                  with_mask=False,
-                  with_seg=False,
-                  poly2mask=True,
-                  file_client_args=dict(backend='disk')):
-         self.with_bbox = with_bbox
-         self.with_label = with_label
-         self.with_mask = with_mask
-         self.with_seg = with_seg
-         self.poly2mask = poly2mask
-         self.file_client_args = file_client_args.copy()
-         self.file_client = None
-
-     def _load_bboxes(self, results):
-         """Private function to load bounding box annotations.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded bounding box annotations.
-         """
-
-         ann_info = results['ann_info']
-         results['gt_bboxes'] = ann_info['bboxes'].copy()
-
-         gt_bboxes_ignore = ann_info.get('bboxes_ignore', None)
-         if gt_bboxes_ignore is not None:
-             results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy()
-             results['bbox_fields'].append('gt_bboxes_ignore')
-         results['bbox_fields'].append('gt_bboxes')
-         return results
-
-     def _load_labels(self, results):
-         """Private function to load label annotations.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded label annotations.
-         """
-
-         results['gt_labels'] = results['ann_info']['labels'].copy()
-         return results
-
-     def _poly2mask(self, mask_ann, img_h, img_w):
-         """Private function to convert masks represented with polygon to
-         bitmaps.
-
-         Args:
-             mask_ann (list | dict): Polygon mask annotation input.
-             img_h (int): The height of output mask.
-             img_w (int): The width of output mask.
-
-         Returns:
-             numpy.ndarray: The decoded bitmap mask of shape (img_h, img_w).
-         """
-
-         if isinstance(mask_ann, list):
-             # polygon -- a single object might consist of multiple parts
-             # we merge all parts into one mask rle code
-             rles = maskUtils.frPyObjects(mask_ann, img_h, img_w)
-             rle = maskUtils.merge(rles)
-         elif isinstance(mask_ann['counts'], list):
-             # uncompressed RLE
-             rle = maskUtils.frPyObjects(mask_ann, img_h, img_w)
-         else:
-             # rle
-             rle = mask_ann
-         mask = maskUtils.decode(rle)
-         return mask
-
-     def process_polygons(self, polygons):
-         """Convert polygons to list of ndarray and filter invalid polygons.
-
-         Args:
-             polygons (list[list]): Polygons of one instance.
-
-         Returns:
-             list[numpy.ndarray]: Processed polygons.
-         """
-
-         polygons = [np.array(p) for p in polygons]
-         valid_polygons = []
-         for polygon in polygons:
-             if len(polygon) % 2 == 0 and len(polygon) >= 6:
-                 valid_polygons.append(polygon)
-         return valid_polygons
-
-     def _load_masks(self, results):
-         """Private function to load mask annotations.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded mask annotations.
-                 If ``self.poly2mask`` is set ``True``, ``gt_mask`` will contain
-                 :obj:`BitmapMasks`. Otherwise, :obj:`PolygonMasks` is used.
-         """
-
-         h, w = results['img_info']['height'], results['img_info']['width']
-         gt_masks = results['ann_info']['masks']
-         if self.poly2mask:
-             masks_all = []
-             for mask in gt_masks:
-                 if 'full' in mask:
-                     # Encode both masks in one bitmap: 2 marks the region
-                     # covered only by the full (amodal) mask, 1 the region
-                     # that is actually visible.
-                     full = self._poly2mask(mask['full'], h, w) * 2
-                     visible = self._poly2mask(mask['visible'], h, w)
-                     full[visible == 1] = 1
-                     masks_all.append(full)
-                 else:
-                     # No 'full' annotation; fall back to the visible mask.
-                     visible = self._poly2mask(mask['visible'], h, w)
-                     masks_all.append(visible)
-
-             gt_masks = BitmapMasks(masks_all, h, w)
-         else:
-             gt_masks = PolygonMasks(
-                 [self.process_polygons(polygons) for polygons in gt_masks], h,
-                 w)
-         results['gt_masks'] = gt_masks
-         results['mask_fields'].append('gt_masks')
-         return results
-
-     def _load_semantic_seg(self, results):
-         """Private function to load semantic segmentation annotations.
-
-         Args:
-             results (dict): Result dict from :obj:`dataset`.
-
-         Returns:
-             dict: The dict contains loaded semantic segmentation annotations.
-         """
-
-         if self.file_client is None:
-             self.file_client = mmcv.FileClient(**self.file_client_args)
-
-         filename = osp.join(results['seg_prefix'],
-                             results['ann_info']['seg_map'])
-         img_bytes = self.file_client.get(filename)
-         results['gt_semantic_seg'] = mmcv.imfrombytes(
-             img_bytes, flag='unchanged').squeeze()
-         results['seg_fields'].append('gt_semantic_seg')
-         return results
-
-     def __call__(self, results):
-         """Call function to load multiple types of annotations.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded bounding box, label, mask and
-                 semantic segmentation annotations.
-         """
-
-         if self.with_bbox:
-             results = self._load_bboxes(results)
-             if results is None:
-                 return None
-         if self.with_label:
-             results = self._load_labels(results)
-         if self.with_mask:
-             results = self._load_masks(results)
-         if self.with_seg:
-             results = self._load_semantic_seg(results)
-         return results
-
-     def __repr__(self):
-         repr_str = self.__class__.__name__
-         repr_str += f'(with_bbox={self.with_bbox}, '
-         repr_str += f'with_label={self.with_label}, '
-         repr_str += f'with_mask={self.with_mask}, '
-         repr_str += f'with_seg={self.with_seg}, '
-         repr_str += f'poly2mask={self.poly2mask}, '
-         repr_str += f'file_client_args={self.file_client_args})'
-         return repr_str
-
-
- @PIPELINES.register_module()
- class LoadProposals(object):
-     """Load proposal pipeline.
-
-     Required key is "proposals". Updated keys are "proposals", "bbox_fields".
-
-     Args:
-         num_max_proposals (int, optional): Maximum number of proposals to load.
-             If not specified, all proposals will be loaded.
-     """
-
-     def __init__(self, num_max_proposals=None):
-         self.num_max_proposals = num_max_proposals
-
-     def __call__(self, results):
-         """Call function to load proposals from file.
-
-         Args:
-             results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
-         Returns:
-             dict: The dict contains loaded proposal annotations.
-         """
-
-         proposals = results['proposals']
-         if proposals.shape[1] not in (4, 5):
-             raise AssertionError(
-                 'proposals should have shapes (n, 4) or (n, 5), '
-                 f'but found {proposals.shape}')
-         proposals = proposals[:, :4]
-
-         if self.num_max_proposals is not None:
-             proposals = proposals[:self.num_max_proposals]
-
-         if len(proposals) == 0:
-             proposals = np.array([[0, 0, 0, 0]], dtype=np.float32)
-         results['proposals'] = proposals
-         results['bbox_fields'].append('proposals')
-         return results
-
-     def __repr__(self):
-         return self.__class__.__name__ + \
-             f'(num_max_proposals={self.num_max_proposals})'
-
-
- @PIPELINES.register_module()
- class FilterAnnotations(object):
-     """Filter invalid annotations.
-
-     Args:
-         min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth
-             boxes.
-     """
-
-     def __init__(self, min_gt_bbox_wh):
-         # TODO: add more filter options
-         self.min_gt_bbox_wh = min_gt_bbox_wh
-
-     def __call__(self, results):
-         assert 'gt_bboxes' in results
-         gt_bboxes = results['gt_bboxes']
-         w = gt_bboxes[:, 2] - gt_bboxes[:, 0]
-         h = gt_bboxes[:, 3] - gt_bboxes[:, 1]
-         keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1])
-         if not keep.any():
-             return None
-         else:
-             keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg')
-             for key in keys:
-                 if key in results:
-                     results[key] = results[key][keep]
-             return results
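
For context, these classes are registered in mmdet's PIPELINES registry and composed from config files rather than called directly. A sketch of a typical mmdet 2.x training-pipeline config that exercises the loaders above (the Resize/RandomFlip/Normalize steps and their values are illustrative placeholders, not taken from this repository):

    # Illustrative mmdet 2.x pipeline config using the loaders above.
    train_pipeline = [
        dict(type='LoadImageFromFile', to_float32=True),
        dict(type='LoadAnnotations', with_bbox=True, with_mask=True, poly2mask=True),
        dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
        dict(type='RandomFlip', flip_ratio=0.5),
        dict(type='Normalize',
             mean=[123.675, 116.28, 103.53],  # illustrative ImageNet stats
             std=[58.395, 57.12, 57.375],
             to_rgb=True),
        dict(type='Pad', size_divisor=32),
        dict(type='DefaultFormatBundle'),
        dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
    ]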
spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py DELETED
@@ -1,14 +0,0 @@
- from .mask_rcnn_R_101_FPN_100ep_LSJ import (
-     dataloader,
-     lr_multiplier,
-     model,
-     optimizer,
-     train,
- )
-
- train.max_iter *= 4  # 100ep -> 400ep
-
- lr_multiplier.scheduler.milestones = [
-     milestone * 4 for milestone in lr_multiplier.scheduler.milestones
- ]
- lr_multiplier.scheduler.num_updates = train.max_iter
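
This deleted config illustrates detectron2's LazyConfig idiom for stretching a schedule: scale the total iteration count and every LR-decay milestone by the same factor, so the decay points keep the same relative position in training. The same arithmetic in isolation (the concrete numbers are hypothetical, not taken from the base config):

    factor = 4                          # 100ep -> 400ep
    base_max_iter = 184375              # hypothetical 100ep iteration budget
    base_milestones = [163889, 177546]  # hypothetical LR-decay points

    max_iter = base_max_iter * factor
    milestones = [m * factor for m in base_milestones]
    # Decay still happens at roughly 89% and 96% of training, as before.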