parquet-converter committed
Commit 35e45da · 1 Parent(s): f689cf2

Update parquet files (step 103 of 249)

This view is limited to 50 files because it contains too many changes.
Files changed (50):
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/8bf Download BEST Full.md +0 -116
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Can 39t Install 32 Bit Windows 10.md +0 -16
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 5 Free Download Full Version with Key 64 Bit What You Need to Know.md +0 -20
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fonelab For Ios WORK Crack.md +0 -32
  5. spaces/1gistliPinn/ChatGPT4/Examples/Autodesk AutoCad 2019.1.1 (x86x64) Crack Keygen.md +0 -6
  6. spaces/1gistliPinn/ChatGPT4/Examples/D16 Group PunchBOX V1.0.1 WiN OSX.md +0 -8
  7. spaces/1gistliPinn/ChatGPT4/Examples/Film India Kabhi Khushi Kabhie Gham Online Subtitrat.md +0 -6
  8. spaces/1phancelerku/anime-remove-background/Asphalt Nitro 2 Mod Apk The Ultimate Guide to 60 FPS and Infinite Money.md +0 -87
  9. spaces/1phancelerku/anime-remove-background/Doodle Army 2 Mini Militia Hile APK 4.3. 4 - The Ultimate Guide.md +0 -125
  10. spaces/1phancelerku/anime-remove-background/Download Getting Over It with Bennett Foddy MOD APK 1.9.6 - All Unlocked.md +0 -101
  11. spaces/1phancelerku/anime-remove-background/Experience the Epic Story of Seven Deadly Sins Grand Cross - APK Download Available.md +0 -114
  12. spaces/A00001/bingothoo/src/components/turn-counter.tsx +0 -23
  13. spaces/AFCMEgypt/WCB/app.py +0 -122
  14. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/hifigan/modules.py +0 -332
  15. spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_binarizer.py +0 -412
  16. spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/streaming.py +0 -66
  17. spaces/ASJMO/freegpt/g4f/Provider/Providers/You.py +0 -24
  18. spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-streamlit.py +0 -3
  19. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/stores/pendingMessageIdToRetry.ts +0 -4
  20. spaces/AchyuthGamer/OpenGPT/server/website.py +0 -58
  21. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/vertical.py +0 -58
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.js +0 -2
  23. spaces/Alex132/togethercomputer-LLaMA-2-7B-32K/app.py +0 -3
  24. spaces/AlexWang/lama/models/ade20k/segm_lib/utils/th.py +0 -41
  25. spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/eval_cer.py +0 -145
  26. spaces/Altinas/vits-uma-genshin-honkais/text/__init__.py +0 -57
  27. spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/single_id_coach.py +0 -80
  28. spaces/Amrrs/image-caption-with-vit-gpt2/README.md +0 -46
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py +0 -578
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_image_processor.py +0 -149
  31. spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/__init__.py +0 -8
  32. spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py +0 -671
  33. spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py +0 -27
  34. spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py +0 -9
  35. spaces/Armored-Atom/Image-To-Motion/app.py +0 -128
  36. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/spinners.py +0 -159
  37. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_clib.py +0 -208
  38. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/extend.md +0 -141
  39. spaces/Benson/text-generation/Examples/Agua Clasificacin Rompecabezas Mod Apk Descargar.md +0 -48
  40. spaces/Benson/text-generation/Examples/Bmw Drift Apk.md +0 -54
  41. spaces/Benson/text-generation/Examples/Descargar El Juego Mod Township Offline.md +0 -79
  42. spaces/BetterAPI/BetterChat_new/src/lib/utils/sha256.ts +0 -7
  43. spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/attr.py +0 -72
  44. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/candidates.py +0 -552
  45. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py +0 -245
  46. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/train_net.py +0 -110
  47. spaces/CVPR/LIVE/pydiffvg/shape.py +0 -172
  48. spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/logical.h +0 -22
  49. spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/fcn_occmask_head.py +0 -570
  50. spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/audio_text.py +0 -36
spaces/1acneusushi/gradio-2dmoleculeeditor/data/8bf Download BEST Full.md DELETED
@@ -1,116 +0,0 @@
- <br />
- <h1>8bf Download Full: How to Enhance Your Image Editing with Free Plugins</h1>
- <p>If you are an avid user of Photoshop or other image editing software, you may have heard of <strong>8bf files</strong>. These are files that contain <strong>Photoshop filter plug-ins</strong>, which are extensions that add extra functionality, such as new image filters, to Photoshop and compatible programs. These plug-ins can help you customize your Photoshop experience and create stunning images with ease.</p>
- <p>In this article, we will show you <strong>how to download and install 8bf plugins</strong> from reliable sources, and <strong>how to use them</strong> in your image editing projects. We will also introduce you to some of the <strong>best 8bf plugins</strong> that you can get for free, and how they can enhance your creative and professional image editing. Whether you are a beginner or an expert, you will find something useful and interesting in this article.</p>
- <h2>8bf download full</h2><br /><p><b><b>Download Zip</b> >>> <a href="https://byltly.com/2uKzZp">https://byltly.com/2uKzZp</a></b></p><br /><br />
- <h2>How to Download and Install 8bf Plugins</h2>
- <p>The first step to using 8bf plugins is to <strong>download them</strong> from the internet. There are many websites that offer free or paid plugins for Photoshop and other image editing software, but not all of them are trustworthy or compatible. You need to be careful when choosing where to download your plugins from, and make sure they are safe and suitable for your software version.</p>
- <p>One of the best places to get free Photoshop plug-ins is <strong>Adobe's own website</strong>. You can sort the hundreds of free resources by rating, popularity, or date added, and find what you need easily. These plug-ins are installed differently than the others on this list. You must have a free Adobe account and the Creative Cloud program installed to use them.</p>
- <p>Another good source of free Photoshop filters and plug-ins is <strong>Lifewire</strong>, which has compiled a list of the five best sites for free Photoshop filters and plug-ins. You can find links to these sites on their page, along with directions on how to install them.</p>
- <p>Once you have downloaded your desired plugin, you need to <strong>install it</strong> on your computer. The installation process may vary depending on the file format and the software you are using, but here are some general steps that you can follow:</p>
- <ul>
- <li>If the plugin is downloaded in <code>.8bf</code> format, you need to copy and paste the file into Photoshop's filter folder. On Windows, it's usually here: <code>C:\Program Files\Adobe\Adobe Photoshop (version)\Plug-ins\Filters\</code>. However, if putting the filter in that folder doesn't work, try this one: <code>C:\Program Files\Common Files\Adobe\Plug-Ins\CC</code>.</li>
- <li>If the plugin is downloaded in <code>.zxp</code> format, you need to use a program called <strong>Adobe Extension Manager</strong> or <strong>ZXPInstaller</strong> to install it. You can download these programs for free from their official websites. Once you have them, you can drag and drop the zxp file into the program and follow the instructions.</li>
- <li>If the plugin is downloaded in <code>.exe</code> format, you need to run the file and follow the installation wizard. Make sure you choose the correct destination folder for your Photoshop version.</li>
- </ul>
- <p>After you have installed your plugin, you need to <strong>access it</strong> from your image editing software. In Photoshop, you can usually find your plugins under the <strong>window menu</strong>, under <strong>extensions</strong> or <strong>filters</strong>. You can also use the <strong>search bar</strong> at the top of Photoshop to find your plugin by name. Once you have opened your plugin, you can use it as instructed by the developer.</p>
- <h2>Best 8bf Plugins for Creative and Professional Image Editing</h2>
- <p>Now that you know how to download and install 8bf plugins, you may be wondering which ones are worth trying. There are thousands of plugins available online, but not all of them are equally useful or high-quality. To help you narrow down your choices, we have selected some of the <strong>best 8bf plugins</strong> that you can get for free, and how they can enhance your creative and professional image editing.</p>
- <p></p>
- <h3>Adobe's Free Photoshop Plug-ins</h3>
- <p>If you want to get the most out of your Photoshop experience, you should definitely check out Adobe's own collection of free plug-ins. These plug-ins are designed by Adobe experts and offer a huge variety of features and effects that can improve your workflow and creativity. Some of the most popular and useful plug-ins are:</p>
- <ul>
- <li><strong>Kuler Panel</strong>: This plug-in allows you to create, explore, and share color themes that you can use in your projects. You can browse thousands of color combinations created by other users, or create your own using various color rules and modes. You can also sync your themes with other Adobe products, such as Illustrator or InDesign.</li>
- <li><strong>Lens Profile Creator</strong>: This plug-in helps you correct lens distortions, such as barrel or pincushion distortion, vignetting, or chromatic aberration. You can create custom profiles for your lenses based on calibration images, or use predefined profiles for common lenses. You can also share your profiles with other users or download profiles created by others.</li>
- <li><strong>Perspective Warp</strong>: This plug-in allows you to adjust the perspective of your images without distorting them. You can create multiple planes in your image and manipulate them independently, or merge them into a single plane. You can also change the viewpoint of your image, such as changing from a bird's eye view to a worm's eye view.</li>
- <li><strong>Pixlr-o-matic</strong>: This plug-in lets you add retro effects to your images with just a few clicks. You can choose from hundreds of filters, overlays, and borders to create vintage-looking photos. You can also mix and match different effects to create your own unique style.</li>
- <li><strong>Social Kit Pro</strong>: This plug-in helps you create professional-looking social media graphics, such as cover photos, profile pictures, or ads. You can choose from various templates that match the dimensions and guidelines of different social platforms, such as Facebook, Twitter, or YouTube. You can also customize your graphics with text, shapes, images, or logos.</li>
- </ul>
- <h3>Mehdi's Free Photoshop Filters</h3>
- <p>If you are looking for some simple but powerful filters that can transform your images in amazing ways, you should try Mehdi's free Photoshop filters. These filters are created by Mehdi Rabah, a French developer who has been making Photoshop plugins since 2002. His website offers dozens of filters with detailed explanations and examples of what they do. Some of his most popular and useful filters are:</p>
- <ul>
- <li><strong>Kaleidoscope 2.1</strong>: This filter allows you to create kaleidoscopic patterns from any image. You can adjust the number of segments, the angle, the zoom, and the offset of the pattern. You can also apply different blending modes and colors to create stunning effects.</li>
- <li><strong>Weaver 2.0</strong>: This filter allows you to create realistic woven textures from any image. You can adjust the size, the shape, the color, and the opacity of the threads. You can also apply different effects, such as embossing or shadowing, to make the texture more 3D.</li>
- <li><strong>Seamless Border 2.0</strong>: This filter allows you to create seamless borders from any image. You can adjust the width, the height, the angle, and the offset of the border. You can also apply different effects, such as mirroring, flipping, or rotating, to make the border more interesting.</li>
- <li><strong>Sorting Tiles 1.1</strong>: This filter allows you to create mosaic-like effects from any image. You can adjust the size, the shape, the color, and the order of the tiles. You can also apply different effects, such as blurring, sharpening, or inverting, to make the tiles more varied.</li>
- <li><strong>Wavy Lab 1.1</strong>: This filter allows you to create wavy patterns from any image. You can adjust the frequency, the amplitude, the phase, and the direction of the waves. You can also apply different effects, such as colorization, gradient, or transparency, to make the waves more colorful.</li>
- </ul>
- <h3>The Plugin Site's Free Photoshop Filters</h3>
- <p>If you want to get a lot of filters for a single download, you should check out The Plugin Site's free Photoshop filters. These filters are created by Harald Heim, a German developer who has been making Photoshop plugins since 1997. His website offers a single download that contains 70 image effects that can be applied to any image. Some of his most popular and useful filters are:</p>
- <ul>
- <li><strong>Color MegaMix 1.1</strong>: This filter allows you to modify the colors of your image in various ways. You can choose from 20 color modes and adjust the intensity and contrast of each mode. You can also mix different modes together to create new color effects.</li>
- <li><strong>Contrast Mask 1.0</strong>: This filter allows you to enhance the contrast and details of your image without losing quality. You can adjust the strength and radius of the contrast mask and apply it to different tonal ranges of your image.</li>
- <li><strong>Edge Detector 1.0</strong>: This filter allows you to detect and highlight the edges of your image in various ways. You can choose from 10 edge modes and adjust the threshold and smoothness of each mode. You can also invert or colorize the edges to create different effects.</li>
- <li><strong>Old Movie 1.0</strong>: This filter allows you to simulate the look of old movies on your image. You can adjust the amount and size of scratches, dust, hair, jitter, flicker, and noise on your image. You can also change the color and brightness of your image to make it look more aged.</li>
- <li><strong>Posterizer 1.0</strong>: This filter allows you to reduce the number of colors in your image and create poster-like effects. You can adjust the number of colors and levels for each color channel and apply dithering or smoothing to your image.</li>
- </ul>
- <h3>Lokas Software's Free 3D Shadow Filter</h3>
- <p>If you want to add realistic shadows to your images, you should try Lokas Software's free 3D Shadow filter. This filter is created by Lokas Software, a Russian company that has specialized in graphics software development since 1997. Their website offers a free filter that can create various types of shadows from any image or text layer. Some of the features of this filter are:</p>
- <ul>
- <li><strong>Shadow Type</strong>: You can choose from four types of shadows: drop shadow, perspective shadow, inner shadow, or reflection shadow.</li>
- <li><strong>Shadow Position</strong>: You can adjust the angle, distance, scale, and perspective of your shadow.</li>
- <li><strong>Shadow Color</strong>: You can choose any color for your shadow or use a gradient or a texture.</li>
- <li><strong>Shadow Quality</strong>: You can adjust the opacity, blur, noise, and softness of your shadow.</li>
- <li><strong>Shadow Effects</strong>: You can apply various effects to your shadow, such as glow, bevel, emboss, or contour.</li>
- </ul>
- <h3>Flaticon</h3>
- <p>If you need icons for your projects, you should check out Flaticon. Flaticon is a website that offers a large collection of free icons in various formats: PNG, SVG, EPS, PSD, or Base64. You can browse thousands of icons by category or keyword, or use their online editor to customize them. Some of the benefits of using Flaticon are:</p>
- <ul>
- <li><strong>Variety</strong>: You can find icons for any topic or theme you need: business, education, health, technology, etc. You can also find icons in different styles: flat, outline, 3D, hand-drawn, etc.</li>
- <li><strong>Quality</strong>: You can download icons in high resolution and vector format, which means they can be scaled and edited without losing quality. You can also use their online editor to change the color, size, orientation, or shape of the icons.</li>
- <li><strong>Compatibility</strong>: You can use Flaticon's icons in any software or platform that supports images. You can also use their Photoshop plugin to access and insert icons directly from Photoshop's window menu.</li>
- <li><strong>License</strong>: You can use Flaticon's icons for free for personal and commercial projects, as long as you credit the author and Flaticon. You can also get a premium subscription to access more icons and features without attribution.</li>
- </ul>
- <h3>Ink</h3>
- <p>If you are a designer who works with developers, you should try Ink. Ink is a Photoshop plugin that helps you create comprehensive design specifications for your projects. You can use Ink to generate useful information about your layers, such as dimensions, typography, colors, effects, etc. You can also export your design specifications as a PNG file or an HTML document. Some of the advantages of using Ink are:</p>
- <ul>
- <li><strong>Accuracy</strong>: You can ensure that your design is implemented exactly as you intended by providing precise and detailed information about your layers. You can also avoid misunderstandings and errors by communicating clearly with your developers.</li>
- <li><strong>Efficiency</strong>: You can save time and effort by generating design specifications automatically with Ink. You don't have to manually measure, label, or document your layers. You can also update your specifications easily if you make any changes to your design.</li>
- <li><strong>Convenience</strong>: You can access Ink from Photoshop's window menu or by using a keyboard shortcut. You can also customize Ink's settings to suit your preferences and needs. You can choose which information to include or exclude, how to format it, and how to export it.</li>
- <li><strong>Compatibility</strong>: You can use Ink with any version of Photoshop from CS6 to CC 2021. You can also use Ink with any language or operating system that supports Photoshop.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, 8bf plugins are files that contain Photoshop filter plug-ins, which are extensions that add extra functionality to Photoshop and compatible programs. These plug-ins can help you customize your Photoshop experience and create stunning images with ease.</p>
- <p>To use 8bf plugins, you need to download them from reliable sources, install them on your computer, and access them from your image editing software. In this article, we have shown you how to do that, and introduced you to some of the best 8bf plugins that you can get for free.</p>
- <p>We hope you have found this article useful and informative. If you want to learn more about 8bf plugins and how to use them in your projects, you can check out the following resources:</p>
- <ul>
- <li><a href="">Adobe's Free Photoshop Plug-ins</a></li>
- <li><a href="">Lifewire's List of Free Photoshop Filters and Plug-ins</a></li>
- <li><a href="">Mehdi's Free Photoshop Filters</a></li>
- <li><a href="">The Plugin Site's Free Photoshop Filters</a></li>
- <li><a href="">Lokas Software's Free 3D Shadow Filter</a></li>
- <li><a href="">Flaticon</a></li>
- <li><a href="">Ink</a></li>
- </ul>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about 8bf plugins and their answers:</p>
- <h4>What is the difference between filters and plugins?</h4>
- <p>Filters are a type of plugin that apply specific effects or transformations to an image or a layer. Plugins are a broader term that includes filters as well as other extensions that add extra functionality to Photoshop or compatible programs.</p>
- <h4>How can I uninstall or disable a plugin that I don't need?</h4>
- <p>To uninstall a plugin, you need to delete the file from the folder where you installed it. To disable a plugin temporarily, you can rename the file extension from <code>.8bf</code> or <code>.zxp</code> to something else, such as <code>.bak</code>. To enable it again, you need to rename it back to its original extension.</p>
- <h4>Are there any risks or drawbacks of using 8bf plugins?</h4>
- <p>Using 8bf plugins is generally safe and beneficial, as long as you download them from reputable sources and install them correctly. However, there are some potential risks or drawbacks that you should be aware of, such as:</p>
- <ul>
- <li><strong>Compatibility issues</strong>: Some plugins may not work well with your software version or operating system. They may cause errors, crashes, or performance issues. To avoid this, you should always check the compatibility and requirements of the plugins before installing them.</li>
- <li><strong>Security issues</strong>: Some plugins may contain malware or viruses that can harm your computer or compromise your data. To avoid this, you should always scan the files with an antivirus program before installing them. You should also only download plugins from trusted sources and avoid clicking on suspicious links or pop-ups.</li>
- <li><strong>Quality issues</strong>: Some plugins may not be well-designed or well-maintained. They may have bugs, glitches, or limitations that can affect your image quality or user experience. To avoid this, you should always read the reviews and ratings of the plugins before installing them. You should also update your plugins regularly and report any problems to the developers.</li>
- </ul>
- <h4>How can I update or troubleshoot my plugins?</h4>
- <p>To update your plugins, you need to check the websites of the developers for any new versions or updates. You can also use programs like Adobe Extension Manager or ZXPInstaller to manage your plugins and check for updates. To troubleshoot your plugins, you need to identify the source of the problem and try some common solutions, such as:</p>
- <ul>
- <li><strong>Restarting your software or computer</strong>: This can help resolve any temporary issues or conflicts that may cause your plugins to malfunction.</li>
- <li><strong>Reinstalling your plugins</strong>: This can help fix any corrupted or missing files that may prevent your plugins from working properly.</li>
- <li><strong>Disabling other plugins</strong>: This can help determine if there is any incompatibility or interference between your plugins that may cause errors or crashes.</li>
- <li><strong>Contacting the developers</strong>: This can help get support and guidance from the creators of the plugins. You can find their contact information on their websites or in the plugin documentation.</li>
- </ul>
- <h4>Where can I find more resources and tutorials on using 8bf plugins?</h4>
- <p>If you want to learn more about using 8bf plugins in your projects, you can find many resources and tutorials online. Some of the best ones are:</p>
- <ul>
- <li><a href="">Photoshop Essentials: How to Install Plugins in Photoshop</a></li>
- <li><a href="">Photoshop Tutorials: 50 Best Photoshop Plugins for 2021</a></li>
- <li><a href="">Creative Bloq: The Best Photoshop Plugins for Designers</a></li>
- <li><a href="">Envato Tuts+: How to Use Photoshop Plugins: A Beginner's Guide</a></li>
- <li><a href="">YouTube: Photoshop Plugin Tutorials Playlist</a></li>
- </ul></p> b2dd77e56b<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Can 39t Install 32 Bit Windows 10.md DELETED
@@ -1,16 +0,0 @@
- <br />
- <h1>What to Do If You Can't Install 32 Bit Windows 10</h1>
- <p>Windows 10 is the latest and most advanced operating system from Microsoft. It comes in two versions: 32 bit and 64 bit. The 32 bit version is designed for older computers that have less than 4 GB of RAM, while the 64 bit version is designed for newer computers that have more than 4 GB of RAM. The 64 bit version also has some advantages over the 32 bit version, such as better security, performance, and compatibility.</p>
- <h2>can't install 32 bit windows 10</h2><br /><p><b><b>Download File</b> &#9675; <a href="https://byltly.com/2uKwZe">https://byltly.com/2uKwZe</a></b></p><br /><br />
- <p>However, some users may prefer to install the 32 bit version of Windows 10 on their computers for various reasons. For example, they may have some legacy software or hardware that only works with the 32 bit version, or they may want to save some disk space or memory. In some cases, users may also encounter problems when trying to install the 64 bit version of Windows 10, such as compatibility issues, error messages, or slow installation.</p>
- <p>If you are one of those users who want to install the 32 bit version of Windows 10 on your computer, but you can't do it for some reason, don't worry. There are some possible solutions that can help you fix this problem and enjoy the benefits of Windows 10. Here are some of them:</p>
- <ul>
- <li>Check your system requirements. Before you try to install the 32 bit version of Windows 10, make sure that your computer meets the minimum system requirements for it. According to Microsoft, you need at least a 1 GHz processor, 1 GB of RAM, 16 GB of free disk space, a DirectX 9 compatible graphics card, and a DVD drive or a USB port. If your computer does not meet these requirements, you may not be able to install the 32 bit version of Windows 10.</li>
- <li>Check your BIOS settings. Another possible reason why you can't install the 32 bit version of Windows 10 is that your BIOS settings are preventing it. BIOS stands for Basic Input/Output System, and it is a software that controls the basic functions of your computer, such as booting up, detecting hardware, and setting up the operating system. Sometimes, the BIOS settings may be configured to only allow the installation of the 64 bit version of Windows 10. To fix this, you need to access your BIOS settings and change them accordingly. The exact steps may vary depending on your computer model and manufacturer, but generally, you need to restart your computer and press a certain key (such as F2, F10, or Del) to enter the BIOS setup menu. Then, look for an option that says something like "OS Type", "Boot Mode", or "UEFI/Legacy". Change this option to "Legacy" or "Other OS" if it is set to "UEFI" or "Windows". Save your changes and exit the BIOS setup menu.</li>
- <li>Use a bootable USB drive or DVD. Another possible solution is to use a bootable USB drive or DVD that contains the installation files for the 32 bit version of Windows 10. You can create one using another computer that has Windows 10 installed on it. To do this, you need a USB drive or a DVD with at least 8 GB of storage space, and a tool called Media Creation Tool from Microsoft. You can download this tool from <a href="https://www.microsoft.com/en-us/software-download/windows10">here</a>. Once you have downloaded and run the tool, follow the instructions on the screen to create a bootable USB drive or DVD with the 32 bit version of Windows 10. Then, insert the USB drive or DVD into your computer and restart it. You should see a message that prompts you to press any key to boot from the USB drive or DVD. Press any key and follow the instructions on the screen to install the 32 bit version of Windows 10.</li>
- <li>Contact Microsoft support. If none of the above solutions work for you, you may need to contact Microsoft support for further assistance. You can do this by visiting <a href="https://support.microsoft.com/en-us/contactus/">this page</a> and choosing the option that best suits your problem. You can also call them at +1-800-642-7676 (US) or +44-800-026-03-30 (UK). They will guide you through the steps to troubleshoot and resolve your issue.</li>
- </ul>
- <p>Installing the 32 bit version of Windows 10 on your computer can be tricky sometimes, but with the solutions above, you should be able to fix the problem and get it running.</p>
- <p></p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 5 Free Download Full Version with Key 64 Bit What You Need to Know.md DELETED
@@ -1,20 +0,0 @@
- <br />
- <h1>Edius 5 Free Download Full Version with Key 64 Bit: How to Edit Videos Like a Pro</h1>
- <p>Edius 5 is a professional video editing software that can handle various formats and resolutions. It is widely used by broadcasters, filmmakers, and enthusiasts who want to create high-quality videos with ease and speed. However, Edius 5 is not a free software, and it requires a valid license key to activate and use it. If you are looking for a way to get Edius 5 for free, you may have come across some websites that offer Edius 5 free download full version with key 64 bit. A keygen is a tool that can generate and inject product keys into your software to bypass the activation process. In this article, we will explain what Edius 5 free download full version with key 64 bit is, how it works, and how to download and use it safely.</p>
- <h2>edius 5 free download full version with key 64 bit</h2><br /><p><b><b>Download Zip</b> ===> <a href="https://byltly.com/2uKygL">https://byltly.com/2uKygL</a></b></p><br /><br />
- <h2>What is Edius 5 Free Download Full Version with Key 64 Bit?</h2>
- <p>Edius 5 free download full version with key 64 bit is a package that contains the installation files of Edius 5 and a keygen tool that can create and apply product keys for Edius 5. The product key is a code that identifies your software license and allows you to activate and use it. Normally, you need to purchase a product key from Grass Valley or an authorized reseller, but with a keygen, you can generate your own product key for free.</p>
- <p>The keygen tool that comes with Edius 5 free download full version with key 64 bit is called X-Force 2016. It is a popular and reliable tool that can activate various Grass Valley products, such as Edius, ProCoder, Storm, etc. X-Force 2016 works by contacting a custom KMS server instead of the official Grass Valley Activation Server. KMS stands for Key Management Service, which is a feature that allows large organizations to activate multiple devices with a single product key. X-Force 2016 mimics this feature and creates new product keys that are verified by the custom KMS server. This way, your Edius 5 will think it is activated by a legitimate source.</p>
- <h2>How to Download and Use Edius 5 Free Download Full Version with Key 64 Bit?</h2>
- <p>Before you download and use Edius 5 free download full version with key 64 bit, you should know that it is not an official or legal product. It may violate Grass Valley's terms of service and cause some security risks. Therefore, you should use it at your own discretion and responsibility.</p>
- <p></p>
- <p>That being said, here are the steps to download and use Edius 5 free download full version with key 64 bit:</p>
- <ol>
- <li>Download Edius 5 free download full version with key 64 bit from a reliable source. You can find many websites that offer this package, but some of them may contain malware or viruses. We recommend you to download it from <a href="https://en.freedownloadmanager.org/users-choice/Download_Free_Edius_5_For_Pc.html">this website</a>, which is a free download manager that can help you find and download various software. You will get a ZIP file with an executable file named EDIUS_5.exe.</li>
- <li>Extract the ZIP file using the password provided. The password is "www.downloadly.ir" (without quotes).</li>
- <li>Run EDIUS_5.exe as administrator. You may see a Windows Protected Your PC message, but you can ignore it and choose Run Anyway.</li>
- <li>Follow the on-screen instructions to complete the installation. You will need to enter a serial number and a product key during the installation. You can use any of these serial numbers: <ul><li>666-69696969</li><li>667-98989898</li><li>400-45454545</li></ul> And this product key: <ul><li>001H1</li></ul></li>
- <li>After the installation is finished, do not run Edius 5 yet. You need to apply the keygen first.</li>
- <li>Go to the folder where you extracted the ZIP file and find the folder named "xf-adsk2016_x64". Inside this folder, you will see another executable file named xf-adsk2016_x</p> ddb901b051<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fonelab For Ios WORK Crack.md DELETED
@@ -1,32 +0,0 @@
-
- <h1>How to Recover Lost Data from iOS Devices with FoneLab for iOS Crack</h1>
- <p>If you have ever lost or deleted important data from your iPhone, iPad, or iPod touch, you know how frustrating it can be. Whether it's because of accidental deletion, water damage, system crash, forgotten passcode, or any other reason, losing your precious data can be a nightmare.</p>
- <h2>Fonelab for ios crack</h2><br /><p><b><b>Download</b> &#9999; &#9999; &#9999; <a href="https://byltly.com/2uKzxI">https://byltly.com/2uKzxI</a></b></p><br /><br />
- <p>Fortunately, there is a way to recover your lost data without spending a fortune on professional services or risking further damage to your device. FoneLab for iOS Crack is a powerful and reliable data recovery software that can help you restore your contacts, photos, messages, videos, music, notes, and more from any iOS device or iTunes/iCloud backup.</p>
- <p>FoneLab for iOS Crack is easy to use and works with all iOS devices and iOS versions. You can download it for free from HaxPC.net and follow the simple steps below to recover your data in minutes.</p>
- <h2>Step 1: Download and install FoneLab for iOS Crack</h2>
- <p>Go to <a href="https://haxpc.net/fonelab-crack/">https://haxpc.net/fonelab-crack/</a> and download the FoneLab for iOS Crack file. Extract the file and run the setup to install the software on your computer. Launch the program and choose the "Recover from iOS Device" mode.</p>
- <h2>Step 2: Connect your iOS device to the computer</h2>
- <p>Use a USB cable to connect your iPhone, iPad, or iPod touch to the computer. The software will automatically detect your device and show its information on the interface. If your device is locked or disabled, you can use FoneLab iOS Unlocker Crack to remove the passcode or Apple ID first.</p>
- <p></p>
- <h2>Step 3: Scan your device for lost data</h2>
- <p>Click the "Start Scan" button to let the software scan your device for lost or deleted data. The scanning process may take some time depending on the amount of data on your device. You can preview the scanned data by category on the left panel.</p>
- <h2>Step 4: Recover your data</h2>
- <p>Select the data you want to recover and click the "Recover" button. You can choose to recover the data to your computer or directly to your device. The software will start recovering your data and save it in the specified location. You can check the recovered data on your computer or device.</p>
- <p>Congratulations! You have successfully recovered your lost data from your iOS device with FoneLab for iOS Crack. You can also use this software to recover data from iTunes or iCloud backup if you have one. FoneLab for iOS Crack is a lifesaver for anyone who wants to recover their precious data from their iOS devices without hassle.</p>
-
- <h2>Why Choose FoneLab for iOS Crack?</h2>
- <p>There are many data recovery software available on the market, but FoneLab for iOS Crack stands out for its features and benefits. Here are some of the reasons why you should choose FoneLab for iOS Crack to recover your lost data from your iOS devices:</p>
- <ul>
- <li>It supports all iOS devices and iOS versions, including the latest iPhone 12 and iOS 14.</li>
- <li>It can recover various types of data, such as contacts, photos, messages, videos, music, notes, WhatsApp, iMessage, call history, etc.</li>
- <li>It can recover data from your device directly or from iTunes/iCloud backup.</li>
- <li>It can recover data in different scenarios, such as accidental deletion, water damage, system crash, forgotten passcode, device lost/stolen, etc.</li>
- <li>It can recover data without damaging your device or overwriting your data.</li>
- <li>It can recover data quickly and easily with a few clicks.</li>
- <li>It can preview the data before recovering it and selectively recover the data you want.</li>
- <li>It can recover data to your computer or directly to your device.</li>
- </ul>
- <p>With FoneLab for iOS Crack, you can rest assured that your data is safe and secure. You can download it for free from HaxPC.net and enjoy its full features without any limitations. FoneLab for iOS Crack is the best choice for anyone who wants to recover their lost data from their iOS devices with ease and efficiency.</p> 7b8c122e87<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Autodesk AutoCad 2019.1.1 (x86x64) Crack Keygen.md DELETED
@@ -1,6 +0,0 @@
- <h2>Autodesk AutoCad 2019.1.1 (x86x64) Crack keygen</h2><br /><p><b><b>Download File</b> ->->->-> <a href="https://imgfil.com/2uy0Mo">https://imgfil.com/2uy0Mo</a></b></p><br /><br />
-
- Download and Install Sage 50 2014 (2015 - 2016 Academic Year) ... Learn Accounting in 1 HOUR First ... 1fdad05405<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/D16 Group PunchBOX V1.0.1 WiN OSX.md DELETED
@@ -1,8 +0,0 @@
- <br />
- <p><strong>music samples germany</strong><br /> The PunchBOX program speaks to both the pre-production clientele and the hands-on crowd. It includes many extras that let musicians put their own tonal colors to work. For that reason, it comes recommended for every musician.</p>
- <p><strong>mix magazine germany</strong><br /> PunchBOX is a must for every musician who produces electronic music. Besides the massive preset selection, we created convincing, characterful bass drums for our own production in no time at all. Alongside the very good presets, the included samples also play in the top league. Anyone looking for the bass drum for their next trap, EDM, dubstep, or techno track will find the right solution with PunchBOX in very short order. At a price of 79 euros, you don't need to think twice.</p>
- <h2>D16 Group PunchBOX v1.0.1 WiN OSX</h2><br /><p><b><b>DOWNLOAD</b> &#9913; <a href="https://imgfil.com/2uxYo0">https://imgfil.com/2uxYo0</a></b></p><br /><br />
- <p>If you're obsessed (as we are at Sweetwater) with crafting the perfect bass drum sound, you'll love D16 Group's PunchBOX plug-in. PunchBOX combines sampling and synthesis in a virtual kick drum instrument that will revitalize your music. The samples are meticulously crafted using only the finest instruments and vintage analog gear. The kick synthesizers are based on D16's acclaimed emulations of classic Roland drum machines, customized and upgraded for deployment in PunchBOX. The PunchBOX audio engine consists of four sound generators, each of them dedicated to a key component of your kick sound.</p>
- <p>PunchBOX is the first of a suite of instruments that D16 Group has created, and it's easy to see why. The sounds are fun, easy to use, and easy to create. You can use the preset library to instantly get what you want, and you can always tweak them to exactly what you want. You get a lot of bang for your buck with this instrument. The fact that it can be used with a MIDI controller is a bonus, but the fact that it has so many features and is so easy to use makes it even more attractive.</p> 899543212b<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Film India Kabhi Khushi Kabhie Gham Online Subtitrat.md DELETED
@@ -1,6 +0,0 @@
- <h2>Indian film Kabhi Khushi Kabhie Gham online with subtitles</h2><br /><p><b><b>Download Zip</b> &#128504;&#128504;&#128504; <a href="https://imgfil.com/2uy0hT">https://imgfil.com/2uy0hT</a></b></p><br /><br />
- <br />
- ... Cr3ative Zone. The Crazy Ones Season 1 Episode 1, series online with Romanian subtitles | Cr3ative Zone ... Robin Williams: Seven of his most memorable movie roles. Robin Williams ... Hinduism India Romance Films. Black Girl Digs Bollywood (BGDB): "Yeh Ladki Hai Allah" from "Kabhi Khushi Kabhie Gham... " (2001). 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1phancelerku/anime-remove-background/Asphalt Nitro 2 Mod Apk The Ultimate Guide to 60 FPS and Infinite Money.md DELETED
@@ -1,87 +0,0 @@
- <br />
- <h1>Asphalt Nitro 2 Mod APK 60 FPS: A Review</h1>
- <p>If you are a fan of racing games, you might have heard of Asphalt Nitro 2, a mobile game developed and published by Gameloft as part of the Asphalt series. But did you know that there is a modded version of the game that allows you to play it at 60 frames per second (FPS) and enjoy unlimited money and other features? In this article, we will review Asphalt Nitro 2 Mod APK 60 FPS, a modified version of the game that enhances your gaming experience. We will also tell you how to download and install it, and how to play it with some tips and tricks.</p>
- <h2>What is Asphalt Nitro 2?</h2>
- <h3>A racing game for low-end devices</h3>
- <p>Asphalt Nitro 2 is an arcade racing game that was announced in 2021 and is currently available in beta for Android users. It is basically Asphalt but for low-end devices, as it offers so much excitement in a compact (50 MB) package. It is designed to run smoothly on a wide range of mobile devices, including phones with weaker hardware specs.</p>
- <h2>asphalt nitro 2 mod apk 60 fps</h2><br /><p><b><b>DOWNLOAD</b> &#10040;&#10040;&#10040; <a href="https://jinyurl.com/2uNRsv">https://jinyurl.com/2uNRsv</a></b></p><br /><br />
- <h3>Features of the game</h3>
- <p>Asphalt Nitro 2 features top-notch graphics, 20 licensed supercars, four arcade game modes, and 230 races in gorgeous locations around New Zealand and Japan. You can drive famous supercar brands such as Lamborghini, Bugatti, Ferrari, and more, and perform crazy stunts while in the driver's seat. The game also features Asphalt 9's revolutionary TouchDrive technology, which streamlines car steering and allows you to play with just one hand on the screen. However, you can also turn off this mode in the settings if you prefer manual control.</p>
- <h2>What is Asphalt Nitro 2 Mod APK 60 FPS?</h2>
- <h3>A modified version of the game</h3>
- <p>Asphalt Nitro 2 Mod APK 60 FPS is a modified version of the game that enhances your gaming experience by unlocking some features that are not available in the original version. For example, you can play the game at 60 FPS, which makes the graphics smoother and more realistic. You can also enjoy unlimited money, which means you can buy any car or upgrade you want without worrying about the cost. Moreover, you can access all the cars and tracks without having to complete any missions or challenges.</p>
- <h3>Benefits of the mod</h3>
- <p>The benefits of using Asphalt Nitro 2 Mod APK 60 FPS are obvious. You can have more fun playing the game with better graphics, more money, and more options. You can also save your time and effort by skipping the tedious tasks that are required to unlock the content in the original version. You can simply download and install the mod and start playing right away.</p>
- <h2>How to download and install Asphalt Nitro 2 Mod APK 60 FPS?</h2>
- <h3>Steps to download and install</h3>
- <p>If you want to try Asphalt Nitro 2 Mod APK 60 FPS, you will need to follow these steps:</p>
- <ol>
- <li>Go to this link and download the mod APK file.</li>
- <li>Go to your device's settings and enable installation from unknown sources.</li>
- <li>Locate the downloaded file in your file manager and tap on it to install it.</li>
- <li>Wait for the installation to finish and launch the game.</li>
- <li>Enjoy playing Asphalt Nitro 2 Mod APK 60 FPS.</li>
- </ol>
- <h3>Precautions and tips</h3>
- <p>Before you download and install the mod, you should take some precautions and tips into account:</p>
- <ul>
- <li>Make sure you have enough storage space on your device to install the mod.</li>
- <li>Make sure you have a stable internet connection to download the mod and play the game online.</li>
- <li>Make sure you have a backup of your original game data in case something goes wrong with the mod.</li>
- <li>Make sure you do not use the mod for any illegal or unethical purposes, such as cheating or hacking.</li>
- <li>Make sure you do not update the game from the Play Store, as it may overwrite the mod and cause errors.</li>
- </ul>
- <h2>How to play Asphalt Nitro 2 Mod APK 60 FPS?</h2>
- <h3>Game modes and tracks</h3>
- <p>Asphalt Nitro 2 Mod APK 60 FPS offers four game modes: Career, Quick Race, Multiplayer, and Events. In Career mode, you can complete various missions and challenges to earn money and reputation. In Quick Race mode, you can choose any track and car and race against AI opponents. In Multiplayer mode, you can race against other players online and compete for rankings and rewards. In Events mode, you can participate in limited-time events and win exclusive prizes.</p>
- <p>The game also features 10 tracks in two locations: New Zealand and Japan. Each track has its own characteristics, such as curves, jumps, shortcuts, and obstacles. You can explore different routes and discover hidden secrets on each track. You can also customize the weather and time of day for each track.</p>
- <h3>Tips and tricks for beginners</h3>
- <p>If you are new to Asphalt Nitro 2 Mod APK 60 FPS, here are some tips and tricks that can help you improve your skills and performance:</p>
- <p>asphalt nitro 2 mod apk unlimited money and ultra graphics<br />
- asphalt nitro 2 mod apk download link with max graphics and 60 fps<br />
- asphalt nitro 2 mod apk gameplay video with all effects and infinite money<br />
- asphalt nitro 2 mod apk latest version with high resolution and smooth performance<br />
- asphalt nitro 2 mod apk free download for android with unlocked cars and tracks<br />
- asphalt nitro 2 mod apk offline mode with realistic physics and sound effects<br />
- asphalt nitro 2 mod apk no root required with easy installation and updates<br />
- asphalt nitro 2 mod apk hack features with cheats and tips<br />
- asphalt nitro 2 mod apk best settings for low-end devices and battery saving<br />
- asphalt nitro 2 mod apk review and rating by users and experts<br />
- asphalt nitro 2 mod apk comparison with original game and other racing games<br />
- asphalt nitro 2 mod apk how to play guide with tutorials and tricks<br />
- asphalt nitro 2 mod apk support and feedback from developers and community<br />
- asphalt nitro 2 mod apk new features and improvements in the latest update<br />
- asphalt nitro 2 mod apk challenges and achievements to complete and unlock<br />
- asphalt nitro 2 mod apk online multiplayer mode with friends and rivals<br />
- asphalt nitro 2 mod apk customizations and upgrades for cars and drivers<br />
- asphalt nitro 2 mod apk screenshots and wallpapers to download and share<br />
- asphalt nitro 2 mod apk fun facts and trivia about the game and its development<br />
- asphalt nitro 2 mod apk system requirements and compatibility with different devices<br />
- asphalt nitro 2 mod apk bugs and issues to report and fix<br />
- asphalt nitro 2 mod apk alternatives and similar games to try out<br />
- asphalt nitro 2 mod apk news and updates from official sources and media outlets<br />
- asphalt nitro 2 mod apk FAQs and answers to common questions and problems<br />
- asphalt nitro 2 mod apk testimonials and feedback from satisfied users and fans</p>
- <ul>
- <li>Use nitro wisely. Nitro is a boost that can help you speed up and overtake your rivals. However, it is not unlimited and it can run out quickly. You can refill your nitro by performing stunts, such as drifting, jumping, knocking down opponents, or hitting nitro bottles on the track. You can also use different types of nitro, such as perfect nitro, shockwave nitro, or double nitro, depending on your situation.</li>
- <li>Upgrade your cars. Upgrading your cars can improve their stats, such as speed, acceleration, handling, and nitro. You can upgrade your cars by spending money or using cards that you can obtain from races or events. You can also customize your cars by changing their color, decals, rims, or license plates.</li>
- <li>Choose the right car for each track. Different cars have different strengths and weaknesses, such as top speed, acceleration, handling, or nitro efficiency. You should choose the car that suits your style and the track's conditions. For example, if the track has many curves and turns, you should choose a car with good handling and nitro efficiency. If the track has long straight roads, you should choose a car with high top speed and acceleration.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Asphalt Nitro 2 Mod APK 60 FPS is a modified version of Asphalt Nitro 2 that enhances your gaming experience by unlocking some features that are not available in the original version. You can play the game at 60 FPS, enjoy unlimited money, and access all the cars and tracks without having to complete any missions or challenges. You can also download and install the mod easily by following the steps we have provided in this article. However, you should also be careful and responsible when using the mod and follow the precautions and tips we have given you. We hope you have fun playing Asphalt Nitro 2 Mod APK 60 FPS.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Asphalt Nitro 2 Mod APK 60 FPS:</p>
- <ol>
- <li><b>Is Asphalt Nitro 2 Mod APK 60 FPS safe to use?</b><br>
- Yes, Asphalt Nitro 2 Mod APK 60 FPS is safe to use as long as you download it from a trusted source and follow the precautions we have mentioned in this article. However, you should also be aware that using mods may violate the terms of service of the game and may result in bans or penalties from Gameloft.</li>
- <li><b>Can I play Asphalt Nitro 2 Mod APK 60 FPS offline?</b><br>
- No, Asphalt Nitro 2 Mod APK 60 FPS requires an internet connection to play online with other players or participate in events. However, you can play Career mode or Quick Race mode offline if you want.</li>
- <li><b>Can I play Asphalt Nitro 2 Mod APK 60 FPS on iOS devices?</b><br>
- No, Asphalt Nitro 2 Mod APK 60 FPS is only compatible with Android devices. However, you can play the original version of Asphalt Nitro 2 on iOS devices if you want.</li>
- <li><b>What are the minimum requirements to play Asphalt Nitro 2 Mod APK 60 FPS?</b><br>
- The minimum requirements to play Asphalt Nitro 2 Mod APK 60 FPS are the same as the original version of Asphalt Nitro 2. You will need an Android device with at least 1 GB of RAM, 50 MB of free storage space, and Android 4.4 or higher.</li>
- <li><b>Where can I get more information about Asphalt Nitro 2 Mod APK 60 FPS?</b><br>
- You can get more information about Asphalt Nitro 2 Mod APK 60 FPS by visiting the official website of the mod or by joining the official Discord server of the mod. You can also watch some gameplay videos of the mod on YouTube or read some reviews of the mod on Reddit.</li>
- </ol></p> 401be4b1e0<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Doodle Army 2 Mini Militia Hile APK 4.3. 4 - The Ultimate Guide.md DELETED
@@ -1,125 +0,0 @@
1
- <br />
2
- <h1>What is Mini Militia Hile APK 4.3. 4?</h1>
3
- <p>If you are a fan of shooting games, you might have heard of Doodle Army 2: Mini Militia, a popular multiplayer game that lets you battle with up to 12 players online or offline. The game offers various modes, weapons, maps, and customization options to make your gaming experience more fun and exciting.</p>
4
- <p>But what if you want to enjoy more features and advantages in the game? That's where Mini Militia Hile APK 4.3. 4 comes in handy. This is a modded version of the original game that gives you unlimited access to everything in the game, such as ammo, health, jetpack, pro pack, and more. With this modded version, you can dominate the battlefield and have more fun with your friends.</p>
5
- <h2>mini militia hile apk 4.3. 4</h2><br /><p><b><b>Download</b> ---> <a href="https://jinyurl.com/2uNL6N">https://jinyurl.com/2uNL6N</a></b></p><br /><br />
6
- <h2>Why should you download Mini Militia Hile APK 4.3. 4?</h2>
7
- <p>There are many reasons why you should download Mini Militia Hile APK 4.3. 4 on your Android device. Here are some of them:</p>
8
- <ul>
9
- <li>You can get unlimited ammo, health, jetpack, pro pack, and other resources in the game.</li>
10
- <li>You can unlock all the weapons, skins, avatars, and maps in the game.</li>
11
- <li>You can play online or offline with your friends or other players from around the world.</li>
12
- <li>You can customize your character and your gameplay settings according to your preferences.</li>
13
- <li>You can enjoy a smooth and lag-free gaming experience with no ads or bugs.</li>
14
- </ul>
15
- <h2>How to download and install Mini Militia Hile APK 4.3. 4?</h2>
16
- <p>Downloading and installing Mini Militia Hile APK 4.3. 4 is very easy and simple. Just follow these steps:</p>
17
- <ol>
18
- <li>Go to [this link](^1^) and download the APK file of Mini Militia Hile APK 4.3. 4 on your Android device.</li>
19
- <li>Before installing the APK file, make sure you enable the "Unknown Sources" option in your device settings.</li>
20
- <li>After enabling the option, locate the downloaded APK file and tap on it to start the installation process.</li>
21
- <li>Follow the instructions on the screen and wait for the installation to complete.</li>
22
- <li>Once the installation is done, launch the game and enjoy playing Mini Militia Hile APK 4.3. 4.</li>
23
- </ol>
24
- <p>Here are some screenshots of the installation process:</p>
25
- <img src="^2^" alt="Screenshot 1">
26
- <img src="^3^" alt="Screenshot 2">
27
- <img src="^4^" alt="Screenshot 3">
28
- <h2>How to play Mini Militia Hile APK 4.3. 4?</h2>
29
- <p>Playing Mini Militia Hile APK 4.3. 4 is very similar to playing the original game, except that you have more features and advantages in the modded version. Here are some tips and tricks for playing Mini Militia Hile APK 4.3. 4:</p>
30
- <ul>
31
- <li>Choose your mode wisely: You can play in different modes such as survival, deathmatch, team deathmatch, capture the flag, etc. Choose the mode that suits your skills and preferences.</li>
32
- <li>Use your weapons smartly: You can use various weapons such as sniper rifles, shotguns, rocket launchers, flamethrowers, etc. Use them wisely and strategically to defeat your enemies. You can also switch between weapons by tapping on the weapon icon.</li>
33
- <li>Use your jetpack wisely: You can use your jetpack to fly and dodge enemy attacks. You can also use it to reach higher places and ambush your enemies. However, be careful not to run out of fuel or get hit by enemy fire.</li>
34
- <li>Use your pro pack wisely: You can use your pro pack to access more features and advantages in the game, such as dual wield, extra avatar customization, etc. However, be careful not to abuse it or get banned by the game developers.</li>
35
- <li>Use your skills wisely: You can use your skills to improve your performance and survival in the game, such as aiming, dodging, hiding, reloading, etc. Practice and master these skills to become a better player.</li>
36
- </ul>
37
- <h2>What are the pros and cons of Mini Militia Hile APK 4.3. 4?</h2>
38
- <p>Like any other modded version of a game, Mini Militia Hile APK 4.3. 4 has its own pros and cons. Here are some of them:</p>
39
- <table>
40
- <tr>
41
- <th>Pros</th>
42
- <th>Cons</th>
43
- </tr>
44
- <tr>
45
- <td>You can enjoy more features and advantages in the game.</td>
46
- <td>You might face some compatibility issues with some devices or versions of the game.</td>
47
- </tr>
48
- <tr>
49
- <td>You can have more fun and excitement with your friends or other players.</td>
50
- <td>You might get banned by the game developers if they detect your modded version.</td>
51
- </tr>
52
- <tr>
53
- <td>You can improve your skills and strategies in the game.</td>
54
- <td>You might lose the challenge and thrill of the game if you use too many cheats or hacks.</td>
55
- </tr>
56
- </table>
57
- <h2>Conclusion</h2>
58
- <p>Mini Militia Hile APK 4.3. 4 is a modded version of Doodle Army 2: Mini Militia, a popular multiplayer shooting game that lets you battle with up to 12 players online or offline. The modded version gives you unlimited access to everything in the game, such as ammo, health, jetpack, pro pack, and more. With this modded version, you can dominate the battlefield and have more fun with your friends.</p>
59
- <p>mini militia hile apk 4.3. 4 download<br />
60
- mini militia hile apk 4.3. 4 mod<br />
61
- mini militia hile apk 4.3. 4 unlimited<br />
62
- mini militia hile apk 4.3. 4 pro pack<br />
63
- mini militia hile apk 4.3. 4 latest version<br />
64
- mini militia hile apk 4.3. 4 hack<br />
65
- mini militia hile apk 4.3. 4 free<br />
66
- mini militia hile apk 4.3. 4 android<br />
67
- mini militia hile apk 4.3. 4 online<br />
68
- mini militia hile apk 4.3. 4 offline<br />
69
- mini militia hile apk 4.3. 4 cheats<br />
70
- mini militia hile apk 4.3. 4 gameplay<br />
71
- mini militia hile apk 4.3. 4 review<br />
72
- mini militia hile apk 4.3. 4 update<br />
73
- mini militia hile apk 4.3. 4 features<br />
74
- mini militia hile apk 4.3. 4 install<br />
75
- mini militia hile apk 4.3. 4 guide<br />
76
- mini militia hile apk 4.3. 4 tips<br />
77
- mini militia hile apk 4.3. 4 tricks<br />
78
- mini militia hile apk 4.3. 4 tutorial<br />
79
- mini militia hile apk 4.3. 4 softpedia<br />
80
- mini militia hile apk 4.3. 5<br />
81
- mini militia hile apk war.io<br />
82
- mini militia hile apk doodle army<br />
83
- mini militia hile apk miniclip<br />
84
- mini militia hile apk multiplayer<br />
85
- mini militia hile apk shooter<br />
86
- mini militia hile apk weapons<br />
87
- mini militia hile apk maps<br />
88
- mini militia hile apk skins<br />
89
- mini militia hile apk zoom control<br />
90
- mini militia hile apk dual wield<br />
91
- mini militia hile apk team battle<br />
92
- mini militia hile apk survival mode<br />
93
- mini militia hile apk co-op mode<br />
94
- mini militia hile apk training mode<br />
95
- mini militia hile apk sarge mode<br />
96
- mini militia hile apk bots mode<br />
97
- mini militia hile apk custom mode<br />
98
- mini militia hile apk avatar mode<br />
99
- mini militia hile apk rocket boots mode<br />
100
- mini militia hile apk melee mode<br />
101
- mini militia hile apk sniper mode<br />
102
- mini militia hile apk grenade mode<br />
103
- mini militia hile apk flamethrower mode<br />
104
- mini militia hile apk shotgun mode<br />
105
- mini militia hile apk saw gun mode<br />
106
- mini militia hile apk laser gun mode<br />
107
- mini militia hile apk machete mode<br />
108
- mini militia hile apk katana mode</p>
109
- <p>If you want to download and install Mini Militia Hile APK 4.3. 4 on your Android device, you can follow the step-by-step guide with screenshots that we provided in this article. You can also follow the tips and tricks that we shared to play the game better and smarter. However, you should also be aware of the pros and cons of the modded version and use it responsibly and ethically.</p>
110
- <p>We hope you enjoyed reading this article and learned something new about Mini Militia Hile APK 4.3. 4. If you have any questions or feedback, feel free to leave a comment below. Thank you for your time and attention!</p>
111
- <h3>FAQs</h3>
112
- <ul>
113
- <li><b>Q: Is Mini Militia Hile APK 4.3. 4 safe to download and use?</b></li>
114
- <li>A: Yes, Mini Militia Hile APK 4.3. 4 is safe to download and use as long as you download it from a trusted source and scan it with an antivirus before installing it on your device.</li>
115
- <li><b>Q: Is Mini Militia Hile APK 4.3. 4 compatible with all Android devices?</b></li>
116
- <li>A: No, Mini Militia Hile APK 4.3. 4 may not be compatible with some Android devices or versions of the game. You should check the compatibility before downloading and installing it on your device.</li>
117
- <li><b>Q: Can I play online with other players using Mini Militia Hile APK 4.3. 4?</b></li>
118
- <li>A: Yes, you can play online with other players using Mini Militia Hile APK 4.3. 4 as long as they are also using the same modded version of the game.</li>
119
- <li><b>Q: Can I update Mini Militia Hile APK 4.3. 4 to the latest version of the game?</b></li>
120
- <li>A: No, you cannot update Mini Militia Hile APK 4.3. 4 to the latest version of the game as it may cause some errors or crashes in the game. You should wait for the modded version to be updated by its developers before updating it on your device.</li>
121
- <li><b>Q: Can I uninstall Mini Militia Hile APK 4.3. 4 from my device?</b></li>
122
- <li>A: Yes, you can uninstall Mini Militia Hile APK 4.3. 4 from your device by following the same steps that you used to install it. You can also delete the APK file from your device after uninstalling it.</li>
123
- </ul></p><br />
124
- <br />
125
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Download Getting Over It with Bennett Foddy MOD APK 1.9.6 - All Unlocked.md DELETED
@@ -1,101 +0,0 @@
1
- <br />
2
- <h1>Getting Over It with Bennett Foddy: A Guide to Downloading and Playing the Latest Mod APK</h1>
3
- <p>If you are looking for a game that will test your patience, skill, and perseverance, then you might want to try Getting Over It with Bennett Foddy. This is a game that will make you rage, laugh, cry, and celebrate as you climb up a mountain with nothing but a hammer and a pot. In this article, we will tell you everything you need to know about this game, including how to download and play the latest mod APK version that offers some extra features and advantages.</p>
4
- <h2>What is Getting Over It with Bennett Foddy?</h2>
5
- <h3>A brief introduction to the game and its creator</h3>
6
- <p>Getting Over It with Bennett Foddy is a punishing climbing game that was released in 2017 by Bennett Foddy, an Australian game developer and professor of game design. The game is inspired by a 2002 B-Game classic called Sexy Hiking, which was created by Jazzuo. The game is also a homage to other games that are known for their difficulty and frustration, such as QWOP, Flappy Bird, and Dark Souls.</p>
7
- <h2>getting over it latest mod apk download</h2><br /><p><b><b>Download File</b> &mdash;&mdash;&mdash; <a href="https://jinyurl.com/2uNQc8">https://jinyurl.com/2uNQc8</a></b></p><br /><br />
8
- <h3>The main features and challenges of the game</h3>
9
- <p>The game has a simple premise: you control a man named Diogenes who is stuck in a metal pot. You use your mouse to move a hammer that can hook onto objects and surfaces. Your goal is to climb up an enormous mountain that is filled with various obstacles, such as rocks, trees, furniture, pipes, barrels, and more. The game has no checkpoints or save points, so if you fall down, you have to start over from where you landed. The game also has no end, so you can keep climbing as long as you want.</p>
10
- <p>The game is designed to be hard and frustrating, as it requires precise mouse movements and timing. The physics of the game are also unpredictable and sometimes unfair, as you can slip, bounce, or fly off in unexpected directions. The game also features a voice-over commentary by Bennett Foddy himself, who will make philosophical observations, sarcastic remarks, or motivational quotes depending on your progress. Some players may find his voice soothing and helpful, while others may find it annoying and mocking.</p>
11
- <h3>The rewards and achievements of the game</h3>
12
- <p>The game does not have any explicit rewards or achievements for completing it, but it does offer some hidden surprises and secrets for those who manage to reach the top of the mountain. There is also a sense of satisfaction and accomplishment that comes from overcoming the challenges and difficulties of the game. The game also allows you to share your success or failure with other players through online leaderboards or chat rooms.</p>
13
- <p>getting over it with bennett foddy mod apk unlocked<br />
14
- getting over it android mod apk latest version<br />
15
- getting over it mod apk free download for android<br />
16
- getting over it mod apk unlimited money and gold<br />
17
- getting over it mod apk no ads and no root<br />
18
- getting over it mod apk download for pc windows 10<br />
19
- getting over it mod apk online multiplayer<br />
20
- getting over it mod apk revdl rexdl<br />
21
- getting over it mod apk hack cheats<br />
22
- getting over it mod apk obb data file<br />
23
- getting over it mod apk premium pro vip<br />
24
- getting over it mod apk full game unlocked<br />
25
- getting over it mod apk new update 2023<br />
26
- getting over it mod apk offline without internet<br />
27
- getting over it mod apk original from play store<br />
28
- getting over it mod apk mega nz mediafire<br />
29
- getting over it mod apk unlimited lives and coins<br />
30
- getting over it mod apk all levels and maps unlocked<br />
31
- getting over it mod apk best graphics and sound<br />
32
- getting over it mod apk easy and hard mode<br />
33
- getting over it mod apk english language and subtitles<br />
34
- getting over it mod apk fast download and install<br />
35
- getting over it mod apk gameplay walkthrough guide<br />
36
- getting over it mod apk high score and leaderboard<br />
37
- getting over it mod apk low mb and size<br />
38
- getting over it mod apk no verification and survey<br />
39
- getting over it mod apk old version and history<br />
40
- getting over it mod apk pure and safe<br />
41
- getting over it mod apk realistic physics and animation<br />
42
- getting over it mod apk tips tricks and secrets</p>
43
- <h2>Why download the latest mod APK?</h2>
44
- <h3>The benefits of using a modded version of the game</h3>
45
- <p>A modded version of the game is a modified version that has some changes or additions that are not present in the original version. A modded version can offer some benefits for players who want to have a different or better experience with the game. For example, a modded version can:</p>
46
- <ul>
47
- <li>Unlock all the features and content of the game without paying any money</li>
48
- <li>Remove any ads or in-app purchases that may interrupt or distract you from the game</li>
49
- <li>Add some extra features or options that can enhance the gameplay experience, such as custom skins, cheats, hacks, or mods</li>
50
- <li>Fix some bugs or errors that may affect the performance or stability of the game</li>
51
- <li>Update the game to the latest version that may have new content or improvements</li>
52
- </ul>
53
- <h3>The mod features that enhance the gameplay experience</h3>
54
- <p>The latest mod APK for Getting Over It with Bennett Foddy has some amazing features that can make the game more enjoyable and fun. Some of these features are:</p>
55
- <ul>
56
- <li>Unlimited coins: You can get unlimited coins that you can use to buy different items or skins in the game</li>
57
- <li>Unlimited lives: You can get unlimited lives that you can use to continue playing the game even if you fall down</li>
58
- <li>No fall damage: You can avoid any damage or injury that may occur when you fall down from a high place</li>
59
- <li>No gravity: You can defy the laws of physics and float in the air as you swing your hammer</li>
60
- <li>No obstacles: You can remove any obstacles or barriers that may block your way or slow you down</li>
61
- <li>Speed hack: You can increase or decrease the speed of your movement or hammer as you wish</li>
62
- <li>Zoom hack: You can zoom in or out of the screen as you want to see more or less of the environment</li>
63
- <li>Skip level: You can skip any level that you find too hard or boring and go to the next one</li>
64
- <li>God mode: You can become invincible and immune to any harm or danger</li>
65
- </ul>
66
- <h3>The compatibility and security of the mod APK</h3>
67
- <p>The latest mod APK for Getting Over It with Bennett Foddy is compatible with most Android devices that have Android 4.1 or higher. The mod APK file size is about 120 MB, so you need to have enough storage space on your device. The mod APK is also safe and secure to use, as it does not contain any viruses, malware, or spyware. The mod APK does not require any root access or special permissions to install or run.</p>
68
- <h2>How to download and install the latest mod APK?</h2>
69
- <h3>The steps to find and download the mod APK file</h3>
70
- <p>If you want to download and install the latest mod APK for Getting Over It with Bennett Foddy, you need to follow these simple steps:</p>
71
- <ol>
72
- <li>Go to a reliable and trusted website that offers the mod APK file for Getting Over It with Bennett Foddy. You can search for it on Google or use this link: </li>
73
- <li>Click on the download button and wait for the download process to complete. You may need to enable the unknown sources option on your device settings to allow the download of third-party apps.</li>
74
- <li>Locate the downloaded mod APK file on your device storage and tap on it to open it.</li>
75
- </ol>
76
- <h3>The steps to install and run the mod APK file</h3>
77
- <p>After you have downloaded the mod APK file, you need to install and run it on your device. Here are the steps to do so:</p>
78
- <ol start="4">
79
- <li>Follow the instructions on the screen and agree to the terms and conditions to install the mod APK file.</li>
80
- <li>Wait for the installation process to finish and then launch the game from your app drawer or home screen.</li>
81
- <li>Enjoy playing Getting Over It with Bennett Foddy with all the mod features enabled.</li>
82
- </ol> <h3>How to enjoy the game and have fun with it</h3>
83
- <p>The final and most important aspect of playing Getting Over It with Bennett Foddy is to enjoy the game and have fun with it. The game is not meant to be a torture or a punishment, but a challenge and a reward. Here are some tips and tricks to help you with that:</p>
84
- <ul>
85
- <li>Appreciate the game's art and design, which are inspired by real-life locations, objects, and artworks. You can also admire the game's graphics and sound effects, which are realistic and immersive.</li>
86
- <li>Explore the game's world and discover its secrets and easter eggs. You can also try to find different paths or shortcuts that can lead you to new places or surprises.</li>
87
- <li>Express yourself and your creativity through the game. You can customize your pot and hammer with different skins or items that you can buy or unlock. You can also use the game as a platform to create your own art or content, such as videos, memes, or fan art.</li>
88
- <li>Connect with other players and the game's community. You can join online chat rooms or leaderboards to chat or compete with other players. You can also watch or follow other players' streams or videos to learn from them or support them.</li>
89
- <li>Challenge yourself and set your own goals or rules. You can try to beat the game in the fastest time possible, or in the most difficult way possible. You can also try to play the game with different settings or modes, such as inverted controls, no mouse, or blindfolded.</li>
90
- </ul>
91
- <h2>Conclusion and FAQs</h2>
92
- <p>In conclusion, Getting Over It with Bennett Foddy is a game that will make you experience a range of emotions and sensations, from anger and frustration to joy and satisfaction. It is a game that will challenge your patience, skill, and perseverance, but also reward you with a unique and memorable experience. If you want to play this game with some extra features and advantages, you can download and install the latest mod APK version that we have explained in this article. We hope that this article has helped you understand more about this game and how to play it better. Here are some FAQs that you may have:</p>
93
- <table>
94
- <tr><td><b>Q: How long does it take to beat the game?</b></td><td><b>A: It depends on your skill level and luck, but some players have reported beating the game in less than 10 minutes, while others have spent hours or days on it.</b></td></tr>
95
- <tr><td><b>Q: Is there a way to save or pause the game?</b></td><td><b>A: No, there is no way to save or pause the game. The game is meant to be played in one sitting, without any interruptions or distractions.</b></td></tr>
96
- <tr><td><b>Q: Is there a multiplayer mode in the game?</b></td><td><b>A: No, there is no multiplayer mode in the game. The game is meant to be played solo, without any help or interference from other players.</b></td></tr>
97
- <tr><td><b>Q: Is there a sequel or a spin-off of the game?</b></td><td><b>A: No, there is no sequel or a spin-off of the game. The game is meant to be a standalone project, without any plans for future updates or expansions.</b></td></tr>
98
- <tr><td><b>Q: Is there a way to contact Bennett Foddy or give him feedback?</b></td><td><b>A: Yes, you can contact Bennett Foddy through his website (https://www.foddy.net/) or his Twitter account (@bfod). You can also give him feedback through his email ([email protected]) or his Steam page (https://store.steampowered.com/app/240720/Getting_Over_It_with_Bennett_Foddy/).</b></td></tr>
99
- </table></p><br />
100
- <br />
101
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Experience the Epic Story of Seven Deadly Sins Grand Cross - APK Download Available.md DELETED
@@ -1,114 +0,0 @@
1
-
2
- <h1>Seven Deadly Sins Grand Cross APK Download: A Cinematic Anime Game for Mobile</h1>
3
- <p>If you are a fan of anime and manga, you might have heard of The Seven Deadly Sins, a popular series that follows the adventures of a group of legendary knights in a fantasy world. If you want to experience the story and battles of The Seven Deadly Sins on your mobile device, you should check out Seven Deadly Sins Grand Cross, a cinematic anime game that will immerse you in the world of Britannia. In this article, we will tell you what Seven Deadly Sins Grand Cross is, how to download its APK file, what are its main features, what are some tips and tricks for playing it, and what are some reviews of it.</p>
4
- <h2>What is Seven Deadly Sins Grand Cross?</h2>
5
- <p>Seven Deadly Sins Grand Cross is a mobile RPG based on the popular anime and manga series The Seven Deadly Sins. It is developed by Netmarble, a leading mobile game company, and is available on Android and iOS platforms. Here are some of the reasons why you should play Seven Deadly Sins Grand Cross:</p>
6
- <h2>seven deadly sins grand cross apk download</h2><br /><p><b><b>Download</b> &#11088; <a href="https://jinyurl.com/2uNNJX">https://jinyurl.com/2uNNJX</a></b></p><br /><br />
7
- <h3>A mobile RPG based on the popular anime and manga series</h3>
8
- <p>Seven Deadly Sins Grand Cross lets you play as Meliodas, the leader of the Seven Deadly Sins, and his companions as they embark on an epic quest to save the kingdom from the tyranny of the Holy Knights. You will meet familiar characters from the series, such as Elizabeth, Ban, King, Diane, Gowther, Merlin, Escanor, Hawk, and many more. You will also encounter enemies and allies from different races, such as humans, fairies, giants, demons, goddesses, vampires, etc. You will be able to relive the memorable scenes and events from the anime and manga, such as the Boar Hat Tavern, the Forest of White Dreams, the Capital of the Dead, etc.</p>
9
- <h3>A game that recreates the original story and battles with high-quality 3D graphics and voice acting</h3>
10
- <p>Seven Deadly Sins Grand Cross is not just a simple adaptation of the series. It is a game that recreates the original story and battles with high-quality 3D graphics and voice acting. The game uses a cinematic approach to present the story, with cutscenes that feature stunning animations and dialogues. The game also uses a card-based combat system that allows you to use different skills and ultimate moves based on your character's abilities. The game also includes original voice dialogues from the voice actors of the anime series, such as Yuki Kaji, Sora Amamiya, Misaki Kuno, Aoi Yuki, Tatsuhisa Suzuki, Jun Fukuyama, Yuhei Takagi, Maaya Sakamoto, and Tomokazu Sugita. You will feel like you are watching the anime as you play the game.</p>
11
- <h3>A game that offers various features and content for fans and newcomers alike</h3>
12
- <p>Seven Deadly Sins Grand Cross is not just a game for fans of the series. It is also a game that offers various features and content for newcomers and casual players. You can explore the vast world of Britannia and interact with different characters and locations. You can also customize your own tavern and collect various items and costumes. You can also join a knighthood and cooperate with other players in guild wars and events. You can also enjoy mini-games, such as cooking, fishing, card battles, etc. There is always something new and exciting to do in Seven Deadly Sins Grand Cross.</p>
13
- <h2>How to download Seven Deadly Sins Grand Cross APK?</h2>
14
- <p>If you want to play Seven Deadly Sins Grand Cross on your mobile device, you will need to download its APK file. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. Here are some of the ways you can download Seven Deadly Sins Grand Cross APK:</p>
15
- <h3>The official sources for Android and iOS devices</h3>
16
- <p>The easiest and safest way to download Seven Deadly Sins Grand Cross APK is to use the official sources for Android and iOS devices. You can simply go to the Google Play Store or the App Store and search for the game. Then, you can tap on the install button and wait for the download to finish. You will need about 4 GB of free space on your device to install the game. You will also need a stable internet connection to play the game online.</p>
17
- <h3>The alternative sources for Android devices</h3>
18
- <p>If you cannot access the official sources for some reason, or if you want to download an older version of the game, you can use alternative sources for Android devices. These are websites that offer APK files of various apps and games for free. However, you should be careful when using these sources, as some of them may contain malware or viruses that can harm your device or steal your personal information. You should only use trusted and reputable websites that have positive reviews and ratings from other users. Some examples of these websites are APKPure.com, APKMirror.com, and APKCombo.com. To download Seven Deadly Sins Grand Cross APK from these websites, you will need to follow these steps:</p>
19
- <ol><li>Go to the website of your choice and search for Seven Deadly Sins Grand Cross.</li><li>Choose the version of the game that you want to download and tap on the download button.</li><li>Wait for the download to finish and locate the APK file on your device.</li><li>Before installing the APK file, you will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li><li>Tap on the APK file and follow the instructions to install the game.</li><li>Enjoy playing Seven Deadly Sins Grand Cross on your device.</li></ol> <h3>The precautions and requirements for installing the APK file</h3>
20
- <p>Before installing Seven Deadly Sins Grand Cross APK on your device, you should take some precautions and meet some requirements to ensure a smooth and safe gaming experience. Here are some of them:</p>
21
- <p>How to install seven deadly sins grand cross on android<br />
22
- Seven deadly sins grand cross apk mod unlimited gems<br />
23
- Best characters in seven deadly sins grand cross game<br />
24
- Seven deadly sins grand cross pc version download free<br />
25
- Seven deadly sins grand cross tips and tricks for beginners<br />
26
- Seven deadly sins grand cross anime vs game comparison<br />
27
- Seven deadly sins grand cross global release date and news<br />
28
- Seven deadly sins grand cross tier list and guide<br />
29
- Seven deadly sins grand cross gameplay and review<br />
30
- Seven deadly sins grand cross hack and cheats online<br />
31
- Seven deadly sins grand cross official website and support<br />
32
- Seven deadly sins grand cross reddit community and discussion<br />
33
- Seven deadly sins grand cross manga and novel adaptation<br />
34
- Seven deadly sins grand cross update and patch notes<br />
35
- Seven deadly sins grand cross events and rewards<br />
36
- Seven deadly sins grand cross costumes and skins<br />
37
- Seven deadly sins grand cross pvp and guild wars<br />
38
- Seven deadly sins grand cross reroll and gacha system<br />
39
- Seven deadly sins grand cross codes and coupons<br />
40
- Seven deadly sins grand cross emulator and controller support<br />
41
- Seven deadly sins grand cross story mode and quests<br />
42
- Seven deadly sins grand cross netmarble account and login<br />
43
- Seven deadly sins grand cross soundtrack and voice actors<br />
44
- Seven deadly sins grand cross wallpapers and fan art<br />
45
- Seven deadly sins grand cross ratings and reviews on app store<br />
46
- Seven deadly sins grand cross system requirements and compatibility<br />
47
- Seven deadly sins grand cross error and bug fixes<br />
48
- Seven deadly sins grand cross data transfer and backup<br />
49
- Seven deadly sins grand cross collaboration and crossover events<br />
50
- Seven deadly sins grand cross merchandise and products</p>
51
- <ul><li>Make sure that your device meets the minimum system requirements for the game. According to the official website, you will need at least Android 4.4 or iOS 9.0, 2 GB of RAM, 4 GB of free space, and a compatible processor.</li><li>Make sure that your device has enough battery power or is plugged into a charger while installing the game.</li><li>Make sure that your device has a stable internet connection while downloading and installing the game.</li><li>Make sure that you have enough data or Wi-Fi bandwidth to download the game, as it is quite large in size.</li><li>Make sure that you have enough storage space on your device to install the game and its updates.</li><li>Make sure that you back up your data before installing the game, in case something goes wrong or you need to uninstall it later.</li><li>Make sure that you scan the APK file with an antivirus or security app before installing it, to check for any malware or viruses.</li><li>Make sure that you only install the game from trusted sources, as mentioned above.</li></ul> <h2>What are the main features of Seven Deadly Sins Grand Cross?</h2>
52
- <p>Seven Deadly Sins Grand Cross is a game that offers a lot of features and content for players to enjoy. Here are some of the main features of the game:</p>
53
- <h3>Dynamic combat with skill rank up system and ultimate moves</h3>
54
- <p>Seven Deadly Sins Grand Cross uses a card-based combat system that allows you to use different skills and ultimate moves based on your character's abilities. You can choose from four cards per turn, each with a different effect and cost. You can also combine cards of the same type to rank them up and increase their power and range. You can also use ultimate moves that are unique to each character and can deal massive damage to your enemies. The combat system is dynamic and strategic, as you have to consider the enemy's attributes, the card order, the card fusion, the card effects, etc.</p>
55
- <h3>Various PvE systems that reflect the original anime</h3>
56
- <p>Seven Deadly Sins Grand Cross offers various PvE systems that reflect the original anime and manga series. You can follow the main quest line that follows the story of The Seven Deadly Sins, or you can explore the side quests that feature different characters and events. You can also participate in special events that are based on the anime episodes, such as the Vaizel Fighting Festival, the Kingdom Infiltration Arc, etc. You can also challenge various bosses and enemies that appear in the series, such as the Demon Clan, the Ten Commandments, etc. You can also collect various rewards and items from completing these PvE systems.</p>
57
- <h3>Unique character appearances and costumes</h3>
58
- <p>Seven Deadly Sins Grand Cross features unique character appearances and costumes that are faithful to the original anime and manga series. You can collect and customize various characters from the series, each with their own skills, stats, and personalities. You can also unlock and equip different costumes for your characters, such as their original outfits, their casual outfits, their seasonal outfits, etc. You can also change their hairstyles, accessories, weapons, etc. You can also view your characters in 3D models and interact with them in various ways.</p>
59
- <h3>Thorough and authentic implementation of the original anime</h3>
60
- <p>Seven Deadly Sins Grand Cross is a game that is thorough and authentic in implementing the original anime and manga series. The game uses high-quality 3D graphics and voice acting to recreate the original story and battles of The Seven Deadly Sins. The game also includes original soundtracks and sound effects from the anime series, such as the opening and ending songs, the background music, the character voices, etc. The game also includes original scenes and dialogues from the anime series, such as the comedic moments, the emotional moments, the plot twists, etc. The game also includes original content and stories that are exclusive to the game, such as new characters, new events, new quests, etc.</p>
61
- <h3>Real-time PvP and guild content</h3>
62
- <p>Seven Deadly Sins Grand Cross is not only a game for solo players. It is also a game that offers real-time PvP and guild content for multiplayer players. You can compete with other players in various PvP modes, such as Death Match, Elite Demon Battle, Knighthood Boss Battle, etc. You can also join a knighthood and cooperate with other players in guild wars and events. You can also chat with other players in real-time and share your strategies and tips. You can also trade items and cards with other players in the market.</p>
63
- <h2>What are some tips and tricks for playing Seven Deadly Sins Grand Cross?</h2>
64
- <p>If you are new to Seven Deadly Sins Grand Cross or want to improve your gameplay skills, here are some tips and tricks for playing the game:</p>
65
- <h3>Prioritize the main quest line</h3>
66
- <p>The main quest line is the best way to progress through the game and unlock new features and content. The main quest line follows the story of The Seven Deadly Sins and rewards you with various items and resources, such as gold, gems, stamina potions, equipment, etc. The main quest line also unlocks new areas and locations for you to explore and complete side quests. The main quest line also increases your player level and rank, which allows you to access more content and modes.</p>
67
- <h3>Create card fusions without forcing them</h3>
68
- <p>Card fusion is a key element of the combat system in Seven Deadly Sins Grand Cross. Card fusion allows you to combine cards of the same type to rank them up and increase their power and range. However, you should not force card fusion by using cards that are not optimal for the situation. For example, you should not use a heal card to create a fusion if you do not need to heal. You should also not use a debuff card to create a fusion if the enemy is immune to debuffs. You should always consider the enemy's attributes, the card effects, and the card order before creating card fusions. You should also save some cards for the next turn, as they will be automatically ranked up.</p>
69
- <h3>Put the auto battle and x2 speed feature to good use</h3>
70
- <p>Seven Deadly Sins Grand Cross has an auto battle and x2 speed feature that can help you save time and effort when playing the game. The auto battle feature allows the game to choose and use cards for you based on a preset strategy. The x2 speed feature allows the game to run faster and skip some animations. You can use these features when you are farming resources, completing easy quests, or replaying stages that you have already cleared. However, you should not rely on these features too much, as they may not be optimal for some situations. For example, you should not use the auto battle feature when you are facing a boss or a difficult enemy, as the game may not use the best cards or strategy for you. You should also not use the x2 speed feature when you are watching cutscenes or enjoying the story, as you may miss some important details or emotions.</p>
71
- <h3>Manage your resources wisely</h3>
72
- <p>Seven Deadly Sins Grand Cross is a game that requires you to manage your resources wisely. You will need various resources to upgrade your characters, equipment, tavern, etc. Some of the main resources are gold, gems, stamina, anvils, hammers, awakening stones, etc. You can obtain these resources from various sources, such as quests, events, rewards, shops, etc. However, you should not spend these resources recklessly, as they may be limited or scarce. You should always prioritize the most important or urgent upgrades and save some resources for future needs. You should also avoid wasting resources on unnecessary or inefficient upgrades.</p>
73
- <h3>Join a knighthood and participate in events</h3>
74
- <p>Seven Deadly Sins Grand Cross is a game that encourages you to join a knighthood and participate in events. A knighthood is a guild that allows you to cooperate and communicate with other players. You can join an existing knighthood or create your own knighthood with your friends. By joining a knighthood, you can access various benefits and features, such as guild wars, guild bosses, guild shop, guild chat, etc. You can also earn guild coins and guild points that can be used to buy items or rank up your knighthood. By participating in events, you can access various content and rewards that are exclusive to the event period. You can participate in events such as festivals, collabs, special quests, etc. You can also earn event coins and event points that can be used to buy items or exchange for prizes.</p>
75
- <h2>What are some reviews of Seven Deadly Sins Grand Cross?</h2>
76
- <p>Seven Deadly Sins Grand Cross is a game that has received positive reviews from critics and players alike. Here are some of the reviews of the game:</p>
77
- <h3>A positive review from TheGamer.com</h3>
78
- <p>TheGamer.com gave Seven Deadly Sins Grand Cross a score of 4 out of 5 stars and praised its graphics, combat system, story mode, and voice acting. The reviewer wrote:</p>
79
- <blockquote>
80
- <p>"Seven Deadly Sins: Grand Cross is one of the best looking anime games on the market right now...The combat system is simple yet satisfying...The story mode is well done and faithful to the source material...The voice acting is top notch..."</p>
81
- </blockquote>
82
- <h3>A positive review from IGN.com</h3>
83
- <p>IGN.com gave Seven Deadly Sins Grand Cross a score of 8 out of 10 and praised its gameplay variety,
84
- graphics, story, and characters. The reviewer wrote:</p>
85
- <p>"Seven Deadly Sins: Grand Cross is a well-made and polished RPG that offers a lot of gameplay variety...The graphics are stunning and the animations are smooth...The story is engaging and faithful to the anime...The characters are diverse and likable..."</p>
86
- </blockquote>
87
- <h3>A positive review from KINCIR.com</h3>
88
- <p>KINCIR.com gave Seven Deadly Sins Grand Cross a score of 8.5 out of 10 and praised its gameplay mechanics, customization options, and sound quality. The reviewer wrote:</p>
89
- <blockquote>
90
- <p>"Seven Deadly Sins: Grand Cross is a game that has a lot of gameplay mechanics that are fun and challenging...The customization options are abundant and satisfying...The sound quality is excellent and immersive..."</p>
91
- </blockquote>
92
- <h3>A positive review from Metacritic.com</h3>
93
- <p>Metacritic.com gave Seven Deadly Sins Grand Cross a score of 86 out of 100 based on the ratings of 12 critics and 32 users. The website also showed some of the positive user reviews, such as:</p>
94
- <blockquote>
95
- <p>"This game is amazing. The graphics are beautiful, the gameplay is smooth, the story is captivating, and the characters are awesome. I love this game so much."</p>
96
- <p>"This game is one of the best anime games I have ever played. It has everything I want in a game: great story, great combat, great customization, great voice acting, great music, etc. I highly recommend this game to anyone who likes anime or RPGs."</p>
97
- <p>"This game is a masterpiece. It is a perfect adaptation of the anime and manga series. It is a game that respects the fans and the source material. It is a game that deserves more recognition and appreciation."</p>
98
- </blockquote>
99
- <h2>Conclusion</h2>
100
- <p>Seven Deadly Sins Grand Cross is a cinematic anime game for mobile that is based on the popular anime and manga series The Seven Deadly Sins. It is a game that recreates the original story and battles with high-quality 3D graphics and voice acting. It is a game that offers various features and content for fans and newcomers alike. It is a game that has received positive reviews from critics and players alike. If you want to play Seven Deadly Sins Grand Cross on your mobile device, you can download its APK file from the official sources or the alternative sources, as long as you take some precautions and meet some requirements. You can also use some tips and tricks to improve your gameplay skills and enjoy the game more. Seven Deadly Sins Grand Cross is a game that will immerse you in the world of Britannia and make you feel like you are part of The Seven Deadly Sins.</p>
101
- <h2>FAQs</h2>
102
- <p>Here are some of the frequently asked questions about Seven Deadly Sins Grand Cross:</p>
103
- <h3>Q: Is Seven Deadly Sins Grand Cross free to play?</h3>
104
- <p>A: Yes, Seven Deadly Sins Grand Cross is free to play. However, it also offers in-app purchases that can enhance your gaming experience.</p>
105
- <h3>Q: Is Seven Deadly Sins Grand Cross available in my country?</h3>
106
- <p>A: Seven Deadly Sins Grand Cross is available in most countries around the world. However, some regions may have different versions or servers of the game. You can check the official website or the official social media pages for more information.</p>
107
- <h3>Q: Is Seven Deadly Sins Grand Cross compatible with my device?</h3>
108
- <p>A: Seven Deadly Sins Grand Cross is compatible with most Android and iOS devices that meet the minimum system requirements. However, some devices may experience performance issues or bugs due to various factors. You can check the official website or contact the customer support for more information.</p>
109
- <h3>Q: How can I contact the customer support of Seven Deadly Sins Grand Cross?</h3>
110
- <p>A: You can contact the customer support of Seven Deadly Sins Grand Cross by using the in-game inquiry feature or by sending an email to [email protected].</p>
111
- <h3>Q: How can I get more information about Seven Deadly Sins Grand Cross?</h3>
112
- <p>A: You can get more information about Seven Deadly Sins Grand Cross by visiting the official website, following the official social media pages, joining the official community forums, or watching the official YouTube channel.</p><br />
113
- <br />
114
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/A00001/bingothoo/src/components/turn-counter.tsx DELETED
@@ -1,23 +0,0 @@
1
- import React from 'react'
2
- import { Throttling } from '@/lib/bots/bing/types'
3
-
4
- export interface TurnCounterProps {
5
-   throttling?: Throttling
6
- }
7
-
8
- export function TurnCounter({ throttling }: TurnCounterProps) {
9
-   if (!throttling) {
10
-     return null
11
-   }
12
-
13
-   return (
14
-     <div className="turn-counter">
15
-       <div className="text">
16
-         <span>{throttling.numUserMessagesInConversation}</span>
17
-         <span> 共 </span>
18
-         <span>{throttling.maxNumUserMessagesInConversation}</span>
19
-       </div>
20
-       <div className="indicator"></div>
21
-     </div>
22
-   )
23
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AFCMEgypt/WCB/app.py DELETED
@@ -1,122 +0,0 @@
1
-
2
- #Import Required Packages
3
- import numpy as np
4
- import gradio as gr
5
- #from google.colab.patches import cv2_imshow
6
- import cv2
7
- import matplotlib.pyplot as plt
8
- import numpy as np
9
- import skimage
10
- import imutils
11
- from imutils import contours
12
- import math
13
- def cube(v):
14
-     return v**3
15
- def sqrtabs(v):
16
-     return math.sqrt(abs(v))
17
- def figplota(xvalues):
18
-     fig, ax = plt.subplots()
19
-     ax.plot(xvalues)
20
-     return fig
21
- def quant(imageinput):
22
-     #@title Please Input the Lateral Flow Assay Image
23
-     # read image using openCV
24
-     #path = "/content/l1.jpg"
25
-     image = cv2.imread(imageinput)  # imageinput is the file path supplied by Gradio
26
-     target = "PKU"
27
-     #print(image)
28
-     #cv2_imshow(image)
29
-     # Convert the image to grayscale
30
-     BGR2RGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
31
-     gray = cv2.cvtColor(BGR2RGB, cv2.COLOR_RGB2GRAY)
32
-     #print(gray)
33
-     #cv2_imshow(gray)
34
-     # Invert the image to a negative scale
35
-     negative = cv2.bitwise_not(gray)
36
-     negativeimage = negative.copy()  # save a copy to avoid disrupting the image contour
37
-     #print(negativeimage)
38
-     #cv2_imshow(negativeimage)
39
-     # Minimize the effect of noisy artifactual bright spots using Gaussian blur
40
-     blur = cv2.GaussianBlur(negativeimage, (11, 11), 0)
41
-     #print(blur)
42
-     #cv2_imshow(blur)
43
-     # Binarize the image (threshold = mean + 0.6 * standard deviation of the blurred image)
44
-     threshold = float(cv2.meanStdDev(blur)[0]) + 0.6*float(cv2.meanStdDev(blur)[1])
45
-     imgthreshold = cv2.threshold(blur, threshold, 255, cv2.THRESH_BINARY)[1]
46
-     #print(imgthreshold)
47
-     #cv2_imshow(imgthreshold)
48
-     # Reduce noise by eroding, then dilating
49
-     imgeroding = cv2.erode(imgthreshold, None, iterations=1)
50
-     zeronoise = cv2.dilate(imgeroding, None, iterations=1)
51
-     #print(zeronoise)
52
-     #cv2_imshow(zeronoise)
53
-     # Connected-component analysis (CCA) of the thresholded image
54
-     import skimage.measure
55
-     labels = skimage.measure.label(zeronoise, background=0)
56
-     masking = np.zeros(zeronoise.shape, dtype="uint8")
57
-     for label in np.unique(labels):
58
-         if label == 0:
59
-             continue
60
-         MaskL = np.zeros(zeronoise.shape, dtype="uint8")
61
-         MaskL[labels == label] = 255
62
-         numPixels = cv2.countNonZero(MaskL)
63
-         if numPixels > masking.shape[1]*3:  # keep only components large enough to be a band
64
-             masking = cv2.add(masking, MaskL)
65
-     #cv2_imshow(masking)
66
-     # Find the contours and sort; change from bottom-to-top to top-to-bottom accordingly
67
-     contourss = cv2.findContours(masking.copy(), cv2.RETR_EXTERNAL,
68
-                                  cv2.CHAIN_APPROX_SIMPLE)
69
-     contourss = imutils.grab_contours(contourss)
70
-     contourss = contours.sort_contours(contourss, method="bottom-to-top")[0]  # change here accordingly
71
-     final = []
72
-     if len(contourss) > 1:
73
-         for (i, c) in enumerate(contourss):
74
-             # draw the bright spot on the image for the control and sample band
75
-             x, y, width, height = cv2.boundingRect(c)
76
-             final.append(negativeimage[y:y+height, x:x+width])
77
-             rect = cv2.minAreaRect(c)
78
-             box = cv2.boxPoints(rect)
79
-             # convert all floating point coordinate values to int
80
-             box = np.int0(box)
81
-             # draw a rectangle
82
-             cv2.drawContours(image, [box], 0, (0, 0, 255), thickness=2)
83
-
84
-     elif len(contourss) == 1:
85
-         # draw the bright spot on the image for the control band
86
-         for (i, c) in enumerate(contourss):
87
-             x, y, width, height = cv2.boundingRect(c)
88
-             final.append(negativeimage[y:y+height, x:x+width])
89
-             rect = cv2.minAreaRect(c)
90
-             box = cv2.boxPoints(rect)
91
-             # convert all floating point coordinate values to int
92
-             box = np.int0(box)
93
-             # draw a rectangle
94
-             cv2.drawContours(image, [box], 0, (0, 0, 255), thickness=2)
95
-
96
-
97
-
98
-     # Return an error message for unclear tests
99
-     else:
100
-         print("No Bands Detected")
101
-     #print(image)
102
-     #cv2_imshow(image)
103
-     # Generate the signal ratio of the sample band; change according to the sorting of bands
104
-     # NOTE: 'y' in the regression formula below is the last bounding-box y-coordinate left over from the contour loop
105
-     ratio1 = cv2.meanStdDev(final[0])[0]
106
-     ratio = ((cube(math.cos(sqrtabs(ratio1 - -0.393284)) + 2.2783713) / pow(math.cos(y), 0.20675313)) - (math.exp(math.cos(math.cos((sqrtabs(math.tan(cube(ratio1)) - (ratio1 + math.tan(math.sin(ratio1)))) / 0.44953698) * 0.9778089))) + (-2.3363407 / ratio1)))
107
-     thresho = 20
108
-     sig = final[0][0]
109
-     #signal=plt.plot(sig,figure=plt.figure())
110
-     if ratio >= thresho:
111
-         xx = str("The test band signal [" + str(ratio) + " mg/dl] shows a " + target + "-POSITIVE test." + " " + "Classic PKU, needs urgent medical treatment")
112
-     elif ratio >= 2 and ratio < 6:
113
-         xx = str("The test band signal [" + str(ratio) + " mg/dl] shows a " + target + "-POSITIVE test." + " " + "Likely PKU phenotype.")
114
-     elif ratio >= 6 and ratio < 12:
115
-         xx = str("The test band signal [" + str(ratio) + " mg/dl] shows a " + target + "-POSITIVE test." + " " + "PKU and dietary restriction is recommended")
116
-     elif ratio >= 12 and ratio < 20:
117
-         xx = str("The test band signal [" + str(ratio) + " mg/dl] shows a " + target + "-POSITIVE test." + " " + "PKU and needs medical attention for risk of intellectual impairment")
118
-     else:
119
-         xx = str("The test band signal [" + str(ratio) + " mg/dl] shows a " + target + "-NEGATIVE test.")
120
-     return xx, figplota(sig), cv2.resize(image, (20,60), interpolation = cv2.INTER_AREA)  #cv2.resize(signal, (20,40), interpolation = cv2.INTER_AREA)#,cv2.resize(signal, (20,40), interpolation = cv2.INTER_AREA)
121
- iface = gr.Interface(quant, gr.Image(type="filepath"), outputs=["text","plot","image"], debug=True)
122
- iface.launch()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/hifigan/modules.py DELETED
@@ -1,332 +0,0 @@
1
- import os
2
- import torch
3
- import torch.nn.functional as F
4
- import torch.nn as nn
5
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
6
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
7
- from pathlib import Path
8
- import yaml
9
- import numpy as np
10
- from argparse import Namespace
11
- LRELU_SLOPE = 0.1
12
-
13
- def get_padding(kernel_size, dilation=1):
14
- return int((kernel_size*dilation - dilation)/2)
15
-
16
- def init_weights(m, mean=0.0, std=0.01):
17
- classname = m.__class__.__name__
18
- if classname.find("Conv") != -1:
19
- m.weight.data.normal_(mean, std)
20
-
21
-
22
- class ResBlock1(torch.nn.Module):
23
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
24
- super(ResBlock1, self).__init__()
25
- self.h = h
26
- self.convs1 = nn.ModuleList([
27
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
28
- padding=get_padding(kernel_size, dilation[0]))),
29
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
30
- padding=get_padding(kernel_size, dilation[1]))),
31
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
32
- padding=get_padding(kernel_size, dilation[2])))
33
- ])
34
- self.convs1.apply(init_weights)
35
-
36
- self.convs2 = nn.ModuleList([
37
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
38
- padding=get_padding(kernel_size, 1))),
39
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
40
- padding=get_padding(kernel_size, 1))),
41
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
42
- padding=get_padding(kernel_size, 1)))
43
- ])
44
- self.convs2.apply(init_weights)
45
-
46
- def forward(self, x):
47
- for c1, c2 in zip(self.convs1, self.convs2):
48
- xt = F.leaky_relu(x, LRELU_SLOPE)
49
- xt = c1(xt)
50
- xt = F.leaky_relu(xt, LRELU_SLOPE)
51
- xt = c2(xt)
52
- x = xt + x
53
- return x
54
-
55
- def remove_weight_norm(self):
56
- for l in self.convs1:
57
- remove_weight_norm(l)
58
- for l in self.convs2:
59
- remove_weight_norm(l)
60
-
61
-
62
- class ResBlock2(torch.nn.Module):
63
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
64
- super(ResBlock2, self).__init__()
65
- self.h = h
66
- self.convs = nn.ModuleList([
67
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
68
- padding=get_padding(kernel_size, dilation[0]))),
69
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
70
- padding=get_padding(kernel_size, dilation[1])))
71
- ])
72
- self.convs.apply(init_weights)
73
-
74
- def forward(self, x):
75
- for c in self.convs:
76
- xt = F.leaky_relu(x, LRELU_SLOPE)
77
- xt = c(xt)
78
- x = xt + x
79
- return x
80
-
81
- def remove_weight_norm(self):
82
- for l in self.convs:
83
- remove_weight_norm(l)
84
-
85
-
86
- class Generator(torch.nn.Module):
87
- def __init__(self, h):
88
- super(Generator, self).__init__()
89
- self.h = h
90
- self.num_kernels = len(h.resblock_kernel_sizes)
91
- self.num_upsamples = len(h.upsample_rates)
92
- self.conv_pre = weight_norm(Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3))
93
- resblock = ResBlock1 if h.resblock == '1' else ResBlock2
94
-
95
- self.ups = nn.ModuleList()
96
- for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
97
- self.ups.append(weight_norm(
98
- ConvTranspose1d(h.upsample_initial_channel//(2**i), h.upsample_initial_channel//(2**(i+1)),
99
- k, u, padding=(k-u)//2)))
100
-
101
- self.resblocks = nn.ModuleList()
102
- for i in range(len(self.ups)):
103
- ch = h.upsample_initial_channel//(2**(i+1))
104
- for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
105
- self.resblocks.append(resblock(h, ch, k, d))
106
-
107
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
108
- self.ups.apply(init_weights)
109
- self.conv_post.apply(init_weights)
110
-
111
- def forward(self, x):
112
- x = self.conv_pre(x)
113
- for i in range(self.num_upsamples):
114
- x = F.leaky_relu(x, LRELU_SLOPE)
115
- x = self.ups[i](x)
116
- xs = None
117
- for j in range(self.num_kernels):
118
- if xs is None:
119
- xs = self.resblocks[i*self.num_kernels+j](x)
120
- else:
121
- xs += self.resblocks[i*self.num_kernels+j](x)
122
- x = xs / self.num_kernels
123
- x = F.leaky_relu(x)
124
- x = self.conv_post(x)
125
- x = torch.tanh(x)
126
-
127
- return x
128
-
129
- def remove_weight_norm(self):
130
- print('Removing weight norm...')
131
- for l in self.ups:
132
- remove_weight_norm(l)
133
- for l in self.resblocks:
134
            l.remove_weight_norm()
        remove_weight_norm(self.conv_pre)
        remove_weight_norm(self.conv_post)


class DiscriminatorP(torch.nn.Module):
    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
        super(DiscriminatorP, self).__init__()
        self.period = period
        norm_f = weight_norm if not use_spectral_norm else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
            norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
            norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
            norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
            norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
        ])
        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))

    def forward(self, x):
        fmap = []

        # 1d to 2d: reshape the waveform into period-aligned columns
        b, c, t = x.shape
        if t % self.period != 0:  # pad first
            n_pad = self.period - (t % self.period)
            x = F.pad(x, (0, n_pad), "reflect")
            t = t + n_pad
        x = x.view(b, c, t // self.period, self.period)

        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class MultiPeriodDiscriminator(torch.nn.Module):
    def __init__(self):
        super(MultiPeriodDiscriminator, self).__init__()
        self.discriminators = nn.ModuleList([
            DiscriminatorP(2),
            DiscriminatorP(3),
            DiscriminatorP(5),
            DiscriminatorP(7),
            DiscriminatorP(11),
        ])

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            fmap_rs.append(fmap_r)
            y_d_gs.append(y_d_g)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


class DiscriminatorS(torch.nn.Module):
    def __init__(self, use_spectral_norm=False):
        super(DiscriminatorS, self).__init__()
        norm_f = weight_norm if not use_spectral_norm else spectral_norm
        self.convs = nn.ModuleList([
            norm_f(Conv1d(1, 128, 15, 1, padding=7)),
            norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
            norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
            norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
            norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
            norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
        ])
        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))

    def forward(self, x):
        fmap = []
        for l in self.convs:
            x = l(x)
            x = F.leaky_relu(x, LRELU_SLOPE)
            fmap.append(x)
        x = self.conv_post(x)
        fmap.append(x)
        x = torch.flatten(x, 1, -1)

        return x, fmap


class MultiScaleDiscriminator(torch.nn.Module):
    def __init__(self):
        super(MultiScaleDiscriminator, self).__init__()
        self.discriminators = nn.ModuleList([
            DiscriminatorS(use_spectral_norm=True),
            DiscriminatorS(),
            DiscriminatorS(),
        ])
        self.meanpools = nn.ModuleList([
            AvgPool1d(4, 2, padding=2),
            AvgPool1d(4, 2, padding=2)
        ])

    def forward(self, y, y_hat):
        y_d_rs = []
        y_d_gs = []
        fmap_rs = []
        fmap_gs = []
        for i, d in enumerate(self.discriminators):
            if i != 0:
                y = self.meanpools[i - 1](y)
                y_hat = self.meanpools[i - 1](y_hat)
            y_d_r, fmap_r = d(y)
            y_d_g, fmap_g = d(y_hat)
            y_d_rs.append(y_d_r)
            fmap_rs.append(fmap_r)
            y_d_gs.append(y_d_g)
            fmap_gs.append(fmap_g)

        return y_d_rs, y_d_gs, fmap_rs, fmap_gs


def feature_loss(fmap_r, fmap_g):
    # L1 feature-matching loss over every intermediate discriminator activation
    loss = 0
    for dr, dg in zip(fmap_r, fmap_g):
        for rl, gl in zip(dr, dg):
            loss += torch.mean(torch.abs(rl - gl))

    return loss * 2


def discriminator_loss(disc_real_outputs, disc_generated_outputs):
    # LSGAN objective: real outputs pushed towards 1, generated towards 0
    loss = 0
    r_losses = []
    g_losses = []
    for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
        r_loss = torch.mean((1 - dr) ** 2)
        g_loss = torch.mean(dg ** 2)
        loss += (r_loss + g_loss)
        r_losses.append(r_loss.item())
        g_losses.append(g_loss.item())

    return loss, r_losses, g_losses


def generator_loss(disc_outputs):
    # LSGAN objective for the generator: generated outputs pushed towards 1
    loss = 0
    gen_losses = []
    for dg in disc_outputs:
        l = torch.mean((1 - dg) ** 2)
        gen_losses.append(l)
        loss += l

    return loss, gen_losses


class VocoderHifigan(object):
    def __init__(self, ckpt_vocoder, device='cuda'):
        with open(os.path.join(ckpt_vocoder, 'args.yml'), 'r') as f:
            vocoder_args = Namespace(**yaml.load(f, Loader=yaml.UnsafeLoader))

        self.generator = Generator(vocoder_args)
        netG_path = os.path.join(ckpt_vocoder, 'best_netG.pt')
        if os.path.exists(netG_path):
            vocoder_sd = torch.load(netG_path, map_location='cpu')
            self.generator.load_state_dict(vocoder_sd['generator'])
        self.generator.eval()

        self.device = device
        self.generator.to(self.device)

    def vocode(self, spec, global_step=None):
        with torch.no_grad():
            if isinstance(spec, np.ndarray):
                spec = torch.from_numpy(spec).unsqueeze(0)
            spec = spec.to(dtype=torch.float32, device=self.device)
            return self.generator(spec).squeeze().cpu().numpy()


class VocoderHifigan_noload(object):
    def __init__(self, vocoder_args, device='cuda'):
        self.generator = Generator(vocoder_args)
        self.generator.eval()

        self.device = device
        self.generator.to(self.device)

    def vocode(self, spec, global_step=None):
        with torch.no_grad():
            if isinstance(spec, np.ndarray):
                spec = torch.from_numpy(spec).unsqueeze(0)
            spec = spec.to(dtype=torch.float32, device=self.device)
            return self.generator(spec).squeeze().cpu().numpy()
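
The discriminators and loss functions above follow the standard HiFi-GAN recipe. For orientation, a minimal sketch of how they are typically wired into a training step, assuming the classes above are in scope (the waveform tensors and optimizer steps here are illustrative, not part of the file):

    import torch

    mpd = MultiPeriodDiscriminator()
    y = torch.randn(1, 1, 8192)      # real waveform batch, shape (B, 1, T)
    y_hat = torch.randn(1, 1, 8192)  # generated waveform batch

    # Discriminator update: real pushed towards 1, generated (detached) towards 0.
    y_d_rs, y_d_gs, _, _ = mpd(y, y_hat.detach())
    loss_disc, _, _ = discriminator_loss(y_d_rs, y_d_gs)

    # Generator update: adversarial term plus feature matching over the fmaps.
    y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(y, y_hat)
    loss_gen, _ = generator_loss(y_d_gs)
    loss_fm = feature_loss(fmap_rs, fmap_gs)
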
spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_binarizer.py DELETED
@@ -1,412 +0,0 @@
import json
import os
import random
import traceback
from functools import partial

import numpy as np
from resemblyzer import VoiceEncoder
from tqdm import tqdm

from transformers import AutoTokenizer

# import utils.commons.single_thread_env  # NOQA
from text_to_speech.utils.audio import librosa_wav2spec
from text_to_speech.utils.audio.align import get_mel2ph, mel2token_to_dur
from text_to_speech.utils.audio.cwt import get_lf0_cwt, get_cont_lf0
from text_to_speech.utils.audio.pitch.utils import f0_to_coarse
from text_to_speech.utils.audio.pitch_extractors import extract_pitch_simple
from text_to_speech.utils.commons.hparams import hparams
from text_to_speech.utils.commons.indexed_datasets import IndexedDatasetBuilder
from text_to_speech.utils.commons.multiprocess_utils import multiprocess_run_tqdm
from text_to_speech.utils.os_utils import remove_file, copy_file

np.seterr(divide='ignore', invalid='ignore')


class BinarizationError(Exception):
    pass


sentence2graph_parser = None
bert_tokenizer = None
use_graph = False
use_bpe = True


class BaseBinarizer:
    def __init__(self, processed_data_dir=None):
        if processed_data_dir is None:
            processed_data_dir = hparams['processed_data_dir']
        self.processed_data_dir = processed_data_dir
        self.binarization_args = hparams['binarization_args']
        self.items = {}
        self.item_names = []

        global sentence2graph_parser
        global use_graph
        global use_bpe
        global bert_tokenizer
        if use_graph:
            from text_to_speech.modules.tts.syntaspeech.syntactic_graph_buider import Sentence2GraphParser

        if hparams['ds_name'] in ['libritts', 'librispeech']:
            # Unfortunately, we found that processing LibriTTS with multiprocessing incurs a
            # pytorch.multiprocessing error, so we use a single thread with the CUDA graph builder.
            # It takes about 20 hours on a PC with a 24-core CPU and an RTX 2080 Ti to process
            # the whole of LibriTTS, so run the binarization and take a break!
            if use_graph:
                sentence2graph_parser = Sentence2GraphParser("en", use_gpu=True)
            if use_bpe:
                model_name = 'bert-base-uncased'
                tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
                bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
        elif hparams['ds_name'] == 'ljspeech':
            # use multiprocessing, thus the gpu is disabled;
            # binarization takes about 30 minutes
            if use_graph:
                sentence2graph_parser = Sentence2GraphParser("en", use_gpu=False)
            if use_bpe:
                model_name = 'bert-base-uncased'
                tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
                bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
        elif hparams['preprocess_args']['txt_processor'] == 'zh':
            # use multiprocessing, thus the gpu is disabled;
            # binarization takes about 30 minutes
            if use_graph:
                sentence2graph_parser = Sentence2GraphParser("zh", use_gpu=False)
            if use_bpe:
                model_name = 'bert-base-chinese'
                tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
                bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
        else:
            pass

    def load_meta_data(self):
        processed_data_dir = self.processed_data_dir
        items_list = json.load(open(f"{processed_data_dir}/metadata.json"))
        for r in tqdm(items_list, desc='Loading meta data.'):
            item_name = r['item_name']
            self.items[item_name] = r
            self.item_names.append(item_name)
        if self.binarization_args['shuffle']:
            random.seed(1234)
            random.shuffle(self.item_names)

    @property
    def train_item_names(self):
        range_ = self._convert_range(self.binarization_args['train_range'])
        return self.item_names[range_[0]:range_[1]]

    @property
    def valid_item_names(self):
        range_ = self._convert_range(self.binarization_args['valid_range'])
        return self.item_names[range_[0]:range_[1]]

    @property
    def test_item_names(self):
        range_ = self._convert_range(self.binarization_args['test_range'])
        return self.item_names[range_[0]:range_[1]]

    def _convert_range(self, range_):
        if range_[1] == -1:
            range_[1] = len(self.item_names)
        return range_

    def meta_data(self, prefix):
        if prefix == 'valid':
            item_names = self.valid_item_names
        elif prefix == 'test':
            item_names = self.test_item_names
        else:
            item_names = self.train_item_names
        for item_name in item_names:
            yield self.items[item_name]

    def process(self):
        self.load_meta_data()
        os.makedirs(hparams['binary_data_dir'], exist_ok=True)
        for fn in ['phone_set.json', 'word_set.json', 'spk_map.json']:
            remove_file(f"{hparams['binary_data_dir']}/{fn}")
            copy_file(f"{hparams['processed_data_dir']}/{fn}", f"{hparams['binary_data_dir']}/{fn}")
        if hparams['ds_name'] in ['ljspeech', 'biaobei', 'wenetspeech']:
            self.process_data('valid')
            self.process_data('test')
            self.process_data('train')
        elif hparams['ds_name'] in ['libritts', 'librispeech']:
            self.process_data_single_processing('valid')
            self.process_data_single_processing('test')
            self.process_data_single_processing('train')
        else:
            self.process_data('valid')
            self.process_data('test')
            self.process_data('train')
            # raise NotImplementedError

    def process_data(self, prefix):
        data_dir = hparams['binary_data_dir']
        builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
        meta_data = list(self.meta_data(prefix))
        process_item = partial(self.process_item, binarization_args=self.binarization_args)
        ph_lengths = []
        mel_lengths = []
        total_sec = 0
        items = []
        args = [{'item': item} for item in meta_data]

        for item_id, item in multiprocess_run_tqdm(process_item, args, desc='Processing data'):
            if item is not None:
                items.append(item)
        if self.binarization_args['with_spk_embed']:
            args = [{'wav': item['wav']} for item in items]
            for item_id, spk_embed in multiprocess_run_tqdm(
                    self.get_spk_embed, args,
                    init_ctx_func=lambda wid: {'voice_encoder': VoiceEncoder().cuda()}, num_workers=4,
                    desc='Extracting spk embed'):
                items[item_id]['spk_embed'] = spk_embed

        for item in items:
            if not self.binarization_args['with_wav'] and 'wav' in item:
                del item['wav']
            builder.add_item(item)
            mel_lengths.append(item['len'])
            assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph'])
            if 'ph_len' in item:
                ph_lengths.append(item['ph_len'])
            total_sec += item['sec']
        builder.finalize()
        np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths)
        if len(ph_lengths) > 0:
            np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths)
        print(f"| {prefix} total duration: {total_sec:.3f}s")

    def process_data_single_processing(self, prefix):
        data_dir = hparams['binary_data_dir']
        builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
        meta_data = list(self.meta_data(prefix))
        ph_lengths = []
        mel_lengths = []
        total_sec = 0

        if self.binarization_args['with_spk_embed']:
            voice_encoder = VoiceEncoder().cuda()
        for raw_item in tqdm(meta_data):
            item = self.process_item(raw_item, self.binarization_args)
            if item is None:
                continue
            if use_graph:
                if item['dgl_graph'].num_nodes() != np.array(item['ph2word']).max():
                    print(f"Skip Item: {item['item_name']} word nodes number incorrect!")
                    continue

            if self.binarization_args['with_spk_embed']:
                spk_embed = self.get_spk_embed(item['wav'], {'voice_encoder': voice_encoder})
                item['spk_embed'] = spk_embed

            if not self.binarization_args['with_wav'] and 'wav' in item:
                del item['wav']
            builder.add_item(item)
            mel_lengths.append(item['len'])
            assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph'])
            if 'ph_len' in item:
                ph_lengths.append(item['ph_len'])
            total_sec += item['sec']
        builder.finalize()
        np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths)
        if len(ph_lengths) > 0:
            np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths)
        print(f"| {prefix} total duration: {total_sec:.3f}s")

    @classmethod
    def process_item(cls, item, binarization_args):
        # extract these before the try block so the error message below can use them
        item_name = item['item_name']
        wav_fn = item['wav_fn']
        try:
            item['ph_len'] = len(item['ph_token'])
            wav, mel = cls.process_audio(wav_fn, item, binarization_args)
        except Exception as e:
            print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {wav_fn}")
            return None
        try:
            n_bos_frames, n_eos_frames = 0, 0
            if binarization_args['with_align']:
                tg_fn = f"{hparams['processed_data_dir']}/mfa_outputs/{item_name}.TextGrid"
                item['tg_fn'] = tg_fn
                cls.process_align(tg_fn, item)
                if binarization_args['trim_eos_bos']:
                    n_bos_frames = item['dur'][0]
                    n_eos_frames = item['dur'][-1]
                    T = len(mel)
                    item['mel'] = mel[n_bos_frames:T - n_eos_frames]

                    item['mel2ph'] = item['mel2ph'][n_bos_frames:T - n_eos_frames]
                    item['mel2word'] = item['mel2word'][n_bos_frames:T - n_eos_frames]
                    item['dur'] = item['dur'][1:-1]
                    item['dur_word'] = item['dur_word'][1:-1]
                    item['len'] = item['mel'].shape[0]
                    item['wav'] = wav[n_bos_frames * hparams['hop_size']:len(wav) - n_eos_frames * hparams['hop_size']]
            if binarization_args['with_f0']:
                cls.process_pitch(item, n_bos_frames, n_eos_frames)
        except BinarizationError as e:
            print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {wav_fn}")
            return None
        except Exception:
            traceback.print_exc()
            print(f"| Skip item. item_name: {item_name}, wav_fn: {wav_fn}")
            return None

        # if item['mel'].shape[0] < 64:
        #     print(f"Skip Item: {item['item_name']} Mel-spectrogram is shorter than 64!")
        #     return None
        # fix one bad case of stanza
        if item['txt'].endswith('yn .'):
            item['txt'] = item['txt'][:-4] + 'y .'
        if use_graph:
            try:
                language = sentence2graph_parser.language
                if language == 'en':
                    dgl_graph, etypes = sentence2graph_parser.parse(item['txt'])
                elif language == 'zh':
                    dgl_graph, etypes = sentence2graph_parser.parse(item['txt'], item['word'].split(" "), item['ph_gb_word'].split(" "))
                else:
                    raise NotImplementedError
                item['dgl_graph'] = dgl_graph
                item['edge_types'] = etypes
            except Exception:
                print(f"| Dependency Parsing Error! Skip item. item_name: {item_name}, wav_fn: {wav_fn}")
                return None

        if use_bpe:
            sent = item['word'][6:-6]  # discard the <BOS> and <EOS>, because the bert_tokenizer cannot recognize them
            bert_tokens = bert_tokenizer.tokenize(sent)
            input_ids = bert_tokenizer.convert_tokens_to_ids(bert_tokens)
            input_ids.insert(0, 101)  # add [CLS] to represent <BOS>
            input_ids.append(102)  # add [SEP] to represent <EOS>

            bert_tokens.insert(0, '<BOS>')
            bert_tokens.append('<EOS>')
            bert_token2word = []
            word_idx = 0
            for i in range(len(bert_tokens)):
                if not bert_tokens[i].startswith("##"):  # this token is an independent word
                    word_idx += 1
                bert_token2word.append(word_idx)

            item['bert_token'] = bert_tokens
            item['bert_input_ids'] = input_ids
            item['bert_token2word'] = bert_token2word
            item['bert_attention_mask'] = [1 for _ in range(len(bert_tokens))]
            item['bert_token_type_ids'] = [0 for _ in range(len(bert_tokens))]

        return item

    @classmethod
    def process_audio(cls, wav_fn, res, binarization_args):
        wav2spec_dict = librosa_wav2spec(
            wav_fn,
            fft_size=hparams['fft_size'],
            hop_size=hparams['hop_size'],
            win_length=hparams['win_size'],
            num_mels=hparams['audio_num_mel_bins'],
            fmin=hparams['fmin'],
            fmax=hparams['fmax'],
            sample_rate=hparams['audio_sample_rate'],
            loud_norm=hparams['loud_norm'])
        mel = wav2spec_dict['mel']
        wav = wav2spec_dict['wav'].astype(np.float16)
        if binarization_args['with_linear']:
            res['linear'] = wav2spec_dict['linear']
        res.update({'mel': mel, 'wav': wav, 'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0]})
        return wav, mel

    @staticmethod
    def process_align(tg_fn, item):
        ph = item['ph']
        mel = item['mel']
        ph_token = item['ph_token']
        if tg_fn is not None and os.path.exists(tg_fn):
            mel2ph, dur = get_mel2ph(tg_fn, ph, mel, hparams['hop_size'], hparams['audio_sample_rate'],
                                     hparams['binarization_args']['min_sil_duration'])
        else:
            raise BinarizationError("Align not found")
        if np.array(mel2ph).max() - 1 >= len(ph_token):
            raise BinarizationError(
                f"Align does not match: mel2ph.max() - 1: {np.array(mel2ph).max() - 1}, len(phone_encoded): {len(ph_token)}")
        item['mel2ph'] = mel2ph
        item['dur'] = dur

        ph2word = item['ph2word']
        mel2word = [ph2word[p - 1] for p in item['mel2ph']]
        item['mel2word'] = mel2word  # [T_mel]
        dur_word = mel2token_to_dur(mel2word, len(item['word_token']))
        item['dur_word'] = dur_word.tolist()  # [T_word]

    @staticmethod
    def process_pitch(item, n_bos_frames, n_eos_frames):
        wav, mel = item['wav'], item['mel']
        f0 = extract_pitch_simple(item['wav'])
        if sum(f0) == 0:
            raise BinarizationError("Empty f0")
        assert len(mel) == len(f0), (len(mel), len(f0))
        pitch_coarse = f0_to_coarse(f0)
        item['f0'] = f0
        item['pitch'] = pitch_coarse
        if hparams['binarization_args']['with_f0cwt']:
            uv, cont_lf0_lpf = get_cont_lf0(f0)
            logf0s_mean_org, logf0s_std_org = np.mean(cont_lf0_lpf), np.std(cont_lf0_lpf)
            cont_lf0_lpf_norm = (cont_lf0_lpf - logf0s_mean_org) / logf0s_std_org
            cwt_spec, scales = get_lf0_cwt(cont_lf0_lpf_norm)
            item['cwt_spec'] = cwt_spec
            item['cwt_mean'] = logf0s_mean_org
            item['cwt_std'] = logf0s_std_org

    @staticmethod
    def get_spk_embed(wav, ctx):
        return ctx['voice_encoder'].embed_utterance(wav.astype(float))

    @property
    def num_workers(self):
        return int(os.getenv('N_PROC', hparams.get('N_PROC', os.cpu_count())))
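
Downstream, the binarizer is presumably driven along these lines once `hparams` has been populated by the project's config loader (a hedged sketch; `set_hparams` is the assumed entry point, not code from this file):

    from text_to_speech.utils.commons.hparams import set_hparams  # assumed entry point

    set_hparams()                # fills the global `hparams` dict from a config file
    binarizer = BaseBinarizer()  # falls back to hparams['processed_data_dir']
    binarizer.process()          # writes indexed train/valid/test sets plus *_lengths.npy files
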
spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/streaming.py DELETED
@@ -1,66 +0,0 @@
import subprocess

import numpy as np


def ffmpeg_stream(youtube_url, sampling_rate=16_000, chunk_duration_ms=5000, pad_duration_ms=200):
    """
    Helper function that streams audio from a YouTube URL through yt-dlp and ffmpeg,
    yielding fixed-size float32 chunks (with overlap padding) for downstream ASR.
    """
    chunk_len = int(sampling_rate * chunk_duration_ms / 1000)
    pad_len = int(sampling_rate * pad_duration_ms / 1000)
    read_chunk_len = chunk_len + pad_len * 2

    ar = f"{sampling_rate}"
    ac = "1"
    format_for_conversion = "f32le"
    dtype = np.float32
    size_of_sample = 4

    ffmpeg_command = [
        "ffmpeg",
        "-i",
        "pipe:",
        "-ac",
        ac,
        "-ar",
        ar,
        "-f",
        format_for_conversion,
        "-hide_banner",
        "-loglevel",
        "quiet",
        "pipe:1",
    ]

    ytdl_command = ["yt-dlp", "-f", "bestaudio", youtube_url, "--quiet", "-o", "-"]

    try:
        ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=-1)
        ytdl_process = subprocess.Popen(ytdl_command, stdout=ffmpeg_process.stdin)
    except FileNotFoundError:
        raise ValueError("ffmpeg (or yt-dlp) was not found but is required to stream audio")

    acc = b""
    leftover = np.zeros((0,), dtype=np.float32)
    while ytdl_process.poll() is None:
        buflen = read_chunk_len * size_of_sample

        raw = ffmpeg_process.stdout.read(buflen)
        if raw == b"":
            break

        if len(acc) + len(raw) > buflen:
            acc = raw
        else:
            acc += raw

        audio = np.frombuffer(acc, dtype=dtype)
        audio = np.concatenate([leftover, audio])
        if len(audio) < pad_len * 2:
            # TODO: handle end of stream better than this
            break
        yield audio

        leftover = audio[-pad_len * 2:]
        read_chunk_len = chunk_len
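
A hedged usage sketch of the generator above (the URL and the consuming `transcribe` function are placeholders, not part of this file):

    for chunk in ffmpeg_stream("https://www.youtube.com/watch?v=...", sampling_rate=16_000):
        transcribe(chunk)  # hypothetical ASR call; each chunk is a 1-D float32 numpy array
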
spaces/ASJMO/freegpt/g4f/Provider/Providers/You.py DELETED
@@ -1,24 +0,0 @@
import os
import json
import time
import subprocess

from ...typing import sha256, Dict, get_type_hints

url = 'https://you.com'
model = 'gpt-3.5-turbo'
supports_stream = True
needs_auth = False


def _create_completion(model: str, messages: list, stream: bool, **kwargs):
    path = os.path.dirname(os.path.realpath(__file__))
    config = json.dumps({
        'messages': messages}, separators=(',', ':'))

    cmd = ['python3', f'{path}/helpers/you.py', config]

    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    for line in iter(p.stdout.readline, b''):
        yield line.decode('utf-8')  # [:-1]
spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-streamlit.py DELETED
@@ -1,3 +0,0 @@
from PyInstaller.utils.hooks import copy_metadata

datas = copy_metadata('streamlit')
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/stores/pendingMessageIdToRetry.ts DELETED
@@ -1,4 +0,0 @@
import type { Message } from "$lib/types/Message";
import { writable } from "svelte/store";

export const pendingMessageIdToRetry = writable<Message["id"] | null>(null);
spaces/AchyuthGamer/OpenGPT/server/website.py DELETED
@@ -1,58 +0,0 @@
from flask import render_template, redirect, url_for, request, session
from flask_babel import refresh
from time import time
from os import urandom
from server.babel import get_locale, get_languages


class Website:
    def __init__(self, bp, url_prefix) -> None:
        self.bp = bp
        self.url_prefix = url_prefix
        self.routes = {
            '/': {
                'function': lambda: redirect(url_for('._index')),
                'methods': ['GET', 'POST']
            },
            '/chat/': {
                'function': self._index,
                'methods': ['GET', 'POST']
            },
            '/chat/<conversation_id>': {
                'function': self._chat,
                'methods': ['GET', 'POST']
            },
            '/change-language': {
                'function': self.change_language,
                'methods': ['POST']
            },
            '/get-locale': {
                'function': self.get_locale,
                'methods': ['GET']
            },
            '/get-languages': {
                'function': self.get_languages,
                'methods': ['GET']
            }
        }

    def _chat(self, conversation_id):
        if '-' not in conversation_id:
            return redirect(url_for('._index'))

        return render_template('index.html', chat_id=conversation_id, url_prefix=self.url_prefix)

    def _index(self):
        return render_template('index.html', chat_id=f'{urandom(4).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{hex(int(time() * 1000))[2:]}', url_prefix=self.url_prefix)

    def change_language(self):
        data = request.get_json()
        session['language'] = data.get('language')
        refresh()
        return '', 204

    def get_locale(self):
        return get_locale()

    def get_languages(self):
        return get_languages()
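
The `routes` dict is presumably consumed when the blueprint is assembled elsewhere in the server; a hedged sketch of that wiring (the registration loop is an assumption, not code from this file):

    site = Website(bp, url_prefix='/chat')
    for route, spec in site.routes.items():
        bp.add_url_rule(route, view_func=spec['function'], methods=spec['methods'])
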
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/vertical.py DELETED
@@ -1,58 +0,0 @@
from __future__ import annotations
import asyncio
from colorama import Fore

from typing import TYPE_CHECKING, List

from . import decision_maker_registry
from .base import BaseDecisionMaker
from agentverse.logging import typewriter_log, logger

if TYPE_CHECKING:
    from agentverse.agents import BaseAgent, SolverAgent, CriticAgent
    from agentverse.message import Message, CriticMessage, SolverMessage


@decision_maker_registry.register("vertical")
class VerticalDecisionMaker(BaseDecisionMaker):
    """
    Discuss in a vertical manner.
    """

    name: str = "vertical"

    async def astep(
        self,
        agents: List[BaseAgent],
        task_description: str,
        previous_plan: str = "No solution yet.",
        advice: str = "No advice yet.",
        *args,
        **kwargs,
    ) -> List[SolverMessage]:
        # Here we assume that the first agent is the solver.
        # The rest of the agents are the reviewers.
        reviews: List[CriticMessage] = await asyncio.gather(
            *[
                agent.astep(previous_plan, advice, task_description)
                for agent in agents[1:]
            ]
        )
        logger.info("", "Reviews:", Fore.YELLOW)
        logger.info(
            "",
            "\n".join([f"[{review.sender}]: {review.content}" for review in reviews]),
            Fore.YELLOW,
        )

        nonempty_reviews = []
        for review in reviews:
            if not review.is_agree and review.content != "":
                nonempty_reviews.append(review)
        agents[0].add_message_to_memory(nonempty_reviews)
        result = agents[0].step(previous_plan, advice, task_description)
        agents[0].add_message_to_memory([result])
        return [result]

    def reset(self):
        pass
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.js DELETED
@@ -1,2 +0,0 @@
import InputText from '../../../plugins/inputtext.js';
export default InputText;
spaces/Alex132/togethercomputer-LLaMA-2-7B-32K/app.py DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/togethercomputer/LLaMA-2-7B-32K").launch()
spaces/AlexWang/lama/models/ade20k/segm_lib/utils/th.py DELETED
@@ -1,41 +0,0 @@
import torch
from torch.autograd import Variable
import numpy as np
import collections.abc  # the ABC aliases in `collections` were removed in Python 3.10

__all__ = ['as_variable', 'as_numpy', 'mark_volatile']


def as_variable(obj):
    if isinstance(obj, Variable):
        return obj
    if isinstance(obj, collections.abc.Sequence):
        return [as_variable(v) for v in obj]
    elif isinstance(obj, collections.abc.Mapping):
        return {k: as_variable(v) for k, v in obj.items()}
    else:
        return Variable(obj)


def as_numpy(obj):
    if isinstance(obj, collections.abc.Sequence):
        return [as_numpy(v) for v in obj]
    elif isinstance(obj, collections.abc.Mapping):
        return {k: as_numpy(v) for k, v in obj.items()}
    elif isinstance(obj, Variable):
        return obj.data.cpu().numpy()
    elif torch.is_tensor(obj):
        return obj.cpu().numpy()
    else:
        return np.array(obj)


def mark_volatile(obj):
    if torch.is_tensor(obj):
        obj = Variable(obj)
    if isinstance(obj, Variable):
        obj.no_grad = True
        return obj
    elif isinstance(obj, collections.abc.Mapping):
        return {k: mark_volatile(o) for k, o in obj.items()}
    elif isinstance(obj, collections.abc.Sequence):
        return [mark_volatile(o) for o in obj]
    else:
        return obj
spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/eval_cer.py DELETED
@@ -1,145 +0,0 @@
""" Character Error Rate (CER) metric. """
from typing import List

import datasets
import evaluate
import jiwer
import jiwer.transforms as tr
from datasets.config import PY_VERSION
from packaging import version


if PY_VERSION < version.parse("3.8"):
    import importlib_metadata
else:
    import importlib.metadata as importlib_metadata

SENTENCE_DELIMITER = ""

if version.parse(importlib_metadata.version("jiwer")) < version.parse("2.3.0"):

    class SentencesToListOfCharacters(tr.AbstractTransform):
        def __init__(self, sentence_delimiter: str = " "):
            self.sentence_delimiter = sentence_delimiter

        def process_string(self, s: str):
            return list(s)

        def process_list(self, inp: List[str]):
            chars = []
            for sent_idx, sentence in enumerate(inp):
                chars.extend(self.process_string(sentence))
                if self.sentence_delimiter is not None and self.sentence_delimiter != "" and sent_idx < len(inp) - 1:
                    chars.append(self.sentence_delimiter)
            return chars

    cer_transform = tr.Compose(
        [tr.RemoveMultipleSpaces(), tr.Strip(), SentencesToListOfCharacters(SENTENCE_DELIMITER)]
    )
else:
    cer_transform = tr.Compose(
        [
            tr.RemoveMultipleSpaces(),
            tr.Strip(),
            tr.ReduceToSingleSentence(SENTENCE_DELIMITER),
            tr.ReduceToListOfListOfChars(),
        ]
    )


_CITATION = """\
@inproceedings{inproceedings,
    author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
    year = {2004},
    month = {01},
    pages = {},
    title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
}
"""


_DESCRIPTION = """\
Character error rate (CER) is a standard metric of the performance of an automatic speech recognition system.

CER is similar to Word Error Rate (WER) but operates on characters instead of words. Please refer to the docs of WER for further information.

The character error rate can be computed as:

CER = (S + D + I) / N = (S + D + I) / (S + D + C)

where

S is the number of substitutions,
D is the number of deletions,
I is the number of insertions,
C is the number of correct characters,
N is the number of characters in the reference (N=S+D+C).

CER's output is not always a number between 0 and 1, particularly when there is a high number of insertions. This value is often associated with the percentage of characters that were incorrectly predicted. The lower the value, the better the performance of the ASR system, with a CER of 0 being a perfect score.
"""

_KWARGS_DESCRIPTION = """
Computes CER score of transcribed segments against references.
Args:
    references: list of references for each speech input.
    predictions: list of transcriptions to score.
    concatenate_texts: Whether or not to concatenate sentences before evaluation; set to True for a more accurate result.
Returns:
    (float): the character error rate

Examples for the Hungarian language:
    >>> # Colab usage
    >>> !pip install evaluate jiwer
    >>> import evaluate
    >>> from evaluate import load

    >>> predictions = ["ez a jóslat", "van egy másik minta is"]
    >>> references = ["ez a hivatkozás", "van még egy"]
    >>> cer = evaluate.load("cer")
    >>> cer_score = cer.compute(predictions=predictions, references=references)
    >>> print(cer_score)
    >>> 0.9615384615384616
"""


@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class CER(evaluate.Metric):
    def _info(self):
        return evaluate.MetricInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            features=datasets.Features(
                {
                    "predictions": datasets.Value("string", id="sequence"),
                    "references": datasets.Value("string", id="sequence"),
                }
            ),
            codebase_urls=["https://github.com/jitsi/jiwer/"],
            reference_urls=[
                "https://en.wikipedia.org/wiki/Word_error_rate",
                "https://sites.google.com/site/textdigitisation/qualitymeasures/computingerrorrates",
            ],
        )

    def _compute(self, predictions, references, concatenate_texts=False):
        if concatenate_texts:
            # jiwer reports the measure under the "wer" key; with character-level
            # transforms it is effectively the CER.
            return jiwer.compute_measures(
                references,
                predictions,
                truth_transform=cer_transform,
                hypothesis_transform=cer_transform,
            )["wer"]

        incorrect = 0
        total = 0
        for prediction, reference in zip(predictions, references):
            measures = jiwer.compute_measures(
                reference,
                prediction,
                truth_transform=cer_transform,
                hypothesis_transform=cer_transform,
            )
            incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
            total += measures["substitutions"] + measures["deletions"] + measures["hits"]

        return incorrect / total
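
As a quick sanity check of the formula in `_DESCRIPTION`: comparing the prediction "axc" against the reference "abc" gives S=1, D=0, I=0 over N=3 reference characters, so CER = 1/3. A hedged snippet (assuming the metric class can be instantiated directly rather than loaded via `evaluate.load`):

    cer = CER()
    print(cer.compute(predictions=["axc"], references=["abc"]))  # expected ≈ 0.3333
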
spaces/Altinas/vits-uma-genshin-honkais/text/__init__.py DELETED
@@ -1,57 +0,0 @@
""" from https://github.com/keithito/tacotron """
from text import cleaners
from text.symbols import symbols


# Mappings from symbol to numeric ID and vice versa:
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
_id_to_symbol = {i: s for i, s in enumerate(symbols)}


def text_to_sequence(text, symbols, cleaner_names):
    '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
    Args:
        text: string to convert to a sequence
        symbols: list of symbols in the vocabulary
        cleaner_names: names of the cleaner functions to run the text through
    Returns:
        List of integers corresponding to the symbols in the text, and the cleaned text
    '''
    _symbol_to_id = {s: i for i, s in enumerate(symbols)}
    sequence = []

    clean_text = _clean_text(text, cleaner_names)
    for symbol in clean_text:
        if symbol not in _symbol_to_id.keys():
            continue
        symbol_id = _symbol_to_id[symbol]
        sequence += [symbol_id]
    return sequence, clean_text


def cleaned_text_to_sequence(cleaned_text):
    '''Converts a string of already-cleaned text to a sequence of IDs corresponding to the symbols in the text.
    Args:
        cleaned_text: string to convert to a sequence
    Returns:
        List of integers corresponding to the symbols in the text
    '''
    sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
    return sequence


def sequence_to_text(sequence):
    '''Converts a sequence of IDs back to a string'''
    result = ''
    for symbol_id in sequence:
        s = _id_to_symbol[symbol_id]
        result += s
    return result


def _clean_text(text, cleaner_names):
    for name in cleaner_names:
        cleaner = getattr(cleaners, name)
        if not cleaner:
            raise Exception('Unknown cleaner: %s' % name)
        text = cleaner(text)
    return text
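
A hedged usage sketch (the cleaner name is illustrative; this project's `cleaners` module defines its own set, which may differ):

    from text import text_to_sequence, sequence_to_text
    from text.symbols import symbols

    seq, cleaned = text_to_sequence("Hello world.", symbols, ["english_cleaners"])
    print(sequence_to_text(seq))  # round-trips to `cleaned` when every symbol is in the vocabulary
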
spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/single_id_coach.py DELETED
@@ -1,80 +0,0 @@
import os
import torch
from tqdm import tqdm
from PTI.configs import paths_config, hyperparameters, global_config
from PTI.training.coaches.base_coach import BaseCoach
from PTI.utils.log_utils import log_images_from_w


class SingleIDCoach(BaseCoach):
    def __init__(self, data_loader, use_wandb):
        super().__init__(data_loader, use_wandb)

    def train(self):
        w_path_dir = f"{paths_config.embedding_base_dir}/{paths_config.input_data_id}"
        os.makedirs(w_path_dir, exist_ok=True)
        os.makedirs(f"{w_path_dir}/{paths_config.pti_results_keyword}", exist_ok=True)

        use_ball_holder = True
        w_pivot = None
        fname, image = next(iter(self.data_loader))
        image_name = fname[0]

        self.restart_training()

        embedding_dir = f"{w_path_dir}/{paths_config.pti_results_keyword}/{image_name}"
        os.makedirs(embedding_dir, exist_ok=True)

        if hyperparameters.use_last_w_pivots:
            w_pivot = self.load_inversions(w_path_dir, image_name)
        elif not hyperparameters.use_last_w_pivots or w_pivot is None:
            w_pivot = self.calc_inversions(image, image_name)
            torch.save(w_pivot, f"{embedding_dir}/0.pt")
        # w_pivot = w_pivot.detach().clone().to(global_config.device)
        w_pivot = w_pivot.to(global_config.device)

        log_images_counter = 0
        real_images_batch = image.to(global_config.device)

        for i in tqdm(range(hyperparameters.max_pti_steps)):
            generated_images = self.forward(w_pivot)
            loss, l2_loss_val, loss_lpips = self.calc_loss(
                generated_images,
                real_images_batch,
                image_name,
                self.G,
                use_ball_holder,
                w_pivot,
            )

            self.optimizer.zero_grad()

            if loss_lpips <= hyperparameters.LPIPS_value_threshold:
                break

            loss.backward()
            self.optimizer.step()

            use_ball_holder = (
                global_config.training_step
                % hyperparameters.locality_regularization_interval
                == 0
            )

            if (
                self.use_wandb
                and log_images_counter % global_config.image_rec_result_log_snapshot
                == 0
            ):
                log_images_from_w([w_pivot], self.G, [image_name])

            global_config.training_step += 1
            log_images_counter += 1

        torch.save(
            self.G,
            f"{paths_config.checkpoints_dir}/model_{global_config.run_name}_{image_name}.pt",
        )
        return self.G, w_pivot
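
A hedged sketch of driving the coach (constructing the `data_loader` of `(name, image)` batches and filling the PTI config modules is project-specific and assumed here):

    coach = SingleIDCoach(data_loader, use_wandb=False)
    tuned_G, w_pivot = coach.train()  # returns the tuned generator and the pivot latent
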
spaces/Amrrs/image-caption-with-vit-gpt2/README.md DELETED
@@ -1,46 +0,0 @@
---
title: Image Caption With Vit Gpt2
emoji: 👀
colorFrom: pink
colorTo: pink
sdk: gradio
app_file: app.py
pinned: false
license: mit
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`

`sdk_version`: _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.

`models`: _List[string]_
HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
Will be parsed automatically from your code if not specified here.

`datasets`: _List[string]_
HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
Will be parsed automatically from your code if not specified here.

`pinned`: _boolean_
Whether the Space stays on top of your list.
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py DELETED
@@ -1,578 +0,0 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dataclasses import dataclass
from typing import List, Optional, Union

import numpy as np
import PIL
import torch
from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionModelWithProjection

from ...models import PriorTransformer
from ...schedulers import UnCLIPScheduler
from ...utils import (
    BaseOutput,
    is_accelerate_available,
    is_accelerate_version,
    logging,
    randn_tensor,
    replace_example_docstring,
)
from ..pipeline_utils import DiffusionPipeline


logger = logging.get_logger(__name__)  # pylint: disable=invalid-name

EXAMPLE_DOC_STRING = """
    Examples:
        ```py
        >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline
        >>> import torch

        >>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior")
        >>> pipe_prior.to("cuda")

        >>> prompt = "red cat, 4k photo"
        >>> out = pipe_prior(prompt)
        >>> image_emb = out.image_embeds
        >>> negative_image_emb = out.negative_image_embeds

        >>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1")
        >>> pipe.to("cuda")

        >>> image = pipe(
        ...     prompt,
        ...     image_embeds=image_emb,
        ...     negative_image_embeds=negative_image_emb,
        ...     height=768,
        ...     width=768,
        ...     num_inference_steps=100,
        ... ).images

        >>> image[0].save("cat.png")
        ```
"""

EXAMPLE_INTERPOLATE_DOC_STRING = """
    Examples:
        ```py
        >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline
        >>> from diffusers.utils import load_image
        >>> import PIL

        >>> import torch
        >>> from torchvision import transforms

        >>> pipe_prior = KandinskyPriorPipeline.from_pretrained(
        ...     "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
        ... )
        >>> pipe_prior.to("cuda")

        >>> img1 = load_image(
        ...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
        ...     "/kandinsky/cat.png"
        ... )

        >>> img2 = load_image(
        ...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
        ...     "/kandinsky/starry_night.jpeg"
        ... )

        >>> images_texts = ["a cat", img1, img2]
        >>> weights = [0.3, 0.3, 0.4]
        >>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights)

        >>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
        >>> pipe.to("cuda")

        >>> image = pipe(
        ...     "",
        ...     image_embeds=image_emb,
        ...     negative_image_embeds=zero_image_emb,
        ...     height=768,
        ...     width=768,
        ...     num_inference_steps=150,
        ... ).images[0]

        >>> image.save("starry_cat.png")
        ```
"""


@dataclass
class KandinskyPriorPipelineOutput(BaseOutput):
    """
    Output class for KandinskyPriorPipeline.

    Args:
        image_embeds (`torch.FloatTensor`)
            clip image embeddings for text prompt
        negative_image_embeds (`torch.FloatTensor` or `np.ndarray`)
            clip image embeddings for unconditional tokens
    """

    image_embeds: Union[torch.FloatTensor, np.ndarray]
    negative_image_embeds: Union[torch.FloatTensor, np.ndarray]


class KandinskyPriorPipeline(DiffusionPipeline):
    """
    Pipeline for generating image prior for Kandinsky

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

    Args:
        prior ([`PriorTransformer`]):
            The canonical unCLIP prior to approximate the image embedding from the text embedding.
        image_encoder ([`CLIPVisionModelWithProjection`]):
            Frozen image-encoder.
        text_encoder ([`CLIPTextModelWithProjection`]):
            Frozen text-encoder.
        tokenizer (`CLIPTokenizer`):
            Tokenizer of class
            [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
        scheduler ([`UnCLIPScheduler`]):
            A scheduler to be used in combination with `prior` to generate image embedding.
    """

    _exclude_from_cpu_offload = ["prior"]

    def __init__(
        self,
        prior: PriorTransformer,
        image_encoder: CLIPVisionModelWithProjection,
        text_encoder: CLIPTextModelWithProjection,
        tokenizer: CLIPTokenizer,
        scheduler: UnCLIPScheduler,
        image_processor: CLIPImageProcessor,
    ):
        super().__init__()

        self.register_modules(
            prior=prior,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            scheduler=scheduler,
            image_encoder=image_encoder,
            image_processor=image_processor,
        )

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_INTERPOLATE_DOC_STRING)
    def interpolate(
        self,
        images_and_prompts: List[Union[str, PIL.Image.Image, torch.FloatTensor]],
        weights: List[float],
        num_images_per_prompt: int = 1,
        num_inference_steps: int = 25,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        negative_prior_prompt: Optional[str] = None,
        negative_prompt: str = "",
        guidance_scale: float = 4.0,
        device=None,
    ):
        """
        Function invoked when using the prior pipeline for interpolation.

        Args:
            images_and_prompts (`List[Union[str, PIL.Image.Image, torch.FloatTensor]]`):
                list of prompts and images to guide the image generation.
            weights: (`List[float]`):
                list of weights for each condition in `images_and_prompts`
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            num_inference_steps (`int`, *optional*, defaults to 25):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
                to make generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a
                latents tensor will be generated by sampling using the supplied random `generator`.
            negative_prior_prompt (`str`, *optional*):
                The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored
                if `guidance_scale` is less than `1`).
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if
                `guidance_scale` is less than `1`).
            guidance_scale (`float`, *optional*, defaults to 4.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
                text `prompt`, usually at the expense of lower image quality.

        Examples:

        Returns:
            [`KandinskyPriorPipelineOutput`] or `tuple`
        """

        device = device or self.device

        if len(images_and_prompts) != len(weights):
            raise ValueError(
                f"`images_and_prompts` contains {len(images_and_prompts)} items and `weights` contains {len(weights)} items - they should be lists of same length"
            )

        image_embeddings = []
        for cond, weight in zip(images_and_prompts, weights):
            if isinstance(cond, str):
                image_emb = self(
                    cond,
                    num_inference_steps=num_inference_steps,
                    num_images_per_prompt=num_images_per_prompt,
                    generator=generator,
                    latents=latents,
                    negative_prompt=negative_prior_prompt,
                    guidance_scale=guidance_scale,
                ).image_embeds
            elif isinstance(cond, (PIL.Image.Image, torch.Tensor)):
                if isinstance(cond, PIL.Image.Image):
                    cond = (
                        self.image_processor(cond, return_tensors="pt")
                        .pixel_values[0]
                        .unsqueeze(0)
                        .to(dtype=self.image_encoder.dtype, device=device)
                    )

                image_emb = self.image_encoder(cond)["image_embeds"]
            else:
                raise ValueError(
                    f"`images_and_prompts` can only contain elements of type `str`, `PIL.Image.Image` or `torch.Tensor` but is {type(cond)}"
                )

            image_embeddings.append(image_emb * weight)

        image_emb = torch.cat(image_embeddings).sum(dim=0, keepdim=True)

        out_zero = self(
            negative_prompt,
            num_inference_steps=num_inference_steps,
            num_images_per_prompt=num_images_per_prompt,
            generator=generator,
            latents=latents,
            negative_prompt=negative_prior_prompt,
            guidance_scale=guidance_scale,
        )
        zero_image_emb = out_zero.negative_image_embeds if negative_prompt == "" else out_zero.image_embeds

        return KandinskyPriorPipelineOutput(image_embeds=image_emb, negative_image_embeds=zero_image_emb)

    # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
    def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
        if latents is None:
            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
        else:
            if latents.shape != shape:
                raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
            latents = latents.to(device)

        latents = latents * scheduler.init_noise_sigma
        return latents

    def get_zero_embed(self, batch_size=1, device=None):
        device = device or self.device
        zero_img = torch.zeros(1, 3, self.image_encoder.config.image_size, self.image_encoder.config.image_size).to(
            device=device, dtype=self.image_encoder.dtype
        )
        zero_image_emb = self.image_encoder(zero_img)["image_embeds"]
        zero_image_emb = zero_image_emb.repeat(batch_size, 1)
        return zero_image_emb

    def _encode_prompt(
        self,
        prompt,
        device,
        num_images_per_prompt,
        do_classifier_free_guidance,
        negative_prompt=None,
    ):
        batch_size = len(prompt) if isinstance(prompt, list) else 1
        # get prompt text embeddings
        text_inputs = self.tokenizer(
            prompt,
            padding="max_length",
            max_length=self.tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        text_input_ids = text_inputs.input_ids
        text_mask = text_inputs.attention_mask.bool().to(device)

        untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids

        if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
            removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
            logger.warning(
                "The following part of your input was truncated because CLIP can only handle sequences up to"
                f" {self.tokenizer.model_max_length} tokens: {removed_text}"
            )
            text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]

        text_encoder_output = self.text_encoder(text_input_ids.to(device))

        prompt_embeds = text_encoder_output.text_embeds
        text_encoder_hidden_states = text_encoder_output.last_hidden_state

        prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
        text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
        text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)

        if do_classifier_free_guidance:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )
            uncond_text_mask = uncond_input.attention_mask.bool().to(device)
            negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))

            negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
            uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state

            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method

            seq_len = negative_prompt_embeds.shape[1]
            negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)

            seq_len = uncond_text_encoder_hidden_states.shape[1]
            uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
            uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
                batch_size * num_images_per_prompt, seq_len, -1
            )
            uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)

            # done duplicates

            # For classifier free guidance, we need to do two forward passes.
            # Here we concatenate the unconditional and text embeddings into a single batch
            # to avoid doing two forward passes
            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
            text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])

            text_mask = torch.cat([uncond_text_mask, text_mask])

        return prompt_embeds, text_encoder_hidden_states, text_mask

    def enable_model_cpu_offload(self, gpu_id=0):
        r"""
        Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
        to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
        method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
        `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
        """
        if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
            from accelerate import cpu_offload_with_hook
        else:
            raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

        device = torch.device(f"cuda:{gpu_id}")

        if self.device.type != "cpu":
            self.to("cpu", silence_dtype_warnings=True)
            torch.cuda.empty_cache()  # otherwise we don't see the memory savings (but they probably exist)

        hook = None
        for cpu_offloaded_model in [self.text_encoder, self.prior]:
            _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)

        # We'll offload the last model manually.
        self.prior_hook = hook

        _, hook = cpu_offload_with_hook(self.image_encoder, device, prev_module_hook=self.prior_hook)

        self.final_offload_hook = hook

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_DOC_STRING)
    def __call__(
        self,
        prompt: Union[str, List[str]],
        negative_prompt: Optional[Union[str, List[str]]] = None,
        num_images_per_prompt: int = 1,
        num_inference_steps: int = 25,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        guidance_scale: float = 4.0,
        output_type: Optional[str] = "pt",
        return_dict: bool = True,
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`):
                The prompt or prompts to guide the image generation.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e.,
                ignored if `guidance_scale` is less than `1`).
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            num_inference_steps (`int`, *optional*, defaults to 25):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
454
- expense of slower inference.
455
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
456
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
457
- to make generation deterministic.
458
- latents (`torch.FloatTensor`, *optional*):
459
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
460
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
461
- tensor will ge generated by sampling using the supplied random `generator`.
462
- guidance_scale (`float`, *optional*, defaults to 4.0):
463
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
464
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
465
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
466
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
467
- usually at the expense of lower image quality.
468
- output_type (`str`, *optional*, defaults to `"pt"`):
469
- The output format of the generate image. Choose between: `"np"` (`np.array`) or `"pt"`
470
- (`torch.Tensor`).
471
- return_dict (`bool`, *optional*, defaults to `True`):
472
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
473
-
474
- Examples:
475
-
476
- Returns:
477
- [`KandinskyPriorPipelineOutput`] or `tuple`
478
- """
479
-
480
- if isinstance(prompt, str):
481
- prompt = [prompt]
482
- elif not isinstance(prompt, list):
483
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
484
-
485
- if isinstance(negative_prompt, str):
486
- negative_prompt = [negative_prompt]
487
- elif not isinstance(negative_prompt, list) and negative_prompt is not None:
488
- raise ValueError(f"`negative_prompt` has to be of type `str` or `list` but is {type(negative_prompt)}")
489
-
490
- # if the negative prompt is defined we double the batch size to
491
- # directly retrieve the negative prompt embedding
492
- if negative_prompt is not None:
493
- prompt = prompt + negative_prompt
494
- negative_prompt = 2 * negative_prompt
495
-
496
- device = self._execution_device
497
-
498
- batch_size = len(prompt)
499
- batch_size = batch_size * num_images_per_prompt
500
-
501
- do_classifier_free_guidance = guidance_scale > 1.0
502
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
503
- prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
504
- )
505
-
506
- # prior
507
- self.scheduler.set_timesteps(num_inference_steps, device=device)
508
- prior_timesteps_tensor = self.scheduler.timesteps
509
-
510
- embedding_dim = self.prior.config.embedding_dim
511
-
512
- latents = self.prepare_latents(
513
- (batch_size, embedding_dim),
514
- prompt_embeds.dtype,
515
- device,
516
- generator,
517
- latents,
518
- self.scheduler,
519
- )
520
-
521
- for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
522
- # expand the latents if we are doing classifier free guidance
523
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
524
-
525
- predicted_image_embedding = self.prior(
526
- latent_model_input,
527
- timestep=t,
528
- proj_embedding=prompt_embeds,
529
- encoder_hidden_states=text_encoder_hidden_states,
530
- attention_mask=text_mask,
531
- ).predicted_image_embedding
532
-
533
- if do_classifier_free_guidance:
534
- predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
535
- predicted_image_embedding = predicted_image_embedding_uncond + guidance_scale * (
536
- predicted_image_embedding_text - predicted_image_embedding_uncond
537
- )
538
-
539
- if i + 1 == prior_timesteps_tensor.shape[0]:
540
- prev_timestep = None
541
- else:
542
- prev_timestep = prior_timesteps_tensor[i + 1]
543
-
544
- latents = self.scheduler.step(
545
- predicted_image_embedding,
546
- timestep=t,
547
- sample=latents,
548
- generator=generator,
549
- prev_timestep=prev_timestep,
550
- ).prev_sample
551
-
552
- latents = self.prior.post_process_latents(latents)
553
-
554
- image_embeddings = latents
555
-
556
- # if negative prompt has been defined, we retrieve split the image embedding into two
557
- if negative_prompt is None:
558
- zero_embeds = self.get_zero_embed(latents.shape[0], device=latents.device)
559
-
560
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
561
- self.final_offload_hook.offload()
562
- else:
563
- image_embeddings, zero_embeds = image_embeddings.chunk(2)
564
-
565
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
566
- self.prior_hook.offload()
567
-
568
- if output_type not in ["pt", "np"]:
569
- raise ValueError(f"Only the output types `pt` and `np` are supported not output_type={output_type}")
570
-
571
- if output_type == "np":
572
- image_embeddings = image_embeddings.cpu().numpy()
573
- zero_embeds = zero_embeds.cpu().numpy()
574
-
575
- if not return_dict:
576
- return (image_embeddings, zero_embeds)
577
-
578
- return KandinskyPriorPipelineOutput(image_embeds=image_embeddings, negative_image_embeds=zero_embeds)
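
For context, a minimal usage sketch of the prior pipeline deleted above, assuming the public diffusers API and the kandinsky-community/kandinsky-2-1-prior checkpoint name (neither is part of this diff):

# Hedged sketch: assumes diffusers' KandinskyPriorPipeline as of this commit.
import torch
from diffusers import KandinskyPriorPipeline

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")

# guidance_scale > 1 enables classifier-free guidance on the image embedding:
# emb = uncond + w * (text - uncond), exactly as in the denoising loop above.
out = pipe_prior(
    "a portrait of a red fox", negative_prompt="low quality", guidance_scale=4.0
)
image_embeds, negative_image_embeds = out.image_embeds, out.negative_image_embeds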
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_image_processor.py DELETED
@@ -1,149 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import PIL
-import torch
-
-from diffusers.image_processor import VaeImageProcessor
-
-
-class ImageProcessorTest(unittest.TestCase):
-    @property
-    def dummy_sample(self):
-        batch_size = 1
-        num_channels = 3
-        height = 8
-        width = 8
-
-        sample = torch.rand((batch_size, num_channels, height, width))
-
-        return sample
-
-    def to_np(self, image):
-        if isinstance(image[0], PIL.Image.Image):
-            return np.stack([np.array(i) for i in image], axis=0)
-        elif isinstance(image, torch.Tensor):
-            return image.cpu().numpy().transpose(0, 2, 3, 1)
-        return image
-
-    def test_vae_image_processor_pt(self):
-        image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
-
-        input_pt = self.dummy_sample
-        input_np = self.to_np(input_pt)
-
-        for output_type in ["pt", "np", "pil"]:
-            out = image_processor.postprocess(
-                image_processor.preprocess(input_pt),
-                output_type=output_type,
-            )
-            out_np = self.to_np(out)
-            in_np = (input_np * 255).round() if output_type == "pil" else input_np
-            assert (
-                np.abs(in_np - out_np).max() < 1e-6
-            ), f"decoded output does not match input for output_type {output_type}"
-
-    def test_vae_image_processor_np(self):
-        image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
-        input_np = self.dummy_sample.cpu().numpy().transpose(0, 2, 3, 1)
-
-        for output_type in ["pt", "np", "pil"]:
-            out = image_processor.postprocess(image_processor.preprocess(input_np), output_type=output_type)
-
-            out_np = self.to_np(out)
-            in_np = (input_np * 255).round() if output_type == "pil" else input_np
-            assert (
-                np.abs(in_np - out_np).max() < 1e-6
-            ), f"decoded output does not match input for output_type {output_type}"
-
-    def test_vae_image_processor_pil(self):
-        image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
-
-        input_np = self.dummy_sample.cpu().numpy().transpose(0, 2, 3, 1)
-        input_pil = image_processor.numpy_to_pil(input_np)
-
-        for output_type in ["pt", "np", "pil"]:
-            out = image_processor.postprocess(image_processor.preprocess(input_pil), output_type=output_type)
-            for i, o in zip(input_pil, out):
-                in_np = np.array(i)
-                out_np = self.to_np(out) if output_type == "pil" else (self.to_np(out) * 255).round()
-                assert (
-                    np.abs(in_np - out_np).max() < 1e-6
-                ), f"decoded output does not match input for output_type {output_type}"
-
-    def test_preprocess_input_3d(self):
-        image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
-
-        input_pt_4d = self.dummy_sample
-        input_pt_3d = input_pt_4d.squeeze(0)
-
-        out_pt_4d = image_processor.postprocess(
-            image_processor.preprocess(input_pt_4d),
-            output_type="np",
-        )
-        out_pt_3d = image_processor.postprocess(
-            image_processor.preprocess(input_pt_3d),
-            output_type="np",
-        )
-
-        input_np_4d = self.to_np(self.dummy_sample)
-        input_np_3d = input_np_4d.squeeze(0)
-
-        out_np_4d = image_processor.postprocess(
-            image_processor.preprocess(input_np_4d),
-            output_type="np",
-        )
-        out_np_3d = image_processor.postprocess(
-            image_processor.preprocess(input_np_3d),
-            output_type="np",
-        )
-
-        assert np.abs(out_pt_4d - out_pt_3d).max() < 1e-6
-        assert np.abs(out_np_4d - out_np_3d).max() < 1e-6
-
-    def test_preprocess_input_list(self):
-        image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
-
-        input_pt_4d = self.dummy_sample
-        input_pt_list = list(input_pt_4d)
-
-        out_pt_4d = image_processor.postprocess(
-            image_processor.preprocess(input_pt_4d),
-            output_type="np",
-        )
-
-        out_pt_list = image_processor.postprocess(
-            image_processor.preprocess(input_pt_list),
-            output_type="np",
-        )
-
-        # fix: the list of numpy inputs was built but never used; the numpy
-        # outputs below now exercise the numpy inputs as intended.
-        input_np_4d = self.to_np(self.dummy_sample)
-        input_np_list = list(input_np_4d)
-
-        out_np_4d = image_processor.postprocess(
-            image_processor.preprocess(input_np_4d),
-            output_type="np",
-        )
-
-        out_np_list = image_processor.postprocess(
-            image_processor.preprocess(input_np_list),
-            output_type="np",
-        )
-
-        assert np.abs(out_pt_4d - out_pt_list).max() < 1e-6
-        assert np.abs(out_np_4d - out_np_list).max() < 1e-6
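
These tests all assert one round-trip property; a minimal standalone sketch of it, assuming the same diffusers.image_processor API:

# Hedged sketch: preprocess and postprocess should be inverses (up to 1e-6).
import torch
from diffusers.image_processor import VaeImageProcessor

proc = VaeImageProcessor(do_resize=False, do_normalize=True)
x = torch.rand(1, 3, 8, 8)  # NCHW sample in [0, 1]
y = proc.postprocess(proc.preprocess(x), output_type="pt")
assert torch.allclose(x, y, atol=1e-6)  # normalize to [-1, 1], then back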
spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/__init__.py DELETED
@@ -1,8 +0,0 @@
-from .mask_target import mask_target
-from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks
-from .utils import encode_mask_results, split_combined_polys
-
-__all__ = [
-    'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks',
-    'PolygonMasks', 'encode_mask_results'
-]
spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py DELETED
@@ -1,671 +0,0 @@
-import numpy as np
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import multi_apply, multiclass_nms
-from mmdet.core.bbox.iou_calculators import bbox_overlaps
-from mmdet.models import HEADS
-from mmdet.models.dense_heads import ATSSHead
-
-EPS = 1e-12
-try:
-    import sklearn.mixture as skm
-except ImportError:
-    skm = None
-
-
-def levels_to_images(mlvl_tensor):
-    """Concat multi-level feature maps by image.
-
-    [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]
-    Convert the shape of each element in mlvl_tensor from (N, C, H, W) to
-    (N, H*W, C), then split the element to N elements with shape (H*W, C), and
-    concat elements in same image of all levels along the first dimension.
-
-    Args:
-        mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from
-            corresponding level. Each element is of shape (N, C, H, W)
-
-    Returns:
-        list[torch.Tensor]: A list that contains N tensors and each tensor is
-            of shape (num_elements, C)
-    """
-    batch_size = mlvl_tensor[0].size(0)
-    batch_list = [[] for _ in range(batch_size)]
-    channels = mlvl_tensor[0].size(1)
-    for t in mlvl_tensor:
-        t = t.permute(0, 2, 3, 1)
-        t = t.view(batch_size, -1, channels).contiguous()
-        for img in range(batch_size):
-            batch_list[img].append(t[img])
-    return [torch.cat(item, 0) for item in batch_list]
-
-
-@HEADS.register_module()
-class PAAHead(ATSSHead):
-    """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU
-    Prediction for Object Detection.
-
-    Code is modified from the `official github repo
-    <https://github.com/kkhoot/PAA/blob/master/paa_core
-    /modeling/rpn/paa/loss.py>`_.
-
-    More details can be found in the `paper
-    <https://arxiv.org/abs/2007.08103>`_ .
-
-    Args:
-        topk (int): Select topk samples with smallest loss in
-            each level.
-        score_voting (bool): Whether to use score voting in post-process.
-        covariance_type : String describing the type of covariance parameters
-            to be used in :class:`sklearn.mixture.GaussianMixture`.
-            It must be one of:
-
-            - 'full': each component has its own general covariance matrix
-            - 'tied': all components share the same general covariance matrix
-            - 'diag': each component has its own diagonal covariance matrix
-            - 'spherical': each component has its own single variance
-            Default: 'diag'. From 'full' to 'spherical', the gmm fitting
-            process is faster yet the performance could be influenced. For most
-            cases, 'diag' should be a good choice.
-    """
-
-    def __init__(self,
-                 *args,
-                 topk=9,
-                 score_voting=True,
-                 covariance_type='diag',
-                 **kwargs):
-        # topk used in paa reassign process
-        self.topk = topk
-        self.with_score_voting = score_voting
-        self.covariance_type = covariance_type
-        super(PAAHead, self).__init__(*args, **kwargs)
-
-    @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds'))
-    def loss(self,
-             cls_scores,
-             bbox_preds,
-             iou_preds,
-             gt_bboxes,
-             gt_labels,
-             img_metas,
-             gt_bboxes_ignore=None):
-        """Compute losses of the head.
-
-        Args:
-            cls_scores (list[Tensor]): Box scores for each scale level
-                Has shape (N, num_anchors * num_classes, H, W)
-            bbox_preds (list[Tensor]): Box energies / deltas for each scale
-                level with shape (N, num_anchors * 4, H, W)
-            iou_preds (list[Tensor]): iou_preds for each scale
-                level with shape (N, num_anchors * 1, H, W)
-            gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
-                shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): class indices corresponding to each box
-            img_metas (list[dict]): Meta information of each image, e.g.,
-                image size, scaling factor, etc.
-            gt_bboxes_ignore (list[Tensor] | None): Specify which bounding
-                boxes can be ignored when computing the loss.
-
-        Returns:
-            dict[str, Tensor]: A dictionary of loss gmm_assignment.
-        """
-
-        featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-        assert len(featmap_sizes) == self.anchor_generator.num_levels
-
-        device = cls_scores[0].device
-        anchor_list, valid_flag_list = self.get_anchors(
-            featmap_sizes, img_metas, device=device)
-        label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-        cls_reg_targets = self.get_targets(
-            anchor_list,
-            valid_flag_list,
-            gt_bboxes,
-            img_metas,
-            gt_bboxes_ignore_list=gt_bboxes_ignore,
-            gt_labels_list=gt_labels,
-            label_channels=label_channels,
-        )
-        (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds,
-         pos_gt_index) = cls_reg_targets
-        cls_scores = levels_to_images(cls_scores)
-        cls_scores = [
-            item.reshape(-1, self.cls_out_channels) for item in cls_scores
-        ]
-        bbox_preds = levels_to_images(bbox_preds)
-        bbox_preds = [item.reshape(-1, 4) for item in bbox_preds]
-        iou_preds = levels_to_images(iou_preds)
-        iou_preds = [item.reshape(-1, 1) for item in iou_preds]
-        pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list,
-                                       cls_scores, bbox_preds, labels,
-                                       labels_weight, bboxes_target,
-                                       bboxes_weight, pos_inds)
-
-        with torch.no_grad():
-            reassign_labels, reassign_label_weight, \
-                reassign_bbox_weights, num_pos = multi_apply(
-                    self.paa_reassign,
-                    pos_losses_list,
-                    labels,
-                    labels_weight,
-                    bboxes_weight,
-                    pos_inds,
-                    pos_gt_index,
-                    anchor_list)
-            num_pos = sum(num_pos)
-        # convert all tensor list to a flatten tensor
-        cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1))
-        bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1))
-        iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1))
-        labels = torch.cat(reassign_labels, 0).view(-1)
-        flatten_anchors = torch.cat(
-            [torch.cat(item, 0) for item in anchor_list])
-        labels_weight = torch.cat(reassign_label_weight, 0).view(-1)
-        bboxes_target = torch.cat(bboxes_target,
-                                  0).view(-1, bboxes_target[0].size(-1))
-
-        pos_inds_flatten = ((labels >= 0)
-                            &
-                            (labels < self.num_classes)).nonzero().reshape(-1)
-
-        losses_cls = self.loss_cls(
-            cls_scores,
-            labels,
-            labels_weight,
-            avg_factor=max(num_pos, len(img_metas)))  # avoid num_pos=0
-        if num_pos:
-            pos_bbox_pred = self.bbox_coder.decode(
-                flatten_anchors[pos_inds_flatten],
-                bbox_preds[pos_inds_flatten])
-            pos_bbox_target = bboxes_target[pos_inds_flatten]
-            iou_target = bbox_overlaps(
-                pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True)
-            losses_iou = self.loss_centerness(
-                iou_preds[pos_inds_flatten],
-                iou_target.unsqueeze(-1),
-                avg_factor=num_pos)
-            losses_bbox = self.loss_bbox(
-                pos_bbox_pred,
-                pos_bbox_target,
-                iou_target.clamp(min=EPS),
-                avg_factor=iou_target.sum())
-        else:
-            losses_iou = iou_preds.sum() * 0
-            losses_bbox = bbox_preds.sum() * 0
-
-        return dict(
-            loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou)
-
-    def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight,
-                     bbox_target, bbox_weight, pos_inds):
-        """Calculate loss of all potential positive samples obtained from the
-        first match process.
-
-        Args:
-            anchors (list[Tensor]): Anchors of each scale.
-            cls_score (Tensor): Box scores of single image with shape
-                (num_anchors, num_classes)
-            bbox_pred (Tensor): Box energies / deltas of single image
-                with shape (num_anchors, 4)
-            label (Tensor): classification target of each anchor with
-                shape (num_anchors,)
-            label_weight (Tensor): Classification loss weight of each
-                anchor with shape (num_anchors).
-            bbox_target (dict): Regression target of each anchor with
-                shape (num_anchors, 4).
-            bbox_weight (Tensor): Bbox weight of each anchor with shape
-                (num_anchors, 4).
-            pos_inds (Tensor): Index of all positive samples got from
-                first assign process.
-
-        Returns:
-            Tensor: Losses of all positive samples in single image.
-        """
-        if not len(pos_inds):
-            return cls_score.new([]),
-        anchors_all_level = torch.cat(anchors, 0)
-        pos_scores = cls_score[pos_inds]
-        pos_bbox_pred = bbox_pred[pos_inds]
-        pos_label = label[pos_inds]
-        pos_label_weight = label_weight[pos_inds]
-        pos_bbox_target = bbox_target[pos_inds]
-        pos_bbox_weight = bbox_weight[pos_inds]
-        pos_anchors = anchors_all_level[pos_inds]
-        pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred)
-
-        # to keep loss dimension
-        loss_cls = self.loss_cls(
-            pos_scores,
-            pos_label,
-            pos_label_weight,
-            avg_factor=self.loss_cls.loss_weight,
-            reduction_override='none')
-
-        loss_bbox = self.loss_bbox(
-            pos_bbox_pred,
-            pos_bbox_target,
-            pos_bbox_weight,
-            avg_factor=self.loss_cls.loss_weight,
-            reduction_override='none')
-
-        loss_cls = loss_cls.sum(-1)
-        pos_loss = loss_bbox + loss_cls
-        return pos_loss,
-
-    def paa_reassign(self, pos_losses, label, label_weight, bbox_weight,
-                     pos_inds, pos_gt_inds, anchors):
-        """Fit loss to GMM distribution and separate positive, ignore, and
-        negative samples again with the GMM model.
-
-        Args:
-            pos_losses (Tensor): Losses of all positive samples in
-                single image.
-            label (Tensor): classification target of each anchor with
-                shape (num_anchors,)
-            label_weight (Tensor): Classification loss weight of each
-                anchor with shape (num_anchors).
-            bbox_weight (Tensor): Bbox weight of each anchor with shape
-                (num_anchors, 4).
-            pos_inds (Tensor): Index of all positive samples got from
-                first assign process.
-            pos_gt_inds (Tensor): Gt_index of all positive samples got
-                from first assign process.
-            anchors (list[Tensor]): Anchors of each scale.
-
-        Returns:
-            tuple: Usually returns a tuple containing learning targets.
-
-                - label (Tensor): classification target of each anchor after
-                  paa assign, with shape (num_anchors,)
-                - label_weight (Tensor): Classification loss weight of each
-                  anchor after paa assign, with shape (num_anchors).
-                - bbox_weight (Tensor): Bbox weight of each anchor with shape
-                  (num_anchors, 4).
-                - num_pos (int): The number of positive samples after paa
-                  assign.
-        """
-        if not len(pos_inds):
-            return label, label_weight, bbox_weight, 0
-        label = label.clone()
-        label_weight = label_weight.clone()
-        bbox_weight = bbox_weight.clone()
-        num_gt = pos_gt_inds.max() + 1
-        num_level = len(anchors)
-        num_anchors_each_level = [item.size(0) for item in anchors]
-        num_anchors_each_level.insert(0, 0)
-        inds_level_interval = np.cumsum(num_anchors_each_level)
-        pos_level_mask = []
-        for i in range(num_level):
-            mask = (pos_inds >= inds_level_interval[i]) & (
-                pos_inds < inds_level_interval[i + 1])
-            pos_level_mask.append(mask)
-        pos_inds_after_paa = [label.new_tensor([])]
-        ignore_inds_after_paa = [label.new_tensor([])]
-        for gt_ind in range(num_gt):
-            pos_inds_gmm = []
-            pos_loss_gmm = []
-            gt_mask = pos_gt_inds == gt_ind
-            for level in range(num_level):
-                level_mask = pos_level_mask[level]
-                level_gt_mask = level_mask & gt_mask
-                value, topk_inds = pos_losses[level_gt_mask].topk(
-                    min(level_gt_mask.sum(), self.topk), largest=False)
-                pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds])
-                pos_loss_gmm.append(value)
-            pos_inds_gmm = torch.cat(pos_inds_gmm)
-            pos_loss_gmm = torch.cat(pos_loss_gmm)
-            # a GMM needs at least two samples to fit
-            if len(pos_inds_gmm) < 2:
-                continue
-            device = pos_inds_gmm.device
-            pos_loss_gmm, sort_inds = pos_loss_gmm.sort()
-            pos_inds_gmm = pos_inds_gmm[sort_inds]
-            pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy()
-            min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max()
-            means_init = np.array([min_loss, max_loss]).reshape(2, 1)
-            weights_init = np.array([0.5, 0.5])
-            precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1)  # full
-            if self.covariance_type == 'spherical':
-                precisions_init = precisions_init.reshape(2)
-            elif self.covariance_type == 'diag':
-                precisions_init = precisions_init.reshape(2, 1)
-            elif self.covariance_type == 'tied':
-                precisions_init = np.array([[1.0]])
-            if skm is None:
-                raise ImportError('Please run "pip install sklearn" '
-                                  'to install sklearn first.')
-            gmm = skm.GaussianMixture(
-                2,
-                weights_init=weights_init,
-                means_init=means_init,
-                precisions_init=precisions_init,
-                covariance_type=self.covariance_type)
-            gmm.fit(pos_loss_gmm)
-            gmm_assignment = gmm.predict(pos_loss_gmm)
-            scores = gmm.score_samples(pos_loss_gmm)
-            gmm_assignment = torch.from_numpy(gmm_assignment).to(device)
-            scores = torch.from_numpy(scores).to(device)
-
-            pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme(
-                gmm_assignment, scores, pos_inds_gmm)
-            pos_inds_after_paa.append(pos_inds_temp)
-            ignore_inds_after_paa.append(ignore_inds_temp)
-
-        pos_inds_after_paa = torch.cat(pos_inds_after_paa)
-        ignore_inds_after_paa = torch.cat(ignore_inds_after_paa)
-        reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1)
-        reassign_ids = pos_inds[reassign_mask]
-        label[reassign_ids] = self.num_classes
-        label_weight[ignore_inds_after_paa] = 0
-        bbox_weight[reassign_ids] = 0
-        num_pos = len(pos_inds_after_paa)
-        return label, label_weight, bbox_weight, num_pos
-
-    def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm):
-        """A general separation scheme for gmm model.
-
-        It separates a GMM distribution of candidate samples into three
-        parts, 0, 1, and uncertain areas, and you can implement other
-        separation schemes by rewriting this function.
-
-        Args:
-            gmm_assignment (Tensor): The prediction of GMM which is of shape
-                (num_samples,). The 0/1 value indicates the distribution
-                that each sample comes from.
-            scores (Tensor): The probability of a sample coming from the
-                fitted GMM distribution. The tensor is of shape (num_samples,).
-            pos_inds_gmm (Tensor): All the indexes of samples which are used
-                to fit the GMM model. The tensor is of shape (num_samples,)
-
-        Returns:
-            tuple[Tensor]: The indices of positive and ignored samples.
-
-                - pos_inds_temp (Tensor): Indices of positive samples.
-                - ignore_inds_temp (Tensor): Indices of ignore samples.
-        """
-        # The implementation is (c) in Fig.3 in the original paper instead of
-        # (b). You can refer to issues such as
-        # https://github.com/kkhoot/PAA/issues/8 and
-        # https://github.com/kkhoot/PAA/issues/9.
-        fgs = gmm_assignment == 0
-        pos_inds_temp = fgs.new_tensor([], dtype=torch.long)
-        ignore_inds_temp = fgs.new_tensor([], dtype=torch.long)
-        if fgs.nonzero().numel():
-            _, pos_thr_ind = scores[fgs].topk(1)
-            pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1]
-            ignore_inds_temp = pos_inds_gmm.new_tensor([])
-        return pos_inds_temp, ignore_inds_temp
-
-    def get_targets(
-        self,
-        anchor_list,
-        valid_flag_list,
-        gt_bboxes_list,
-        img_metas,
-        gt_bboxes_ignore_list=None,
-        gt_labels_list=None,
-        label_channels=1,
-        unmap_outputs=True,
-    ):
-        """Get targets for PAA head.
-
-        This method is almost the same as `AnchorHead.get_targets()`. We
-        directly return the results from `_get_targets_single` instead of
-        mapping them to levels by the `images_to_levels` function.
-
-        Args:
-            anchor_list (list[list[Tensor]]): Multi level anchors of each
-                image. The outer list indicates images, and the inner list
-                corresponds to feature levels of the image. Each element of
-                the inner list is a tensor of shape (num_anchors, 4).
-            valid_flag_list (list[list[Tensor]]): Multi level valid flags of
-                each image. The outer list indicates images, and the inner list
-                corresponds to feature levels of the image. Each element of
-                the inner list is a tensor of shape (num_anchors, )
-            gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
-            img_metas (list[dict]): Meta info of each image.
-            gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be
-                ignored.
-            gt_labels_list (list[Tensor]): Ground truth labels of each box.
-            label_channels (int): Channel of label.
-            unmap_outputs (bool): Whether to map outputs back to the original
-                set of anchors.
-
-        Returns:
-            tuple: Usually returns a tuple containing learning targets.
-
-                - labels (list[Tensor]): Labels of all anchors, each with
-                  shape (num_anchors,).
-                - label_weights (list[Tensor]): Label weights of all anchors,
-                  each with shape (num_anchors,).
-                - bbox_targets (list[Tensor]): BBox targets of all anchors,
-                  each with shape (num_anchors, 4).
-                - bbox_weights (list[Tensor]): BBox weights of all anchors,
-                  each with shape (num_anchors, 4).
-                - pos_inds (list[Tensor]): Contains all index of positive
-                  sample in all anchor.
-                - gt_inds (list[Tensor]): Contains all gt_index of positive
-                  sample in all anchor.
-        """
-
-        num_imgs = len(img_metas)
-        assert len(anchor_list) == len(valid_flag_list) == num_imgs
-        concat_anchor_list = []
-        concat_valid_flag_list = []
-        for i in range(num_imgs):
-            assert len(anchor_list[i]) == len(valid_flag_list[i])
-            concat_anchor_list.append(torch.cat(anchor_list[i]))
-            concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
-
-        # compute targets for each image
-        if gt_bboxes_ignore_list is None:
-            gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
-        if gt_labels_list is None:
-            gt_labels_list = [None for _ in range(num_imgs)]
-        results = multi_apply(
-            self._get_targets_single,
-            concat_anchor_list,
-            concat_valid_flag_list,
-            gt_bboxes_list,
-            gt_bboxes_ignore_list,
-            gt_labels_list,
-            img_metas,
-            label_channels=label_channels,
-            unmap_outputs=unmap_outputs)
-
-        (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds,
-         valid_neg_inds, sampling_result) = results
-
-        # Due to valid flag of anchors, we have to calculate the real pos_inds
-        # in origin anchor set.
-        pos_inds = []
-        for i, single_labels in enumerate(labels):
-            pos_mask = (0 <= single_labels) & (
-                single_labels < self.num_classes)
-            pos_inds.append(pos_mask.nonzero().view(-1))
-
-        gt_inds = [item.pos_assigned_gt_inds for item in sampling_result]
-        return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
-                gt_inds)
-
-    def _get_targets_single(self,
-                            flat_anchors,
-                            valid_flags,
-                            gt_bboxes,
-                            gt_bboxes_ignore,
-                            gt_labels,
-                            img_meta,
-                            label_channels=1,
-                            unmap_outputs=True):
-        """Compute regression and classification targets for anchors in a
-        single image.
-
-        This method is the same as `AnchorHead._get_targets_single()`.
-        """
-        assert unmap_outputs, 'We must map outputs back to the original ' \
-                              'set of anchors in PAAhead'
-        return super(ATSSHead, self)._get_targets_single(
-            flat_anchors,
-            valid_flags,
-            gt_bboxes,
-            gt_bboxes_ignore,
-            gt_labels,
-            img_meta,
-            label_channels=1,
-            unmap_outputs=True)
-
-    def _get_bboxes(self,
-                    cls_scores,
-                    bbox_preds,
-                    iou_preds,
-                    mlvl_anchors,
-                    img_shapes,
-                    scale_factors,
-                    cfg,
-                    rescale=False,
-                    with_nms=True):
-        """Transform outputs for a single batch item into labeled boxes.
-
-        This method is almost the same as `ATSSHead._get_bboxes()`.
-        We use sqrt(iou_preds * cls_scores) in the NMS process instead of just
-        cls_scores. Besides, score voting is used when ``score_voting``
-        is set to True.
-        """
-        assert with_nms, 'PAA only supports "with_nms=True" now'
-        assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
-        batch_size = cls_scores[0].shape[0]
-
-        mlvl_bboxes = []
-        mlvl_scores = []
-        mlvl_iou_preds = []
-        for cls_score, bbox_pred, iou_preds, anchors in zip(
-                cls_scores, bbox_preds, iou_preds, mlvl_anchors):
-            assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
-
-            scores = cls_score.permute(0, 2, 3, 1).reshape(
-                batch_size, -1, self.cls_out_channels).sigmoid()
-            bbox_pred = bbox_pred.permute(0, 2, 3,
-                                          1).reshape(batch_size, -1, 4)
-            iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size,
-                                                              -1).sigmoid()
-
-            nms_pre = cfg.get('nms_pre', -1)
-            if nms_pre > 0 and scores.shape[1] > nms_pre:
-                max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1)
-                _, topk_inds = max_scores.topk(nms_pre)
-                batch_inds = torch.arange(batch_size).view(
-                    -1, 1).expand_as(topk_inds).long()
-                anchors = anchors[topk_inds, :]
-                bbox_pred = bbox_pred[batch_inds, topk_inds, :]
-                scores = scores[batch_inds, topk_inds, :]
-                iou_preds = iou_preds[batch_inds, topk_inds]
-            else:
-                anchors = anchors.expand_as(bbox_pred)
-
-            bboxes = self.bbox_coder.decode(
-                anchors, bbox_pred, max_shape=img_shapes)
-            mlvl_bboxes.append(bboxes)
-            mlvl_scores.append(scores)
-            mlvl_iou_preds.append(iou_preds)
-
-        batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
-        if rescale:
-            batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
-                scale_factors).unsqueeze(1)
-        batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
-        # Add a dummy background class to the backend when using sigmoid
-        # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
-        # BG cat_id: num_class
-        padding = batch_mlvl_scores.new_zeros(batch_size,
-                                              batch_mlvl_scores.shape[1], 1)
-        batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
-        batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1)
-        batch_mlvl_nms_scores = (batch_mlvl_scores *
-                                 batch_mlvl_iou_preds[..., None]).sqrt()
-
-        det_results = []
-        for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
-                                              batch_mlvl_nms_scores):
-            det_bbox, det_label = multiclass_nms(
-                mlvl_bboxes,
-                mlvl_scores,
-                cfg.score_thr,
-                cfg.nms,
-                cfg.max_per_img,
-                score_factors=None)
-            if self.with_score_voting and len(det_bbox) > 0:
-                det_bbox, det_label = self.score_voting(
-                    det_bbox, det_label, mlvl_bboxes, mlvl_scores,
-                    cfg.score_thr)
-            det_results.append(tuple([det_bbox, det_label]))
-
-        return det_results
-
-    def score_voting(self, det_bboxes, det_labels, mlvl_bboxes,
-                     mlvl_nms_scores, score_thr):
-        """Implementation of the score voting method, which works on each
-        remaining box after the NMS procedure.
-
-        Args:
-            det_bboxes (Tensor): Remaining boxes after NMS procedure,
-                with shape (k, 5), each dimension means
-                (x1, y1, x2, y2, score).
-            det_labels (Tensor): The label of remaining boxes, with shape
-                (k, 1). Labels are 0-based.
-            mlvl_bboxes (Tensor): All boxes before the NMS procedure,
-                with shape (num_anchors, 4).
-            mlvl_nms_scores (Tensor): The scores of all boxes which is used
-                in the NMS procedure, with shape (num_anchors, num_class)
-            score_thr (float): The score threshold of bboxes.
-
-        Returns:
-            tuple: Usually returns a tuple containing voting results.
-
-                - det_bboxes_voted (Tensor): Remaining boxes after
-                  score voting procedure, with shape (k, 5), each
-                  dimension means (x1, y1, x2, y2, score).
-                - det_labels_voted (Tensor): Label of remaining bboxes
-                  after voting, with shape (num_anchors,).
-        """
-        candidate_mask = mlvl_nms_scores > score_thr
-        candidate_mask_nonzeros = candidate_mask.nonzero()
-        candidate_inds = candidate_mask_nonzeros[:, 0]
-        candidate_labels = candidate_mask_nonzeros[:, 1]
-        candidate_bboxes = mlvl_bboxes[candidate_inds]
-        candidate_scores = mlvl_nms_scores[candidate_mask]
-        det_bboxes_voted = []
-        det_labels_voted = []
-        for cls in range(self.cls_out_channels):
-            candidate_cls_mask = candidate_labels == cls
-            if not candidate_cls_mask.any():
-                continue
-            candidate_cls_scores = candidate_scores[candidate_cls_mask]
-            candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask]
-            det_cls_mask = det_labels == cls
-            det_cls_bboxes = det_bboxes[det_cls_mask].view(
-                -1, det_bboxes.size(-1))
-            det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4],
-                                               candidate_cls_bboxes)
-            for det_ind in range(len(det_cls_bboxes)):
-                single_det_ious = det_candidate_ious[det_ind]
-                pos_ious_mask = single_det_ious > 0.01
-                pos_ious = single_det_ious[pos_ious_mask]
-                pos_bboxes = candidate_cls_bboxes[pos_ious_mask]
-                pos_scores = candidate_cls_scores[pos_ious_mask]
-                pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) *
-                       pos_scores)[:, None]
-                voted_box = torch.sum(
-                    pis * pos_bboxes, dim=0) / torch.sum(
-                        pis, dim=0)
-                voted_score = det_cls_bboxes[det_ind][-1:][None, :]
-                det_bboxes_voted.append(
-                    torch.cat((voted_box[None, :], voted_score), dim=1))
-                det_labels_voted.append(cls)
-
-        det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0)
-        det_labels_voted = det_labels.new_tensor(det_labels_voted)
-        return det_bboxes_voted, det_labels_voted
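
A standalone sketch of the GMM split at the heart of paa_reassign above, assuming scikit-learn is installed (the losses array is synthetic, for illustration only):

# Hedged sketch: fit a 2-component GMM to per-anchor losses for one GT,
# mirroring the initialization used by PAAHead with covariance_type='diag'.
import numpy as np
import sklearn.mixture as skm

losses = np.sort(np.random.rand(20)).reshape(-1, 1)  # ascending 1-D losses
means_init = np.array([losses.min(), losses.max()]).reshape(2, 1)
gmm = skm.GaussianMixture(
    2,
    weights_init=np.array([0.5, 0.5]),
    means_init=means_init,
    precisions_init=np.ones((2, 1)),  # (n_components, n_features) for 'diag'
    covariance_type="diag",
)
gmm.fit(losses)
assignment = gmm.predict(losses)     # component 0 = low-loss (positive) cluster
scores = gmm.score_samples(losses)   # used to pick the positive/ignore boundary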
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py DELETED
@@ -1,27 +0,0 @@
-from mmdet.models.builder import HEADS
-from mmdet.models.utils import ResLayer, SimplifiedBasicBlock
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class SCNetMaskHead(FCNMaskHead):
-    """Mask head for `SCNet <https://arxiv.org/abs/2012.10150>`_.
-
-    Args:
-        conv_to_res (bool, optional): if True, change the conv layers to
-            ``SimplifiedBasicBlock``.
-    """
-
-    def __init__(self, conv_to_res=True, **kwargs):
-        super(SCNetMaskHead, self).__init__(**kwargs)
-        self.conv_to_res = conv_to_res
-        if conv_to_res:
-            assert self.conv_kernel_size == 3
-            self.num_res_blocks = self.num_convs // 2
-            self.convs = ResLayer(
-                SimplifiedBasicBlock,
-                self.in_channels,
-                self.conv_out_channels,
-                self.num_res_blocks,
-                conv_cfg=self.conv_cfg,
-                norm_cfg=self.norm_cfg)
spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
-_base_ = [
-    '../_base_/models/apcnet_r50-d8.py',
-    '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-    '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
-    decode_head=dict(align_corners=True),
-    auxiliary_head=dict(align_corners=True),
-    test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
spaces/Armored-Atom/Image-To-Motion/app.py DELETED
@@ -1,128 +0,0 @@
-import gradio as gr
-import os
-import shutil
-import torch
-from PIL import Image
-import argparse
-import pathlib
-
-os.system("git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model")
-os.chdir("Thin-Plate-Spline-Motion-Model")
-os.system("mkdir checkpoints")
-os.system("wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar")
-
-
-title = "# Thin-Plate Spline Motion Model for Image Animation"
-DESCRIPTION = '''### Gradio demo for <b>Thin-Plate Spline Motion Model for Image Animation</b>, CVPR 2022. <a href='https://arxiv.org/abs/2203.14367'>[Paper]</a><a href='https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model'>[Github Code]</a>
-
-<img id="overview" alt="overview" src="https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model/raw/main/assets/vox.gif" />
-'''
-FOOTER = '<img id="visitor-badge" alt="visitor badge" src="https://visitor-badge.glitch.me/badge?page_id=gradio-blocks.Image-Animation-using-Thin-Plate-Spline-Motion-Model" />'
-
-
-def get_style_image_path(style_name: str) -> str:
-    base_path = 'assets'
-    filenames = {
-        'source': 'source.png',
-        'driving': 'driving.mp4',
-    }
-    return f'{base_path}/{filenames[style_name]}'
-
-
-def get_style_image_markdown_text(style_name: str) -> str:
-    url = get_style_image_path(style_name)
-    return f'<img id="style-image" src="{url}" alt="style image">'
-
-
-def update_style_image(style_name: str) -> dict:
-    text = get_style_image_markdown_text(style_name)
-    return gr.Markdown.update(value=text)
-
-
-def set_example_image(example: list) -> dict:
-    return gr.Image.update(value=example[0])
-
-
-def set_example_video(example: list) -> dict:
-    return gr.Video.update(value=example[0])
-
-
-def inference(img, vid):
-    if not os.path.exists('temp'):
-        os.system('mkdir temp')
-
-    img.save("temp/image.jpg", "JPEG")
-    os.system(f"python demo.py --config config/vox-256.yaml --checkpoint ./checkpoints/vox.pth.tar --source_image 'temp/image.jpg' --driving_video {vid} --result_video './temp/result.mp4' --cpu")
-    return './temp/result.mp4'
-
-
-def main():
-    with gr.Blocks(theme="huggingface", css='style.css') as demo:
-        gr.Markdown(title)
-        gr.Markdown(DESCRIPTION)
-
-        with gr.Box():
-            gr.Markdown('''## Step 1 (Provide Input Face Image)
-- Drop an image containing a face to the **Input Image**.
-- If there are multiple faces in the image, use the Edit button in the upper right corner and crop the input image beforehand.
-''')
-            with gr.Row():
-                with gr.Column():
-                    with gr.Row():
-                        input_image = gr.Image(label='Input Image',
-                                               type="pil")
-
-                    with gr.Row():
-                        paths = sorted(pathlib.Path('assets').glob('*.png'))
-                        example_images = gr.Dataset(components=[input_image],
-                                                    samples=[[path.as_posix()]
-                                                             for path in paths])
-
-        with gr.Box():
-            gr.Markdown('''## Step 2 (Select Driving Video)
-- Select a **Driving Video** for the face image animation.
-''')
-            with gr.Row():
-                with gr.Column():
-                    with gr.Row():
-                        driving_video = gr.Video(label='Driving Video',
-                                                 format="mp4")
-
-                    with gr.Row():
-                        paths = sorted(pathlib.Path('assets').glob('*.mp4'))
-                        example_video = gr.Dataset(components=[driving_video],
-                                                   samples=[[path.as_posix()]
-                                                            for path in paths])
-
-        with gr.Box():
-            gr.Markdown('''## Step 3 (Generate Animated Image based on the Video)
-- Hit the **Generate** button. (Note: As it runs on the CPU, it takes ~3 minutes to generate the final result.)
-''')
-            with gr.Row():
-                with gr.Column():
-                    with gr.Row():
-                        generate_button = gr.Button('Generate')
-
-                with gr.Column():
-                    result = gr.Video(type="file", label="Output")
-        gr.Markdown(FOOTER)
-        generate_button.click(fn=inference,
-                              inputs=[
-                                  input_image,
-                                  driving_video
-                              ],
-                              outputs=result)
-        example_images.click(fn=set_example_image,
-                             inputs=example_images,
-                             outputs=example_images.components)
-        example_video.click(fn=set_example_video,
-                            inputs=example_video,
-                            outputs=example_video.components)
-
-    demo.launch(
-        enable_queue=True,
-        debug=True
-    )
-
-
-if __name__ == '__main__':
-    main()
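
One design note on the inference function above: it shells out with os.system and an f-string, which breaks if the video path contains spaces or quotes. A hedged alternative sketch using subprocess.run with an argv list (same script and flags as in the file; `vid` as defined there):

# Hedged alternative to the os.system call above; argv form avoids shell quoting.
import subprocess

subprocess.run(
    [
        "python", "demo.py",
        "--config", "config/vox-256.yaml",
        "--checkpoint", "./checkpoints/vox.pth.tar",
        "--source_image", "temp/image.jpg",
        "--driving_video", vid,
        "--result_video", "./temp/result.mp4",
        "--cpu",
    ],
    check=True,  # raise if the generation script fails instead of silently continuing
)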
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/spinners.py DELETED
@@ -1,159 +0,0 @@
-import contextlib
-import itertools
-import logging
-import sys
-import time
-from typing import IO, Generator, Optional
-
-from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.logging import get_indentation
-
-logger = logging.getLogger(__name__)
-
-
-class SpinnerInterface:
-    def spin(self) -> None:
-        raise NotImplementedError()
-
-    def finish(self, final_status: str) -> None:
-        raise NotImplementedError()
-
-
-class InteractiveSpinner(SpinnerInterface):
-    def __init__(
-        self,
-        message: str,
-        file: Optional[IO[str]] = None,
-        spin_chars: str = "-\\|/",
-        # Empirically, 8 updates/second looks nice
-        min_update_interval_seconds: float = 0.125,
-    ):
-        self._message = message
-        if file is None:
-            file = sys.stdout
-        self._file = file
-        self._rate_limiter = RateLimiter(min_update_interval_seconds)
-        self._finished = False
-
-        self._spin_cycle = itertools.cycle(spin_chars)
-
-        self._file.write(" " * get_indentation() + self._message + " ... ")
-        self._width = 0
-
-    def _write(self, status: str) -> None:
-        assert not self._finished
-        # Erase what we wrote before by backspacing to the beginning, writing
-        # spaces to overwrite the old text, and then backspacing again
-        backup = "\b" * self._width
-        self._file.write(backup + " " * self._width + backup)
-        # Now we have a blank slate to add our status
-        self._file.write(status)
-        self._width = len(status)
-        self._file.flush()
-        self._rate_limiter.reset()
-
-    def spin(self) -> None:
-        if self._finished:
-            return
-        if not self._rate_limiter.ready():
-            return
-        self._write(next(self._spin_cycle))
-
-    def finish(self, final_status: str) -> None:
-        if self._finished:
-            return
-        self._write(final_status)
-        self._file.write("\n")
-        self._file.flush()
-        self._finished = True
-
-
-# Used for dumb terminals, non-interactive installs (no tty), etc.
-# We still print updates occasionally (once every 60 seconds by default) to
-# act as a keep-alive for systems like Travis-CI that take lack-of-output as
-# an indication that a task has frozen.
-class NonInteractiveSpinner(SpinnerInterface):
-    def __init__(self, message: str, min_update_interval_seconds: float = 60.0) -> None:
-        self._message = message
-        self._finished = False
-        self._rate_limiter = RateLimiter(min_update_interval_seconds)
-        self._update("started")
-
-    def _update(self, status: str) -> None:
-        assert not self._finished
-        self._rate_limiter.reset()
-        logger.info("%s: %s", self._message, status)
-
-    def spin(self) -> None:
-        if self._finished:
-            return
-        if not self._rate_limiter.ready():
-            return
-        self._update("still running...")
-
-    def finish(self, final_status: str) -> None:
-        if self._finished:
-            return
-        self._update(f"finished with status '{final_status}'")
-        self._finished = True
-
-
-class RateLimiter:
-    def __init__(self, min_update_interval_seconds: float) -> None:
-        self._min_update_interval_seconds = min_update_interval_seconds
-        self._last_update: float = 0
-
-    def ready(self) -> bool:
-        now = time.time()
-        delta = now - self._last_update
-        return delta >= self._min_update_interval_seconds
-
-    def reset(self) -> None:
-        self._last_update = time.time()
-
-
-@contextlib.contextmanager
-def open_spinner(message: str) -> Generator[SpinnerInterface, None, None]:
-    # Interactive spinner goes directly to sys.stdout rather than being routed
-    # through the logging system, but it acts like it has level INFO,
-    # i.e. it's only displayed if we're at level INFO or better.
-    # Non-interactive spinner goes through the logging system, so it is always
-    # in sync with logging configuration.
-    if sys.stdout.isatty() and logger.getEffectiveLevel() <= logging.INFO:
-        spinner: SpinnerInterface = InteractiveSpinner(message)
-    else:
-        spinner = NonInteractiveSpinner(message)
-    try:
-        with hidden_cursor(sys.stdout):
-            yield spinner
-    except KeyboardInterrupt:
-        spinner.finish("canceled")
-        raise
-    except Exception:
-        spinner.finish("error")
-        raise
-    else:
-        spinner.finish("done")
-
-
-HIDE_CURSOR = "\x1b[?25l"
-SHOW_CURSOR = "\x1b[?25h"
-
-
-@contextlib.contextmanager
-def hidden_cursor(file: IO[str]) -> Generator[None, None, None]:
-    # The Windows terminal does not support the hide/show cursor ANSI codes,
-    # even via colorama. So don't even try.
-    if WINDOWS:
-        yield
-    # We don't want to clutter the output with control characters if we're
-    # writing to a file, or if the user is running with --quiet.
-    # See https://github.com/pypa/pip/issues/3418
-    elif not file.isatty() or logger.getEffectiveLevel() > logging.INFO:
-        yield
-    else:
-        file.write(HIDE_CURSOR)
-        try:
-            yield
-        finally:
-            file.write(SHOW_CURSOR)
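
A minimal usage sketch of the open_spinner context manager defined above (pip-internal API; do_some_work is a hypothetical placeholder for the caller's task):

# Hedged sketch: drive the spinner from a long-running loop.
from pip._internal.cli.spinners import open_spinner

with open_spinner("Building wheel") as spinner:
    for _ in range(1000):
        do_some_work()   # hypothetical unit of work
        spinner.spin()   # rate-limited: at most 8 redraws/s interactively
# on clean exit the spinner prints "done"; "canceled"/"error" on interrupt/exception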
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_clib.py DELETED
@@ -1,208 +0,0 @@
- """distutils.command.build_clib
-
- Implements the Distutils 'build_clib' command, to build a C/C++ library
- that is included in the module distribution and needed by an extension
- module."""
-
-
- # XXX this module has *lots* of code ripped-off quite transparently from
- # build_ext.py -- not surprisingly really, as the work required to build
- # a static library from a collection of C source files is not really all
- # that different from what's required to build a shared object file from
- # a collection of C source files. Nevertheless, I haven't done the
- # necessary refactoring to account for the overlap in code between the
- # two modules, mainly because a number of subtle details changed in the
- # cut 'n paste. Sigh.
-
- import os
- from distutils.core import Command
- from distutils.errors import DistutilsSetupError
- from distutils.sysconfig import customize_compiler
- from distutils import log
-
-
- def show_compilers():
-     from distutils.ccompiler import show_compilers
-
-     show_compilers()
-
-
- class build_clib(Command):
-
-     description = "build C/C++ libraries used by Python extensions"
-
-     user_options = [
-         ('build-clib=', 'b', "directory to build C/C++ libraries to"),
-         ('build-temp=', 't', "directory to put temporary build by-products"),
-         ('debug', 'g', "compile with debugging information"),
-         ('force', 'f', "forcibly build everything (ignore file timestamps)"),
-         ('compiler=', 'c', "specify the compiler type"),
-     ]
-
-     boolean_options = ['debug', 'force']
-
-     help_options = [
-         ('help-compiler', None, "list available compilers", show_compilers),
-     ]
-
-     def initialize_options(self):
-         self.build_clib = None
-         self.build_temp = None
-
-         # List of libraries to build
-         self.libraries = None
-
-         # Compilation options for all libraries
-         self.include_dirs = None
-         self.define = None
-         self.undef = None
-         self.debug = None
-         self.force = 0
-         self.compiler = None
-
-     def finalize_options(self):
-         # This might be confusing: both build-clib and build-temp default
-         # to build-temp as defined by the "build" command. This is because
-         # I think that C libraries are really just temporary build
-         # by-products, at least from the point of view of building Python
-         # extensions -- but I want to keep my options open.
-         self.set_undefined_options(
-             'build',
-             ('build_temp', 'build_clib'),
-             ('build_temp', 'build_temp'),
-             ('compiler', 'compiler'),
-             ('debug', 'debug'),
-             ('force', 'force'),
-         )
-
-         self.libraries = self.distribution.libraries
-         if self.libraries:
-             self.check_library_list(self.libraries)
-
-         if self.include_dirs is None:
-             self.include_dirs = self.distribution.include_dirs or []
-         if isinstance(self.include_dirs, str):
-             self.include_dirs = self.include_dirs.split(os.pathsep)
-
-         # XXX same as for build_ext -- what about 'self.define' and
-         # 'self.undef' ?
-
-     def run(self):
-         if not self.libraries:
-             return
-
-         # Yech -- this is cut 'n pasted from build_ext.py!
-         from distutils.ccompiler import new_compiler
-
-         self.compiler = new_compiler(
-             compiler=self.compiler, dry_run=self.dry_run, force=self.force
-         )
-         customize_compiler(self.compiler)
-
-         if self.include_dirs is not None:
-             self.compiler.set_include_dirs(self.include_dirs)
-         if self.define is not None:
-             # 'define' option is a list of (name, value) tuples
-             for (name, value) in self.define:
-                 self.compiler.define_macro(name, value)
-         if self.undef is not None:
-             for macro in self.undef:
-                 self.compiler.undefine_macro(macro)
-
-         self.build_libraries(self.libraries)
-
-     def check_library_list(self, libraries):
-         """Ensure that the list of libraries is valid.
-
-         `libraries` is presumably provided as a command option 'libraries'.
-         This method checks that it is a list of 2-tuples, where the tuples
-         are (library_name, build_info_dict).
-
-         Raise DistutilsSetupError if the structure is invalid anywhere;
-         just returns otherwise.
-         """
-         if not isinstance(libraries, list):
-             raise DistutilsSetupError("'libraries' option must be a list of tuples")
-
-         for lib in libraries:
-             if not isinstance(lib, tuple) or len(lib) != 2:
-                 raise DistutilsSetupError("each element of 'libraries' must be a 2-tuple")
-
-             name, build_info = lib
-
-             if not isinstance(name, str):
-                 raise DistutilsSetupError(
-                     "first element of each tuple in 'libraries' "
-                     "must be a string (the library name)"
-                 )
-
-             if '/' in name or (os.sep != '/' and os.sep in name):
-                 raise DistutilsSetupError(
-                     "bad library name '%s': "
-                     "may not contain directory separators" % lib[0]
-                 )
-
-             if not isinstance(build_info, dict):
-                 raise DistutilsSetupError(
-                     "second element of each tuple in 'libraries' "
-                     "must be a dictionary (build info)"
-                 )
-
-     def get_library_names(self):
-         # Assume the library list is valid -- 'check_library_list()' is
-         # called from 'finalize_options()', so it should be!
-         if not self.libraries:
-             return None
-
-         lib_names = []
-         for (lib_name, build_info) in self.libraries:
-             lib_names.append(lib_name)
-         return lib_names
-
-     def get_source_files(self):
-         self.check_library_list(self.libraries)
-         filenames = []
-         for (lib_name, build_info) in self.libraries:
-             sources = build_info.get('sources')
-             if sources is None or not isinstance(sources, (list, tuple)):
-                 raise DistutilsSetupError(
-                     "in 'libraries' option (library '%s'), "
-                     "'sources' must be present and must be "
-                     "a list of source filenames" % lib_name
-                 )
-
-             filenames.extend(sources)
-         return filenames
-
-     def build_libraries(self, libraries):
-         for (lib_name, build_info) in libraries:
-             sources = build_info.get('sources')
-             if sources is None or not isinstance(sources, (list, tuple)):
-                 raise DistutilsSetupError(
-                     "in 'libraries' option (library '%s'), "
-                     "'sources' must be present and must be "
-                     "a list of source filenames" % lib_name
-                 )
-             sources = list(sources)
-
-             log.info("building '%s' library", lib_name)
-
-             # First, compile the source code to object files in the library
-             # directory. (This should probably change to putting object
-             # files in a temporary build directory.)
-             macros = build_info.get('macros')
-             include_dirs = build_info.get('include_dirs')
-             objects = self.compiler.compile(
-                 sources,
-                 output_dir=self.build_temp,
-                 macros=macros,
-                 include_dirs=include_dirs,
-                 debug=self.debug,
-             )
-
-             # Now "link" the object files together into a static library.
-             # (On Unix at least, this isn't really linking -- it just
-             # builds an archive. Whatever.)
-             self.compiler.create_static_lib(
-                 objects, lib_name, output_dir=self.build_clib, debug=self.debug
-             )
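
For context, a hedged sketch of the `setup.py` shape that drives this command; the library name `spam` and its source files are illustrative only:

```python
from distutils.core import setup

setup(
    name="example",
    version="0.1",
    # Each entry is the (library_name, build_info_dict) 2-tuple that
    # check_library_list() validates; 'sources' is required, while
    # 'macros' and 'include_dirs' feed compile() in build_libraries().
    libraries=[
        (
            "spam",
            {
                "sources": ["spam.c"],
                "macros": [("NDEBUG", "1")],
                "include_dirs": ["include"],
            },
        ),
    ],
)
```

Running `python setup.py build_clib` then compiles `spam.c` and archives it as a static library under the build-clib directory.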
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/extend.md DELETED
@@ -1,141 +0,0 @@
- # Extend Detectron2's Defaults
-
- __Research is about doing things in new ways__.
- This brings a tension in how to create abstractions in code,
- which is a challenge for any research engineering project of a significant size:
-
- 1. On one hand, it needs to have very thin abstractions to allow for the possibility of doing
-    everything in new ways. It should be reasonably easy to break existing
-    abstractions and replace them with new ones.
-
- 2. On the other hand, such a project also needs reasonably high-level
-    abstractions, so that users can easily do things in standard ways,
-    without worrying too much about the details that only certain researchers care about.
-
- In detectron2, the following types of interfaces address this tension together:
-
- 1. Functions and classes that take a config (`cfg`) argument
-    created from a yaml file
-    (sometimes with a few extra arguments).
-
-    Such functions and classes implement
-    the "standard default" behavior: they read what they need from a given
-    config and do the "standard" thing.
-    Users only need to load an expert-made config and pass it around, without having to worry about
-    which arguments are used and what they all mean.
-
-    See [Yacs Configs](configs.md) for a detailed tutorial.
-
- 2. Functions and classes that have well-defined explicit arguments.
-
-    Each of these is a small building block of the entire system.
-    They require users' expertise to understand what each argument should be,
-    and require more effort to stitch together into a larger system.
-    But they can be stitched together in more flexible ways.
-
-    When you need to implement something not supported by the "standard defaults"
-    included in detectron2, these well-defined components can be reused.
-
-    The [LazyConfig system](lazyconfigs.md) relies on such functions and classes.
-
- 3. A few functions and classes are implemented with the
-    [@configurable](../modules/config.html#detectron2.config.configurable)
-    decorator - they can be called with either a config, or with explicit arguments, or a mixture of both.
-    Their explicit argument interfaces are currently experimental.
-
-    As an example, a Mask R-CNN model can be built in the following ways:
-
-    1. Config-only:
-       ```python
-       # load proper yaml config file, then
-       model = build_model(cfg)
-       ```
-
-    2. Mixture of config and additional argument overrides:
-       ```python
-       model = GeneralizedRCNN(
-           cfg,
-           roi_heads=StandardROIHeads(cfg, batch_size_per_image=666),
-           pixel_std=[57.0, 57.0, 57.0])
-       ```
-
-    3. Full explicit arguments:
-    <details>
-    <summary>
-    (click to expand)
-    </summary>
-
-    ```python
-    model = GeneralizedRCNN(
-        backbone=FPN(
-            ResNet(
-                BasicStem(3, 64, norm="FrozenBN"),
-                ResNet.make_default_stages(50, stride_in_1x1=True, norm="FrozenBN"),
-                out_features=["res2", "res3", "res4", "res5"],
-            ).freeze(2),
-            ["res2", "res3", "res4", "res5"],
-            256,
-            top_block=LastLevelMaxPool(),
-        ),
-        proposal_generator=RPN(
-            in_features=["p2", "p3", "p4", "p5", "p6"],
-            head=StandardRPNHead(in_channels=256, num_anchors=3),
-            anchor_generator=DefaultAnchorGenerator(
-                sizes=[[32], [64], [128], [256], [512]],
-                aspect_ratios=[0.5, 1.0, 2.0],
-                strides=[4, 8, 16, 32, 64],
-                offset=0.0,
-            ),
-            anchor_matcher=Matcher([0.3, 0.7], [0, -1, 1], allow_low_quality_matches=True),
-            box2box_transform=Box2BoxTransform([1.0, 1.0, 1.0, 1.0]),
-            batch_size_per_image=256,
-            positive_fraction=0.5,
-            pre_nms_topk=(2000, 1000),
-            post_nms_topk=(1000, 1000),
-            nms_thresh=0.7,
-        ),
-        roi_heads=StandardROIHeads(
-            num_classes=80,
-            batch_size_per_image=512,
-            positive_fraction=0.25,
-            proposal_matcher=Matcher([0.5], [0, 1], allow_low_quality_matches=False),
-            box_in_features=["p2", "p3", "p4", "p5"],
-            box_pooler=ROIPooler(7, (1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), 0, "ROIAlignV2"),
-            box_head=FastRCNNConvFCHead(
-                ShapeSpec(channels=256, height=7, width=7), conv_dims=[], fc_dims=[1024, 1024]
-            ),
-            box_predictor=FastRCNNOutputLayers(
-                ShapeSpec(channels=1024),
-                test_score_thresh=0.05,
-                box2box_transform=Box2BoxTransform((10, 10, 5, 5)),
-                num_classes=80,
-            ),
-            mask_in_features=["p2", "p3", "p4", "p5"],
-            mask_pooler=ROIPooler(14, (1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), 0, "ROIAlignV2"),
-            mask_head=MaskRCNNConvUpsampleHead(
-                ShapeSpec(channels=256, width=14, height=14),
-                num_classes=80,
-                conv_dims=[256, 256, 256, 256, 256],
-            ),
-        ),
-        pixel_mean=[103.530, 116.280, 123.675],
-        pixel_std=[1.0, 1.0, 1.0],
-        input_format="BGR",
-    )
-    ```
-
-    </details>
-
-
- If you only need the standard behavior, the [Beginner's Tutorial](./getting_started.md)
- should suffice. If you need to extend detectron2 to your own needs,
- see the following tutorials for more details:
-
- * Detectron2 includes a few standard datasets. To use custom ones, see
-   [Use Custom Datasets](./datasets.md).
- * Detectron2 contains the standard logic that creates a data loader for training/testing from a
-   dataset, but you can write your own as well. See [Use Custom Data Loaders](./data_loading.md).
- * Detectron2 implements many standard detection models, and provides ways for you
-   to override their behaviors. See [Use Models](./models.md) and [Write Models](./write-models.md).
- * Detectron2 provides a default training loop that is good for common training tasks.
-   You can customize it with hooks, or write your own loop instead. See [training](./training.md).
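
To make point 3 above concrete, here is a short sketch of the `@configurable` pattern with a hypothetical class; the exact yacs keys are assumptions for illustration:

```python
from detectron2.config import configurable

class MyROIHead:  # hypothetical example, not a detectron2 class
    @configurable
    def __init__(self, *, num_classes: int, score_thresh: float = 0.05):
        # Explicit-argument interface: usable with no cfg at all.
        self.num_classes = num_classes
        self.score_thresh = score_thresh

    @classmethod
    def from_config(cls, cfg):
        # Maps a yacs config to the explicit arguments above.
        return {
            "num_classes": cfg.MODEL.ROI_HEADS.NUM_CLASSES,
            "score_thresh": cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST,
        }

# All three call styles from the tutorial work:
# head = MyROIHead(cfg)                               # config-only
# head = MyROIHead(num_classes=80, score_thresh=0.1)  # explicit arguments
# head = MyROIHead(cfg, score_thresh=0.1)             # mixture of both
```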
 
spaces/Benson/text-generation/Examples/Agua Clasificacin Rompecabezas Mod Apk Descargar.md DELETED
@@ -1,48 +0,0 @@
-
- <h1>Water Sort Jigsaw Mod APK Download: A Fun and Relaxing Puzzle Game</h1>
- <p>Do you love puzzle games that challenge your brain and calm your nerves? If so, you should try Water Sort Jigsaw, a unique and addictive game that combines water sorting and jigsaw puzzles. In this game, you have to sort different colors of water into separate tubes and complete beautiful pictures with the sorted water. Sounds easy, right? Well, not so fast. You have to be careful not to mix the colors or overflow the tubes, or you will have to start over. Water Sort Jigsaw is a game that will test your logic, patience, and creativity.</p>
- <h2>What Is Water Sort Jigsaw?</h2>
- <p>Water Sort Jigsaw is a puzzle game developed by IEC Global Pty Ltd, a company that specializes in casual and educational games for all ages. The game was released in 2020 and has been downloaded more than 10 million times from the Google Play Store. It has a rating of 4.4 out of 5 stars, with thousands of positive reviews from satisfied players.</p>
- <h2>water sort jigsaw mod apk download</h2><br /><p><b><b>Download File</b> &mdash;&mdash;&mdash;&mdash;&mdash; <a href="https://bltlly.com/2v6KnM">https://bltlly.com/2v6KnM</a></b></p><br /><br />
- <h3>How to Play Water Sort Jigsaw</h3>
- <p>The gameplay of Water Sort Jigsaw is simple and intuitive. You have a set of tubes filled with different colors of water. Your goal is to sort the water by color into separate tubes. You can only pour water from one tube into another if the colors match or if the tube is empty. You can also use empty tubes as temporary storage. You have to sort all the water in the tubes to complete the level.</p>
- <p>As you progress through the levels, you will also unlock different jigsaw puzzles that you can complete with the sorted water. The puzzles are based on various themes, such as animals, nature, food, art, and more. You can choose the difficulty level of the puzzles, from easy to hard. The puzzles are a great way to relax and enjoy the game's colorful graphics.</p>
- <h3>Why Download Water Sort Jigsaw Mod APK?</h3>
-
- <h2>Features of Water Sort Jigsaw Mod APK</h2>
- <h3>Unlimited levels and puzzles</h3>
- <p>One of the best features of the Water Sort Jigsaw mod APK is that it gives you unlimited access to all the levels and puzzles in the game. You don't have to wait for new updates or pay for premium content. You can play as much as you want and enjoy endless hours of fun and entertainment.</p>
- <h3>Colorful graphics and relaxing sounds</h3>
- <p>Another feature of the Water Sort Jigsaw mod APK is that it enhances the game's graphics and sounds. The mod APK makes the colors more vibrant and realistic, making the game more attractive and engaging. It also improves the sound quality and adds more relaxing music and sound effects. The game becomes more immersive and soothing with the mod APK.</p>
- <h3>No ads or internet required</h3>
- <p>A third feature of the Water Sort Jigsaw mod APK is that it removes all the annoying ads and pop-ups that interrupt your gameplay. You don't have to watch ads to unlock levels or earn rewards. You can play without distractions or interruptions. The mod APK also lets you play offline, with no internet connection needed. You can play the game anytime and anywhere you want.</p>
- <h3>Easy to install and use</h3>
- <p>A fourth feature of the Water Sort Jigsaw mod APK is that it is very easy to install and use. You don't need to root your device or go through complicated steps to get the mod APK. You just have to download the mod APK file from a trusted source and follow the simple instructions below. The mod APK is compatible with most Android devices and runs smoothly without errors or glitches.</p>
- <p></p>
- <h2>How to Download and Install Water Sort Jigsaw Mod APK?</h2>
- <p>If you are interested in downloading and installing the Water Sort Jigsaw mod APK, you can follow these simple steps:</p>
- <h3>Step 1: Download the mod APK file from a trusted source</h3>
-
- <p><a href="">Water Sort Jigsaw Mod APK Download</a></p>
- <h3>Step 2: Enable unknown sources on your device</h3>
- <p>The second step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy options. Then, find the option that says unknown sources or allow installation from unknown sources and turn it on.</p>
- <h3>Step 3: Install the mod APK file and enjoy the game</h3>
- <p>The third and final step is to install the mod APK file and enjoy the game. To do this, locate and open the downloaded file on your device. Then, follow the on-screen instructions to complete the installation process. Once done, you can launch the game and start playing with unlimited features and benefits.</p>
- <h2>Conclusion</h2>
- <p>Water Sort Jigsaw is a fun and relaxing puzzle game that will keep you entertained for hours. It is a great way to exercise your brain and relieve stress. If you want to enjoy the game with more features and benefits, you should download the Water Sort Jigsaw mod APK. The mod APK gives you unlimited access to all levels and puzzles, improves the graphics and sounds, removes ads and internet requirements, and is easy to install and use. You can download the Water Sort Jigsaw mod APK from the link below and start sorting water and completing puzzles.</p>
- <h4>FAQs</h4>
- <p>Here are some frequently asked questions about the Water Sort Jigsaw mod APK:</p>
- <ul>
- <li><b>Is the Water Sort Jigsaw mod APK safe to download?</b></li>
- <li>Yes, the Water Sort Jigsaw mod APK is safe to download as long as you get it from a trusted source. The mod APK file is virus-free and does not contain any malicious code or malware.</li>
- <li><b>Does the Water Sort Jigsaw mod APK require root access?</b></li>
-
- <li><b>Can I update the Water Sort Jigsaw mod APK?</b></li>
- <li>No, the Water Sort Jigsaw mod APK does not support updates from the official version. If you want to update the game, you have to uninstall the mod APK and install the latest version from the Google Play Store.</li>
- <li><b>Can I play Water Sort Jigsaw with my friends?</b></li>
- <li>No, Water Sort Jigsaw does not have a multiplayer mode or a social feature. You can only play the game solo and offline.</li>
- <li><b>Can I customize the game settings?</b></li>
- <li>Yes, Water Sort Jigsaw lets you customize some of the game settings, such as sound, music, vibration, language, and difficulty level. You can access these settings from the game's main menu.</li>
- </ul></p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Bmw Drift Apk.md DELETED
@@ -1,54 +0,0 @@
-
- <h1>BMW Drift APK: A Fun and Realistic Drifting Game for Android</h1>
- <p>If you are a fan of drifting and BMW cars, you will love BMW Drift APK, a game that lets you experience the thrill of sliding sideways in various models from the German carmaker. In this article, we will tell you what BMW Drift APK is, how to download and install it, how to play it, and some tips and tricks to improve your drifting skills.</p>
- <h2>What Is BMW Drift APK?</h2>
- <p>BMW Drift APK is a game that simulates the driving technique of drifting, where the driver intentionally oversteers and loses traction while maintaining control and steering. The game lets you choose from different BMW models, such as the M3, M5, Z4, X6, and more, and drift on various tracks, such as city streets, highways, mountain roads, and racing circuits.</p>
- <h2>bmw drift apk</h2><br /><p><b><b>DOWNLOAD</b> &#11088; <a href="https://bltlly.com/2v6J5l">https://bltlly.com/2v6J5l</a></b></p><br /><br />
- <h3>Game features</h3>
- <p>BMW Drift APK has many features that make it a fun and realistic drifting game for Android devices. Some of these features are:</p>
- <h4>Realistic physics and graphics</h4>
- <p>The game uses advanced physics and graphics engines to create a realistic driving experience. You can see the tire smoke, the sparks from your bumper, the damage to your car, and the reflections in the windows. You can also feel the weight transfer, inertia, grip, and feedback of your car as you drift.</p>
- <h4>Customizable cars and settings</h4>
- <p>The game lets you customize the appearance and performance of your car. You can change the color, wheels, spoilers, exhausts, and more. You can also tune your car's engine, suspension, brakes, tires, differential, and steering to suit your driving style. You can also adjust the game settings, such as the camera angle, sound effects, music volume, and difficulty level.</p>
- <h4>Multiple game modes and challenges</h4>
-
- <h2>How to Download and Install BMW Drift APK?</h2>
- <p>If you want to download and install BMW Drift APK on your Android device, you need to follow these steps:</p>
- <h3>Download the APK file from a trusted source</h3>
- <p>The first step is to download the BMW Drift APK file from a trusted source. You can use this link to download it safely. The file size is about 50 MB.</p>
- <h3>Enable unknown sources on your device</h3>
- <p>The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and turn it on.</p>
- <h3>Install the APK file and launch the game</h3>
- <p>The final step is to install the APK file and launch the game. To do this, locate the downloaded file in your file manager and tap on it. Follow the on-screen instructions to install the game. Once the installation is finished, you can open the game and enjoy drifting.</p>
- <h2>How to Play BMW Drift APK?</h2>
- <p>Playing BMW Drift APK is easy and fun. Here are the basic steps to play the game:</p>
- <p></p>
- <h3>Choose your car and track</h3>
- <p>The first thing you need to do is choose your car and track. You can select from a variety of BMW models, such as the M3, M5, Z4, X6, and more. You can also choose from different tracks, such as city streets, highways, mountain roads, and racing circuits. You can also customize your car's appearance and performance before you start drifting.</p>
- <h3>Use the controls to steer, accelerate, brake, and drift</h3>
- <p>The next thing you need to do is use the controls to steer, accelerate, brake, and drift. You can use the on-screen buttons or your device's tilt sensor to control your car. You can also use the handbrake button to initiate a drift. The game shows you a drift indicator that tells you how well you are drifting. The more you drift, the more points and rewards you earn.</p>
-
- <p>The last thing you need to do is earn points and rewards for your drifting skills. The game gives you points based on the angle, speed, duration, and distance of your drifts. You can also earn extra points by performing combos, such as chaining multiple drifts together or drifting close to obstacles. You can use the points and rewards to unlock new cars and tracks, or upgrade the ones you already have.</p>
- <h2>Tips and Tricks for BMW Drift APK</h2>
- <p>If you want to improve your drifting skills and enjoy the game more, here are some tips and tricks for BMW Drift APK:</p>
- <h3>Learn the basics of drifting techniques</h3>
- <p>The first tip is to learn the basics of drifting techniques. Drifting is not just about sliding sideways, but also about controlling the balance and direction of your car. There are different types of drifts, such as power drifts, brake drifts, clutch-kick drifts, handbrake drifts, and more. You can learn more about these techniques from tutorials or videos online.</p>
- <h3>Practice on different tracks and cars</h3>
- <p>The second tip is to practice on different tracks and cars. Each track and car has its own characteristics and challenges. Some tracks may have tight corners, narrow lanes, or slippery surfaces. Some cars may have more power, grip, or weight than others. By practicing on different tracks and cars, you will learn how to adapt to different situations and improve your drifting skills.</p>
- <h3>Adjust the settings to suit your preferences and device performance</h3>
- <p>The third tip is to adjust the settings to suit your preferences and device performance. You can change the game settings, such as the camera angle, sound effects, music volume, and difficulty level. You can also tune your car's settings, such as the engine, suspension, brakes, tires, differential, and steering. By adjusting the settings, you can make the game more enjoyable and comfortable for you.</p>
- <h2>Conclusion</h2>
-
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about BMW Drift APK:</p>
- <table>
- <tr><td><b>Question</b></td><td><b>Answer</b></td></tr>
- <tr><td>Is BMW Drift APK free?</td><td>Yes, BMW Drift APK is free to download and play.</td></tr>
- <tr><td>Is BMW Drift APK safe?</td><td>Yes, BMW Drift APK is safe if you download it from a trusted source such as this link. However, you should always be careful when installing apps from unknown sources.</td></tr>
- <tr><td>Is BMW Drift APK compatible with my device?</td><td>BMW Drift APK is compatible with most Android devices running Android 4.1 or higher. However, some devices may have performance or compatibility issues depending on their specifications and settings.</td></tr>
- <tr><td>Can I play BMW Drift APK offline?</td><td>Yes, you can play BMW Drift APK offline in free mode and career mode. However, you will need an internet connection to play online mode and access some features, such as leaderboards and updates.</td></tr>
- <tr><td>Can I play BMW Drift APK with a controller?</td><td>Yes, you can play BMW Drift APK with a controller if your device supports it. You can connect your controller via Bluetooth or USB and configure the buttons in the game settings.</td></tr>
- </table>
- <p>I hope this article has helped you learn more about BMW Drift APK and how to enjoy it. If you have any questions or comments, please leave a comment below. Happy drifting!</p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Descargar El Juego Mod Township Offline.md DELETED
@@ -1,79 +0,0 @@
-
- <h1>How to Download the Township Mod Game Offline for Free</h1>
- <p>If you are looking for a fun and relaxing game that combines city building and farming, then you should try Township. Township is a popular mobile game that lets you create your dream town, harvest crops, trade with other countries, manage a zoo, and more. But what if you want to play Township without an internet connection? Or what if you want unlimited resources, coins, and cash in the game? In this article, we will show you how to download the Township mod game offline for free. We will also explain what a game mod is, how it can enhance your gaming experience, and what the benefits and risks of downloading the Township mod game offline are.</p>
- <h2>download the township mod game offline</h2><br /><p><b><b>Download Zip</b> ---> <a href="https://bltlly.com/2v6JJV">https://bltlly.com/2v6JJV</a></b></p><br /><br />
- <h2>What Is Township and Why You Should Play It</h2>
- <p>Township is a unique blend of city building and farming developed by Playrix. It is available for Android, iOS, Windows, Xbox One, PlayStation 4, and Nintendo Switch. In Township, you can build your dream town from scratch, using various buildings and decorations that you can customize to your liking. You can also grow and process crops on your farms and in your factories, sell goods to develop your town, trade with exotic islands, open restaurants, cinemas, and other community buildings, explore the mine for resources and artifacts, manage your own zoo with animals from around the world, and more. Township is a game that offers endless possibilities for creativity and fun.</p>
- <p>Township has many features and activities to enjoy. You can play with your Facebook and Google+ friends, make new friends in the game community, create your own clans, take part in seasonal events and competitions, complete quests and orders from your townspeople, collect country flags and famous landmarks for your town, watch funny animations of your characters, and much more. Township is a game that never gets boring.</p>
-
- <h2>What Is a Game Mod and How It Can Enhance Your Gaming Experience</h2>
- <p>A game mod is a modification or alteration of the original game that changes some aspects of it. A game mod can be created by anyone who has the skills and tools to do so. A game mod can be downloaded from various websites or platforms that host them. You can install a game mod on your device by following some instructions or using some software.</p>
- <p></p>
- <p>A game mod can add new content, features, or gameplay elements to the game. For example, a game mod can introduce new characters, items, maps, missions, modes, or genres: it can turn a strategy game into a role-playing game, or a racing game into a zombie-survival game. A game mod can also improve the game's graphics, sound, or interface, for example by enhancing the resolution, textures, lighting, or effects, or by adding new music, voice acting, or subtitles. A game mod can also fix bugs, improve performance, or customize the game to your preferences, for example by removing glitches, errors, or crashes, or by increasing the game's speed, stability, or compatibility. Finally, a game mod can change the game's difficulty, balance, or mechanics, making it easier or harder, more realistic or more fantastical, more fun or more challenging.</p>
- <p>A game mod can enhance your gaming experience by giving you more options, variety, and enjoyment in the game. A game mod can make the game more interesting, exciting, or immersive. It can also extend the game's lifespan by adding new content or replay value, and it can satisfy your curiosity or creativity by letting you explore new possibilities or create your own scenarios in the game.</p>
- <h2>How to Download the Township Mod Game Offline for Free</h2>
-
- <h3>Find a reliable and safe source for the game mod</h3>
- <p>There are many websites or platforms that offer game mods for Township and other games. However, not all of them are reliable or safe. Some may contain fake, outdated, or corrupted files that may not work properly or may harm your device. Some may also have malicious ads, pop-ups, or links that can redirect you to unwanted or dangerous sites. Therefore, you should be careful and selective when choosing a source for the game mod.</p>
- <p>One way to find a reliable and safe source for the game mod is to do some research and read reviews from other users who have downloaded and used it. You can also check the ratings, comments, feedback, or testimonials from those users, and look for recommendations or suggestions from reputable sites, blogs, forums, or communities related to Township or gaming in general.</p>
- <p>Another way to find a reliable and safe source for the game mod is to use tools or software that can help you scan and verify the files before downloading them. You can use antivirus programs, malware detectors, file checkers, or download managers to detect and remove any viruses, malware, spyware, adware, trojans, worms, or other threats from the files. You can also use tools that compare and verify the files against the original game files to make sure they are compatible and authentic.</p>
- <h3>Download the game mod file and install it on your device</h3>
-
- <p>To download the game mod file, you need to follow the link or button provided by the source and save the file to your device. You may need to grant some permissions or access to your device or browser to download the file, or disable some security settings or features, for example by enabling unknown sources or temporarily disabling antivirus programs.</p>
- <p>To install the game mod file, you need to locate and open the file on your device. You may need to extract or unzip it first if it is a compressed archive, and uninstall or remove the original game first if it is already installed. You may also want to back up your game progress or data first if you want to keep it. Then, follow the instructions or steps provided by the source or the file itself to install the game mod on your device. You may need to grant some permissions or access to install the file, and you may need to restart your device or app afterwards.</p>
- <h3>Launch the game mod and enjoy playing Township offline</h3>
- <p>After installing the game mod file, you can launch the game mod and enjoy playing Township offline. You can find and open the game mod icon or app on your device. You may see some changes or differences in the game's logo, title, interface, or content compared to the original game, as well as notifications or messages from the source or the file itself about the game mod's features or settings. You can adjust or customize them according to your preferences.</p>
-
- <h2>Benefits and Risks of Downloading the Township Mod Game Offline</h2>
- <p>Downloading the Township mod game offline has its benefits and risks. Here are some of them:</p>
- <h3>Benefits of downloading the Township mod game offline</h3>
- <table>
- <tr>
- <th>Benefit</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>You can play Township without an internet connection</td>
- <td>You don't need to worry about having a stable or fast internet connection to play Township. You can play Township anytime and anywhere you want, even when you are offline. You can also save on data usage or battery life by playing Township offline.</td>
- </tr>
- <tr>
- <td>You can access unlimited resources, coins, and cash in the game</td>
- <td>You don't need to wait for your resources to grow or replenish in the game, and you don't need to spend real money to buy coins or cash. You can have unlimited resources, coins, and cash that you can use to build, upgrade, or expand your town, farm, zoo, and more.</td>
- </tr>
- <tr>
- <td>You can unlock all the buildings, decorations, and animals in the game</td>
- <td>You don't need to level up or complete certain tasks to unlock all the buildings, decorations, and animals in the game. You have access to all the items and options that you can use to customize and beautify your town, farm, zoo, and more.</td>
- </tr>
- </table>
- <h3>Risks of downloading the Township mod game offline</h3>
- <table>
- <tr>
- <th>Risk</th>
- <th>Description</th>
- </tr>
- <tr>
- <td>You may encounter compatibility issues or errors in the game mod</td>
- <td>The game mod may not work properly or smoothly on your device or app. It may not be compatible with your device model, operating system, app version, or other factors, and it may have bugs, glitches, or errors that can affect your gameplay or performance.</td>
- </tr>
- <tr>
- <td>You may violate the game developer's terms of service or privacy policy</td>
-
- </tr>
- <tr>
- <td>You may expose your device to malware or viruses from the game mod file</td>
- <td>The game mod file may contain malicious code or software that can harm your device or app, and it may have hidden ads, pop-ups, or links that can redirect you to unwanted or dangerous sites. By downloading and installing the game mod file, you may expose your device to malware or viruses that can damage your device or app.</td>
- </tr>
- </table>
- <h2>Conclusion and FAQs</h2>
- <p>In conclusion, downloading the Township mod game offline is a way to enjoy playing Township without an internet connection and with unlimited resources, coins, cash, and items in the game. However, it also has some risks, such as compatibility issues, terms-of-service violations, and exposure to malware. Therefore, you should be careful and responsible when downloading and using the Township mod game offline. You need to find a reliable and safe source for the game mod, download and install the game mod file correctly, and launch and play the game mod with caution. You also need to respect the rights and interests of the game developer and other players, and be aware of the possible consequences of downloading and using the Township mod game offline. Here are some FAQs that can help you learn more: <h4>Q: Can I play Township online with the game mod?</h4>
- <p>A: No, you cannot play Township online with the game mod. The game mod is designed to work offline only. If you try to play Township online with the game mod, you may encounter errors or problems, and you may risk being detected or reported by the game developer or other players.</p>
- <h4>Q: Can I update Township with the game mod?</h4>
-
- <h4>Q: Can I restore my original Township game after using the game mod?</h4>
- <p>A: Yes, you can restore your original Township game after using the game mod. You need to uninstall or remove the game mod file from your device and reinstall the original Township game from the official source. You may also need to restore or recover your original Township game progress or data from your backup or cloud storage.</p>
- <h4>Q: Can I use other game mods for Township?</h4>
- <p>A: Yes, you can use other game mods for Township. There are many different types of game mods for Township that offer different features or functions. However, you should be careful and selective when choosing and using them, and make sure they are reliable, safe, compatible, and up to date.</p>
- <h4>Q: Can I create my own game mod for Township?</h4>
- <p>A: Yes, you can create your own game mod for Township if you have the skills and tools to do so. You need some knowledge and experience in programming, coding, hacking, or game modding, plus tools or software that can help you create, edit, test, or distribute your own game mod. However, you should be respectful and ethical when creating it: follow the rules and regulations of the game developer and the gaming community, and give credit and acknowledgment to the original sources or creators of anything you build on.</p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/BetterAPI/BetterChat_new/src/lib/utils/sha256.ts DELETED
@@ -1,7 +0,0 @@
- export async function sha256(input: string): Promise<string> {
-     const utf8 = new TextEncoder().encode(input);
-     const hashBuffer = await crypto.subtle.digest("SHA-256", utf8);
-     const hashArray = Array.from(new Uint8Array(hashBuffer));
-     const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join("");
-     return hashHex;
- }
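
For comparison, the same digest computed in Python with `hashlib` (a sketch, not part of this repository):

```python
import hashlib

def sha256(text: str) -> str:
    # UTF-8 encode, hash, and render as lowercase hex, matching the
    # TextEncoder + crypto.subtle.digest pipeline above.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Well-known SHA-256 test vector:
assert sha256("abc") == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
)
```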
 
spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/attr.py DELETED
@@ -1,72 +0,0 @@
- # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License"). You
- # may not use this file except in compliance with the License. A copy of
- # the License is located at
- #
- # https://aws.amazon.com/apache2.0/
- #
- # or in the "license" file accompanying this file. This file is
- # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
- # ANY KIND, either express or implied. See the License for the specific
- # language governing permissions and limitations under the License.
- from botocore.docs.params import ResponseParamsDocumenter
-
- from boto3.docs.utils import get_identifier_description
-
-
- class ResourceShapeDocumenter(ResponseParamsDocumenter):
-     EVENT_NAME = 'resource-shape'
-
-
- def document_attribute(
-     section,
-     service_name,
-     resource_name,
-     attr_name,
-     event_emitter,
-     attr_model,
-     include_signature=True,
- ):
-     if include_signature:
-         full_attr_name = f"{section.context.get('qualifier', '')}{attr_name}"
-         section.style.start_sphinx_py_attr(full_attr_name)
-     # Note that an attribute may have one, may have many, or may have no
-     # operations that back the resource's shape. So we just set the
-     # operation_name to the resource name if we ever need to hook in and
-     # modify a particular attribute.
-     ResourceShapeDocumenter(
-         service_name=service_name,
-         operation_name=resource_name,
-         event_emitter=event_emitter,
-     ).document_params(section=section, shape=attr_model)
-
-
- def document_identifier(
-     section,
-     resource_name,
-     identifier_model,
-     include_signature=True,
- ):
-     if include_signature:
-         full_identifier_name = (
-             f"{section.context.get('qualifier', '')}{identifier_model.name}"
-         )
-         section.style.start_sphinx_py_attr(full_identifier_name)
-     description = get_identifier_description(
-         resource_name, identifier_model.name
-     )
-     section.write(f'*(string)* {description}')
-
-
- def document_reference(section, reference_model, include_signature=True):
-     if include_signature:
-         full_reference_name = (
-             f"{section.context.get('qualifier', '')}{reference_model.name}"
-         )
-         section.style.start_sphinx_py_attr(full_reference_name)
-     reference_type = f'(:py:class:`{reference_model.resource.type}`) '
-     section.write(reference_type)
-     section.include_doc_string(
-         f'The related {reference_model.name} if set, otherwise ``None``.'
-     )
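
These helpers are internal Sphinx-documentation plumbing. A heavily hedged sketch of driving one of them directly, assuming botocore's unstable `bcdoc` internals keep their current shape (not something user code normally calls):

```python
from botocore.docs.bcdoc.restdoc import DocumentStructure

from boto3.docs.attr import document_identifier
from boto3.resources.model import Identifier

# Build a throwaway ReST document and render one identifier attribute
# into it, the way boto3's resource documenter would.
doc = DocumentStructure("Bucket")
section = doc.add_new_section("name")
document_identifier(section, "Bucket", Identifier("name"))
print(doc.flush_structure().decode("utf-8"))
```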
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/candidates.py DELETED
@@ -1,552 +0,0 @@
- import logging
- import sys
- from typing import TYPE_CHECKING, Any, FrozenSet, Iterable, Optional, Tuple, Union, cast
-
- from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
- from pip._vendor.packaging.version import Version
-
- from pip._internal.exceptions import (
-     HashError,
-     InstallationSubprocessError,
-     MetadataInconsistent,
- )
- from pip._internal.metadata import BaseDistribution
- from pip._internal.models.link import Link, links_equivalent
- from pip._internal.models.wheel import Wheel
- from pip._internal.req.constructors import (
-     install_req_from_editable,
-     install_req_from_line,
- )
- from pip._internal.req.req_install import InstallRequirement
- from pip._internal.utils.direct_url_helpers import direct_url_from_link
- from pip._internal.utils.misc import normalize_version_info
-
- from .base import Candidate, CandidateVersion, Requirement, format_name
-
- if TYPE_CHECKING:
-     from .factory import Factory
-
- logger = logging.getLogger(__name__)
-
- BaseCandidate = Union[
-     "AlreadyInstalledCandidate",
-     "EditableCandidate",
-     "LinkCandidate",
- ]
-
- # Avoid conflicting with the PyPI package "Python".
- REQUIRES_PYTHON_IDENTIFIER = cast(NormalizedName, "<Python from Requires-Python>")
-
-
- def as_base_candidate(candidate: Candidate) -> Optional[BaseCandidate]:
-     """The runtime version of BaseCandidate."""
-     base_candidate_classes = (
-         AlreadyInstalledCandidate,
-         EditableCandidate,
-         LinkCandidate,
-     )
-     if isinstance(candidate, base_candidate_classes):
-         return candidate
-     return None
-
-
- def make_install_req_from_link(
-     link: Link, template: InstallRequirement
- ) -> InstallRequirement:
-     assert not template.editable, "template is editable"
-     if template.req:
-         line = str(template.req)
-     else:
-         line = link.url
-     ireq = install_req_from_line(
-         line,
-         user_supplied=template.user_supplied,
-         comes_from=template.comes_from,
-         use_pep517=template.use_pep517,
-         isolated=template.isolated,
-         constraint=template.constraint,
-         global_options=template.global_options,
-         hash_options=template.hash_options,
-         config_settings=template.config_settings,
-     )
-     ireq.original_link = template.original_link
-     ireq.link = link
-     ireq.extras = template.extras
-     return ireq
-
-
- def make_install_req_from_editable(
-     link: Link, template: InstallRequirement
- ) -> InstallRequirement:
-     assert template.editable, "template not editable"
-     ireq = install_req_from_editable(
-         link.url,
-         user_supplied=template.user_supplied,
-         comes_from=template.comes_from,
-         use_pep517=template.use_pep517,
-         isolated=template.isolated,
-         constraint=template.constraint,
-         permit_editable_wheels=template.permit_editable_wheels,
-         global_options=template.global_options,
-         hash_options=template.hash_options,
-         config_settings=template.config_settings,
-     )
-     ireq.extras = template.extras
-     return ireq
-
-
- def _make_install_req_from_dist(
-     dist: BaseDistribution, template: InstallRequirement
- ) -> InstallRequirement:
-     if template.req:
-         line = str(template.req)
-     elif template.link:
-         line = f"{dist.canonical_name} @ {template.link.url}"
-     else:
-         line = f"{dist.canonical_name}=={dist.version}"
-     ireq = install_req_from_line(
-         line,
-         user_supplied=template.user_supplied,
-         comes_from=template.comes_from,
-         use_pep517=template.use_pep517,
-         isolated=template.isolated,
-         constraint=template.constraint,
-         global_options=template.global_options,
-         hash_options=template.hash_options,
-         config_settings=template.config_settings,
-     )
-     ireq.satisfied_by = dist
-     return ireq
-
-
- class _InstallRequirementBackedCandidate(Candidate):
-     """A candidate backed by an ``InstallRequirement``.
-
-     This represents a package request with the target not being already
-     in the environment, and needs to be fetched and installed. The backing
-     ``InstallRequirement`` is responsible for most of the leg work; this
-     class exposes appropriate information to the resolver.
-
-     :param link: The link passed to the ``InstallRequirement``. The backing
-         ``InstallRequirement`` will use this link to fetch the distribution.
-     :param source_link: The link this candidate "originates" from. This is
-         different from ``link`` when the link is found in the wheel cache.
-         ``link`` would point to the wheel cache, while this points to the
-         found remote link (e.g. from pypi.org).
-     """
-
-     dist: BaseDistribution
-     is_installed = False
-
-     def __init__(
-         self,
-         link: Link,
-         source_link: Link,
-         ireq: InstallRequirement,
-         factory: "Factory",
-         name: Optional[NormalizedName] = None,
-         version: Optional[CandidateVersion] = None,
-     ) -> None:
-         self._link = link
-         self._source_link = source_link
-         self._factory = factory
-         self._ireq = ireq
-         self._name = name
-         self._version = version
-         self.dist = self._prepare()
-
-     def __str__(self) -> str:
-         return f"{self.name} {self.version}"
-
-     def __repr__(self) -> str:
-         return "{class_name}({link!r})".format(
-             class_name=self.__class__.__name__,
-             link=str(self._link),
-         )
-
-     def __hash__(self) -> int:
-         return hash((self.__class__, self._link))
-
-     def __eq__(self, other: Any) -> bool:
-         if isinstance(other, self.__class__):
-             return links_equivalent(self._link, other._link)
-         return False
-
-     @property
-     def source_link(self) -> Optional[Link]:
-         return self._source_link
-
-     @property
-     def project_name(self) -> NormalizedName:
-         """The normalised name of the project the candidate refers to"""
-         if self._name is None:
-             self._name = self.dist.canonical_name
-         return self._name
-
-     @property
-     def name(self) -> str:
-         return self.project_name
-
-     @property
-     def version(self) -> CandidateVersion:
-         if self._version is None:
-             self._version = self.dist.version
-         return self._version
-
-     def format_for_error(self) -> str:
-         return "{} {} (from {})".format(
-             self.name,
-             self.version,
-             self._link.file_path if self._link.is_file else self._link,
-         )
-
-     def _prepare_distribution(self) -> BaseDistribution:
-         raise NotImplementedError("Override in subclass")
-
-     def _check_metadata_consistency(self, dist: BaseDistribution) -> None:
-         """Check for consistency of project name and version of dist."""
-         if self._name is not None and self._name != dist.canonical_name:
-             raise MetadataInconsistent(
-                 self._ireq,
-                 "name",
-                 self._name,
-                 dist.canonical_name,
-             )
-         if self._version is not None and self._version != dist.version:
-             raise MetadataInconsistent(
-                 self._ireq,
-                 "version",
-                 str(self._version),
-                 str(dist.version),
-             )
-
-     def _prepare(self) -> BaseDistribution:
-         try:
-             dist = self._prepare_distribution()
-         except HashError as e:
-             # Provide HashError the underlying ireq that caused it. This
-             # provides context for the resulting error message to show the
-             # offending line to the user.
-             e.req = self._ireq
-             raise
-         except InstallationSubprocessError as exc:
-             # The output has been presented already, so don't duplicate it.
-             exc.context = "See above for output."
-             raise
-
-         self._check_metadata_consistency(dist)
-         return dist
-
-     def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
-         requires = self.dist.iter_dependencies() if with_requires else ()
-         for r in requires:
-             yield self._factory.make_requirement_from_spec(str(r), self._ireq)
-         yield self._factory.make_requires_python_requirement(self.dist.requires_python)
-
-     def get_install_requirement(self) -> Optional[InstallRequirement]:
-         return self._ireq
-
-
- class LinkCandidate(_InstallRequirementBackedCandidate):
-     is_editable = False
-
-     def __init__(
-         self,
-         link: Link,
-         template: InstallRequirement,
-         factory: "Factory",
-         name: Optional[NormalizedName] = None,
-         version: Optional[CandidateVersion] = None,
-     ) -> None:
-         source_link = link
-         cache_entry = factory.get_wheel_cache_entry(source_link, name)
-         if cache_entry is not None:
-             logger.debug("Using cached wheel link: %s", cache_entry.link)
-             link = cache_entry.link
-         ireq = make_install_req_from_link(link, template)
-         assert ireq.link == link
-         if ireq.link.is_wheel and not ireq.link.is_file:
-             wheel = Wheel(ireq.link.filename)
-             wheel_name = canonicalize_name(wheel.name)
-             assert name == wheel_name, f"{name!r} != {wheel_name!r} for wheel"
-             # Version may not be present for PEP 508 direct URLs
-             if version is not None:
-                 wheel_version = Version(wheel.version)
-                 assert version == wheel_version, "{!r} != {!r} for wheel {}".format(
-                     version, wheel_version, name
-                 )
-
-         if cache_entry is not None:
-             assert ireq.link.is_wheel
-             assert ireq.link.is_file
-             if cache_entry.persistent and template.link is template.original_link:
-                 ireq.cached_wheel_source_link = source_link
-             if cache_entry.origin is not None:
-                 ireq.download_info = cache_entry.origin
-             else:
-                 # Legacy cache entry that does not have origin.json.
-                 # download_info may miss the archive_info.hashes field.
-                 ireq.download_info = direct_url_from_link(
-                     source_link, link_is_in_wheel_cache=cache_entry.persistent
-                 )
-
-         super().__init__(
-             link=link,
-             source_link=source_link,
-             ireq=ireq,
-             factory=factory,
-             name=name,
-             version=version,
-         )
-
-     def _prepare_distribution(self) -> BaseDistribution:
-         preparer = self._factory.preparer
-         return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
-
-
- class EditableCandidate(_InstallRequirementBackedCandidate):
-     is_editable = True
-
-     def __init__(
-         self,
-         link: Link,
-         template: InstallRequirement,
-         factory: "Factory",
-         name: Optional[NormalizedName] = None,
-         version: Optional[CandidateVersion] = None,
-     ) -> None:
-         super().__init__(
-             link=link,
-             source_link=link,
-             ireq=make_install_req_from_editable(link, template),
-             factory=factory,
-             name=name,
-             version=version,
-         )
-
-     def _prepare_distribution(self) -> BaseDistribution:
-         return self._factory.preparer.prepare_editable_requirement(self._ireq)
-
-
- class AlreadyInstalledCandidate(Candidate):
-     is_installed = True
-     source_link = None
-
-     def __init__(
-         self,
-         dist: BaseDistribution,
-         template: InstallRequirement,
-         factory: "Factory",
-     ) -> None:
-         self.dist = dist
-         self._ireq = _make_install_req_from_dist(dist, template)
-         self._factory = factory
-
-         # This is just logging some messages, so we can do it eagerly.
-         # The returned dist would be exactly the same as self.dist because we
-         # set satisfied_by in _make_install_req_from_dist.
-         # TODO: Supply reason based on force_reinstall and upgrade_strategy.
-         skip_reason = "already satisfied"
-         factory.preparer.prepare_installed_requirement(self._ireq, skip_reason)
-
-     def __str__(self) -> str:
-         return str(self.dist)
-
-     def __repr__(self) -> str:
-         return "{class_name}({distribution!r})".format(
-             class_name=self.__class__.__name__,
-             distribution=self.dist,
-         )
-
-     def __hash__(self) -> int:
-         return hash((self.__class__, self.name, self.version))
-
-     def __eq__(self, other: Any) -> bool:
-         if isinstance(other, self.__class__):
-             return self.name == other.name and self.version == other.version
-         return False
-
-     @property
-     def project_name(self) -> NormalizedName:
-         return self.dist.canonical_name
-
-     @property
-     def name(self) -> str:
-         return self.project_name
-
-     @property
-     def version(self) -> CandidateVersion:
-         return self.dist.version
-
-     @property
-     def is_editable(self) -> bool:
-         return self.dist.editable
-
-     def format_for_error(self) -> str:
-         return f"{self.name} {self.version} (Installed)"
-
-     def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
-         if not with_requires:
-             return
-         for r in self.dist.iter_dependencies():
-             yield self._factory.make_requirement_from_spec(str(r), self._ireq)
-
-     def get_install_requirement(self) -> Optional[InstallRequirement]:
395
- return None
396
-
397
-
398
- class ExtrasCandidate(Candidate):
399
- """A candidate that has 'extras', indicating additional dependencies.
400
-
401
- Requirements can be for a project with dependencies, something like
402
- foo[extra]. The extras don't affect the project/version being installed
403
- directly, but indicate that we need additional dependencies. We model that
404
- by having an artificial ExtrasCandidate that wraps the "base" candidate.
405
-
406
- The ExtrasCandidate differs from the base in the following ways:
407
-
408
- 1. It has a unique name, of the form foo[extra]. This causes the resolver
409
- to treat it as a separate node in the dependency graph.
410
- 2. When we're getting the candidate's dependencies,
411
- a) We specify that we want the extra dependencies as well.
412
- b) We add a dependency on the base candidate.
413
- See below for why this is needed.
414
- 3. We return None for the underlying InstallRequirement, as the base
415
- candidate will provide it, and we don't want to end up with duplicates.
416
-
417
- The dependency on the base candidate is needed so that the resolver can't
418
- decide that it should recommend foo[extra1] version 1.0 and foo[extra2]
419
- version 2.0. Having those candidates depend on foo=1.0 and foo=2.0
420
- respectively forces the resolver to recognise that this is a conflict.
421
- """
422
-
423
- def __init__(
424
- self,
425
- base: BaseCandidate,
426
- extras: FrozenSet[str],
427
- ) -> None:
428
- self.base = base
429
- self.extras = extras
430
-
431
- def __str__(self) -> str:
432
- name, rest = str(self.base).split(" ", 1)
433
- return "{}[{}] {}".format(name, ",".join(self.extras), rest)
434
-
435
- def __repr__(self) -> str:
436
- return "{class_name}(base={base!r}, extras={extras!r})".format(
437
- class_name=self.__class__.__name__,
438
- base=self.base,
439
- extras=self.extras,
440
- )
441
-
442
- def __hash__(self) -> int:
443
- return hash((self.base, self.extras))
444
-
445
- def __eq__(self, other: Any) -> bool:
446
- if isinstance(other, self.__class__):
447
- return self.base == other.base and self.extras == other.extras
448
- return False
449
-
450
- @property
451
- def project_name(self) -> NormalizedName:
452
- return self.base.project_name
453
-
454
- @property
455
- def name(self) -> str:
456
- """The normalised name of the project the candidate refers to"""
457
- return format_name(self.base.project_name, self.extras)
458
-
459
- @property
460
- def version(self) -> CandidateVersion:
461
- return self.base.version
462
-
463
- def format_for_error(self) -> str:
464
- return "{} [{}]".format(
465
- self.base.format_for_error(), ", ".join(sorted(self.extras))
466
- )
467
-
468
- @property
469
- def is_installed(self) -> bool:
470
- return self.base.is_installed
471
-
472
- @property
473
- def is_editable(self) -> bool:
474
- return self.base.is_editable
475
-
476
- @property
477
- def source_link(self) -> Optional[Link]:
478
- return self.base.source_link
479
-
480
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
481
- factory = self.base._factory
482
-
483
- # Add a dependency on the exact base
484
- # (See note 2b in the class docstring)
485
- yield factory.make_requirement_from_candidate(self.base)
486
- if not with_requires:
487
- return
488
-
489
- # The user may have specified extras that the candidate doesn't
490
- # support. We ignore any unsupported extras here.
491
- valid_extras = self.extras.intersection(self.base.dist.iter_provided_extras())
492
- invalid_extras = self.extras.difference(self.base.dist.iter_provided_extras())
493
- for extra in sorted(invalid_extras):
494
- logger.warning(
495
- "%s %s does not provide the extra '%s'",
496
- self.base.name,
497
- self.version,
498
- extra,
499
- )
500
-
501
- for r in self.base.dist.iter_dependencies(valid_extras):
502
- requirement = factory.make_requirement_from_spec(
503
- str(r), self.base._ireq, valid_extras
504
- )
505
- if requirement:
506
- yield requirement
507
-
508
- def get_install_requirement(self) -> Optional[InstallRequirement]:
509
- # We don't return anything here, because we always
510
- # depend on the base candidate, and we'll get the
511
- # install requirement from that.
512
- return None
513
-
514
-
515
- class RequiresPythonCandidate(Candidate):
516
- is_installed = False
517
- source_link = None
518
-
519
- def __init__(self, py_version_info: Optional[Tuple[int, ...]]) -> None:
520
- if py_version_info is not None:
521
- version_info = normalize_version_info(py_version_info)
522
- else:
523
- version_info = sys.version_info[:3]
524
- self._version = Version(".".join(str(c) for c in version_info))
525
-
526
- # We don't need to implement __eq__() and __ne__() since there is always
527
- # only one RequiresPythonCandidate in a resolution, i.e. the host Python.
528
- # The built-in object.__eq__() and object.__ne__() do exactly what we want.
529
-
530
- def __str__(self) -> str:
531
- return f"Python {self._version}"
532
-
533
- @property
534
- def project_name(self) -> NormalizedName:
535
- return REQUIRES_PYTHON_IDENTIFIER
536
-
537
- @property
538
- def name(self) -> str:
539
- return REQUIRES_PYTHON_IDENTIFIER
540
-
541
- @property
542
- def version(self) -> CandidateVersion:
543
- return self._version
544
-
545
- def format_for_error(self) -> str:
546
- return f"Python {self.version}"
547
-
548
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
549
- return ()
550
-
551
- def get_install_requirement(self) -> Optional[InstallRequirement]:
552
- return None
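Note on the ExtrasCandidate design above: the dependency on the exact base is what turns a version mismatch between extras nodes into a visible conflict. A minimal sketch of the idea, with made-up names rather than pip's internal representation:

    # Each extras node pins its own base version (note 2b in the docstring).
    deps = {
        "foo[extra1] 1.0": ["foo==1.0"],
        "foo[extra2] 2.0": ["foo==2.0"],
    }
    # Any resolution containing both nodes must satisfy foo==1.0 and foo==2.0
    # simultaneously, so the resolver reports a conflict instead of quietly
    # installing two different versions of foo.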
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py DELETED
@@ -1,245 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
- import torch
- from torch import nn
- from torch.autograd.function import Function
-
- from detectron2.layers import ShapeSpec
- from detectron2.structures import Boxes, Instances, pairwise_iou
- from detectron2.utils.events import get_event_storage
-
- from ..box_regression import Box2BoxTransform
- from ..matcher import Matcher
- from ..poolers import ROIPooler
- from .box_head import build_box_head
- from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference
- from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
-
-
- class _ScaleGradient(Function):
-     @staticmethod
-     def forward(ctx, input, scale):
-         ctx.scale = scale
-         return input
-
-     @staticmethod
-     def backward(ctx, grad_output):
-         return grad_output * ctx.scale, None
-
-
- @ROI_HEADS_REGISTRY.register()
- class CascadeROIHeads(StandardROIHeads):
-     def _init_box_head(self, cfg, input_shape):
-         # fmt: off
-         pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
-         pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features)
-         sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
-         pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
-         cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
-         cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS
-         self.num_cascade_stages = len(cascade_ious)
-         assert len(cascade_bbox_reg_weights) == self.num_cascade_stages
-         assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \
-             "CascadeROIHeads only supports class-agnostic regression now!"
-         assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
-         # fmt: on
-
-         in_channels = [input_shape[f].channels for f in self.in_features]
-         # Check all channel counts are equal
-         assert len(set(in_channels)) == 1, in_channels
-         in_channels = in_channels[0]
-
-         self.box_pooler = ROIPooler(
-             output_size=pooler_resolution,
-             scales=pooler_scales,
-             sampling_ratio=sampling_ratio,
-             pooler_type=pooler_type,
-         )
-         pooled_shape = ShapeSpec(
-             channels=in_channels, width=pooler_resolution, height=pooler_resolution
-         )
-
-         self.box_head = nn.ModuleList()
-         self.box_predictor = nn.ModuleList()
-         self.box2box_transform = []
-         self.proposal_matchers = []
-         for k in range(self.num_cascade_stages):
-             box_head = build_box_head(cfg, pooled_shape)
-             self.box_head.append(box_head)
-             self.box_predictor.append(
-                 FastRCNNOutputLayers(
-                     cfg,
-                     box_head.output_shape,
-                     box2box_transform=Box2BoxTransform(weights=cascade_bbox_reg_weights[k]),
-                 )
-             )
-
-             if k == 0:
-                 # The first matching is done by the matcher of ROIHeads (self.proposal_matcher).
-                 self.proposal_matchers.append(None)
-             else:
-                 self.proposal_matchers.append(
-                     Matcher([cascade_ious[k]], [0, 1], allow_low_quality_matches=False)
-                 )
-
-     def forward(self, images, features, proposals, targets=None):
-         del images
-         if self.training:
-             proposals = self.label_and_sample_proposals(proposals, targets)
-
-         if self.training:
-             # Need targets to box head
-             losses = self._forward_box(features, proposals, targets)
-             losses.update(self._forward_mask(features, proposals))
-             losses.update(self._forward_keypoint(features, proposals))
-             return proposals, losses
-         else:
-             pred_instances = self._forward_box(features, proposals)
-             pred_instances = self.forward_with_given_boxes(features, pred_instances)
-             return pred_instances, {}
-
-     def _forward_box(self, features, proposals, targets=None):
-         """
-         Args:
-             features, targets: same as in :meth:`ROIHeads.forward`.
-             proposals (list[Instances]): the per-image object proposals with
-                 their matching ground truth.
-                 Each has fields "proposal_boxes", and "objectness_logits",
-                 "gt_classes", "gt_boxes".
-         """
-         features = [features[f] for f in self.in_features]
-         head_outputs = []  # (predictor, predictions, proposals)
-         prev_pred_boxes = None
-         image_sizes = [x.image_size for x in proposals]
-         for k in range(self.num_cascade_stages):
-             if k > 0:
-                 # The output boxes of the previous stage are used to create the input
-                 # proposals of the next stage.
-                 proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes)
-                 if self.training:
-                     proposals = self._match_and_label_boxes(proposals, k, targets)
-             predictions = self._run_stage(features, proposals, k)
-             prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals)
-             head_outputs.append((self.box_predictor[k], predictions, proposals))
-
-         if self.training:
-             losses = {}
-             storage = get_event_storage()
-             for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
-                 with storage.name_scope("stage{}".format(stage)):
-                     stage_losses = predictor.losses(predictions, proposals)
-                 losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()})
-             return losses
-         else:
-             # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1)
-             scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs]
-
-             # Average the scores across heads
-             scores = [
-                 sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
-                 for scores_per_image in zip(*scores_per_stage)
-             ]
-             # Use the boxes of the last head
-             predictor, predictions, proposals = head_outputs[-1]
-             boxes = predictor.predict_boxes(predictions, proposals)
-             pred_instances, _ = fast_rcnn_inference(
-                 boxes,
-                 scores,
-                 image_sizes,
-                 predictor.test_score_thresh,
-                 predictor.test_nms_thresh,
-                 predictor.test_topk_per_image,
-             )
-             return pred_instances
-
-     @torch.no_grad()
-     def _match_and_label_boxes(self, proposals, stage, targets):
-         """
-         Match proposals with ground truth using the matcher at the given stage.
-         Label the proposals as foreground or background based on the match.
-
-         Args:
-             proposals (list[Instances]): One Instances for each image, with
-                 the field "proposal_boxes".
-             stage (int): the current stage
-             targets (list[Instances]): the ground truth instances
-
-         Returns:
-             list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes"
-         """
-         num_fg_samples, num_bg_samples = [], []
-         for proposals_per_image, targets_per_image in zip(proposals, targets):
-             match_quality_matrix = pairwise_iou(
-                 targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
-             )
-             # proposal_labels are 0 or 1
-             matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix)
-             if len(targets_per_image) > 0:
-                 gt_classes = targets_per_image.gt_classes[matched_idxs]
-                 # Label unmatched proposals (0 label from matcher) as background (label=num_classes)
-                 gt_classes[proposal_labels == 0] = self.num_classes
-                 gt_boxes = targets_per_image.gt_boxes[matched_idxs]
-             else:
-                 gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
-                 gt_boxes = Boxes(
-                     targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4))
-                 )
-             proposals_per_image.gt_classes = gt_classes
-             proposals_per_image.gt_boxes = gt_boxes
-
-             num_fg_samples.append((proposal_labels == 1).sum().item())
-             num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1])
-
-         # Log the number of fg/bg samples in each stage
-         storage = get_event_storage()
-         storage.put_scalar(
-             "stage{}/roi_head/num_fg_samples".format(stage),
-             sum(num_fg_samples) / len(num_fg_samples),
-         )
-         storage.put_scalar(
-             "stage{}/roi_head/num_bg_samples".format(stage),
-             sum(num_bg_samples) / len(num_bg_samples),
-         )
-         return proposals
-
-     def _run_stage(self, features, proposals, stage):
-         """
-         Args:
-             features (list[Tensor]): #lvl input features to ROIHeads
-             proposals (list[Instances]): #image Instances, with the field "proposal_boxes"
-             stage (int): the current stage
-
-         Returns:
-             Same output as `FastRCNNOutputLayers.forward()`.
-         """
-         box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
-         # The original implementation averages the losses among heads,
-         # but scales up the parameter gradients of the heads.
-         # This is equivalent to adding the losses among heads,
-         # but scaling down the gradients on features.
-         box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages)
-         box_features = self.box_head[stage](box_features)
-         return self.box_predictor[stage](box_features)
-
-     def _create_proposals_from_boxes(self, boxes, image_sizes):
-         """
-         Args:
-             boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4
-             image_sizes (list[tuple]): list of image shapes in (h, w)
-
-         Returns:
-             list[Instances]: per-image proposals with the given boxes.
-         """
-         # Just like RPN, the proposals should not have gradients
-         boxes = [Boxes(b.detach()) for b in boxes]
-         proposals = []
-         for boxes_per_image, image_size in zip(boxes, image_sizes):
-             boxes_per_image.clip(image_size)
-             if self.training:
-                 # do not filter empty boxes at inference time,
-                 # because the scores from each stage need to be aligned and added later
-                 boxes_per_image = boxes_per_image[boxes_per_image.nonempty()]
-             prop = Instances(image_size)
-             prop.proposal_boxes = boxes_per_image
-             proposals.append(prop)
-         return proposals
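A quick self-contained check of the gradient trick described in _run_stage above, assuming PyTorch and the _ScaleGradient class from this file: scaling the shared features' gradient by 1/num_stages while summing the per-stage losses reproduces the gradient of the averaged losses.

    import torch

    K = 3  # stand-in for self.num_cascade_stages
    x = torch.ones(4, requires_grad=True)
    shared = _ScaleGradient.apply(x, 1.0 / K)
    # Three toy "head losses" that share the same features.
    loss = sum((shared * w).sum() for w in (1.0, 2.0, 3.0))
    loss.backward()
    print(x.grad)  # tensor([2., 2., 2., 2.]) == (1 + 2 + 3) / 3 per element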
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/train_net.py DELETED
@@ -1,110 +0,0 @@
- #!/usr/bin/env python3
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
- """
- PointRend Training Script.
-
- This script is a simplified version of the training script in detectron2/tools.
- """
-
- import os
- import torch
-
- import detectron2.utils.comm as comm
- from detectron2.checkpoint import DetectionCheckpointer
- from detectron2.config import get_cfg
- from detectron2.data import MetadataCatalog
- from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch
- from detectron2.evaluation import (
-     CityscapesEvaluator,
-     COCOEvaluator,
-     DatasetEvaluators,
-     LVISEvaluator,
-     verify_results,
- )
-
- from point_rend import add_pointrend_config
-
-
- class Trainer(DefaultTrainer):
-     """
-     We use "DefaultTrainer", which contains a number of pre-defined behaviors
-     for the standard training workflow. They may not work for you, especially
-     if you are working on a new research project. In that case you can use the
-     cleaner "SimpleTrainer", or write your own training loop.
-     """
-
-     @classmethod
-     def build_evaluator(cls, cfg, dataset_name, output_folder=None):
-         """
-         Create evaluator(s) for a given dataset.
-         This uses the special metadata "evaluator_type" associated with each builtin dataset.
-         For your own dataset, you can simply create an evaluator manually in your
-         script and do not have to worry about the hacky if-else logic here.
-         """
-         if output_folder is None:
-             output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
-         evaluator_list = []
-         evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
-         if evaluator_type == "lvis":
-             return LVISEvaluator(dataset_name, cfg, True, output_folder)
-         if evaluator_type == "coco":
-             return COCOEvaluator(dataset_name, cfg, True, output_folder)
-         if evaluator_type == "cityscapes":
-             assert (
-                 torch.cuda.device_count() >= comm.get_rank()
-             ), "CityscapesEvaluator currently does not work with multiple machines."
-             return CityscapesEvaluator(dataset_name)
-         if len(evaluator_list) == 0:
-             raise NotImplementedError(
-                 "no Evaluator for the dataset {} with the type {}".format(
-                     dataset_name, evaluator_type
-                 )
-             )
-         if len(evaluator_list) == 1:
-             return evaluator_list[0]
-         return DatasetEvaluators(evaluator_list)
-
-
- def setup(args):
-     """
-     Create configs and perform basic setups.
-     """
-     cfg = get_cfg()
-     add_pointrend_config(cfg)
-     cfg.merge_from_file(args.config_file)
-     cfg.merge_from_list(args.opts)
-     cfg.freeze()
-     default_setup(cfg, args)
-     return cfg
-
-
- def main(args):
-     cfg = setup(args)
-
-     if args.eval_only:
-         model = Trainer.build_model(cfg)
-         DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
-             cfg.MODEL.WEIGHTS, resume=args.resume
-         )
-         res = Trainer.test(cfg, model)
-         if comm.is_main_process():
-             verify_results(cfg, res)
-         return res
-
-     trainer = Trainer(cfg)
-     trainer.resume_or_load(resume=args.resume)
-     return trainer.train()
-
-
- if __name__ == "__main__":
-     args = default_argument_parser().parse_args()
-     print("Command Line Args:", args)
-     launch(
-         main,
-         args.num_gpus,
-         num_machines=args.num_machines,
-         machine_rank=args.machine_rank,
-         dist_url=args.dist_url,
-         args=(args,),
-     )
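For reference, a hedged sketch of driving this script programmatically rather than from the shell; the config and weights paths below are placeholders, not files shipped with this repository.

    from detectron2.engine import default_argument_parser, launch

    args = default_argument_parser().parse_args(
        [
            "--config-file", "configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml",
            "--eval-only", "--num-gpus", "1",
            "MODEL.WEIGHTS", "model_final.pkl",  # trailing KEY VALUE pairs end up in args.opts
        ]
    )
    launch(main, args.num_gpus, num_machines=args.num_machines,
           machine_rank=args.machine_rank, dist_url=args.dist_url, args=(args,))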
spaces/CVPR/LIVE/pydiffvg/shape.py DELETED
@@ -1,172 +0,0 @@
- import torch
- import svgpathtools
- import math
-
- class Circle:
-     def __init__(self, radius, center, stroke_width = torch.tensor(1.0), id = ''):
-         self.radius = radius
-         self.center = center
-         self.stroke_width = stroke_width
-         self.id = id
-
- class Ellipse:
-     def __init__(self, radius, center, stroke_width = torch.tensor(1.0), id = ''):
-         self.radius = radius
-         self.center = center
-         self.stroke_width = stroke_width
-         self.id = id
-
- class Path:
-     def __init__(self,
-                  num_control_points,
-                  points,
-                  is_closed,
-                  stroke_width = torch.tensor(1.0),
-                  id = '',
-                  use_distance_approx = False):
-         self.num_control_points = num_control_points
-         self.points = points
-         self.is_closed = is_closed
-         self.stroke_width = stroke_width
-         self.id = id
-         self.use_distance_approx = use_distance_approx
-
- class Polygon:
-     def __init__(self, points, is_closed, stroke_width = torch.tensor(1.0), id = ''):
-         self.points = points
-         self.is_closed = is_closed
-         self.stroke_width = stroke_width
-         self.id = id
-
- class Rect:
-     def __init__(self, p_min, p_max, stroke_width = torch.tensor(1.0), id = ''):
-         self.p_min = p_min
-         self.p_max = p_max
-         self.stroke_width = stroke_width
-         self.id = id
-
- class ShapeGroup:
-     def __init__(self,
-                  shape_ids,
-                  fill_color,
-                  use_even_odd_rule = True,
-                  stroke_color = None,
-                  shape_to_canvas = torch.eye(3),
-                  id = ''):
-         self.shape_ids = shape_ids
-         self.fill_color = fill_color
-         self.use_even_odd_rule = use_even_odd_rule
-         self.stroke_color = stroke_color
-         self.shape_to_canvas = shape_to_canvas
-         self.id = id
-
- def from_svg_path(path_str, shape_to_canvas = torch.eye(3), force_close = False):
-     path = svgpathtools.parse_path(path_str)
-     if len(path) == 0:
-         return []
-     ret_paths = []
-     subpaths = path.continuous_subpaths()
-     for subpath in subpaths:
-         if subpath.isclosed():
-             if len(subpath) > 1 and isinstance(subpath[-1], svgpathtools.Line) and subpath[-1].length() < 1e-5:
-                 subpath.remove(subpath[-1])
-                 subpath[-1].end = subpath[0].start # Force closing the path
-                 subpath.end = subpath[-1].end
-                 assert(subpath.isclosed())
-         else:
-             beg = subpath[0].start
-             end = subpath[-1].end
-             if abs(end - beg) < 1e-5:
-                 subpath[-1].end = beg # Force closing the path
-                 subpath.end = subpath[-1].end
-                 assert(subpath.isclosed())
-             elif force_close:
-                 subpath.append(svgpathtools.Line(end, beg))
-                 subpath.end = subpath[-1].end
-                 assert(subpath.isclosed())
-
-         num_control_points = []
-         points = []
-
-         for i, e in enumerate(subpath):
-             if i == 0:
-                 points.append((e.start.real, e.start.imag))
-             else:
-                 # Must begin from the end of previous segment
-                 assert(e.start.real == points[-1][0])
-                 assert(e.start.imag == points[-1][1])
-             if isinstance(e, svgpathtools.Line):
-                 num_control_points.append(0)
-             elif isinstance(e, svgpathtools.QuadraticBezier):
-                 num_control_points.append(1)
-                 points.append((e.control.real, e.control.imag))
-             elif isinstance(e, svgpathtools.CubicBezier):
-                 num_control_points.append(2)
-                 points.append((e.control1.real, e.control1.imag))
-                 points.append((e.control2.real, e.control2.imag))
-             elif isinstance(e, svgpathtools.Arc):
-                 # Convert to Cubic curves
-                 # https://www.joecridge.me/content/pdf/bezier-arcs.pdf
-                 start = e.theta * math.pi / 180.0
-                 stop = (e.theta + e.delta) * math.pi / 180.0
-
-                 sign = 1.0
-                 if stop < start:
-                     sign = -1.0
-
-                 epsilon = 0.00001
-                 debug = abs(e.delta) >= 90.0
-                 while (sign * (stop - start) > epsilon):
-                     arc_to_draw = stop - start
-                     if arc_to_draw > 0.0:
-                         arc_to_draw = min(arc_to_draw, 0.5 * math.pi)
-                     else:
-                         arc_to_draw = max(arc_to_draw, -0.5 * math.pi)
-                     alpha = arc_to_draw / 2.0
-                     cos_alpha = math.cos(alpha)
-                     sin_alpha = math.sin(alpha)
-                     cot_alpha = 1.0 / math.tan(alpha)
-                     phi = start + alpha
-                     cos_phi = math.cos(phi)
-                     sin_phi = math.sin(phi)
-                     lambda_ = (4.0 - cos_alpha) / 3.0
-                     mu = sin_alpha + (cos_alpha - lambda_) * cot_alpha
-                     last = sign * (stop - (start + arc_to_draw)) <= epsilon
-                     num_control_points.append(2)
-                     rx = e.radius.real
-                     ry = e.radius.imag
-                     cx = e.center.real
-                     cy = e.center.imag
-                     rot = e.phi * math.pi / 180.0
-                     cos_rot = math.cos(rot)
-                     sin_rot = math.sin(rot)
-                     x = lambda_ * cos_phi + mu * sin_phi
-                     y = lambda_ * sin_phi - mu * cos_phi
-                     xx = x * cos_rot - y * sin_rot
-                     yy = x * sin_rot + y * cos_rot
-                     points.append((cx + rx * xx, cy + ry * yy))
-                     x = lambda_ * cos_phi - mu * sin_phi
-                     y = lambda_ * sin_phi + mu * cos_phi
-                     xx = x * cos_rot - y * sin_rot
-                     yy = x * sin_rot + y * cos_rot
-                     points.append((cx + rx * xx, cy + ry * yy))
-                     if not last:
-                         points.append((cx + rx * math.cos(rot + start + arc_to_draw),
-                                        cy + ry * math.sin(rot + start + arc_to_draw)))
-                     start += arc_to_draw
-                     first = False
-             if i != len(subpath) - 1:
-                 points.append((e.end.real, e.end.imag))
-             else:
-                 if subpath.isclosed():
-                     # Must end at the beginning of first segment
-                     assert(e.end.real == points[0][0])
-                     assert(e.end.imag == points[0][1])
-                 else:
-                     points.append((e.end.real, e.end.imag))
-         points = torch.tensor(points)
-         points = torch.cat((points, torch.ones([points.shape[0], 1])), dim = 1) @ torch.transpose(shape_to_canvas, 0, 1)
-         points = points / points[:, 2:3]
-         points = points[:, :2].contiguous()
-         ret_paths.append(Path(torch.tensor(num_control_points), points, subpath.isclosed()))
-     return ret_paths
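A numeric sanity check of the arc-to-cubic formulas above: for a 90-degree arc of the unit circle, alpha = pi/4, and the first control point must land at the classic circle constant kappa = 4 * (sqrt(2) - 1) / 3 ~ 0.5523.

    import math

    alpha = math.pi / 4                       # half of a 90-degree arc
    lambda_ = (4.0 - math.cos(alpha)) / 3.0
    mu = math.sin(alpha) + (math.cos(alpha) - lambda_) / math.tan(alpha)
    phi = math.pi / 4                         # arc from 0 to 90 degrees: mid-angle is 45
    cx = lambda_ * math.cos(phi) + mu * math.sin(phi)
    cy = lambda_ * math.sin(phi) - mu * math.cos(phi)
    print(round(cx, 4), round(cy, 4))              # 1.0 0.5523, i.e. control point (1, kappa)
    print(round(4.0 * (math.sqrt(2) - 1) / 3, 4))  # 0.5523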
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/logical.h DELETED
@@ -1,22 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // this system has no special version of this algorithm
-
spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/fcn_occmask_head.py DELETED
@@ -1,570 +0,0 @@
- import numpy as np
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
- from mmcv.cnn import Conv2d, ConvModule, build_upsample_layer
- from mmcv.ops.carafe import CARAFEPack
- from mmcv.runner import auto_fp16, force_fp32
- from torch.nn.modules.utils import _pair
-
- from mmdet.core import mask_target
- from mmdet.models.builder import HEADS, build_loss
-
- BYTES_PER_FLOAT = 4
- # TODO: This memory limit may be too much or too little. It would be better to
- # determine it based on available resources.
- GPU_MEM_LIMIT = 1024**3  # 1 GB memory limit
-
-
- @HEADS.register_module()
- class FCNOccMaskHead(nn.Module):
-
-     def __init__(self,
-                  num_convs=4,
-                  roi_feat_size=14,
-                  in_channels=256,
-                  conv_kernel_size=3,
-                  conv_out_channels=256,
-                  num_classes=80,
-                  class_agnostic=False,
-                  upsample_cfg=dict(type='deconv', scale_factor=2),
-                  conv_cfg=None,
-                  norm_cfg=None,
-                  loss_mask=dict(
-                      type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)):
-         super(FCNOccMaskHead, self).__init__()
-         self.upsample_cfg = upsample_cfg.copy()
-         if self.upsample_cfg['type'] not in [
-                 None, 'deconv', 'nearest', 'bilinear', 'carafe'
-         ]:
-             raise ValueError(
-                 f'Invalid upsample method {self.upsample_cfg["type"]}, '
-                 'accepted methods are "deconv", "nearest", "bilinear", '
-                 '"carafe"')
-         self.num_convs = num_convs
-         # WARN: roi_feat_size is reserved and not used
-         self.roi_feat_size = _pair(roi_feat_size)
-         self.in_channels = in_channels
-         self.conv_kernel_size = conv_kernel_size
-         self.conv_out_channels = conv_out_channels
-         self.upsample_method = self.upsample_cfg.get('type')
-         self.scale_factor = self.upsample_cfg.pop('scale_factor', None)
-         self.num_classes = num_classes
-         self.class_agnostic = class_agnostic
-         self.conv_cfg = conv_cfg
-         self.norm_cfg = norm_cfg
-         self.fp16_enabled = False
-         self.loss_mask = build_loss(loss_mask)
-
-         self.convs = nn.ModuleList()
-         for i in range(self.num_convs):
-             if i == 0:
-                 in_channels_change = in_channels * 2
-             else:
-                 in_channels_change = in_channels
-
-             in_channels = (
-                 self.in_channels if i == 0 else self.conv_out_channels)
-             padding = (self.conv_kernel_size - 1) // 2
-             self.convs.append(
-                 ConvModule(
-                     in_channels_change,
-                     self.conv_out_channels,
-                     self.conv_kernel_size,
-                     padding=padding,
-                     conv_cfg=conv_cfg,
-                     norm_cfg=norm_cfg))
-
-         self.convs_occluder = nn.ModuleList()
-         for i in range(self.num_convs):
-             in_channels = (
-                 self.in_channels if i == 0 else self.conv_out_channels)
-             padding = (self.conv_kernel_size - 1) // 2
-             self.convs_occluder.append(
-                 ConvModule(
-                     in_channels,
-                     self.conv_out_channels,
-                     self.conv_kernel_size,
-                     padding=padding,
-                     conv_cfg=conv_cfg,
-                     norm_cfg=norm_cfg))
-
-         upsample_in_channels = (
-             self.conv_out_channels if self.num_convs > 0 else in_channels)
-         upsample_cfg_ = self.upsample_cfg.copy()
-         if self.upsample_method is None:
-             self.upsample = None
-         elif self.upsample_method == 'deconv':
-             upsample_cfg_.update(
-                 in_channels=upsample_in_channels,
-                 out_channels=self.conv_out_channels,
-                 kernel_size=self.scale_factor,
-                 stride=self.scale_factor)
-             self.upsample = build_upsample_layer(upsample_cfg_)
-         elif self.upsample_method == 'carafe':
-             upsample_cfg_.update(
-                 channels=upsample_in_channels, scale_factor=self.scale_factor)
-             self.upsample = build_upsample_layer(upsample_cfg_)
-         else:
-             # suppress warnings
-             align_corners = (None
-                              if self.upsample_method == 'nearest' else False)
-             upsample_cfg_.update(
-                 scale_factor=self.scale_factor,
-                 mode=self.upsample_method,
-                 align_corners=align_corners)
-             self.upsample = build_upsample_layer(upsample_cfg_)
-
-         out_channels = 1 if self.class_agnostic else self.num_classes
-         logits_in_channel = (
-             self.conv_out_channels
-             if self.upsample_method == 'deconv' else upsample_in_channels)
-         self.conv_logits = Conv2d(logits_in_channel, out_channels, 1)
-         self.conv_logits_occluder = Conv2d(logits_in_channel, out_channels, 1)
-         self.relu = nn.ReLU(inplace=True)
-         self.debug_imgs = None
-
-     def init_weights(self):
-         for m in [self.upsample, self.conv_logits]:
-             if m is None:
-                 continue
-             elif isinstance(m, CARAFEPack):
-                 m.init_weights()
-             else:
-                 nn.init.kaiming_normal_(
-                     m.weight, mode='fan_out', nonlinearity='relu')
-                 nn.init.constant_(m.bias, 0)
-
-     @auto_fp16()
-     def forward(self, x):
-         y = x.clone()
-         for conv in self.convs_occluder:
-             y = conv(y)
-         x = torch.cat((x, y), 1)
-         for conv in self.convs:
-             x = conv(x)
-         if self.upsample is not None:
-             x = self.upsample(x)
-             if self.upsample_method == 'deconv':
-                 x = self.relu(x)
-         if self.upsample is not None:
-             y = self.upsample(y)
-             if self.upsample_method == 'deconv':
-                 y = self.relu(y)
-         mask_pred = self.conv_logits(x)
-         mask_occluder_pred = self.conv_logits_occluder(y)
-         return mask_pred, mask_occluder_pred
-
-     def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg):
-         pos_proposals = [res.pos_bboxes for res in sampling_results]
-         pos_assigned_gt_inds = [
-             res.pos_assigned_gt_inds for res in sampling_results
-         ]
-         mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds,
-                                    gt_masks, rcnn_train_cfg)
-         return mask_targets
-
-     @force_fp32(apply_to=('mask_pred', ))
-     def loss(self, mask_pred, mask_targets, labels):
-         """
-         Example:
-             >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import *  # NOQA
-             >>> N = 7  # N = number of extracted ROIs
-             >>> C, H, W = 11, 32, 32
-             >>> # Create example instance of FCN Mask Head.
-             >>> # There are lots of variations depending on the configuration
-             >>> self = FCNMaskHead(num_classes=C, num_convs=1)
-             >>> inputs = torch.rand(N, self.in_channels, H, W)
-             >>> mask_pred = self.forward(inputs)
-             >>> sf = self.scale_factor
-             >>> labels = torch.randint(0, C, size=(N,))
-             >>> # With the default properties the mask targets should indicate
-             >>> # a (potentially soft) single-class label
-             >>> mask_targets = torch.rand(N, H * sf, W * sf)
-             >>> loss = self.loss(mask_pred, mask_targets, labels)
-             >>> print('loss = {!r}'.format(loss))
-         """
-         mask_full_pred, mask_occ_pred = mask_pred
-         loss = dict()
-         if mask_full_pred.size(0) == 0:
-             loss_mask_vis = mask_full_pred.sum()
-         else:
-             if self.class_agnostic:
-                 loss_mask = self.loss_mask(mask_full_pred, mask_targets,
-                                            torch.zeros_like(labels))
-             else:
-                 #print(mask_pred[:,0:1].shape, mask_targets[0::2].shape, labels.shape)
-                 loss_mask_vis = self.loss_mask(mask_full_pred[:, 0:1], mask_targets[0::2], labels)
-         loss['loss_mask_vis'] = loss_mask_vis
-
-         if mask_occ_pred.size(0) == 0:
-             loss_mask = mask_occ_pred.sum()
-         else:
-             if self.class_agnostic:
-                 loss_mask = self.loss_mask(mask_occ_pred, mask_targets,
-                                            torch.zeros_like(labels))
-             else:
-                 loss_mask_occ = self.loss_mask(mask_occ_pred[:, 0:1], mask_targets[1::2], labels)
-         loss['loss_mask_occ'] = loss_mask_occ
-         return loss
-
-     def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg,
-                       ori_shape, scale_factor, rescale):
-         """Get segmentation masks from mask_pred and bboxes.
-         Args:
-             mask_pred (Tensor or ndarray): shape (n, #class, h, w).
-                 For single-scale testing, mask_pred is the direct output of
-                 model, whose type is Tensor, while for multi-scale testing,
-                 it will be converted to numpy array outside of this method.
-             det_bboxes (Tensor): shape (n, 4/5)
-             det_labels (Tensor): shape (n, )
-             rcnn_test_cfg (dict): rcnn testing config
-             ori_shape (Tuple): original image height and width, shape (2,)
-             scale_factor(float | Tensor): If ``rescale is True``, box
-                 coordinates are divided by this scale factor to fit
-                 ``ori_shape``.
-             rescale (bool): If True, the resulting masks will be rescaled to
-                 ``ori_shape``.
-         Returns:
-             list[list]: encoded masks. The c-th item in the outer list
-                 corresponds to the c-th class. Given the c-th outer list, the
-                 i-th item in that inner list is the mask for the i-th box with
-                 class label c.
-         Example:
-             >>> import mmcv
-             >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import *  # NOQA
-             >>> N = 7  # N = number of extracted ROIs
-             >>> C, H, W = 11, 32, 32
-             >>> # Create example instance of FCN Mask Head.
-             >>> self = FCNMaskHead(num_classes=C, num_convs=0)
-             >>> inputs = torch.rand(N, self.in_channels, H, W)
-             >>> mask_pred = self.forward(inputs)
-             >>> # Each input is associated with some bounding box
-             >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N)
-             >>> det_labels = torch.randint(0, C, size=(N,))
-             >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, })
-             >>> ori_shape = (H * 4, W * 4)
-             >>> scale_factor = torch.FloatTensor((1, 1))
-             >>> rescale = False
-             >>> # Encoded masks are a list for each category.
-             >>> encoded_masks = self.get_seg_masks(
-             >>>     mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape,
-             >>>     scale_factor, rescale
-             >>> )
-             >>> assert len(encoded_masks) == C
-             >>> assert sum(list(map(len, encoded_masks))) == N
-         """
-         if isinstance(mask_pred, torch.Tensor):
-             mask_pred = mask_pred.sigmoid()
-         else:
-             mask_pred = det_bboxes.new_tensor(mask_pred)
-
-         device = mask_pred.device
-         cls_segms = [[] for _ in range(self.num_classes)
-                      ]  # BG is not included in num_classes
-         bboxes = det_bboxes[:, :4]
-         labels = det_labels
-
-         if rescale:
-             img_h, img_w = ori_shape[:2]
-         else:
-             if isinstance(scale_factor, float):
-                 img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32)
-                 img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32)
-             else:
-                 w_scale, h_scale = scale_factor[0], scale_factor[1]
-                 img_h = np.round(ori_shape[0] * h_scale.item()).astype(
-                     np.int32)
-                 img_w = np.round(ori_shape[1] * w_scale.item()).astype(
-                     np.int32)
-             scale_factor = 1.0
-
-         if not isinstance(scale_factor, (float, torch.Tensor)):
-             scale_factor = bboxes.new_tensor(scale_factor)
-         bboxes = bboxes / scale_factor
-
-         if torch.onnx.is_in_onnx_export():
-             # TODO: Remove after F.grid_sample is supported.
-             from torchvision.models.detection.roi_heads \
-                 import paste_masks_in_image
-             masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2])
-             thr = rcnn_test_cfg.get('mask_thr_binary', 0)
-             if thr > 0:
-                 masks = masks >= thr
-             return masks
-
-         N = len(mask_pred)
-         # The actual implementation splits the input into chunks,
-         # and pastes them chunk by chunk.
-         if device.type == 'cpu':
-             # CPU is most efficient when they are pasted one by one with
-             # skip_empty=True, so that it performs a minimal number of
-             # operations.
-             num_chunks = N
-         else:
-             # GPU benefits from parallelism for larger chunks,
-             # but may have memory issues
-             num_chunks = int(
-                 np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
-             assert (num_chunks <=
-                     N), 'Default GPU_MEM_LIMIT is too small; try increasing it'
-         chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
-
-         threshold = rcnn_test_cfg.mask_thr_binary
-         im_mask = torch.zeros(
-             N,
-             img_h,
-             img_w,
-             device=device,
-             dtype=torch.bool if threshold >= 0 else torch.uint8)
-
-         if not self.class_agnostic:
-             mask_pred = mask_pred[range(N), labels][:, None]
-
-         for inds in chunks:
-             masks_chunk, spatial_inds = _do_paste_mask(
-                 mask_pred[inds],
-                 bboxes[inds],
-                 img_h,
-                 img_w,
-                 skip_empty=device.type == 'cpu')
-
-             if threshold >= 0:
-                 masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
-             else:
-                 # for visualization and debugging
-                 masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
-             im_mask[(inds, ) + spatial_inds] = masks_chunk
-
-         for i in range(N):
-             cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy())
-         return cls_segms
-
-     def get_seg_masks1(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg,
-                        ori_shape, scale_factor, rescale):
-         """Get segmentation masks from mask_pred and bboxes.
-
-         Args:
-             mask_pred (Tensor or ndarray): shape (n, #class, h, w).
-                 For single-scale testing, mask_pred is the direct output of
-                 model, whose type is Tensor, while for multi-scale testing,
-                 it will be converted to numpy array outside of this method.
-             det_bboxes (Tensor): shape (n, 4/5)
-             det_labels (Tensor): shape (n, )
-             rcnn_test_cfg (dict): rcnn testing config
-             ori_shape (Tuple): original image height and width, shape (2,)
-             scale_factor(float | Tensor): If ``rescale is True``, box
-                 coordinates are divided by this scale factor to fit
-                 ``ori_shape``.
-             rescale (bool): If True, the resulting masks will be rescaled to
-                 ``ori_shape``.
-
-         Returns:
-             list[list]: encoded masks. The c-th item in the outer list
-                 corresponds to the c-th class. Given the c-th outer list, the
-                 i-th item in that inner list is the mask for the i-th box with
-                 class label c.
-
-         Example:
-             >>> import mmcv
-             >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import *  # NOQA
-             >>> N = 7  # N = number of extracted ROIs
-             >>> C, H, W = 11, 32, 32
-             >>> # Create example instance of FCN Mask Head.
-             >>> self = FCNMaskHead(num_classes=C, num_convs=0)
-             >>> inputs = torch.rand(N, self.in_channels, H, W)
-             >>> mask_pred = self.forward(inputs)
-             >>> # Each input is associated with some bounding box
-             >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N)
-             >>> det_labels = torch.randint(0, C, size=(N,))
-             >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, })
-             >>> ori_shape = (H * 4, W * 4)
-             >>> scale_factor = torch.FloatTensor((1, 1))
-             >>> rescale = False
-             >>> # Encoded masks are a list for each category.
-             >>> encoded_masks = self.get_seg_masks(
-             >>>     mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape,
-             >>>     scale_factor, rescale
-             >>> )
-             >>> assert len(encoded_masks) == C
-             >>> assert sum(list(map(len, encoded_masks))) == N
-         """
-         if isinstance(mask_pred, torch.Tensor):
-             mask_pred = mask_pred.sigmoid()
-         else:
-             mask_pred = det_bboxes.new_tensor(mask_pred)
-
-         device = mask_pred.device
-         cls_segms = [[] for _ in range(self.num_classes)
-                      ]  # BG is not included in num_classes
-         bboxes = det_bboxes[:, :4]
-         labels = det_labels
-         labels = torch.cat((labels, torch.tensor(([1]))))
-         bboxes = torch.cat((bboxes, bboxes))
-         #print(labels,torch.tensor(([1])))
-
-         if rescale:
-             img_h, img_w = ori_shape[:2]
-         else:
-             if isinstance(scale_factor, float):
-                 img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32)
-                 img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32)
-             else:
-                 w_scale, h_scale = scale_factor[0], scale_factor[1]
-                 img_h = np.round(ori_shape[0] * h_scale.item()).astype(
-                     np.int32)
-                 img_w = np.round(ori_shape[1] * w_scale.item()).astype(
-                     np.int32)
-             scale_factor = 1.0
-
-         if not isinstance(scale_factor, (float, torch.Tensor)):
-             scale_factor = bboxes.new_tensor(scale_factor)
-         bboxes = bboxes / scale_factor
-
-         if torch.onnx.is_in_onnx_export():
-             # TODO: Remove after F.grid_sample is supported.
-             from torchvision.models.detection.roi_heads \
-                 import paste_masks_in_image
-             masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2])
-             thr = rcnn_test_cfg.get('mask_thr_binary', 0)
-             if thr > 0:
-                 masks = masks >= thr
-             return masks
-
-         N = len(mask_pred)
-         # The actual implementation splits the input into chunks,
-         # and pastes them chunk by chunk.
-         if device.type == 'cpu':
-             # CPU is most efficient when they are pasted one by one with
-             # skip_empty=True, so that it performs a minimal number of
-             # operations.
-             num_chunks = N
-         else:
-             # GPU benefits from parallelism for larger chunks,
-             # but may have memory issues
-             num_chunks = int(
-                 np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
-             assert (num_chunks <=
-                     N), 'Default GPU_MEM_LIMIT is too small; try increasing it'
-         chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
-
-         threshold = rcnn_test_cfg.mask_thr_binary
-         im_mask = torch.zeros(
-             N,
-             img_h,
-             img_w,
-             device=device,
-             dtype=torch.bool if threshold >= 0 else torch.uint8)
-
-         if not self.class_agnostic:
-             mask_pred = mask_pred[range(N), labels][:, None]
-         #print('-----------------------------')
-         #print(chunks)
-
-         for inds in chunks:
-             #print(mask_pred[inds].shape, bboxes[inds].shape)
-             masks_chunk, spatial_inds = _do_paste_mask(
-                 mask_pred[0:1],
-                 bboxes[inds],
-                 img_h,
-                 img_w,
-                 skip_empty=device.type == 'cpu')
-             masks_chunk_occ, spatial_inds_occ = _do_paste_mask(
-                 mask_pred[1:2],
-                 bboxes[inds],
-                 img_h,
-                 img_w,
-                 skip_empty=device.type == 'cpu')
-
-             if threshold >= 0:
-                 masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
-                 masks_chunk_occ = (masks_chunk_occ >= threshold).to(dtype=torch.bool)
-             else:
-                 # for visualization and debugging
-                 masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
-             im_mask[([0], ) + spatial_inds] = masks_chunk
-             im_mask[([1], ) + spatial_inds] = masks_chunk_occ
-
-         for i in range(N):
-             cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy())
-         #print(cls_segms)
-         return cls_segms
-
-
- def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True):
-     """Paste instance masks according to boxes.
-
-     This implementation is modified from
-     https://github.com/facebookresearch/detectron2/
-
-     Args:
-         masks (Tensor): N, 1, H, W
-         boxes (Tensor): N, 4
-         img_h (int): Height of the image to be pasted.
-         img_w (int): Width of the image to be pasted.
-         skip_empty (bool): Only paste masks within the region that
-             tightly bounds all boxes, and return the results in this region only.
-             An important optimization for CPU.
-
-     Returns:
-         tuple: (Tensor, tuple). The first item is the mask tensor, the second one
-             is the slice object.
-             If skip_empty == False, the whole image will be pasted. It will
-             return a mask of shape (N, img_h, img_w) and an empty tuple.
-             If skip_empty == True, only the area around the mask will be pasted.
-             A mask of shape (N, h', w') and its start and end coordinates
-             in the original image will be returned.
-     """
-     # On GPU, paste all masks together (up to chunk size)
-     # by using the entire image to sample the masks
-     # Compared to pasting them one by one,
-     # this has more operations but is faster on COCO-scale datasets.
-     device = masks.device
-     if skip_empty:
-         x0_int, y0_int = torch.clamp(
-             boxes.min(dim=0).values.floor()[:2] - 1,
-             min=0).to(dtype=torch.int32)
-         x1_int = torch.clamp(
-             boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32)
-         y1_int = torch.clamp(
-             boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32)
-     else:
-         x0_int, y0_int = 0, 0
-         x1_int, y1_int = img_w, img_h
-     x0, y0, x1, y1 = torch.split(boxes, 1, dim=1)  # each is Nx1
-
-     N = masks.shape[0]
-
-     img_y = torch.arange(
-         y0_int, y1_int, device=device, dtype=torch.float32) + 0.5
-     img_x = torch.arange(
-         x0_int, x1_int, device=device, dtype=torch.float32) + 0.5
-     img_y = (img_y - y0) / (y1 - y0) * 2 - 1
-     img_x = (img_x - x0) / (x1 - x0) * 2 - 1
-     # img_x, img_y have shapes (N, w), (N, h)
-     if torch.isinf(img_x).any():
-         inds = torch.where(torch.isinf(img_x))
-         img_x[inds] = 0
-     if torch.isinf(img_y).any():
-         inds = torch.where(torch.isinf(img_y))
-         img_y[inds] = 0
-
-     gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1))
-     gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1))
-     grid = torch.stack([gx, gy], dim=3)
-
-     if torch.onnx.is_in_onnx_export():
-         raise RuntimeError(
-             'Exporting F.grid_sample from Pytorch to ONNX is not supported.')
-     img_masks = F.grid_sample(
-         masks.to(dtype=torch.float32), grid, align_corners=False)
-
-     if skip_empty:
-         return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int))
-     else:
-         return img_masks[:, 0], ()
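The chunking arithmetic in get_seg_masks above is easy to sanity-check in isolation; the mask count and image size below are made up for the demo.

    import numpy as np

    BYTES_PER_FLOAT = 4
    GPU_MEM_LIMIT = 1024 ** 3  # 1 GB, as defined at the top of the file
    N, img_h, img_w = 100, 1080, 1920
    num_chunks = int(np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
    print(num_chunks)  # 1 -- 100 float32 masks at 1080x1920 is ~0.77 GB, under the limit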
spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/audio_text.py DELETED
@@ -1,36 +0,0 @@
- import json
-
- import requests
-
- from autogpt.config import Config
- from autogpt.workspace import path_in_workspace
-
- cfg = Config()
-
-
- def read_audio_from_file(audio_path):
-     audio_path = path_in_workspace(audio_path)
-     with open(audio_path, "rb") as audio_file:
-         audio = audio_file.read()
-     return read_audio(audio)
-
-
- def read_audio(audio):
-     model = cfg.huggingface_audio_to_text_model
-     api_url = f"https://api-inference.huggingface.co/models/{model}"
-     api_token = cfg.huggingface_api_token
-     headers = {"Authorization": f"Bearer {api_token}"}
-
-     if api_token is None:
-         raise ValueError(
-             "You need to set your Hugging Face API token in the config file."
-         )
-
-     response = requests.post(
-         api_url,
-         headers=headers,
-         data=audio,
-     )
-
-     text = json.loads(response.content.decode("utf-8"))["text"]
-     return "The audio says: " + text