parquet-converter committed on
Commit 851259e · 1 Parent(s): f5b2c7a

Update parquet files (step 67 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk Concrete Building Structures 2014 Torrents Updates and Patches.md +0 -80
  2. spaces/1gistliPinn/ChatGPT4/Examples/Bonetown V1.1.1 Crack WORK.md +0 -16
  3. spaces/1gistliPinn/ChatGPT4/Examples/En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar.md +0 -13
  4. spaces/1gistliPinn/ChatGPT4/Examples/FULL IStripper V1.2.158 NSFW.md +0 -11
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Angry Birds Rio 2 The Latest Episode of the Popular Franchise - Available for Windows 10.md +0 -84
  6. spaces/1phancelerku/anime-remove-background/Cmo descargar e instalar Among Us APK en tu Android.md +0 -118
  7. spaces/1phancelerku/anime-remove-background/Download Noblemen 1896 APK Data and Lead Your Armies to Victory!.md +0 -116
  8. spaces/1phancelerku/anime-remove-background/Download Real Drag Bike Racing Mod APK and Experience the Ultimate Drag Racing Challenge.md +0 -114
  9. spaces/1phancelerku/anime-remove-background/EvoWars.io A Unique and Exciting IO Game with Dynamic Gameplay.md +0 -113
  10. spaces/1phancelerku/anime-remove-background/Explore the Dungeon and Fight the Boss in Pixel Blade M VIP APK.md +0 -129
  11. spaces/2ndelement/voicevox/docs/VOICEVOX音声合成エンジンとの連携.md +0 -7
  12. spaces/52Hz/SRMNet_AWGN_denoising/model/SRMNet.py +0 -227
  13. spaces/801artistry/RVC801/julius/__init__.py +0 -41
  14. spaces/AI-Naga/Parking_Space_Counter/README.md +0 -12
  15. spaces/AIFILMS/StyleGANEX/models/stylegan2/model.py +0 -768
  16. spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/app.py +0 -345
  17. spaces/AIWaves/Software_Company/src/agents/Prompt/base_Prompts.py +0 -83
  18. spaces/AIZero2HeroBootcamp/VideoToAnimatedGif/app.py +0 -26
  19. spaces/Abduhoshim/speech_emotion_detection/app.py +0 -73
  20. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/dynamic.py +0 -84
  21. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChild.js +0 -16
  22. spaces/AlekseyKorshuk/thin-plate-spline-motion-model/app.py +0 -100
  23. spaces/AlexWortega/MailruQA/app.py +0 -47
  24. spaces/Ali36Ahmad/magic-diffusion/app.py +0 -104
  25. spaces/Alpaca233/SadTalker/src/face3d/data/__init__.py +0 -116
  26. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py +0 -707
  27. spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py +0 -23
  28. spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_2x_coco.py +0 -4
  29. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/download_urls.py +0 -65
  30. spaces/Apex-X/ROOPOK/roop/predictor.py +0 -43
  31. spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/sweep.py +0 -41
  32. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dist.py +0 -1222
  33. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/feature-request.md +0 -31
  34. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/README.md +0 -15
  35. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py +0 -92
  36. spaces/BAAI/vid2vid-zero/style.css +0 -3
  37. spaces/BertChristiaens/youtube-dl/README.md +0 -13
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/progress.py +0 -1702
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/glob.py +0 -167
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/url.py +0 -435
  41. spaces/BigDL/bigdl_nano_demo/README.md +0 -12
  42. spaces/BimboAnon/BimboProxy/Dockerfile +0 -11
  43. spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/equal.h +0 -23
  44. spaces/CVPR/Text2Human/Text2Human/ui/ui.py +0 -313
  45. spaces/CVPR/WALT/mmdet/datasets/pipelines/compose.py +0 -51
  46. spaces/CVPR/lama-example/saicinpainting/training/trainers/__init__.py +0 -30
  47. spaces/CofAI/chat/g4f/Provider/Providers/Lockchat.py +0 -32
  48. spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/README.md +0 -15
  49. spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/README.md +0 -13
  50. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/__init__.py +0 -0
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk Concrete Building Structures 2014 Torrents Updates and Patches.md DELETED
@@ -1,80 +0,0 @@
1
- <br />
2
- <h1>Visual Studio 2012 Professional Product Key Crack: How to Activate Your IDE for Free</h1>
3
- <p>If you are a developer who uses Microsoft's Visual Studio as your integrated development environment (IDE), you might be interested in getting Visual Studio 2012 Professional for free. Visual Studio 2012 Professional is one of the most popular versions of Visual Studio that offers many features and tools for creating, debugging, testing, and deploying various types of applications.</p>
4
- <p>However, to use Visual Studio 2012 Professional, you need to have a valid product key that you can purchase from Microsoft or its authorized resellers. A product key is a unique code that activates your copy of Visual Studio and verifies that you have a legitimate license to use it.</p>
5
- <h2>Visual Studio 2012 Professional Product Key Crack</h2><br /><p><b><b>Download</b> &mdash;&mdash;&mdash;&mdash;&mdash; <a href="https://byltly.com/2uKwaq">https://byltly.com/2uKwaq</a></b></p><br /><br />
6
- <p>But what if you don't want to pay for a product key? Is there a way to get Visual Studio 2012 Professional for free? The answer is yes, but it comes with some risks and challenges. In this article, we will show you how to find and use a product key crack for Visual Studio 2012 Professional that will allow you to activate your IDE without paying anything.</p>
7
- <p>A product key crack is a method of bypassing the activation process of Visual Studio by using a fake or stolen product key that tricks the software into thinking that you have a valid license. There are many websites and tools that claim to provide product key cracks for various versions of Visual Studio, including Visual Studio 2012 Professional.</p>
8
- <p>The benefits of using a product key crack are obvious: you can save money and enjoy all the features and functionalities of Visual Studio without any limitations or restrictions. You can also avoid the hassle of registering your copy of Visual Studio with Microsoft or providing any personal information.</p>
9
- <p>How to activate Visual Studio 2012 Professional without product key<br />
10
- Visual Studio 2012 Professional license key generator download<br />
11
- Free Visual Studio 2012 Professional serial number crack<br />
12
- Visual Studio 2012 Professional activation code hack<br />
13
- Visual Studio 2012 Professional full version crack patch<br />
14
- Visual Studio 2012 Professional registration key crack free download<br />
15
- Visual Studio 2012 Professional crack keygen torrent<br />
16
- Visual Studio 2012 Professional product key finder software<br />
17
- Visual Studio 2012 Professional license key crack online<br />
18
- Visual Studio 2012 Professional serial key crack windows 10<br />
19
- Visual Studio 2012 Professional activation key crack reddit<br />
20
- Visual Studio 2012 Professional crack patch download<br />
21
- Visual Studio 2012 Professional product key generator online<br />
22
- Visual Studio 2012 Professional license key crack 2023<br />
23
- Visual Studio 2012 Professional serial number crack mac<br />
24
- Visual Studio 2012 Professional activation code crack youtube<br />
25
- Visual Studio 2012 Professional full version crack free download<br />
26
- Visual Studio 2012 Professional registration key generator online<br />
27
- Visual Studio 2012 Professional crack keygen download<br />
28
- Visual Studio 2012 Professional product key finder tool<br />
29
- Visual Studio 2012 Professional license key hack online<br />
30
- Visual Studio 2012 Professional serial key generator download<br />
31
- Visual Studio 2012 Professional activation key finder software<br />
32
- Visual Studio 2012 Professional crack patch online<br />
33
- Visual Studio 2012 Professional product key generator free download<br />
34
- Visual Studio 2012 Professional license key finder tool<br />
35
- Visual Studio 2012 Professional serial number hack online<br />
36
- Visual Studio 2012 Professional activation code generator download<br />
37
- Visual Studio 2012 Professional full version crack online<br />
38
- Visual Studio 2012 Professional registration key finder software<br />
39
- Visual Studio 2012 Professional crack keygen online<br />
40
- Visual Studio 2012 Professional product key hack reddit<br />
41
- Visual Studio 2012 Professional license key generator online free<br />
42
- Visual Studio 2012 Professional serial number generator free download<br />
43
- Visual Studio 2012 Professional activation key hack youtube<br />
44
- Visual Studio 2012 Professional crack patch free download<br />
45
- Visual Studio 2012 Professional product key finder online free<br />
46
- Visual Studio 2012 Professional license key hack windows 10<br />
47
- Visual Studio 2012 Professional serial key finder tool<br />
48
- Visual Studio 2012 Professional activation code finder software<br />
49
- Visual Studio 2012 Professional full version crack reddit<br />
50
- Visual Studio 2012 Professional registration key hack online<br />
51
- Visual Studio 2012 Professional crack keygen free download<br />
52
- Visual Studio 2012 Professional product key generator reddit<br />
53
- Visual Studio 2012 Professional license key finder online free download<br />
54
- Visual Studio 2012 Professional serial number hack youtube<br />
55
- Visual Studio 2012 Professional activation code hack reddit<br />
56
- Visual Studio 2012 Professional full version crack youtube<br />
57
- Visual Studio 2012 Professional registration key generator free download</p>
58
- <p>However, using a product key crack also comes with some risks and challenges. First of all, using a product key crack is illegal and unethical, as it violates the terms and conditions of Microsoft's software license agreement. You could face legal consequences or penalties if Microsoft detects that you are using an unauthorized copy of Visual Studio.</p>
59
- <p>Secondly, using a product key crack is unsafe and unreliable, as it could expose your computer to malware, viruses, or spyware that could harm your system or steal your data. You could also encounter errors, bugs, or compatibility issues that could affect your development work or performance. Moreover, you could lose access to updates, patches, or support from Microsoft or its partners that could improve or fix your copy of Visual Studio.</p>
60
- <p>Therefore, before you decide to use a product key crack for Visual Studio 2012 Professional, you should weigh the pros and cons carefully and consider the alternatives. If you still want to proceed with using a product key crack, here are some steps that you need to follow.</p>
61
- <h2>How to Find a Valid Product Key for Visual Studio 2012 Professional</h2>
62
- <p>The first step in using a product key crack for Visual Studio 2012 Professional is finding a valid product key that will work with your copy of Visual Studio. There are two main options that you can try:</p>
63
- <h3>Option 1: Use a product key generator</h3>
64
- <p>A product key generator is a software tool that creates random or algorithm-based product keys for various software products, including Visual Studio. A product key generator works by mimicking the format and structure of an authentic product key and generating multiple combinations of letters and numbers that could potentially activate your copy of Visual Studio.</p>
65
- <p>There are many websites and tools that claim to offer product key generators for Visual Studio 2012 Professional, such as <a href="https://github.com/shivam0612/Product-Keys/blob/main/Visual%20Studio%202012.txt">Product-Keys/Visual Studio</a>, <a href="https://appnee.com/microsoft-visual-studio-all-versions-product-keys-collection/">AppNee Freeware Group</a>, or <a href="https://gist.github.com/ssbalakumar/17e5402c3df6a2e57f8af52844c958e3">All Product Keys</a>. However, not all of them are reliable or trustworthy, as some of them could contain malware, viruses, or spyware that could harm your computer or steal your data.</p>
66
- <p>Therefore, before you download or use any product key generator for Visual Studio 2012 Professional, you should do some research and check the reputation and reviews of the website or tool that provides it. You should also scan the file or tool with an antivirus program before opening or running it on your computer.</p>
67
- <p>To use a product key generator for Visual Studio 2012 Professional, you need to follow these steps:</p>
68
- <ol>
69
- <li>Download the product key generator from a reputable website or tool.</li>
70
- <li>Extract the file or run the tool on your computer.</li>
71
- <li>Select Visual Studio 2012 Professional from the list of software products.</li>
72
- <li>Click on Generate or Create button to generate multiple product keys.</li>
73
- <li>Copy one of the generated product keys and save it somewhere safe.</li>
74
- </ol>
75
- <h3>Option 2: Use a product key list</h3>
76
- <p>A product key list is a collection of pre-existing or leaked product keys for various software products, including Visual Studio. A product key list works by providing you with an actual or authentic product key that someone else has already used or obtained from Microsoft or its authorized resellers.</p>
77
- <p>There are many websites and tools that claim to offer product key lists for Visual Studio 2012 Professional, such as <a href="https://github.com/shivam0612/Product-Keys/blob/main/Visual%20Studio%202012.txt">Product-Keys/Visual Studio</a>, <a href="https://appnee.com/microsoft-visual-studio-all-versions-product-keys-collection/">AppNee Freeware Group</a>, or <a href="https://gist.github.com/ssbalakumar/17e5402c3df6a2e57f8af52844c958e3">All Product Keys</a>. However, not all of them are reliable or trustworthy, as some of them could contain outdated, invalid, or duplicate product keys that could not activate your copy of Visual Studio.</p>
78
- <p>Therefore, before you use any product key list for Visual Studio 2012 Professional, you should do some research and check the reputation and reviews of the website or tool that provides it. You should also verify that the product keys are updated, valid, and unique before using them on your computer.</</p> 0a6ba089eb<br />
79
- <br />
80
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Bonetown V1.1.1 Crack WORK.md DELETED
@@ -1,16 +0,0 @@
1
- <h2>bonetown v1.1.1 crack</h2><br /><p><b><b>Download</b> &middot;&middot;&middot; <a href="https://imgfil.com/2uxX09">https://imgfil.com/2uxX09</a></b></p><br /><br />
2
-
3
- On this game portal you can download the game BoneTown for free torrent. Full version of the game BoneTown was . At the moment, the last version: 1.1.1, rating: rate. Torrent Download Free " Torrent Download Games " Bone Town / Bones of Town (2010) PC.
4
- Year: 2010 Genre: Strategy, 3D Developer: GSC Game World Platform: PC....
5
- How to download BoneTown game for free.
6
- Download the game for free.
7
- BoneTown download for free.
8
- BoneTown.
9
- Download the game BoneTown for free.
10
- BoneTown.torrent.
11
- BoneTown - download the game for free on your computer, full version, without registration and sms.
12
- BoneTown free download.
13
- BoneTown.torrent. 8a78ff9644<br />
14
- <br />
15
- <br />
16
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar.md DELETED
@@ -1,13 +0,0 @@
1
- <h2>En Office Enterprise 2007 Dvd Vl X12 19574.iso.rar</h2><br /><p><b><b>Download File</b> &#10038; <a href="https://imgfil.com/2uxXH1">https://imgfil.com/2uxXH1</a></b></p><br /><br />
2
- <br />
3
- Download. office business; Office Enterprise 2007. En Office Enterprise 2007 DVD Vl X12 19574.iso.rar. Download. 704, October 21, 2017, 560.65 MB, OFFICE 2007. Office 2010 download - Office 2010 Standard - free download Russian version.
4
- Office 2010 - free download.
5
- Download Office 2010 free Russian version without registration for Windows 7 / 8, 10, XP 64 and 32 bit Office 2010 Download Torrent.
6
- Office 2010 is a software package for working with various types of documents.
7
- Download office 2010 for free.
8
- Free and without registration.
9
- Daily.
10
- Download Office 2010 for free and without. 8a78ff9644<br />
11
- <br />
12
- <br />
13
- <p></p>
 
spaces/1gistliPinn/ChatGPT4/Examples/FULL IStripper V1.2.158 NSFW.md DELETED
@@ -1,11 +0,0 @@
1
- <h2>FULL iStripper V1.2.158 NSFW</h2><br /><p><b><b>Download Zip</b> &middot; <a href="https://imgfil.com/2uy0NO">https://imgfil.com/2uy0NO</a></b></p><br /><br />
2
- <br />
3
- FULL IStripper V1.2.158 NSFW !FULL!. 5 point. FULL iStripper V1.2.158 NSFW. DOWNLOAD: fd16d57201. Related links:. FULL IStripper V1.2.158 NSFW !FULL!. 5 point. FULL iStripper V1.2.158 NSFW. DOWNLOAD.
4
- FULL V1.2.157 NSFW!FULL!. 5 point. FULL iStripper V1.2.157 NSFW. DOWNLOAD. fd16d57201.
5
- FULL IStripper V1.2.157 NSFW !FULL!. 5 point. FULL iStripper V1.2.157 NSFW. DOWNLOAD.
6
- NSFW iStripper v1.2.155 V1.2.157.
7
- DOWNLOAD iStripper v1.2.157.
8
- I 8a78ff9644<br />
9
- <br />
10
- <br />
11
- <p></p>
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Angry Birds Rio 2 The Latest Episode of the Popular Franchise - Available for Windows 10.md DELETED
@@ -1,84 +0,0 @@
1
-
2
- <h1>Angry Birds Rio 2 Game Free Download for Windows 10</h1>
3
- <p>If you are a fan of the Angry Birds franchise, you might have heard of Angry Birds Rio 2, the second puzzle game based on the hit movies Rio and Rio 2. In this game, you have to fling birds at the piggies' towers and save their friends Blu and Jewel, two rare macaws, from the evil smugglers. The game is full of fun, challenge, and excitement, and it is completely free to download for Windows 10. In this article, we will tell you everything you need to know about Angry Birds Rio 2 game, including its features, download process, and tips and tricks.</p>
4
- <h2>Features</h2>
5
- <p>Angry Birds Rio 2 game has many features that make it different from the previous Angry Birds games. Here are some of them:</p>
6
- <h2>angry birds rio 2 game free download for windows 10</h2><br /><p><b><b>Download File</b> ::: <a href="https://urlin.us/2uSXIl">https://urlin.us/2uSXIl</a></b></p><br /><br />
7
- <ul>
8
- <li><b>Multi-stage levels.</b> The game has hundreds of levels with multiple stages, each with its own obstacles and objectives. You have to use strategy and skill to complete each stage and earn stars.</li>
9
- <li><b>Power-ups.</b> You can boost your birds' abilities with various power-ups, such as Sling Scope for laser targeting, Power Potion to supersize your birds, Samba Burst for dancing destruction, TNT for explosive help, and Call the Flock for a blizzard of macaw mayhem.</li>
10
- <li><b>The Mighty Eagle.</b> The Mighty Eagle is a one-time in-app purchase that you can use forever. If you get stuck on a level, this cool creature will dive from the skies to smash those meddling monkeys into oblivion. You can also use it to unlock new gameplay goals and achievements.</li>
11
- <li><b>Clans and Arena.</b> You can join a clan to take down the pigs with friends and players around the world. You can also compete in the Arena to prove who is the best bird flinger in the world.</li>
12
- <li><b>Silly hats.</b> You can collect hats with different fun themes and level up your birds' fashion game. You can also take part in special events with themed levels and rewards.</li>
13
- </ul>
14
- <h2>Download</h2>
15
- <p>To download Angry Birds Rio 2 game for free for Windows 10, you need to follow these steps:</p>
16
- <ol>
17
- <li>Go to [FileHippo](^3^), a trusted website that offers free software downloads.</li>
18
- <li>Click on the green "Download Latest Version" button on the top right corner of the page.</li>
19
- <li>Wait for the download to finish and then open the file.</li>
20
- <li>Follow the instructions on the screen to install the game on your PC.</li>
21
- <li>Enjoy playing Angry Birds Rio 2 game!</li>
22
- </ol>
23
- <p>The system requirements for Angry Birds Rio 2 game are:</p>
24
- <table>
25
- <tr><th>Operating system</th><th>Processor</th><th>Memory</th><th>Graphics</th></tr>
26
- <tr><td>Windows XP or later</td><td>1 GHz or faster</td><td>512 MB or more</td><td>OpenGL 1.3 compatible or better</td></tr>
27
- </table>
28
- <h2>Tips and Tricks</h2>
29
- <p>To improve your skills and score in Angry Birds Rio 2 game, here are some tips and tricks to know:</p>
30
- <ul>
31
- <li><b>Choose your bird wisely.</b> Each bird has its own special ability and strength. For example, Red can knock down wood easily, Chuck can speed up and break glass, Bomb can explode and destroy stone, Matilda can drop an egg bomb, etc. You have to choose which bird to put in the slingshot depending on the structure of the piggies and the materials they are made of.</li>
32
- <li><b>Aim for the weak spots.</b> You can cause more damage and destruction by aiming for the weak spots of the piggies' towers, such as joints, supports, explosives, etc. You can also use the environment to your advantage, such as rocks, ropes, wheels, etc.</li>
33
- <li><b>Use the power-ups wisely.</b> Power-ups can help you a lot in difficult levels, but they are limited and sometimes cost real money. You should use them only when you really need them and not waste them on easy levels. You can also earn some power-ups by completing achievements or watching ads.</li>
34
- <li><b>Watch the videos.</b> If you are stuck on a level or want to get three stars, you can watch the videos of other players who have completed the level. You can learn from their strategies and techniques and try to replicate them. You can find the videos by clicking on the video icon on the top right corner of the screen.</li>
35
- <li><b>Have fun.</b> The most important tip is to have fun while playing Angry Birds Rio 2 game. Don't get frustrated or angry if you fail a level or miss a shot. Just keep trying and enjoy the colorful graphics, funny sounds, and cute characters of the game.</li>
36
- </ul>
37
- <h2>Conclusion</h2>
38
- <p>Angry Birds Rio 2 game is a great puzzle game that will keep you entertained for hours. It has many features that make it different from the previous Angry Birds games, such as multi-stage levels, power-ups, clans, arena, and silly hats. You can download it for free for Windows 10 from FileHippo, a trusted website that offers free software downloads. You can also improve your skills and score in the game by following some tips and tricks, such as choosing your bird wisely, aiming for the weak spots, using the power-ups wisely, watching the videos, and having fun. We hope you enjoyed this article and learned something new about Angry Birds Rio 2 game. Now go ahead and download it and start flinging those birds at those piggies!</p>
39
- <h3>FAQs</h3>
40
- <p>Here are some frequently asked questions about Angry Birds Rio 2 game:</p>
41
- <p>* angry birds rio 2 pc game download full version<br />
42
- * how to install angry birds rio 2 on windows 10<br />
43
- * angry birds rio 2 game online play free<br />
44
- * angry birds rio 2 game features and reviews<br />
45
- * angry birds rio 2 game system requirements for windows 10<br />
46
- * angry birds rio 2 game walkthrough and tips<br />
47
- * angry birds rio 2 game cheats and hacks for windows 10<br />
48
- * angry birds rio 2 game latest updates and news<br />
49
- * angry birds rio 2 game trailer and screenshots<br />
50
- * angry birds rio 2 game best price and deals for windows 10<br />
51
- * angry birds rio 2 game free trial download for windows 10<br />
52
- * angry birds rio 2 game alternatives and similar games for windows 10<br />
53
- * angry birds rio 2 game problems and solutions for windows 10<br />
54
- * angry birds rio 2 game ratings and feedback from users<br />
55
- * angry birds rio 2 game developer and publisher information<br />
56
- * angry birds rio 2 game based on the movie Rio 2<br />
57
- * angry birds rio 2 game new characters and levels<br />
58
- * angry birds rio 2 game modes and challenges<br />
59
- * angry birds rio 2 game achievements and rewards<br />
60
- * angry birds rio 2 game comparison with other angry birds games<br />
61
- * angry birds rio 2 game fun facts and trivia<br />
62
- * angry birds rio 2 game fan art and videos<br />
63
- * angry birds rio 2 game merchandise and accessories<br />
64
- * angry birds rio 2 game download size and speed for windows 10<br />
65
- * angry birds rio 2 game compatibility and performance for windows 10<br />
66
- * angry birds rio 2 game support and contact details<br />
67
- * angry birds rio 2 game license and terms of use for windows 10<br />
68
- * angry birds rio 2 game refund and cancellation policy for windows 10<br />
69
- * angry birds rio 2 game security and privacy for windows 10<br />
70
- * angry birds rio 2 game community and forums for windows 10 users</p>
71
- <ol>
72
- <li><b>Q: How many levels are there in Angry Birds Rio 2 game?</b>
73
- <br>A: There are over 400 levels in Angry Birds Rio 2 game, divided into several episodes based on the movies Rio and Rio 2. Each episode has its own theme, background, music, and characters.</li>
74
- <li><b>Q: How can I unlock new birds in Angry Birds Rio 2 game?</b>
75
- <br>A: You can unlock new birds in Angry Birds Rio 2 game by completing certain levels or achievements. For example, you can unlock Blu and Jewel by completing level 1-7 of Smugglers' Den episode, or you can unlock Stella by completing level 1-15 of Blossom River episode.</li>
76
- <li><b>Q: How can I join a clan in Angry Birds Rio 2 game?</b>
77
- <br>A: You can join a clan in Angry Birds Rio 2 game by clicking on the clan icon on the bottom left corner of the screen. You can either create your own clan or join an existing one. You can also invite your friends to join your clan or search for other clans by name or tag.</li>
78
- <li><b>Q: How can I play in the arena in Angry Birds Rio 2 game?</b>
79
- <br>A: You can play in the arena in Angry Birds Rio 2 game by clicking on the arena icon on the bottom right corner of the screen. You can compete with other players around the world in daily tournaments and win prizes and trophies. You can also choose your own bird to play with and customize it with hats.</li>
80
- <li><b>Q: How can I contact the support team of Angry Birds Rio 2 game?</b>
81
- <br>A: You can contact the support team of Angry Birds Rio 2 game by clicking on the settings icon on the top left corner of the screen and then clicking on "Help & Support". You can also visit their website [here] or email them at [email protected].</li>
82
- </ol></p> 197e85843d<br />
83
- <br />
84
- <br />
 
spaces/1phancelerku/anime-remove-background/Cmo descargar e instalar Among Us APK en tu Android.md DELETED
@@ -1,118 +0,0 @@
1
-
2
- <h1>Among Us APK Descargar: How to Download and Play the Popular Game on Android</h1>
3
- <p>Among Us is one of the most popular games of 2020 and 2021, with millions of players around the world. The game is available on various platforms, including PC, iOS, and Android. If you want to play Among Us on your Android device, you will need to download the APK file from a reliable source. In this article, we will show you how to download and install Among Us APK on Android, how to play the game, and some tips and tricks to help you win. We will also suggest some alternatives to Among Us that you can try if you want more games like it.</p>
4
- <h2>among us apk descargar</h2><br /><p><b><b>Download</b> &#10084;&#10084;&#10084; <a href="https://jinyurl.com/2uNOxF">https://jinyurl.com/2uNOxF</a></b></p><br /><br />
5
- <h2>What is Among Us?</h2>
6
- <p>Among Us is a multiplayer social deduction game developed by Innersloth, an American game studio. The game was released in 2018, but it became a viral sensation in 2020 thanks to streamers and YouTubers who played it online. The game has won several awards, such as the Best Multiplayer Game and the Best Mobile Game at The Game Awards 2020.</p>
7
- <h3>A multiplayer social deduction game</h3>
8
- <p>The premise of Among Us is simple: you are part of a crew of up to 10 players who are on a spaceship or a base. However, among you are one or more impostors who are trying to kill everyone else. The crewmates have to work together to complete tasks and find the impostors before they are all eliminated. The impostors have to blend in with the crewmates, sabotage their tasks, and kill them without being caught.</p>
9
- <h3>Features of Among Us</h3>
10
- <p>Among Us has many features that make it fun and engaging for players of all ages. Some of these features are:</p>
11
- <ul>
12
- <li>Customization: You can choose your color, hat, skin, pet, and name.</li>
13
- <li>Game options: You can adjust the number of impostors, tasks, roles, maps, speed, vision, voting time, and more.</li>
14
- <li>Modes: You can play in classic mode or hide and seek mode.</li>
15
- <li>Maps: You can play in four different maps: The Skeld, MIRA HQ, Polus, and The Airship.</li>
16
- <li>Online or local multiplayer: You can play online with strangers or friends, or over local WiFi with your nearby buddies.</li>
17
- <li>Text or voice chat: You can communicate with other players using text or voice chat during meetings or emergencies.</li>
18
- <li>Cross-platform play: You can play with players on PC, console, Android, or iOS devices.</li>
19
- </ul>
20
- <h2>How to Download Among Us APK on Android</h2>
21
- <p>If you want to play Among Us on your Android device, you will need to download the APK file from a trusted source. There are two ways to do this:</p>
22
- <h3>Steps to download and install Among Us APK from APKCombo</h3>
23
- <ol>
24
- <li>Go to <a href="(^1^)">APKCombo</a>, a website that offers APK downloads for Android games and apps. You can search for Among Us or use this link: <a href="(^2^)">Among Us APK</a>.</li>
25
- <li>Select the version you want to download and click on the "Download APK" button.</li>
26
- <li>Wait for the download to finish and then open the APK file. You may need to enable "Unknown sources" in your device settings to install apps from outside the Google Play Store.</li>
27
- <li>Follow the instructions on the screen to install Among Us on your device.</li>
28
- </ol>
29
- <h3>Steps to download and install Among Us APK from Google Play Store</h3>
30
- <ol>
31
- <li>Go to the <a href="(^4^)">Google Play Store</a> on your device or use this link: <a href="(^5^)">Among Us on Google Play Store</a>.</li>
32
- <li>Tap on the "Install" button and wait for the download to finish.</li>
33
- <li>Open Among Us from your app drawer and enjoy the game.</li>
34
- </ol>
35
- <h2>How to Play Among Us on Android</h2>
36
- <p>Once you have installed Among Us on your Android device, you can start playing the game with your friends or strangers online. Here are some basic steps to play Among Us on Android:</p>
37
- <h3>Choose your role and map</h3>
38
- <p>You can either create your own game or join an existing one. If you create a game, you can choose the number of impostors, the map, and the game settings. You can also invite your friends by sharing the game code. If you join a game, you will be assigned a random role and map. You can either be a crewmate or an impostor, depending on the game settings.</p>
39
- <p>among us apk download android<br />
40
- among us apk mod menu<br />
41
- among us apk pc<br />
42
- among us apk hack<br />
43
- among us apk uptodown<br />
44
- among us apk latest version<br />
45
- among us apk free<br />
46
- among us apk mediafıre<br />
47
- among us apk 2023.2.9<br />
48
- among us apk mod impostor<br />
49
- among us apk online<br />
50
- among us apk full unlocked<br />
51
- among us apk no ads<br />
52
- among us apk unlimited skins<br />
53
- among us apk always impostor<br />
54
- among us apk mod 2023<br />
55
- among us apk para pc<br />
56
- among us apk sin emulador<br />
57
- among us apk gratis<br />
58
- among us apk español<br />
59
- among us apk mega<br />
60
- among us apk mod menu 2023<br />
61
- among us apk mod skins<br />
62
- among us apk mod pets<br />
63
- among us apk mod hats<br />
64
- among us apk mod invisible<br />
65
- among us apk mod speed<br />
66
- among us apk mod kill cooldown<br />
67
- among us apk mod vent as crewmate<br />
68
- among us apk mod no kill cooldown<br />
69
- among us apk mod see impostor<br />
70
- among us apk mod always win<br />
71
- among us apk mod voice chat<br />
72
- among us apk mod anti ban<br />
73
- among us apk mod all unlocked<br />
74
- among us apk mod no ads<br />
75
- among us apk mod unlimited money<br />
76
- among us apk mod god mode<br />
77
- among us apk mod radar impostor<br />
78
- among us apk mod fake impostor<br />
79
- among us apk mod zoom out<br />
80
- among us apk mod no name<br />
81
- among us apk mod rainbow skin<br />
82
- among us apk mod custom skins<br />
83
- among us apk mod hide and seek mode</p>
84
- <h3>Complete tasks or kill crewmates</h3>
85
- <p>If you are a crewmate, your goal is to complete tasks around the map and find the impostors. You can see your tasks on the top left corner of the screen. You can also use the map button to see where your tasks are located. Some tasks are visual, meaning that other players can see you doing them. These tasks can help you prove your innocence or expose an impostor. If you are an impostor, your goal is to kill crewmates and sabotage their tasks. You can use vents to move around the map quickly and secretly. You can also use the sabotage button to cause problems for the crewmates, such as turning off the lights, locking doors, or triggering emergencies.</p>
86
- <h3>Communicate and vote</h3>
87
- <p>If a dead body is reported or an emergency meeting is called, all players will gather in a meeting room to discuss and vote. You can use text or voice chat to communicate with other players. You can share information, accuse someone, defend yourself, or lie. You can also skip voting if you are not sure who the impostor is. The player with the most votes will be ejected from the game. The game will continue until either all impostors are eliminated, all crewmates are killed, or a major sabotage is not fixed in time.</p>
88
- <h2>Tips and Tricks for Among Us</h2>
89
- <p>Playing Among Us can be challenging and fun, especially if you want to win as either a crewmate or an impostor. Here are some tips and tricks that can help you improve your skills and strategies in Among Us:</p>
90
- <h3>Learn your common tasks and viewing distances</h3>
91
- <p>Common tasks are tasks that are assigned to all crewmates in a game. They can be used to verify if someone is telling the truth or lying about their role. For example, if someone claims to have done a common task that you don't have, they are likely an impostor. Common tasks vary depending on the map, so make sure you know what they are before playing. Viewing distances are how far you can see in the game. They can be affected by lights, walls, doors, and vents. Knowing how far you can see and how far others can see you can help you avoid being caught or catch someone in the act.</p>
92
- <h3>Check rooms and cameras for bodies and impostors</h3>
93
- <p>If you are a crewmate, you should check rooms frequently for dead bodies or suspicious activities. If you find a body, report it immediately and share what you saw or where you were. If you don't find any bodies, but see someone acting weirdly, such as venting, killing, or faking tasks, call an emergency meeting and expose them. If you are an impostor, you should avoid killing in plain sight or leaving bodies in obvious places. You should also vent carefully and avoid being seen by cameras or other players.</p>
94
- <h3>Use vents and sabotages wisely as an impostor</h3> <p>If you are an impostor, you should use vents and sabotages wisely to create confusion, distraction, and chaos among the crewmates. Vents allow you to move around the map quickly and secretly, but you should only use them when no one is around or watching. Sabotages allow you to cause problems for the crewmates, such as turning off the lights, locking doors, or triggering emergencies. You should use sabotages to separate, isolate, or lure your targets, or to prevent them from completing their tasks or finding bodies.</p>
95
- <h3>Don't trust anyone and have an alibi as a crewmate</h3>
96
- <p>If you are a crewmate, you should be careful about who you trust and who you follow. Anyone can be an impostor, even your friends or teammates. You should also have an alibi for where you were and what you did during the game. You can use visual tasks, cameras, logs, or other players as your alibi. Having an alibi can help you prove your innocence or accuse someone else.</p>
97
- <h2>Alternatives to Among Us on Android</h2>
98
- <p>If you love Among Us and want to try more games like it, you can check out some of these alternatives on Android:</p>
99
- <h3>Town of Salem</h3>
100
- <p>Town of Salem is a game of murder, deception, and mystery. You are one of 15 players in a town where each player has a role and a goal. Some roles are good, such as the Sheriff, the Doctor, or the Investigator. Some roles are evil, such as the Serial Killer, the Arsonist, or the Witch. Each night, the evil roles can kill someone, while the good roles can protect, heal, or investigate someone. Each day, the town can vote to lynch someone they suspect is evil. The game ends when either all the evil roles are dead, or the evil roles outnumber the good ones.</p>
101
- <h3>Project Winter</h3>
102
- <p>Project Winter is a game of survival and betrayal. You are one of 8 players who are stranded in a snowy wilderness. You have to work together to gather resources, repair structures, and escape. However, among you are two traitors who are trying to sabotage your efforts and kill you. You have to use voice chat and social skills to communicate with other players and find out who the traitors are. You can also use weapons and items to fight back or escape.</p>
103
- <h3>Betrayal.io</h3>
104
- <p>Betrayal.io is a game of deception and deduction. You are one of 12 players who are on a mission to complete tasks and find clues. However, among you are two betrayers who are trying to stop you and kill you. You have to use text chat and emojis to communicate with other players and vote out the betrayers. You can also use gadgets and abilities to help you or hinder others.</p>
105
- <h2>Conclusion</h2>
106
- <p>Among Us is a fun and addictive game that you can play on your Android device with your friends or strangers online. You can download the APK file from APKCombo or Google Play Store and install it on your device. You can then choose your role and map, complete tasks or kill crewmates, communicate and vote, and enjoy the game. You can also improve your skills and strategies by learning some tips and tricks for Among Us. If you want more games like Among Us, you can try some alternatives such as Town of Salem, Project Winter, or Betrayal.io.</p>
107
- <h2>FAQs</h2>
108
- <p>Here are some frequently asked questions about Among Us:</p>
109
- <table>
110
- <tr><td><b>Question</b></td><td><b>Answer</b></td></tr>
111
- <tr><td>Is Among Us free on Android?</td><td>Yes, Among Us is free to download and play on Android devices.</td></tr>
112
- <tr><td>How many players can play Among Us?</td><td>You can play with up to 10 players in one game of Among Us.</td></tr>
113
- <tr><td>Can I play Among Us offline?</td><td>No, you need an internet connection to play Among Us online or over local WiFi.</td></tr>
114
- <tr><td>Can I play Among Us with PC players?</td><td>Yes, you can play with PC players as long as you have the same version of the game.</td></tr>
115
- <tr><td>How do I update Among Us on Android?</td><td>You can update Among Us on Android by downloading the latest APK file from APKCombo or Google Play Store.</td></tr>
116
- </table></p> 197e85843d<br />
117
- <br />
118
- <br />
 
spaces/1phancelerku/anime-remove-background/Download Noblemen 1896 APK Data and Lead Your Armies to Victory!.md DELETED
@@ -1,116 +0,0 @@
1
- <br />
2
- <h1>Noblemen: 1896 APK Data + Download</h1>
3
- <p>If you are looking for a unique and immersive action/strategy game that takes you back to an alternate history of 1896, then you might want to check out Noblemen: 1896. This game lets you play as a nobleman who leads his armies to victory in a steampunk-inspired war. In this article, we will tell you everything you need to know about Noblemen: 1896 APK data + download, including what the game is about, why you should download it, how to download it, how to play it, and what other players think about it.</p>
4
- <h2>noblemen 1896 apk data + download</h2><br /><p><b><b>Download File</b> &#8230;&#8230;&#8230; <a href="https://jinyurl.com/2uNOUa">https://jinyurl.com/2uNOUa</a></b></p><br /><br />
5
- <h2>What is Noblemen: 1896?</h2>
6
- <p>Noblemen: 1896 is a game developed by Foursaken Media that combines third-person shooter combat with strategic planning and resource management. The game is set in an alternate reality where the United States is divided by a civil war that involves advanced weapons such as cannons, gatling guns, airships, steam tanks, and more. You play as a nobleman who commands his own regiment and fights alongside other units in large-scale battles. You can also customize your equipment, recruit new soldiers, upgrade your base, collect battle cards, and explore a dynamic map.</p>
7
- <h3>Why download Noblemen: 1896 APK and data files?</h3>
8
- <p>There are several reasons why you might want to download Noblemen: 1896 APK and data files instead of using the Google Play Store. Here are some of them:</p>
9
- <ul>
10
- <li>You can enjoy offline play without an internet connection.</li>
11
- <li>You can access the latest version of the game without waiting for updates.</li>
12
- <li>You can avoid compatibility issues with your device or region.</li>
13
- <li>You can save storage space by deleting unwanted files.</li>
14
- <li>You can modify or tweak the game to your liking.</li>
15
- </ul>
16
- <h4>How to download Noblemen: 1896 APK and data files?</h4>
17
- <p>To download Noblemen: 1896 APK and data files on your Android device, you need to follow these steps:</p>
18
- <p>noblemen 1896 game apk download<br />
19
- noblemen 1896 mod apk + data<br />
20
- noblemen 1896 android game free download<br />
21
- noblemen 1896 apk obb offline<br />
22
- noblemen 1896 apk data highly compressed<br />
23
- noblemen 1896 full version apk download<br />
24
- noblemen 1896 unlimited money apk + data<br />
25
- noblemen 1896 latest apk download<br />
26
- noblemen 1896 apk data revdl<br />
27
- noblemen 1896 offline shooter game apk<br />
28
- noblemen 1896 action game apk + data<br />
29
- noblemen 1896 apk data modded<br />
30
- noblemen 1896 apk data android 1<br />
31
- noblemen 1896 apk data rexdl<br />
32
- noblemen 1896 apk data mega<br />
33
- noblemen 1896 hack apk download<br />
34
- noblemen 1896 premium apk + data<br />
35
- noblemen 1896 apk data uptodown<br />
36
- noblemen 1896 apk data apkpure<br />
37
- noblemen 1896 apk data google drive<br />
38
- noblemen 1896 cracked apk download<br />
39
- noblemen 1896 pro apk + data<br />
40
- noblemen 1896 apk data mediafire<br />
41
- noblemen 1896 apk data zip file<br />
42
- noblemen 1896 unlocked apk download<br />
43
- noblemen 1896 paid apk + data<br />
44
- noblemen 1896 apk data for pc<br />
45
- noblemen 1896 apk data mod menu<br />
46
- noblemen 1896 apk data no root<br />
47
- noblemen 1896 patched apk download<br />
48
- noblemen 1896 steam tank game apk + data<br />
49
- noblemen 1896 gatling gun game apk download<br />
50
- noblemen 1896 alternate reality game apk + data<br />
51
- noblemen 1896 airship game apk download<br />
52
- noblemen 1896 cavalry game apk + data<br />
53
- noblemen 1896 campaign game apk download<br />
54
- noblemen 1896 battle cards game apk + data<br />
55
- noblemen 1896 frigate game apk download<br />
56
- noblemen 1896 militia game apk + data<br />
57
- noblemen 1896 cannon game apk download<br />
58
- noblemen 1896 foursaken media game apk + data<br />
59
- noblemen 1896 graphics game apk download<br />
60
- noblemen 1896 strategy game apk + data<br />
61
- noblemen 1896 shooter game offline download</p>
62
- <ol>
63
- <li>Allow unknown apps on your device by going to Settings > Apps > Menu > Special access > Install unknown apps > Chrome (or your preferred browser) > Enable Allow from this source.</li>
64
- <li>Install a file manager app (such as Cx File Explorer or File Manager) so that you can find the APK and data files after you download them.</li>
65
- <li>Download the APK file from a reputable website (such as APK Mirror) by tapping the link and accepting any pop-ups.</li>
66
- <li>Download the data file (usually in ZIP or RAR format) from the same website or another source (such as Google Drive).</li>
67
- <li>Locate the downloaded files in your file manager app and extract the data file to get a folder with OBB or DATA extension.</li>
68
- <li>Copy or move the folder to Android > OBB or Android > DATA <li>Install the APK file by tapping on it and following the instructions.</li>
69
- <li>Launch the game and enjoy!</li>
70
- </ol>
71
- <h4>How to play Noblemen: 1896?</h4>
72
- <p>Noblemen: 1896 is a game that requires both skill and strategy to win. Here are some tips on how to play it:</p>
73
- <ul>
74
- <li>Choose your faction and difficulty level before starting a new game. You can play as the Union or the Confederacy, and select from easy, normal, hard, or insane modes.</li>
75
- <li>Learn the basics of combat by completing the tutorial missions. You can control your nobleman by using the virtual joystick and buttons on the screen. You can also switch between different weapons, use special abilities, and command your troops.</li>
76
- <li>Plan your moves on the map screen by tapping on different regions and objectives. You can see the enemy strength, terrain, weather, and rewards for each area. You can also move your base, recruit new units, upgrade your equipment, and use battle cards.</li>
77
- <li>Fight in epic battles by deploying your units and engaging the enemy. You can zoom in and out, rotate the camera, and pause the game to issue orders. You can also use artillery, airships, and reinforcements to turn the tide of war.</li>
78
- <li>Earn medals, coins, and cards by completing missions, achievements, and challenges. You can use them to unlock new items, skills, and bonuses for your nobleman and your army.</li>
79
- </ul>
80
- <h2>Noblemen: 1896 Game Review</h2>
81
- <p>Noblemen: 1896 is a game that offers a lot of fun and excitement for fans of action and strategy games. Here is our review of the game's graphics, sound, story, difficulty, replay value, and overall rating.</p>
82
- <h3>Pros and cons of Noblemen: 1896</h3>
83
- <table>
84
- <tr><th>Pros</th><th>Cons</th></tr>
85
- <tr><td>Stunning graphics and animations</td><td>Sometimes laggy or buggy</td></tr>
86
- <tr><td>Immersive sound effects and music</td><td>Some voice acting is cheesy or annoying</td></tr>
87
- <tr><td>Engaging story and characters</td><td>Limited choices or consequences</td></tr>
88
- <tr><td>Challenging and varied gameplay</td><td>Can be frustrating or repetitive</td></tr>
89
- <tr><td>High replay value and content</td><td>Requires a lot of grinding or spending</td></tr>
90
- </table>
91
- <h3>User feedback on Noblemen: 1896</h3>
92
- <p>Noblemen: 1896 has received mostly positive feedback from users who have played it. Here are some of their reviews from different sources and platforms:</p>
93
- <blockquote>"This game is amazing! The graphics are awesome, the gameplay is smooth, and the story is captivating. I love how you can customize your nobleman and your army, and how you can choose different strategies and tactics. The battles are epic and realistic, and the map is huge and dynamic. This is one of the best games I have ever played!" - Google Play user</blockquote>
94
- <blockquote>"I really like this game, but it has some issues. The game sometimes crashes or freezes, especially when there are too many units on the screen. The game also drains my battery very fast, even when I lower the settings. The game is also very hard, even on easy mode. I wish there was a way to skip some missions or get more resources." - App Store user</blockquote>
95
- <blockquote>"This game is a masterpiece! The graphics are breathtaking, the sound is immersive, and the story is intriguing. I love how you can control your nobleman and your troops in real-time combat, and how you can use different weapons and abilities. The game is also very challenging and rewarding, and it has a lot of content and replay value. This is one of the best games I have ever played!" - Steam user</blockquote>
96
- <h2>Conclusion</h2>
97
- <p>Noblemen: 1896 is a game that combines third-person shooter combat with strategic planning and resource management. The game is set in an alternate history of 1896 where the United States is divided by a civil war that involves advanced weapons such as cannons, gatling guns, airships, steam tanks, and more. You play as a nobleman who commands his own regiment and fights alongside other units in large-scale battles.</p>
98
- <p>If you want to experience this game on your Android device, you can download Noblemen: 1896 APK data + download from reputable websites. This way, you can enjoy offline play without an internet connection, access the latest version of the game without waiting for updates, avoid compatibility issues with your device or region, save storage space by deleting unwanted files, modify or tweak the game to your liking, and more.</p>
99
- <p>Noblemen: 1896 is a game that offers a lot of fun and excitement for fans of action and strategy games. The game has stunning graphics and animations, immersive sound effects and music, engaging story and characters, challenging and varied gameplay, and high replay value and content. The game also has some drawbacks, such as being sometimes laggy or buggy, having some voice acting that is cheesy or annoying, having limited choices or consequences, being frustrating or repetitive, and requiring a lot of grinding or spending. However, these issues do not overshadow the overall quality and enjoyment of the game.</p>
100
- <p>If you are looking for a unique and immersive action/strategy game that takes you back to an alternate history of 1896, then you might want to check out Noblemen: 1896. You will not regret it!</p>
101
- <h3>FAQs on Noblemen: 1896 APK Data + Download</h3>
102
- <p>Here are some frequently asked questions and answers on Noblemen: 1896 APK data + download:</p>
103
- <ol>
104
- <li>Is Noblemen: 1896 free to play?</li>
105
- <p>Yes, Noblemen: 1896 is free to play, but it also has in-app purchases that can enhance your gaming experience.</p>
106
- <li>Is Noblemen: 1896 safe to download?</li>
107
- <p>Yes, Noblemen: 1896 is safe to download as long as you use reputable websites that provide virus-free and malware-free files. You should also scan the files before installing them on your device.</p>
108
- <li>Is Noblemen: 1896 compatible with my device?</li>
109
- <p>Noblemen: 1896 requires Android 4.3 or higher and at least 1 GB of RAM to run smoothly. You should also have enough storage space to accommodate the APK and data files.</p>
110
- <li>How can I contact the developers of Noblemen: 1896?</li>
111
- <p>You can contact the developers of Noblemen: 1896 by visiting their website (https://www.foursakenmedia.com/), their Facebook page (https://www.facebook.com/FoursakenMedia), their Twitter account (https://twitter.com/FoursakenMedia), or their email address ([email protected]).</p>
112
- <li>Where can I find more information about Noblemen: 1896?</li>
113
- <p>You can find more information about Noblemen: 1896 by visiting their official website (https://www.foursakenmedia.com/noblemen-1896), their Google Play Store page (https://play.google.com/store/apps/details?id=com.foursakenmedia.noblemen), their App Store page (https://apps.apple.com/us/app/noblemen-1896/id1178777377), or their Steam page (https://store.steampowered.com/app/1105440/Noblemen_1896/).</p>
114
- </ol></p> 401be4b1e0<br />
115
- <br />
116
- <br />
 
spaces/1phancelerku/anime-remove-background/Download Real Drag Bike Racing Mod APK and Experience the Ultimate Drag Racing Challenge.md DELETED
@@ -1,114 +0,0 @@
1
- <br />
2
- <h1>Download Game Real Drag Bike Racing Mod Apk: A Guide for Racing Fans</h1>
3
- <p>If you are a fan of racing games, you might have heard of Real Drag Bike Racing, a popular game that lets you experience the thrill of drag racing on your mobile device. But did you know that there is a mod apk version of this game that gives you unlimited money, coins, bikes, and more? In this article, we will tell you everything you need to know about Real Drag Bike Racing Mod Apk, including its features, how to download and install it, tips and tricks for playing it, pros and cons, and some frequently asked questions. Read on to find out more!</p>
4
- <h2>download game real drag bike racing mod apk</h2><br /><p><b><b>DOWNLOAD</b> >>>>> <a href="https://jinyurl.com/2uNNAe">https://jinyurl.com/2uNNAe</a></b></p><br /><br />
5
- <h2>Features of Real Drag Bike Racing Mod Apk</h2>
6
- <p>Real Drag Bike Racing Mod Apk is a modified version of the original game that offers many advantages over the regular version. Here are some of the features that you can enjoy with this mod apk:</p>
7
- <ul>
8
- <li><b>Unlimited money and coins:</b> With this mod apk, you will never run out of money and coins to buy new bikes, upgrade your existing ones, or customize them with different colors and stickers. You can also use them to enter tournaments and challenge other players online.</li>
9
- <li><b>All bikes unlocked and upgraded:</b> With this mod apk, you will have access to all the bikes in the game, from the basic ones to the most advanced ones. You can also upgrade them to their maximum level without spending any money or coins. This will give you an edge over your opponents and help you win more races.</li>
10
- <li><b>No ads and no root required:</b> With this mod apk, you will not have to deal with annoying ads that interrupt your gameplay or slow down your device. You will also not need to root your device to install this mod apk, which means you can avoid any risks of damaging your device or voiding its warranty.</li>
11
- </ul>
12
- <h2>How to Download and Install Real Drag Bike Racing Mod Apk</h2>
13
- <p>Downloading and installing Real Drag Bike Racing Mod Apk is very easy and simple. Just follow these steps:</p>
14
- <ol>
15
- <li><b>Step 1:</b> Download the mod apk file from a trusted source. You can use one of these links to download the latest version of Real Drag Bike Racing Mod Apk.</li>
16
- <li><b>Step 2:</ <b>Step 2:</b> Enable unknown sources on your device. To do this, go to your device settings, then security, and then toggle on the option that allows you to install apps from unknown sources. This will enable you to install the mod apk file that you downloaded.</li>
17
- <li><b>Step 3:</b> Install the mod apk file and enjoy the game. To do this, locate the mod apk file in your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, you can launch the game and start playing with all the mod features.</li>
18
- </ol>
19
- <h2>Tips and Tricks for Playing Real Drag Bike Racing Mod Apk</h2>
20
- <p>Real Drag Bike Racing Mod Apk is a fun and addictive game that will test your skills and reflexes as a drag racer. Here are some tips and tricks that will help you improve your performance and win more races:</p>
21
- <ul>
22
- <li><b>Tip 1:</b> Choose the right bike for each race. Different bikes have different attributes, such as speed, acceleration, handling, and nitro. You should choose a bike that suits the track and the difficulty level of the race. For example, if the track has many turns, you should choose a bike with good handling and nitro. If the track is straight and long, you should choose a bike with high speed and acceleration.</li>
23
- <li><b>Tip 2:</b> Upgrade your bike regularly. As you progress in the game, you will face tougher opponents and more challenging tracks. You should upgrade your bike to keep up with them and gain an advantage. You can upgrade your bike's engine, transmission, tires, nitro, and body with the money and coins that you earn from winning races or from the mod apk.</li>
24
- <li><b>Tip 3:</b> Use nitro wisely. Nitro is a powerful boost that can help you speed up and overtake your rivals. However, it is also limited and takes time to recharge. You should use nitro strategically, such as when you are behind or when you are close to the finish line. You should also avoid using nitro when you are turning or braking, as it can make you lose control of your bike.</li>
25
- </ul>
26
- <h2>Pros and Cons of Real Drag Bike Racing Mod Apk</h2>
27
- <p>Real Drag Bike Racing Mod Apk is a great game for racing fans, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of this mod apk:</p>
28
- <table>
29
- <tr>
30
- <th>Pros</th>
31
- <th>Cons</th>
32
- </tr>
33
- <tr>
34
- <td><b>Realistic graphics and sound effects:</b> The game has stunning graphics and sound effects that make you feel like you are in a real drag race. You can see the details of your bike, the environment, and the other racers. You can also hear the roar of your engine, the screech of your tires, and the cheers of the crowd.</td>
35
- <td><b>Requires internet connection:</b> The game requires an internet connection to run properly. This means that you cannot play it offline or in areas with poor network coverage. This can be inconvenient for some players who want to enjoy the game anytime and anywhere.</td>
36
- </tr>
37
- <tr>
38
- <td><b>Easy and smooth controls:</b> The game has easy and smooth controls that make it suitable for players of all ages and skill levels. You can control your bike by tapping on the screen or tilting your device. You can also customize your controls according to your preference in the settings menu.</td>
39
- <td><b>May not be compatible with some devices:</b> The game may not work well on some devices due to their specifications or operating systems. Some players have reported issues such as crashes, glitches, or lagging while playing the game. You should check the compatibility of your device before downloading and installing the mod apk.</td>
40
- </tr>
41
- <tr>
42
- <td><b>Various modes and challenges:</b> The game has various modes and challenges that keep you entertained and challenged. You can play in career mode, tournament mode, or online mode. You can also participate in daily missions, weekly events, or special races that offer rewards and bonuses.</td>
43
- <td></td>
44
- </tr>
45
- </table>
46
- <h2>Conclusion and FAQs</h2>
47
- <p>In conclusion, Real Drag Bike Racing Mod Apk is a fantastic game for racing enthusiasts who want to experience the thrill of drag racing on their mobile devices. It offers many features that enhance the gameplay, such as unlimited money, coins, bikes, no ads, no root required, realistic graphics, sound effects, easy controls, various modes, challenges, etc. It also has some drawbacks that may affect some players, such as requiring internet connection or not being compatible with some devices. However, these are minor issues compared to the to use, as long as you download it from a reliable source and follow the installation instructions carefully. However, you should be aware that using mod apk files may violate the terms and conditions of the original game and may result in your account being banned or suspended. You should use this mod apk at your own risk and discretion.</p>
48
- <p>download real drag bike racing mod apk unlimited money<br />
49
- real drag bike racing mod apk latest version<br />
50
- how to download real drag bike racing mod apk for android<br />
51
- real drag bike racing mod apk free download uptodown<br />
52
- real drag bike racing mod apk offline<br />
53
- download game real drag bike racing indonesia mod apk<br />
54
- real drag bike racing mod apk 2023<br />
55
- real drag bike racing mod apk hack<br />
56
- download game real drag bike racing 3d mod apk<br />
57
- real drag bike racing mod apk no ads<br />
58
- download game real drag bike racing 2 mod apk<br />
59
- real drag bike racing mod apk unlimited coins and gems<br />
60
- real drag bike racing mod apk revdl<br />
61
- download game real drag bike racing hd mod apk<br />
62
- real drag bike racing mod apk rexdl<br />
63
- download game real drag bike racing pro mod apk<br />
64
- real drag bike racing mod apk unlock all bikes<br />
65
- real drag bike racing mod apk pure<br />
66
- download game real drag bike racing online mod apk<br />
67
- real drag bike racing mod apk android 1<br />
68
- download game real drag bike racing simulator mod apk<br />
69
- real drag bike racing mod apk happymod<br />
70
- download game real drag bike racing new version mod apk<br />
71
- real drag bike racing mod apk unlimited everything<br />
72
- real drag bike racing mod apk obb<br />
73
- download game real drag bike racing extreme mod apk<br />
74
- real drag bike racing mod apk old version<br />
75
- download game real drag bike racing 4x4 mod apk<br />
76
- real drag bike racing mod apk cheat<br />
77
- download game real drag bike racing nitro mod apk<br />
78
- real drag bike racing mod apk update<br />
79
- download game real drag bike racing classic mod apk<br />
80
- real drag bike racing mod apk full version<br />
81
- download game real drag bike racing adventure mod apk<br />
82
- real drag bike racing mod apk data<br />
83
- download game real drag bike racing championship mod apk<br />
84
- real drag bike racing mod apk vip<br />
85
- download game real drag bike racing turbo mod apk<br />
86
- real drag bike racing mod apk mega mod<br />
87
- download game real drag bike racing legend mod apk<br />
88
- real drag bike racing mod apk all unlocked<br />
89
- download game real drag bike racing city mod apk<br />
90
- real drag bike racing mod apk unlimited fuel and nitro<br />
91
- download game real drag bike racing world tour mod apk<br />
92
- real drag bike racing mod apk no root<br />
93
- download game real drag bike racing ultimate mod apk<br />
94
- real drag bike racing mod apk easy win<br />
95
- download game real drag bike racing supercharged mod apk<br />
96
- real drag bike racing mod apk high graphics</p>
97
- <li><b>FAQ 3:</b> How can I get more money and coins in Real Drag Bike Racing Mod Apk?</li>
98
- <p>With Real Drag Bike Racing Mod Apk, you will get unlimited money and coins that you can use to buy, upgrade, and customize your bikes. You will also earn money and coins from winning races, completing missions, and participating in events. However, if you want to get more money and coins faster, you can use some of these tricks:</p>
99
- <ul>
100
- <li>Watch videos: You can watch short videos in the game to get extra money and coins. You can do this once every few hours.</li>
101
- <li>Invite friends: You can invite your friends to play the game and get rewards for each friend who joins. You can also challenge your friends to online races and earn money and coins from beating them.</li>
102
- <li>Use cheats: You can use some of the cheats that are available online to hack the game and get unlimited money and coins. However, this is not recommended, as it may harm your device or get your account banned.</li>
103
- </ul>
104
- <li><b>FAQ 4:</b> How can I contact the developer of Real Drag Bike Racing Mod Apk?</li>
105
- <p>If you have any questions, suggestions, or feedback about Real Drag Bike Racing Mod Apk, you can contact the developer through their email address: [email protected]. You can also follow them on their social media accounts: Facebook, Twitter, Instagram, and YouTube.</p>
106
- <li><b>FAQ 5:</b> What are some alternatives to Real Drag Bike Racing Mod Apk?</li>
107
- <p>If you are looking for some other games that are similar to Real Drag Bike Racing Mod Apk, you can try some of these alternatives:</p>
108
- <ul>
109
- <li><b>Drag Racing:</b> This is another popular drag racing game that lets you race against other players online or offline. You can choose from over 50 cars, customize them, and upgrade them. You can also join a team, chat with other racers, and compete in tournaments.</li>
110
- <li><b>Bike Race Free:</b> This is a fun and addictive bike racing game that lets you perform stunts and tricks on different tracks. You can play with your friends or with millions of players worldwide. You can also create your own levels and share them with others.</li>
111
- <li><b>Traffic Rider:</b> This is a realistic motorcycle racing game that lets you ride your bike through endless highway traffic. You can choose from over 20 bikes, upgrade them, and unlock new modes. You can also enjoy the first-person view, the career mode, and the online leaderboards.</li>
112
- </ul></p> 197e85843d<br />
113
- <br />
114
- <br />
 
spaces/1phancelerku/anime-remove-background/EvoWars.io A Unique and Exciting IO Game with Dynamic Gameplay.md DELETED
@@ -1,113 +0,0 @@
1
- <br />
2
- <h1>EvoWars.io: A Fun and Addictive Online Battle Game</h1>
3
- <p>If you are looking for a game that is simple, fast-paced, and exciting, then you might want to try EvoWars.io. This is an IO game that lets you fight, kill, and evolve in a top-down online battle arena. You can collect orbs and battle other players to evolve your warrior into different forms, each with its own weapon and abilities. You can also use a sprint ability to chase or escape from your enemies, but at the cost of your experience points. The game is easy to play but hard to master, as you need to balance your size, speed, and range to survive and dominate the battlefield.</p>
4
- <h2>What is EvoWars.io?</h2>
5
- <p>EvoWars.io is an IO game that was released in March 2018 by Night Steed Games. It is available to play on web browsers (desktop and mobile), Android, and iOS devices. The game is inspired by some of the most popular IO games, such as Agar.io and Slither.io, where you have to grow bigger and stronger by collecting orbs and killing other players. However, EvoWars.io adds a twist to the formula by introducing an evolution system that changes your character's appearance and weapon every time you level up. There are currently 25 levels and evolutions to unlock, ranging from a caveman with a club to a demon with a scythe.</p>
6
- <h2>evowars io apkmody</h2><br /><p><b><b>Download Zip</b> &#127383; <a href="https://jinyurl.com/2uNRIh">https://jinyurl.com/2uNRIh</a></b></p><br /><br />
7
- <h2>How to play EvoWars.io?</h2>
8
- <p>The gameplay of EvoWars.io is simple and intuitive. You just need to move your mouse to control your character's movement, left click to attack, and right click to sprint. Your goal is to collect orbs and kill other players to gain experience and points. Every time you fill up your experience bar, you level up and evolve into a new form. Each evolution improves your weapon range but slows down your movement speed. You also lose some of your experience points when you use the sprint ability, so use it wisely. The game ends when you die or when you reach the maximum level of 25.</p>
9
- <h2>What are the features of EvoWars.io?</h2>
10
- <p>EvoWars.io has many features that make it fun and addictive to play. Some of them are:</p>
11
- <ul>
12
- <li>Intense slashing gameplay that lets you eliminate opponents with one hit</li>
13
- <li>More than 15 character models that you can evolve into, each with its own weapon and style</li>
14
- <li>A sprint ability that gives you a speed boost at the expense of your experience points</li>
15
- <li>A leaderboard that shows the top players in the game</li>
16
- <li>An option to watch an ad video to revive once per round</li>
17
- <li>An option to activate bonuses such as the Minotaur bonus that gives you double experience and points when you reach level 23</li>
18
- <li>A chat system that lets you communicate with other players</li>
19
- <li>A user-friendly interface that shows your level, experience bar, score, kills, deaths, and ping</li>
20
- <li>Smooth graphics and animations that create a dynamic and immersive atmosphere</li>
21
- <li>Sound effects and music that enhance the gameplay experience</li>
22
- </ul>
23
- <h2>What are the tips and tricks for EvoWars.io?</h2>
24
- <p>EvoWars.io may seem easy at first glance, but it can be challenging and competitive as well. Here are some tips and tricks that can help you improve your skills and performance in the game:</p>
25
- <ul>
26
- <li>Focus on collecting orbs around the outside of the map when you are low level. This will help you level up faster and avoid stronger players.</li>
27
- <li>Your weapon range improves as you get bigger, but you also become slower. So remember to remain cautious and alert when you are near other players.</li>
28
- <li>When you are smaller, you can move faster. Use this to your advantage and play more evasively when you are low level.</li>
29
- <li>Use your sprint ability sparingly. It can help you chase or escape from enemies, but but it also drains your experience points. So use it only when necessary and avoid wasting it.</li>
30
- <li>Try to hit your enemies from behind or from the side. This will give you a better chance of landing a hit and avoiding their attacks.</li>
31
- <li>Don't be afraid to retreat when you are outnumbered or outmatched. Sometimes, it is better to survive and fight another day than to risk losing everything.</li>
32
- <li>Watch out for the red circles on the map. These indicate the locations of the Minotaur bonus, which can give you a huge advantage if you activate it.</li>
33
- <li>Have fun and enjoy the game. Don't get too frustrated or angry if you lose. Learn from your mistakes and try again.</li>
34
- </ul>
35
- <h2>What are the mod features of EvoWars.io?</h2>
36
- <p>If you want to enhance your gaming experience, you can try the EvoWars.io mod APK from APKMODY. This is a modified version of the game that gives you some extra features and benefits, such as:</p>
37
- <ul>
38
- <li>Unlimited coins that you can use to buy skins and accessories for your character</li>
39
- <li>Unlocked all levels and evolutions that you can access without having to collect orbs or kill other players</li>
40
- <li>No ads that can interrupt your gameplay or slow down your device</li>
41
- <li>Easy installation and compatibility with most Android devices</li>
42
- </ul>
43
- <p>To download and install the EvoWars.io mod APK, you just need to follow these simple steps:</p>
44
- <ol>
45
- <li>Go to the APKMODY website and search for EvoWars.io mod APK</li>
46
- <li>Click on the download button and wait for the file to be downloaded</li>
47
- <li>Open the file and tap on install</li>
48
- <li>Allow unknown sources if prompted by your device settings</li>
49
- <li>Launch the game and enjoy the mod features</li>
50
- </ol>
51
- <h2>What are the alternatives to EvoWars.io?</h2>
52
- <p>If you like EvoWars.io, you might also like some of these similar IO games that offer similar gameplay and features:</p>
53
- | Game | Description | | --- | --- | | Brutal.io | A game where you control a car with a flail and try to smash other players with it | | ZombsRoyale.io | A game where you parachute into a map with 99 other players and try to be the last one standing | | WormsZone.io | A game where you control a worm and try to eat as much food as possible while avoiding other worms | | Starve.io | A game where you have to survive in a harsh environment by gathering resources, crafting items, and fighting enemies | | Mope.io | A game where you start as a mouse and try to evolve into different animals by eating food and water | <h2>Conclusion</h2>
54
- <p>EvoWars.io is a fun and addictive online battle game that lets you fight, kill, and evolve in a top-down arena. You can collect orbs and battle other players to level up and unlock different character models and weapons. You can also use a sprint ability to boost your speed at the cost of your experience points. The game is easy to play but hard to master, as you need to balance your size, speed, and range to survive and dominate the battlefield. If you want to enhance your gaming experience, you can try the EvoWars.io mod APK from APKMODY that gives you unlimited coins, unlocked levels, no ads, and more. You can also check out some of the alternatives to EvoWars.io that offer similar gameplay and features. EvoWars.io is a game that will keep you entertained and engaged for hours. So what are you waiting for? Join the battle and evolve now!</p>
55
- <h2>FAQs</h2>
56
- <p>Here are some of the frequently asked questions about EvoWars.io:</p>
57
- <p>evowars io apk download free<br />
58
- evowars io mod apk unlimited money<br />
59
- evowars io game online play<br />
60
- evowars io hack apk android<br />
61
- evowars io cheats codes pc<br />
62
- evowars io unblocked games 66<br />
63
- evowars io apk mod menu<br />
64
- evowars io tips and tricks<br />
65
- evowars io best evolution strategy<br />
66
- evowars io apk latest version<br />
67
- evowars io mod apk no ads<br />
68
- evowars io gameplay walkthrough<br />
69
- evowars io skins unlock all<br />
70
- evowars io hack apk ios<br />
71
- evowars io review rating<br />
72
- evowars io apk offline mode<br />
73
- evowars io mod apk god mode<br />
74
- evowars io wiki guide<br />
75
- evowars io all evolutions list<br />
76
- evowars io apk pure download<br />
77
- evowars io mod apk revdl<br />
78
- evowars io update new features<br />
79
- evowars io reddit community<br />
80
- evowars io skins customizer<br />
81
- evowars io hack apk download<br />
82
- evowars io mod apk happymod<br />
83
- evowars io tutorial beginner<br />
84
- evowars io discord server link<br />
85
- evowars io skins names generator<br />
86
- evowars io hack apk 2023<br />
87
- evowars io mod apk rexdl<br />
88
- evowars io challenge mode hard<br />
89
- evowars io youtube video gameplay<br />
90
- evowars io skins editor online<br />
91
- evowars io hack apk unlimited orbs<br />
92
- evowars io mod apk an1.com<br />
93
- evowars io leaderboard top players<br />
94
- evowars io facebook fan page<br />
95
- evowars io skins maker free<br />
96
- evowars io hack apk 2022<br />
97
- evowars io mod apk android 1.com <br />
98
- evowars io achievements unlock guide <br />
99
- evowars io instagram official account <br />
100
- evowars io skins creator app <br />
101
- evowars io hack apk no root <br />
102
- evowars io mod apk apkpure <br />
103
- evowars io controls keyboard settings <br />
104
- evowars io twitter official handle <br />
105
- evowars io skins download png</p>
106
- <h3>Q: How many players can play EvoWars.io at the same time?</h3>
107
- <p>A: EvoWars.io can support up to 100 players per server. You can join any server that has available slots or create your own private server with a password.</p>
108
- <h3>Q: How can I change my character's name, skin, or accessory in EvoWars.io?</h3>
109
- <p>A: You can change your character's name by typing it in the box below the play button. You can change your character's skin or accessory by clicking on the shop button on the top right corner of the screen. You can buy skins or accessories with coins that you earn by playing the game or watching ads. You can also get unlimited coins by using the EvoWars.io mod APK from APKMODY.</p>
110
- <h3>Q: How can I report a bug or a problem in EvoWars.io?</h3>
111
- <p>A: You can report a bug or a problem in EvoWars.io by contacting the developers through their email <p>A: Yes, EvoWars.io is free to play on web browsers, Android, and iOS devices. You don't need to pay anything to enjoy the game. However, you can support the developers by buying coins or watching ads, which can help them improve the game and add more features.</p> 401be4b1e0<br />
112
- <br />
113
- <br />
 
spaces/1phancelerku/anime-remove-background/Explore the Dungeon and Fight the Boss in Pixel Blade M VIP APK.md DELETED
@@ -1,129 +0,0 @@
1
-
2
- <h1>Pixel Blade M VIP APK: A Review of the Action RPG Game</h1>
3
- <p>If you are looking for a pixel-style 3D action RPG game that offers quick and exciting gameplay, various weapons and skills, and a challenging dungeon adventure, then you might want to check out Pixel Blade M VIP APK. This is a game developed by PixelStar Games, which is the VIP version of the original Pixel Blade game. In this article, we will review the features, installation process, pros and cons, and FAQs of Pixel Blade M VIP APK.</p>
4
- <h2>What is Pixel Blade M VIP APK?</h2>
5
- <p>Pixel Blade M VIP APK is an Android game that belongs to the action RPG genre. It is set in a pixel world where you play as the last pixel hero who has to collect weapons and conquer dungeons to save the world. The game has pixel-style graphics, 3D effects, and a hack and slash gameplay that will keep you entertained for hours.</p>
6
- <h2>pixel blade m vip apk</h2><br /><p><b><b>DOWNLOAD</b> &#128505; <a href="https://jinyurl.com/2uNMFY">https://jinyurl.com/2uNMFY</a></b></p><br /><br />
7
- <p>As the VIP version of the game, Pixel Blade M VIP APK offers some exclusive benefits for the players, such as:</p>
8
- <ul>
9
- <li>500 GEM (click top vip button)</li>
10
- <li>Remove ads (banner)</li>
11
- </ul>
12
- <p>The game also has regular updates that add new features and improvements to the gameplay.</p>
13
- <h3>Features of Pixel Blade M VIP APK</h3>
14
- <p>Pixel Blade M VIP APK has many features that make it an enjoyable and addictive action RPG game. Here are some of them:</p>
15
- <h4>Quick action and a variety of skills</h4>
16
- <p>The game has a fast-paced and dynamic gameplay that requires you to use different skills and strategies to defeat the enemies. You can use various buttons to perform attacks, dodge, jump, and use special skills. You can also customize your skill set according to your preference and play style.</p>
17
- <h4>Various weapon skills and upgrade systems</h4>
18
- <p>The game has a wide range of weapons that you can collect and use in the dungeon. Each weapon has its own skill and attribute that can affect your performance in combat. You can also upgrade your weapons and equipment using the materials that you obtain from hunting monsters or mining. You can also advance your weapons to unlock new skills and effects.</p>
19
- <h4>Various costumes and armor</h4>
20
- <p>The game allows you to change your appearance by wearing different costumes and armor. You can choose from various styles and colors that suit your taste. The costumes and armor also have different stats that can boost your defense, attack, speed, or other attributes.</p>
21
- <h4>Mine system and craft system</h4>
22
- <p>The game has a mine system that lets you obtain gems and potions for free. You can use these items to enhance your weapons, equipment, or skills. The game also has a craft system that lets you create new items using the materials that you collect from the dungeon or the mine.</p>
23
- <p>pixel blade m vip mod apk<br />
24
- pixel blade m vip hack apk<br />
25
- pixel blade m vip apk download<br />
26
- pixel blade m vip apk free<br />
27
- pixel blade m vip apk latest version<br />
28
- pixel blade m vip apk unlimited money<br />
29
- pixel blade m vip apk android<br />
30
- pixel blade m vip apk offline<br />
31
- pixel blade m vip apk 9.2.9<br />
32
- pixel blade m vip apk 2023<br />
33
- pixel blade m vip apk rexdl<br />
34
- pixel blade m vip apk revdl<br />
35
- pixel blade m vip apk apkpure<br />
36
- pixel blade m vip apk happymod<br />
37
- pixel blade m vip apk moddroid<br />
38
- pixel blade m vip apk an1<br />
39
- pixel blade m vip apk obb<br />
40
- pixel blade m vip apk data<br />
41
- pixel blade m vip apk full version<br />
42
- pixel blade m vip apk no ads<br />
43
- pixel blade m vip game apk<br />
44
- pixel blade m vip rpg game mod apk<br />
45
- pixel blade m vip 3d action rpg game hack apk<br />
46
- download game pixel blade m vip mod apk<br />
47
- download game pixel blade m vip hack apk<br />
48
- download game pixel blade m vip free apk<br />
49
- download game pixel blade m vip latest version apk<br />
50
- download game pixel blade m vip unlimited money apk<br />
51
- download game pixel blade m vip android apk<br />
52
- download game pixel blade m vip offline apk<br />
53
- download game pixel blade m vip 9.2.9 apk<br />
54
- download game pixel blade m vip 2023 apk<br />
55
- download game pixel blade m vip rexdl apk<br />
56
- download game pixel blade m vip revdl apk<br />
57
- download game pixel blade m vip apkpure apk<br />
58
- download game pixel blade m vip happymod apk<br />
59
- download game pixel blade m vip moddroid apk<br />
60
- download game pixel blade m vip an1 apk<br />
61
- download game pixel blade m vip obb data apk<br />
62
- download game pixel blade m vip full version no ads apk<br />
63
- how to install pixel blade m vip mod hack apk on android device<br />
64
- how to play pixel blade m vip offline without internet connection on android device <br />
65
- how to get unlimited money gems and weapons in pixel blade m vip mod hack apk on android device <br />
66
- how to update to the latest version of pixel blade m vip mod hack apk on android device <br />
67
- how to fix the crash issue of pixel blade m vip mod hack apk on android 12 device <br />
68
- how to craft weapons and armor in pixel blade m vip mod hack rpg game on android device <br />
69
- how to mine gems and potions in pixel blade m vip mod hack rpg game on android device <br />
70
- how to raid bosses and dungeons in pixel blade m vip mod hack rpg game on android device <br />
71
- how to customize your character and costume in pixel blade m vip mod hack rpg game on android device</p>
72
- <h4>Boss raid</h4>
73
- <p>The game has a boss raid feature that lets you challenge powerful bosses in the dungeon. You can team up with other players online or play solo to defeat the bosses and get rewards. The bosses have different patterns and abilities that require you to use your skills wisely.</p>
74
- <h3>How to download and install Pixel Blade M VIP APK?</h3>
75
- <p>If you want to play Pixel Blade M VIP APK on your Android device, you need to follow these steps:</p>
76
- <h4>Requirements and compatibility</h4>
77
- <p>Before you download and install the game, make sure that your device meets these requirements:</p>
78
- <ul>
79
- <li>Android version 8.0 or higher</li>
80
- <li>At least 100 MB of free storage space</li>
81
- <li>A stable internet connection</li>
82
- </ul>
83
- <p>The game The game is compatible with most Android devices, but some features may not work properly on some models or versions. If you encounter any problems while playing the game, you can contact the developer through their email or social media accounts. <h4>Steps to download and install</h4>
84
- <p>After you have checked the requirements and compatibility, you can follow these steps to download and install the game:</p>
85
- <ol>
86
- <li>Go to the official website of PixelStar Games or click on this link: [Pixel Blade M VIP APK].</li>
87
- <li>Click on the download button and wait for the APK file to be downloaded to your device.</li>
88
- <li>Once the download is complete, locate the APK file in your device's file manager and tap on it to install it.</li>
89
- <li>If you see a warning message that says "Install blocked", go to your device's settings and enable the option to allow installation from unknown sources.</li>
90
- <li>Follow the instructions on the screen to complete the installation process.</li>
91
- <li>Launch the game and enjoy playing Pixel Blade M VIP APK.</li>
92
- </ol>
93
- <h3>Pros and cons of Pixel Blade M VIP APK</h3>
94
- <p>Like any other game, Pixel Blade M VIP APK has its own advantages and disadvantages. Here are some of them:</p>
95
- <h4>Pros</h4>
96
- <ul>
97
- <li>The game has a simple and intuitive interface that makes it easy to navigate and play.</li>
98
- <li>The game has a pixel-style graphics that gives it a retro and nostalgic feel.</li>
99
- <li>The game has a fast and smooth gameplay that offers a lot of action and fun.</li>
100
- <li>The game has a variety of weapons, skills, costumes, and items that you can collect and customize.</li>
101
- <li>The game has a mine system and a craft system that let you create your own items and enhance your equipment.</li>
102
- <li>The game has a boss raid feature that lets you challenge powerful bosses and get rewards.</li>
103
- <li>The game has a VIP version that gives you exclusive benefits such as free gems and no ads.</li>
104
- </ul>
105
- <h4>Cons</h4>
106
- <ul>
107
- <li>The game may not be compatible with some devices or versions of Android.</li>
108
- <li>The game may have some bugs or glitches that affect the gameplay or performance.</li>
109
- <li>The game may have some ads or in-app purchases that may annoy some players.</li>
110
- </ul>
111
- <h2>Conclusion</h2>
112
- <p>Pixel Blade M VIP APK is an action RPG game that lets you play as the last pixel hero who has to save the world from evil. The game has pixel-style graphics, 3D effects, and a hack and slash gameplay that will keep you entertained for hours. The game also has many features that make it enjoyable and addictive, such as various weapons, skills, costumes, items, mine system, craft system, and boss raid. The game also has a VIP version that gives you exclusive benefits such as free gems and no ads. If you are looking for a pixel-style 3D action RPG game that offers quick and exciting gameplay, various weapons and skills, and a challenging dungeon adventure, then you might want to check out Pixel Blade M VIP APK.</p>
113
- <h2>FAQs</h2>
114
- <p>Here are some frequently asked questions about Pixel Blade M VIP APK:</p>
115
- <ol>
116
- <li><b>What is the difference between Pixel Blade M VIP APK and Pixel Blade M APK?</b></li>
117
- <p>Pixel Blade M VIP APK is the VIP version of Pixel Blade M APK. It offers some exclusive benefits for the players, such as 500 GEM (click top vip button) and remove ads (banner). The VIP version also has regular updates that add new features and improvements to the gameplay.</p>
118
- <li><b>Is Pixel Blade M VIP APK safe to download and install?</b></li>
119
- <p>Yes, Pixel Blade M VIP APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download the game from trusted sources such as the official website of PixelStar Games or this link: [Pixel Blade M VIP APK].</p>
120
- <li><b>How can I get more gems in Pixel Blade M VIP APK?</b></li>
121
- <p>You can get more gems in Pixel Blade M VIP APK by using the mine system or by clicking on the top vip button. You can also get gems by completing quests, achievements, or events in the game. You can also buy gems using real money through in-app purchases.</p>
122
- <li><b>How can I advance my weapons in Pixel Blade M VIP APK?</b></li>
123
- <p>You can advance your weapons in Pixel Blade M VIP APK by using the upgrade system or the craft system. You need to have enough materials and gold to upgrade or craft your weapons. You can get materials from hunting monsters or mining or from the craft system. You can also advance your weapons by using the gems that you obtain from the mine system or the vip button.</p>
124
- <li><b>How can I play Pixel Blade M VIP APK with other players?</b></li>
125
- <p>You can play Pixel Blade M VIP APK with other players by using the boss raid feature. You can join or create a room and invite other players online or play solo to challenge the bosses in the dungeon. You can also chat with other players in the game and make friends.</p>
126
- </ol>
127
- <p>I hope this article has helped you learn more about Pixel Blade M VIP APK and how to play it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing Pixel Blade M VIP APK!</p> 401be4b1e0<br />
128
- <br />
129
- <br />
 
spaces/2ndelement/voicevox/docs/VOICEVOX音声合成エンジンとの連携.md DELETED
@@ -1,7 +0,0 @@
1
- メモ書き程度ですが、どういう方針で開発を進めているかを紹介します。
2
-
3
- - バージョンが上がっても、`/audio_query`で返ってくる値をそのまま`/synthesis`に POST すれば音声合成できるようにする予定です
4
- - `AudioQuery`のパラメータは増えますが、なるべくデフォルト値で以前と変わらない音声が生成されるようにします
5
- - バージョン 0.7 から音声スタイルが実装されました。スタイルの情報は`/speakers`から取得できます
6
- - スタイルの情報にある`style_id`を`speaker`に指定することで、今まで通り音声合成ができます
7
- - style_id の指定先が speaker なのは互換性のためです
spaces/52Hz/SRMNet_AWGN_denoising/model/SRMNet.py DELETED
@@ -1,227 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
-
4
- ##---------- Basic Layers ----------
5
- def conv3x3(in_chn, out_chn, bias=True):
6
- layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias)
7
- return layer
8
-
9
- def conv(in_channels, out_channels, kernel_size, bias=False, stride=1):
10
- return nn.Conv2d(
11
- in_channels, out_channels, kernel_size,
12
- padding=(kernel_size // 2), bias=bias, stride=stride)
13
-
14
- def bili_resize(factor):
15
- return nn.Upsample(scale_factor=factor, mode='bilinear', align_corners=False)
16
-
17
- ##---------- Basic Blocks ----------
18
- class UNetConvBlock(nn.Module):
19
- def __init__(self, in_size, out_size, downsample):
20
- super(UNetConvBlock, self).__init__()
21
- self.downsample = downsample
22
- self.block = SK_RDB(in_channels=in_size, growth_rate=out_size, num_layers=3)
23
- if downsample:
24
- self.downsample = PS_down(out_size, out_size, downscale=2)
25
-
26
- def forward(self, x):
27
- out = self.block(x)
28
- if self.downsample:
29
- out_down = self.downsample(out)
30
- return out_down, out
31
- else:
32
- return out
33
-
34
- class UNetUpBlock(nn.Module):
35
- def __init__(self, in_size, out_size):
36
- super(UNetUpBlock, self).__init__()
37
- # self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2, bias=True)
38
- self.up = PS_up(in_size, out_size, upscale=2)
39
- self.conv_block = UNetConvBlock(in_size, out_size, False)
40
-
41
- def forward(self, x, bridge):
42
- up = self.up(x)
43
- out = torch.cat([up, bridge], dim=1)
44
- out = self.conv_block(out)
45
- return out
46
-
47
- ##---------- Resizing Modules (Pixel(Un)Shuffle) ----------
48
- class PS_down(nn.Module):
49
- def __init__(self, in_size, out_size, downscale):
50
- super(PS_down, self).__init__()
51
- self.UnPS = nn.PixelUnshuffle(downscale)
52
- self.conv1 = nn.Conv2d((downscale**2) * in_size, out_size, 1, 1, 0)
53
-
54
- def forward(self, x):
55
- x = self.UnPS(x) # h/2, w/2, 4*c
56
- x = self.conv1(x)
57
- return x
58
-
59
- class PS_up(nn.Module):
60
- def __init__(self, in_size, out_size, upscale):
61
- super(PS_up, self).__init__()
62
-
63
- self.PS = nn.PixelShuffle(upscale)
64
- self.conv1 = nn.Conv2d(in_size//(upscale**2), out_size, 1, 1, 0)
65
-
66
- def forward(self, x):
67
- x = self.PS(x) # h/2, w/2, 4*c
68
- x = self.conv1(x)
69
- return x
70
-
71
- ##---------- Selective Kernel Feature Fusion (SKFF) ----------
72
- class SKFF(nn.Module):
73
- def __init__(self, in_channels, height=3, reduction=8, bias=False):
74
- super(SKFF, self).__init__()
75
-
76
- self.height = height
77
- d = max(int(in_channels / reduction), 4)
78
-
79
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
80
- self.conv_du = nn.Sequential(nn.Conv2d(in_channels, d, 1, padding=0, bias=bias), nn.PReLU())
81
-
82
- self.fcs = nn.ModuleList([])
83
- for i in range(self.height):
84
- self.fcs.append(nn.Conv2d(d, in_channels, kernel_size=1, stride=1, bias=bias))
85
-
86
- self.softmax = nn.Softmax(dim=1)
87
-
88
- def forward(self, inp_feats):
89
- batch_size, n_feats, H, W = inp_feats[1].shape
90
-
91
- inp_feats = torch.cat(inp_feats, dim=1)
92
- inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3])
93
-
94
- feats_U = torch.sum(inp_feats, dim=1)
95
- feats_S = self.avg_pool(feats_U)
96
- feats_Z = self.conv_du(feats_S)
97
-
98
- attention_vectors = [fc(feats_Z) for fc in self.fcs]
99
- attention_vectors = torch.cat(attention_vectors, dim=1)
100
- attention_vectors = attention_vectors.view(batch_size, self.height, n_feats, 1, 1)
101
-
102
- attention_vectors = self.softmax(attention_vectors)
103
- feats_V = torch.sum(inp_feats * attention_vectors, dim=1)
104
-
105
- return feats_V
106
-
107
- ##---------- Dense Block ----------
108
- class DenseLayer(nn.Module):
109
- def __init__(self, in_channels, out_channels, I):
110
- super(DenseLayer, self).__init__()
111
- self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=3 // 2)
112
- self.relu = nn.ReLU(inplace=True)
113
- self.sk = SKFF(out_channels, height=2, reduction=8, bias=False)
114
-
115
- def forward(self, x):
116
- x1 = self.relu(self.conv(x))
117
- # output = torch.cat([x, x1], 1) # -> RDB
118
- output = self.sk((x, x1))
119
- return output
120
-
121
- ##---------- Selective Kernel Residual Dense Block (SK-RDB) ----------
122
- class SK_RDB(nn.Module):
123
- def __init__(self, in_channels, growth_rate, num_layers):
124
- super(SK_RDB, self).__init__()
125
- self.identity = nn.Conv2d(in_channels, growth_rate, 1, 1, 0)
126
- self.layers = nn.Sequential(
127
- *[DenseLayer(in_channels, in_channels, I=i) for i in range(num_layers)]
128
- )
129
- self.lff = nn.Conv2d(in_channels, growth_rate, kernel_size=1)
130
-
131
- def forward(self, x):
132
- res = self.identity(x)
133
- x = self.layers(x)
134
- x = self.lff(x)
135
- return res + x
136
-
137
- ##---------- testNet ----------
138
- class SRMNet(nn.Module):
139
- def __init__(self, in_chn=3, wf=96, depth=4):
140
- super(SRMNet, self).__init__()
141
- self.depth = depth
142
- self.down_path = nn.ModuleList()
143
- self.bili_down = bili_resize(0.5)
144
- self.conv_01 = nn.Conv2d(in_chn, wf, 3, 1, 1)
145
-
146
- # encoder of UNet
147
- prev_channels = 0
148
- for i in range(depth): # 0,1,2,3
149
- downsample = True if (i + 1) < depth else False
150
- self.down_path.append(UNetConvBlock(prev_channels + wf, (2 ** i) * wf, downsample))
151
- prev_channels = (2 ** i) * wf
152
-
153
- # decoder of UNet
154
- self.up_path = nn.ModuleList()
155
- self.skip_conv = nn.ModuleList()
156
- self.conv_up = nn.ModuleList()
157
- self.bottom_conv = nn.Conv2d(prev_channels, wf, 3, 1, 1)
158
- self.bottom_up = bili_resize(2 ** (depth-1))
159
-
160
- for i in reversed(range(depth - 1)):
161
- self.up_path.append(UNetUpBlock(prev_channels, (2 ** i) * wf))
162
- self.skip_conv.append(nn.Conv2d((2 ** i) * wf, (2 ** i) * wf, 3, 1, 1))
163
- self.conv_up.append(nn.Sequential(*[nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1), bili_resize(2 ** i)]))
164
- prev_channels = (2 ** i) * wf
165
-
166
- self.final_ff = SKFF(in_channels=wf, height=depth)
167
- self.last = conv3x3(prev_channels, in_chn, bias=True)
168
-
169
- def forward(self, x):
170
- img = x
171
- scale_img = img
172
-
173
- ##### shallow conv #####
174
- x1 = self.conv_01(img)
175
- encs = []
176
- ######## UNet ########
177
- # Down-path (Encoder)
178
- for i, down in enumerate(self.down_path):
179
- if i == 0:
180
- x1, x1_up = down(x1)
181
- encs.append(x1_up)
182
- elif (i + 1) < self.depth:
183
- scale_img = self.bili_down(scale_img)
184
- left_bar = self.conv_01(scale_img)
185
- x1 = torch.cat([x1, left_bar], dim=1)
186
- x1, x1_up = down(x1)
187
- encs.append(x1_up)
188
- else:
189
- scale_img = self.bili_down(scale_img)
190
- left_bar = self.conv_01(scale_img)
191
- x1 = torch.cat([x1, left_bar], dim=1)
192
- x1 = down(x1)
193
-
194
- # Up-path (Decoder)
195
- ms_result = [self.bottom_up(self.bottom_conv(x1))]
196
- for i, up in enumerate(self.up_path):
197
- x1 = up(x1, self.skip_conv[i](encs[-i - 1]))
198
- ms_result.append(self.conv_up[i](x1))
199
-
200
- # Multi-scale selective feature fusion
201
- msff_result = self.final_ff(ms_result)
202
-
203
- ##### Reconstruct #####
204
- out_1 = self.last(msff_result) + img
205
-
206
- return out_1
207
-
208
-
209
- if __name__ == "__main__":
210
- from thop import profile
211
-
212
- input = torch.ones(1, 3, 256, 256, dtype=torch.float, requires_grad=False)
213
- model = SRMNet(in_chn=3, wf=96, depth=4)
214
- out = model(input)
215
- flops, params = profile(model, inputs=(input,))
216
- total = sum(p.numel() for p in model.parameters())
217
-
218
- # RDBlayer = SK_RDB(in_channels=64, growth_rate=64, num_layers=3)
219
- # print(RDBlayer)
220
- # out = RDBlayer(input)
221
- # flops, params = profile(RDBlayer, inputs=(input,))
222
-
223
- print('input shape:', input.shape)
224
- print('output shape', out.shape)
225
- print("-----------------------------------")
226
- print("Total params: %.4f M" % (total / 1e6))
227
- print("Total params: %.4f G" % (flops / 1e9))
 
spaces/801artistry/RVC801/julius/__init__.py DELETED
@@ -1,41 +0,0 @@
1
- # File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
2
- # Author: adefossez, 2020
3
-
4
- # flake8: noqa
5
- """
6
- .. image:: ../logo.png
7
-
8
- Julius contains different Digital Signal Processing algorithms implemented
9
- with PyTorch, so that they are differentiable and available on CUDA.
10
- Note that all the modules implemented here can be used with TorchScript.
11
-
12
- For now, I have implemented:
13
-
14
- - `julius.resample`: fast sinc resampling.
15
- - `julius.fftconv`: FFT based convolutions.
16
- - `julius.lowpass`: FIR low pass filter banks.
17
- - `julius.filters`: FIR high pass and band pass filters.
18
- - `julius.bands`: Decomposition of a waveform signal over mel-scale frequency bands.
19
-
20
- Along that, you might found useful utilities in:
21
-
22
- - `julius.core`: DSP related functions.
23
- - `julius.utils`: Generic utilities.
24
-
25
-
26
- Please checkout [the Github repository](https://github.com/adefossez/julius) for other informations.
27
- For a verification of the speed and correctness of Julius, check the benchmark module `bench`.
28
-
29
-
30
- This package is named in this honor of
31
- [Julius O. Smith](https://ccrma.stanford.edu/~jos/),
32
- whose books and website were a gold mine of information for me to learn about DSP. Go checkout his website if you want
33
- to learn more about DSP.
34
- """
35
-
36
- from .bands import SplitBands, split_bands
37
- from .fftconv import fft_conv1d, FFTConv1d
38
- from .filters import bandpass_filter, BandPassFilter
39
- from .filters import highpass_filter, highpass_filters, HighPassFilter, HighPassFilters
40
- from .lowpass import lowpass_filter, lowpass_filters, LowPassFilters, LowPassFilter
41
- from .resample import resample_frac, ResampleFrac
 
spaces/AI-Naga/Parking_Space_Counter/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Parking Space Counter
3
- emoji: ⚡
4
- colorFrom: red
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.18.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AIFILMS/StyleGANEX/models/stylegan2/model.py DELETED
@@ -1,768 +0,0 @@
1
- import math
2
- import random
3
- import torch
4
- from torch import nn
5
- from torch.nn import functional as F
6
- import numpy as np
7
-
8
- from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
9
-
10
-
11
- class PixelNorm(nn.Module):
12
- def __init__(self):
13
- super().__init__()
14
-
15
- def forward(self, input):
16
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
17
-
18
-
19
- def make_kernel(k):
20
- k = torch.tensor(k, dtype=torch.float32)
21
-
22
- if k.ndim == 1:
23
- k = k[None, :] * k[:, None]
24
-
25
- k /= k.sum()
26
-
27
- return k
28
-
29
-
30
- class Upsample(nn.Module):
31
- def __init__(self, kernel, factor=2):
32
- super().__init__()
33
-
34
- self.factor = factor
35
- kernel = make_kernel(kernel) * (factor ** 2)
36
- self.register_buffer('kernel', kernel)
37
-
38
- p = kernel.shape[0] - factor
39
-
40
- pad0 = (p + 1) // 2 + factor - 1
41
- pad1 = p // 2
42
-
43
- self.pad = (pad0, pad1)
44
-
45
- def forward(self, input):
46
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
47
-
48
- return out
49
-
50
-
51
- class Downsample(nn.Module):
52
- def __init__(self, kernel, factor=2):
53
- super().__init__()
54
-
55
- self.factor = factor
56
- kernel = make_kernel(kernel)
57
- self.register_buffer('kernel', kernel)
58
-
59
- p = kernel.shape[0] - factor
60
-
61
- pad0 = (p + 1) // 2
62
- pad1 = p // 2
63
-
64
- self.pad = (pad0, pad1)
65
-
66
- def forward(self, input):
67
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
68
-
69
- return out
70
-
71
-
72
- class Blur(nn.Module):
73
- def __init__(self, kernel, pad, upsample_factor=1):
74
- super().__init__()
75
-
76
- kernel = make_kernel(kernel)
77
-
78
- if upsample_factor > 1:
79
- kernel = kernel * (upsample_factor ** 2)
80
-
81
- self.register_buffer('kernel', kernel)
82
-
83
- self.pad = pad
84
-
85
- def forward(self, input):
86
- out = upfirdn2d(input, self.kernel, pad=self.pad)
87
-
88
- return out
89
-
90
-
91
- class EqualConv2d(nn.Module):
92
- def __init__(
93
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True, dilation=1 ## modified
94
- ):
95
- super().__init__()
96
-
97
- self.weight = nn.Parameter(
98
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
99
- )
100
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
101
-
102
- self.stride = stride
103
- self.padding = padding
104
- self.dilation = dilation ## modified
105
-
106
- if bias:
107
- self.bias = nn.Parameter(torch.zeros(out_channel))
108
-
109
- else:
110
- self.bias = None
111
-
112
- def forward(self, input):
113
- out = F.conv2d(
114
- input,
115
- self.weight * self.scale,
116
- bias=self.bias,
117
- stride=self.stride,
118
- padding=self.padding,
119
- dilation=self.dilation, ## modified
120
- )
121
-
122
- return out
123
-
124
- def __repr__(self):
125
- return (
126
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
127
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding}, dilation={self.dilation})" ## modified
128
- )
129
-
130
-
131
- class EqualLinear(nn.Module):
132
- def __init__(
133
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
134
- ):
135
- super().__init__()
136
-
137
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
138
-
139
- if bias:
140
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
141
-
142
- else:
143
- self.bias = None
144
-
145
- self.activation = activation
146
-
147
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
148
- self.lr_mul = lr_mul
149
-
150
- def forward(self, input):
151
- if self.activation:
152
- out = F.linear(input, self.weight * self.scale)
153
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
154
-
155
- else:
156
- out = F.linear(
157
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
158
- )
159
-
160
- return out
161
-
162
- def __repr__(self):
163
- return (
164
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
165
- )
166
-
167
-
168
- class ScaledLeakyReLU(nn.Module):
169
- def __init__(self, negative_slope=0.2):
170
- super().__init__()
171
-
172
- self.negative_slope = negative_slope
173
-
174
- def forward(self, input):
175
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
176
-
177
- return out * math.sqrt(2)
178
-
179
-
180
- class ModulatedConv2d(nn.Module):
181
- def __init__(
182
- self,
183
- in_channel,
184
- out_channel,
185
- kernel_size,
186
- style_dim,
187
- demodulate=True,
188
- upsample=False,
189
- downsample=False,
190
- blur_kernel=[1, 3, 3, 1],
191
- dilation=1, ##### modified
192
- ):
193
- super().__init__()
194
-
195
- self.eps = 1e-8
196
- self.kernel_size = kernel_size
197
- self.in_channel = in_channel
198
- self.out_channel = out_channel
199
- self.upsample = upsample
200
- self.downsample = downsample
201
- self.dilation = dilation ##### modified
202
-
203
- if upsample:
204
- factor = 2
205
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
206
- pad0 = (p + 1) // 2 + factor - 1
207
- pad1 = p // 2 + 1
208
-
209
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
210
-
211
- # to simulate transconv + blur
212
- # we use dilated transposed conv with blur kernel as weight + dilated transconv
213
- if dilation > 1: ##### modified
214
- blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1
215
- blur_weight[:,:,0,1] = 2
216
- blur_weight[:,:,1,0] = 2
217
- blur_weight[:,:,1,2] = 2
218
- blur_weight[:,:,2,1] = 2
219
- blur_weight[:,:,1,1] = 4
220
- blur_weight = blur_weight / 16.0
221
- self.register_buffer("blur_weight", blur_weight)
222
-
223
- if downsample:
224
- factor = 2
225
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
226
- pad0 = (p + 1) // 2
227
- pad1 = p // 2
228
-
229
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
230
-
231
- fan_in = in_channel * kernel_size ** 2
232
- self.scale = 1 / math.sqrt(fan_in)
233
- self.padding = kernel_size // 2 + dilation - 1 ##### modified
234
-
235
- self.weight = nn.Parameter(
236
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
237
- )
238
-
239
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
240
-
241
- self.demodulate = demodulate
242
-
243
- def __repr__(self):
244
- return (
245
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
246
- f'upsample={self.upsample}, downsample={self.downsample})'
247
- )
248
-
249
- def forward(self, input, style):
250
- batch, in_channel, height, width = input.shape
251
-
252
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
253
- weight = self.scale * self.weight * style
254
-
255
- if self.demodulate:
256
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
257
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
258
-
259
- weight = weight.view(
260
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
261
- )
262
-
263
- if self.upsample:
264
- input = input.view(1, batch * in_channel, height, width)
265
- weight = weight.view(
266
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
267
- )
268
- weight = weight.transpose(1, 2).reshape(
269
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
270
- )
271
-
272
- if self.dilation > 1: ##### modified
273
- # to simulate out = self.blur(out)
274
- out = F.conv_transpose2d(
275
- input, self.blur_weight.repeat(batch*in_channel,1,1,1), padding=0, groups=batch*in_channel, dilation=self.dilation//2)
276
- # to simulate the next line
277
- out = F.conv_transpose2d(
278
- out, weight, padding=self.dilation, groups=batch, dilation=self.dilation//2)
279
- _, _, height, width = out.shape
280
- out = out.view(batch, self.out_channel, height, width)
281
- return out
282
-
283
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
284
- _, _, height, width = out.shape
285
- out = out.view(batch, self.out_channel, height, width)
286
- out = self.blur(out)
287
-
288
- elif self.downsample:
289
- input = self.blur(input)
290
- _, _, height, width = input.shape
291
- input = input.view(1, batch * in_channel, height, width)
292
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
293
- _, _, height, width = out.shape
294
- out = out.view(batch, self.out_channel, height, width)
295
-
296
- else:
297
- input = input.view(1, batch * in_channel, height, width)
298
- out = F.conv2d(input, weight, padding=self.padding, groups=batch, dilation=self.dilation) ##### modified
299
- _, _, height, width = out.shape
300
- out = out.view(batch, self.out_channel, height, width)
301
-
302
- return out
303
-
304
-
305
- class NoiseInjection(nn.Module):
306
- def __init__(self):
307
- super().__init__()
308
-
309
- self.weight = nn.Parameter(torch.zeros(1))
310
-
311
- def forward(self, image, noise=None):
312
- if noise is None:
313
- batch, _, height, width = image.shape
314
- noise = image.new_empty(batch, 1, height, width).normal_()
315
- else: ##### modified, to make the resolution matches
316
- batch, _, height, width = image.shape
317
- _, _, height1, width1 = noise.shape
318
- if height != height1 or width != width1:
319
- noise = F.adaptive_avg_pool2d(noise, (height, width))
320
-
321
- return image + self.weight * noise
322
-
323
-
324
- class ConstantInput(nn.Module):
325
- def __init__(self, channel, size=4):
326
- super().__init__()
327
-
328
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
329
-
330
- def forward(self, input):
331
- batch = input.shape[0]
332
- out = self.input.repeat(batch, 1, 1, 1)
333
-
334
- return out
335
-
336
-
337
- class StyledConv(nn.Module):
338
- def __init__(
339
- self,
340
- in_channel,
341
- out_channel,
342
- kernel_size,
343
- style_dim,
344
- upsample=False,
345
- blur_kernel=[1, 3, 3, 1],
346
- demodulate=True,
347
- dilation=1, ##### modified
348
- ):
349
- super().__init__()
350
-
351
- self.conv = ModulatedConv2d(
352
- in_channel,
353
- out_channel,
354
- kernel_size,
355
- style_dim,
356
- upsample=upsample,
357
- blur_kernel=blur_kernel,
358
- demodulate=demodulate,
359
- dilation=dilation, ##### modified
360
- )
361
-
362
- self.noise = NoiseInjection()
363
- self.activate = FusedLeakyReLU(out_channel)
364
-
365
- def forward(self, input, style, noise=None):
366
- out = self.conv(input, style)
367
- out = self.noise(out, noise=noise)
368
- out = self.activate(out)
369
-
370
- return out
371
-
372
-
373
- class ToRGB(nn.Module):
374
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1], dilation=1): ##### modified
375
- super().__init__()
376
-
377
- if upsample:
378
- self.upsample = Upsample(blur_kernel)
379
-
380
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
381
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
382
-
383
- self.dilation = dilation ##### modified
384
- if dilation > 1: ##### modified
385
- blur_weight = torch.randn(1, 1, 3, 3) * 0 + 1
386
- blur_weight[:,:,0,1] = 2
387
- blur_weight[:,:,1,0] = 2
388
- blur_weight[:,:,1,2] = 2
389
- blur_weight[:,:,2,1] = 2
390
- blur_weight[:,:,1,1] = 4
391
- blur_weight = blur_weight / 16.0
392
- self.register_buffer("blur_weight", blur_weight)
393
-
394
- def forward(self, input, style, skip=None):
395
- out = self.conv(input, style)
396
- out = out + self.bias
397
-
398
- if skip is not None:
399
- if self.dilation == 1:
400
- skip = self.upsample(skip)
401
- else: ##### modified, to simulate skip = self.upsample(skip)
402
- batch, in_channel, _, _ = skip.shape
403
- skip = F.conv2d(skip, self.blur_weight.repeat(in_channel,1,1,1),
404
- padding=self.dilation//2, groups=in_channel, dilation=self.dilation//2)
405
-
406
- out = out + skip
407
-
408
- return out
409
-
410
-
411
- class Generator(nn.Module):
412
- def __init__(
413
- self,
414
- size,
415
- style_dim,
416
- n_mlp,
417
- channel_multiplier=2,
418
- blur_kernel=[1, 3, 3, 1],
419
- lr_mlp=0.01,
420
- ):
421
- super().__init__()
422
-
423
- self.size = size
424
-
425
- self.style_dim = style_dim
426
-
427
- layers = [PixelNorm()]
428
-
429
- for i in range(n_mlp):
430
- layers.append(
431
- EqualLinear(
432
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
433
- )
434
- )
435
-
436
- self.style = nn.Sequential(*layers)
437
-
438
- self.channels = {
439
- 4: 512,
440
- 8: 512,
441
- 16: 512,
442
- 32: 512,
443
- 64: 256 * channel_multiplier,
444
- 128: 128 * channel_multiplier,
445
- 256: 64 * channel_multiplier,
446
- 512: 32 * channel_multiplier,
447
- 1024: 16 * channel_multiplier,
448
- }
449
-
450
- self.input = ConstantInput(self.channels[4])
451
- self.conv1 = StyledConv(
452
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel, dilation=8 ##### modified
453
- )
454
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
455
-
456
- self.log_size = int(math.log(size, 2))
457
- self.num_layers = (self.log_size - 2) * 2 + 1
458
-
459
- self.convs = nn.ModuleList()
460
- self.upsamples = nn.ModuleList()
461
- self.to_rgbs = nn.ModuleList()
462
- self.noises = nn.Module()
463
-
464
- in_channel = self.channels[4]
465
-
466
- for layer_idx in range(self.num_layers):
467
- res = (layer_idx + 5) // 2
468
- shape = [1, 1, 2 ** res, 2 ** res]
469
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
470
-
471
- for i in range(3, self.log_size + 1):
472
- out_channel = self.channels[2 ** i]
473
-
474
- self.convs.append(
475
- StyledConv(
476
- in_channel,
477
- out_channel,
478
- 3,
479
- style_dim,
480
- upsample=True,
481
- blur_kernel=blur_kernel,
482
- dilation=max(1, 32 // (2**(i-1))) ##### modified
483
- )
484
- )
485
-
486
- self.convs.append(
487
- StyledConv(
488
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel, dilation=max(1, 32 // (2**i)) ##### modified
489
- )
490
- )
491
-
492
- self.to_rgbs.append(ToRGB(out_channel, style_dim, dilation=max(1, 32 // (2**(i-1))))) ##### modified
493
-
494
- in_channel = out_channel
495
-
496
- self.n_latent = self.log_size * 2 - 2
497
-
498
- def make_noise(self):
499
- device = self.input.input.device
500
-
501
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
502
-
503
- for i in range(3, self.log_size + 1):
504
- for _ in range(2):
505
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
506
-
507
- return noises
508
-
509
- def mean_latent(self, n_latent):
510
- latent_in = torch.randn(
511
- n_latent, self.style_dim, device=self.input.input.device
512
- )
513
- latent = self.style(latent_in).mean(0, keepdim=True)
514
-
515
- return latent
516
-
517
- def get_latent(self, input):
518
- return self.style(input)
519
-
520
- # styles is the latent code w+
521
- # first_layer_feature is the first-layer input feature f
522
- # first_layer_feature_ind indicates which layer of G accepts f (should always be 0, i.e. the first layer)
523
- # skip_layer_feature holds the encoder features sent via skip connections
524
- # fusion_block is the network to fuse the encoder feature and decoder feature
525
- # zero_noise forces the noise to be zero (to avoid flicker in videos)
526
- # editing_w is the editing vector v used in video face editing
527
- def forward(
528
- self,
529
- styles,
530
- return_latents=False,
531
- return_features=False,
532
- inject_index=None,
533
- truncation=1,
534
- truncation_latent=None,
535
- input_is_latent=False,
536
- noise=None,
537
- randomize_noise=True,
538
- first_layer_feature = None, ##### modified
539
- first_layer_feature_ind = 0, ##### modified
540
- skip_layer_feature = None, ##### modified
541
- fusion_block = None, ##### modified
542
- zero_noise = False, ##### modified
543
- editing_w = None, ##### modified
544
- ):
545
- if not input_is_latent:
546
- styles = [self.style(s) for s in styles]
547
-
548
- if zero_noise:
549
- noise = [
550
- getattr(self.noises, f'noise_{i}') * 0.0 for i in range(self.num_layers)
551
- ]
552
- elif noise is None:
553
- if randomize_noise:
554
- noise = [None] * self.num_layers
555
- else:
556
- noise = [
557
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
558
- ]
559
-
560
- if truncation < 1:
561
- style_t = []
562
-
563
- for style in styles:
564
- style_t.append(
565
- truncation_latent + truncation * (style - truncation_latent)
566
- )
567
-
568
- styles = style_t
569
-
570
- if len(styles) < 2:
571
- inject_index = self.n_latent
572
-
573
- if styles[0].ndim < 3:
574
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
575
- else:
576
- latent = styles[0]
577
-
578
- else:
579
- if inject_index is None:
580
- inject_index = random.randint(1, self.n_latent - 1)
581
-
582
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
583
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
584
-
585
- latent = torch.cat([latent, latent2], 1)
586
-
587
- # w+ + v for video face editing
588
- if editing_w is not None: ##### modified
589
- latent = latent + editing_w
590
-
591
- # the original StyleGAN
592
- if first_layer_feature is None: ##### modified
593
- out = self.input(latent)
594
- out = F.adaptive_avg_pool2d(out, 32) ##### modified
595
- out = self.conv1(out, latent[:, 0], noise=noise[0])
596
- skip = self.to_rgb1(out, latent[:, 1])
597
- # the default StyleGANEX, replacing the first layer of G
598
- elif first_layer_feature_ind == 0: ##### modified
599
- out = first_layer_feature[0] ##### modified
600
- out = self.conv1(out, latent[:, 0], noise=noise[0])
601
- skip = self.to_rgb1(out, latent[:, 1])
602
- # maybe we can also use the second layer of G to accept f?
603
- else: ##### modified
604
- out = first_layer_feature[0] ##### modified
605
- skip = first_layer_feature[1] ##### modified
606
-
607
- i = 1
608
- for conv1, conv2, noise1, noise2, to_rgb in zip(
609
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
610
- ):
611
- # these layers accept the skipped encoder features; use the fusion block to fuse the encoder and decoder features
612
- if skip_layer_feature and fusion_block and i//2 < len(skip_layer_feature) and i//2 < len(fusion_block):
613
- if editing_w is None:
614
- out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip)
615
- else:
616
- out, skip = fusion_block[i//2](skip_layer_feature[i//2], out, skip, editing_w[:,i])
617
- out = conv1(out, latent[:, i], noise=noise1)
618
- out = conv2(out, latent[:, i + 1], noise=noise2)
619
- skip = to_rgb(out, latent[:, i + 2], skip)
620
-
621
- i += 2
622
-
623
- image = skip
624
-
625
- if return_latents:
626
- return image, latent
627
- elif return_features:
628
- return image, out
629
- else:
630
- return image, None
631
-
632
-
633
- class ConvLayer(nn.Sequential):
634
- def __init__(
635
- self,
636
- in_channel,
637
- out_channel,
638
- kernel_size,
639
- downsample=False,
640
- blur_kernel=[1, 3, 3, 1],
641
- bias=True,
642
- activate=True,
643
- dilation=1, ## modified
644
- ):
645
- layers = []
646
-
647
- if downsample:
648
- factor = 2
649
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
650
- pad0 = (p + 1) // 2
651
- pad1 = p // 2
652
-
653
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
654
-
655
- stride = 2
656
- self.padding = 0
657
-
658
- else:
659
- stride = 1
660
- self.padding = kernel_size // 2 + dilation-1 ## modified
661
-
662
- layers.append(
663
- EqualConv2d(
664
- in_channel,
665
- out_channel,
666
- kernel_size,
667
- padding=self.padding,
668
- stride=stride,
669
- bias=bias and not activate,
670
- dilation=dilation, ## modified
671
- )
672
- )
673
-
674
- if activate:
675
- if bias:
676
- layers.append(FusedLeakyReLU(out_channel))
677
-
678
- else:
679
- layers.append(ScaledLeakyReLU(0.2))
680
-
681
- super().__init__(*layers)
682
-
683
-
684
- class ResBlock(nn.Module):
685
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
686
- super().__init__()
687
-
688
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
689
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
690
-
691
- self.skip = ConvLayer(
692
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
693
- )
694
-
695
- def forward(self, input):
696
- out = self.conv1(input)
697
- out = self.conv2(out)
698
-
699
- skip = self.skip(input)
700
- out = (out + skip) / math.sqrt(2)
701
-
702
- return out
703
-
704
-
705
- class Discriminator(nn.Module):
706
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], img_channel=3):
707
- super().__init__()
708
-
709
- channels = {
710
- 4: 512,
711
- 8: 512,
712
- 16: 512,
713
- 32: 512,
714
- 64: 256 * channel_multiplier,
715
- 128: 128 * channel_multiplier,
716
- 256: 64 * channel_multiplier,
717
- 512: 32 * channel_multiplier,
718
- 1024: 16 * channel_multiplier,
719
- }
720
-
721
- convs = [ConvLayer(img_channel, channels[size], 1)]
722
-
723
- log_size = int(math.log(size, 2))
724
-
725
- in_channel = channels[size]
726
-
727
- for i in range(log_size, 2, -1):
728
- out_channel = channels[2 ** (i - 1)]
729
-
730
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
731
-
732
- in_channel = out_channel
733
-
734
- self.convs = nn.Sequential(*convs)
735
-
736
- self.stddev_group = 4
737
- self.stddev_feat = 1
738
-
739
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
740
- self.final_linear = nn.Sequential(
741
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
742
- EqualLinear(channels[4], 1),
743
- )
744
-
745
- self.size = size ##### modified
746
-
747
- def forward(self, input):
748
- # for input that does not match the target size, we crop a random patch of the target size.
749
- _, _, h, w = input.shape ##### modified
750
- i, j = torch.randint(0, h+1-self.size, size=(1,)).item(), torch.randint(0, w+1-self.size, size=(1,)).item() ##### modified
751
- out = self.convs(input[:,:,i:i+self.size,j:j+self.size]) ##### modified
752
-
753
- batch, channel, height, width = out.shape
754
- group = min(batch, self.stddev_group)
755
- stddev = out.view(
756
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
757
- )
758
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
759
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
760
- stddev = stddev.repeat(group, 1, height, width)
761
- out = torch.cat([out, stddev], 1)
762
-
763
- out = self.final_conv(out)
764
-
765
- out = out.view(batch, -1)
766
- out = self.final_linear(out)
767
-
768
- return out
 
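Two of the smaller "##### modified" behaviours in the diff above can be exercised in isolation with plain PyTorch: NoiseInjection pools an externally supplied noise map down to the feature resolution, and Discriminator feeds a random crop of the training resolution to its conv stack when the input is larger. The sketch below is illustrative only; the tensor shapes and the 0.1 noise weight are made up rather than taken from the repository.

import torch
import torch.nn.functional as F

# Resolution matching as in NoiseInjection.forward: pool the stored noise to the feature size.
feat = torch.randn(2, 512, 40, 24)            # a non-square decoder feature map, as StyleGANEX allows
noise = torch.randn(2, 1, 64, 64)             # a noise buffer registered at a different resolution
if noise.shape[-2:] != feat.shape[-2:]:
    noise = F.adaptive_avg_pool2d(noise, feat.shape[-2:])
out = feat + 0.1 * noise                      # in the module the scalar weight is learned; 0.1 is a stand-in

# Random crop as in Discriminator.forward: extract a patch of the training resolution.
size = 256
img = torch.randn(2, 3, 320, 288)             # input larger than the discriminator's target size
_, _, h, w = img.shape
i = torch.randint(0, h + 1 - size, (1,)).item()
j = torch.randint(0, w + 1 - size, (1,)).item()
patch = img[:, :, i:i + size, j:j + size]
print(out.shape, patch.shape)                 # torch.Size([2, 512, 40, 24]) torch.Size([2, 3, 256, 256])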
spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion/app.py DELETED
@@ -1,345 +0,0 @@
1
- import gradio as gr
2
- # import torch
3
- # from torch import autocast
4
- # from diffusers import StableDiffusionPipeline
5
- from datasets import load_dataset
6
- from PIL import Image
7
- from io import BytesIO
8
- # import base64
9
- # import re
10
- import os
11
- import requests
12
- import json
13
- import base64
14
- # from urllib import parse
15
-
16
- from share_btn import community_icon_html, loading_icon_html, share_js
17
-
18
-
19
- is_gpu_busy = False
20
-
21
- def safe_sd(prompt, n_samples, steps, scale, seed, mode):
22
- url = os.getenv('BACKEND_URL_SAFE_NEW')
23
- token = os.getenv('BACKEND_TOKEN')
24
- user = os.getenv('BACKEND_USER')
25
- res = requests.post(url, json={
26
- "model": "togethercomputer/UniversalSD",
27
- "prompt": prompt,
28
- "n": n_samples,
29
- "mode": mode,
30
- "steps": steps,
31
- "seed": seed,
32
- "guidance_scale": scale,
33
- }, headers={
34
- "Authorization": token,
35
- "User-Agent": user
36
- })
37
- return res
38
-
39
- def infer(prompt, n_samples, steps, scale, seed):
40
- global is_gpu_busy
41
- # generator = torch.Generator(device=device).manual_seed(seed)
42
- # print("Is GPU busy? ", is_gpu_busy)
43
- images = []
44
-
45
- if prompt == "":
46
- raise gr.Error("Empty prompt. Please provide a prompt.")
47
-
48
- response = safe_sd(prompt, int(n_samples), max(50,int(steps)), scale, seed, mode="text2img")
49
-
50
- data = json.load(BytesIO(response.content))
51
- if 'output' not in data:
52
- raise gr.Error("An error occurred.")
53
- else:
54
- if data['output']['result_type'] == "error":
55
- raise gr.Error(data['output']['value'])
56
- for image in data['output']['choices']:
57
- im = Image.open(BytesIO(base64.b64decode(image['image_base64'])))
58
- images.append(im)
59
-
60
- response = safe_sd(prompt, int(n_samples), max(50,int(steps)), scale, seed, mode="safe_text2img")
61
-
62
- data = json.load(BytesIO(response.content))
63
- if 'output' not in data:
64
- raise gr.Error("An error occurred.")
65
- else:
66
- for image in data['output']['choices']:
67
- im = Image.open(BytesIO(base64.b64decode(image['image_base64'])))
68
- images.append(im)
69
- return images
70
-
71
-
72
- css = """
73
- .gradio-container {
74
- font-family: 'IBM Plex Sans', sans-serif;
75
- }
76
- .gr-button {
77
- color: white;
78
- border-color: #3a669bff;
79
- background: #3a669bff;
80
- }
81
- input[type='range'] {
82
- accent-color: #3a669bff;
83
- }
84
- .dark input[type='range'] {
85
- accent-color: #3a669bff;
86
- }
87
- .container {
88
- max-width: 730px;
89
- margin: auto;
90
- padding-top: 1.5rem;
91
- }
92
- #gallery {
93
- min-height: 22rem;
94
- margin-bottom: 15px;
95
- margin-left: auto;
96
- margin-right: auto;
97
- border-bottom-right-radius: .5rem !important;
98
- border-bottom-left-radius: .5rem !important;
99
- }
100
- #gallery>div>.h-full {
101
- min-height: 20rem;
102
- }
103
- .details:hover {
104
- text-decoration: underline;
105
- }
106
- .gr-button {
107
- white-space: nowrap;
108
- }
109
- .gr-button:focus {
110
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
111
- outline: none;
112
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
113
- --tw-border-opacity: 1;
114
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
115
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
116
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
117
- --tw-ring-opacity: .5;
118
- }
119
- #advanced-btn {
120
- font-size: .7rem !important;
121
- line-height: 19px;
122
- margin-top: 12px;
123
- margin-bottom: 12px;
124
- padding: 2px 8px;
125
- border-radius: 14px !important;
126
- }
127
- #advanced-options {
128
- display: none;
129
- margin-bottom: 20px;
130
- }
131
- .footer {
132
- margin-bottom: 45px;
133
- margin-top: 35px;
134
- text-align: center;
135
- border-bottom: 1px solid #e5e5e5;
136
- }
137
- .footer>p {
138
- font-size: .8rem;
139
- display: inline-block;
140
- padding: 0 10px;
141
- transform: translateY(10px);
142
- background: white;
143
- }
144
- .dark .footer {
145
- border-color: #303030;
146
- }
147
- .dark .footer>p {
148
- background: #0b0f19;
149
- }
150
- .acknowledgments h4{
151
- margin: 1.25em 0 .25em 0;
152
- font-weight: bold;
153
- font-size: 115%;
154
- }
155
- #container-advanced-btns{
156
- display: flex;
157
- flex-wrap: wrap;
158
- justify-content: space-between;
159
- align-items: center;
160
- }
161
- .animate-spin {
162
- animation: spin 1s linear infinite;
163
- }
164
- @keyframes spin {
165
- from {
166
- transform: rotate(0deg);
167
- }
168
- to {
169
- transform: rotate(360deg);
170
- }
171
- }
172
- #share-btn-container {
173
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #3a669bff; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
174
- }
175
- #share-btn {
176
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
177
- }
178
- #share-btn * {
179
- all: unset;
180
- }
181
- .gr-form{
182
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
183
- }
184
- #prompt-container{
185
- gap: 0;
186
- }
187
- """
188
-
189
- block = gr.Blocks(css=css)
190
-
191
- examples = [
192
- [
193
- 'a photograph by vanessa beecroft',
194
- 1,
195
- 50,
196
- 7.5,
197
- 24803839,
198
- ],
199
- [
200
- 'a gorgeous female photo',
201
- 1,
202
- 50,
203
- 7.5,
204
- 733664822,
205
- ],
206
- [
207
- 'a gorgeous male photo',
208
- 1,
209
- 50,
210
- 7.5,
211
- 881355,
212
- ],
213
- [
214
- 'the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker',
215
- 1,
216
- 50,
217
- 7.5,
218
- 557645701
219
- ],
220
- [
221
- 'portrait of girl with smokey eyes makeup in abandoned hotel, grange clothes, redshift, wide high angle coloured polaroid photograph with flash, kodak film, hyper real, stunning moody cinematography, with anamorphic lenses, by maripol, fallen angels by wong kar - wai, style of suspiria and neon demon and children from bahnhof zoo, detailed ',
222
- 1,
223
- 50,
224
- 9,
225
- 1115417309,
226
- ],
227
- [
228
- 'portrait of Sickly diseased dying Samurai warrior, sun shining, photo realistic illustration by greg rutkowski, thomas kindkade, alphonse mucha, loish, norman rockwell.',
229
- 1,
230
- 50,
231
- 10,
232
- 1714108957,
233
- ]
234
- ]
235
-
236
- with block:
237
- gr.HTML(
238
- """
239
- <div style="text-align: center; max-width: 650px; margin: 0 auto;">
240
- <div
241
- style="
242
- display: inline-flex;
243
- align-items: center;
244
- gap: 0.8rem;
245
- font-size: 1.75rem;
246
- "
247
- >
248
- <img class="logo" src="https://aeiljuispo.cloudimg.io/v7/https://s3.amazonaws.com/moonup/production/uploads/1666181274838-62fa1d95e8c9c532aa75331c.png" alt="AIML Logo"
249
- style="margin: auto; max-width: 7rem;">
250
- <h1 style="font-weight: 900; margin-bottom: 7px;">
251
- Stable Diffusion vs. Safe Stable Diffusion
252
- </h1>
253
- </div>
254
- <p style="margin-bottom: 10px; font-size: 94%">
255
- Safe Stable Diffusion extends Stable Diffusion with safety guidance. In the case of NSFW images it returns the closest non-NSFW images instead of a black square.
256
- Details can be found in the <a href="https://arxiv.org/abs/2211.05105" style="text-decoration: underline;" target="_blank">Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models paper</a>.
257
- </p>
258
- </div>
259
- """
260
- )
261
- with gr.Group():
262
- with gr.Box():
263
- with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True):
264
- text = gr.Textbox(
265
- label="Enter your prompt",
266
- show_label=False,
267
- max_lines=1,
268
- placeholder="Enter your prompt",
269
- elem_id="prompt-text-input",
270
- ).style(
271
- border=(True, False, True, True),
272
- rounded=(True, False, False, True),
273
- container=False,
274
- )
275
- btn = gr.Button("Generate image").style(
276
- margin=False,
277
- rounded=(False, True, True, False),
278
- full_width=False,
279
- )
280
-
281
- gallery = gr.Gallery(
282
- label="Left: Stable Diffusion, Right: Safe Stable Diffusion", show_label=True, elem_id="gallery"
283
- ).style(grid=[2], height="auto")
284
-
285
- with gr.Group(elem_id="container-advanced-btns"):
286
- advanced_button = gr.Button("Advanced options", elem_id="advanced-btn")
287
- with gr.Group(elem_id="share-btn-container"):
288
- community_icon = gr.HTML(community_icon_html)
289
- loading_icon = gr.HTML(loading_icon_html)
290
- share_button = gr.Button("Share to community", elem_id="share-btn")
291
-
292
- with gr.Row(elem_id="advanced-options"):
293
- #gr.Markdown("Advanced settings are temporarily unavailable")
294
- samples = gr.Slider(label="Images", minimum=1, maximum=1, value=1, step=1)
295
- steps = gr.Slider(label="Steps", minimum=50, maximum=50, value=50, step=1)
296
- scale = gr.Slider(
297
- label="Guidance Scale", minimum=7.5, maximum=20, value=7.5, step=0.5
298
- )
299
- seed = gr.Slider(
300
- label="Seed",
301
- minimum=0,
302
- maximum=2147483647,
303
- step=1,
304
- randomize=True,
305
- )
306
-
307
- ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, steps, scale, seed],
308
- outputs=[gallery, community_icon, loading_icon, share_button], cache_examples=False)
309
- ex.dataset.headers = [""]
310
-
311
- text.submit(infer, inputs=[text, samples, steps, scale, seed], outputs=gallery)
312
- btn.click(infer, inputs=[text, samples, steps, scale, seed], outputs=gallery)
313
-
314
- advanced_button.click(
315
- None,
316
- [],
317
- text,
318
- _js="""
319
- () => {
320
- const options = document.querySelector("body > gradio-app").querySelector("#advanced-options");
321
- options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none";
322
- }""",
323
- )
324
- share_button.click(
325
- None,
326
- [],
327
- [],
328
- _js=share_js,
329
- )
330
- gr.HTML(
331
- """
332
- <div class="footer">
333
- <p>Model by <a href="https://huggingface.co/AIML-TUDA/" style="text-decoration: underline;" target="_blank">AIML Lab @TU Darmstadt</a> - backend provided through the generous support of <a href="https://www.together.xyz/" style="text-decoration: underline;" target="_blank">Together</a> - Gradio Demo by 🤗 Hugging Face
334
- </p>
335
- </div>
336
- <div class="acknowledgments">
337
- <p><h4>LICENSE</h4>
338
- The model is licensed with a <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" target="_blank" style="text-decoration: underline;" target="_blank">read the license</a>.</p>
339
- <p><h4>Biases and content acknowledgment</h4>
340
- Despite how impressive being able to turn text into an image is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. While the applied safety guidance suppresses the majority of inappropriate content, this may still occur with Safe Stable Diffusion models. The original model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. Safety guidance suppresses potentially inappropriate content during inference. You can read more in the <a href="https://huggingface.co/AIML-TUDA/stable-diffusion-safe" style="text-decoration: underline;" target="_blank">model card</a>.</p>
341
- </div>
342
- """
343
- )
344
-
345
- block.queue(concurrency_count=40, max_size=20).launch(max_threads=150)
 
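For reference, the response handling inside infer() above amounts to decoding base64-encoded images out of the backend's JSON payload. The compact restatement below uses the payload fields shown in the code (output.result_type, output.choices[*].image_base64); the helper name decode_choices is ours, and the backend URL and token stay in environment variables as before.

import base64
import json
from io import BytesIO
from PIL import Image

def decode_choices(raw_bytes: bytes):
    # Parse the backend JSON and turn every base64-encoded choice into a PIL image.
    data = json.load(BytesIO(raw_bytes))
    if 'output' not in data:
        raise RuntimeError("An error occurred.")
    if data['output'].get('result_type') == 'error':
        raise RuntimeError(data['output']['value'])
    return [Image.open(BytesIO(base64.b64decode(choice['image_base64'])))
            for choice in data['output']['choices']]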
spaces/AIWaves/Software_Company/src/agents/Prompt/base_Prompts.py DELETED
@@ -1,83 +0,0 @@
1
-
2
- # SOP========================================================================================================
3
- # "environment_prompt"
4
- # current_state , self(sop)
5
- Get_environment_prompt = "f\"The current scenario is as follows <environment> {self.current_state.environment_prompt} </environment>\""
6
-
7
-
8
- # sop.transit
9
- #================================================================
10
- Transit_system_prompt = "f\"{environment_prompt};{judge_system_prompt}\""
11
-
12
- # transit chat message
13
- # "environment_prompt" is obtained from "Get_environment_prompt"; "chat_history_message" is from Memory
14
- Transit_message = "f\"{environment_summary};The chat history is as follows:\\n<chat> {chat_history_message}\\n</chat>;You especially need to pay attention to the last query<query>\\n{query}\\n</query> and the relevant conversation <relevant>\\n{relevant_history} \\n</relevant>\\n\""
15
-
16
-
17
- Transit_last_prompt = "f\"{judge_last_prompt}\""
18
- #sop.transit================================================================
19
-
20
- # sop.call
21
- #================================================================
22
- # help controller to determine the next role to speak.(the {} is agent role) call_prompt + allocate_component
23
- Allocate_component = "f\"If it's currently supposed to be speaking for {role}, then output <end>{role}</end>.\\n\""
24
-
25
- # environment_prompt is obtained from "Get_environment_prompt"; "chat_history_message" is from Memory
26
- Call_system_prompt = "f\"{environment_prompt};{call_system_prompt};{allocate_prompt}\""
27
-
28
- #
29
- Call_last_prompt = "f\"You especially need to pay attention to the last query<query>\\n{query}\\n</query> and the relevant conversation <relevant>\\n{relevant_history} \\n</relevant>\\n;Now please choose the person to speak according to the following rules :{allocate_prompt};Note: The person whose turn it is now cannot be the same as the person who spoke last time, so {last_name} cannot be output\\n.\""
30
-
31
- Call_message = "f\"The chat history is as follows:\\n<history>\\n{chat_history_message}</history>\\n;The last person to speak is: {last_name}\\n. \""
32
- #sop.call================================================================
33
- # SOP========================================================================================================
34
-
35
-
36
-
37
-
38
-
39
-
40
- # Memory========================================================================================================
41
- Single_message = "f\"{name} said that :{content}\""
42
-
43
- Chat_total_message = "f\"{chat_history}\""
44
- # Memory========================================================================================================
45
-
46
-
47
-
48
-
49
-
50
-
51
- # Environment========================================================================================================
52
- Default_environment_summary_system_prompt = "\"\\nYour task is to summarize the historical dialogue records according to the current scene, and summarize the most important information\""
53
-
54
- Default_environment_summary_last_prompt = "\"Please make a summary based on the historical chat records, the output format is history summary: \{your summary content\} \""
55
-
56
- Environment_summary_memory = "f\"The information you need to know is as follows:\\n<information>\\n\
57
- The summary of the previous dialogue history is:<summary>\\n{summary}\\n.</summary>\
58
- The latest conversation record is as follows:\\n<history> {chat_history}\\n</history>,\
59
- the relevant chat history you may need is:<relevant>{relevant_history}</relevant>\""
60
-
61
- Environment_summary_system_prompt = "f\"{environment_prompt};{current_memory};{summary_system_prompt};\""
62
-
63
-
64
- # observe
65
- Agent_observe_relevant_memory = "f\"The relevant chat history are as follows:\\n<relevant_history>{relevant_memory} </relevant_history>\\n\""
66
-
67
-
68
- Agent_observe_memory = "f\"Here's what you need to know(Remember, this is just information, Try not to repeat what's inside):\\n<information>\\n{relevant_memory};\
69
- The previous summary of chat history is as follows :<summary>\\n{agent.short_term_memory}\\n</summary>.\
70
- The new chat history is as follows:\\n<history> {conversations}\\n</history>\\n\
71
- </information>\""
72
- # Environment========================================================================================================
73
-
74
-
75
-
76
-
77
- # Agent========================================================================================================
78
- Agent_summary_system_prompt = "f\"{summary_prompt};Please summarize past key summary \\n<summary>\\n {self.short_term_memory} </summary>and new chat_history as follows: <history>\\n{conversations}</history>\""
79
-
80
- Agent_last_prompt = "f\"{last_prompt};\\nPlease continue the talk based on your known information,Make an effort to make the conversation more coherent and try to respond differently from your existing knowledge, avoiding repeating what others have said.\""
81
-
82
- Agent_system_prompt = "f\"{system_prompt},\""
83
- # Agent========================================================================================================
 
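The constants in base_Prompts.py above store the source text of Python f-strings rather than already-formatted strings. Presumably they are rendered later by evaluating that source with the expected names in scope; the exact mechanism is not part of this diff, so the helper below is only a hypothetical illustration (the name render and the sample values are invented).

# Single_message is copied from the file above; it holds the source of an f-string.
Single_message = "f\"{name} said that :{content}\""

def render(template_src: str, **variables) -> str:
    # Evaluate the f-string source with the supplied variables as its only namespace.
    return eval(template_src, {}, variables)

print(render(Single_message, name="Alice", content="the plan is ready"))
# -> Alice said that :the plan is ready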
spaces/AIZero2HeroBootcamp/VideoToAnimatedGif/app.py DELETED
@@ -1,26 +0,0 @@
1
- import streamlit as st
2
- from moviepy.editor import VideoFileClip
3
- import os
4
-
5
- st.title('Video to GIF converter')
6
-
7
- uploaded_file = st.file_uploader("Choose a video...", type=["mp4", "mov", "avi", "mkv"])
8
-
9
- if uploaded_file is not None:
10
- with open("temp_video.mp4", "wb") as f:
11
- f.write(uploaded_file.getbuffer())
12
-
13
- st.success('Video uploaded successfully!')
14
-
15
- start_time = st.number_input('Enter the start time (in seconds)', min_value=0, value=0, step=1)
16
- duration = st.number_input('Enter the duration of the clip (in seconds)', min_value=1, value=5, step=1)
17
- resolution = st.number_input('Enter the height resolution (in pixels)', min_value=1, value=480, step=1)
18
-
19
- if st.button('Create GIF'):
20
- video = VideoFileClip("temp_video.mp4")
21
- clip = video.subclip(start_time, start_time + duration)
22
- clip_resized = clip.resize(height=resolution)
23
- clip_resized.write_gif("output.gif", fps=clip.fps)
24
-
25
- st.success('GIF created successfully! Check your directory for a file named "output.gif".')
26
- os.remove("temp_video.mp4") # remove the temporary video file
 
spaces/Abduhoshim/speech_emotion_detection/app.py DELETED
@@ -1,73 +0,0 @@
1
- from tensorflow import keras
2
- import os
3
- import soundfile as sf
4
- import numpy as np
5
- import librosa
6
- import gradio as gr
7
- import seaborn as sns
8
- import pandas as pd
9
- import plotly.express as px
10
- model = keras.models.load_model('emotion.h5')
11
- labels = ['Angry', 'Disgusted', 'Fearful', 'Happy', 'Neutral', 'Sad', 'Suprised']
12
- def predict(audio):
13
- wave, sr = librosa.load(audio, sr=None)
14
- segment_dur_secs = 3
15
- segment_length = sr * segment_dur_secs
16
- num_sections = int(np.ceil(len(wave) / segment_length))
17
- split = []
18
- paths =[]
19
- for i in range(num_sections):
20
- t = wave[i * segment_length: (i + 1) * segment_length]
21
- split.append(t)
22
-
23
- out_dir = ('audio_data/splits/')
24
- os.makedirs(out_dir, exist_ok=True)
25
- for i in range(num_sections):
26
- recording_name = os.path.basename(audio[:-4])
27
- out_file = f"{recording_name}_{str(i)}.wav"
28
- sf.write(os.path.join(out_dir, out_file), split[i], sr)
29
- paths.append(os.path.join(out_dir, out_file))
30
-
31
-
32
- predicted_features = pd.DataFrame(columns=['features'])
33
- counter=0
34
- for path in paths:
35
- X, sample_rate = librosa.load(path
36
- ,duration=2.5
37
- ,sr=44100
38
- ,offset=0.5
39
- )
40
- sample_rate = np.array(sample_rate)
41
- mfccs = np.mean(librosa.feature.mfcc(y=X,
42
- sr=sample_rate,
43
- n_mfcc=13),
44
- axis=0)
45
- predicted_features.loc[counter] = [mfccs]
46
- counter=counter+1
47
- predicted_features = pd.DataFrame(predicted_features['features'].values.tolist())
48
- predicted_features.dropna(inplace=True)
49
- preds = model.predict(predicted_features)
50
-
51
- preds=preds.argmax(axis=1)
52
- df_preds = pd.DataFrame(preds,columns = ['prediction'])
53
- emotions = []
54
- for i in df_preds['prediction']:
55
- emotion = labels[int(i)]
56
- emotions.append(emotion)
57
- df_preds['emotion'] = emotions
58
- df_preds = df_preds.reset_index()
59
- fig = px.line(df_preds, x="index", y="emotion", title='How emotion changes over speech')
60
- fig.update_xaxes(title='The 3s intervals of speech')
61
- return fig
62
-
63
- outputs = gr.Plot()
64
- title = "Emotion recognition"
65
- description = "This model shows how the speaker's emotion changes over the course of the speech"
66
-
67
- infr = gr.Interface(fn=predict,
68
- inputs=gr.Audio(type="filepath"),
69
- examples=['audio_samples/1.mp3','audio_samples/2.mp3','audio_samples/3.mp3','audio_samples/4.mp3'],
70
- cache_examples=True,
71
- outputs=outputs,
72
- title=title,description=description,interpretation='default',)
73
- infr.launch()
 
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/dynamic.py DELETED
@@ -1,84 +0,0 @@
1
- from __future__ import annotations
2
- import asyncio
3
- from colorama import Fore
4
-
5
- from typing import TYPE_CHECKING, List
6
-
7
- from . import decision_maker_registry
8
- from .base import BaseDecisionMaker
9
- from agentverse.logging import typewriter_log
10
-
11
- if TYPE_CHECKING:
12
- from agentverse.agents.base import BaseAgent
13
- from agentverse.message import Message
14
-
15
-
16
- @decision_maker_registry.register("dynamic")
17
- class DynamicDecisionMaker(BaseDecisionMaker):
18
- """
19
- Discuss in a horizontal manner.
20
- """
21
-
22
- name: str = "dynamic"
23
-
24
- ## To Do: implement dynamic
25
- # def step(
26
- async def astep(
27
- self,
28
- agents: List[BaseAgent],
29
- manager: List[BaseAgent],
30
- task_description: str,
31
- previous_plan: str = "No solution yet.",
32
- advice: str = "No advice yet.",
33
- previous_sentence: str = "No sentence yet.",
34
- *args,
35
- **kwargs,
36
- ) -> List[str]:
37
- # Speak simultaneously
38
- # Manager selects the optimal one as the current spoken sentence
39
- reviews = list()
40
- for i in range(len(agents)):
41
- review = await asyncio.gather(
42
- *[
43
- agent.astep(previous_plan, advice, task_description)
44
- for agent in agents[1:]
45
- ]
46
- )
47
-
48
- # typewriter_log("Reviews:", Fore.YELLOW)
49
- # typewriter_log(
50
- # "\n".join(
51
- # [
52
- # f"[{review.sender_agent.role_description}]: {review.criticism}"
53
- # for review in reviews
54
- # ]
55
- # ),
56
- # Fore.YELLOW,
57
- # )
58
-
59
- previous_sentence = manager.step(
60
- previous_plan, review, advice, task_description, previous_sentence
61
- )
62
- reviews.append(previous_sentence)
63
-
64
- """
65
- reviews = await asyncio.gather(
66
- *[
67
- agent.astep(previous_plan, advice, task_description)
68
- for agent in agents[1:]
69
- ]
70
- )
71
- """
72
-
73
- nonempty_reviews = []
74
- for review in reviews:
75
- if not review.is_agree and review.content != "":
76
- nonempty_reviews.append(review)
77
- agents[0].add_message_to_memory(nonempty_reviews)
78
-
79
- result = agents[0].step(previous_plan, advice, task_description)
80
-
81
- return [result]
82
-
83
- def reset(self):
84
- pass
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/utils/CreateChild.js DELETED
@@ -1,16 +0,0 @@
1
- import Make from '../../Make.js';
2
-
3
- var CreateChild = function (scene, data, subKey, view, styles, customBuilders) {
4
- var childData = data[subKey];
5
- if (!childData) {
6
- return undefined;
7
- }
8
-
9
- var child;
10
- child = Make(scene, childData, view, styles, customBuilders);
11
- data[subKey] = child;
12
-
13
- return child;
14
- }
15
-
16
- export default CreateChild;
 
spaces/AlekseyKorshuk/thin-plate-spline-motion-model/app.py DELETED
@@ -1,100 +0,0 @@
1
- import torch
2
- import imageio
3
- import numpy as np
4
- import matplotlib.pyplot as plt
5
- import matplotlib.animation as animation
6
- from skimage.transform import resize
7
- import warnings
8
- import os
9
- from demo import make_animation
10
- from skimage import img_as_ubyte
11
- from demo import load_checkpoints
12
- import gradio
13
-
14
-
15
- def inference(source_image_path='./assets/source.png', driving_video_path='./assets/driving.mp4', dataset_name="vox"):
16
- # edit the config
17
- device = torch.device('cpu')
18
- # dataset_name = 'vox' # ['vox', 'taichi', 'ted', 'mgif']
19
- # source_image_path = './assets/source.png'
20
- # driving_video_path = './assets/driving.mp4'
21
- output_video_path = './generated.mp4'
22
-
23
- pixel = 256 # for vox, taichi and mgif, the resolution is 256*256
24
- if (dataset_name == 'ted'): # for ted, the resolution is 384*384
25
- pixel = 384
26
- config_path = f'config/{dataset_name}-{pixel}.yaml'
27
- checkpoint_path = f'checkpoints/{dataset_name}.pth.tar'
28
- predict_mode = 'relative' # ['standard', 'relative', 'avd']
29
-
30
- warnings.filterwarnings("ignore")
31
-
32
- source_image = imageio.imread(source_image_path)
33
- reader = imageio.get_reader(driving_video_path)
34
-
35
- source_image = resize(source_image, (pixel, pixel))[..., :3]
36
-
37
- fps = reader.get_meta_data()['fps']
38
- driving_video = []
39
- try:
40
- for im in reader:
41
- driving_video.append(im)
42
- except RuntimeError:
43
- pass
44
- reader.close()
45
-
46
- driving_video = [resize(frame, (pixel, pixel))[..., :3] for frame in driving_video]
47
-
48
- # driving_video = driving_video[:10]
49
-
50
- def display(source, driving, generated=None) -> animation.ArtistAnimation:
51
- fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6))
52
-
53
- ims = []
54
- for i in range(len(driving)):
55
- cols = [source]
56
- cols.append(driving[i])
57
- if generated is not None:
58
- cols.append(generated[i])
59
- im = plt.imshow(np.concatenate(cols, axis=1), animated=True)
60
- plt.axis('off')
61
- ims.append([im])
62
-
63
- ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000)
64
- # plt.show()
65
- plt.close()
66
- return ani
67
-
68
- inpainting, kp_detector, dense_motion_network, avd_network = load_checkpoints(config_path=config_path,
69
- checkpoint_path=checkpoint_path,
70
- device=device)
71
-
72
- predictions = make_animation(source_image, driving_video, inpainting, kp_detector, dense_motion_network,
73
- avd_network, device=device, mode=predict_mode)
74
-
75
- # save resulting video
76
- imageio.mimsave(output_video_path, [img_as_ubyte(frame) for frame in predictions], fps=fps)
77
-
78
- ani = display(source_image, driving_video, predictions)
79
- ani.save('animation.mp4', writer='imagemagick', fps=60)
80
- return 'animation.mp4'
81
-
82
-
83
- demo = gradio.Interface(
84
- fn=inference,
85
- inputs=[
86
- gradio.inputs.Image(type="filepath", label="Input image"),
87
- gradio.inputs.Video(label="Input video"),
88
- gradio.inputs.Dropdown(['vox', 'taichi', 'ted', 'mgif'], type="value", default="vox", label="Model",
89
- optional=False),
90
-
91
- ],
92
- outputs=["video"],
93
- examples=[
94
- ['./assets/source.png', './assets/driving.mp4', "vox"],
95
- ['./assets/source_ted.png', './assets/driving_ted.mp4', "ted"],
96
- ],
97
- )
98
-
99
- if __name__ == "__main__":
100
- demo.launch()
 
spaces/AlexWortega/MailruQA/app.py DELETED
@@ -1,47 +0,0 @@
1
- import torch
2
- import gradio as gr
3
- from transformers import AutoModelForCausalLM, AutoTokenizer
4
- import random
5
- device = 'cpu'
6
-
7
- def ans(question ):
8
- description=''
9
- category=''
10
- seed = random.randint(1, 10000000)
11
- print(f'Seed: {seed}')
12
- torch.manual_seed(seed)
13
-
14
- inp = tokenizer.encode(f'Вопрос: {question}\nОписание: {description}\nОтвет:',return_tensors="pt").to(device)
15
- print('question',question)
16
- gen = model.generate(inp, do_sample=True, top_p=0.9, temperature=0.86, max_new_tokens=100, repetition_penalty=1.2) #, stop_token="<eos>")
17
-
18
- gen = tokenizer.decode(gen[0])
19
- gen = gen[:gen.index('<eos>') if '<eos>' in gen else len(gen)]
20
- gen = gen.split('Ответ:')[1]
21
- return gen
22
-
23
-
24
-
25
-
26
-
27
-
28
-
29
- # Download checkpoint:
30
- checkpoint = "its5Q/rugpt3large_mailqa"
31
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
32
- model = AutoModelForCausalLM.from_pretrained(checkpoint)
33
- model = model.eval()
34
-
35
- # Gradio
36
-
37
- title = "Ответы на главные вопросы жизни, вселенной и вообще"
38
- description = "ruGPT large дообученная на датасете https://www.kaggle.com/datasets/atleast6characterss/otvetmailru-solved-questions "
39
- article = "<p style='text-align: center'><a href='https://github.com/NeuralPushkin/MailRu_Q-A'>Github with fine-tuning ruGPT3large on QA</a></p> Cозданно при поддержке <p style='text-align: center'><a href='https://t.me/lovedeathtransformers'>Love Death Transformers</a></p>"
40
- examples = [
41
- ["Как какать?"]
42
- ]
43
-
44
- iface = gr.Interface(fn=ans, title=title, description=description, article=article, examples=examples, inputs="text", outputs="text")
45
-
46
- if __name__ == "__main__":
47
- iface.launch()
 
spaces/Ali36Ahmad/magic-diffusion/app.py DELETED
@@ -1,104 +0,0 @@
1
- import gradio as gr
2
- import os
3
- from share_btn import community_icon_html, loading_icon_html, share_js
4
-
5
- text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion")
6
- stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5")
7
-
8
- def get_images(prompt):
9
- gallery_dir = stable_diffusion(prompt, fn_index=2)
10
- sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)]
11
- return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
12
-
13
- def get_prompts(prompt_text):
14
- return text_gen(prompt_text)
15
-
16
- css = '''
17
- .animate-spin {
18
- animation: spin 1s linear infinite;
19
- }
20
- @keyframes spin {
21
- from {
22
- transform: rotate(0deg);
23
- }
24
- to {
25
- transform: rotate(360deg);
26
- }
27
- }
28
- #share-btn-container {
29
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
30
- }
31
- #share-btn {
32
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;
33
- }
34
- #share-btn * {
35
- all: unset;
36
- }
37
- #share-btn-container div:nth-child(-n+2){
38
- width: auto !important;
39
- min-height: 0px !important;
40
- }
41
- #share-btn-container .wrap {
42
- display: none !important;
43
- }
44
- a {text-decoration-line: underline;}
45
- '''
46
-
47
- with gr.Blocks(css=css) as demo:
48
- gr.HTML("""<div style="text-align: center; max-width: 700px; margin: 0 auto;">
49
- <div
50
- style="
51
- display: inline-flex;
52
- align-items: center;
53
- gap: 0.8rem;
54
- font-size: 1.75rem;
55
- "
56
- >
57
- <h1 style="font-weight: 900; margin-bottom: 7px; margin-top: 5px;">
58
- Magic Diffusion 🪄
59
- </h1>
60
- </div>
61
- <p style="margin-bottom: 10px; font-size: 94%">
62
- This Space prettifies your prompt using <a href="https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion" target="_blank">MagicPrompt</a>
63
- and then runs it through Stable Diffusion to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt.
64
- </p>
65
- </div>""")
66
-
67
- with gr.Row():
68
- with gr.Column():
69
- input_text = gr.Textbox(label="Short text prompt",
70
- lines=4, elem_id="input-text")
71
- with gr.Row():
72
- see_prompts = gr.Button("Feed in your text!")
73
-
74
- with gr.Column():
75
- text_output = gr.Textbox(
76
- label="Prettified text prompt",
77
- lines=4,
78
- elem_id="translated"
79
- )
80
- with gr.Row():
81
- diffuse_btn = gr.Button(value="Diffuse the Prompt!")
82
- with gr.Column(elem_id="generated-gallery"):
83
- sd_output = gr.Gallery().style(grid=2, height="auto")
84
- with gr.Group(elem_id="share-btn-container"):
85
- community_icon = gr.HTML(community_icon_html, visible=False)
86
- loading_icon = gr.HTML(loading_icon_html, visible=False)
87
- share_button = gr.Button("Share to community", elem_id="share-btn", visible=False)
88
-
89
- see_prompts.click(get_prompts,
90
- inputs = [input_text],
91
- outputs = [
92
- text_output
93
- ])
94
- diffuse_btn.click(get_images,
95
- inputs = [
96
- text_output
97
- ],
98
- outputs = [sd_output, community_icon, loading_icon, share_button]
99
- )
100
- share_button.click(None, [], [], _js=share_js)
101
-
102
-
103
-
104
- demo.launch(debug=True)
 
spaces/Alpaca233/SadTalker/src/face3d/data/__init__.py DELETED
@@ -1,116 +0,0 @@
1
- """This package includes all the modules related to data loading and preprocessing
2
-
3
- To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset.
4
- You need to implement four functions:
5
- -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
6
- -- <__len__>: return the size of dataset.
7
- -- <__getitem__>: get a data point from data loader.
8
- -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
9
-
10
- Now you can use the dataset class by specifying flag '--dataset_mode dummy'.
11
- See our template dataset class 'template_dataset.py' for more details.
12
- """
13
- import numpy as np
14
- import importlib
15
- import torch.utils.data
16
- from face3d.data.base_dataset import BaseDataset
17
-
18
-
19
- def find_dataset_using_name(dataset_name):
20
- """Import the module "data/[dataset_name]_dataset.py".
21
-
22
- In the file, the class called DatasetNameDataset() will
23
- be instantiated. It has to be a subclass of BaseDataset,
24
- and it is case-insensitive.
25
- """
26
- dataset_filename = "data." + dataset_name + "_dataset"
27
- datasetlib = importlib.import_module(dataset_filename)
28
-
29
- dataset = None
30
- target_dataset_name = dataset_name.replace('_', '') + 'dataset'
31
- for name, cls in datasetlib.__dict__.items():
32
- if name.lower() == target_dataset_name.lower() \
33
- and issubclass(cls, BaseDataset):
34
- dataset = cls
35
-
36
- if dataset is None:
37
- raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name))
38
-
39
- return dataset
40
-
41
-
42
- def get_option_setter(dataset_name):
43
- """Return the static method <modify_commandline_options> of the dataset class."""
44
- dataset_class = find_dataset_using_name(dataset_name)
45
- return dataset_class.modify_commandline_options
46
-
47
-
48
- def create_dataset(opt, rank=0):
49
- """Create a dataset given the option.
50
-
51
- This function wraps the class CustomDatasetDataLoader.
52
- This is the main interface between this package and 'train.py'/'test.py'
53
-
54
- Example:
55
- >>> from data import create_dataset
56
- >>> dataset = create_dataset(opt)
57
- """
58
- data_loader = CustomDatasetDataLoader(opt, rank=rank)
59
- dataset = data_loader.load_data()
60
- return dataset
61
-
62
- class CustomDatasetDataLoader():
63
- """Wrapper class of Dataset class that performs multi-threaded data loading"""
64
-
65
- def __init__(self, opt, rank=0):
66
- """Initialize this class
67
-
68
- Step 1: create a dataset instance given the name [dataset_mode]
69
- Step 2: create a multi-threaded data loader.
70
- """
71
- self.opt = opt
72
- dataset_class = find_dataset_using_name(opt.dataset_mode)
73
- self.dataset = dataset_class(opt)
74
- self.sampler = None
75
- print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__))
76
- if opt.use_ddp and opt.isTrain:
77
- world_size = opt.world_size
78
- self.sampler = torch.utils.data.distributed.DistributedSampler(
79
- self.dataset,
80
- num_replicas=world_size,
81
- rank=rank,
82
- shuffle=not opt.serial_batches
83
- )
84
- self.dataloader = torch.utils.data.DataLoader(
85
- self.dataset,
86
- sampler=self.sampler,
87
- num_workers=int(opt.num_threads / world_size),
88
- batch_size=int(opt.batch_size / world_size),
89
- drop_last=True)
90
- else:
91
- self.dataloader = torch.utils.data.DataLoader(
92
- self.dataset,
93
- batch_size=opt.batch_size,
94
- shuffle=(not opt.serial_batches) and opt.isTrain,
95
- num_workers=int(opt.num_threads),
96
- drop_last=True
97
- )
98
-
99
- def set_epoch(self, epoch):
100
- self.dataset.current_epoch = epoch
101
- if self.sampler is not None:
102
- self.sampler.set_epoch(epoch)
103
-
104
- def load_data(self):
105
- return self
106
-
107
- def __len__(self):
108
- """Return the number of data in the dataset"""
109
- return min(len(self.dataset), self.opt.max_dataset_size)
110
-
111
- def __iter__(self):
112
- """Return a batch of data"""
113
- for i, data in enumerate(self.dataloader):
114
- if i * self.opt.batch_size >= self.opt.max_dataset_size:
115
- break
116
- yield data
 
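The module docstring above spells out how a new dataset plugs into this loader: a file data/dummy_dataset.py defining a DummyDataset with four methods, selected via --dataset_mode dummy. A minimal sketch under those conventions could look like the following; the --dummy_size option and the random tensors are invented for illustration, and BaseDataset is assumed to expose the interface used elsewhere in this package.

# data/dummy_dataset.py (hypothetical) -- picked up by find_dataset_using_name('dummy')
import torch
from face3d.data.base_dataset import BaseDataset


class DummyDataset(BaseDataset):
    @staticmethod
    def modify_commandline_options(parser, is_train):
        # (optionally) add dataset-specific options and set default options
        parser.add_argument('--dummy_size', type=int, default=128, help='number of fake samples')
        return parser

    def __init__(self, opt):
        BaseDataset.__init__(self, opt)   # first call the base initializer, as the docstring requires
        self.size = getattr(opt, 'dummy_size', 128)

    def __len__(self):
        # return the size of the dataset
        return self.size

    def __getitem__(self, index):
        # return a single data point for the data loader
        return {'imgs': torch.rand(3, 224, 224), 'index': index}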
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py DELETED
@@ -1,707 +0,0 @@
1
- # Copyright 2023 TSAIL Team and The HuggingFace Team. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- # DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver
16
-
17
- import math
18
- from typing import List, Optional, Tuple, Union
19
-
20
- import numpy as np
21
- import torch
22
-
23
- from ..configuration_utils import ConfigMixin, register_to_config
24
- from ..utils import randn_tensor
25
- from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput
26
-
27
-
28
- # Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
29
- def betas_for_alpha_bar(
30
- num_diffusion_timesteps,
31
- max_beta=0.999,
32
- alpha_transform_type="cosine",
33
- ):
34
- """
35
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
36
- (1-beta) over time from t = [0,1].
37
-
38
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
39
- to that part of the diffusion process.
40
-
41
-
42
- Args:
43
- num_diffusion_timesteps (`int`): the number of betas to produce.
44
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
45
- prevent singularities.
46
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
47
- Choose from `cosine` or `exp`
48
-
49
- Returns:
50
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
51
- """
52
- if alpha_transform_type == "cosine":
53
-
54
- def alpha_bar_fn(t):
55
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
56
-
57
- elif alpha_transform_type == "exp":
58
-
59
- def alpha_bar_fn(t):
60
- return math.exp(t * -12.0)
61
-
62
- else:
63
- raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}")
64
-
65
- betas = []
66
- for i in range(num_diffusion_timesteps):
67
- t1 = i / num_diffusion_timesteps
68
- t2 = (i + 1) / num_diffusion_timesteps
69
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
70
- return torch.tensor(betas, dtype=torch.float32)
71
-
72
-
73
- class DPMSolverMultistepInverseScheduler(SchedulerMixin, ConfigMixin):
74
- """
75
- DPMSolverMultistepInverseScheduler is the reverse scheduler of [`DPMSolverMultistepScheduler`].
76
-
77
- We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space
78
- diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic
79
- thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as
80
- stable-diffusion).
81
-
82
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
83
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
84
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
85
- [`~SchedulerMixin.from_pretrained`] functions.
86
-
87
- Args:
88
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
89
- beta_start (`float`): the starting `beta` value of inference.
90
- beta_end (`float`): the final `beta` value.
91
- beta_schedule (`str`):
92
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
93
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
94
- trained_betas (`np.ndarray`, optional):
95
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
96
- solver_order (`int`, default `2`):
97
- the order of DPM-Solver; can be `1`, `2`, or `3`. We recommend `solver_order=2` for guided
98
- sampling, and `solver_order=3` for unconditional sampling.
99
- prediction_type (`str`, default `epsilon`, optional):
100
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
101
- process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4 of
102
- https://imagen.research.google/video/paper.pdf)
103
- thresholding (`bool`, default `False`):
104
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
105
- For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to
106
- use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion
107
- models (such as stable-diffusion).
108
- dynamic_thresholding_ratio (`float`, default `0.995`):
109
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
110
- (https://arxiv.org/abs/2205.11487).
111
- sample_max_value (`float`, default `1.0`):
112
- the threshold value for dynamic thresholding. Valid only when `thresholding=True` and
113
- `algorithm_type="dpmsolver++`.
114
- algorithm_type (`str`, default `dpmsolver++`):
115
- the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++` or `sde-dpmsolver` or
116
- `sde-dpmsolver++`. The `dpmsolver` type implements the algorithms in https://arxiv.org/abs/2206.00927, and
117
- the `dpmsolver++` type implements the algorithms in https://arxiv.org/abs/2211.01095. We recommend using
118
- `dpmsolver++` or `sde-dpmsolver++` with `solver_order=2` for guided sampling (e.g. stable-diffusion).
119
- solver_type (`str`, default `midpoint`):
120
- the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects
121
- the sample quality, especially for a small number of steps. We empirically find that `midpoint` solvers are
122
- slightly better, so we recommend the `midpoint` type.
123
- lower_order_final (`bool`, default `True`):
124
- whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically
125
- find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10.
126
- use_karras_sigmas (`bool`, *optional*, defaults to `False`):
127
- This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the
128
- noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence
129
- of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf.
130
- lambda_min_clipped (`float`, default `-inf`):
131
- the clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for
132
- cosine (squaredcos_cap_v2) noise schedule.
133
- variance_type (`str`, *optional*):
134
- Set to "learned" or "learned_range" for diffusion models that predict variance. For example, OpenAI's
135
- guided-diffusion (https://github.com/openai/guided-diffusion) predicts both mean and variance of the
136
- Gaussian distribution in the model's output. DPM-Solver only needs the "mean" output because it is based on
137
- diffusion ODEs.
141
- timestep_spacing (`str`, default `"linspace"`):
142
- The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
143
- Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
144
- steps_offset (`int`, default `0`):
145
- an offset added to the inference steps. You can use a combination of `offset=1` and
146
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
147
- stable diffusion.
148
- """
149
-
150
- _compatibles = [e.name for e in KarrasDiffusionSchedulers]
151
- order = 1
152
-
153
- @register_to_config
154
- def __init__(
155
- self,
156
- num_train_timesteps: int = 1000,
157
- beta_start: float = 0.0001,
158
- beta_end: float = 0.02,
159
- beta_schedule: str = "linear",
160
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
161
- solver_order: int = 2,
162
- prediction_type: str = "epsilon",
163
- thresholding: bool = False,
164
- dynamic_thresholding_ratio: float = 0.995,
165
- sample_max_value: float = 1.0,
166
- algorithm_type: str = "dpmsolver++",
167
- solver_type: str = "midpoint",
168
- lower_order_final: bool = True,
169
- use_karras_sigmas: Optional[bool] = False,
170
- lambda_min_clipped: float = -float("inf"),
171
- variance_type: Optional[str] = None,
172
- timestep_spacing: str = "linspace",
173
- steps_offset: int = 0,
174
- ):
175
- if trained_betas is not None:
176
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
177
- elif beta_schedule == "linear":
178
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
179
- elif beta_schedule == "scaled_linear":
180
- # this schedule is very specific to the latent diffusion model.
181
- self.betas = (
182
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
183
- )
184
- elif beta_schedule == "squaredcos_cap_v2":
185
- # Glide cosine schedule
186
- self.betas = betas_for_alpha_bar(num_train_timesteps)
187
- else:
188
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
189
-
190
- self.alphas = 1.0 - self.betas
191
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
192
- # Currently we only support VP-type noise schedule
193
- self.alpha_t = torch.sqrt(self.alphas_cumprod)
194
- self.sigma_t = torch.sqrt(1 - self.alphas_cumprod)
195
- self.lambda_t = torch.log(self.alpha_t) - torch.log(self.sigma_t)
196
-
197
- # standard deviation of the initial noise distribution
198
- self.init_noise_sigma = 1.0
199
-
200
- # settings for DPM-Solver
201
- if algorithm_type not in ["dpmsolver", "dpmsolver++", "sde-dpmsolver", "sde-dpmsolver++"]:
202
- if algorithm_type == "deis":
203
- self.register_to_config(algorithm_type="dpmsolver++")
204
- else:
205
- raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}")
206
-
207
- if solver_type not in ["midpoint", "heun"]:
208
- if solver_type in ["logrho", "bh1", "bh2"]:
209
- self.register_to_config(solver_type="midpoint")
210
- else:
211
- raise NotImplementedError(f"{solver_type} does is not implemented for {self.__class__}")
212
-
213
- # setable values
214
- self.num_inference_steps = None
215
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32).copy()
216
- self.timesteps = torch.from_numpy(timesteps)
217
- self.model_outputs = [None] * solver_order
218
- self.lower_order_nums = 0
219
- self.use_karras_sigmas = use_karras_sigmas
220
-
221
- def set_timesteps(self, num_inference_steps: int = None, device: Union[str, torch.device] = None):
222
- """
223
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
224
-
225
- Args:
226
- num_inference_steps (`int`):
227
- the number of diffusion steps used when generating samples with a pre-trained model.
228
- device (`str` or `torch.device`, optional):
229
- the device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
230
- """
231
- # Clipping the minimum of all lambda(t) for numerical stability.
232
- # This is critical for cosine (squaredcos_cap_v2) noise schedule.
233
- clipped_idx = torch.searchsorted(torch.flip(self.lambda_t, [0]), self.lambda_min_clipped).item()
234
- self.noisiest_timestep = self.config.num_train_timesteps - 1 - clipped_idx
235
-
236
- # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
237
- if self.config.timestep_spacing == "linspace":
238
- timesteps = (
239
- np.linspace(0, self.noisiest_timestep, num_inference_steps + 1).round()[:-1].copy().astype(np.int64)
240
- )
241
- elif self.config.timestep_spacing == "leading":
242
- step_ratio = (self.noisiest_timestep + 1) // (num_inference_steps + 1)
243
- # creates integer timesteps by multiplying by ratio
244
- # casting to int to avoid issues when num_inference_step is power of 3
245
- timesteps = (np.arange(0, num_inference_steps + 1) * step_ratio).round()[:-1].copy().astype(np.int64)
246
- timesteps += self.config.steps_offset
247
- elif self.config.timestep_spacing == "trailing":
248
- step_ratio = self.config.num_train_timesteps / num_inference_steps
249
- # creates integer timesteps by multiplying by ratio
250
- # casting to int to avoid issues when num_inference_step is power of 3
251
- timesteps = np.arange(self.noisiest_timestep + 1, 0, -step_ratio).round()[::-1].copy().astype(np.int64)
252
- timesteps -= 1
253
- else:
254
- raise ValueError(
255
- f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', "
256
- "'leading' or 'trailing'."
257
- )
258
-
259
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
260
- if self.config.use_karras_sigmas:
261
- log_sigmas = np.log(sigmas)
262
- sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=num_inference_steps)
263
- timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]).round()
264
- timesteps = timesteps.copy().astype(np.int64)
265
-
266
- self.sigmas = torch.from_numpy(sigmas)
267
-
268
- # when num_inference_steps == num_train_timesteps, we can end up with
269
- # duplicates in timesteps.
270
- _, unique_indices = np.unique(timesteps, return_index=True)
271
- timesteps = timesteps[np.sort(unique_indices)]
272
-
273
- self.timesteps = torch.from_numpy(timesteps).to(device)
274
-
275
- self.num_inference_steps = len(timesteps)
276
-
277
- self.model_outputs = [
278
- None,
279
- ] * self.config.solver_order
280
- self.lower_order_nums = 0
281
-
282
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample
283
- def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor:
284
- """
285
- "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the
286
- prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by
287
- s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing
288
- pixels from saturation at each step. We find that dynamic thresholding results in significantly better
289
- photorealism as well as better image-text alignment, especially when using very large guidance weights."
290
-
291
- https://arxiv.org/abs/2205.11487
292
- """
293
- dtype = sample.dtype
294
- batch_size, channels, height, width = sample.shape
295
-
296
- if dtype not in (torch.float32, torch.float64):
297
- sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half
298
-
299
- # Flatten sample for doing quantile calculation along each image
300
- sample = sample.reshape(batch_size, channels * height * width)
301
-
302
- abs_sample = sample.abs() # "a certain percentile absolute pixel value"
303
-
304
- s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1)
305
- s = torch.clamp(
306
- s, min=1, max=self.config.sample_max_value
307
- ) # When clamped to min=1, equivalent to standard clipping to [-1, 1]
308
-
309
- s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0
310
- sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s"
311
-
312
- sample = sample.reshape(batch_size, channels, height, width)
313
- sample = sample.to(dtype)
314
-
315
- return sample
316
-
317
- # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._sigma_to_t
318
- def _sigma_to_t(self, sigma, log_sigmas):
319
- # get log sigma
320
- log_sigma = np.log(sigma)
321
-
322
- # get distribution
323
- dists = log_sigma - log_sigmas[:, np.newaxis]
324
-
325
- # get sigmas range
326
- low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2)
327
- high_idx = low_idx + 1
328
-
329
- low = log_sigmas[low_idx]
330
- high = log_sigmas[high_idx]
331
-
332
- # interpolate sigmas
333
- w = (low - log_sigma) / (low - high)
334
- w = np.clip(w, 0, 1)
335
-
336
- # transform interpolation to time range
337
- t = (1 - w) * low_idx + w * high_idx
338
- t = t.reshape(sigma.shape)
339
- return t
340
-
341
- # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_karras
342
- def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor:
343
- """Constructs the noise schedule of Karras et al. (2022)."""
344
-
345
- sigma_min: float = in_sigmas[-1].item()
346
- sigma_max: float = in_sigmas[0].item()
347
-
348
- rho = 7.0 # 7.0 is the value used in the paper
349
- ramp = np.linspace(0, 1, num_inference_steps)
350
- min_inv_rho = sigma_min ** (1 / rho)
351
- max_inv_rho = sigma_max ** (1 / rho)
352
- sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
353
- return sigmas
354
-
355
- # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.convert_model_output
356
- def convert_model_output(
357
- self, model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor
358
- ) -> torch.FloatTensor:
359
- """
360
- Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs.
361
-
362
- DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to
363
- discretize an integral of the data prediction model. So we need to first convert the model output to the
364
- corresponding type to match the algorithm.
365
-
366
- Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or
367
- DPM-Solver++ for both noise prediction model and data prediction model.
368
-
369
- Args:
370
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
371
- timestep (`int`): current discrete timestep in the diffusion chain.
372
- sample (`torch.FloatTensor`):
373
- current instance of sample being created by diffusion process.
374
-
375
- Returns:
376
- `torch.FloatTensor`: the converted model output.
377
- """
378
-
379
- # DPM-Solver++ needs to solve an integral of the data prediction model.
380
- if self.config.algorithm_type in ["dpmsolver++", "sde-dpmsolver++"]:
381
- if self.config.prediction_type == "epsilon":
382
- # DPM-Solver and DPM-Solver++ only need the "mean" output.
383
- if self.config.variance_type in ["learned", "learned_range"]:
384
- model_output = model_output[:, :3]
385
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
386
- x0_pred = (sample - sigma_t * model_output) / alpha_t
387
- elif self.config.prediction_type == "sample":
388
- x0_pred = model_output
389
- elif self.config.prediction_type == "v_prediction":
390
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
391
- x0_pred = alpha_t * sample - sigma_t * model_output
392
- else:
393
- raise ValueError(
394
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or"
395
- " `v_prediction` for the DPMSolverMultistepScheduler."
396
- )
397
-
398
- if self.config.thresholding:
399
- x0_pred = self._threshold_sample(x0_pred)
400
-
401
- return x0_pred
402
-
403
- # DPM-Solver needs to solve an integral of the noise prediction model.
404
- elif self.config.algorithm_type in ["dpmsolver", "sde-dpmsolver"]:
405
- if self.config.prediction_type == "epsilon":
406
- # DPM-Solver and DPM-Solver++ only need the "mean" output.
407
- if self.config.variance_type in ["learned", "learned_range"]:
408
- epsilon = model_output[:, :3]
409
- else:
410
- epsilon = model_output
411
- elif self.config.prediction_type == "sample":
412
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
413
- epsilon = (sample - alpha_t * model_output) / sigma_t
414
- elif self.config.prediction_type == "v_prediction":
415
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
416
- epsilon = alpha_t * model_output + sigma_t * sample
417
- else:
418
- raise ValueError(
419
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or"
420
- " `v_prediction` for the DPMSolverMultistepScheduler."
421
- )
422
-
423
- if self.config.thresholding:
424
- alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep]
425
- x0_pred = (sample - sigma_t * epsilon) / alpha_t
426
- x0_pred = self._threshold_sample(x0_pred)
427
- epsilon = (sample - alpha_t * x0_pred) / sigma_t
428
-
429
- return epsilon
430
-
431
- def dpm_solver_first_order_update(
432
- self,
433
- model_output: torch.FloatTensor,
434
- timestep: int,
435
- prev_timestep: int,
436
- sample: torch.FloatTensor,
437
- noise: Optional[torch.FloatTensor] = None,
438
- ) -> torch.FloatTensor:
439
- """
440
- One step for the first-order DPM-Solver (equivalent to DDIM).
441
-
442
- See https://arxiv.org/abs/2206.00927 for the detailed derivation.
443
-
444
- Args:
445
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
446
- timestep (`int`): current discrete timestep in the diffusion chain.
447
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
448
- sample (`torch.FloatTensor`):
449
- current instance of sample being created by diffusion process.
450
-
451
- Returns:
452
- `torch.FloatTensor`: the sample tensor at the previous timestep.
453
- """
454
- lambda_t, lambda_s = self.lambda_t[prev_timestep], self.lambda_t[timestep]
455
- alpha_t, alpha_s = self.alpha_t[prev_timestep], self.alpha_t[timestep]
456
- sigma_t, sigma_s = self.sigma_t[prev_timestep], self.sigma_t[timestep]
457
- h = lambda_t - lambda_s
458
- if self.config.algorithm_type == "dpmsolver++":
459
- x_t = (sigma_t / sigma_s) * sample - (alpha_t * (torch.exp(-h) - 1.0)) * model_output
460
- elif self.config.algorithm_type == "dpmsolver":
461
- x_t = (alpha_t / alpha_s) * sample - (sigma_t * (torch.exp(h) - 1.0)) * model_output
462
- elif "sde" in self.config.algorithm_type:
463
- raise NotImplementedError(
464
- f"Inversion step is not yet implemented for algorithm type {self.config.algorithm_type}."
465
- )
466
- return x_t
467
-
468
- def multistep_dpm_solver_second_order_update(
469
- self,
470
- model_output_list: List[torch.FloatTensor],
471
- timestep_list: List[int],
472
- prev_timestep: int,
473
- sample: torch.FloatTensor,
474
- noise: Optional[torch.FloatTensor] = None,
475
- ) -> torch.FloatTensor:
476
- """
477
- One step for the second-order multistep DPM-Solver.
478
-
479
- Args:
480
- model_output_list (`List[torch.FloatTensor]`):
481
- direct outputs from learned diffusion model at current and latter timesteps.
482
- timestep_list (`List[int]`): current and latter discrete timesteps in the diffusion chain.
483
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
484
- sample (`torch.FloatTensor`):
485
- current instance of sample being created by diffusion process.
486
-
487
- Returns:
488
- `torch.FloatTensor`: the sample tensor at the previous timestep.
489
- """
490
- t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2]
491
- m0, m1 = model_output_list[-1], model_output_list[-2]
492
- lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1]
493
- alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0]
494
- sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0]
495
- h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1
496
- r0 = h_0 / h
497
- D0, D1 = m0, (1.0 / r0) * (m0 - m1)
498
- if self.config.algorithm_type == "dpmsolver++":
499
- # See https://arxiv.org/abs/2211.01095 for detailed derivations
500
- if self.config.solver_type == "midpoint":
501
- x_t = (
502
- (sigma_t / sigma_s0) * sample
503
- - (alpha_t * (torch.exp(-h) - 1.0)) * D0
504
- - 0.5 * (alpha_t * (torch.exp(-h) - 1.0)) * D1
505
- )
506
- elif self.config.solver_type == "heun":
507
- x_t = (
508
- (sigma_t / sigma_s0) * sample
509
- - (alpha_t * (torch.exp(-h) - 1.0)) * D0
510
- + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1
511
- )
512
- elif self.config.algorithm_type == "dpmsolver":
513
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
514
- if self.config.solver_type == "midpoint":
515
- x_t = (
516
- (alpha_t / alpha_s0) * sample
517
- - (sigma_t * (torch.exp(h) - 1.0)) * D0
518
- - 0.5 * (sigma_t * (torch.exp(h) - 1.0)) * D1
519
- )
520
- elif self.config.solver_type == "heun":
521
- x_t = (
522
- (alpha_t / alpha_s0) * sample
523
- - (sigma_t * (torch.exp(h) - 1.0)) * D0
524
- - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1
525
- )
526
- elif "sde" in self.config.algorithm_type:
527
- raise NotImplementedError(
528
- f"Inversion step is not yet implemented for algorithm type {self.config.algorithm_type}."
529
- )
530
- return x_t
531
-
532
- # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.multistep_dpm_solver_third_order_update
533
- def multistep_dpm_solver_third_order_update(
534
- self,
535
- model_output_list: List[torch.FloatTensor],
536
- timestep_list: List[int],
537
- prev_timestep: int,
538
- sample: torch.FloatTensor,
539
- ) -> torch.FloatTensor:
540
- """
541
- One step for the third-order multistep DPM-Solver.
542
-
543
- Args:
544
- model_output_list (`List[torch.FloatTensor]`):
545
- direct outputs from learned diffusion model at current and latter timesteps.
546
- timestep_list (`List[int]`): current and latter discrete timesteps in the diffusion chain.
547
- prev_timestep (`int`): previous discrete timestep in the diffusion chain.
548
- sample (`torch.FloatTensor`):
549
- current instance of sample being created by diffusion process.
550
-
551
- Returns:
552
- `torch.FloatTensor`: the sample tensor at the previous timestep.
553
- """
554
- t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3]
555
- m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3]
556
- lambda_t, lambda_s0, lambda_s1, lambda_s2 = (
557
- self.lambda_t[t],
558
- self.lambda_t[s0],
559
- self.lambda_t[s1],
560
- self.lambda_t[s2],
561
- )
562
- alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0]
563
- sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0]
564
- h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2
565
- r0, r1 = h_0 / h, h_1 / h
566
- D0 = m0
567
- D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2)
568
- D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1)
569
- D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1)
570
- if self.config.algorithm_type == "dpmsolver++":
571
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
572
- x_t = (
573
- (sigma_t / sigma_s0) * sample
574
- - (alpha_t * (torch.exp(-h) - 1.0)) * D0
575
- + (alpha_t * ((torch.exp(-h) - 1.0) / h + 1.0)) * D1
576
- - (alpha_t * ((torch.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2
577
- )
578
- elif self.config.algorithm_type == "dpmsolver":
579
- # See https://arxiv.org/abs/2206.00927 for detailed derivations
580
- x_t = (
581
- (alpha_t / alpha_s0) * sample
582
- - (sigma_t * (torch.exp(h) - 1.0)) * D0
583
- - (sigma_t * ((torch.exp(h) - 1.0) / h - 1.0)) * D1
584
- - (sigma_t * ((torch.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2
585
- )
586
- return x_t
587
-
588
- def step(
589
- self,
590
- model_output: torch.FloatTensor,
591
- timestep: int,
592
- sample: torch.FloatTensor,
593
- generator=None,
594
- return_dict: bool = True,
595
- ) -> Union[SchedulerOutput, Tuple]:
596
- """
597
- Step function propagating the sample with the multistep DPM-Solver.
598
-
599
- Args:
600
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
601
- timestep (`int`): current discrete timestep in the diffusion chain.
602
- sample (`torch.FloatTensor`):
603
- current instance of sample being created by diffusion process.
604
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
605
-
606
- Returns:
607
- [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is
608
- True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
609
-
610
- """
611
- if self.num_inference_steps is None:
612
- raise ValueError(
613
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
614
- )
615
-
616
- if isinstance(timestep, torch.Tensor):
617
- timestep = timestep.to(self.timesteps.device)
618
- step_index = (self.timesteps == timestep).nonzero()
619
- if len(step_index) == 0:
620
- step_index = len(self.timesteps) - 1
621
- else:
622
- step_index = step_index.item()
623
- prev_timestep = (
624
- self.noisiest_timestep if step_index == len(self.timesteps) - 1 else self.timesteps[step_index + 1]
625
- )
626
- lower_order_final = (
627
- (step_index == len(self.timesteps) - 1) and self.config.lower_order_final and len(self.timesteps) < 15
628
- )
629
- lower_order_second = (
630
- (step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15
631
- )
632
-
633
- model_output = self.convert_model_output(model_output, timestep, sample)
634
- for i in range(self.config.solver_order - 1):
635
- self.model_outputs[i] = self.model_outputs[i + 1]
636
- self.model_outputs[-1] = model_output
637
-
638
- if self.config.algorithm_type in ["sde-dpmsolver", "sde-dpmsolver++"]:
639
- noise = randn_tensor(
640
- model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype
641
- )
642
- else:
643
- noise = None
644
-
645
- if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final:
646
- prev_sample = self.dpm_solver_first_order_update(
647
- model_output, timestep, prev_timestep, sample, noise=noise
648
- )
649
- elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second:
650
- timestep_list = [self.timesteps[step_index - 1], timestep]
651
- prev_sample = self.multistep_dpm_solver_second_order_update(
652
- self.model_outputs, timestep_list, prev_timestep, sample, noise=noise
653
- )
654
- else:
655
- timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep]
656
- prev_sample = self.multistep_dpm_solver_third_order_update(
657
- self.model_outputs, timestep_list, prev_timestep, sample
658
- )
659
-
660
- if self.lower_order_nums < self.config.solver_order:
661
- self.lower_order_nums += 1
662
-
663
- if not return_dict:
664
- return (prev_sample,)
665
-
666
- return SchedulerOutput(prev_sample=prev_sample)
667
-
668
- # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.scale_model_input
669
- def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs) -> torch.FloatTensor:
670
- """
671
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
672
- current timestep.
673
-
674
- Args:
675
- sample (`torch.FloatTensor`): input sample
676
-
677
- Returns:
678
- `torch.FloatTensor`: scaled input sample
679
- """
680
- return sample
681
-
682
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
683
- def add_noise(
684
- self,
685
- original_samples: torch.FloatTensor,
686
- noise: torch.FloatTensor,
687
- timesteps: torch.IntTensor,
688
- ) -> torch.FloatTensor:
689
- # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
690
- alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
691
- timesteps = timesteps.to(original_samples.device)
692
-
693
- sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
694
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
695
- while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
696
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
697
-
698
- sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
699
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
700
- while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
701
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
702
-
703
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
704
- return noisy_samples
705
-
706
- def __len__(self):
707
- return self.config.num_train_timesteps
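
For context, a minimal usage sketch of the scheduler deleted above, assuming a diffusers release that still exports `DPMSolverMultistepInverseScheduler`; the zero tensor below merely stands in for a real UNet epsilon prediction:

    import torch
    from diffusers import DPMSolverMultistepInverseScheduler

    scheduler = DPMSolverMultistepInverseScheduler(solver_order=2, algorithm_type="dpmsolver++")
    scheduler.set_timesteps(num_inference_steps=20)

    sample = torch.randn(1, 4, 8, 8)               # stands in for a clean latent x_0
    for t in scheduler.timesteps:                   # inverse run: low noise -> high noise
        model_output = torch.zeros_like(sample)     # placeholder for a real epsilon prediction
        sample = scheduler.step(model_output, t, sample).prev_sample

Each `step()` call converts the model output to a data prediction (for `dpmsolver++`), stores it in `model_outputs`, and applies a first- or second-order update depending on how many past outputs are already available.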
 
spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py DELETED
@@ -1,23 +0,0 @@
1
- _base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py'
2
- model = dict(
3
- pretrained='open-mmlab://resnext101_32x4d',
4
- backbone=dict(
5
- type='ResNeXt',
6
- depth=101,
7
- groups=32,
8
- base_width=4,
9
- num_stages=4,
10
- out_indices=(0, 1, 2, 3),
11
- frozen_stages=1,
12
- style='pytorch'))
13
- # optimizer
14
- optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
15
- optimizer_config = dict(grad_clip=None)
16
- # learning policy
17
- lr_config = dict(
18
- policy='step',
19
- warmup='linear',
20
- warmup_iters=3665,
21
- warmup_ratio=1.0 / 80,
22
- step=[17, 23])
23
- runner = dict(type='EpochBasedRunner', max_epochs=25)
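
This config only overrides the ResNeXt-101 backbone and the training schedule on top of its `_base_` file. A minimal sketch of how MMDetection 2.x resolves it, assuming an MMDetection checkout where mmcv's `Config` and the base config are available:

    from mmcv import Config

    cfg = Config.fromfile("configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py")
    print(cfg.model.backbone.type)   # 'ResNeXt', overriding the base ResNet-50 backbone
    print(cfg.optimizer.lr)          # 0.02, as set above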
 
spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r101_fpn_2x_coco.py DELETED
@@ -1,4 +0,0 @@
1
- _base_ = './vfnet_r50_fpn_1x_coco.py'
2
- model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
3
- lr_config = dict(step=[16, 22])
4
- runner = dict(type='EpochBasedRunner', max_epochs=24)
 
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/download_urls.py DELETED
@@ -1,65 +0,0 @@
1
- import concurrent.futures
2
- import requests
3
- import re
4
-
5
- from bs4 import BeautifulSoup
6
-
7
- import extensions.superboogav2.parameters as parameters
8
-
9
- from .data_processor import process_and_add_to_collector
10
- from .utils import create_metadata_source
11
-
12
- def _download_single(url):
13
- response = requests.get(url, timeout=5)
14
- if response.status_code == 200:
15
- return response.content
16
- else:
17
- raise Exception("Failed to download URL")
18
-
19
-
20
- def _download_urls(urls, threads=1):
21
- with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
22
- futures = []
23
- for url in urls:
24
- future = executor.submit(_download_single, url)
25
- futures.append(future)
26
-
27
- results = []
28
- i = 0
29
- for future in concurrent.futures.as_completed(futures):
30
- try:
31
- result = future.result()
32
- results.append(result)
33
- i += 1
34
- yield f"{i}/{len(urls)}", results
35
- except Exception:
36
- pass
37
-
38
- yield "Done", results
39
-
40
-
41
- def feed_url_into_collector(urls, collector):
42
- all_text = ''
43
- cumulative = ''
44
-
45
- urls = urls.strip().split('\n')
46
- cumulative += f'Loading {len(urls)} URLs with {parameters.get_num_threads()} threads...\n\n'
47
- yield cumulative
48
- for update, contents in _download_urls(urls, threads=parameters.get_num_threads()):
49
- yield cumulative + update
50
-
51
- cumulative += 'Processing the HTML sources...'
52
- yield cumulative
53
- for content in contents:
54
- soup = BeautifulSoup(content, features="lxml")
55
- for script in soup(["script", "style"]):
56
- script.extract()
57
-
58
- strings = soup.stripped_strings
59
- if parameters.get_is_strong_cleanup():
60
- strings = [s for s in strings if re.search("[A-Za-z] ", s)]
61
-
62
- text = '\n'.join([s.strip() for s in strings])
63
- all_text += text
64
-
65
- process_and_add_to_collector(all_text, collector, False, create_metadata_source('url-download'))
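
A standalone sketch of the same pattern the helper above implements — parallel downloads followed by BeautifulSoup cleanup — assuming only `requests`, `beautifulsoup4`, and `lxml` are installed (the URL is a placeholder):

    import concurrent.futures

    import requests
    from bs4 import BeautifulSoup

    def fetch(url: str) -> bytes:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.content

    urls = ["https://example.com"]  # placeholder
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        pages = list(pool.map(fetch, urls))

    for page in pages:
        soup = BeautifulSoup(page, features="lxml")
        for tag in soup(["script", "style"]):   # drop non-text nodes, as the helper does
            tag.extract()
        text = "\n".join(s.strip() for s in soup.stripped_strings)
        print(text[:200])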
 
spaces/Apex-X/ROOPOK/roop/predictor.py DELETED
@@ -1,43 +0,0 @@
1
- import threading
2
- import numpy
3
- import opennsfw2
4
- from PIL import Image
5
- from keras import Model
6
-
7
- from roop.typing import Frame
8
-
9
- PREDICTOR = None
10
- THREAD_LOCK = threading.Lock()
11
- MAX_PROBABILITY = 0.85
12
-
13
-
14
- def get_predictor() -> Model:
15
- global PREDICTOR
16
-
17
- with THREAD_LOCK:
18
- if PREDICTOR is None:
19
- PREDICTOR = opennsfw2.make_open_nsfw_model()
20
- return PREDICTOR
21
-
22
-
23
- def clear_predictor() -> None:
24
- global PREDICTOR
25
-
26
- PREDICTOR = None
27
-
28
-
29
- def predict_frame(target_frame: Frame) -> bool:
30
- image = Image.fromarray(target_frame)
31
- image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO)
32
- views = numpy.expand_dims(image, axis=0)
33
- _, probability = get_predictor().predict(views)[0]
34
- return probability > MAX_PROBABILITY
35
-
36
-
37
- def predict_image(target_path: str) -> bool:
38
- return opennsfw2.predict_image(target_path) > MAX_PROBABILITY
39
-
40
-
41
- def predict_video(target_path: str) -> bool:
42
- _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100)
43
- return any(probability > MAX_PROBABILITY for probability in probabilities)
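
The module above gates individual frames and whole files on an NSFW probability from `opennsfw2`. A minimal single-image sketch of the same check (the path is a placeholder, and the library downloads its model weights on first use):

    import opennsfw2

    MAX_PROBABILITY = 0.85
    probability = opennsfw2.predict_image("frame.jpg")  # placeholder path
    print("blocked" if probability > MAX_PROBABILITY else "allowed", round(probability, 3))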
 
spaces/Ariharasudhan/YoloV5/utils/loggers/wandb/sweep.py DELETED
@@ -1,41 +0,0 @@
1
- import sys
2
- from pathlib import Path
3
-
4
- import wandb
5
-
6
- FILE = Path(__file__).resolve()
7
- ROOT = FILE.parents[3] # YOLOv5 root directory
8
- if str(ROOT) not in sys.path:
9
- sys.path.append(str(ROOT)) # add ROOT to PATH
10
-
11
- from train import parse_opt, train
12
- from utils.callbacks import Callbacks
13
- from utils.general import increment_path
14
- from utils.torch_utils import select_device
15
-
16
-
17
- def sweep():
18
- wandb.init()
19
- # Get hyp dict from sweep agent. Copy because train() modifies parameters which confused wandb.
20
- hyp_dict = vars(wandb.config).get("_items").copy()
21
-
22
- # Workaround: get necessary opt args
23
- opt = parse_opt(known=True)
24
- opt.batch_size = hyp_dict.get("batch_size")
25
- opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))
26
- opt.epochs = hyp_dict.get("epochs")
27
- opt.nosave = True
28
- opt.data = hyp_dict.get("data")
29
- opt.weights = str(opt.weights)
30
- opt.cfg = str(opt.cfg)
31
- opt.data = str(opt.data)
32
- opt.hyp = str(opt.hyp)
33
- opt.project = str(opt.project)
34
- device = select_device(opt.device, batch_size=opt.batch_size)
35
-
36
- # train
37
- train(hyp_dict, opt, device, callbacks=Callbacks())
38
-
39
-
40
- if __name__ == "__main__":
41
- sweep()
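
This entry point is normally driven by a Weights & Biases sweep agent, which injects hyperparameters through `wandb.config` before `sweep()` reads them. A minimal sketch of launching it programmatically, assuming it runs from the YOLOv5 root with this module in place and using a placeholder sweep id:

    import wandb
    from utils.loggers.wandb.sweep import sweep  # module shown above

    wandb.agent("entity/project/sweep_id", function=sweep, count=1)  # placeholder sweep id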
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/dist.py DELETED
@@ -1,1222 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- __all__ = ['Distribution']
3
-
4
- import io
5
- import sys
6
- import re
7
- import os
8
- import warnings
9
- import numbers
10
- import distutils.log
11
- import distutils.core
12
- import distutils.cmd
13
- import distutils.dist
14
- import distutils.command
15
- from distutils.util import strtobool
16
- from distutils.debug import DEBUG
17
- from distutils.fancy_getopt import translate_longopt
18
- from glob import iglob
19
- import itertools
20
- import textwrap
21
- from typing import List, Optional, TYPE_CHECKING
22
- from pathlib import Path
23
-
24
- from collections import defaultdict
25
- from email import message_from_file
26
-
27
- from distutils.errors import DistutilsOptionError, DistutilsSetupError
28
- from distutils.util import rfc822_escape
29
-
30
- from setuptools.extern import packaging
31
- from setuptools.extern import ordered_set
32
- from setuptools.extern.more_itertools import unique_everseen, partition
33
-
34
- from ._importlib import metadata
35
-
36
- from . import SetuptoolsDeprecationWarning
37
-
38
- import setuptools
39
- import setuptools.command
40
- from setuptools import windows_support
41
- from setuptools.monkey import get_unpatched
42
- from setuptools.config import setupcfg, pyprojecttoml
43
- from setuptools.discovery import ConfigDiscovery
44
-
45
- import pkg_resources
46
- from setuptools.extern.packaging import version
47
- from . import _reqs
48
- from . import _entry_points
49
-
50
- if TYPE_CHECKING:
51
- from email.message import Message
52
-
53
- __import__('setuptools.extern.packaging.specifiers')
54
- __import__('setuptools.extern.packaging.version')
55
-
56
-
57
- def _get_unpatched(cls):
58
- warnings.warn("Do not call this function", DistDeprecationWarning)
59
- return get_unpatched(cls)
60
-
61
-
62
- def get_metadata_version(self):
63
- mv = getattr(self, 'metadata_version', None)
64
- if mv is None:
65
- mv = version.Version('2.1')
66
- self.metadata_version = mv
67
- return mv
68
-
69
-
70
- def rfc822_unescape(content: str) -> str:
71
- """Reverse RFC-822 escaping by removing leading whitespaces from content."""
72
- lines = content.splitlines()
73
- if len(lines) == 1:
74
- return lines[0].lstrip()
75
- return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:]))))
76
-
77
-
78
- def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]:
79
- """Read Message header field."""
80
- value = msg[field]
81
- if value == 'UNKNOWN':
82
- return None
83
- return value
84
-
85
-
86
- def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]:
87
- """Read Message header field and apply rfc822_unescape."""
88
- value = _read_field_from_msg(msg, field)
89
- if value is None:
90
- return value
91
- return rfc822_unescape(value)
92
-
93
-
94
- def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]:
95
- """Read Message header field and return all results as list."""
96
- values = msg.get_all(field, None)
97
- if values == []:
98
- return None
99
- return values
100
-
101
-
102
- def _read_payload_from_msg(msg: "Message") -> Optional[str]:
103
- value = msg.get_payload().strip()
104
- if value == 'UNKNOWN' or not value:
105
- return None
106
- return value
107
-
108
-
109
- def read_pkg_file(self, file):
110
- """Reads the metadata values from a file object."""
111
- msg = message_from_file(file)
112
-
113
- self.metadata_version = version.Version(msg['metadata-version'])
114
- self.name = _read_field_from_msg(msg, 'name')
115
- self.version = _read_field_from_msg(msg, 'version')
116
- self.description = _read_field_from_msg(msg, 'summary')
117
- # we are filling author only.
118
- self.author = _read_field_from_msg(msg, 'author')
119
- self.maintainer = None
120
- self.author_email = _read_field_from_msg(msg, 'author-email')
121
- self.maintainer_email = None
122
- self.url = _read_field_from_msg(msg, 'home-page')
123
- self.download_url = _read_field_from_msg(msg, 'download-url')
124
- self.license = _read_field_unescaped_from_msg(msg, 'license')
125
-
126
- self.long_description = _read_field_unescaped_from_msg(msg, 'description')
127
- if (
128
- self.long_description is None and
129
- self.metadata_version >= version.Version('2.1')
130
- ):
131
- self.long_description = _read_payload_from_msg(msg)
132
- self.description = _read_field_from_msg(msg, 'summary')
133
-
134
- if 'keywords' in msg:
135
- self.keywords = _read_field_from_msg(msg, 'keywords').split(',')
136
-
137
- self.platforms = _read_list_from_msg(msg, 'platform')
138
- self.classifiers = _read_list_from_msg(msg, 'classifier')
139
-
140
- # PEP 314 - these fields only exist in 1.1
141
- if self.metadata_version == version.Version('1.1'):
142
- self.requires = _read_list_from_msg(msg, 'requires')
143
- self.provides = _read_list_from_msg(msg, 'provides')
144
- self.obsoletes = _read_list_from_msg(msg, 'obsoletes')
145
- else:
146
- self.requires = None
147
- self.provides = None
148
- self.obsoletes = None
149
-
150
- self.license_files = _read_list_from_msg(msg, 'license-file')
151
-
152
-
153
- def single_line(val):
154
- """
155
- Quick and dirty validation for Summary pypa/setuptools#1390.
156
- """
157
- if '\n' in val:
158
- # TODO: Replace with `raise ValueError("newlines not allowed")`
159
- # after reviewing #2893.
160
- warnings.warn("newlines not allowed and will break in the future")
161
- val = val.strip().split('\n')[0]
162
- return val
163
-
164
-
165
- # Based on Python 3.5 version
166
- def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME
167
- """Write the PKG-INFO format data to a file object."""
168
- version = self.get_metadata_version()
169
-
170
- def write_field(key, value):
171
- file.write("%s: %s\n" % (key, value))
172
-
173
- write_field('Metadata-Version', str(version))
174
- write_field('Name', self.get_name())
175
- write_field('Version', self.get_version())
176
-
177
- summary = self.get_description()
178
- if summary:
179
- write_field('Summary', single_line(summary))
180
-
181
- optional_fields = (
182
- ('Home-page', 'url'),
183
- ('Download-URL', 'download_url'),
184
- ('Author', 'author'),
185
- ('Author-email', 'author_email'),
186
- ('Maintainer', 'maintainer'),
187
- ('Maintainer-email', 'maintainer_email'),
188
- )
189
-
190
- for field, attr in optional_fields:
191
- attr_val = getattr(self, attr, None)
192
- if attr_val is not None:
193
- write_field(field, attr_val)
194
-
195
- license = self.get_license()
196
- if license:
197
- write_field('License', rfc822_escape(license))
198
-
199
- for project_url in self.project_urls.items():
200
- write_field('Project-URL', '%s, %s' % project_url)
201
-
202
- keywords = ','.join(self.get_keywords())
203
- if keywords:
204
- write_field('Keywords', keywords)
205
-
206
- platforms = self.get_platforms() or []
207
- for platform in platforms:
208
- write_field('Platform', platform)
209
-
210
- self._write_list(file, 'Classifier', self.get_classifiers())
211
-
212
- # PEP 314
213
- self._write_list(file, 'Requires', self.get_requires())
214
- self._write_list(file, 'Provides', self.get_provides())
215
- self._write_list(file, 'Obsoletes', self.get_obsoletes())
216
-
217
- # Setuptools specific for PEP 345
218
- if hasattr(self, 'python_requires'):
219
- write_field('Requires-Python', self.python_requires)
220
-
221
- # PEP 566
222
- if self.long_description_content_type:
223
- write_field('Description-Content-Type', self.long_description_content_type)
224
- if self.provides_extras:
225
- for extra in self.provides_extras:
226
- write_field('Provides-Extra', extra)
227
-
228
- self._write_list(file, 'License-File', self.license_files or [])
229
-
230
- long_description = self.get_long_description()
231
- if long_description:
232
- file.write("\n%s" % long_description)
233
- if not long_description.endswith("\n"):
234
- file.write("\n")
235
-
236
-
237
- sequence = tuple, list
238
-
239
-
240
- def check_importable(dist, attr, value):
241
- try:
242
- ep = metadata.EntryPoint(value=value, name=None, group=None)
243
- assert not ep.extras
244
- except (TypeError, ValueError, AttributeError, AssertionError) as e:
245
- raise DistutilsSetupError(
246
- "%r must be importable 'module:attrs' string (got %r)" % (attr, value)
247
- ) from e
248
-
249
-
250
- def assert_string_list(dist, attr, value):
251
- """Verify that value is a string list"""
252
- try:
253
- # verify that value is a list or tuple to exclude unordered
254
- # or single-use iterables
255
- assert isinstance(value, (list, tuple))
256
- # verify that elements of value are strings
257
- assert ''.join(value) != value
258
- except (TypeError, ValueError, AttributeError, AssertionError) as e:
259
- raise DistutilsSetupError(
260
- "%r must be a list of strings (got %r)" % (attr, value)
261
- ) from e
262
-
263
-
264
- def check_nsp(dist, attr, value):
265
- """Verify that namespace packages are valid"""
266
- ns_packages = value
267
- assert_string_list(dist, attr, ns_packages)
268
- for nsp in ns_packages:
269
- if not dist.has_contents_for(nsp):
270
- raise DistutilsSetupError(
271
- "Distribution contains no modules or packages for "
272
- + "namespace package %r" % nsp
273
- )
274
- parent, sep, child = nsp.rpartition('.')
275
- if parent and parent not in ns_packages:
276
- distutils.log.warn(
277
- "WARNING: %r is declared as a package namespace, but %r"
278
- " is not: please correct this in setup.py",
279
- nsp,
280
- parent,
281
- )
282
- msg = (
283
- "The namespace_packages parameter is deprecated, "
284
- "consider using implicit namespaces instead (PEP 420)."
285
- )
286
- warnings.warn(msg, SetuptoolsDeprecationWarning)
287
-
288
-
289
- def check_extras(dist, attr, value):
290
- """Verify that extras_require mapping is valid"""
291
- try:
292
- list(itertools.starmap(_check_extra, value.items()))
293
- except (TypeError, ValueError, AttributeError) as e:
294
- raise DistutilsSetupError(
295
- "'extras_require' must be a dictionary whose values are "
296
- "strings or lists of strings containing valid project/version "
297
- "requirement specifiers."
298
- ) from e
299
-
300
-
301
- def _check_extra(extra, reqs):
302
- name, sep, marker = extra.partition(':')
303
- if marker and pkg_resources.invalid_marker(marker):
304
- raise DistutilsSetupError("Invalid environment marker: " + marker)
305
- list(_reqs.parse(reqs))
306
-
307
-
308
- def assert_bool(dist, attr, value):
309
- """Verify that value is True, False, 0, or 1"""
310
- if bool(value) != value:
311
- tmpl = "{attr!r} must be a boolean value (got {value!r})"
312
- raise DistutilsSetupError(tmpl.format(attr=attr, value=value))
313
-
314
-
315
- def invalid_unless_false(dist, attr, value):
316
- if not value:
317
- warnings.warn(f"{attr} is ignored.", DistDeprecationWarning)
318
- return
319
- raise DistutilsSetupError(f"{attr} is invalid.")
320
-
321
-
322
- def check_requirements(dist, attr, value):
323
- """Verify that install_requires is a valid requirements list"""
324
- try:
325
- list(_reqs.parse(value))
326
- if isinstance(value, (dict, set)):
327
- raise TypeError("Unordered types are not allowed")
328
- except (TypeError, ValueError) as error:
329
- tmpl = (
330
- "{attr!r} must be a string or list of strings "
331
- "containing valid project/version requirement specifiers; {error}"
332
- )
333
- raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error
334
-
335
-
336
- def check_specifier(dist, attr, value):
337
- """Verify that value is a valid version specifier"""
338
- try:
339
- packaging.specifiers.SpecifierSet(value)
340
- except (packaging.specifiers.InvalidSpecifier, AttributeError) as error:
341
- tmpl = (
342
- "{attr!r} must be a string " "containing valid version specifiers; {error}"
343
- )
344
- raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error
345
-
346
-
347
- def check_entry_points(dist, attr, value):
348
- """Verify that entry_points map is parseable"""
349
- try:
350
- _entry_points.load(value)
351
- except Exception as e:
352
- raise DistutilsSetupError(e) from e
353
-
354
-
355
- def check_test_suite(dist, attr, value):
356
- if not isinstance(value, str):
357
- raise DistutilsSetupError("test_suite must be a string")
358
-
359
-
360
- def check_package_data(dist, attr, value):
361
- """Verify that value is a dictionary of package names to glob lists"""
362
- if not isinstance(value, dict):
363
- raise DistutilsSetupError(
364
- "{!r} must be a dictionary mapping package names to lists of "
365
- "string wildcard patterns".format(attr)
366
- )
367
- for k, v in value.items():
368
- if not isinstance(k, str):
369
- raise DistutilsSetupError(
370
- "keys of {!r} dict must be strings (got {!r})".format(attr, k)
371
- )
372
- assert_string_list(dist, 'values of {!r} dict'.format(attr), v)
373
-
374
-
375
- def check_packages(dist, attr, value):
376
- for pkgname in value:
377
- if not re.match(r'\w+(\.\w+)*', pkgname):
378
- distutils.log.warn(
379
- "WARNING: %r not a valid package name; please use only "
380
- ".-separated package names in setup.py",
381
- pkgname,
382
- )
383
-
384
-
385
- _Distribution = get_unpatched(distutils.core.Distribution)
386
-
387
-
388
- class Distribution(_Distribution):
389
- """Distribution with support for tests and package data
390
-
391
- This is an enhanced version of 'distutils.dist.Distribution' that
392
- effectively adds the following new optional keyword arguments to 'setup()':
393
-
394
- 'install_requires' -- a string or sequence of strings specifying project
395
- versions that the distribution requires when installed, in the format
396
- used by 'pkg_resources.require()'. They will be installed
397
- automatically when the package is installed. If you wish to use
398
- packages that are not available in PyPI, or want to give your users an
399
- alternate download location, you can add a 'find_links' option to the
400
- '[easy_install]' section of your project's 'setup.cfg' file, and then
401
- setuptools will scan the listed web pages for links that satisfy the
402
- requirements.
403
-
404
- 'extras_require' -- a dictionary mapping names of optional "extras" to the
405
- additional requirement(s) that using those extras incurs. For example,
406
- this::
407
-
408
- extras_require = dict(reST = ["docutils>=0.3", "reSTedit"])
409
-
410
- indicates that the distribution can optionally provide an extra
411
- capability called "reST", but it can only be used if docutils and
412
- reSTedit are installed. If the user installs your package using
413
- EasyInstall and requests one of your extras, the corresponding
414
- additional requirements will be installed if needed.
415
-
416
- 'test_suite' -- the name of a test suite to run for the 'test' command.
417
- If the user runs 'python setup.py test', the package will be installed,
418
- and the named test suite will be run. The format is the same as
419
- would be used on a 'unittest.py' command line. That is, it is the
420
- dotted name of an object to import and call to generate a test suite.
421
-
422
- 'package_data' -- a dictionary mapping package names to lists of filenames
423
- or globs to use to find data files contained in the named packages.
424
- If the dictionary has filenames or globs listed under '""' (the empty
425
- string), those names will be searched for in every package, in addition
426
- to any names for the specific package. Data files found using these
427
- names/globs will be installed along with the package, in the same
428
- location as the package. Note that globs are allowed to reference
429
- the contents of non-package subdirectories, as long as you use '/' as
430
- a path separator. (Globs are automatically converted to
431
- platform-specific paths at runtime.)
432
-
433
- In addition to these new keywords, this class also has several new methods
434
- for manipulating the distribution's contents. For example, the 'include()'
435
- and 'exclude()' methods can be thought of as in-place add and subtract
436
- commands that add or remove packages, modules, extensions, and so on from
437
- the distribution.
438
- """
439
-
440
- _DISTUTILS_UNSUPPORTED_METADATA = {
441
- 'long_description_content_type': lambda: None,
442
- 'project_urls': dict,
443
- 'provides_extras': ordered_set.OrderedSet,
444
- 'license_file': lambda: None,
445
- 'license_files': lambda: None,
446
- }
447
-
448
- _patched_dist = None
449
-
450
- def patch_missing_pkg_info(self, attrs):
451
- # Fake up a replacement for the data that would normally come from
452
- # PKG-INFO, but which might not yet be built if this is a fresh
453
- # checkout.
454
- #
455
- if not attrs or 'name' not in attrs or 'version' not in attrs:
456
- return
457
- key = pkg_resources.safe_name(str(attrs['name'])).lower()
458
- dist = pkg_resources.working_set.by_key.get(key)
459
- if dist is not None and not dist.has_metadata('PKG-INFO'):
460
- dist._version = pkg_resources.safe_version(str(attrs['version']))
461
- self._patched_dist = dist
462
-
463
- def __init__(self, attrs=None):
464
- have_package_data = hasattr(self, "package_data")
465
- if not have_package_data:
466
- self.package_data = {}
467
- attrs = attrs or {}
468
- self.dist_files = []
469
- # Filter-out setuptools' specific options.
470
- self.src_root = attrs.pop("src_root", None)
471
- self.patch_missing_pkg_info(attrs)
472
- self.dependency_links = attrs.pop('dependency_links', [])
473
- self.setup_requires = attrs.pop('setup_requires', [])
474
- for ep in metadata.entry_points(group='distutils.setup_keywords'):
475
- vars(self).setdefault(ep.name, None)
476
- _Distribution.__init__(
477
- self,
478
- {
479
- k: v
480
- for k, v in attrs.items()
481
- if k not in self._DISTUTILS_UNSUPPORTED_METADATA
482
- },
483
- )
484
-
485
- # Save the original dependencies before they are processed into the egg format
486
- self._orig_extras_require = {}
487
- self._orig_install_requires = []
488
- self._tmp_extras_require = defaultdict(ordered_set.OrderedSet)
489
-
490
- self.set_defaults = ConfigDiscovery(self)
491
-
492
- self._set_metadata_defaults(attrs)
493
-
494
- self.metadata.version = self._normalize_version(
495
- self._validate_version(self.metadata.version)
496
- )
497
- self._finalize_requires()
498
-
499
- def _validate_metadata(self):
500
- required = {"name"}
501
- provided = {
502
- key
503
- for key in vars(self.metadata)
504
- if getattr(self.metadata, key, None) is not None
505
- }
506
- missing = required - provided
507
-
508
- if missing:
509
- msg = f"Required package metadata is missing: {missing}"
510
- raise DistutilsSetupError(msg)
511
-
512
- def _set_metadata_defaults(self, attrs):
513
- """
514
- Fill-in missing metadata fields not supported by distutils.
515
- Some fields may have been set by other tools (e.g. pbr).
516
- Those fields (vars(self.metadata)) take precedence to
517
- supplied attrs.
518
- """
519
- for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items():
520
- vars(self.metadata).setdefault(option, attrs.get(option, default()))
521
-
522
- @staticmethod
523
- def _normalize_version(version):
524
- if isinstance(version, setuptools.sic) or version is None:
525
- return version
526
-
527
- normalized = str(packaging.version.Version(version))
528
- if version != normalized:
529
- tmpl = "Normalizing '{version}' to '{normalized}'"
530
- warnings.warn(tmpl.format(**locals()))
531
- return normalized
532
- return version
533
-
534
- @staticmethod
535
- def _validate_version(version):
536
- if isinstance(version, numbers.Number):
537
- # Some people apparently take "version number" too literally :)
538
- version = str(version)
539
-
540
- if version is not None:
541
- try:
542
- packaging.version.Version(version)
543
- except (packaging.version.InvalidVersion, TypeError):
544
- warnings.warn(
545
- "The version specified (%r) is an invalid version, this "
546
- "may not work as expected with newer versions of "
547
- "setuptools, pip, and PyPI. Please see PEP 440 for more "
548
- "details." % version
549
- )
550
- return setuptools.sic(version)
551
- return version
552
-
553
- def _finalize_requires(self):
554
- """
555
- Set `metadata.python_requires` and fix environment markers
556
- in `install_requires` and `extras_require`.
557
- """
558
- if getattr(self, 'python_requires', None):
559
- self.metadata.python_requires = self.python_requires
560
-
561
- if getattr(self, 'extras_require', None):
562
- # Save original before it is messed by _convert_extras_requirements
563
- self._orig_extras_require = self._orig_extras_require or self.extras_require
564
- for extra in self.extras_require.keys():
565
- # Since this gets called multiple times at points where the
566
- # keys have become 'converted' extras, ensure that we are only
567
- # truly adding extras we haven't seen before here.
568
- extra = extra.split(':')[0]
569
- if extra:
570
- self.metadata.provides_extras.add(extra)
571
-
572
- if getattr(self, 'install_requires', None) and not self._orig_install_requires:
573
- # Save original before it is messed by _move_install_requirements_markers
574
- self._orig_install_requires = self.install_requires
575
-
576
- self._convert_extras_requirements()
577
- self._move_install_requirements_markers()
578
-
579
- def _convert_extras_requirements(self):
580
- """
581
- Convert requirements in `extras_require` of the form
582
- `"extra": ["barbazquux; {marker}"]` to
583
- `"extra:{marker}": ["barbazquux"]`.
584
- """
585
- spec_ext_reqs = getattr(self, 'extras_require', None) or {}
586
- tmp = defaultdict(ordered_set.OrderedSet)
587
- self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp)
588
- for section, v in spec_ext_reqs.items():
589
- # Do not strip empty sections.
590
- self._tmp_extras_require[section]
591
- for r in _reqs.parse(v):
592
- suffix = self._suffix_for(r)
593
- self._tmp_extras_require[section + suffix].append(r)
594
-
595
- @staticmethod
596
- def _suffix_for(req):
597
- """
598
- For a requirement, return the 'extras_require' suffix for
599
- that requirement.
600
- """
601
- return ':' + str(req.marker) if req.marker else ''
602
-
603
- def _move_install_requirements_markers(self):
604
- """
605
- Move requirements in `install_requires` that are using environment
606
- markers to `extras_require`.
607
- """
608
-
609
- # divide the install_requires into two sets, simple ones still
610
- # handled by install_requires and more complex ones handled
611
- # by extras_require.
612
-
613
- def is_simple_req(req):
614
- return not req.marker
615
-
616
- spec_inst_reqs = getattr(self, 'install_requires', None) or ()
617
- inst_reqs = list(_reqs.parse(spec_inst_reqs))
618
- simple_reqs = filter(is_simple_req, inst_reqs)
619
- complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs)
620
- self.install_requires = list(map(str, simple_reqs))
621
-
622
- for r in complex_reqs:
623
- self._tmp_extras_require[':' + str(r.marker)].append(r)
624
- self.extras_require = dict(
625
- # list(dict.fromkeys(...)) ensures a list of unique strings
626
- (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v))))
627
- for k, v in self._tmp_extras_require.items()
628
- )
629
-
630
- def _clean_req(self, req):
631
- """
632
- Given a Requirement, remove environment markers and return it.
633
- """
634
- req.marker = None
635
- return req
636
-
637
- def _finalize_license_files(self):
638
- """Compute names of all license files which should be included."""
639
- license_files: Optional[List[str]] = self.metadata.license_files
640
- patterns: List[str] = license_files if license_files else []
641
-
642
- license_file: Optional[str] = self.metadata.license_file
643
- if license_file and license_file not in patterns:
644
- patterns.append(license_file)
645
-
646
- if license_files is None and license_file is None:
647
- # Default patterns match the ones wheel uses
648
- # See https://wheel.readthedocs.io/en/stable/user_guide.html
649
- # -> 'Including license files in the generated wheel file'
650
- patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*')
651
-
652
- self.metadata.license_files = list(
653
- unique_everseen(self._expand_patterns(patterns))
654
- )
655
-
656
- @staticmethod
657
- def _expand_patterns(patterns):
658
- """
659
- >>> list(Distribution._expand_patterns(['LICENSE']))
660
- ['LICENSE']
661
- >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*']))
662
- ['setup.cfg', 'LICENSE']
663
- """
664
- return (
665
- path
666
- for pattern in patterns
667
- for path in sorted(iglob(pattern))
668
- if not path.endswith('~') and os.path.isfile(path)
669
- )
670
-
671
- # FIXME: 'Distribution._parse_config_files' is too complex (14)
672
- def _parse_config_files(self, filenames=None): # noqa: C901
673
- """
674
- Adapted from distutils.dist.Distribution.parse_config_files,
675
- this method provides the same functionality in subtly-improved
676
- ways.
677
- """
678
- from configparser import ConfigParser
679
-
680
- # Ignore install directory options if we have a venv
681
- ignore_options = (
682
- []
683
- if sys.prefix == sys.base_prefix
684
- else [
685
- 'install-base',
686
- 'install-platbase',
687
- 'install-lib',
688
- 'install-platlib',
689
- 'install-purelib',
690
- 'install-headers',
691
- 'install-scripts',
692
- 'install-data',
693
- 'prefix',
694
- 'exec-prefix',
695
- 'home',
696
- 'user',
697
- 'root',
698
- ]
699
- )
700
-
701
- ignore_options = frozenset(ignore_options)
702
-
703
- if filenames is None:
704
- filenames = self.find_config_files()
705
-
706
- if DEBUG:
707
- self.announce("Distribution.parse_config_files():")
708
-
709
- parser = ConfigParser()
710
- parser.optionxform = str
711
- for filename in filenames:
712
- with io.open(filename, encoding='utf-8') as reader:
713
- if DEBUG:
714
- self.announce(" reading {filename}".format(**locals()))
715
- parser.read_file(reader)
716
- for section in parser.sections():
717
- options = parser.options(section)
718
- opt_dict = self.get_option_dict(section)
719
-
720
- for opt in options:
721
- if opt == '__name__' or opt in ignore_options:
722
- continue
723
-
724
- val = parser.get(section, opt)
725
- opt = self.warn_dash_deprecation(opt, section)
726
- opt = self.make_option_lowercase(opt, section)
727
- opt_dict[opt] = (filename, val)
728
-
729
- # Make the ConfigParser forget everything (so we retain
730
- # the original filenames that options come from)
731
- parser.__init__()
732
-
733
- if 'global' not in self.command_options:
734
- return
735
-
736
- # If there was a "global" section in the config file, use it
737
- # to set Distribution options.
738
-
739
- for (opt, (src, val)) in self.command_options['global'].items():
740
- alias = self.negative_opt.get(opt)
741
- if alias:
742
- val = not strtobool(val)
743
- elif opt in ('verbose', 'dry_run'): # ugh!
744
- val = strtobool(val)
745
-
746
- try:
747
- setattr(self, alias or opt, val)
748
- except ValueError as e:
749
- raise DistutilsOptionError(e) from e
750
-
751
- def warn_dash_deprecation(self, opt, section):
752
- if section in (
753
- 'options.extras_require',
754
- 'options.data_files',
755
- ):
756
- return opt
757
-
758
- underscore_opt = opt.replace('-', '_')
759
- commands = list(itertools.chain(
760
- distutils.command.__all__,
761
- self._setuptools_commands(),
762
- ))
763
- if (
764
- not section.startswith('options')
765
- and section != 'metadata'
766
- and section not in commands
767
- ):
768
- return underscore_opt
769
-
770
- if '-' in opt:
771
- warnings.warn(
772
- "Usage of dash-separated '%s' will not be supported in future "
773
- "versions. Please use the underscore name '%s' instead"
774
- % (opt, underscore_opt)
775
- )
776
- return underscore_opt
777
-
778
- def _setuptools_commands(self):
779
- try:
780
- return metadata.distribution('setuptools').entry_points.names
781
- except metadata.PackageNotFoundError:
782
- # during bootstrapping, distribution doesn't exist
783
- return []
784
-
785
- def make_option_lowercase(self, opt, section):
786
- if section != 'metadata' or opt.islower():
787
- return opt
788
-
789
- lowercase_opt = opt.lower()
790
- warnings.warn(
791
- "Usage of uppercase key '%s' in '%s' will be deprecated in future "
792
- "versions. Please use lowercase '%s' instead"
793
- % (opt, section, lowercase_opt)
794
- )
795
- return lowercase_opt
796
-
797
- # FIXME: 'Distribution._set_command_options' is too complex (14)
798
- def _set_command_options(self, command_obj, option_dict=None): # noqa: C901
799
- """
800
- Set the options for 'command_obj' from 'option_dict'. Basically
801
- this means copying elements of a dictionary ('option_dict') to
802
- attributes of an instance ('command').
803
-
804
- 'command_obj' must be a Command instance. If 'option_dict' is not
805
- supplied, uses the standard option dictionary for this command
806
- (from 'self.command_options').
807
-
808
- (Adopted from distutils.dist.Distribution._set_command_options)
809
- """
810
- command_name = command_obj.get_command_name()
811
- if option_dict is None:
812
- option_dict = self.get_option_dict(command_name)
813
-
814
- if DEBUG:
815
- self.announce(" setting options for '%s' command:" % command_name)
816
- for (option, (source, value)) in option_dict.items():
817
- if DEBUG:
818
- self.announce(" %s = %s (from %s)" % (option, value, source))
819
- try:
820
- bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
821
- except AttributeError:
822
- bool_opts = []
823
- try:
824
- neg_opt = command_obj.negative_opt
825
- except AttributeError:
826
- neg_opt = {}
827
-
828
- try:
829
- is_string = isinstance(value, str)
830
- if option in neg_opt and is_string:
831
- setattr(command_obj, neg_opt[option], not strtobool(value))
832
- elif option in bool_opts and is_string:
833
- setattr(command_obj, option, strtobool(value))
834
- elif hasattr(command_obj, option):
835
- setattr(command_obj, option, value)
836
- else:
837
- raise DistutilsOptionError(
838
- "error in %s: command '%s' has no such option '%s'"
839
- % (source, command_name, option)
840
- )
841
- except ValueError as e:
842
- raise DistutilsOptionError(e) from e
843
-
844
- def _get_project_config_files(self, filenames):
845
- """Add default file and split between INI and TOML"""
846
- tomlfiles = []
847
- standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml")
848
- if filenames is not None:
849
- parts = partition(lambda f: Path(f).suffix == ".toml", filenames)
850
- filenames = list(parts[0]) # 1st element => predicate is False
851
- tomlfiles = list(parts[1]) # 2nd element => predicate is True
852
- elif standard_project_metadata.exists():
853
- tomlfiles = [standard_project_metadata]
854
- return filenames, tomlfiles
855
-
856
- def parse_config_files(self, filenames=None, ignore_option_errors=False):
857
- """Parses configuration files from various levels
858
- and loads configuration.
859
- """
860
- inifiles, tomlfiles = self._get_project_config_files(filenames)
861
-
862
- self._parse_config_files(filenames=inifiles)
863
-
864
- setupcfg.parse_configuration(
865
- self, self.command_options, ignore_option_errors=ignore_option_errors
866
- )
867
- for filename in tomlfiles:
868
- pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
869
-
870
- self._finalize_requires()
871
- self._finalize_license_files()
872
-
873
- def fetch_build_eggs(self, requires):
874
- """Resolve pre-setup requirements"""
875
- resolved_dists = pkg_resources.working_set.resolve(
876
- _reqs.parse(requires),
877
- installer=self.fetch_build_egg,
878
- replace_conflicting=True,
879
- )
880
- for dist in resolved_dists:
881
- pkg_resources.working_set.add(dist, replace=True)
882
- return resolved_dists
883
-
884
- def finalize_options(self):
885
- """
886
- Allow plugins to apply arbitrary operations to the
887
- distribution. Each hook may optionally define an 'order'
888
- to influence the order of execution. Smaller numbers
889
- go first and the default is 0.
890
- """
891
- group = 'setuptools.finalize_distribution_options'
892
-
893
- def by_order(hook):
894
- return getattr(hook, 'order', 0)
895
-
896
- defined = metadata.entry_points(group=group)
897
- filtered = itertools.filterfalse(self._removed, defined)
898
- loaded = map(lambda e: e.load(), filtered)
899
- for ep in sorted(loaded, key=by_order):
900
- ep(self)
901
-
902
- @staticmethod
903
- def _removed(ep):
904
- """
905
- When removing an entry point, if metadata is loaded
906
- from an older version of Setuptools, that removed
907
- entry point will attempt to be loaded and will fail.
908
- See #2765 for more details.
909
- """
910
- removed = {
911
- # removed 2021-09-05
912
- '2to3_doctests',
913
- }
914
- return ep.name in removed
915
-
916
- def _finalize_setup_keywords(self):
917
- for ep in metadata.entry_points(group='distutils.setup_keywords'):
918
- value = getattr(self, ep.name, None)
919
- if value is not None:
920
- ep.load()(self, ep.name, value)
921
-
922
- def get_egg_cache_dir(self):
923
- egg_cache_dir = os.path.join(os.curdir, '.eggs')
924
- if not os.path.exists(egg_cache_dir):
925
- os.mkdir(egg_cache_dir)
926
- windows_support.hide_file(egg_cache_dir)
927
- readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt')
928
- with open(readme_txt_filename, 'w') as f:
929
- f.write(
930
- 'This directory contains eggs that were downloaded '
931
- 'by setuptools to build, test, and run plug-ins.\n\n'
932
- )
933
- f.write(
934
- 'This directory caches those eggs to prevent '
935
- 'repeated downloads.\n\n'
936
- )
937
- f.write('However, it is safe to delete this directory.\n\n')
938
-
939
- return egg_cache_dir
940
-
941
- def fetch_build_egg(self, req):
942
- """Fetch an egg needed for building"""
943
- from setuptools.installer import fetch_build_egg
944
-
945
- return fetch_build_egg(self, req)
946
-
947
- def get_command_class(self, command):
948
- """Pluggable version of get_command_class()"""
949
- if command in self.cmdclass:
950
- return self.cmdclass[command]
951
-
952
- eps = metadata.entry_points(group='distutils.commands', name=command)
953
- for ep in eps:
954
- self.cmdclass[command] = cmdclass = ep.load()
955
- return cmdclass
956
- else:
957
- return _Distribution.get_command_class(self, command)
958
-
959
- def print_commands(self):
960
- for ep in metadata.entry_points(group='distutils.commands'):
961
- if ep.name not in self.cmdclass:
962
- cmdclass = ep.load()
963
- self.cmdclass[ep.name] = cmdclass
964
- return _Distribution.print_commands(self)
965
-
966
- def get_command_list(self):
967
- for ep in metadata.entry_points(group='distutils.commands'):
968
- if ep.name not in self.cmdclass:
969
- cmdclass = ep.load()
970
- self.cmdclass[ep.name] = cmdclass
971
- return _Distribution.get_command_list(self)
972
-
973
- def include(self, **attrs):
974
- """Add items to distribution that are named in keyword arguments
975
-
976
- For example, 'dist.include(py_modules=["x"])' would add 'x' to
977
- the distribution's 'py_modules' attribute, if it was not already
978
- there.
979
-
980
- Currently, this method only supports inclusion for attributes that are
981
- lists or tuples. If you need to add support for adding to other
982
- attributes in this or a subclass, you can add an '_include_X' method,
983
- where 'X' is the name of the attribute. The method will be called with
984
- the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})'
985
- will try to call 'dist._include_foo({"bar":"baz"})', which can then
986
- handle whatever special inclusion logic is needed.
987
- """
988
- for k, v in attrs.items():
989
- include = getattr(self, '_include_' + k, None)
990
- if include:
991
- include(v)
992
- else:
993
- self._include_misc(k, v)
994
-
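The `_include_X` dispatch convention described in the docstring can be shown with a standalone toy class; everything below is a hypothetical illustration, not setuptools code.

# Toy illustration of the '_include_<name>' hook lookup used by include();
# the Bag class and its attributes are made up for this sketch.
class Bag:
    def __init__(self):
        self.py_modules = []
        self.foo = {}

    def include(self, **attrs):
        for k, v in attrs.items():
            handler = getattr(self, '_include_' + k, None)
            if handler:
                handler(v)                    # special-case hook, e.g. _include_foo
            else:
                old = getattr(self, k)        # generic list merge, like _include_misc
                setattr(self, k, old + [item for item in v if item not in old])

    def _include_foo(self, value):
        self.foo.update(value)


bag = Bag()
bag.include(py_modules=["x"], foo={"bar": "baz"})
print(bag.py_modules, bag.foo)   # ['x'] {'bar': 'baz'}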
995
- def exclude_package(self, package):
996
- """Remove packages, modules, and extensions in named package"""
997
-
998
- pfx = package + '.'
999
- if self.packages:
1000
- self.packages = [
1001
- p for p in self.packages if p != package and not p.startswith(pfx)
1002
- ]
1003
-
1004
- if self.py_modules:
1005
- self.py_modules = [
1006
- p for p in self.py_modules if p != package and not p.startswith(pfx)
1007
- ]
1008
-
1009
- if self.ext_modules:
1010
- self.ext_modules = [
1011
- p
1012
- for p in self.ext_modules
1013
- if p.name != package and not p.name.startswith(pfx)
1014
- ]
1015
-
1016
- def has_contents_for(self, package):
1017
- """Return true if 'exclude_package(package)' would do something"""
1018
-
1019
- pfx = package + '.'
1020
-
1021
- for p in self.iter_distribution_names():
1022
- if p == package or p.startswith(pfx):
1023
- return True
1024
-
1025
- def _exclude_misc(self, name, value):
1026
- """Handle 'exclude()' for list/tuple attrs without a special handler"""
1027
- if not isinstance(value, sequence):
1028
- raise DistutilsSetupError(
1029
- "%s: setting must be a list or tuple (%r)" % (name, value)
1030
- )
1031
- try:
1032
- old = getattr(self, name)
1033
- except AttributeError as e:
1034
- raise DistutilsSetupError("%s: No such distribution setting" % name) from e
1035
- if old is not None and not isinstance(old, sequence):
1036
- raise DistutilsSetupError(
1037
- name + ": this setting cannot be changed via include/exclude"
1038
- )
1039
- elif old:
1040
- setattr(self, name, [item for item in old if item not in value])
1041
-
1042
- def _include_misc(self, name, value):
1043
- """Handle 'include()' for list/tuple attrs without a special handler"""
1044
-
1045
- if not isinstance(value, sequence):
1046
- raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value))
1047
- try:
1048
- old = getattr(self, name)
1049
- except AttributeError as e:
1050
- raise DistutilsSetupError("%s: No such distribution setting" % name) from e
1051
- if old is None:
1052
- setattr(self, name, value)
1053
- elif not isinstance(old, sequence):
1054
- raise DistutilsSetupError(
1055
- name + ": this setting cannot be changed via include/exclude"
1056
- )
1057
- else:
1058
- new = [item for item in value if item not in old]
1059
- setattr(self, name, old + new)
1060
-
1061
- def exclude(self, **attrs):
1062
- """Remove items from distribution that are named in keyword arguments
1063
-
1064
- For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from
1065
- the distribution's 'py_modules' attribute. Excluding packages uses
1066
- the 'exclude_package()' method, so all of the package's contained
1067
- packages, modules, and extensions are also excluded.
1068
-
1069
- Currently, this method only supports exclusion from attributes that are
1070
- lists or tuples. If you need to add support for excluding from other
1071
- attributes in this or a subclass, you can add an '_exclude_X' method,
1072
- where 'X' is the name of the attribute. The method will be called with
1073
- the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})'
1074
- will try to call 'dist._exclude_foo({"bar":"baz"})', which can then
1075
- handle whatever special exclusion logic is needed.
1076
- """
1077
- for k, v in attrs.items():
1078
- exclude = getattr(self, '_exclude_' + k, None)
1079
- if exclude:
1080
- exclude(v)
1081
- else:
1082
- self._exclude_misc(k, v)
1083
-
1084
- def _exclude_packages(self, packages):
1085
- if not isinstance(packages, sequence):
1086
- raise DistutilsSetupError(
1087
- "packages: setting must be a list or tuple (%r)" % (packages,)
1088
- )
1089
- list(map(self.exclude_package, packages))
1090
-
1091
- def _parse_command_opts(self, parser, args):
1092
- # Remove --with-X/--without-X options when processing command args
1093
- self.global_options = self.__class__.global_options
1094
- self.negative_opt = self.__class__.negative_opt
1095
-
1096
- # First, expand any aliases
1097
- command = args[0]
1098
- aliases = self.get_option_dict('aliases')
1099
- while command in aliases:
1100
- src, alias = aliases[command]
1101
- del aliases[command] # ensure each alias can expand only once!
1102
- import shlex
1103
-
1104
- args[:1] = shlex.split(alias, True)
1105
- command = args[0]
1106
-
1107
- nargs = _Distribution._parse_command_opts(self, parser, args)
1108
-
1109
- # Handle commands that want to consume all remaining arguments
1110
- cmd_class = self.get_command_class(command)
1111
- if getattr(cmd_class, 'command_consumes_arguments', None):
1112
- self.get_option_dict(command)['args'] = ("command line", nargs)
1113
- if nargs is not None:
1114
- return []
1115
-
1116
- return nargs
1117
-
1118
- def get_cmdline_options(self):
1119
- """Return a '{cmd: {opt:val}}' map of all command-line options
1120
-
1121
- Option names are all long, but do not include the leading '--', and
1122
- contain dashes rather than underscores. If the option doesn't take
1123
- an argument (e.g. '--quiet'), the 'val' is 'None'.
1124
-
1125
- Note that options provided by config files are intentionally excluded.
1126
- """
1127
-
1128
- d = {}
1129
-
1130
- for cmd, opts in self.command_options.items():
1131
-
1132
- for opt, (src, val) in opts.items():
1133
-
1134
- if src != "command line":
1135
- continue
1136
-
1137
- opt = opt.replace('_', '-')
1138
-
1139
- if val == 0:
1140
- cmdobj = self.get_command_obj(cmd)
1141
- neg_opt = self.negative_opt.copy()
1142
- neg_opt.update(getattr(cmdobj, 'negative_opt', {}))
1143
- for neg, pos in neg_opt.items():
1144
- if pos == opt:
1145
- opt = neg
1146
- val = None
1147
- break
1148
- else:
1149
- raise AssertionError("Shouldn't be able to get here")
1150
-
1151
- elif val == 1:
1152
- val = None
1153
-
1154
- d.setdefault(cmd, {})[opt] = val
1155
-
1156
- return d
1157
-
1158
- def iter_distribution_names(self):
1159
- """Yield all packages, modules, and extension names in distribution"""
1160
-
1161
- for pkg in self.packages or ():
1162
- yield pkg
1163
-
1164
- for module in self.py_modules or ():
1165
- yield module
1166
-
1167
- for ext in self.ext_modules or ():
1168
- if isinstance(ext, tuple):
1169
- name, buildinfo = ext
1170
- else:
1171
- name = ext.name
1172
- if name.endswith('module'):
1173
- name = name[:-6]
1174
- yield name
1175
-
1176
- def handle_display_options(self, option_order):
1177
- """If there were any non-global "display-only" options
1178
- (--help-commands or the metadata display options) on the command
1179
- line, display the requested info and return true; else return
1180
- false.
1181
- """
1182
- import sys
1183
-
1184
- if self.help_commands:
1185
- return _Distribution.handle_display_options(self, option_order)
1186
-
1187
- # Stdout may be StringIO (e.g. in tests)
1188
- if not isinstance(sys.stdout, io.TextIOWrapper):
1189
- return _Distribution.handle_display_options(self, option_order)
1190
-
1191
- # Don't wrap stdout if utf-8 is already the encoding. Provides
1192
- # workaround for #334.
1193
- if sys.stdout.encoding.lower() in ('utf-8', 'utf8'):
1194
- return _Distribution.handle_display_options(self, option_order)
1195
-
1196
- # Print metadata in UTF-8 no matter the platform
1197
- encoding = sys.stdout.encoding
1198
- errors = sys.stdout.errors
1199
- newline = sys.platform != 'win32' and '\n' or None
1200
- line_buffering = sys.stdout.line_buffering
1201
-
1202
- sys.stdout = io.TextIOWrapper(
1203
- sys.stdout.detach(), 'utf-8', errors, newline, line_buffering
1204
- )
1205
- try:
1206
- return _Distribution.handle_display_options(self, option_order)
1207
- finally:
1208
- sys.stdout = io.TextIOWrapper(
1209
- sys.stdout.detach(), encoding, errors, newline, line_buffering
1210
- )
1211
-
1212
- def run_command(self, command):
1213
- self.set_defaults()
1214
- # Postpone defaults until all explicit configuration is considered
1215
- # (setup() args, config files, command line and plugins)
1216
-
1217
- super().run_command(command)
1218
-
1219
-
1220
- class DistDeprecationWarning(SetuptoolsDeprecationWarning):
1221
- """Class for warning about deprecations in dist in
1222
- setuptools. Not ignored by default, unlike DeprecationWarning."""
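The marker handling in `_convert_extras_requirements` and `_move_install_requirements_markers` boils down to splitting requirements on their environment markers. A minimal standalone sketch, assuming only the third-party `packaging` library; the requirement strings are made-up examples, not taken from this module.

# Minimal sketch of moving environment-markered requirements out of
# install_requires and into marker-keyed extras, in the spirit of
# Distribution._move_install_requirements_markers.
from collections import defaultdict

from packaging.requirements import Requirement

# Hypothetical input; the second entry carries an environment marker.
install_requires = [
    "requests>=2.0",
    "importlib-metadata; python_version < '3.8'",
]

simple = []                     # stays in install_requires
extras = defaultdict(list)      # keyed by ":<marker>", mirroring extras_require

for spec in install_requires:
    req = Requirement(spec)
    if req.marker is None:
        simple.append(str(req))
    else:
        key = ":" + str(req.marker)
        req.marker = None       # strip the marker; it lives in the key instead
        extras[key].append(str(req))

print(simple)        # e.g. ['requests>=2.0']
print(dict(extras))  # e.g. {': python_version < "3.8"': ['importlib-metadata']}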
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/feature-request.md DELETED
@@ -1,31 +0,0 @@
1
- ---
2
- name: "\U0001F680Feature Request"
3
- about: Suggest an improvement or new feature
4
- labels: enhancement
5
-
6
- ---
7
-
8
- ## 🚀 Feature
9
- A clear and concise description of the feature proposal.
10
-
11
- ## Motivation & Examples
12
-
13
- Tell us why the feature is useful.
14
-
15
- Describe what the feature would look like, if it is implemented.
16
- Best demonstrated using **code examples** in addition to words.
17
-
18
- ## Note
19
-
20
- We only consider adding new features if they are relevant to many users.
21
-
22
- If you request implementation of research papers -- we only consider papers that have enough significance and prevalence in the object detection field.
23
-
24
- We do not take requests for most projects in the `projects/` directory, because they are research code releases that are mainly for other researchers to reproduce results.
25
-
26
- "Make X faster/accurate" is not a valid feature request. "Implement a concrete feature that can make X faster/accurate" can be a valid feature request.
27
-
28
- Instead of adding features inside detectron2,
29
- you can implement many features by [extending detectron2](https://detectron2.readthedocs.io/tutorials/extend.html).
30
- The [projects/](https://github.com/facebookresearch/detectron2/tree/main/projects/) directory contains many of such examples.
31
-
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/README.md DELETED
@@ -1,15 +0,0 @@
1
- # Read the docs:
2
-
3
- The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/).
4
- Documents in this directory are not meant to be read on github.
5
-
6
- # Build the docs:
7
-
8
- 1. Install detectron2 according to [INSTALL.md](../INSTALL.md).
9
- 2. Install additional libraries required to build docs:
10
- - docutils==0.16
11
- - Sphinx==3.2.0
12
- - recommonmark==0.6.0
13
- - sphinx_rtd_theme
14
-
15
- 3. Run `make html` from this directory.
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py DELETED
@@ -1,92 +0,0 @@
1
- import torch
2
- from torch.nn import functional as F
3
-
4
- # TODO: merge these two function
5
- def heatmap_focal_loss(
6
- inputs,
7
- targets,
8
- pos_inds,
9
- labels,
10
- alpha: float = -1,
11
- beta: float = 4,
12
- gamma: float = 2,
13
- reduction: str = 'sum',
14
- sigmoid_clamp: float = 1e-4,
15
- ignore_high_fp: float = -1.,
16
- ):
17
- """
18
- Penalty-reduced pixel-wise focal loss on class heatmaps, in the style of CenterNet; adapted from the RetinaNet focal loss: https://arxiv.org/abs/1708.02002.
19
- Args:
20
- inputs: (sum_l N*Hl*Wl, C)
21
- targets: (sum_l N*Hl*Wl, C)
22
- pos_inds: N
23
- labels: N
24
- Returns:
25
- Loss tensor with the reduction option applied.
26
- """
27
- pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp)
28
- neg_weights = torch.pow(1 - targets, beta)
29
- pos_pred_pix = pred[pos_inds] # N x C
30
- pos_pred = pos_pred_pix.gather(1, labels.unsqueeze(1))
31
- pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma)
32
- neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights
33
-
34
- if ignore_high_fp > 0:
35
- not_high_fp = (pred < ignore_high_fp).float()
36
- neg_loss = not_high_fp * neg_loss
37
-
38
- if reduction == "sum":
39
- pos_loss = pos_loss.sum()
40
- neg_loss = neg_loss.sum()
41
-
42
- if alpha >= 0:
43
- pos_loss = alpha * pos_loss
44
- neg_loss = (1 - alpha) * neg_loss
45
-
46
- return - pos_loss, - neg_loss
47
-
48
- heatmap_focal_loss_jit = torch.jit.script(heatmap_focal_loss)
49
- # heatmap_focal_loss_jit = heatmap_focal_loss
50
-
51
- def binary_heatmap_focal_loss(
52
- inputs,
53
- targets,
54
- pos_inds,
55
- alpha: float = -1,
56
- beta: float = 4,
57
- gamma: float = 2,
58
- sigmoid_clamp: float = 1e-4,
59
- ignore_high_fp: float = -1.,
60
- ):
61
- """
62
- Args:
63
- inputs: (sum_l N*Hl*Wl,)
64
- targets: (sum_l N*Hl*Wl,)
65
- pos_inds: N
66
- Returns:
67
- Loss tensor with the reduction option applied.
68
- """
69
- pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp)
70
- neg_weights = torch.pow(1 - targets, beta)
71
- for i, ind in enumerate(pos_inds):
72
- if ind >= pred.shape[0]:
73
- print('%'*100)
74
- print(pred.shape, ind, pos_inds)
75
- pos_inds[i] = pred.shape[0] - 1
76
- pos_pred = pred[pos_inds] # N
77
- pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma)
78
- neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights
79
- if ignore_high_fp > 0:
80
- not_high_fp = (pred < ignore_high_fp).float()
81
- neg_loss = not_high_fp * neg_loss
82
-
83
- pos_loss = - pos_loss.sum()
84
- neg_loss = - neg_loss.sum()
85
-
86
- if alpha >= 0:
87
- pos_loss = alpha * pos_loss
88
- neg_loss = (1 - alpha) * neg_loss
89
-
90
- return pos_loss, neg_loss
91
-
92
- # binary_heatmap_focal_loss_jit = torch.jit.script(binary_heatmap_focal_loss)
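The loss above can be exercised on toy tensors. A minimal, self-contained sketch assuming only PyTorch; the helper, shapes, and values below restate the formula for illustration and are not the module's API.

# Toy, self-contained run of the penalty-reduced heatmap focal loss.
import torch


def toy_heatmap_focal_loss(logits, targets, pos_inds, labels, beta=4.0, gamma=2.0):
    pred = torch.clamp(logits.sigmoid(), 1e-4, 1 - 1e-4)
    neg_weights = (1 - targets).pow(beta)                          # down-weight near-positives
    pos_pred = pred[pos_inds].gather(1, labels.unsqueeze(1))       # (N, 1) scores at positives
    pos_loss = (torch.log(pos_pred) * (1 - pos_pred).pow(gamma)).sum()
    neg_loss = (torch.log(1 - pred) * pred.pow(gamma) * neg_weights).sum()
    return -pos_loss, -neg_loss


logits = torch.randn(6, 3)        # 6 heatmap locations (sum_l N*Hl*Wl), 3 classes
targets = torch.zeros(6, 3)
targets[1, 2] = 1.0               # one positive location, class 2
pos_inds = torch.tensor([1])
labels = torch.tensor([2])
print(toy_heatmap_focal_loss(logits, targets, pos_inds, labels))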
spaces/BAAI/vid2vid-zero/style.css DELETED
@@ -1,3 +0,0 @@
1
- h1 {
2
- text-align: center;
3
- }
spaces/BertChristiaens/youtube-dl/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Youtube Dl
3
- emoji: 🐠
4
- colorFrom: green
5
- colorTo: blue
6
- sdk: streamlit
7
- sdk_version: 1.19.0
8
- app_file: app.py
9
- pinned: false
10
- license: openrail
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/progress.py DELETED
@@ -1,1702 +0,0 @@
1
- import io
2
- import sys
3
- import typing
4
- import warnings
5
- from abc import ABC, abstractmethod
6
- from collections import deque
7
- from dataclasses import dataclass, field
8
- from datetime import timedelta
9
- from io import RawIOBase, UnsupportedOperation
10
- from math import ceil
11
- from mmap import mmap
12
- from operator import length_hint
13
- from os import PathLike, stat
14
- from threading import Event, RLock, Thread
15
- from types import TracebackType
16
- from typing import (
17
- Any,
18
- BinaryIO,
19
- Callable,
20
- ContextManager,
21
- Deque,
22
- Dict,
23
- Generic,
24
- Iterable,
25
- List,
26
- NamedTuple,
27
- NewType,
28
- Optional,
29
- Sequence,
30
- TextIO,
31
- Tuple,
32
- Type,
33
- TypeVar,
34
- Union,
35
- )
36
-
37
- if sys.version_info >= (3, 8):
38
- from typing import Literal
39
- else:
40
- from pip._vendor.typing_extensions import Literal # pragma: no cover
41
-
42
- from . import filesize, get_console
43
- from .console import Console, Group, JustifyMethod, RenderableType
44
- from .highlighter import Highlighter
45
- from .jupyter import JupyterMixin
46
- from .live import Live
47
- from .progress_bar import ProgressBar
48
- from .spinner import Spinner
49
- from .style import StyleType
50
- from .table import Column, Table
51
- from .text import Text, TextType
52
-
53
- TaskID = NewType("TaskID", int)
54
-
55
- ProgressType = TypeVar("ProgressType")
56
-
57
- GetTimeCallable = Callable[[], float]
58
-
59
-
60
- _I = typing.TypeVar("_I", TextIO, BinaryIO)
61
-
62
-
63
- class _TrackThread(Thread):
64
- """A thread to periodically update progress."""
65
-
66
- def __init__(self, progress: "Progress", task_id: "TaskID", update_period: float):
67
- self.progress = progress
68
- self.task_id = task_id
69
- self.update_period = update_period
70
- self.done = Event()
71
-
72
- self.completed = 0
73
- super().__init__()
74
-
75
- def run(self) -> None:
76
- task_id = self.task_id
77
- advance = self.progress.advance
78
- update_period = self.update_period
79
- last_completed = 0
80
- wait = self.done.wait
81
- while not wait(update_period):
82
- completed = self.completed
83
- if last_completed != completed:
84
- advance(task_id, completed - last_completed)
85
- last_completed = completed
86
-
87
- self.progress.update(self.task_id, completed=self.completed, refresh=True)
88
-
89
- def __enter__(self) -> "_TrackThread":
90
- self.start()
91
- return self
92
-
93
- def __exit__(
94
- self,
95
- exc_type: Optional[Type[BaseException]],
96
- exc_val: Optional[BaseException],
97
- exc_tb: Optional[TracebackType],
98
- ) -> None:
99
- self.done.set()
100
- self.join()
101
-
102
-
103
- def track(
104
- sequence: Union[Sequence[ProgressType], Iterable[ProgressType]],
105
- description: str = "Working...",
106
- total: Optional[float] = None,
107
- auto_refresh: bool = True,
108
- console: Optional[Console] = None,
109
- transient: bool = False,
110
- get_time: Optional[Callable[[], float]] = None,
111
- refresh_per_second: float = 10,
112
- style: StyleType = "bar.back",
113
- complete_style: StyleType = "bar.complete",
114
- finished_style: StyleType = "bar.finished",
115
- pulse_style: StyleType = "bar.pulse",
116
- update_period: float = 0.1,
117
- disable: bool = False,
118
- show_speed: bool = True,
119
- ) -> Iterable[ProgressType]:
120
- """Track progress by iterating over a sequence.
121
-
122
- Args:
123
- sequence (Iterable[ProgressType]): A sequence (must support "len") you wish to iterate over.
124
- description (str, optional): Description of task shown next to progress bar. Defaults to "Working".
125
- total: (float, optional): Total number of steps. Default is len(sequence).
126
- auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True.
127
- transient: (bool, optional): Clear the progress on exit. Defaults to False.
128
- console (Console, optional): Console to write to. Default creates internal Console instance.
129
- refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10.
130
- style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
131
- complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
132
- finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished".
133
- pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
134
- update_period (float, optional): Minimum time (in seconds) between calls to update(). Defaults to 0.1.
135
- disable (bool, optional): Disable display of progress.
136
- show_speed (bool, optional): Show speed if total isn't known. Defaults to True.
137
- Returns:
138
- Iterable[ProgressType]: An iterable of the values in the sequence.
139
-
140
- """
141
-
142
- columns: List["ProgressColumn"] = (
143
- [TextColumn("[progress.description]{task.description}")] if description else []
144
- )
145
- columns.extend(
146
- (
147
- BarColumn(
148
- style=style,
149
- complete_style=complete_style,
150
- finished_style=finished_style,
151
- pulse_style=pulse_style,
152
- ),
153
- TaskProgressColumn(show_speed=show_speed),
154
- TimeRemainingColumn(elapsed_when_finished=True),
155
- )
156
- )
157
- progress = Progress(
158
- *columns,
159
- auto_refresh=auto_refresh,
160
- console=console,
161
- transient=transient,
162
- get_time=get_time,
163
- refresh_per_second=refresh_per_second or 10,
164
- disable=disable,
165
- )
166
-
167
- with progress:
168
- yield from progress.track(
169
- sequence, total=total, description=description, update_period=update_period
170
- )
171
-
172
-
173
- class _Reader(RawIOBase, BinaryIO):
174
- """A reader that tracks progress while it's being read from."""
175
-
176
- def __init__(
177
- self,
178
- handle: BinaryIO,
179
- progress: "Progress",
180
- task: TaskID,
181
- close_handle: bool = True,
182
- ) -> None:
183
- self.handle = handle
184
- self.progress = progress
185
- self.task = task
186
- self.close_handle = close_handle
187
- self._closed = False
188
-
189
- def __enter__(self) -> "_Reader":
190
- self.handle.__enter__()
191
- return self
192
-
193
- def __exit__(
194
- self,
195
- exc_type: Optional[Type[BaseException]],
196
- exc_val: Optional[BaseException],
197
- exc_tb: Optional[TracebackType],
198
- ) -> None:
199
- self.close()
200
-
201
- def __iter__(self) -> BinaryIO:
202
- return self
203
-
204
- def __next__(self) -> bytes:
205
- line = next(self.handle)
206
- self.progress.advance(self.task, advance=len(line))
207
- return line
208
-
209
- @property
210
- def closed(self) -> bool:
211
- return self._closed
212
-
213
- def fileno(self) -> int:
214
- return self.handle.fileno()
215
-
216
- def isatty(self) -> bool:
217
- return self.handle.isatty()
218
-
219
- @property
220
- def mode(self) -> str:
221
- return self.handle.mode
222
-
223
- @property
224
- def name(self) -> str:
225
- return self.handle.name
226
-
227
- def readable(self) -> bool:
228
- return self.handle.readable()
229
-
230
- def seekable(self) -> bool:
231
- return self.handle.seekable()
232
-
233
- def writable(self) -> bool:
234
- return False
235
-
236
- def read(self, size: int = -1) -> bytes:
237
- block = self.handle.read(size)
238
- self.progress.advance(self.task, advance=len(block))
239
- return block
240
-
241
- def readinto(self, b: Union[bytearray, memoryview, mmap]): # type: ignore[no-untyped-def, override]
242
- n = self.handle.readinto(b) # type: ignore[attr-defined]
243
- self.progress.advance(self.task, advance=n)
244
- return n
245
-
246
- def readline(self, size: int = -1) -> bytes: # type: ignore[override]
247
- line = self.handle.readline(size)
248
- self.progress.advance(self.task, advance=len(line))
249
- return line
250
-
251
- def readlines(self, hint: int = -1) -> List[bytes]:
252
- lines = self.handle.readlines(hint)
253
- self.progress.advance(self.task, advance=sum(map(len, lines)))
254
- return lines
255
-
256
- def close(self) -> None:
257
- if self.close_handle:
258
- self.handle.close()
259
- self._closed = True
260
-
261
- def seek(self, offset: int, whence: int = 0) -> int:
262
- pos = self.handle.seek(offset, whence)
263
- self.progress.update(self.task, completed=pos)
264
- return pos
265
-
266
- def tell(self) -> int:
267
- return self.handle.tell()
268
-
269
- def write(self, s: Any) -> int:
270
- raise UnsupportedOperation("write")
271
-
272
-
273
- class _ReadContext(ContextManager[_I], Generic[_I]):
274
- """A utility class to handle a context for both a reader and a progress."""
275
-
276
- def __init__(self, progress: "Progress", reader: _I) -> None:
277
- self.progress = progress
278
- self.reader: _I = reader
279
-
280
- def __enter__(self) -> _I:
281
- self.progress.start()
282
- return self.reader.__enter__()
283
-
284
- def __exit__(
285
- self,
286
- exc_type: Optional[Type[BaseException]],
287
- exc_val: Optional[BaseException],
288
- exc_tb: Optional[TracebackType],
289
- ) -> None:
290
- self.progress.stop()
291
- self.reader.__exit__(exc_type, exc_val, exc_tb)
292
-
293
-
294
- def wrap_file(
295
- file: BinaryIO,
296
- total: int,
297
- *,
298
- description: str = "Reading...",
299
- auto_refresh: bool = True,
300
- console: Optional[Console] = None,
301
- transient: bool = False,
302
- get_time: Optional[Callable[[], float]] = None,
303
- refresh_per_second: float = 10,
304
- style: StyleType = "bar.back",
305
- complete_style: StyleType = "bar.complete",
306
- finished_style: StyleType = "bar.finished",
307
- pulse_style: StyleType = "bar.pulse",
308
- disable: bool = False,
309
- ) -> ContextManager[BinaryIO]:
310
- """Read bytes from a file while tracking progress.
311
-
312
- Args:
313
- file (BinaryIO): A file-like object opened in binary mode.
314
- total (int): Total number of bytes to read.
315
- description (str, optional): Description of task shown next to progress bar. Defaults to "Reading".
316
- auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True.
317
- transient: (bool, optional): Clear the progress on exit. Defaults to False.
318
- console (Console, optional): Console to write to. Default creates internal Console instance.
319
- refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10.
320
- style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
321
- complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
322
- finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished".
323
- pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
324
- disable (bool, optional): Disable display of progress.
325
- Returns:
326
- ContextManager[BinaryIO]: A context manager yielding a progress reader.
327
-
328
- """
329
-
330
- columns: List["ProgressColumn"] = (
331
- [TextColumn("[progress.description]{task.description}")] if description else []
332
- )
333
- columns.extend(
334
- (
335
- BarColumn(
336
- style=style,
337
- complete_style=complete_style,
338
- finished_style=finished_style,
339
- pulse_style=pulse_style,
340
- ),
341
- DownloadColumn(),
342
- TimeRemainingColumn(),
343
- )
344
- )
345
- progress = Progress(
346
- *columns,
347
- auto_refresh=auto_refresh,
348
- console=console,
349
- transient=transient,
350
- get_time=get_time,
351
- refresh_per_second=refresh_per_second or 10,
352
- disable=disable,
353
- )
354
-
355
- reader = progress.wrap_file(file, total=total, description=description)
356
- return _ReadContext(progress, reader)
357
-
358
-
359
- @typing.overload
360
- def open(
361
- file: Union[str, "PathLike[str]", bytes],
362
- mode: Union[Literal["rt"], Literal["r"]],
363
- buffering: int = -1,
364
- encoding: Optional[str] = None,
365
- errors: Optional[str] = None,
366
- newline: Optional[str] = None,
367
- *,
368
- total: Optional[int] = None,
369
- description: str = "Reading...",
370
- auto_refresh: bool = True,
371
- console: Optional[Console] = None,
372
- transient: bool = False,
373
- get_time: Optional[Callable[[], float]] = None,
374
- refresh_per_second: float = 10,
375
- style: StyleType = "bar.back",
376
- complete_style: StyleType = "bar.complete",
377
- finished_style: StyleType = "bar.finished",
378
- pulse_style: StyleType = "bar.pulse",
379
- disable: bool = False,
380
- ) -> ContextManager[TextIO]:
381
- pass
382
-
383
-
384
- @typing.overload
385
- def open(
386
- file: Union[str, "PathLike[str]", bytes],
387
- mode: Literal["rb"],
388
- buffering: int = -1,
389
- encoding: Optional[str] = None,
390
- errors: Optional[str] = None,
391
- newline: Optional[str] = None,
392
- *,
393
- total: Optional[int] = None,
394
- description: str = "Reading...",
395
- auto_refresh: bool = True,
396
- console: Optional[Console] = None,
397
- transient: bool = False,
398
- get_time: Optional[Callable[[], float]] = None,
399
- refresh_per_second: float = 10,
400
- style: StyleType = "bar.back",
401
- complete_style: StyleType = "bar.complete",
402
- finished_style: StyleType = "bar.finished",
403
- pulse_style: StyleType = "bar.pulse",
404
- disable: bool = False,
405
- ) -> ContextManager[BinaryIO]:
406
- pass
407
-
408
-
409
- def open(
410
- file: Union[str, "PathLike[str]", bytes],
411
- mode: Union[Literal["rb"], Literal["rt"], Literal["r"]] = "r",
412
- buffering: int = -1,
413
- encoding: Optional[str] = None,
414
- errors: Optional[str] = None,
415
- newline: Optional[str] = None,
416
- *,
417
- total: Optional[int] = None,
418
- description: str = "Reading...",
419
- auto_refresh: bool = True,
420
- console: Optional[Console] = None,
421
- transient: bool = False,
422
- get_time: Optional[Callable[[], float]] = None,
423
- refresh_per_second: float = 10,
424
- style: StyleType = "bar.back",
425
- complete_style: StyleType = "bar.complete",
426
- finished_style: StyleType = "bar.finished",
427
- pulse_style: StyleType = "bar.pulse",
428
- disable: bool = False,
429
- ) -> Union[ContextManager[BinaryIO], ContextManager[TextIO]]:
430
- """Read bytes from a file while tracking progress.
431
-
432
- Args:
433
- path (Union[str, PathLike[str], BinaryIO]): The path to the file to read, or a file-like object in binary mode.
434
- mode (str): The mode to use to open the file. Only supports "r", "rb" or "rt".
435
- buffering (int): The buffering strategy to use, see :func:`io.open`.
436
- encoding (str, optional): The encoding to use when reading in text mode, see :func:`io.open`.
437
- errors (str, optional): The error handling strategy for decoding errors, see :func:`io.open`.
438
- newline (str, optional): The strategy for handling newlines in text mode, see :func:`io.open`
439
- total: (int, optional): Total number of bytes to read. Must be provided if reading from a file handle. Default for a path is os.stat(file).st_size.
440
- description (str, optional): Description of task shown next to progress bar. Defaults to "Reading".
441
- auto_refresh (bool, optional): Automatic refresh, disable to force a refresh after each iteration. Default is True.
442
- transient: (bool, optional): Clear the progress on exit. Defaults to False.
443
- console (Console, optional): Console to write to. Default creates internal Console instance.
444
- refresh_per_second (float): Number of times per second to refresh the progress information. Defaults to 10.
445
- style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
446
- complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
447
- finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished".
448
- pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
449
- disable (bool, optional): Disable display of progress.
450
- encoding (str, optional): The encoding to use when reading in text mode.
451
-
452
- Returns:
453
- ContextManager[BinaryIO]: A context manager yielding a progress reader.
454
-
455
- """
456
-
457
- columns: List["ProgressColumn"] = (
458
- [TextColumn("[progress.description]{task.description}")] if description else []
459
- )
460
- columns.extend(
461
- (
462
- BarColumn(
463
- style=style,
464
- complete_style=complete_style,
465
- finished_style=finished_style,
466
- pulse_style=pulse_style,
467
- ),
468
- DownloadColumn(),
469
- TimeRemainingColumn(),
470
- )
471
- )
472
- progress = Progress(
473
- *columns,
474
- auto_refresh=auto_refresh,
475
- console=console,
476
- transient=transient,
477
- get_time=get_time,
478
- refresh_per_second=refresh_per_second or 10,
479
- disable=disable,
480
- )
481
-
482
- reader = progress.open(
483
- file,
484
- mode=mode,
485
- buffering=buffering,
486
- encoding=encoding,
487
- errors=errors,
488
- newline=newline,
489
- total=total,
490
- description=description,
491
- )
492
- return _ReadContext(progress, reader) # type: ignore[return-value, type-var]
493
-
494
-
495
- class ProgressColumn(ABC):
496
- """Base class for a widget to use in progress display."""
497
-
498
- max_refresh: Optional[float] = None
499
-
500
- def __init__(self, table_column: Optional[Column] = None) -> None:
501
- self._table_column = table_column
502
- self._renderable_cache: Dict[TaskID, Tuple[float, RenderableType]] = {}
503
- self._update_time: Optional[float] = None
504
-
505
- def get_table_column(self) -> Column:
506
- """Get a table column, used to build tasks table."""
507
- return self._table_column or Column()
508
-
509
- def __call__(self, task: "Task") -> RenderableType:
510
- """Called by the Progress object to return a renderable for the given task.
511
-
512
- Args:
513
- task (Task): An object containing information regarding the task.
514
-
515
- Returns:
516
- RenderableType: Anything renderable (including str).
517
- """
518
- current_time = task.get_time()
519
- if self.max_refresh is not None and not task.completed:
520
- try:
521
- timestamp, renderable = self._renderable_cache[task.id]
522
- except KeyError:
523
- pass
524
- else:
525
- if timestamp + self.max_refresh > current_time:
526
- return renderable
527
-
528
- renderable = self.render(task)
529
- self._renderable_cache[task.id] = (current_time, renderable)
530
- return renderable
531
-
532
- @abstractmethod
533
- def render(self, task: "Task") -> RenderableType:
534
- """Should return a renderable object."""
535
-
536
-
537
- class RenderableColumn(ProgressColumn):
538
- """A column to insert an arbitrary column.
539
-
540
- Args:
541
- renderable (RenderableType, optional): Any renderable. Defaults to empty string.
542
- """
543
-
544
- def __init__(
545
- self, renderable: RenderableType = "", *, table_column: Optional[Column] = None
546
- ):
547
- self.renderable = renderable
548
- super().__init__(table_column=table_column)
549
-
550
- def render(self, task: "Task") -> RenderableType:
551
- return self.renderable
552
-
553
-
554
- class SpinnerColumn(ProgressColumn):
555
- """A column with a 'spinner' animation.
556
-
557
- Args:
558
- spinner_name (str, optional): Name of spinner animation. Defaults to "dots".
559
- style (StyleType, optional): Style of spinner. Defaults to "progress.spinner".
560
- speed (float, optional): Speed factor of spinner. Defaults to 1.0.
561
- finished_text (TextType, optional): Text used when task is finished. Defaults to " ".
562
- """
563
-
564
- def __init__(
565
- self,
566
- spinner_name: str = "dots",
567
- style: Optional[StyleType] = "progress.spinner",
568
- speed: float = 1.0,
569
- finished_text: TextType = " ",
570
- table_column: Optional[Column] = None,
571
- ):
572
- self.spinner = Spinner(spinner_name, style=style, speed=speed)
573
- self.finished_text = (
574
- Text.from_markup(finished_text)
575
- if isinstance(finished_text, str)
576
- else finished_text
577
- )
578
- super().__init__(table_column=table_column)
579
-
580
- def set_spinner(
581
- self,
582
- spinner_name: str,
583
- spinner_style: Optional[StyleType] = "progress.spinner",
584
- speed: float = 1.0,
585
- ) -> None:
586
- """Set a new spinner.
587
-
588
- Args:
589
- spinner_name (str): Spinner name, see python -m rich.spinner.
590
- spinner_style (Optional[StyleType], optional): Spinner style. Defaults to "progress.spinner".
591
- speed (float, optional): Speed factor of spinner. Defaults to 1.0.
592
- """
593
- self.spinner = Spinner(spinner_name, style=spinner_style, speed=speed)
594
-
595
- def render(self, task: "Task") -> RenderableType:
596
- text = (
597
- self.finished_text
598
- if task.finished
599
- else self.spinner.render(task.get_time())
600
- )
601
- return text
602
-
603
-
604
- class TextColumn(ProgressColumn):
605
- """A column containing text."""
606
-
607
- def __init__(
608
- self,
609
- text_format: str,
610
- style: StyleType = "none",
611
- justify: JustifyMethod = "left",
612
- markup: bool = True,
613
- highlighter: Optional[Highlighter] = None,
614
- table_column: Optional[Column] = None,
615
- ) -> None:
616
- self.text_format = text_format
617
- self.justify: JustifyMethod = justify
618
- self.style = style
619
- self.markup = markup
620
- self.highlighter = highlighter
621
- super().__init__(table_column=table_column or Column(no_wrap=True))
622
-
623
- def render(self, task: "Task") -> Text:
624
- _text = self.text_format.format(task=task)
625
- if self.markup:
626
- text = Text.from_markup(_text, style=self.style, justify=self.justify)
627
- else:
628
- text = Text(_text, style=self.style, justify=self.justify)
629
- if self.highlighter:
630
- self.highlighter.highlight(text)
631
- return text
632
-
633
-
634
- class BarColumn(ProgressColumn):
635
- """Renders a visual progress bar.
636
-
637
- Args:
638
- bar_width (Optional[int], optional): Width of bar or None for full width. Defaults to 40.
639
- style (StyleType, optional): Style for the bar background. Defaults to "bar.back".
640
- complete_style (StyleType, optional): Style for the completed bar. Defaults to "bar.complete".
641
- finished_style (StyleType, optional): Style for a finished bar. Defaults to "bar.finished".
642
- pulse_style (StyleType, optional): Style for pulsing bars. Defaults to "bar.pulse".
643
- """
644
-
645
- def __init__(
646
- self,
647
- bar_width: Optional[int] = 40,
648
- style: StyleType = "bar.back",
649
- complete_style: StyleType = "bar.complete",
650
- finished_style: StyleType = "bar.finished",
651
- pulse_style: StyleType = "bar.pulse",
652
- table_column: Optional[Column] = None,
653
- ) -> None:
654
- self.bar_width = bar_width
655
- self.style = style
656
- self.complete_style = complete_style
657
- self.finished_style = finished_style
658
- self.pulse_style = pulse_style
659
- super().__init__(table_column=table_column)
660
-
661
- def render(self, task: "Task") -> ProgressBar:
662
- """Gets a progress bar widget for a task."""
663
- return ProgressBar(
664
- total=max(0, task.total) if task.total is not None else None,
665
- completed=max(0, task.completed),
666
- width=None if self.bar_width is None else max(1, self.bar_width),
667
- pulse=not task.started,
668
- animation_time=task.get_time(),
669
- style=self.style,
670
- complete_style=self.complete_style,
671
- finished_style=self.finished_style,
672
- pulse_style=self.pulse_style,
673
- )
674
-
675
-
676
- class TimeElapsedColumn(ProgressColumn):
677
- """Renders time elapsed."""
678
-
679
- def render(self, task: "Task") -> Text:
680
- """Show time elapsed."""
681
- elapsed = task.finished_time if task.finished else task.elapsed
682
- if elapsed is None:
683
- return Text("-:--:--", style="progress.elapsed")
684
- delta = timedelta(seconds=int(elapsed))
685
- return Text(str(delta), style="progress.elapsed")
686
-
687
-
688
- class TaskProgressColumn(TextColumn):
689
- """Show task progress as a percentage.
690
-
691
- Args:
692
- text_format (str, optional): Format for percentage display. Defaults to "[progress.percentage]{task.percentage:>3.0f}%".
693
- text_format_no_percentage (str, optional): Format if percentage is unknown. Defaults to "".
694
- style (StyleType, optional): Style of output. Defaults to "none".
695
- justify (JustifyMethod, optional): Text justification. Defaults to "left".
696
- markup (bool, optional): Enable markup. Defaults to True.
697
- highlighter (Optional[Highlighter], optional): Highlighter to apply to output. Defaults to None.
698
- table_column (Optional[Column], optional): Table Column to use. Defaults to None.
699
- show_speed (bool, optional): Show speed if total is unknown. Defaults to False.
700
- """
701
-
702
- def __init__(
703
- self,
704
- text_format: str = "[progress.percentage]{task.percentage:>3.0f}%",
705
- text_format_no_percentage: str = "",
706
- style: StyleType = "none",
707
- justify: JustifyMethod = "left",
708
- markup: bool = True,
709
- highlighter: Optional[Highlighter] = None,
710
- table_column: Optional[Column] = None,
711
- show_speed: bool = False,
712
- ) -> None:
713
-
714
- self.text_format_no_percentage = text_format_no_percentage
715
- self.show_speed = show_speed
716
- super().__init__(
717
- text_format=text_format,
718
- style=style,
719
- justify=justify,
720
- markup=markup,
721
- highlighter=highlighter,
722
- table_column=table_column,
723
- )
724
-
725
- @classmethod
726
- def render_speed(cls, speed: Optional[float]) -> Text:
727
- """Render the speed in iterations per second.
728
-
729
- Args:
730
- speed (Optional[float]): Speed in steps per second, or None if unknown.
731
-
732
- Returns:
733
- Text: Text object containing the task speed.
734
- """
735
- if speed is None:
736
- return Text("", style="progress.percentage")
737
- unit, suffix = filesize.pick_unit_and_suffix(
738
- int(speed),
739
- ["", "×10³", "×10⁶", "×10⁹", "×10¹²"],
740
- 1000,
741
- )
742
- data_speed = speed / unit
743
- return Text(f"{data_speed:.1f}{suffix} it/s", style="progress.percentage")
744
-
745
- def render(self, task: "Task") -> Text:
746
- if task.total is None and self.show_speed:
747
- return self.render_speed(task.finished_speed or task.speed)
748
- text_format = (
749
- self.text_format_no_percentage if task.total is None else self.text_format
750
- )
751
- _text = text_format.format(task=task)
752
- if self.markup:
753
- text = Text.from_markup(_text, style=self.style, justify=self.justify)
754
- else:
755
- text = Text(_text, style=self.style, justify=self.justify)
756
- if self.highlighter:
757
- self.highlighter.highlight(text)
758
- return text
759
-
760
-
761
- class TimeRemainingColumn(ProgressColumn):
762
- """Renders estimated time remaining.
763
-
764
- Args:
765
- compact (bool, optional): Render MM:SS when time remaining is less than an hour. Defaults to False.
766
- elapsed_when_finished (bool, optional): Render time elapsed when the task is finished. Defaults to False.
767
- """
768
-
769
- # Only refresh twice a second to prevent jitter
770
- max_refresh = 0.5
771
-
772
- def __init__(
773
- self,
774
- compact: bool = False,
775
- elapsed_when_finished: bool = False,
776
- table_column: Optional[Column] = None,
777
- ):
778
- self.compact = compact
779
- self.elapsed_when_finished = elapsed_when_finished
780
- super().__init__(table_column=table_column)
781
-
782
- def render(self, task: "Task") -> Text:
783
- """Show time remaining."""
784
- if self.elapsed_when_finished and task.finished:
785
- task_time = task.finished_time
786
- style = "progress.elapsed"
787
- else:
788
- task_time = task.time_remaining
789
- style = "progress.remaining"
790
-
791
- if task.total is None:
792
- return Text("", style=style)
793
-
794
- if task_time is None:
795
- return Text("--:--" if self.compact else "-:--:--", style=style)
796
-
797
- # Based on https://github.com/tqdm/tqdm/blob/master/tqdm/std.py
798
- minutes, seconds = divmod(int(task_time), 60)
799
- hours, minutes = divmod(minutes, 60)
800
-
801
- if self.compact and not hours:
802
- formatted = f"{minutes:02d}:{seconds:02d}"
803
- else:
804
- formatted = f"{hours:d}:{minutes:02d}:{seconds:02d}"
805
-
806
- return Text(formatted, style=style)
807
-
808
-
809
- class FileSizeColumn(ProgressColumn):
810
- """Renders completed filesize."""
811
-
812
- def render(self, task: "Task") -> Text:
813
- """Show data completed."""
814
- data_size = filesize.decimal(int(task.completed))
815
- return Text(data_size, style="progress.filesize")
816
-
817
-
818
- class TotalFileSizeColumn(ProgressColumn):
819
- """Renders total filesize."""
820
-
821
- def render(self, task: "Task") -> Text:
822
- """Show total data size."""
823
- data_size = filesize.decimal(int(task.total)) if task.total is not None else ""
824
- return Text(data_size, style="progress.filesize.total")
825
-
826
-
827
- class MofNCompleteColumn(ProgressColumn):
828
- """Renders completed count/total, e.g. ' 10/1000'.
829
-
830
- Best for bounded tasks with int quantities.
831
-
832
- Space pads the completed count so that progress length does not change as task progresses
833
- past powers of 10.
834
-
835
- Args:
836
- separator (str, optional): Text to separate completed and total values. Defaults to "/".
837
- """
838
-
839
- def __init__(self, separator: str = "/", table_column: Optional[Column] = None):
840
- self.separator = separator
841
- super().__init__(table_column=table_column)
842
-
843
- def render(self, task: "Task") -> Text:
844
- """Show completed/total."""
845
- completed = int(task.completed)
846
- total = int(task.total) if task.total is not None else "?"
847
- total_width = len(str(total))
848
- return Text(
849
- f"{completed:{total_width}d}{self.separator}{total}",
850
- style="progress.download",
851
- )
852
-
853
-
854
- class DownloadColumn(ProgressColumn):
855
- """Renders file size downloaded and total, e.g. '0.5/2.3 GB'.
856
-
857
- Args:
858
- binary_units (bool, optional): Use binary units, KiB, MiB etc. Defaults to False.
859
- """
860
-
861
- def __init__(
862
- self, binary_units: bool = False, table_column: Optional[Column] = None
863
- ) -> None:
864
- self.binary_units = binary_units
865
- super().__init__(table_column=table_column)
866
-
867
- def render(self, task: "Task") -> Text:
868
- """Calculate common unit for completed and total."""
869
- completed = int(task.completed)
870
-
871
- unit_and_suffix_calculation_base = (
872
- int(task.total) if task.total is not None else completed
873
- )
874
- if self.binary_units:
875
- unit, suffix = filesize.pick_unit_and_suffix(
876
- unit_and_suffix_calculation_base,
877
- ["bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"],
878
- 1024,
879
- )
880
- else:
881
- unit, suffix = filesize.pick_unit_and_suffix(
882
- unit_and_suffix_calculation_base,
883
- ["bytes", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"],
884
- 1000,
885
- )
886
- precision = 0 if unit == 1 else 1
887
-
888
- completed_ratio = completed / unit
889
- completed_str = f"{completed_ratio:,.{precision}f}"
890
-
891
- if task.total is not None:
892
- total = int(task.total)
893
- total_ratio = total / unit
894
- total_str = f"{total_ratio:,.{precision}f}"
895
- else:
896
- total_str = "?"
897
-
898
- download_status = f"{completed_str}/{total_str} {suffix}"
899
- download_text = Text(download_status, style="progress.download")
900
- return download_text
901
-
902
-
903
- class TransferSpeedColumn(ProgressColumn):
904
- """Renders human readable transfer speed."""
905
-
906
- def render(self, task: "Task") -> Text:
907
- """Show data transfer speed."""
908
- speed = task.finished_speed or task.speed
909
- if speed is None:
910
- return Text("?", style="progress.data.speed")
911
- data_speed = filesize.decimal(int(speed))
912
- return Text(f"{data_speed}/s", style="progress.data.speed")
913
-
914
-
915
- class ProgressSample(NamedTuple):
916
- """Sample of progress for a given time."""
917
-
918
- timestamp: float
919
- """Timestamp of sample."""
920
- completed: float
921
- """Number of steps completed."""
922
-
923
-
924
- @dataclass
925
- class Task:
926
- """Information regarding a progress task.
927
-
928
- This object should be considered read-only outside of the :class:`~Progress` class.
929
-
930
- """
931
-
932
- id: TaskID
933
- """Task ID associated with this task (used in Progress methods)."""
934
-
935
- description: str
936
- """str: Description of the task."""
937
-
938
- total: Optional[float]
939
- """Optional[float]: Total number of steps in this task."""
940
-
941
- completed: float
942
- """float: Number of steps completed"""
943
-
944
- _get_time: GetTimeCallable
945
- """Callable to get the current time."""
946
-
947
- finished_time: Optional[float] = None
948
- """float: Time task was finished."""
949
-
950
- visible: bool = True
951
- """bool: Indicates if this task is visible in the progress display."""
952
-
953
- fields: Dict[str, Any] = field(default_factory=dict)
954
- """dict: Arbitrary fields passed in via Progress.update."""
955
-
956
- start_time: Optional[float] = field(default=None, init=False, repr=False)
957
- """Optional[float]: Time this task was started, or None if not started."""
958
-
959
- stop_time: Optional[float] = field(default=None, init=False, repr=False)
960
- """Optional[float]: Time this task was stopped, or None if not stopped."""
961
-
962
- finished_speed: Optional[float] = None
963
- """Optional[float]: The last speed for a finished task."""
964
-
965
- _progress: Deque[ProgressSample] = field(
966
- default_factory=lambda: deque(maxlen=1000), init=False, repr=False
967
- )
968
-
969
- _lock: RLock = field(repr=False, default_factory=RLock)
970
- """Thread lock."""
971
-
972
- def get_time(self) -> float:
973
- """float: Get the current time, in seconds."""
974
- return self._get_time()
975
-
976
- @property
977
- def started(self) -> bool:
978
- """bool: Check if the task has started."""
979
- return self.start_time is not None
980
-
981
- @property
982
- def remaining(self) -> Optional[float]:
983
- """Optional[float]: Get the number of steps remaining, if a non-None total was set."""
984
- if self.total is None:
985
- return None
986
- return self.total - self.completed
987
-
988
- @property
989
- def elapsed(self) -> Optional[float]:
990
- """Optional[float]: Time elapsed since task was started, or ``None`` if the task hasn't started."""
991
- if self.start_time is None:
992
- return None
993
- if self.stop_time is not None:
994
- return self.stop_time - self.start_time
995
- return self.get_time() - self.start_time
996
-
997
- @property
998
- def finished(self) -> bool:
999
- """Check if the task has finished."""
1000
- return self.finished_time is not None
1001
-
1002
- @property
1003
- def percentage(self) -> float:
1004
- """float: Get progress of task as a percentage. If a None total was set, returns 0"""
1005
- if not self.total:
1006
- return 0.0
1007
- completed = (self.completed / self.total) * 100.0
1008
- completed = min(100.0, max(0.0, completed))
1009
- return completed
1010
-
1011
- @property
1012
- def speed(self) -> Optional[float]:
1013
- """Optional[float]: Get the estimated speed in steps per second."""
1014
- if self.start_time is None:
1015
- return None
1016
- with self._lock:
1017
- progress = self._progress
1018
- if not progress:
1019
- return None
1020
- total_time = progress[-1].timestamp - progress[0].timestamp
1021
- if total_time == 0:
1022
- return None
1023
- iter_progress = iter(progress)
1024
- next(iter_progress)
1025
- total_completed = sum(sample.completed for sample in iter_progress)
1026
- speed = total_completed / total_time
1027
- return speed
1028
-
1029
- @property
1030
- def time_remaining(self) -> Optional[float]:
1031
- """Optional[float]: Get estimated time to completion, or ``None`` if no data."""
1032
- if self.finished:
1033
- return 0.0
1034
- speed = self.speed
1035
- if not speed:
1036
- return None
1037
- remaining = self.remaining
1038
- if remaining is None:
1039
- return None
1040
- estimate = ceil(remaining / speed)
1041
- return estimate
1042
-
1043
- def _reset(self) -> None:
1044
- """Reset progress."""
1045
- self._progress.clear()
1046
- self.finished_time = None
1047
- self.finished_speed = None
1048
-
1049
-
1050
- class Progress(JupyterMixin):
1051
- """Renders an auto-updating progress bar(s).
1052
-
1053
- Args:
1054
- console (Console, optional): Optional Console instance. Default will be an internal Console instance writing to stdout.
1055
- auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()`.
1056
- refresh_per_second (float, optional): Number of times per second to refresh the progress information. Defaults to 10.
1057
- speed_estimate_period (float, optional): Period (in seconds) used to calculate the speed estimate. Defaults to 30.
1058
- transient (bool, optional): Clear the progress on exit. Defaults to False.
1059
- redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
1060
- redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
1061
- get_time (Callable, optional): A callable that gets the current time, or None to use Console.get_time. Defaults to None.
1062
- disable (bool, optional): Disable progress display. Defaults to False.
1063
- expand (bool, optional): Expand tasks table to fit width. Defaults to False.
1064
- """
1065
-
1066
- def __init__(
1067
- self,
1068
- *columns: Union[str, ProgressColumn],
1069
- console: Optional[Console] = None,
1070
- auto_refresh: bool = True,
1071
- refresh_per_second: float = 10,
1072
- speed_estimate_period: float = 30.0,
1073
- transient: bool = False,
1074
- redirect_stdout: bool = True,
1075
- redirect_stderr: bool = True,
1076
- get_time: Optional[GetTimeCallable] = None,
1077
- disable: bool = False,
1078
- expand: bool = False,
1079
- ) -> None:
1080
- assert refresh_per_second > 0, "refresh_per_second must be > 0"
1081
- self._lock = RLock()
1082
- self.columns = columns or self.get_default_columns()
1083
- self.speed_estimate_period = speed_estimate_period
1084
-
1085
- self.disable = disable
1086
- self.expand = expand
1087
- self._tasks: Dict[TaskID, Task] = {}
1088
- self._task_index: TaskID = TaskID(0)
1089
- self.live = Live(
1090
- console=console or get_console(),
1091
- auto_refresh=auto_refresh,
1092
- refresh_per_second=refresh_per_second,
1093
- transient=transient,
1094
- redirect_stdout=redirect_stdout,
1095
- redirect_stderr=redirect_stderr,
1096
- get_renderable=self.get_renderable,
1097
- )
1098
- self.get_time = get_time or self.console.get_time
1099
- self.print = self.console.print
1100
- self.log = self.console.log
1101
-
1102
- @classmethod
1103
- def get_default_columns(cls) -> Tuple[ProgressColumn, ...]:
1104
- """Get the default columns used for a new Progress instance:
1105
- - a text column for the description (TextColumn)
1106
- - the bar itself (BarColumn)
1107
- - a text column showing completion percentage (TextColumn)
1108
- - an estimated-time-remaining column (TimeRemainingColumn)
1109
- If the Progress instance is created without passing a columns argument,
1110
- the default columns defined here will be used.
1111
-
1112
- You can also create a Progress instance using custom columns before
1113
- and/or after the defaults, as in this example:
1114
-
1115
- progress = Progress(
1116
- SpinnerColumn(),
1117
- *Progress.get_default_columns(),
1118
- "Elapsed:",
1119
- TimeElapsedColumn(),
1120
- )
1121
-
1122
- This code shows the creation of a Progress display, containing
1123
- a spinner to the left, the default columns, and a labeled elapsed
1124
- time column.
1125
- """
1126
- return (
1127
- TextColumn("[progress.description]{task.description}"),
1128
- BarColumn(),
1129
- TaskProgressColumn(),
1130
- TimeRemainingColumn(),
1131
- )
1132
-
1133
- @property
1134
- def console(self) -> Console:
1135
- return self.live.console
1136
-
1137
- @property
1138
- def tasks(self) -> List[Task]:
1139
- """Get a list of Task instances."""
1140
- with self._lock:
1141
- return list(self._tasks.values())
1142
-
1143
- @property
1144
- def task_ids(self) -> List[TaskID]:
1145
- """A list of task IDs."""
1146
- with self._lock:
1147
- return list(self._tasks.keys())
1148
-
1149
- @property
1150
- def finished(self) -> bool:
1151
- """Check if all tasks have been completed."""
1152
- with self._lock:
1153
- if not self._tasks:
1154
- return True
1155
- return all(task.finished for task in self._tasks.values())
1156
-
1157
- def start(self) -> None:
1158
- """Start the progress display."""
1159
- if not self.disable:
1160
- self.live.start(refresh=True)
1161
-
1162
- def stop(self) -> None:
1163
- """Stop the progress display."""
1164
- self.live.stop()
1165
- if not self.console.is_interactive:
1166
- self.console.print()
1167
-
1168
- def __enter__(self) -> "Progress":
1169
- self.start()
1170
- return self
1171
-
1172
- def __exit__(
1173
- self,
1174
- exc_type: Optional[Type[BaseException]],
1175
- exc_val: Optional[BaseException],
1176
- exc_tb: Optional[TracebackType],
1177
- ) -> None:
1178
- self.stop()
1179
-
1180
- def track(
1181
- self,
1182
- sequence: Union[Iterable[ProgressType], Sequence[ProgressType]],
1183
- total: Optional[float] = None,
1184
- task_id: Optional[TaskID] = None,
1185
- description: str = "Working...",
1186
- update_period: float = 0.1,
1187
- ) -> Iterable[ProgressType]:
1188
- """Track progress by iterating over a sequence.
1189
-
1190
- Args:
1191
- sequence (Sequence[ProgressType]): A sequence of values you want to iterate over and track progress.
1192
- total: (float, optional): Total number of steps. Default is len(sequence).
1193
- task_id: (TaskID): Task to track. Default is new task.
1194
- description: (str, optional): Description of task, if new task is created.
1195
- update_period (float, optional): Minimum time (in seconds) between calls to update(). Defaults to 0.1.
1196
-
1197
- Returns:
1198
- Iterable[ProgressType]: An iterable of values taken from the provided sequence.
1199
- """
1200
- if total is None:
1201
- total = float(length_hint(sequence)) or None
1202
-
1203
- if task_id is None:
1204
- task_id = self.add_task(description, total=total)
1205
- else:
1206
- self.update(task_id, total=total)
1207
-
1208
- if self.live.auto_refresh:
1209
- with _TrackThread(self, task_id, update_period) as track_thread:
1210
- for value in sequence:
1211
- yield value
1212
- track_thread.completed += 1
1213
- else:
1214
- advance = self.advance
1215
- refresh = self.refresh
1216
- for value in sequence:
1217
- yield value
1218
- advance(task_id, 1)
1219
- refresh()
1220
-
1221
- def wrap_file(
1222
- self,
1223
- file: BinaryIO,
1224
- total: Optional[int] = None,
1225
- *,
1226
- task_id: Optional[TaskID] = None,
1227
- description: str = "Reading...",
1228
- ) -> BinaryIO:
1229
- """Track progress while reading from a binary file.
1230
-
1231
- Args:
1232
- file (BinaryIO): A file-like object opened in binary mode.
1233
- total (int, optional): Total number of bytes to read. This must be provided unless a task with a total is also given.
1234
- task_id (TaskID): Task to track. Default is new task.
1235
- description (str, optional): Description of task, if new task is created.
1236
-
1237
- Returns:
1238
- BinaryIO: A readable file-like object in binary mode.
1239
-
1240
- Raises:
1241
- ValueError: When no total value can be extracted from the arguments or the task.
1242
- """
1243
- # attempt to recover the total from the task
1244
- total_bytes: Optional[float] = None
1245
- if total is not None:
1246
- total_bytes = total
1247
- elif task_id is not None:
1248
- with self._lock:
1249
- total_bytes = self._tasks[task_id].total
1250
- if total_bytes is None:
1251
- raise ValueError(
1252
- "unable to get the total number of bytes, please specify 'total'"
1253
- )
1254
-
1255
- # update total of task or create new task
1256
- if task_id is None:
1257
- task_id = self.add_task(description, total=total_bytes)
1258
- else:
1259
- self.update(task_id, total=total_bytes)
1260
-
1261
- return _Reader(file, self, task_id, close_handle=False)
1262
-
1263
- @typing.overload
1264
- def open(
1265
- self,
1266
- file: Union[str, "PathLike[str]", bytes],
1267
- mode: Literal["rb"],
1268
- buffering: int = -1,
1269
- encoding: Optional[str] = None,
1270
- errors: Optional[str] = None,
1271
- newline: Optional[str] = None,
1272
- *,
1273
- total: Optional[int] = None,
1274
- task_id: Optional[TaskID] = None,
1275
- description: str = "Reading...",
1276
- ) -> BinaryIO:
1277
- pass
1278
-
1279
- @typing.overload
1280
- def open(
1281
- self,
1282
- file: Union[str, "PathLike[str]", bytes],
1283
- mode: Union[Literal["r"], Literal["rt"]],
1284
- buffering: int = -1,
1285
- encoding: Optional[str] = None,
1286
- errors: Optional[str] = None,
1287
- newline: Optional[str] = None,
1288
- *,
1289
- total: Optional[int] = None,
1290
- task_id: Optional[TaskID] = None,
1291
- description: str = "Reading...",
1292
- ) -> TextIO:
1293
- pass
1294
-
1295
- def open(
1296
- self,
1297
- file: Union[str, "PathLike[str]", bytes],
1298
- mode: Union[Literal["rb"], Literal["rt"], Literal["r"]] = "r",
1299
- buffering: int = -1,
1300
- encoding: Optional[str] = None,
1301
- errors: Optional[str] = None,
1302
- newline: Optional[str] = None,
1303
- *,
1304
- total: Optional[int] = None,
1305
- task_id: Optional[TaskID] = None,
1306
- description: str = "Reading...",
1307
- ) -> Union[BinaryIO, TextIO]:
1308
- """Track progress while reading from a file (binary or text mode).
1309
-
1310
- Args:
1311
- file (Union[str, PathLike[str], bytes]): The path to the file to read.
1312
- mode (str): The mode to use to open the file. Only supports "r", "rb" or "rt".
1313
- buffering (int): The buffering strategy to use, see :func:`io.open`.
1314
- encoding (str, optional): The encoding to use when reading in text mode, see :func:`io.open`.
1315
- errors (str, optional): The error handling strategy for decoding errors, see :func:`io.open`.
1316
- newline (str, optional): The strategy for handling newlines in text mode, see :func:`io.open`.
1317
- total (int, optional): Total number of bytes to read. If none given, os.stat(path).st_size is used.
1318
- task_id (TaskID): Task to track. Default is new task.
1319
- description (str, optional): Description of task, if new task is created.
1320
-
1321
- Returns:
1322
- Union[BinaryIO, TextIO]: A readable file-like object in binary or text mode.
1323
-
1324
- Raises:
1325
- ValueError: When an invalid mode is given.
1326
- """
1327
- # normalize the mode (always rb, rt)
1328
- _mode = "".join(sorted(mode, reverse=False))
1329
- if _mode not in ("br", "rt", "r"):
1330
- raise ValueError("invalid mode {!r}".format(mode))
1331
-
1332
- # patch buffering to provide the same behaviour as the builtin `open`
1333
- line_buffering = buffering == 1
1334
- if _mode == "br" and buffering == 1:
1335
- warnings.warn(
1336
- "line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used",
1337
- RuntimeWarning,
1338
- )
1339
- buffering = -1
1340
- elif _mode in ("rt", "r"):
1341
- if buffering == 0:
1342
- raise ValueError("can't have unbuffered text I/O")
1343
- elif buffering == 1:
1344
- buffering = -1
1345
-
1346
- # attempt to get the total with `os.stat`
1347
- if total is None:
1348
- total = stat(file).st_size
1349
-
1350
- # update total of task or create new task
1351
- if task_id is None:
1352
- task_id = self.add_task(description, total=total)
1353
- else:
1354
- self.update(task_id, total=total)
1355
-
1356
- # open the file in binary mode,
1357
- handle = io.open(file, "rb", buffering=buffering)
1358
- reader = _Reader(handle, self, task_id, close_handle=True)
1359
-
1360
- # wrap the reader in a `TextIOWrapper` if text mode
1361
- if mode in ("r", "rt"):
1362
- return io.TextIOWrapper(
1363
- reader,
1364
- encoding=encoding,
1365
- errors=errors,
1366
- newline=newline,
1367
- line_buffering=line_buffering,
1368
- )
1369
-
1370
- return reader
1371
-
1372
- def start_task(self, task_id: TaskID) -> None:
1373
- """Start a task.
1374
-
1375
- Starts a task (used when calculating elapsed time). You may need to call this manually,
1376
- if you called ``add_task`` with ``start=False``.
1377
-
1378
- Args:
1379
- task_id (TaskID): ID of task.
1380
- """
1381
- with self._lock:
1382
- task = self._tasks[task_id]
1383
- if task.start_time is None:
1384
- task.start_time = self.get_time()
1385
-
1386
- def stop_task(self, task_id: TaskID) -> None:
1387
- """Stop a task.
1388
-
1389
- This will freeze the elapsed time on the task.
1390
-
1391
- Args:
1392
- task_id (TaskID): ID of task.
1393
- """
1394
- with self._lock:
1395
- task = self._tasks[task_id]
1396
- current_time = self.get_time()
1397
- if task.start_time is None:
1398
- task.start_time = current_time
1399
- task.stop_time = current_time
1400
-
1401
- def update(
1402
- self,
1403
- task_id: TaskID,
1404
- *,
1405
- total: Optional[float] = None,
1406
- completed: Optional[float] = None,
1407
- advance: Optional[float] = None,
1408
- description: Optional[str] = None,
1409
- visible: Optional[bool] = None,
1410
- refresh: bool = False,
1411
- **fields: Any,
1412
- ) -> None:
1413
- """Update information associated with a task.
1414
-
1415
- Args:
1416
- task_id (TaskID): Task id (returned by add_task).
1417
- total (float, optional): Updates task.total if not None.
1418
- completed (float, optional): Updates task.completed if not None.
1419
- advance (float, optional): Add a value to task.completed if not None.
1420
- description (str, optional): Change task description if not None.
1421
- visible (bool, optional): Set visible flag if not None.
1422
- refresh (bool): Force a refresh of progress information. Default is False.
1423
- **fields (Any): Additional data fields required for rendering.
1424
- """
1425
- with self._lock:
1426
- task = self._tasks[task_id]
1427
- completed_start = task.completed
1428
-
1429
- if total is not None and total != task.total:
1430
- task.total = total
1431
- task._reset()
1432
- if advance is not None:
1433
- task.completed += advance
1434
- if completed is not None:
1435
- task.completed = completed
1436
- if description is not None:
1437
- task.description = description
1438
- if visible is not None:
1439
- task.visible = visible
1440
- task.fields.update(fields)
1441
- update_completed = task.completed - completed_start
1442
-
1443
- current_time = self.get_time()
1444
- old_sample_time = current_time - self.speed_estimate_period
1445
- _progress = task._progress
1446
-
1447
- popleft = _progress.popleft
1448
- while _progress and _progress[0].timestamp < old_sample_time:
1449
- popleft()
1450
- if update_completed > 0:
1451
- _progress.append(ProgressSample(current_time, update_completed))
1452
- if (
1453
- task.total is not None
1454
- and task.completed >= task.total
1455
- and task.finished_time is None
1456
- ):
1457
- task.finished_time = task.elapsed
1458
-
1459
- if refresh:
1460
- self.refresh()
1461
-
1462
- def reset(
1463
- self,
1464
- task_id: TaskID,
1465
- *,
1466
- start: bool = True,
1467
- total: Optional[float] = None,
1468
- completed: int = 0,
1469
- visible: Optional[bool] = None,
1470
- description: Optional[str] = None,
1471
- **fields: Any,
1472
- ) -> None:
1473
- """Reset a task so completed is 0 and the clock is reset.
1474
-
1475
- Args:
1476
- task_id (TaskID): ID of task.
1477
- start (bool, optional): Start the task after reset. Defaults to True.
1478
- total (float, optional): New total steps in task, or None to use current total. Defaults to None.
1479
- completed (int, optional): Number of steps completed. Defaults to 0.
1480
- visible (bool, optional): Enable display of the task. Defaults to True.
1481
- description (str, optional): Change task description if not None. Defaults to None.
1482
- **fields (str): Additional data fields required for rendering.
1483
- """
1484
- current_time = self.get_time()
1485
- with self._lock:
1486
- task = self._tasks[task_id]
1487
- task._reset()
1488
- task.start_time = current_time if start else None
1489
- if total is not None:
1490
- task.total = total
1491
- task.completed = completed
1492
- if visible is not None:
1493
- task.visible = visible
1494
- if fields:
1495
- task.fields = fields
1496
- if description is not None:
1497
- task.description = description
1498
- task.finished_time = None
1499
- self.refresh()
1500
-
1501
- def advance(self, task_id: TaskID, advance: float = 1) -> None:
1502
- """Advance task by a number of steps.
1503
-
1504
- Args:
1505
- task_id (TaskID): ID of task.
1506
- advance (float): Number of steps to advance. Default is 1.
1507
- """
1508
- current_time = self.get_time()
1509
- with self._lock:
1510
- task = self._tasks[task_id]
1511
- completed_start = task.completed
1512
- task.completed += advance
1513
- update_completed = task.completed - completed_start
1514
- old_sample_time = current_time - self.speed_estimate_period
1515
- _progress = task._progress
1516
-
1517
- popleft = _progress.popleft
1518
- while _progress and _progress[0].timestamp < old_sample_time:
1519
- popleft()
1520
- while len(_progress) > 1000:
1521
- popleft()
1522
- _progress.append(ProgressSample(current_time, update_completed))
1523
- if (
1524
- task.total is not None
1525
- and task.completed >= task.total
1526
- and task.finished_time is None
1527
- ):
1528
- task.finished_time = task.elapsed
1529
- task.finished_speed = task.speed
1530
-
1531
- def refresh(self) -> None:
1532
- """Refresh (render) the progress information."""
1533
- if not self.disable and self.live.is_started:
1534
- self.live.refresh()
1535
-
1536
- def get_renderable(self) -> RenderableType:
1537
- """Get a renderable for the progress display."""
1538
- renderable = Group(*self.get_renderables())
1539
- return renderable
1540
-
1541
- def get_renderables(self) -> Iterable[RenderableType]:
1542
- """Get a number of renderables for the progress display."""
1543
- table = self.make_tasks_table(self.tasks)
1544
- yield table
1545
-
1546
- def make_tasks_table(self, tasks: Iterable[Task]) -> Table:
1547
- """Get a table to render the Progress display.
1548
-
1549
- Args:
1550
- tasks (Iterable[Task]): An iterable of Task instances, one per row of the table.
1551
-
1552
- Returns:
1553
- Table: A table instance.
1554
- """
1555
- table_columns = (
1556
- (
1557
- Column(no_wrap=True)
1558
- if isinstance(_column, str)
1559
- else _column.get_table_column().copy()
1560
- )
1561
- for _column in self.columns
1562
- )
1563
- table = Table.grid(*table_columns, padding=(0, 1), expand=self.expand)
1564
-
1565
- for task in tasks:
1566
- if task.visible:
1567
- table.add_row(
1568
- *(
1569
- (
1570
- column.format(task=task)
1571
- if isinstance(column, str)
1572
- else column(task)
1573
- )
1574
- for column in self.columns
1575
- )
1576
- )
1577
- return table
1578
-
1579
- def __rich__(self) -> RenderableType:
1580
- """Makes the Progress class itself renderable."""
1581
- with self._lock:
1582
- return self.get_renderable()
1583
-
1584
- def add_task(
1585
- self,
1586
- description: str,
1587
- start: bool = True,
1588
- total: Optional[float] = 100.0,
1589
- completed: int = 0,
1590
- visible: bool = True,
1591
- **fields: Any,
1592
- ) -> TaskID:
1593
- """Add a new 'task' to the Progress display.
1594
-
1595
- Args:
1596
- description (str): A description of the task.
1597
- start (bool, optional): Start the task immediately (to calculate elapsed time). If set to False,
1598
- you will need to call `start_task` manually. Defaults to True.
1599
- total (float, optional): Number of total steps in the progress if known.
1600
- Set to None to render a pulsing animation. Defaults to 100.
1601
- completed (int, optional): Number of steps completed so far. Defaults to 0.
1602
- visible (bool, optional): Enable display of the task. Defaults to True.
1603
- **fields (str): Additional data fields required for rendering.
1604
-
1605
- Returns:
1606
- TaskID: An ID you can use when calling `update`.
1607
- """
1608
- with self._lock:
1609
- task = Task(
1610
- self._task_index,
1611
- description,
1612
- total,
1613
- completed,
1614
- visible=visible,
1615
- fields=fields,
1616
- _get_time=self.get_time,
1617
- _lock=self._lock,
1618
- )
1619
- self._tasks[self._task_index] = task
1620
- if start:
1621
- self.start_task(self._task_index)
1622
- new_task_index = self._task_index
1623
- self._task_index = TaskID(int(self._task_index) + 1)
1624
- self.refresh()
1625
- return new_task_index
1626
-
1627
- def remove_task(self, task_id: TaskID) -> None:
1628
- """Delete a task if it exists.
1629
-
1630
- Args:
1631
- task_id (TaskID): A task ID.
1632
-
1633
- """
1634
- with self._lock:
1635
- del self._tasks[task_id]
1636
-
1637
-
1638
- if __name__ == "__main__": # pragma: no coverage
1639
-
1640
- import random
1641
- import time
1642
-
1643
- from .panel import Panel
1644
- from .rule import Rule
1645
- from .syntax import Syntax
1646
- from .table import Table
1647
-
1648
- syntax = Syntax(
1649
- '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
1650
- """Iterate and generate a tuple with a flag for last value."""
1651
- iter_values = iter(values)
1652
- try:
1653
- previous_value = next(iter_values)
1654
- except StopIteration:
1655
- return
1656
- for value in iter_values:
1657
- yield False, previous_value
1658
- previous_value = value
1659
- yield True, previous_value''',
1660
- "python",
1661
- line_numbers=True,
1662
- )
1663
-
1664
- table = Table("foo", "bar", "baz")
1665
- table.add_row("1", "2", "3")
1666
-
1667
- progress_renderables = [
1668
- "Text may be printed while the progress bars are rendering.",
1669
- Panel("In fact, [i]any[/i] renderable will work"),
1670
- "Such as [magenta]tables[/]...",
1671
- table,
1672
- "Pretty printed structures...",
1673
- {"type": "example", "text": "Pretty printed"},
1674
- "Syntax...",
1675
- syntax,
1676
- Rule("Give it a try!"),
1677
- ]
1678
-
1679
- from itertools import cycle
1680
-
1681
- examples = cycle(progress_renderables)
1682
-
1683
- console = Console(record=True)
1684
-
1685
- with Progress(
1686
- SpinnerColumn(),
1687
- *Progress.get_default_columns(),
1688
- TimeElapsedColumn(),
1689
- console=console,
1690
- transient=False,
1691
- ) as progress:
1692
-
1693
- task1 = progress.add_task("[red]Downloading", total=1000)
1694
- task2 = progress.add_task("[green]Processing", total=1000)
1695
- task3 = progress.add_task("[yellow]Thinking", total=None)
1696
-
1697
- while not progress.finished:
1698
- progress.update(task1, advance=0.5)
1699
- progress.update(task2, advance=0.3)
1700
- time.sleep(0.01)
1701
- if random.randint(0, 100) < 1:
1702
- progress.log(next(examples))
 
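For reference, a minimal usage sketch of the Progress API deleted above. It assumes the rich package is still installed and that these classes are importable from rich.progress; the task description, total, and chunk sizes are illustrative only.

import time

from rich.progress import (
    BarColumn,
    DownloadColumn,
    Progress,
    TextColumn,
    TimeRemainingColumn,
    TransferSpeedColumn,
)

# A download-style display assembled from the column classes defined above.
with Progress(
    TextColumn("[progress.description]{task.description}"),
    BarColumn(),
    DownloadColumn(),
    TransferSpeedColumn(),
    TimeRemainingColumn(),
) as progress:
    task_id = progress.add_task("demo download", total=1_000_000)  # total in "bytes"
    while not progress.finished:
        progress.update(task_id, advance=50_000)  # pretend a chunk arrived
        time.sleep(0.05)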
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/glob.py DELETED
@@ -1,167 +0,0 @@
1
- """
2
- Filename globbing utility. Mostly a copy of `glob` from Python 3.5.
3
-
4
- Changes include:
5
- * PEP 3102 keyword-only `*` arguments removed.
6
- * Hidden files are not ignored.
7
- """
8
-
9
- import os
10
- import re
11
- import fnmatch
12
-
13
- __all__ = ["glob", "iglob", "escape"]
14
-
15
-
16
- def glob(pathname, recursive=False):
17
- """Return a list of paths matching a pathname pattern.
18
-
19
- The pattern may contain simple shell-style wildcards a la
20
- fnmatch. However, unlike fnmatch, filenames starting with a
21
- dot are special cases that are not matched by '*' and '?'
22
- patterns.
23
-
24
- If recursive is true, the pattern '**' will match any files and
25
- zero or more directories and subdirectories.
26
- """
27
- return list(iglob(pathname, recursive=recursive))
28
-
29
-
30
- def iglob(pathname, recursive=False):
31
- """Return an iterator which yields the paths matching a pathname pattern.
32
-
33
- The pattern may contain simple shell-style wildcards a la
34
- fnmatch. However, unlike fnmatch, filenames starting with a
35
- dot are special cases that are not matched by '*' and '?'
36
- patterns.
37
-
38
- If recursive is true, the pattern '**' will match any files and
39
- zero or more directories and subdirectories.
40
- """
41
- it = _iglob(pathname, recursive)
42
- if recursive and _isrecursive(pathname):
43
- s = next(it) # skip empty string
44
- assert not s
45
- return it
46
-
47
-
48
- def _iglob(pathname, recursive):
49
- dirname, basename = os.path.split(pathname)
50
- glob_in_dir = glob2 if recursive and _isrecursive(basename) else glob1
51
-
52
- if not has_magic(pathname):
53
- if basename:
54
- if os.path.lexists(pathname):
55
- yield pathname
56
- else:
57
- # Patterns ending with a slash should match only directories
58
- if os.path.isdir(dirname):
59
- yield pathname
60
- return
61
-
62
- if not dirname:
63
- yield from glob_in_dir(dirname, basename)
64
- return
65
- # `os.path.split()` returns the argument itself as a dirname if it is a
66
- # drive or UNC path. Prevent an infinite recursion if a drive or UNC path
67
- # contains magic characters (i.e. r'\\?\C:').
68
- if dirname != pathname and has_magic(dirname):
69
- dirs = _iglob(dirname, recursive)
70
- else:
71
- dirs = [dirname]
72
- if not has_magic(basename):
73
- glob_in_dir = glob0
74
- for dirname in dirs:
75
- for name in glob_in_dir(dirname, basename):
76
- yield os.path.join(dirname, name)
77
-
78
-
79
- # These 2 helper functions non-recursively glob inside a literal directory.
80
- # They return a list of basenames. `glob1` accepts a pattern while `glob0`
81
- # takes a literal basename (so it only has to check for its existence).
82
-
83
-
84
- def glob1(dirname, pattern):
85
- if not dirname:
86
- if isinstance(pattern, bytes):
87
- dirname = os.curdir.encode('ASCII')
88
- else:
89
- dirname = os.curdir
90
- try:
91
- names = os.listdir(dirname)
92
- except OSError:
93
- return []
94
- return fnmatch.filter(names, pattern)
95
-
96
-
97
- def glob0(dirname, basename):
98
- if not basename:
99
- # `os.path.split()` returns an empty basename for paths ending with a
100
- # directory separator. 'q*x/' should match only directories.
101
- if os.path.isdir(dirname):
102
- return [basename]
103
- else:
104
- if os.path.lexists(os.path.join(dirname, basename)):
105
- return [basename]
106
- return []
107
-
108
-
109
- # This helper function recursively yields relative pathnames inside a literal
110
- # directory.
111
-
112
-
113
- def glob2(dirname, pattern):
114
- assert _isrecursive(pattern)
115
- yield pattern[:0]
116
- for x in _rlistdir(dirname):
117
- yield x
118
-
119
-
120
- # Recursively yields relative pathnames inside a literal directory.
121
- def _rlistdir(dirname):
122
- if not dirname:
123
- if isinstance(dirname, bytes):
124
- dirname = os.curdir.encode('ASCII')
125
- else:
126
- dirname = os.curdir
127
- try:
128
- names = os.listdir(dirname)
129
- except os.error:
130
- return
131
- for x in names:
132
- yield x
133
- path = os.path.join(dirname, x) if dirname else x
134
- for y in _rlistdir(path):
135
- yield os.path.join(x, y)
136
-
137
-
138
- magic_check = re.compile('([*?[])')
139
- magic_check_bytes = re.compile(b'([*?[])')
140
-
141
-
142
- def has_magic(s):
143
- if isinstance(s, bytes):
144
- match = magic_check_bytes.search(s)
145
- else:
146
- match = magic_check.search(s)
147
- return match is not None
148
-
149
-
150
- def _isrecursive(pattern):
151
- if isinstance(pattern, bytes):
152
- return pattern == b'**'
153
- else:
154
- return pattern == '**'
155
-
156
-
157
- def escape(pathname):
158
- """Escape all special characters.
159
- """
160
- # Escaping is done by wrapping any of "*?[" between square brackets.
161
- # Metacharacters do not work in the drive part and shouldn't be escaped.
162
- drive, pathname = os.path.splitdrive(pathname)
163
- if isinstance(pathname, bytes):
164
- pathname = magic_check_bytes.sub(br'[\1]', pathname)
165
- else:
166
- pathname = magic_check.sub(r'[\1]', pathname)
167
- return drive + pathname
 
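For reference, a small usage sketch of the vendored glob helper deleted above. The import path assumes setuptools still exposes setuptools.glob; the patterns and paths are illustrative, and the standard library glob module offers the same calls if the vendored copy is gone.

from setuptools.glob import escape, glob, iglob

# With recursive=True, '**' matches files and any depth of subdirectories.
for path in iglob("src/**/*.py", recursive=True):
    print(path)

# escape() brackets magic characters so a literal '*' or '[' can be matched.
pattern = escape("weird[name]") + "/*.txt"
print(glob(pattern))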
spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/url.py DELETED
@@ -1,435 +0,0 @@
1
- from __future__ import absolute_import
2
-
3
- import re
4
- from collections import namedtuple
5
-
6
- from ..exceptions import LocationParseError
7
- from ..packages import six
8
-
9
- url_attrs = ["scheme", "auth", "host", "port", "path", "query", "fragment"]
10
-
11
- # We only want to normalize urls with an HTTP(S) scheme.
12
- # urllib3 infers URLs without a scheme (None) to be http.
13
- NORMALIZABLE_SCHEMES = ("http", "https", None)
14
-
15
- # Almost all of these patterns were derived from the
16
- # 'rfc3986' module: https://github.com/python-hyper/rfc3986
17
- PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}")
18
- SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)")
19
- URI_RE = re.compile(
20
- r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?"
21
- r"(?://([^\\/?#]*))?"
22
- r"([^?#]*)"
23
- r"(?:\?([^#]*))?"
24
- r"(?:#(.*))?$",
25
- re.UNICODE | re.DOTALL,
26
- )
27
-
28
- IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}"
29
- HEX_PAT = "[0-9A-Fa-f]{1,4}"
30
- LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT)
31
- _subs = {"hex": HEX_PAT, "ls32": LS32_PAT}
32
- _variations = [
33
- # 6( h16 ":" ) ls32
34
- "(?:%(hex)s:){6}%(ls32)s",
35
- # "::" 5( h16 ":" ) ls32
36
- "::(?:%(hex)s:){5}%(ls32)s",
37
- # [ h16 ] "::" 4( h16 ":" ) ls32
38
- "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s",
39
- # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32
40
- "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s",
41
- # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32
42
- "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s",
43
- # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32
44
- "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s",
45
- # [ *4( h16 ":" ) h16 ] "::" ls32
46
- "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s",
47
- # [ *5( h16 ":" ) h16 ] "::" h16
48
- "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s",
49
- # [ *6( h16 ":" ) h16 ] "::"
50
- "(?:(?:%(hex)s:){0,6}%(hex)s)?::",
51
- ]
52
-
53
- UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~"
54
- IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")"
55
- ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+"
56
- IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]"
57
- REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*"
58
- TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$")
59
-
60
- IPV4_RE = re.compile("^" + IPV4_PAT + "$")
61
- IPV6_RE = re.compile("^" + IPV6_PAT + "$")
62
- IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$")
63
- BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$")
64
- ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$")
65
-
66
- _HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % (
67
- REG_NAME_PAT,
68
- IPV4_PAT,
69
- IPV6_ADDRZ_PAT,
70
- )
71
- _HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL)
72
-
73
- UNRESERVED_CHARS = set(
74
- "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~"
75
- )
76
- SUB_DELIM_CHARS = set("!$&'()*+,;=")
77
- USERINFO_CHARS = UNRESERVED_CHARS | SUB_DELIM_CHARS | {":"}
78
- PATH_CHARS = USERINFO_CHARS | {"@", "/"}
79
- QUERY_CHARS = FRAGMENT_CHARS = PATH_CHARS | {"?"}
80
-
81
-
82
- class Url(namedtuple("Url", url_attrs)):
83
- """
84
- Data structure for representing an HTTP URL. Used as a return value for
85
- :func:`parse_url`. Both the scheme and host are normalized as they are
86
- both case-insensitive according to RFC 3986.
87
- """
88
-
89
- __slots__ = ()
90
-
91
- def __new__(
92
- cls,
93
- scheme=None,
94
- auth=None,
95
- host=None,
96
- port=None,
97
- path=None,
98
- query=None,
99
- fragment=None,
100
- ):
101
- if path and not path.startswith("/"):
102
- path = "/" + path
103
- if scheme is not None:
104
- scheme = scheme.lower()
105
- return super(Url, cls).__new__(
106
- cls, scheme, auth, host, port, path, query, fragment
107
- )
108
-
109
- @property
110
- def hostname(self):
111
- """For backwards-compatibility with urlparse. We're nice like that."""
112
- return self.host
113
-
114
- @property
115
- def request_uri(self):
116
- """Absolute path including the query string."""
117
- uri = self.path or "/"
118
-
119
- if self.query is not None:
120
- uri += "?" + self.query
121
-
122
- return uri
123
-
124
- @property
125
- def netloc(self):
126
- """Network location including host and port"""
127
- if self.port:
128
- return "%s:%d" % (self.host, self.port)
129
- return self.host
130
-
131
- @property
132
- def url(self):
133
- """
134
- Convert self into a url
135
-
136
- This function should more or less round-trip with :func:`.parse_url`. The
137
- returned url may not be exactly the same as the url inputted to
138
- :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls
139
- with a blank port will have : removed).
140
-
141
- Example: ::
142
-
143
- >>> U = parse_url('http://google.com/mail/')
144
- >>> U.url
145
- 'http://google.com/mail/'
146
- >>> Url('http', 'username:password', 'host.com', 80,
147
- ... '/path', 'query', 'fragment').url
148
- 'http://username:[email protected]:80/path?query#fragment'
149
- """
150
- scheme, auth, host, port, path, query, fragment = self
151
- url = u""
152
-
153
- # We use "is not None" we want things to happen with empty strings (or 0 port)
154
- if scheme is not None:
155
- url += scheme + u"://"
156
- if auth is not None:
157
- url += auth + u"@"
158
- if host is not None:
159
- url += host
160
- if port is not None:
161
- url += u":" + str(port)
162
- if path is not None:
163
- url += path
164
- if query is not None:
165
- url += u"?" + query
166
- if fragment is not None:
167
- url += u"#" + fragment
168
-
169
- return url
170
-
171
- def __str__(self):
172
- return self.url
173
-
174
-
175
- def split_first(s, delims):
176
- """
177
- .. deprecated:: 1.25
178
-
179
- Given a string and an iterable of delimiters, split on the first found
180
- delimiter. Return two split parts and the matched delimiter.
181
-
182
- If not found, then the first part is the full input string.
183
-
184
- Example::
185
-
186
- >>> split_first('foo/bar?baz', '?/=')
187
- ('foo', 'bar?baz', '/')
188
- >>> split_first('foo/bar?baz', '123')
189
- ('foo/bar?baz', '', None)
190
-
191
- Scales linearly with number of delims. Not ideal for large number of delims.
192
- """
193
- min_idx = None
194
- min_delim = None
195
- for d in delims:
196
- idx = s.find(d)
197
- if idx < 0:
198
- continue
199
-
200
- if min_idx is None or idx < min_idx:
201
- min_idx = idx
202
- min_delim = d
203
-
204
- if min_idx is None or min_idx < 0:
205
- return s, "", None
206
-
207
- return s[:min_idx], s[min_idx + 1 :], min_delim
208
-
209
-
210
- def _encode_invalid_chars(component, allowed_chars, encoding="utf-8"):
211
- """Percent-encodes a URI component without reapplying
212
- onto an already percent-encoded component.
213
- """
214
- if component is None:
215
- return component
216
-
217
- component = six.ensure_text(component)
218
-
219
- # Normalize existing percent-encoded bytes.
220
- # Try to see if the component we're encoding is already percent-encoded
221
- # so we can skip all '%' characters but still encode all others.
222
- component, percent_encodings = PERCENT_RE.subn(
223
- lambda match: match.group(0).upper(), component
224
- )
225
-
226
- uri_bytes = component.encode("utf-8", "surrogatepass")
227
- is_percent_encoded = percent_encodings == uri_bytes.count(b"%")
228
- encoded_component = bytearray()
229
-
230
- for i in range(0, len(uri_bytes)):
231
- # Will return a single character bytestring on both Python 2 & 3
232
- byte = uri_bytes[i : i + 1]
233
- byte_ord = ord(byte)
234
- if (is_percent_encoded and byte == b"%") or (
235
- byte_ord < 128 and byte.decode() in allowed_chars
236
- ):
237
- encoded_component += byte
238
- continue
239
- encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper()))
240
-
241
- return encoded_component.decode(encoding)
242
-
243
-
244
- def _remove_path_dot_segments(path):
245
- # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code
246
- segments = path.split("/") # Turn the path into a list of segments
247
- output = [] # Initialize the variable to use to store output
248
-
249
- for segment in segments:
250
- # '.' is the current directory, so ignore it, it is superfluous
251
- if segment == ".":
252
- continue
253
- # Anything other than '..', should be appended to the output
254
- elif segment != "..":
255
- output.append(segment)
256
- # In this case segment == '..', if we can, we should pop the last
257
- # element
258
- elif output:
259
- output.pop()
260
-
261
- # If the path starts with '/' and the output is empty or the first string
262
- # is non-empty
263
- if path.startswith("/") and (not output or output[0]):
264
- output.insert(0, "")
265
-
266
- # If the path starts with '/.' or '/..' ensure we add one more empty
267
- # string to add a trailing '/'
268
- if path.endswith(("/.", "/..")):
269
- output.append("")
270
-
271
- return "/".join(output)
272
-
273
-
274
- def _normalize_host(host, scheme):
275
- if host:
276
- if isinstance(host, six.binary_type):
277
- host = six.ensure_str(host)
278
-
279
- if scheme in NORMALIZABLE_SCHEMES:
280
- is_ipv6 = IPV6_ADDRZ_RE.match(host)
281
- if is_ipv6:
282
- # IPv6 hosts of the form 'a::b%zone' are encoded in a URL as
283
- # such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID
284
- # separator as necessary to return a valid RFC 4007 scoped IP.
285
- match = ZONE_ID_RE.search(host)
286
- if match:
287
- start, end = match.span(1)
288
- zone_id = host[start:end]
289
-
290
- if zone_id.startswith("%25") and zone_id != "%25":
291
- zone_id = zone_id[3:]
292
- else:
293
- zone_id = zone_id[1:]
294
-                     zone_id = "%" + _encode_invalid_chars(zone_id, UNRESERVED_CHARS)
-                     return host[:start].lower() + zone_id + host[end:]
-                 else:
-                     return host.lower()
-             elif not IPV4_RE.match(host):
-                 return six.ensure_str(
-                     b".".join([_idna_encode(label) for label in host.split(".")])
-                 )
-     return host
-
-
- def _idna_encode(name):
-     if name and any(ord(x) >= 128 for x in name):
-         try:
-             import idna
-         except ImportError:
-             six.raise_from(
-                 LocationParseError("Unable to parse URL without the 'idna' module"),
-                 None,
-             )
-         try:
-             return idna.encode(name.lower(), strict=True, std3_rules=True)
-         except idna.IDNAError:
-             six.raise_from(
-                 LocationParseError(u"Name '%s' is not a valid IDNA label" % name), None
-             )
-     return name.lower().encode("ascii")
-
-
- def _encode_target(target):
-     """Percent-encodes a request target so that there are no invalid characters"""
-     path, query = TARGET_RE.match(target).groups()
-     target = _encode_invalid_chars(path, PATH_CHARS)
-     query = _encode_invalid_chars(query, QUERY_CHARS)
-     if query is not None:
-         target += "?" + query
-     return target
-
-
- def parse_url(url):
-     """
-     Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is
-     performed to parse incomplete urls. Fields not provided will be None.
-     This parser is RFC 3986 and RFC 6874 compliant.
-
-     The parser logic and helper functions are based heavily on
-     work done in the ``rfc3986`` module.
-
-     :param str url: URL to parse into a :class:`.Url` namedtuple.
-
-     Partly backwards-compatible with :mod:`urlparse`.
-
-     Example::
-
-         >>> parse_url('http://google.com/mail/')
-         Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
-         >>> parse_url('google.com:80')
-         Url(scheme=None, host='google.com', port=80, path=None, ...)
-         >>> parse_url('/foo?bar')
-         Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
-     """
-     if not url:
-         # Empty
-         return Url()
-
-     source_url = url
-     if not SCHEME_RE.search(url):
-         url = "//" + url
-
-     try:
-         scheme, authority, path, query, fragment = URI_RE.match(url).groups()
-         normalize_uri = scheme is None or scheme.lower() in NORMALIZABLE_SCHEMES
-
-         if scheme:
-             scheme = scheme.lower()
-
-         if authority:
-             auth, _, host_port = authority.rpartition("@")
-             auth = auth or None
-             host, port = _HOST_PORT_RE.match(host_port).groups()
-             if auth and normalize_uri:
-                 auth = _encode_invalid_chars(auth, USERINFO_CHARS)
-             if port == "":
-                 port = None
-         else:
-             auth, host, port = None, None, None
-
-         if port is not None:
-             port = int(port)
-             if not (0 <= port <= 65535):
-                 raise LocationParseError(url)
-
-         host = _normalize_host(host, scheme)
-
-         if normalize_uri and path:
-             path = _remove_path_dot_segments(path)
-             path = _encode_invalid_chars(path, PATH_CHARS)
-         if normalize_uri and query:
-             query = _encode_invalid_chars(query, QUERY_CHARS)
-         if normalize_uri and fragment:
-             fragment = _encode_invalid_chars(fragment, FRAGMENT_CHARS)
-
-     except (ValueError, AttributeError):
-         return six.raise_from(LocationParseError(source_url), None)
-
-     # For the sake of backwards compatibility we put empty
-     # string values for path if there are any defined values
-     # beyond the path in the URL.
-     # TODO: Remove this when we break backwards compatibility.
-     if not path:
-         if query is not None or fragment is not None:
-             path = ""
-         else:
-             path = None
-
-     # Ensure that each part of the URL is a `str` for
-     # backwards compatibility.
-     if isinstance(url, six.text_type):
-         ensure_func = six.ensure_text
-     else:
-         ensure_func = six.ensure_str
-
-     def ensure_type(x):
-         return x if x is None else ensure_func(x)
-
-     return Url(
-         scheme=ensure_type(scheme),
-         auth=ensure_type(auth),
-         host=ensure_type(host),
-         port=port,
-         path=ensure_type(path),
-         query=ensure_type(query),
-         fragment=ensure_type(fragment),
-     )
-
-
- def get_host(url):
-     """
-     Deprecated. Use :func:`parse_url` instead.
-     """
-     p = parse_url(url)
-     return p.scheme or "http", p.hostname, p.port
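
For reference, a minimal usage sketch of the `parse_url` and `get_host` helpers removed above, following the docstring examples; the `urllib3.util.url` import path is an assumption (the deleted copy was a vendored version of that module inside the Space):

    from urllib3.util.url import parse_url, get_host  # assumed upstream location of this module

    parsed = parse_url("http://google.com/mail/?q=1#top")
    print(parsed.scheme, parsed.host, parsed.path, parsed.query, parsed.fragment)
    # -> http google.com /mail/ q=1 top

    # get_host() is the deprecated shorthand: (scheme, host, port), defaulting the scheme to "http".
    print(get_host("google.com:80"))
    # -> ('http', 'google.com', 80)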
 
spaces/BigDL/bigdl_nano_demo/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: BigDL-Nano Demo
- emoji: 🦄
- colorFrom: yellow
- colorTo: green
- sdk: gradio
- sdk_version: 3.0.13
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/BimboAnon/BimboProxy/Dockerfile DELETED
@@ -1,11 +0,0 @@
- FROM node:18-bullseye-slim
- RUN apt-get update && \
-     apt-get install -y git
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
- WORKDIR /app
- RUN npm install
- COPY Dockerfile greeting.md* .env* ./
- RUN npm run build
- EXPOSE 7860
- ENV NODE_ENV=production
- CMD [ "npm", "start" ]
 
spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/equal.h DELETED
@@ -1,23 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // this system inherits equal
- #include <thrust/system/cpp/detail/equal.h>
-
 
spaces/CVPR/Text2Human/Text2Human/ui/ui.py DELETED
@@ -1,313 +0,0 @@
- from PyQt5 import QtCore, QtGui, QtWidgets
- from PyQt5.QtCore import *
- from PyQt5.QtGui import *
- from PyQt5.QtWidgets import *
-
-
- class Ui_Form(object):
-
-     def setupUi(self, Form):
-         Form.setObjectName("Form")
-         Form.resize(1250, 670)
-
-         self.pushButton_2 = QtWidgets.QPushButton(Form)
-         self.pushButton_2.setGeometry(QtCore.QRect(20, 60, 97, 27))
-         self.pushButton_2.setObjectName("pushButton_2")
-
-         self.pushButton_6 = QtWidgets.QPushButton(Form)
-         self.pushButton_6.setGeometry(QtCore.QRect(20, 100, 97, 27))
-         self.pushButton_6.setObjectName("pushButton_6")
-
-         # Generate Parsing
-         self.pushButton_0 = QtWidgets.QPushButton(Form)
-         self.pushButton_0.setGeometry(QtCore.QRect(126, 60, 150, 27))
-         self.pushButton_0.setObjectName("pushButton_0")
-
-         # Generate Human
-         self.pushButton_1 = QtWidgets.QPushButton(Form)
-         self.pushButton_1.setGeometry(QtCore.QRect(126, 100, 150, 27))
-         self.pushButton_1.setObjectName("pushButton_1")
-
-         # shape text box
-         self.label_heading_1 = QtWidgets.QLabel(Form)
-         self.label_heading_1.setText('Describe the shape.')
-         self.label_heading_1.setObjectName("label_heading_1")
-         self.label_heading_1.setGeometry(QtCore.QRect(320, 20, 200, 20))
-
-         self.message_box_1 = QtWidgets.QLineEdit(Form)
-         self.message_box_1.setGeometry(QtCore.QRect(320, 50, 256, 80))
-         self.message_box_1.setObjectName("message_box_1")
-         self.message_box_1.setAlignment(Qt.AlignTop)
-
-         # texture text box
-         self.label_heading_2 = QtWidgets.QLabel(Form)
-         self.label_heading_2.setText('Describe the textures.')
-         self.label_heading_2.setObjectName("label_heading_2")
-         self.label_heading_2.setGeometry(QtCore.QRect(620, 20, 200, 20))
-
-         self.message_box_2 = QtWidgets.QLineEdit(Form)
-         self.message_box_2.setGeometry(QtCore.QRect(620, 50, 256, 80))
-         self.message_box_2.setObjectName("message_box_2")
-         self.message_box_2.setAlignment(Qt.AlignTop)
-
-         # title icon
-         self.title_icon = QtWidgets.QLabel(Form)
-         self.title_icon.setGeometry(QtCore.QRect(30, 10, 200, 50))
-         self.title_icon.setPixmap(
-             QtGui.QPixmap('./ui/icons/icon_title.png').scaledToWidth(200))
-
-         # palette icon
-         self.palette_icon = QtWidgets.QLabel(Form)
-         self.palette_icon.setGeometry(QtCore.QRect(950, 10, 256, 128))
-         self.palette_icon.setPixmap(
-             QtGui.QPixmap('./ui/icons/icon_palette.png').scaledToWidth(256))
-
-         # top
-         self.pushButton_8 = QtWidgets.QPushButton(' top', Form)
-         self.pushButton_8.setGeometry(QtCore.QRect(940, 120, 120, 27))
-         self.pushButton_8.setObjectName("pushButton_8")
-         self.pushButton_8.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_8.setIcon(QIcon('./ui/color_blocks/class_top.png'))
-         # skin
-         self.pushButton_9 = QtWidgets.QPushButton(' skin', Form)
-         self.pushButton_9.setGeometry(QtCore.QRect(940, 165, 120, 27))
-         self.pushButton_9.setObjectName("pushButton_9")
-         self.pushButton_9.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_9.setIcon(QIcon('./ui/color_blocks/class_skin.png'))
-         # outer
-         self.pushButton_10 = QtWidgets.QPushButton(' outer', Form)
-         self.pushButton_10.setGeometry(QtCore.QRect(940, 210, 120, 27))
-         self.pushButton_10.setObjectName("pushButton_10")
-         self.pushButton_10.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_10.setIcon(QIcon('./ui/color_blocks/class_outer.png'))
-         # face
-         self.pushButton_11 = QtWidgets.QPushButton(' face', Form)
-         self.pushButton_11.setGeometry(QtCore.QRect(940, 255, 120, 27))
-         self.pushButton_11.setObjectName("pushButton_11")
-         self.pushButton_11.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_11.setIcon(QIcon('./ui/color_blocks/class_face.png'))
-         # skirt
-         self.pushButton_12 = QtWidgets.QPushButton(' skirt', Form)
-         self.pushButton_12.setGeometry(QtCore.QRect(940, 300, 120, 27))
-         self.pushButton_12.setObjectName("pushButton_12")
-         self.pushButton_12.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_12.setIcon(QIcon('./ui/color_blocks/class_skirt.png'))
-         # hair
-         self.pushButton_13 = QtWidgets.QPushButton(' hair', Form)
-         self.pushButton_13.setGeometry(QtCore.QRect(940, 345, 120, 27))
-         self.pushButton_13.setObjectName("pushButton_13")
-         self.pushButton_13.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_13.setIcon(QIcon('./ui/color_blocks/class_hair.png'))
-         # dress
-         self.pushButton_14 = QtWidgets.QPushButton(' dress', Form)
-         self.pushButton_14.setGeometry(QtCore.QRect(940, 390, 120, 27))
-         self.pushButton_14.setObjectName("pushButton_14")
-         self.pushButton_14.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_14.setIcon(QIcon('./ui/color_blocks/class_dress.png'))
-         # headwear
-         self.pushButton_15 = QtWidgets.QPushButton(' headwear', Form)
-         self.pushButton_15.setGeometry(QtCore.QRect(940, 435, 120, 27))
-         self.pushButton_15.setObjectName("pushButton_15")
-         self.pushButton_15.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_15.setIcon(
-             QIcon('./ui/color_blocks/class_headwear.png'))
-         # pants
-         self.pushButton_16 = QtWidgets.QPushButton(' pants', Form)
-         self.pushButton_16.setGeometry(QtCore.QRect(940, 480, 120, 27))
-         self.pushButton_16.setObjectName("pushButton_16")
-         self.pushButton_16.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_16.setIcon(QIcon('./ui/color_blocks/class_pants.png'))
-         # eyeglasses
-         self.pushButton_17 = QtWidgets.QPushButton(' eyeglass', Form)
-         self.pushButton_17.setGeometry(QtCore.QRect(940, 525, 120, 27))
-         self.pushButton_17.setObjectName("pushButton_17")
-         self.pushButton_17.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_17.setIcon(
-             QIcon('./ui/color_blocks/class_eyeglass.png'))
-         # rompers
-         self.pushButton_18 = QtWidgets.QPushButton(' rompers', Form)
-         self.pushButton_18.setGeometry(QtCore.QRect(940, 570, 120, 27))
-         self.pushButton_18.setObjectName("pushButton_18")
-         self.pushButton_18.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_18.setIcon(
-             QIcon('./ui/color_blocks/class_rompers.png'))
-         # footwear
-         self.pushButton_19 = QtWidgets.QPushButton(' footwear', Form)
-         self.pushButton_19.setGeometry(QtCore.QRect(940, 615, 120, 27))
-         self.pushButton_19.setObjectName("pushButton_19")
-         self.pushButton_19.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_19.setIcon(
-             QIcon('./ui/color_blocks/class_footwear.png'))
-
-         # leggings
-         self.pushButton_20 = QtWidgets.QPushButton(' leggings', Form)
-         self.pushButton_20.setGeometry(QtCore.QRect(1100, 120, 120, 27))
-         self.pushButton_20.setObjectName("pushButton_10")
-         self.pushButton_20.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_20.setIcon(
-             QIcon('./ui/color_blocks/class_leggings.png'))
-
-         # ring
-         self.pushButton_21 = QtWidgets.QPushButton(' ring', Form)
-         self.pushButton_21.setGeometry(QtCore.QRect(1100, 165, 120, 27))
-         self.pushButton_21.setObjectName("pushButton_2`0`")
-         self.pushButton_21.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_21.setIcon(QIcon('./ui/color_blocks/class_ring.png'))
-
-         # belt
-         self.pushButton_22 = QtWidgets.QPushButton(' belt', Form)
-         self.pushButton_22.setGeometry(QtCore.QRect(1100, 210, 120, 27))
-         self.pushButton_22.setObjectName("pushButton_2`0`")
-         self.pushButton_22.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_22.setIcon(QIcon('./ui/color_blocks/class_belt.png'))
-
-         # neckwear
-         self.pushButton_23 = QtWidgets.QPushButton(' neckwear', Form)
-         self.pushButton_23.setGeometry(QtCore.QRect(1100, 255, 120, 27))
-         self.pushButton_23.setObjectName("pushButton_2`0`")
-         self.pushButton_23.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_23.setIcon(
-             QIcon('./ui/color_blocks/class_neckwear.png'))
-
-         # wrist
-         self.pushButton_24 = QtWidgets.QPushButton(' wrist', Form)
-         self.pushButton_24.setGeometry(QtCore.QRect(1100, 300, 120, 27))
-         self.pushButton_24.setObjectName("pushButton_2`0`")
-         self.pushButton_24.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_24.setIcon(QIcon('./ui/color_blocks/class_wrist.png'))
-
-         # socks
-         self.pushButton_25 = QtWidgets.QPushButton(' socks', Form)
-         self.pushButton_25.setGeometry(QtCore.QRect(1100, 345, 120, 27))
-         self.pushButton_25.setObjectName("pushButton_2`0`")
-         self.pushButton_25.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_25.setIcon(QIcon('./ui/color_blocks/class_socks.png'))
-
-         # tie
-         self.pushButton_26 = QtWidgets.QPushButton(' tie', Form)
-         self.pushButton_26.setGeometry(QtCore.QRect(1100, 390, 120, 27))
-         self.pushButton_26.setObjectName("pushButton_2`0`")
-         self.pushButton_26.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_26.setIcon(QIcon('./ui/color_blocks/class_tie.png'))
-
-         # earstuds
-         self.pushButton_27 = QtWidgets.QPushButton(' necklace', Form)
-         self.pushButton_27.setGeometry(QtCore.QRect(1100, 435, 120, 27))
-         self.pushButton_27.setObjectName("pushButton_2`0`")
-         self.pushButton_27.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_27.setIcon(
-             QIcon('./ui/color_blocks/class_necklace.png'))
-
-         # necklace
-         self.pushButton_28 = QtWidgets.QPushButton(' earstuds', Form)
-         self.pushButton_28.setGeometry(QtCore.QRect(1100, 480, 120, 27))
-         self.pushButton_28.setObjectName("pushButton_2`0`")
-         self.pushButton_28.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_28.setIcon(
-             QIcon('./ui/color_blocks/class_earstuds.png'))
-
-         # bag
-         self.pushButton_29 = QtWidgets.QPushButton(' bag', Form)
-         self.pushButton_29.setGeometry(QtCore.QRect(1100, 525, 120, 27))
-         self.pushButton_29.setObjectName("pushButton_2`0`")
-         self.pushButton_29.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_29.setIcon(QIcon('./ui/color_blocks/class_bag.png'))
-
-         # glove
-         self.pushButton_30 = QtWidgets.QPushButton(' glove', Form)
-         self.pushButton_30.setGeometry(QtCore.QRect(1100, 570, 120, 27))
-         self.pushButton_30.setObjectName("pushButton_2`0`")
-         self.pushButton_30.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_30.setIcon(QIcon('./ui/color_blocks/class_glove.png'))
-
-         # background
-         self.pushButton_31 = QtWidgets.QPushButton(' background', Form)
-         self.pushButton_31.setGeometry(QtCore.QRect(1100, 615, 120, 27))
-         self.pushButton_31.setObjectName("pushButton_2`0`")
-         self.pushButton_31.setStyleSheet(
-             "text-align: left; padding-left: 10px;")
-         self.pushButton_31.setIcon(QIcon('./ui/color_blocks/class_bg.png'))
-
-         self.graphicsView = QtWidgets.QGraphicsView(Form)
-         self.graphicsView.setGeometry(QtCore.QRect(20, 140, 256, 512))
-         self.graphicsView.setObjectName("graphicsView")
-         self.graphicsView_2 = QtWidgets.QGraphicsView(Form)
-         self.graphicsView_2.setGeometry(QtCore.QRect(320, 140, 256, 512))
-         self.graphicsView_2.setObjectName("graphicsView_2")
-         self.graphicsView_3 = QtWidgets.QGraphicsView(Form)
-         self.graphicsView_3.setGeometry(QtCore.QRect(620, 140, 256, 512))
-         self.graphicsView_3.setObjectName("graphicsView_3")
-
-         self.retranslateUi(Form)
-         self.pushButton_2.clicked.connect(Form.open_densepose)
-         self.pushButton_6.clicked.connect(Form.save_img)
-         self.pushButton_8.clicked.connect(Form.top_mode)
-         self.pushButton_9.clicked.connect(Form.skin_mode)
-         self.pushButton_10.clicked.connect(Form.outer_mode)
-         self.pushButton_11.clicked.connect(Form.face_mode)
-         self.pushButton_12.clicked.connect(Form.skirt_mode)
-         self.pushButton_13.clicked.connect(Form.hair_mode)
-         self.pushButton_14.clicked.connect(Form.dress_mode)
-         self.pushButton_15.clicked.connect(Form.headwear_mode)
-         self.pushButton_16.clicked.connect(Form.pants_mode)
-         self.pushButton_17.clicked.connect(Form.eyeglass_mode)
-         self.pushButton_18.clicked.connect(Form.rompers_mode)
-         self.pushButton_19.clicked.connect(Form.footwear_mode)
-         self.pushButton_20.clicked.connect(Form.leggings_mode)
-         self.pushButton_21.clicked.connect(Form.ring_mode)
-         self.pushButton_22.clicked.connect(Form.belt_mode)
-         self.pushButton_23.clicked.connect(Form.neckwear_mode)
-         self.pushButton_24.clicked.connect(Form.wrist_mode)
-         self.pushButton_25.clicked.connect(Form.socks_mode)
-         self.pushButton_26.clicked.connect(Form.tie_mode)
-         self.pushButton_27.clicked.connect(Form.earstuds_mode)
-         self.pushButton_28.clicked.connect(Form.necklace_mode)
-         self.pushButton_29.clicked.connect(Form.bag_mode)
-         self.pushButton_30.clicked.connect(Form.glove_mode)
-         self.pushButton_31.clicked.connect(Form.background_mode)
-         self.pushButton_0.clicked.connect(Form.generate_parsing)
-         self.pushButton_1.clicked.connect(Form.generate_human)
-
-         QtCore.QMetaObject.connectSlotsByName(Form)
-
-     def retranslateUi(self, Form):
-         _translate = QtCore.QCoreApplication.translate
-         Form.setWindowTitle(_translate("Form", "Text2Human"))
-         self.pushButton_2.setText(_translate("Form", "Load Pose"))
-         self.pushButton_6.setText(_translate("Form", "Save Image"))
-
-         self.pushButton_0.setText(_translate("Form", "Generate Parsing"))
-         self.pushButton_1.setText(_translate("Form", "Generate Human"))
-
-
- if __name__ == "__main__":
-     import sys
-     app = QtWidgets.QApplication(sys.argv)
-     Form = QtWidgets.QWidget()
-     ui = Ui_Form()
-     ui.setupUi(Form)
-     Form.show()
-     sys.exit(app.exec_())
 
spaces/CVPR/WALT/mmdet/datasets/pipelines/compose.py DELETED
@@ -1,51 +0,0 @@
- import collections
-
- from mmcv.utils import build_from_cfg
-
- from ..builder import PIPELINES
-
-
- @PIPELINES.register_module()
- class Compose(object):
-     """Compose multiple transforms sequentially.
-
-     Args:
-         transforms (Sequence[dict | callable]): Sequence of transform object or
-             config dict to be composed.
-     """
-
-     def __init__(self, transforms):
-         assert isinstance(transforms, collections.abc.Sequence)
-         self.transforms = []
-         for transform in transforms:
-             if isinstance(transform, dict):
-                 transform = build_from_cfg(transform, PIPELINES)
-                 self.transforms.append(transform)
-             elif callable(transform):
-                 self.transforms.append(transform)
-             else:
-                 raise TypeError('transform must be callable or a dict')
-
-     def __call__(self, data):
-         """Call function to apply transforms sequentially.
-
-         Args:
-             data (dict): A result dict contains the data to transform.
-
-         Returns:
-             dict: Transformed data.
-         """
-
-         for t in self.transforms:
-             data = t(data)
-             if data is None:
-                 return None
-         return data
-
-     def __repr__(self):
-         format_string = self.__class__.__name__ + '('
-         for t in self.transforms:
-             format_string += '\n'
-             format_string += f' {t}'
-         format_string += '\n)'
-         return format_string
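
A short usage sketch of the `Compose` pipeline deleted above; the transform configs are illustrative and assume standard mmdet transforms registered in `PIPELINES` (e.g. `LoadImageFromFile`, `Resize`), with the result-dict keys those transforms expect:

    from mmdet.datasets.pipelines import Compose

    # Config dicts are built through the PIPELINES registry; plain callables pass through unchanged.
    pipeline = Compose([
        dict(type='LoadImageFromFile'),
        dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
        lambda results: results,
    ])
    results = pipeline(dict(img_info=dict(filename='demo.jpg'), img_prefix='data/'))
    # Each transform receives and returns the shared results dict; returning None aborts the chain.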
 
spaces/CVPR/lama-example/saicinpainting/training/trainers/__init__.py DELETED
@@ -1,30 +0,0 @@
- import logging
- import torch
- from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule
-
-
- def get_training_model_class(kind):
-     if kind == 'default':
-         return DefaultInpaintingTrainingModule
-
-     raise ValueError(f'Unknown trainer module {kind}')
-
-
- def make_training_model(config):
-     kind = config.training_model.kind
-     kwargs = dict(config.training_model)
-     kwargs.pop('kind')
-     kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp'
-
-     logging.info(f'Make training model {kind}')
-
-     cls = get_training_model_class(kind)
-     return cls(config, **kwargs)
-
-
- def load_checkpoint(train_config, path, map_location='cuda', strict=True):
-     model: torch.nn.Module = make_training_model(train_config)
-     state = torch.load(path, map_location=map_location)
-     model.load_state_dict(state['state_dict'], strict=strict)
-     model.on_load_checkpoint(state)
-     return model
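
A hedged sketch of how `load_checkpoint` from the deleted module is typically called for LaMa-style inference; the YAML and checkpoint paths are placeholders, and the training config is assumed to be an OmegaConf object as elsewhere in this codebase:

    import torch
    from omegaconf import OmegaConf
    from saicinpainting.training.trainers import load_checkpoint

    train_config = OmegaConf.load('big-lama/config.yaml')      # placeholder path
    model = load_checkpoint(train_config, 'big-lama/models/best.ckpt',
                            map_location='cpu', strict=False)
    model.eval()                                               # the returned object is a torch.nn.Module
    with torch.no_grad():
        pass  # run inference with the restored module here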
 
spaces/CofAI/chat/g4f/Provider/Providers/Lockchat.py DELETED
@@ -1,32 +0,0 @@
- import requests
- import os
- import json
- from ...typing import sha256, Dict, get_type_hints
- url = 'http://supertest.lockchat.app'
- model = ['gpt-4', 'gpt-3.5-turbo']
- supports_stream = True
- needs_auth = False
-
- def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
-
-     payload = {
-         "temperature": 0.7,
-         "messages": messages,
-         "model": model,
-         "stream": True,
-     }
-     headers = {
-         "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0",
-     }
-     response = requests.post("http://supertest.lockchat.app/v1/chat/completions",
-                              json=payload, headers=headers, stream=True)
-     for token in response.iter_lines():
-         if b'The model: `gpt-4` does not exist' in token:
-             print('error, retrying...')
-             _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs)
-         if b"content" in token:
-             token = json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content')
-             if token: yield (token)
-
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-     '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
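
A small sketch of how a g4f provider module like the one above is driven directly; the endpoint it targets may no longer be reachable, so treat this purely as an illustration of the streaming generator interface:

    from g4f.Provider.Providers import Lockchat

    messages = [{"role": "user", "content": "Say hello in one word."}]
    for token in Lockchat._create_completion(model='gpt-3.5-turbo',
                                             messages=messages, stream=True):
        print(token, end='', flush=True)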
 
spaces/Cognomen/CatCon-Controlnet-WD-1-5-b2/README.md DELETED
@@ -1,15 +0,0 @@
- ---
- title: CatCon Controlnet WD 1 5 B2
- emoji: 🐱
- colorFrom: gray
- colorTo: green
- sdk: gradio
- sdk_version: 3.28.0
- app_file: app.py
- pinned: false
- license: mit
- tags:
- - jax-diffusers-event
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Cosmo-Hug/Cosmo-Hug-FeverDream/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Cosmo Hug FeverDream
- emoji: 📉
- colorFrom: purple
- colorTo: yellow
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- license: creativeml-openrail-m
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/__init__.py DELETED
File without changes