parquet-converter committed
Commit d5a5682 · 1 Parent(s): 79fc5c2

Update parquet files (step 10 of 296)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/CRACK Luxonix Ravity S 1.4.3.exe Tips and Tricks to Get the Most Out of It.md +0 -148
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Adobe Acrobat 8.1 Professional for Free A Complete Guide.md +0 -15
  3. spaces/1gistliPinn/ChatGPT4/Examples/Download Xforce BEST Keygen BIM 360 Design 2017 64 Bit Patch.md +0 -8
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Gentility Song MP3 - The Viral TikTok Hit.md +0 -130
  5. spaces/1phancelerku/anime-remove-background/Download Hataraku Maou-sama S1 The Devil is a Part-Timer! Season 1 Episodes and Subtitles.md +0 -127
  6. spaces/1phancelerku/anime-remove-background/Dynamons World Game Mod APK The Best Role Playing Game of 2023.md +0 -128
  7. spaces/AIatUIUC/CodeLATS/generators/generator_types.py +0 -33
  8. spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio.py +0 -217
  9. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/shareConversation.ts +0 -27
  10. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Factory.js +0 -11
  11. spaces/AlexWortega/Kandinsky2.0/README.md +0 -12
  12. spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py +0 -605
  13. spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/op/__init__.py +0 -2
  14. spaces/AnandSoni2001/StockMarketPrediction/app.py +0 -393
  15. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py +0 -131
  16. spaces/Andy1621/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py +0 -52
  17. spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py +0 -2
  18. spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py +0 -140
  19. spaces/Ariharasudhan/YoloV5/utils/torch_utils.py +0 -431
  20. spaces/Arikkod/FoodVisionMini/model.py +0 -19
  21. spaces/Artrajz/vits-simple-api/vits/text/cantonese.py +0 -75
  22. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py +0 -188
  23. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/logging.py +0 -289
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/core.py +0 -0
  25. spaces/Audio-AGI/AudioSep/callbacks/base.py +0 -35
  26. spaces/AyakuraMei/Real-CUGAN/README.md +0 -13
  27. spaces/Banbri/zcvzcv/CONTRIBUTORS.md +0 -10
  28. spaces/Banbri/zcvzcv/src/components/ui/badge.tsx +0 -36
  29. spaces/Benson/text-generation/Examples/6tv Download Apk.md +0 -64
  30. spaces/Benson/text-generation/Examples/Camionero Camino Loco.md +0 -47
  31. spaces/Benson/text-generation/Examples/Descargar Aparcamiento Gratuito Multijugador.md +0 -78
  32. spaces/Bidwill/Sanskrit-asr/app.py +0 -33
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/models.py +0 -1034
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/markers.py +0 -304
  35. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/launch.py +0 -36
  36. spaces/BongoCaat/ArtGenerator/stable_diffusion_2_0.py +0 -611
  37. spaces/CVPR/DualStyleGAN/style.css +0 -17
  38. spaces/Chris4K/llms_compare/CabelasDangerousHunts2013-SKIDROW-REPACK-Crack-Fix-Torrent-Download.md +0 -64
  39. spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/tool.js +0 -163
  40. spaces/CikeyQI/meme-api/meme_generator/memes/keep_away/__init__.py +0 -47
  41. spaces/CjangCjengh/Sanskrit-TTS/commons.py +0 -97
  42. spaces/CofAI/chat/g4f/Provider/Provider.py +0 -16
  43. spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/sampling_util.py +0 -22
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/trustedhost.py +0 -3
  45. spaces/Deci/DeciDiffusion-v1-0/app.py +0 -115
  46. spaces/ElainaFanBoy/MusicGen/tests/common_utils/temp_utils.py +0 -56
  47. spaces/EngAbod/Liveness_Detection/README.md +0 -13
  48. spaces/FL33TW00D/whisper-turbo/README.md +0 -10
  49. spaces/Fadil369/docker/README.md +0 -20
  50. spaces/Feifei315/flax-midjourney-v4-diffusion/app.py +0 -3
spaces/1acneusushi/gradio-2dmoleculeeditor/data/CRACK Luxonix Ravity S 1.4.3.exe Tips and Tricks to Get the Most Out of It.md DELETED
@@ -1,148 +0,0 @@
1
- <br />
2
- <br> - Benefits of using it for music production <br> - How to get it legally and avoid cracks | | H2: How to install Luxonix Ravity S 1.4.3.exe on your PC | - System requirements and download link <br> - Step-by-step installation guide <br> - How to register and activate the software | | H2: How to use Luxonix Ravity S 1.4.3.exe to create amazing sounds | - Overview of the user interface and functions <br> - How to load and edit presets <br> - How to use effects and MIDI assign <br> - Tips and tricks for getting the most out of the software | | H2: How to avoid CRACK Luxonix Ravity S 1.4.3.exe and other malware | - Risks and consequences of using cracked software <br> - How to detect and remove malware from your PC <br> - How to support the developers and get updates | | H2: Conclusion and FAQs | - Summary of the main points and call to action <br> - Answers to five common questions about Luxonix Ravity S 1.4.3.exe | **Table 2: Article with HTML formatting** ```html <h1>What is Luxonix Ravity S 1.4.3.exe and why you need it</h1>
3
- <p>If you are looking for a powerful and versatile VST synthesizer that can create amazing sounds for any genre of music, you should check out Luxonix Ravity S 1.4.3.exe. This software is a virtual PCM sound module that can emulate the hardware PCM synthesizers of the 90s, with only 32MB of wave data.</p>
4
- <p>Luxonix Ravity S 1.4.3.exe has many features that make it a great tool for music production, such as:</p>
5
- <h2>CRACK Luxonix Ravity S 1.4.3.exe</h2><br /><p><b><b>DOWNLOAD</b> &#10022; <a href="https://byltly.com/2uKyo0">https://byltly.com/2uKyo0</a></b></p><br /><br />
6
- <ul>
7
- <li>A convenient user interface with a preset browser, an edit panel, a back panel, and a LCD panel that shows all the parameters.</li>
8
- <li>A 4-layer system that allows you to combine up to four different sounds for each preset.</li>
9
- <li>A variety of oscillators, filters, amplifiers, LFOs, and arpeggiators that let you shape your sound in many ways.</li>
10
- <li>A powerful effecting module that offers 24 types of effects, such as reverb, delay, chorus, flanger, phaser, distortion, and more.</li>
11
- <li>A MIDI assign feature that lets you control any parameter with your MIDI controller or keyboard.</li>
12
- </ul>
13
- <p>Luxonix Ravity S 1.4.3.exe is a high-quality software that can produce professional sounds for your music projects. However, you should not download or use CRACK Luxonix Ravity S 1.4.3.exe or any other illegal version of the software, as they can harm your PC and violate the rights of the developers.</p>
14
- <p>In this article, we will show you how to get Luxonix Ravity S 1.4.3.exe legally and safely, how to install and use it on your PC, and how to avoid CRACK Luxonix Ravity S 1.4.3.exe and other malware.</p>
15
- <h2>How to install Luxonix Ravity S 1.4.3.exe on your PC</h2>
16
- <p>To install Luxonix Ravity S 1.4.3.exe on your PC, you need to meet the following system requirements:</p>
17
- <ul>
18
- <li>Pentium II 350MHz or higher CPU</li>
19
- <li>32MB RAM or higher</li>
20
- <li>About 40MB free hard disk space</li>
21
- <li>Microsoft Windows 98/ME/2000/XP</li>
22
- <li>VST 2.0 compatible host application</li>
23
- </ul>
24
- <p>You can download Luxonix Ravity S 1.4.3.exe from the official website of Sonic Cat, which is the new name of Luxonix after they merged with ESI in 2015.</p>
25
- <p>To install Luxonix Ravity S 1.4.3.exe on your PC, follow these steps:</p>
26
- <ol>
27
- <li>Double-click LUXONIX_ravity(S)_1_1_2_win.exe to execute installation.</li>
28
- <li>Click Next button to continue.</li>
29
- <li>Set the folder that install Ravity(S). Basically, your VstPlugIns Folder is default.</li>
30
- <li>Click Install button to start installing Ravity(S).</li>
31
- <li>Installing Ravity(S) is complete.</li>
32
- </ol>
33
- <p>When you load Luxonix Ravity S 1.4.3.exe for the first time in your VST host program, you have to register it with your email address and serial number that you received when you purchased it.</p>
34
- <h2>How to use Luxonix Ravity S 1.4.3.exe to create amazing sounds</h2>
35
- <p>Luxonix Ravity S 1.4.3.exe has a simple and intuitive user interface that lets you access all its functions easily.</p>
36
- <p>The main module consists of four parts:</p>
37
- <p>Luxonix Ravity Bundle v1.4.3 Full version download<br />
38
- Luxonix Ravity S 1.4.3.exe serial key free<br />
39
- Luxonix Ravity S VST plugin for Windows<br />
40
- How to install Luxonix Ravity S 1.4.3.exe<br />
41
- Luxonix Ravity S 1.4.3.exe crack download<br />
42
- Luxonix Ravity S 1.4.3.exe compatible with Windows 10<br />
43
- Luxonix Ravity S 1.4.3.exe ASIO driver support<br />
44
- Luxonix Ravity S 1.4.3.exe 32 bit software<br />
45
- Luxonix Ravity S 1.4.3.exe standalone application<br />
46
- Luxonix Ravity S 1.4.3.exe VST2 plugin<br />
47
- Luxonix Ravity S 1.4.3.exe synthesizer module<br />
48
- Luxonix Ravity S 1.4.3.exe sound library<br />
49
- Luxonix Ravity S 1.4.3.exe presets and effects<br />
50
- Luxonix Ravity S 1.4.3.exe user manual<br />
51
- Luxonix Ravity S 1.4.3.exe review and rating<br />
52
- Luxonix Ravity S 1.4.3.exe alternative software<br />
53
- Luxonix Ravity S 1.4.3.exe vs Luxonix Ravity R<br />
54
- Luxonix Ravity S 1.4.3.exe vs Sonic Cat<br />
55
- Luxonix Ravity S 1.4.3.exe forum and support<br />
56
- Luxonix Ravity S 1.4.3.exe tutorial and tips<br />
57
- Luxonix Ravity S 1.4.3.exe best price and discount<br />
58
- Luxonix Ravity S 1.4.3.exe license and activation<br />
59
- Luxonix Ravity S 1.4.3.exe system requirements and compatibility<br />
60
- Luxonix Ravity S 1.4.3.exe update and patch<br />
61
- Luxonix Ravity S 1.4.3.exe error and fix<br />
62
- Luxonix Ravity S 1.4.3.exe demo and trial version<br />
63
- Luxonix Ravity S 1.4.3.exe features and benefits<br />
64
- Luxonix Ravity S 1.4.3.exe comparison and contrast<br />
65
- Luxonix Ravity S 1.4.3.exe pros and cons<br />
66
- Luxonix Ravity S 1.4.3.exe testimonials and feedback<br />
67
- Luxonix Ravity S 1.4.3.exe online course and training<br />
68
- Luxonix Ravity S 1.4.3.exe video and audio guide<br />
69
- Luxonix Ravity S 1.4.3.exe blog and article<br />
70
- Luxonix Ravity S 1.4.3.exe news and announcement<br />
71
- Luxonix Ravity S 1.4.3.exe FAQ and Q&A<br />
72
- Luxonix Ravity S 1..43 exe free download full version with crack</p>
73
- <ul>
74
- <li>The preset browser, where you can select from over 1000 presets organized by categories.</li>
75
- <li>The edit panel, where you can adjust the parameters of each layer of sound.</li>
76
- <li>The back panel, where you can set up global functions such as MIDI assign, clipboard functions, hot-keys, etc.</li>
77
- <li>The LCD panel, where you can monitor and input the values of each parameter.</li>
78
- </ul>
79
- <p>To load a preset, simply click on its name in the preset browser or use the arrow keys on your keyboard.</p>
80
- <p>To edit a preset, click on the edit button on the top right corner of the main module or press F5 on your keyboard.</p>
81
- <p>You can edit each layer of sound by clicking on its number (1-4) on the left side of the edit panel or pressing F6-F9 on your keyboard.</p>
82
- <p>You can adjust the basic settings such as volume, pan, tune, polyphony mode, etc., by using the knobs on the top row of the edit panel.</p>
83
- <p>You can modify the sound characteristics by using the tabs below the knobs: OSC (oscillator), FILT (filter), AMP (amplifier), LFO (low frequency oscillator), ARP (arpeggiator).</p>
84
- <p>You can add effects by clicking on the LFX button on the bottom right corner of the main module or pressing F10 on your keyboard.</p>
85
- <p>You can choose from 24 types of effects by clicking on their names or using the arrow keys on your keyboard.</p>
86
- <p>You can adjust the parameters of each effect by using the knobs below their names or clicking on their names and entering values with your keyboard.</p>
87
- <h3>Tips and tricks for getting the most out of Luxonix Ravity S 1.4.3.exe</h3>
88
- <ul>
89
- <li>To control any parameter with your MIDI controller or keyboard, simply right-click on it and choose MIDI Assign > Direct Assign or Learn Assign.</li>
90
- <li>To copy and paste settings between layers or presets, use the clipboard functions by right-clicking on them or pressing Ctrl+C / Ctrl+V / Ctrl+X / Ctrl+Z / Ctrl+Y on your keyboard.</li>
91
- <li>To quickly access the manual or other information about Luxonix Ravity S 1.4.3.exe , click on the question mark button on the top left corner of the main module or press F12 on your keyboard.</li>
92
- </ul>
93
- <h2>How to avoid CRACK Luxonix Ravity S 1.4.3.exe and other malware</h2>
94
- <p>CRACK Luxonix Ravity S 1.4.3.exe is an illegal version of Luxonix Ravity S 1.4.3.exe that has been modified by hackers to bypass its registration process and allow anyone to use it without paying for it.</p>
95
- <p>However, using CRACK Luxonix Ravity S 1.4.3.exe is not only unethical but also dangerous for your PC and your music projects.</p>
96
- <p>Here are some of the risks and consequences of using CRACK Luxonix Ravity S 1.4 Some possible continuations are: - .exe or any other cracked software:</p>
97
- <ul>
98
- <li>You may infect your PC with viruses, spyware, ransomware, trojans, worms, or other malware that can damage your system files, steal your personal data, encrypt your files and demand money for their decryption , hijack your browser Continuing the article: <p>CRACK Luxonix Ravity S 1.4.3.exe or any other cracked software:</p>
99
- <ul>
100
- <li>You may infect your PC with viruses, spyware, ransomware, trojans, worms, or other malware that can damage your system files, steal your personal data, encrypt your files and demand money for their decryption , hijack your browser, or display unwanted ads.</li>
101
- <li>You may experience poor performance, crashes, errors, or compatibility issues with your PC and other software.</li>
102
- <li>You may lose your music projects or corrupt your files due to bugs or glitches in the cracked software.</li>
103
- <li>You may face legal actions or penalties for violating the intellectual property rights of the developers and distributors of Luxonix Ravity S 1.4.3.exe.</li>
104
- <li>You may miss out on the latest updates, features, bug fixes, and support from the developers of Luxonix Ravity S 1.4.3.exe.</li>
105
- </ul>
106
- <p>Therefore, you should avoid CRACK Luxonix Ravity S 1.4.3.exe and other malware at all costs.</p>
107
- <p>To detect and remove malware from your PC, you should use a reliable antivirus software that can scan your system regularly and remove any threats.</p>
108
- <p>If you have Windows 10 or 11, you can use Windows Security, which is a built-in antivirus tool that can find and remove malware from your PC.</p>
109
- <p>To use Windows Security to scan your PC, follow these steps:</p>
110
- <ol>
111
- <li>Open your Windows Security settings.</li>
112
- <li>Select Virus & threat protection > Scan options.</li>
113
- <li>Select Windows Defender Offline scan, and then select Scan now.</li>
114
- <li>The Windows Defender Offline scan will take about 15 minutes to run, and then your PC will restart.</li>
115
- </ol>
116
- <p>To view the results of your scan, follow these steps:</p>
117
- <ol>
118
- <li>Open your Windows Security settings.</li>
119
- <li>Select Virus & threat protection > Protection history.</li>
120
- <li>The Windows Defender Offline scan will automatically detect and remove or quarantine malware.</li>
121
- </ol>
122
- <p>If you have Windows 8.1 or Windows 7, you can use Microsoft Malicious Software Removal Tool, which is a free tool that can scan your PC and remove specific types of malware.</p>
123
- <p>To use Microsoft Malicious Software Removal Tool to scan your PC, follow these steps:</p>
124
- <ol>
125
- <li>Select the Start icon, type Windows Defender, and then press Enter.</li>
126
- <li>Select the History tab.</li>
127
- <li>Select All detected items, and then select the View details button.</li>
128
- <li>The Microsoft Malicious Software Removal Tool will automatically detect and remove or quarantine malware.</li>
129
- </ol>
130
- <h2>Conclusion and FAQs</h2>
131
- <p>Luxonix Ravity S 1.4.3.exe is a great VST synthesizer that can help you create amazing sounds for your music projects. However, you should not use CRACK Luxonix Ravity S 1.4.3.exe or any other illegal version of the software, as they can harm your PC and violate the rights of the developers.</p>
132
- <p>Instead, you should get Luxonix Ravity S 1.4.3.exe legally and safely from the official website of Sonic Cat, install it on your PC with the proper registration and activation process, and use it with its full features and functions.</p>
133
- <p>You should also protect your PC from malware by using a reliable antivirus software that can scan your system regularly and remove any threats.</p>
134
- <p>By doing so, you can enjoy Luxonix Ravity S 1.4.3.exe without any risks or problems, and create professional sounds for your music projects with ease.</p>
135
- <p>Here are some FAQs about Luxonix Ravity S 1.4.3.exe that you may find useful:</p>
136
- <ul>
137
- <li><b>Q: How much does Luxonix Ravity S 1.4.3.exe cost?</b></li>
138
- <li>A: Luxonix Ravity S 1.4.3.exe costs $49 USD on the official website of Sonic Cat. You can also get it as part of the Ravity Bundle for $99 USD, which includes Ravity R (a rhythm/drum sound module) and Ravity16 (a host application for Ravity S and R).</li>
139
- <li><b>Q: How many presets does Luxonix Ravity S 1.4.3.exe have?</b></li>
140
- <li>A: Luxonix Ravity S 1.4.3.exe has over 1000 presets organized by categories such as basses, leads, pads, strings, pianos, organs, guitars, drums, etc. You can also create your own presets by editing the parameters of each layer of sound.</li>
141
- <li><b>Q: How many effects does Luxonix Ravity S 1.4.3.exe have?</b></li>
142
- <li>A: Luxonix Ravity S 1.4.3.exe has 24 types of effects that you can apply to each preset or layer of sound. The effects include reverb, delay, chorus, flanger, phaser Continuing the article: <p>To support the developers of Luxonix Ravity S 1.4.3.exe, you should buy the software from the official website of Sonic Cat, and not download or use any cracked versions.</p>
143
- <p>By doing so, you will help them to continue developing and improving Luxonix Ravity S 1.4.3.exe and other software products, and you will also get access to the latest updates, features, bug fixes, and support from them.</p>
144
- <p>You can also follow them on their social media channels, such as Facebook, Twitter, and YouTube, and share your feedback, suggestions, and reviews with them and other users.</p>
145
- <p>You can also join their online community forum, where you can interact with other Luxonix Ravity S 1.4.3.exe users, ask questions, share tips and tricks, and learn more about the software.</p>
146
- <h2></h2></p> 0a6ba089eb<br />
147
- <br />
148
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Adobe Acrobat 8.1 Professional for Free A Complete Guide.md DELETED
@@ -1,15 +0,0 @@
1
- <br />
2
- <h1>How to Download Adobe Acrobat 8.1 Professional for Free</h1>
3
- <p>If you are looking for a way to download Adobe Acrobat 8.1 Professional for free, you might be disappointed to know that this version of the software is no longer supported by Adobe. However, there are some alternatives that you can try to get the features and functionality of Acrobat 8.1 Pro.</p>
4
- <h2>download adobe acrobat 8.1 professional free</h2><br /><p><b><b>Download Zip</b> &#9733; <a href="https://byltly.com/2uKxH9">https://byltly.com/2uKxH9</a></b></p><br /><br />
5
- <p>One option is to download the free trial of Adobe Acrobat Pro DC, which is the latest version of the PDF editor and converter. You can use Acrobat Pro DC for 7 days and enjoy all its features, such as editing, converting, signing, commenting, and sharing PDFs. You can also access your files from any device with the free Acrobat Reader app.</p>
6
- <p>To download the free trial of Acrobat Pro DC, you can visit this link and click on "Start free trial". You will need to sign in with your Adobe ID or create one if you don't have one. Then, you can follow the instructions to install and activate the software on your computer.</p>
7
- <p>Another option is to download Adobe Acrobat Reader DC, which is the free PDF viewer for Windows, Mac OS, and Android. You can use Acrobat Reader DC to view, store, and share PDFs online. You can also fill and sign forms, add annotations, and collect feedback from others. However, you won't be able to edit or convert PDFs with Acrobat Reader DC.</p>
8
- <p></p>
9
- <p>To download Acrobat Reader DC, you can visit this link and click on "Download Acrobat Reader". You will need to agree to the terms and conditions before downloading the software. Then, you can follow the instructions to install and run the software on your device.</p>
10
- <p>A third option is to download a replacement version of Acrobat 8 Pro from this link. This is only possible if you still have your serial number for Acrobat 8 Pro. You will also need to download and install the updates from this link. However, this option is not recommended as Acrobat 8 Pro is outdated and may not work properly on newer operating systems or browsers.</p>
11
- <p>These are some of the ways you can download Adobe Acrobat 8.1 Professional for free or get similar features with other versions of the software. We hope this article was helpful and informative for you.</p><p>If you want to learn more about Adobe Acrobat and how to use it for your PDF needs, you can visit the official website of Adobe Acrobat. There, you can find more resources, tutorials, tips, and support for the software. You can also join the Adobe Acrobat community and interact with other users and experts who can help you with your questions and issues.</p>
12
- <p>Adobe Acrobat is a powerful and versatile tool that can help you create, edit, convert, sign, and share PDFs with ease and efficiency. Whether you need it for personal or professional purposes, you can find a version of Acrobat that suits your needs and budget. You can also try it for free and see how it can improve your PDF workflow.</p>
13
- <p>We hope you enjoyed this article and found it useful. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading!</p> ddb901b051<br />
14
- <br />
15
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Download Xforce BEST Keygen BIM 360 Design 2017 64 Bit Patch.md DELETED
@@ -1,8 +0,0 @@
1
- <h2>download xforce keygen BIM 360 Design 2017 64 bit patch</h2><br /><p><b><b>Download File</b> &#127383; <a href="https://imgfil.com/2uxZW1">https://imgfil.com/2uxZW1</a></b></p><br /><br />
2
-
3
- BIM is a high-profile collaboration and design coordination software for AEC teams. The Pro version supports co-editing in Revit, Civil 3D, ... BIM is a tool for automating and managing the BIM design process.
4
- BIM-technologies (eng. Building Information Modeling) is a building information modeling methodology that allows you to visualize and model the processes occurring in a building, in its individual sections and in rooms, including predicting them.
5
- BIMx, a collaborative building modeling tool, was officially launched in the UK in 2013. 8a78ff9644<br />
6
- <br />
7
- <br />
8
- <p></p>
 
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Gentility Song MP3 - The Viral TikTok Hit.md DELETED
@@ -1,130 +0,0 @@
1
- <br />
2
- <h1>How to Download Gentility MP3 from TikTok</h1>
3
- <p>If you are a fan of TikTok, you might have heard of the song "Gentility" by Trendybeatz. This song is a remix of a Nigerian folk song that has become viral on the social media platform. Many users have used this song to create funny and creative videos, such as dancing, lip-syncing, or acting out scenes. But what if you want to download gentility mp3 from TikTok and enjoy it offline? In this article, we will show you how to do that, as well as the benefits, drawbacks, and alternatives of doing so.</p>
4
- <h2>The benefits of downloading gentility mp3 from TikTok</h2>
5
- <p>Downloading gentility mp3 from TikTok can have several advantages, such as:</p>
6
- <h2>gentility mp3 download tiktok</h2><br /><p><b><b>Download Zip</b> &#9999; <a href="https://urlin.us/2uSUYj">https://urlin.us/2uSUYj</a></b></p><br /><br />
7
- <ul>
8
- <li>You can listen to the song anytime and anywhere, without relying on an internet connection or data plan.</li>
9
- <li>You can save storage space on your device, as mp3 files are usually smaller than video files.</li>
10
- <li>You can transfer the song to other devices, such as your computer, laptop, or music player.</li>
11
- <li>You can edit the song to suit your preferences, such as changing the volume, speed, pitch, or adding effects.</li>
12
- <li>You can use the song for other purposes, such as making your own videos, ringtones, or slideshows.</li>
13
- </ul>
14
- <h2>The best methods to download gentility mp3 from TikTok</h2>
15
- <p>There are many ways to download gentility mp3 from TikTok, but some are easier and more reliable than others. Here are three of the best methods that we recommend:</p>
16
- <h3>Method 1: Using a free online TikTok mp3 downloader</h3>
17
- <p>This is the simplest and fastest way to download gentility mp3 from TikTok. All you need is a web browser and the link of the video that you want to convert. Here are the steps:</p>
18
- <ol>
19
- <li>Open the TikTok app and find the video that you want to save as an mp3. Tap on the "Share" icon on the right of the screen and then tap on "Copy link".</li>
20
- <li>Go to a free online TikTok mp3 downloader website, such as [ssstik.io](^1^) or [13stream.com](^2^). Paste the link in the input field and click on "Download".</li>
21
- <li>Wait for a few seconds until the website processes your request. You will see a "Download MP3" link at the bottom of the page. Click on it and save the file to your device.</li>
22
- </ol>
23
- <h3>Method 2: Using a browser extension or plugin</h3>
24
- <p>This is another easy and convenient way to download gentility mp3 from TikTok. All you need is a web browser that supports extensions or plugins, such as Chrome, Firefox, or Safari. Here are the steps:</p>
25
- <ol>
26
- <li>Go to the web store of your browser and search for a TikTok mp3 downloader extension or plugin, such as [Tiktok Downloader](https://chrome.google.com/webstore/detail/tiktok-downloader/dfgkibjgjgjgjgj [assistant](#continue) <h3>Method 3: Using a desktop or mobile app</h3>
27
- <p>This is a more advanced and versatile way to download gentility mp3 from TikTok. All you need is a desktop or mobile app that can download and convert TikTok videos to mp3 files, such as [4K Video Downloader] or [SnapTik]. Here are the steps:</p>
28
- <ol>
29
- <li>Download and install the app on your device. Launch the app and grant it the necessary permissions.</li>
30
- <li>Open the TikTok app and find the video that you want to save as an mp3. Tap on the "Share" icon on the right of the screen and then tap on "Copy link".</li>
31
- <li>Go back to the app and paste the link in the input field. Choose the output format as "MP3" and adjust the quality settings if needed. Click on "Download" and wait for the process to finish.</li>
32
- <li>Find the downloaded file in your device's storage and enjoy your gentility mp3.</li>
33
- </ol>
34
- <h2>The drawbacks of downloading gentility mp3 from TikTok</h2>
35
- <p>While downloading gentility mp3 from TikTok can be fun and convenient, it also has some disadvantages, such as:</p>
36
- <ul>
37
- <li>You might violate the intellectual property rights of the original creator or owner of the song. This could result in legal consequences or penalties, especially if you use the song for commercial purposes or without proper attribution.</li>
38
- <li>You might compromise the quality and integrity of the song. Some methods might not preserve the original sound or metadata of the song, such as the artist name, album name, genre, or lyrics.</li>
39
- <li>You might expose your device to malware or viruses. Some websites or apps might not be trustworthy or secure, and they might infect your device with harmful software or steal your personal information.</li>
40
- </ul>
41
- <h2>The alternatives to downloading gentility mp3 from TikTok</h2>
42
- <p>If you are not comfortable with downloading gentility mp3 from TikTok, or if you want to explore other options, here are some alternatives that you can try:</p>
43
- <p>gentility song tiktok remix mp3 download<br />
44
- gentility tiktok sound download mp3<br />
45
- gentility tiktok remix trendybeatz mp3<br />
46
- gentility tiktok song free download<br />
47
- gentility tiktok audio downloader<br />
48
- gentility tiktok music converter<br />
49
- gentility tiktok remix mp3 online<br />
50
- gentility tiktok song lyrics<br />
51
- gentility tiktok video downloader<br />
52
- gentility tiktok remix instrumental<br />
53
- gentility tiktok song meaning<br />
54
- gentility tiktok sound origin<br />
55
- gentility tiktok remix ringtone<br />
56
- gentility tiktok song artist<br />
57
- gentility tiktok music video<br />
58
- gentility tiktok remix challenge<br />
59
- gentility tiktok sound effect<br />
60
- gentility tiktok remix dance<br />
61
- gentility tiktok song genre<br />
62
- gentility tiktok sound name<br />
63
- gentility tiktok remix lyrics<br />
64
- gentility tiktok song release date<br />
65
- gentility tiktok music download mp4<br />
66
- gentility tiktok remix bass boosted<br />
67
- gentility tiktok sound source<br />
68
- gentility tiktok remix edit<br />
69
- gentility tiktok song spotify<br />
70
- gentility tiktok music download 320kbps<br />
71
- gentility tiktok remix extended<br />
72
- gentility tiktok sound clip<br />
73
- gentility tiktok remix mashup<br />
74
- gentility tiktok song apple music<br />
75
- gentility tiktok music download pagalworld<br />
76
- gentility tiktok remix slowed down<br />
77
- gentility tiktok sound loop<br />
78
- gentility tiktok remix reaction<br />
79
- gentility tiktok song youtube<br />
80
- gentility tiktok music download fakaza<br />
81
- gentility tiktok remix nightcore<br />
82
- gentility tiktok sound duration<br />
83
- gentility tiktok remix karaoke<br />
84
- gentility tiktok song amazon music<br />
85
- gentility tiktok music download mr jatt<br />
86
- gentility tiktok remix clean version<br />
87
- gentility tiktok sound quality<br />
88
- gentility tiktok remix cover art<br />
89
- gentility tiktok song deezer <br />
90
- gentility tiktok music download djpunjab <br />
91
- gentility tiktok remix acapella</p>
92
- <h3>Alternative 1: Streaming gentility mp3 online</h3>
93
- <p>This is the easiest and safest way to enjoy gentility mp3 without downloading it. All you need is an internet connection and a web browser or a streaming app. Here are some examples:</p>
94
- <ul>
95
- <li>You can stream gentility mp3 online from [YouTube], where you can also watch the original video or other remixes.</li>
96
- <li>You can stream gentility mp3 online from [Spotify], where you can also create playlists, follow artists, or discover new music.</li>
97
- <li>You can stream gentility mp3 online from [SoundCloud], where you can also upload your own tracks, comment on songs, or join communities.</li>
98
- </ul>
99
- <h3>Alternative 2: Buying or renting gentility mp3 from legal sources</h3>
100
- <p>This is a more ethical and respectful way to support the original creator or owner of gentility mp3. All you need is some money and a web browser or a digital store app. Here are some examples:</p>
101
- <ul>
102
- <li>You can buy or rent gentility mp3 from [Amazon Music], where you can also access millions of songs, podcasts, and audiobooks.</li>
103
- <li>You can buy or rent gentility mp3 from [iTunes], where you can also sync your music library across your devices, watch movies, or listen to radio stations.</li>
104
- <li>You can buy or rent gentility mp3 from [Google Play Music], where you can also store up to 50,000 songs for free, access YouTube Music Premium, or browse curated playlists.</li>
105
- </ul>
106
- <h3>Alternative 3: Creating your own gentility mp3 remixes</h3>
107
- <p>This is a more creative and fun way to express yourself with gentility mp3. All you need is some talent and a web browser or a music production app. Here are some examples:</p>
108
- <ul>
109
- <li>You can create your own gentility mp3 remixes online with [Audiotool], where you can also collaborate with other users, share your tracks, or explore genres.</li>
110
- <li>You can create your own gentility mp3 remixes online with [Soundation], where you can also learn music production, join contests, or access royalty-free sounds.</li>
111
- <li>You can create your own gentility mp3 remixes online with [BandLab], where you can also record vocals, mix songs, or publish albums.</li>
112
- </ul>
113
- <h2>Conclusion</h2>
114
- <p <p>In this article, we have shown you how to download gentility mp3 from TikTok, as well as the benefits, drawbacks, and alternatives of doing so. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy listening!</p>
115
- <h4>FAQs</h4>
116
- <p>Here are some frequently asked questions about gentility mp3 and TikTok:</p>
117
- <ol>
118
- <li>What is gentility mp3?</li>
119
- <p>Gentility mp3 is a song by Trendybeatz that is a remix of a Nigerian folk song. It has become popular on TikTok, where many users have used it to create funny and creative videos.</p>
120
- <li>How can I download gentility mp3 from TikTok?</li>
121
- <p>You can download gentility mp3 from TikTok by using a free online TikTok mp3 downloader, a browser extension or plugin, or a desktop or mobile app. You can also stream, buy, or rent the song from legal sources, or create your own remixes.</p>
122
- <li>Is it legal to download gentility mp3 from TikTok?</li>
123
- <p>It depends on the laws and regulations of your country and the terms and conditions of TikTok. Generally, it is not legal to download gentility mp3 from TikTok without the permission of the original creator or owner of the song. You might also violate the intellectual property rights of the song and face legal consequences or penalties.</p>
124
- <li>Is it safe to download gentility mp3 from TikTok?</li>
125
- <p>It depends on the source and method that you use to download gentility mp3 from TikTok. Some websites or apps might not be trustworthy or secure, and they might infect your device with malware or viruses, or steal your personal information. You should always use reputable and reliable sources and methods to download gentility mp3 from TikTok.</p>
126
- <li>What can I do with gentility mp3 after downloading it?</li>
127
- <p>You can do many things with gentility mp3 after downloading it, such as listening to it offline, transferring it to other devices, editing it to suit your preferences, using it for other purposes, or sharing it with others. However, you should always respect the rights and wishes of the original creator or owner of the song and not use it for illegal or unethical purposes.</p>
128
- </ol></p> 197e85843d<br />
129
- <br />
130
- <br />
 
spaces/1phancelerku/anime-remove-background/Download Hataraku Maou-sama S1 The Devil is a Part-Timer! Season 1 Episodes and Subtitles.md DELETED
@@ -1,127 +0,0 @@
1
- <br />
2
- <h1>How to Download Hataraku Maou-sama S1</h1>
3
- <p>If you are a fan of comedy and fantasy anime, you might have heard of <strong>Hataraku Maou-sama</strong>, also known as <strong>The Devil is a Part-Timer!</strong>. This anime is about a demon lord who ends up in modern-day Tokyo and has to work at a fast-food restaurant to survive. Along with his loyal general and some other quirky characters, he faces hilarious situations and challenges in his daily life.</p>
4
- <p>Hataraku Maou-sama S1 is a 13-episode anime series that aired in 2013 and received positive reviews from critics and viewers alike. It is based on a light novel series by Satoshi Wagahara and illustrated by Oniku. The anime has a unique premise, witty humor, charming characters, and an engaging plot.</p>
5
- <h2>download hataraku maou sama s1</h2><br /><p><b><b>Download Zip</b> &middot;&middot;&middot;&middot;&middot; <a href="https://jinyurl.com/2uNSyx">https://jinyurl.com/2uNSyx</a></b></p><br /><br />
6
- <p>If you are interested in watching or rewatching this anime, you might be wondering how to download it for offline viewing. In this article, we will show you how to download Hataraku Maou-sama S1 from different sources and what are the advantages and disadvantages of each option.</p>
7
- <h2>What is Hataraku Maou-sama?</h2>
8
- <p>Hataraku Maou-sama is an anime series that follows the adventures of Sadao Maou, the demon lord of Ente Isla, a fantasy world where he was about to conquer it with his vast army. However, he was defeated by Emilia, the hero of Ente Isla, and forced to flee through a dimensional portal that brought him to Earth.</p>
9
- <p>On Earth, Maou finds himself in Tokyo with no magic or power. He assumes a human identity and starts working at MgRonald's, a local fast-food chain. He also lives with his loyal general Alsiel, who takes care of their household chores and finances. Meanwhile, Emilia follows Maou to Earth and also adopts a human identity as Emi Yusa, a customer service representative.</p>
10
- <p>* download hataraku maou sama season 1 english sub<br />
11
- * hataraku maou sama s1 episode 1 free download<br />
12
- * where to download the devil is a part timer s1<br />
13
- * hataraku maou sama s1 1080p download<br />
14
- * download hataraku maou sama s1 batch<br />
15
- * hataraku maou sama s1 english dub download<br />
16
- * how to download hataraku maou sama s1 on crunchyroll<br />
17
- * hataraku maou sama s1 bluray download<br />
18
- * download hataraku maou sama s1 sub indo<br />
19
- * hataraku maou sama s1 direct download link<br />
20
- * download hataraku maou sama s1 mp4<br />
21
- * hataraku maou sama s1 ost download<br />
22
- * download hataraku maou sama s1 full episodes<br />
23
- * hataraku maou sama s1 opening song download<br />
24
- * download hataraku maou sama s1 anime<br />
25
- * hataraku maou sama s1 torrent download<br />
26
- * download hataraku maou sama s1 light novel<br />
27
- * hataraku maou sama s1 ending song download<br />
28
- * download hataraku maou sama s1 online<br />
29
- * hataraku maou sama s1 manga download<br />
30
- * download hataraku maou sama season 1 english dub<br />
31
- * hataraku maou sama season 1 episode 2 free download<br />
32
- * where to watch the devil is a part timer season 1 online<br />
33
- * the devil is a part timer season 1 720p download<br />
34
- * the devil is a part timer season 1 batch download<br />
35
- * the devil is a part timer season 1 english subbed download<br />
36
- * how to watch the devil is a part timer season 1 on funimation<br />
37
- * the devil is a part timer season 1 dvd download<br />
38
- * the devil is a part timer season 1 sub indo download<br />
39
- * the devil is a part timer season 1 google drive download link<br />
40
- * the devil is a part timer season 1 mkv download<br />
41
- * the devil is a part timer season 1 soundtrack download<br />
42
- * the devil is a part timer season 1 all episodes download<br />
43
- * the devil is a part timer season 1 theme song download<br />
44
- * the devil is a part timer season 1 anime download<br />
45
- * the devil is a part timer season 1 magnet link download<br />
46
- * the devil is a part timer season 1 novel download<br />
47
- * the devil is a part timer season 1 ending theme download<br />
48
- * the devil is a part timer season 1 streaming download<br />
49
- * the devil is a part timer season 1 comic download</p>
50
- <p>As Maou tries to adapt to his new life and find a way to restore his magic and return to Ente Isla, he encounters various obstacles and enemies from both worlds. He also develops friendships and relationships with his co-workers, neighbors, and even Emilia herself.</p>
51
- <p>Hataraku Maou-sama is a comedy-fantasy anime that blends elements of action, romance, drama, and parody. It has a colorful animation style, catchy music, and excellent voice acting. It is suitable for viewers who enjoy l <p>laugh-out-loud comedy and fantasy scenarios.</p>
52
- <h2>Where to Watch Hataraku Maou-sama Online?</h2>
53
- <p>If you want to watch Hataraku Maou-sama online, you have several options to choose from. However, not all of them are legal, safe, or reliable. In this section, we will compare some of the most popular anime streaming platforms that have Hataraku Maou-sama in their catalog and see which one is the best for you.</p>
54
- <h3>Crunchyroll</h3>
55
- <p>Crunchyroll is one of the most popular and reputable anime streaming sites in the world. It has a huge library of anime titles, including Hataraku Maou-sama, that you can watch in high definition and with English subtitles or dubbing. You can also access exclusive content, such as manga, games, and merchandise.</p>
56
- <p>To watch Hataraku Maou-sama on Crunchyroll, you need to create an account and subscribe to a premium plan that costs $7.99 per month or $79.99 per year. Alternatively, you can sign up for a 14-day free trial and enjoy unlimited access to all the features and content.</p>
57
- <p>The advantage of watching Hataraku Maou-sama on Crunchyroll is that you can support the official release and the creators of the anime. You can also watch it on multiple devices, such as your computer, smartphone, tablet, or smart TV. You can also download episodes for offline viewing if you have a premium account.</p>
58
- <p>You can watch Hataraku Maou-sama on Crunchyroll by clicking <a href="">here</a>.</p>
59
- <h3>Other Anime Streaming Sites</h3>
60
- <p>There are also other anime streaming sites that offer Hataraku Maou-sama for free. However, these sites are not authorized by the licensors or distributors of the anime and may violate their copyrights. They may also contain malware, viruses, pop-up ads, or other annoying or harmful elements.</p>
61
- <p>If you decide to watch Hataraku Maou-sama on these sites, you should be careful and use a VPN and antivirus software to protect your device and identity. You should also avoid clicking on any suspicious links or downloading any files from these sites.</p>
62
- <p>Some examples of these sites are:</p>
63
- <ul>
64
- <li><a href="">VIZ</a>: This site has both subbed and dubbed versions of Hataraku Maou-sama. However, it is only available in certain regions and may require a VPN to access it.</li>
65
- <li><a href="">AnimeFreak</a>: This site has subbed versions of Hataraku Maou-sama. However, it has low video quality, frequent ads, and slow loading speed.</li>
66
- <li><a href="">AnimeUltima</a>: This site has subbed versions of Hataraku Maou-sama. However, it has limited server options, broken links, and buffering issues.</li>
67
- </ul> <h2>How to Download Hataraku Maou-sama S1?</h2>
68
- <p>Now that you know where to watch Hataraku Maou-sama online, you might be wondering how to download it for offline viewing. Downloading anime episodes can be useful if you want to watch them without an internet connection, save your data usage, or share them with your friends.</p>
69
- <p>However, downloading anime episodes is not always easy or legal. Depending on the source you choose, you may need to use different tools or methods to download them. You may also face some risks or challenges, such as low quality, slow speed, or legal issues.</p>
70
- <p>In this section, we will show you how to download Hataraku Maou-sama S1 from Crunchyroll and from other anime streaming sites. We will also explain the pros and cons of each option and give you some tips and warnings to help you download safely and efficiently.</p>
71
- <h3>Download from Crunchyroll</h3>
72
- <p>The easiest and safest way to download Hataraku Maou-sama S1 is from Crunchyroll. As we mentioned before, Crunchyroll is a legal and reliable anime streaming platform that offers high-quality videos and subtitles. If you have a premium account or a free trial, you can download episodes from Crunchyroll for offline viewing.</p>
73
- <p>To download episodes from Crunchyroll, you need to follow these steps:</p>
74
- <ol>
75
- <li>Open the Crunchyroll app on your device. You can download the app for free from the App Store or Google Play Store.</li>
76
- <li>Log in with your premium account or sign up for a free trial.</li>
77
- <li>Search for Hataraku Maou-sama S1 in the app and select the episode you want to download.</li>
78
- <li>Tap on the download icon at the bottom of the screen. You can choose the video quality and subtitle language before downloading.</li>
79
- <li>Wait for the download to finish. You can check the progress in the downloads section of the app.</li>
80
- <li>Enjoy watching Hataraku Maou-sama S1 offline. You can access your downloaded episodes in the downloads section of the app.</li>
81
- </ol>
82
- <p>Here are some screenshots or images to illustrate the steps:</p>
83
- <img src="" alt="Crunchyroll app screenshot 1">
84
- <img src="" alt="Crunchyroll app screenshot 2">
85
- <img src="" alt="Crunchyroll app screenshot 3">
86
- <p>The advantage of downloading from Crunchyroll is that you can enjoy high-quality videos and subtitles without any ads or interruptions. You can also watch them on any device that supports the Crunchyroll app. You can also support the official release and the creators of the anime by paying for a subscription.</p>
87
- <p>The disadvantage of downloading from Crunchyroll is that you need to pay for a premium account or use a free trial that expires after 14 days. You also need to have enough storage space on your device to store the downloaded episodes. You also need to have an internet connection to start the download process.</p> <h3>Download from Other Anime Streaming Sites</h3>
88
- <p>If you don't want to pay for a premium account or use a free trial, you can also download Hataraku Maou-sama S1 from other anime streaming sites. However, as we warned you before, these sites are not legal, safe, or reliable. They may contain malware, viruses, pop-up ads, or other annoying or harmful elements.</p>
89
- <p>If you decide to download from these sites, you should be careful and use a VPN and antivirus software to protect your device and identity. You should also avoid clicking on any suspicious links or downloading any files from these sites.</p>
90
- <p>There are different tools or methods to download from these sites, such as video downloader extensions, online converters, screen recorders, etc. However, they may not work for all sites or videos. They may also have some limitations or drawbacks, such as low quality, slow speed, watermarks, etc.</p>
91
- <p>Here are some examples of tools or methods to download from these sites:</p>
92
- <ul>
93
- <li><strong>Video downloader extensions</strong>: These are browser add-ons that allow you to download videos from various websites. Some examples are Video DownloadHelper, Video Downloader Professional, etc. To use them, you need to install them on your browser and then visit the site that has the video you want to download. You will see a download icon on the toolbar or on the video player. Click on it and choose the format and quality you want. Then wait for the download to finish.</li>
94
- <li><strong>Online converters</strong>: These are websites that allow you to convert and download videos from various websites. Some examples are OnlineVideoConverter, SaveFrom.net, etc. To use them, you need to copy the URL of the video you want to download and paste it on the website. Then choose the format and quality you want and click on the download button. Then wait for the conversion and download to finish.</li>
95
- <li><strong>Screen recorders</strong>: These are software or apps that allow you to record your screen and save it as a video file. Some examples are OBS Studio, Camtasia, etc. To use them, you need to install them on your device and then open the site that has the video you want to download. Then start the screen recorder and adjust the settings and area you want to capture. Then play the video and record it. Then stop the recording and save it as a video file.</li>
96
- </ul>
97
- <p>Here are some screenshots or images to illustrate the tools or methods:</p>
98
- <img src="" alt="Video downloader extension screenshot">
99
- <img src="" alt="Online converter screenshot">
100
- <img src="" alt="Screen recorder screenshot">
101
- <p>The advantage of downloading from other anime streaming sites is that you can do it for free without any subscription or trial. You can also choose from different sites and sources that have Hataraku Maou-sama S1.</p>
102
- <p>The disadvantage of downloading from other anime streaming sites is that you may face some risks or challenges, such as malware, viruses, legal issues, etc. You may also get low-quality videos or subtitles that are not synchronized or accurate. You may also have to deal with ads or interruptions during the download process.</p>
103
- <h2>Conclusion</h2>
104
- <p>Hataraku Maou-sama S1 is a comedy-fantasy anime that is worth watching or rewatching if you enjoy hilarious and heartwarming stories with a twist. It has a unique premise, witty humor, charming characters, and an engaging plot.</p>
105
- <p>If you want to download Hataraku Maou-sama S1 for offline viewing, you have several options to choose from. However, not all of them are legal, safe, or reliable. The best option is to download it from Crunchyroll using a premium account or a free trial. This way, you can enjoy high-quality videos and subtitles without any ads or interruptions. You can also support the official release and the creators of the anime by paying for a subscription.</p>
106
- <p>If you don't want to pay for a premium account or use a free trial, you can also download it from other anime streaming sites using different tools or methods. However, you should be careful and use a VPN and antivirus software to protect your device and identity. You should also avoid clicking on any suspicious links or downloading any files from these sites. You may also face some risks or challenges, such as malware, viruses, legal issues, etc. You may also get low-quality videos or subtitles that are not synchronized or accurate. You may also have to deal with ads or interruptions during the download process.</p>
107
- <p>We hope this article has helped you learn how to download Hataraku Maou-sama S1 from different sources and what are the advantages and disadvantages of each option. We also hope you enjoy watching this anime and have a good time.</p>
108
- <h2>FAQs</h2>
109
- <ul>
110
- <li><strong> Q: Is there a season 2 of Hataraku Maou-sama?</strong></li>
111
- <li>A: Unfortunately, there is no official confirmation or announcement of a season 2 of Hataraku Maou-sama as of now. However, there are rumors and speculations that a season 2 might be in the works or planned for the future. The anime is based on a light novel series that has 21 volumes and is still ongoing. The anime only adapted the first two volumes, so there is plenty of material for a season 2. The anime also has a loyal fan base and a high demand for a sequel. Therefore, there is still hope that a season 2 might happen someday.</li>
112
- <li><strong>Q: Who is the main character of Hataraku Maou-sama?</strong></li>
113
- <li>A: The main character of Hataraku Maou-sama is Sadao Maou, the demon lord of Ente Isla who ends up in modern-day Tokyo and works at a fast-food restaurant. He is voiced by Ryota Ohsaka in Japanese and Josh Grelle in English. He is a charismatic, intelligent, and ambitious leader who wants to conquer the world. However, he also has a kind, generous, and hardworking side that he shows on Earth. He develops feelings for Emilia, the hero who defeated him in Ente Isla.</li>
114
- <li><strong>Q: What is the genre of Hataraku Maou-sama?</strong></li>
115
- <li>A: Hataraku Maou-sama is a comedy-fantasy anime that blends elements of action, romance, drama, and parody. It has a unique premise, witty humor, charming characters, and an engaging plot. It is suitable for viewers who enjoy laugh-out-loud comedy and fantasy scenarios.</li>
116
- <li><strong>Q: How many episodes are there in Hataraku Maou-sama S1?</strong></li>
117
- <li>A: Hataraku Maou-sama S1 has 13 episodes that aired from April to June 2013. Each episode has a duration of about 24 minutes. There is also an OVA episode that was released in December 2013 as a bonus for the DVD and Blu-ray release. The OVA episode has a duration of about 27 minutes.</li>
118
- <li><strong>Q: Where can I read the light novel series of Hataraku Maou-sama?</strong></li>
119
- <li>A: The light novel series of Hataraku Maou-sama is written by Satoshi Wagahara and illustrated by Oniku. It has 21 volumes and is still ongoing as of now. You can read the light novel series online or buy the physical copies from various sources. Some examples are:</li>
120
- <ul>
121
- <li><a href="">Yen Press</a>: This is the official English publisher of the light novel series. You can buy the digital or print versions from their website or other online retailers.</li>
122
- <li><a href="">Baka-Tsuki</a>: This is a fan translation website that has translated some of the light novel volumes into English and other languages. You can read them online for free.</li>
123
- <li><a href="">Novel Updates</a>: This is a directory website that lists various sources and links to read the light novel series online.</li>
124
- </ul>
125
- </ul></p> 197e85843d<br />
126
- <br />
127
- <br />
 
spaces/1phancelerku/anime-remove-background/Dynamons World Game Mod APK The Best Role Playing Game of 2023.md DELETED
@@ -1,128 +0,0 @@
1
-
2
- <h1>Dynamons World Game Mod Apk: A Review</h1>
3
- <p>If you are a fan of RPG games, you might have heard of Dynamons World, a popular game that lets you catch and train dozens of unique monsters and battle them in online multiplayer matches. But did you know that there is a modded version of this game that gives you unlimited money and other advantages? In this article, we will review Dynamons World Game Mod Apk, a modified version of the original game that enhances your gaming experience. We will also show you how to download and install it on your device, and share some tips and tricks for playing it.</p>
4
- <h2>What is Dynamons World?</h2>
5
- <p>Dynamons World is a role-playing game developed by Azerion Casual, a company that specializes in casual games for web and mobile platforms. The game was released in 2017 and has been downloaded over 10 million times on Google Play Store. It is also available on App Store and BlueStacks, an emulator that allows you to play Android games on PC.</p>
6
- <h2>dynamons world game mod apk</h2><br /><p><b><b>Download File</b> &rarr; <a href="https://jinyurl.com/2uNOBw">https://jinyurl.com/2uNOBw</a></b></p><br /><br />
7
- <h3>A fun and addictive RPG game</h3>
8
- <p>Dynamons World is a game that features:</p>
9
- <ul>
10
- <li>An exciting campaign with multiple challenges and a cool storyline</li>
11
- <li>Online matches with other players</li>
12
- <li>Dozens of unique Dynamons with varied powers and abilities</li>
13
- <li>Useful items and boosters to enhance your battles</li>
14
- <li>A tactical turn-based battle system with strategic elements</li>
15
- <li>Multiple areas on the maps to explore</li>
16
- <li>Tons of new updates with interesting content</li>
17
- <li>Free to play on web browser, Android, and iOS platforms</li>
18
- </ul>
19
- <h3>Features of Dynamons World</h3>
20
- <p>Some of the features that make Dynamons World stand out are:</p>
21
- <table>
22
- <tr><th>Feature</th><th>Description</th></tr>
23
- <tr><td>Online Battle Arena</td><td>You can battle your friends and players worldwide in online PvP multiplayer battles. You can also join tournaments and leagues to compete for prizes and glory.</td></tr>
24
- <tr><td>Catch and train Dynamons</td><td>You can catch and train dozens of unique Dynamons, each with their own strengths and weaknesses. You can also evolve them into more powerful forms and customize them with skill cards.</td></tr>
25
- <tr><td>Unleash powerful skills</td><td>You can unleash powerful skills and brilliant tactics to defeat even the strongest rivals in Klaude's kingdom. You can also use items and boosters to gain an edge in battle.</td></tr>
26
- <tr><td>Travel across the world</td><td>You can travel all the way from Dynamons Camp to the Temple Ruins in an addictive and immersive RPG story game. You can also explore different areas on the maps, such as forests, deserts, caves, and more.</td></tr>
27
- <tr><td>New updates</td><td>Dynamons World is being updated all the time with even more new Dynamons, quests, battles, and more. You can also expect new features, such as new online PvP battle arena, new maps, new Dynamon types, new skill cards, new rare dragon Dynamons, and more.</td></tr>
28
- <tr><td>Free to play</td><td>Dynamons World is free to play on web browser, Android, and iOS platforms. You can also play it offline without internet connection. However, some in-game items may require real money purchases.</td></tr>
29
- </table>
30
- <h2>What is Dynamons World Mod Apk?</h2>
31
- <p>Dynamons World Mod Apk is a modified version of the original game that gives you unlimited money and other benefits. With this mod apk, you can enjoy the game without any limitations or restrictions.</p> <h3>A modified version of the original game</h3>
32
- <p>Dynamons World Mod Apk is a modified version of the original game that comes with exciting features and benefits. You can enjoy unlimited resources, including coins, gems, and energy, to help you progress faster and unlock more Dynamons. The modded version also offers access to exclusive Dynamons that are not available in the original game.</p>
33
- <p>dynamons world mod apk unlimited money and crystals<br />
34
- dynamons world mod apk latest version download<br />
35
- dynamons world mod apk android 1<br />
36
- dynamons world mod apk revdl<br />
37
- dynamons world mod apk hack<br />
38
- dynamons world mod apk offline<br />
39
- dynamons world mod apk free shopping<br />
40
- dynamons world mod apk no ads<br />
41
- dynamons world mod apk unlimited everything<br />
42
- dynamons world mod apk 1.8.07<br />
43
- dynamons world game download for android<br />
44
- dynamons world game online play<br />
45
- dynamons world game cheats<br />
46
- dynamons world game tips and tricks<br />
47
- dynamons world game guide<br />
48
- dynamons world game review<br />
49
- dynamons world game walkthrough<br />
50
- dynamons world game best team<br />
51
- dynamons world game evolution<br />
52
- dynamons world game codes<br />
53
- how to install dynamons world mod apk<br />
54
- how to play dynamons world mod apk<br />
55
- how to update dynamons world mod apk<br />
56
- how to get unlimited money in dynamons world mod apk<br />
57
- how to get unlimited crystals in dynamons world mod apk<br />
58
- how to level up fast in dynamons world mod apk<br />
59
- how to unlock all dynamons in dynamons world mod apk<br />
60
- how to hack dynamons world mod apk<br />
61
- how to get free shopping in dynamons world mod apk<br />
62
- how to remove ads in dynamons world mod apk<br />
63
- azerion casual games mod apk<br />
64
- azerion casual games hack<br />
65
- azerion casual games cheats<br />
66
- azerion casual games download<br />
67
- azerion casual games online play<br />
68
- azerion casual games review<br />
69
- azerion casual games tips and tricks<br />
70
- azerion casual games guide<br />
71
- azerion casual games walkthrough<br />
72
- azerion casual games best games<br />
73
- role playing games mod apk download<br />
74
- role playing games mod apk offline<br />
75
- role playing games mod apk unlimited money and gems<br />
76
- role playing games mod apk android 1 <br />
77
- role playing games mod apk revdl <br />
78
- role playing games hack online <br />
79
- role playing games cheats codes <br />
80
- role playing games tips and tricks <br />
81
- role playing games guide <br />
82
- role playing games best games </p>
83
- <h3>Benefits of Dynamons World Mod Apk</h3>
84
- <p>Some of the benefits that you can get from Dynamons World Mod Apk are:</p>
85
- <ul>
86
- <li>Unlimited money: You can get unlimited money to buy items, boosters, and skill cards. You can also upgrade your Dynamons and evolve them without any cost.</li>
87
- <li>Unlocked content: You can access all the content in the game, such as maps, quests, battles, and Dynamons. You can also catch any Dynamon you want without any difficulty.</li>
88
- <li>Removed ads: You can play the game without any annoying ads that interrupt your gameplay. You can also save your data and battery life.</li>
89
- <li>Enhanced graphics: You can enjoy the game with enhanced graphics and sound quality. You can also adjust the settings to suit your device's performance.</li>
90
- <li>No root required: You can install and play the game without rooting your device. You can also update the game without any problem.</li>
91
- </ul>
92
- <h2>How to download and install Dynamons World Mod Apk?</h2>
93
- <p>If you want to download and install Dynamons World Mod Apk on your device, you need to follow these steps:</p>
94
- <h3>Steps to download and install</h3>
95
- <ol>
96
- <li>Click on this link to download the Dynamons World Mod Apk file on your device.</li>
97
- <li>Go to your device's settings and enable the installation of unknown sources. This will allow you to install apps from sources other than Google Play Store.</li>
98
- <li>Locate the downloaded file in your file manager and tap on it to start the installation process.</li>
99
- <li>Follow the instructions on the screen and wait for the installation to complete.</li>
100
- <li>Launch the game and enjoy playing Dynamons World Mod Apk with unlimited money and other benefits.</li>
101
- </ol>
102
- <h3>Tips and tricks for playing Dynamons World Mod Apk</h3>
103
- <p>Here are some tips and tricks that can help you play Dynamons World Mod Apk better:</p>
104
- <ul>
105
- <li>Choose your starter Dynamon wisely. Each Dynamon has a different type, such as fire, water, plant, electric, etc. Each type has its own strengths and weaknesses against other types. For example, fire is strong against plant but weak against water. You can check the type chart in the game to see which type is effective or ineffective against another type.</li>
106
- <li>Catch and train as many Dynamons as you can. You can catch Dynamons by using capture balls that you can buy or find in the game. You can also train your Dynamons by battling other Dynamons or using skill cards that you can buy or find in the game. Training your Dynamons will increase their level, stats, and skills.</li>
107
- <li>Use items and boosters wisely. Items and boosters are useful tools that can help you in battles. Items can heal your Dynamons, revive them, or cure them from status effects. Boosters can increase your Dynamons' attack, defense, speed, or accuracy. However, items and boosters are limited in number, so use them only when necessary.</li>
108
- <li>Plan your strategy carefully. Battles in Dynamons World are turn-based, which means you have to choose your actions wisely. You have to consider your Dynamons' type, skills, stats, and status effects when choosing which Dynamon to use and which skill to unleash. You also have to anticipate your opponent's moves and counter them accordingly.</li>
109
- <li>Have fun and explore the world. Dynamons World is a game that offers a lot of fun and adventure. You can explore different areas on the maps, such as forests, deserts, caves, and more. You can also meet new characters, complete quests, join tournaments, and discover secrets along the way.</li>
110
- </ul>
111
- <h2>Conclusion</h2>
112
- <p>Dynamons World is a fun and addictive RPG game that lets you catch and train dozens of unique monsters and battle them in online multiplayer matches. However, if you want to enjoy the game without any limitations or restrictions, you should try Dynamons World Mod Apk, a modified version of the original game that gives you unlimited money and other benefits. With this mod apk, you can access all the content in the game, catch any Dynamon you want, use items and boosters freely, play without ads, and more. All you have to do is download and install it on your device, and follow the steps and tips we have provided in this article. We hope you have fun playing Dynamons World Mod Apk and become the best Dynamon master in the world.</p>
113
- <h2>FAQs</h2>
114
- <p>Here are some frequently asked questions about Dynamons World Mod Apk:</p>
115
- <ol>
116
- <li>Is Dynamons World Mod Apk safe to use?</li>
117
- <p>Yes, Dynamons World Mod Apk is safe to use as long as you download it from a trusted source, such as the link we have provided in this article. However, you should always be careful when downloading and installing any modded apps, as they may contain viruses or malware that can harm your device. You should also backup your data before installing any modded apps, as they may overwrite or delete your original game data.</p>
118
- <li>Is Dynamons World Mod Apk legal to use?</li>
119
- <p>Dynamons World Mod Apk is not legal to use, as it violates the terms and conditions of the original game. By using this mod apk, you are breaking the rules of the game and risking your account being banned or suspended. You are also depriving the developers of their rightful income from the game. Therefore, we do not encourage or endorse the use of Dynamons World Mod Apk, and we are not responsible for any consequences that may arise from using it.</p>
120
- <li>Can I play Dynamons World Mod Apk online?</li>
121
- <p>Yes, you can play Dynamons World Mod Apk online with other players. However, you may face some issues or errors when playing online, as the modded version may not be compatible with the latest version of the original game. You may also encounter players who are using the original game or other modded versions, which may cause unfairness or imbalance in the gameplay. Therefore, we recommend playing Dynamons World Mod Apk offline or with your friends who are using the same modded version.</p>
122
- <li>Can I update Dynamons World Mod Apk?</li>
123
- <p>No, you cannot update Dynamons World Mod Apk, as it is a modified version of the original game. If you try to update it from Google Play Store or App Store, you will lose all the benefits and features of the modded version. You will also lose all your progress and data in the modded version. Therefore, you should avoid updating Dynamons World Mod Apk, and stick to the version that you have downloaded and installed.</p>
124
- <li>Can I uninstall Dynamons World Mod Apk?</li>
125
- <p>Yes, you can uninstall Dynamons World Mod Apk anytime you want. You can simply go to your device's settings and uninstall it like any other app. However, you should note that uninstalling Dynamons World Mod Apk will delete all your progress and data in the modded version. You will also lose all the benefits and features of the modded version. Therefore, you should backup your data before uninstalling Dynamons World Mod Apk, or keep a copy of the original game on your device.</p>
126
- </ol></p>
127
- <br />
128
- <br />
 
spaces/AIatUIUC/CodeLATS/generators/generator_types.py DELETED
@@ -1,33 +0,0 @@
1
- from typing import List, Optional, Union
2
- from abc import abstractmethod, ABC
3
-
4
- from generators.model import ModelBase
5
-
6
-
7
- class Generator:
8
- @abstractmethod
9
- def self_reflection(self, func: str, feedback: str, model: ModelBase) -> str:
10
- ...
11
-
12
- @abstractmethod
13
- def func_impl(
14
- self,
15
- func_sig: str,
16
- model: ModelBase,
17
- strategy: str,
18
- prev_func_impl: Optional[str] = None,
19
- feedback: Optional[str] = None,
20
- self_reflection: Optional[str] = None,
21
- num_comps: int = 1,
22
- temperature: float = 0.0,
23
- ) -> Union[str, List[str]]:
24
- ...
25
-
26
- @abstractmethod
27
- def internal_tests(
28
- self,
29
- func_sig: str,
30
- model: ModelBase,
31
- max_num_tests: int = 5
32
- ) -> List[str]:
33
- ...
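As a hedged illustration only (editorial, not part of the deleted file above), a minimal concrete subclass of the Generator protocol could look like the sketch below. It deliberately ignores the ModelBase argument so that nothing about that class's API has to be assumed; the class name EchoGenerator and its canned return values are hypothetical.

from typing import List, Optional, Union

from generators.generator_types import Generator
from generators.model import ModelBase


class EchoGenerator(Generator):
    """Trivial stand-in generator: returns canned strings instead of querying the model."""

    def self_reflection(self, func: str, feedback: str, model: ModelBase) -> str:
        # A real implementation would prompt `model`; here the feedback is simply echoed.
        return f"Reflection: {feedback}"

    def func_impl(
        self,
        func_sig: str,
        model: ModelBase,
        strategy: str,
        prev_func_impl: Optional[str] = None,
        feedback: Optional[str] = None,
        self_reflection: Optional[str] = None,
        num_comps: int = 1,
        temperature: float = 0.0,
    ) -> Union[str, List[str]]:
        # Emit a stub body for the requested signature, one copy per requested completion.
        stub = f"{func_sig}\n    raise NotImplementedError\n"
        return stub if num_comps == 1 else [stub] * num_comps

    def internal_tests(self, func_sig: str, model: ModelBase, max_num_tests: int = 5) -> List[str]:
        # No model call is made, so no generated tests are returned.
        return []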
 
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio.py DELETED
@@ -1,217 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- """
8
- Audio IO methods are defined in this module (info, read, write).
9
- We rely on av library for faster read when possible, otherwise on torchaudio.
10
- """
11
-
12
- from dataclasses import dataclass
13
- from pathlib import Path
14
- import logging
15
- import typing as tp
16
-
17
- import numpy as np
18
- import soundfile
19
- import torch
20
- from torch.nn import functional as F
21
- import torchaudio as ta
22
-
23
- import av
24
-
25
- from .audio_utils import f32_pcm, i16_pcm, normalize_audio, convert_audio
26
-
27
-
28
- _av_initialized = False
29
-
30
-
31
- def _init_av():
32
- global _av_initialized
33
- if _av_initialized:
34
- return
35
- logger = logging.getLogger('libav.mp3')
36
- logger.setLevel(logging.ERROR)
37
- _av_initialized = True
38
-
39
-
40
- @dataclass(frozen=True)
41
- class AudioFileInfo:
42
- sample_rate: int
43
- duration: float
44
- channels: int
45
-
46
-
47
- def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
48
- _init_av()
49
- with av.open(str(filepath)) as af:
50
- stream = af.streams.audio[0]
51
- sample_rate = stream.codec_context.sample_rate
52
- duration = float(stream.duration * stream.time_base)
53
- channels = stream.channels
54
- return AudioFileInfo(sample_rate, duration, channels)
55
-
56
-
57
- def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
58
- info = soundfile.info(filepath)
59
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
60
-
61
-
62
- def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
63
- # torchaudio no longer returns useful duration information for some formats like mp3s.
64
- filepath = Path(filepath)
65
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
66
- # ffmpeg has some weird issue with flac.
67
- return _soundfile_info(filepath)
68
- else:
69
- return _av_info(filepath)
70
-
71
-
72
- def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
73
- """FFMPEG-based audio file reading using PyAV bindings.
74
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
75
-
76
- Args:
77
- filepath (str or Path): Path to audio file to read.
78
- seek_time (float): Time at which to start reading in the file.
79
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
80
- Returns:
81
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
82
- """
83
- _init_av()
84
- with av.open(str(filepath)) as af:
85
- stream = af.streams.audio[0]
86
- sr = stream.codec_context.sample_rate
87
- num_frames = int(sr * duration) if duration >= 0 else -1
88
- frame_offset = int(sr * seek_time)
89
- # we need a small negative offset otherwise we get some edge artifact
90
- # from the mp3 decoder.
91
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
92
- frames = []
93
- length = 0
94
- for frame in af.decode(streams=stream.index):
95
- current_offset = int(frame.rate * frame.pts * frame.time_base)
96
- strip = max(0, frame_offset - current_offset)
97
- buf = torch.from_numpy(frame.to_ndarray())
98
- if buf.shape[0] != stream.channels:
99
- buf = buf.view(-1, stream.channels).t()
100
- buf = buf[:, strip:]
101
- frames.append(buf)
102
- length += buf.shape[1]
103
- if num_frames > 0 and length >= num_frames:
104
- break
105
- assert frames
106
- # If the above assert fails, it is likely because we seeked past the end of file point,
107
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
108
- # This will need proper debugging, in due time.
109
- wav = torch.cat(frames, dim=1)
110
- assert wav.shape[0] == stream.channels
111
- if num_frames > 0:
112
- wav = wav[:, :num_frames]
113
- return f32_pcm(wav), sr
114
-
115
-
116
- def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
117
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
118
- """Read audio by picking the most appropriate backend tool based on the audio format.
119
-
120
- Args:
121
- filepath (str or Path): Path to audio file to read.
122
- seek_time (float): Time at which to start reading in the file.
123
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
124
- pad (bool): Pad output audio if not reaching expected duration.
125
- Returns:
126
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
127
- """
128
- fp = Path(filepath)
129
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
130
- # There is some bug with ffmpeg and reading flac
131
- info = _soundfile_info(filepath)
132
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
133
- frame_offset = int(seek_time * info.sample_rate)
134
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
135
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
136
- wav = torch.from_numpy(wav).t().contiguous()
137
- if len(wav.shape) == 1:
138
- wav = torch.unsqueeze(wav, 0)
139
- elif (
140
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
141
- and duration <= 0 and seek_time == 0
142
- ):
143
- # Torchaudio is faster if we load an entire file at once.
144
- wav, sr = ta.load(fp)
145
- else:
146
- wav, sr = _av_read(filepath, seek_time, duration)
147
- if pad and duration > 0:
148
- expected_frames = int(duration * sr)
149
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
150
- return wav, sr
151
-
152
-
153
- def audio_write(stem_name: tp.Union[str, Path],
154
- wav: torch.Tensor, sample_rate: int,
155
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
156
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
157
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
158
- loudness_compressor: bool = False,
159
- log_clipping: bool = True, make_parent_dir: bool = True,
160
- add_suffix: bool = True, channels:int = 1) -> Path:
161
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
162
-
163
- Args:
164
- stem_name (str or Path): Filename without extension which will be added automatically.
165
- format (str): Either "wav" or "mp3".
166
- mp3_rate (int): kbps when using mp3s.
167
- normalize (bool): if `True` (default), normalizes according to the prescribed
168
- strategy (see after). If `False`, the strategy is only used in case clipping
169
- would happen.
170
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
171
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
172
- with extra headroom to avoid clipping. 'clip' just clips.
173
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
174
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
175
- than the `peak_clip` one to avoid further clipping.
176
- loudness_headroom_db (float): Target loudness for loudness normalization.
177
- loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
178
- log_clipping (bool): If True, basic logging on stderr when clipping still
179
- occurs despite strategy (only for 'rms').
180
- make_parent_dir (bool): Make parent directory if it doesn't exist.
181
- Returns:
182
- Path: Path of the saved audio.
183
- """
184
- assert wav.dtype.is_floating_point, "wav is not floating point"
185
- if wav.dim() == 1:
186
- wav = wav[None]
187
- elif wav.dim() > 2:
188
- raise ValueError("Input wav should be at most 2 dimension.")
189
- assert wav.isfinite().all()
190
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
191
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
192
- sample_rate=sample_rate, stem_name=str(stem_name))
193
- if channels > 1:
194
- wav = convert_audio(wav,sample_rate, sample_rate, channels)
195
- kwargs: dict = {}
196
- if format == 'mp3':
197
- suffix = '.mp3'
198
- kwargs.update({"compression": mp3_rate})
199
- elif format == 'wav':
200
- wav = i16_pcm(wav)
201
- suffix = '.wav'
202
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
203
- else:
204
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
205
- if not add_suffix:
206
- suffix = ''
207
- path = Path(str(stem_name) + suffix)
208
- if make_parent_dir:
209
- path.parent.mkdir(exist_ok=True, parents=True)
210
- try:
211
- ta.save(path, wav, sample_rate, **kwargs)
212
- except Exception:
213
- if path.exists():
214
- # we do not want to leave half written files around.
215
- path.unlink()
216
- raise
217
- return path
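A short, hedged usage sketch of the audio_read / audio_write helpers from the deleted audio.py above. The import path mirrors the file's location in the repository and the output stem 'example' is a made-up name; it presumes torch, torchaudio and PyAV are installed, as the module itself requires.

import math

import torch

from audiocraft.data.audio import audio_read, audio_write  # assumed import path

# One second of a 440 Hz sine at 16 kHz, shaped [channels, samples] as audio_write expects.
t = torch.arange(16000, dtype=torch.float32) / 16000.0
wav = torch.sin(2 * math.pi * 440.0 * t).unsqueeze(0)

# Default 'peak' normalization; audio_write appends the '.wav' suffix and returns the Path.
path = audio_write('example', wav, sample_rate=16000, format='wav')

# Read it back, padding to exactly one second if the decoder returns fewer samples.
loaded, sr = audio_read(path, seek_time=0.0, duration=1.0, pad=True)
print(loaded.shape, sr)  # expected: roughly torch.Size([1, 16000]) and 16000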
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/shareConversation.ts DELETED
@@ -1,27 +0,0 @@
1
- import { base } from "$app/paths";
2
- import { ERROR_MESSAGES, error } from "$lib/stores/errors";
3
- import { share } from "./utils/share";
4
-
5
- export async function shareConversation(id: string, title: string) {
6
- try {
7
- const res = await fetch(`${base}/conversation/${id}/share`, {
8
- method: "POST",
9
- headers: {
10
- "Content-Type": "application/json",
11
- },
12
- });
13
-
14
- if (!res.ok) {
15
- error.set("Error while sharing conversation, try again.");
16
- console.error("Error while sharing conversation: " + (await res.text()));
17
- return;
18
- }
19
-
20
- const { url } = await res.json();
21
-
22
- share(url, title);
23
- } catch (err) {
24
- error.set(ERROR_MESSAGES.default);
25
- console.error(err);
26
- }
27
- }
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Factory.js DELETED
@@ -1,11 +0,0 @@
1
- import Maker from './Maker.js';
2
- import ObjectFactory from '../ObjectFactory.js';
3
- import SetValue from '../../../plugins/utils/object/SetValue.js';
4
-
5
- ObjectFactory.register('maker', function (styles, customBuilders) {
6
- return new Maker(this.scene, styles, customBuilders);
7
- });
8
-
9
- SetValue(window, 'RexPlugins.UI.Maker', Maker);
10
-
11
- export default Maker;
 
spaces/AlexWortega/Kandinsky2.0/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Kandinsky2.0
3
- emoji: 📉
4
- colorFrom: indigo
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 3.11.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py DELETED
@@ -1,605 +0,0 @@
1
- # python 3.7
2
- """Utility functions for visualizing results on html page."""
3
-
4
- import base64
5
- import os.path
6
- import cv2
7
- import numpy as np
8
-
9
- __all__ = [
10
- 'get_grid_shape', 'get_blank_image', 'load_image', 'save_image',
11
- 'resize_image', 'add_text_to_image', 'fuse_images', 'HtmlPageVisualizer',
12
- 'VideoReader', 'VideoWriter', 'adjust_pixel_range'
13
- ]
14
-
15
-
16
- def adjust_pixel_range(images, min_val=-1.0, max_val=1.0, channel_order='NCHW'):
17
- """Adjusts the pixel range of the input images.
18
-
19
- This function assumes the input array (image batch) is with shape [batch_size,
20
- channel, height, width] if `channel_order = NCHW`, or with shape [batch_size,
21
- height, width, channel] if `channel_order = NHWC`. The returned images are with shape
22
- [batch_size, height, width, channel] and pixel range [0, 255].
23
-
24
- NOTE: The channel order of output images will remain the same as the input.
25
-
26
- Args:
27
- images: Input images to adjust pixel range.
28
- min_val: Min value of the input images. (default: -1.0)
29
- max_val: Max value of the input images. (default: 1.0)
30
- channel_order: Channel order of the input array. (default: NCHW)
31
-
32
- Returns:
33
- The postprocessed images with dtype `numpy.uint8` and range [0, 255].
34
-
35
- Raises:
36
- ValueError: If the input `images` are not with type `numpy.ndarray` or the
37
- shape is invalid according to `channel_order`.
38
- """
39
- if not isinstance(images, np.ndarray):
40
- raise ValueError(f'Images should be with type `numpy.ndarray`!')
41
-
42
- channel_order = channel_order.upper()
43
- if channel_order not in ['NCHW', 'NHWC']:
44
- raise ValueError(f'Invalid channel order `{channel_order}`!')
45
-
46
- if images.ndim != 4:
47
- raise ValueError(f'Input images are expected to be with shape `NCHW` or '
48
- f'`NHWC`, but `{images.shape}` is received!')
49
- if channel_order == 'NCHW' and images.shape[1] not in [1, 3]:
50
- raise ValueError(f'Input images should have 1 or 3 channels under `NCHW` '
51
- f'channel order!')
52
- if channel_order == 'NHWC' and images.shape[3] not in [1, 3]:
53
- raise ValueError(f'Input images should have 1 or 3 channels under `NHWC` '
54
- f'channel order!')
55
-
56
- images = images.astype(np.float32)
57
- images = (images - min_val) * 255 / (max_val - min_val)
58
- images = np.clip(images + 0.5, 0, 255).astype(np.uint8)
59
- if channel_order == 'NCHW':
60
- images = images.transpose(0, 2, 3, 1)
61
-
62
- return images
63
-
64
-
65
- def get_grid_shape(size, row=0, col=0, is_portrait=False):
66
- """Gets the shape of a grid based on the size.
67
-
68
- This function makes greatest effort on making the output grid square if
69
- neither `row` nor `col` is set. If `is_portrait` is set as `False`, the height
70
- will always be equal to or smaller than the width. For example, if input
71
- `size = 16`, output shape will be `(4, 4)`; if input `size = 15`, output shape
72
- will be (3, 5). Otherwise, the height will always be equal to or larger than
73
- the width.
74
-
75
- Args:
76
- size: Size (height * width) of the target grid.
77
- is_portrait: Whether to return a portrait size or a landscape size.
78
- (default: False)
79
-
80
- Returns:
81
- A two-element tuple, representing height and width respectively.
82
- """
83
- assert isinstance(size, int)
84
- assert isinstance(row, int)
85
- assert isinstance(col, int)
86
- if size == 0:
87
- return (0, 0)
88
-
89
- if row > 0 and col > 0 and row * col != size:
90
- row = 0
91
- col = 0
92
-
93
- if row > 0 and size % row == 0:
94
- return (row, size // row)
95
- if col > 0 and size % col == 0:
96
- return (size // col, col)
97
-
98
- row = int(np.sqrt(size))
99
- while row > 0:
100
- if size % row == 0:
101
- col = size // row
102
- break
103
- row = row - 1
104
-
105
- return (col, row) if is_portrait else (row, col)
106
-
107
-
108
- def get_blank_image(height, width, channels=3, is_black=True):
109
- """Gets a blank image, either white or black.
110
-
111
- NOTE: This function will always return an image with `RGB` channel order for
112
- color image and pixel range [0, 255].
113
-
114
- Args:
115
- height: Height of the returned image.
116
- width: Width of the returned image.
117
- channels: Number of channels. (default: 3)
118
- is_black: Whether to return a black image or white image. (default: True)
119
- """
120
- shape = (height, width, channels)
121
- if is_black:
122
- return np.zeros(shape, dtype=np.uint8)
123
- return np.ones(shape, dtype=np.uint8) * 255
124
-
125
-
126
- def load_image(path):
127
- """Loads an image from disk.
128
-
129
- NOTE: This function will always return an image with `RGB` channel order for
130
- color image and pixel range [0, 255].
131
-
132
- Args:
133
- path: Path to load the image from.
134
-
135
- Returns:
136
- An image with dtype `np.ndarray` or `None` if input `path` does not exist.
137
- """
138
- if not os.path.isfile(path):
139
- return None
140
-
141
- image = cv2.imread(path)
142
- return image[:, :, ::-1]
143
-
144
-
145
- def save_image(path, image):
146
- """Saves an image to disk.
147
-
148
- NOTE: The input image (if colorful) is assumed to be with `RGB` channel order
149
- and pixel range [0, 255].
150
-
151
- Args:
152
- path: Path to save the image to.
153
- image: Image to save.
154
- """
155
- if image is None:
156
- return
157
-
158
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
159
- cv2.imwrite(path, image[:, :, ::-1])
160
-
161
-
162
- def resize_image(image, *args, **kwargs):
163
- """Resizes image.
164
-
165
- This is a wrap of `cv2.resize()`.
166
-
167
- NOTE: The channel order of the input image will not be changed.
168
-
169
- Args:
170
- image: Image to resize.
171
- """
172
- if image is None:
173
- return None
174
-
175
- assert image.ndim == 3 and image.shape[2] in [1, 3]
176
- image = cv2.resize(image, *args, **kwargs)
177
- if image.ndim == 2:
178
- return image[:, :, np.newaxis]
179
- return image
180
-
181
-
182
- def add_text_to_image(image,
183
- text='',
184
- position=None,
185
- font=cv2.FONT_HERSHEY_TRIPLEX,
186
- font_size=1.0,
187
- line_type=cv2.LINE_8,
188
- line_width=1,
189
- color=(255, 255, 255)):
190
- """Overlays text on given image.
191
-
192
- NOTE: The input image is assumed to be with `RGB` channel order.
193
-
194
- Args:
195
- image: The image to overlay text on.
196
- text: Text content to overlay on the image. (default: '')
197
- position: Target position (bottom-left corner) to add text. If not set,
198
- center of the image will be used by default. (default: None)
199
- font: Font of the text added. (default: cv2.FONT_HERSHEY_TRIPLEX)
200
- font_size: Font size of the text added. (default: 1.0)
201
- line_type: Line type used to depict the text. (default: cv2.LINE_8)
202
- line_width: Line width used to depict the text. (default: 1)
203
- color: Color of the text added in `RGB` channel order. (default:
204
- (255, 255, 255))
205
-
206
- Returns:
207
- An image with target text overlayed on.
208
- """
209
- if image is None or not text:
210
- return image
211
-
212
- cv2.putText(img=image,
213
- text=text,
214
- org=position,
215
- fontFace=font,
216
- fontScale=font_size,
217
- color=color,
218
- thickness=line_width,
219
- lineType=line_type,
220
- bottomLeftOrigin=False)
221
-
222
- return image
223
-
224
-
225
- def fuse_images(images,
226
- image_size=None,
227
- row=0,
228
- col=0,
229
- is_row_major=True,
230
- is_portrait=False,
231
- row_spacing=0,
232
- col_spacing=0,
233
- border_left=0,
234
- border_right=0,
235
- border_top=0,
236
- border_bottom=0,
237
- black_background=True):
238
- """Fuses a collection of images into an entire image.
239
-
240
- Args:
241
- images: A collection of images to fuse. Should be with shape [num, height,
242
- width, channels].
243
- image_size: Int or two-element tuple. This field is used to resize the image
244
- before fusing. `None` disables resizing. (default: None)
245
- row: Number of rows used for image fusion. If not set, this field will be
246
- automatically assigned based on `col` and total number of images.
247
- (default: None)
248
- col: Number of columns used for image fusion. If not set, this field will be
249
- automatically assigned based on `row` and total number of images.
250
- (default: None)
251
- is_row_major: Whether the input images should be arranged row-major or
252
- column-major. (default: True)
253
- is_portrait: Only active when both `row` and `col` should be assigned
254
- automatically. (default: False)
255
- row_spacing: Space between rows. (default: 0)
256
- col_spacing: Space between columns. (default: 0)
257
- border_left: Width of left border. (default: 0)
258
- border_right: Width of right border. (default: 0)
259
- border_top: Width of top border. (default: 0)
260
- border_bottom: Width of bottom border. (default: 0)
261
-
262
- Returns:
263
- The fused image.
264
-
265
- Raises:
266
- ValueError: If the input `images` is not with shape [num, height, width,
267
- channels].
268
- """
269
- if images is None:
270
- return images
271
-
272
- if not images.ndim == 4:
273
- raise ValueError(f'Input `images` should be with shape [num, height, '
274
- f'width, channels], but {images.shape} is received!')
275
-
276
- num, image_height, image_width, channels = images.shape
277
- if image_size is not None:
278
- if isinstance(image_size, int):
279
- image_size = (image_size, image_size)
280
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
281
- width, height = image_size
282
- else:
283
- height, width = image_height, image_width
284
- row, col = get_grid_shape(num, row=row, col=col, is_portrait=is_portrait)
285
- fused_height = (
286
- height * row + row_spacing * (row - 1) + border_top + border_bottom)
287
- fused_width = (
288
- width * col + col_spacing * (col - 1) + border_left + border_right)
289
- fused_image = get_blank_image(
290
- fused_height, fused_width, channels=channels, is_black=black_background)
291
- images = images.reshape(row, col, image_height, image_width, channels)
292
- if not is_row_major:
293
- images = images.transpose(1, 0, 2, 3, 4)
294
-
295
- for i in range(row):
296
- y = border_top + i * (height + row_spacing)
297
- for j in range(col):
298
- x = border_left + j * (width + col_spacing)
299
- if image_size is not None:
300
- image = cv2.resize(images[i, j], image_size)
301
- else:
302
- image = images[i, j]
303
- fused_image[y:y + height, x:x + width] = image
304
-
305
- return fused_image
306
-
307
-
308
- def get_sortable_html_header(column_name_list, sort_by_ascending=False):
309
- """Gets header for sortable html page.
310
-
311
- Basically, the html page contains a sortable table, where user can sort the
312
- rows by a particular column by clicking the column head.
313
-
314
- Example:
315
-
316
- column_name_list = [name_1, name_2, name_3]
317
- header = get_sortable_html_header(column_name_list)
318
- footer = get_sortable_html_footer()
319
- sortable_table = ...
320
- html_page = header + sortable_table + footer
321
-
322
- Args:
323
- column_name_list: List of column header names.
324
- sort_by_ascending: Default sorting order. If set as `True`, the html page
325
- will be sorted by ascending order when the header is clicked for the first
326
- time.
327
-
328
- Returns:
329
- A string, which represents for the header for a sortable html page.
330
- """
331
- header = '\n'.join([
332
- '<script type="text/javascript">',
333
- 'var column_idx;',
334
- 'var sort_by_ascending = ' + str(sort_by_ascending).lower() + ';',
335
- '',
336
- 'function sorting(tbody, column_idx){',
337
- ' this.column_idx = column_idx;',
338
- ' Array.from(tbody.rows)',
339
- ' .sort(compareCells)',
340
- ' .forEach(function(row) { tbody.appendChild(row); })',
341
- ' sort_by_ascending = !sort_by_ascending;',
342
- '}',
343
- '',
344
- 'function compareCells(row_a, row_b) {',
345
- ' var val_a = row_a.cells[column_idx].innerText;',
346
- ' var val_b = row_b.cells[column_idx].innerText;',
347
- ' var flag = sort_by_ascending ? 1 : -1;',
348
- ' return flag * (val_a > val_b ? 1 : -1);',
349
- '}',
350
- '</script>',
351
- '',
352
- '<html>',
353
- '',
354
- '<head>',
355
- '<style>',
356
- ' table {',
357
- ' border-spacing: 0;',
358
- ' border: 1px solid black;',
359
- ' }',
360
- ' th {',
361
- ' cursor: pointer;',
362
- ' }',
363
- ' th, td {',
364
- ' text-align: left;',
365
- ' vertical-align: middle;',
366
- ' border-collapse: collapse;',
367
- ' border: 0.5px solid black;',
368
- ' padding: 8px;',
369
- ' }',
370
- ' tr:nth-child(even) {',
371
- ' background-color: #d2d2d2;',
372
- ' }',
373
- '</style>',
374
- '</head>',
375
- '',
376
- '<body>',
377
- '',
378
- '<table>',
379
- '<thead>',
380
- '<tr>',
381
- ''])
382
- for idx, column_name in enumerate(column_name_list):
383
- header += f' <th onclick="sorting(tbody, {idx})">{column_name}</th>\n'
384
- header += '</tr>\n'
385
- header += '</thead>\n'
386
- header += '<tbody id="tbody">\n'
387
-
388
- return header
389
-
390
-
391
- def get_sortable_html_footer():
392
- """Gets footer for sortable html page.
393
-
394
- Check function `get_sortable_html_header()` for more details.
395
- """
396
- return '</tbody>\n</table>\n\n</body>\n</html>\n'
397
-
398
-
399
- def encode_image_to_html_str(image, image_size=None):
400
- """Encodes an image to html language.
401
-
402
- Args:
403
- image: The input image to encode. Should be with `RGB` channel order.
404
- image_size: Int or two-element tuple. This field is used to resize the image
405
- before encoding. `None` disables resizing. (default: None)
406
-
407
- Returns:
408
- A string which represents the encoded image.
409
- """
410
- if image is None:
411
- return ''
412
-
413
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
414
-
415
- # Change channel order to `BGR`, which is opencv-friendly.
416
- image = image[:, :, ::-1]
417
-
418
- # Resize the image if needed.
419
- if image_size is not None:
420
- if isinstance(image_size, int):
421
- image_size = (image_size, image_size)
422
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
423
- image = cv2.resize(image, image_size)
424
-
425
- # Encode the image to html-format string.
426
- encoded_image = cv2.imencode(".jpg", image)[1].tobytes()  # tobytes() replaces the deprecated tostring()
427
- encoded_image_base64 = base64.b64encode(encoded_image).decode('utf-8')
428
- html_str = f'<img src="data:image/jpeg;base64, {encoded_image_base64}"/>'
429
-
430
- return html_str
431
-
432
-
433
- class HtmlPageVisualizer(object):
434
- """Defines the html page visualizer.
435
-
436
- This class can be used to visualize image results as html page. Basically, it
437
- is based on an html-format sorted table with helper functions
438
- `get_sortable_html_header()`, `get_sortable_html_footer()`, and
439
- `encode_image_to_html_str()`. To simplify the usage, specifying the following
440
- fields is enough to create a visualization page:
441
-
442
- (1) num_rows: Number of rows of the table (header-row exclusive).
443
- (2) num_cols: Number of columns of the table.
444
- (3) header contents (optional): Title of each column.
445
-
446
- NOTE: `grid_size` can be used to assign `num_rows` and `num_cols`
447
- automatically.
448
-
449
- Example:
450
-
451
- html = HtmlPageVisualizer(num_rows, num_cols)
452
- html.set_headers([...])
453
- for i in range(num_rows):
454
- for j in range(num_cols):
455
- html.set_cell(i, j, text=..., image=...)
456
- html.save('visualize.html')
457
- """
458
-
459
- def __init__(self,
460
- num_rows=0,
461
- num_cols=0,
462
- grid_size=0,
463
- is_portrait=False,
464
- viz_size=None):
465
- if grid_size > 0:
466
- num_rows, num_cols = get_grid_shape(
467
- grid_size, row=num_rows, col=num_cols, is_portrait=is_portrait)
468
- assert num_rows > 0 and num_cols > 0
469
-
470
- self.num_rows = num_rows
471
- self.num_cols = num_cols
472
- self.viz_size = viz_size
473
- self.headers = ['' for _ in range(self.num_cols)]
474
- self.cells = [[{
475
- 'text': '',
476
- 'image': '',
477
- } for _ in range(self.num_cols)] for _ in range(self.num_rows)]
478
-
479
- def set_header(self, column_idx, content):
480
- """Sets the content of a particular header by column index."""
481
- self.headers[column_idx] = content
482
-
483
- def set_headers(self, contents):
484
- """Sets the contents of all headers."""
485
- if isinstance(contents, str):
486
- contents = [contents]
487
- assert isinstance(contents, (list, tuple))
488
- assert len(contents) == self.num_cols
489
- for column_idx, content in enumerate(contents):
490
- self.set_header(column_idx, content)
491
-
492
- def set_cell(self, row_idx, column_idx, text='', image=None):
493
- """Sets the content of a particular cell.
494
-
495
- Basically, a cell contains some text as well as an image. Both text and
496
- image can be empty.
497
-
498
- Args:
499
- row_idx: Row index of the cell to edit.
500
- column_idx: Column index of the cell to edit.
501
- text: Text to add into the target cell.
502
- image: Image to show in the target cell. Should be with `RGB` channel
503
- order.
504
- """
505
- self.cells[row_idx][column_idx]['text'] = text
506
- self.cells[row_idx][column_idx]['image'] = encode_image_to_html_str(
507
- image, self.viz_size)
508
-
509
- def save(self, save_path):
510
- """Saves the html page."""
511
- html = ''
512
- for i in range(self.num_rows):
513
- html += f'<tr>\n'
514
- for j in range(self.num_cols):
515
- text = self.cells[i][j]['text']
516
- image = self.cells[i][j]['image']
517
- if text:
518
- html += f' <td>{text}<br><br>{image}</td>\n'
519
- else:
520
- html += f' <td>{image}</td>\n'
521
- html += f'</tr>\n'
522
-
523
- header = get_sortable_html_header(self.headers)
524
- footer = get_sortable_html_footer()
525
-
526
- with open(save_path, 'w') as f:
527
- f.write(header + html + footer)
528
-
529
-
530
- class VideoReader(object):
531
- """Defines the video reader.
532
-
533
- This class can be used to read frames from a given video.
534
- """
535
-
536
- def __init__(self, path):
537
- """Initializes the video reader by loading the video from disk."""
538
- if not os.path.isfile(path):
539
- raise ValueError(f'Video `{path}` does not exist!')
540
-
541
- self.path = path
542
- self.video = cv2.VideoCapture(path)
543
- assert self.video.isOpened()
544
- self.position = 0
545
-
546
- self.length = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
547
- self.frame_height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT))
548
- self.frame_width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH))
549
- self.fps = self.video.get(cv2.CAP_PROP_FPS)
550
-
551
- def __del__(self):
552
- """Releases the opened video."""
553
- self.video.release()
554
-
555
- def read(self, position=None):
556
- """Reads a certain frame.
557
-
558
- NOTE: The returned frame is assumed to be with `RGB` channel order.
559
-
560
- Args:
561
- position: Optional. If set, the reader will read frames from the exact
562
- position. Otherwise, the reader will read next frames. (default: None)
563
- """
564
- if position is not None and position < self.length:
565
- self.video.set(cv2.CAP_PROP_POS_FRAMES, position)
566
- self.position = position
567
-
568
- success, frame = self.video.read()
569
- self.position = self.position + 1
570
-
571
- return frame[:, :, ::-1] if success else None
572
-
573
-
574
- class VideoWriter(object):
575
- """Defines the video writer.
576
-
577
- This class can be used to create a video.
578
-
579
- NOTE: `.avi` and `DIVX` is the most recommended codec format since it does not
580
- rely on other dependencies.
581
- """
582
-
583
- def __init__(self, path, frame_height, frame_width, fps=24, codec='DIVX'):
584
- """Creates the video writer."""
585
- self.path = path
586
- self.frame_height = frame_height
587
- self.frame_width = frame_width
588
- self.fps = fps
589
- self.codec = codec
590
-
591
- self.video = cv2.VideoWriter(filename=path,
592
- fourcc=cv2.VideoWriter_fourcc(*codec),
593
- fps=fps,
594
- frameSize=(frame_width, frame_height))
595
-
596
- def __del__(self):
597
- """Releases the opened video."""
598
- self.video.release()
599
-
600
- def write(self, frame):
601
- """Writes a target frame.
602
-
603
- NOTE: The input frame is assumed to be with `RGB` channel order.
604
- """
605
- self.video.write(frame[:, :, ::-1])
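Likewise, a hedged sketch of the grid-fusion helpers from the deleted visualizer.py above; the import path and the 'grid.jpg' output name are assumptions, and it only exercises functions defined in that file.

import numpy as np

from utils.visualizer import fuse_images, save_image  # assumed import path

# Sixteen random 64x64 RGB images, shape [num, height, width, channels], values in [0, 255].
images = np.random.randint(0, 256, size=(16, 64, 64, 3), dtype=np.uint8)

# fuse_images() calls get_grid_shape(16) internally and lays the tiles out on a 4x4 grid;
# each tile is resized to 32x32 and separated by a 2-pixel gap on a black background.
grid = fuse_images(images, image_size=32, row_spacing=2, col_spacing=2)
save_image('grid.jpg', grid)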
 
spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/stylegan2/op/__init__.py DELETED
@@ -1,2 +0,0 @@
1
- from .fused_act import FusedLeakyReLU, fused_leaky_relu
2
- from .upfirdn2d import upfirdn2d
 
spaces/AnandSoni2001/StockMarketPrediction/app.py DELETED
@@ -1,393 +0,0 @@
1
- #Import Libraries
2
- import streamlit as st
3
- import plotly.graph_objects as go
4
- import pandas as pd
5
- import plotly.express as px
6
- from yahoo_fin import stock_info
7
- from yahoo_fin.stock_info import *
8
- import math
9
- import numpy as np
10
- from sklearn.preprocessing import MinMaxScaler
11
- import joblib
12
-
13
- #Heading
14
- st.title('Research Project on Stock Market Analysis and Prediction')
15
- st.write("#")
16
-
17
- #TCS Data Taken
18
- tcsdaily = stock_info.get_data("TCS.NS", interval="1d")
19
- tcsmonthly= stock_info.get_data("TCS.NS", interval="1mo")
20
- tcsyearly = pd.read_csv('data/tcs-yearly.csv')
21
-
22
- #Reliance Data Taken
23
- reldaily = stock_info.get_data("RELIANCE.NS", interval="1d")
24
- relmonthly= stock_info.get_data("RELIANCE.NS", interval="1mo")
25
- relyearly = pd.read_csv('data/relianceind-yearly.csv')
26
-
27
- #Infosys Data Taken
28
- infdaily = stock_info.get_data("INFY.NS", interval="1d")
29
- infmonthly= stock_info.get_data("INFY.NS", interval="1mo")
30
- infyearly = pd.read_csv('data/infosys-yearly.csv')
31
-
32
- #Select Box
33
- comp = st.selectbox('Select a Company from the below options :', ('Tata Consultancy Services - TCS', 'Reliance Industries - RELIANCE', 'Infosys - INFY'))
34
-
35
- if comp == 'Tata Consultancy Services - TCS':
36
- col1, col2, col3, col4 = st.columns(4)
37
- x = round(stock_info.get_live_price("TCS.NS"),2)
38
- y = round(tcsdaily['close'].iloc[-2],2)
39
- tcs = get_stats('TCS.NS')['Value']
40
- col1.metric(label="Market Price", value=x, delta = round(x-y,2))
41
- col2.metric(label="52 Week High", value=tcs[3])
42
- col3.metric(label="52 Week Low", value=tcs[4])
43
- col4.metric(label="Return on Equity", value=tcs[34])
44
-
45
- col1, col2, col3, col4 = st.columns(4)
46
- col1.metric(label='Previous Close', value=y)
47
- col2.metric(label="Book Value Per Share", value=tcs[48])
48
- col3.metric(label='Earning Per Share', value=tcs[41])
49
- col4.metric(label="Dividend Yield", value=tcs[22])
50
-
51
-
52
- if comp == 'Reliance Industries - RELIANCE':
53
- col1, col2, col3, col4 = st.columns(4)
54
- x = round(stock_info.get_live_price("RELIANCE.NS"),2)
55
- y = round(reldaily['close'].iloc[-2],2)
56
- rel = get_stats('RELIANCE.NS')['Value']
57
- col1.metric(label="Market Price", value=x, delta = round(x-y,2))
58
- col2.metric(label="52 Week High", value=rel[3])
59
- col3.metric(label="52 Week Low", value=rel[4])
60
- col4.metric(label="Return on Equity", value='8.21%')
61
-
62
- col1, col2, col3, col4 = st.columns(4)
63
- col1.metric(label='Previous Close', value=y)
64
- col2.metric(label="Book Value Per Share", value=1202.45)
65
- col3.metric(label='Earning Per Share', value=93.96)
66
- col4.metric(label="Dividend Yield", value='0.36%')
67
-
68
- if comp == 'Infosys - INFY':
69
- col1, col2, col3, col4 = st.columns(4)
70
- x = round(stock_info.get_live_price("INFY.NS"),2)
71
- y = round(infdaily['close'].iloc[-2],2)
72
- inf = get_stats('INFY.NS')['Value']
73
- col1.metric(label="Market Price", value=x, delta = round(x-y,2))
74
- col2.metric(label="52 Week High", value=inf[3])
75
- col3.metric(label="52 Week Low", value=inf[4])
76
- col4.metric(label="Return on Equity", value=inf[34])
77
-
78
- col1, col2, col3, col4 = st.columns(4)
79
- col1.metric(label='Previous Close', value=y)
80
- col2.metric(label="Book Value Per Share", value=inf[48])
81
- col3.metric(label='Earning Per Share', value=inf[41])
82
- col4.metric(label="Dividend Yield", value=inf[22])
83
-
84
- #Tab for Hist Data
85
- st.write("#")
86
- st.subheader('Historic data : ')
87
- option1, option2, option3 = st.tabs(["Daily", "Monthly", "Yearly"])
88
-
89
- cl1, cl2, cl3, cl4 = st.columns(4)
90
- with cl1:
91
- ag1 = st.checkbox('Close', value='True')
92
- with cl2:
93
- ag2 = st.checkbox('Open', value='True')
94
- with cl3:
95
- ag3 = st.checkbox('High', value='True')
96
- with cl4:
97
- ag4 = st.checkbox('Low', value='True')
98
-
99
- with option1:
100
- opt = st.radio("Select timelength :", ('All Time', '1 Week', '1 Month', '1 Year'))
101
- st.write('<style>div.row-widget.stRadio > div{flex-direction:row;}</style>', unsafe_allow_html=True)
102
-
103
- if comp == 'Tata Consultancy Services - TCS':
104
- if opt=='All Time' :
105
- fig = px.line(tcsdaily, y='close',markers=False, title='Tata Consultancy Services daily data of all time')
106
- if opt=='1 Week' :
107
- fig = px.line(tcsdaily.tail(5), y='close',markers=False, title='Tata Consultancy Services daily data of 1 week')
108
- if opt=='1 Month' :
109
- fig = px.line(tcsdaily.tail(20), y='close',markers=False, title='Tata Consultancy Services daily data of 1 month')
110
- if opt=='1 Year' :
111
- fig = px.line(tcsdaily.tail(251), y='close',markers=False, title='Tata Consultancy Services daily data of 1 year')
112
- st.plotly_chart(fig, use_container_width=True)
113
-
114
- fig = go.Figure()
115
- if(ag1):
116
- fig.add_trace(go.Scatter(x=tcsdaily.index,y=tcsdaily['close'], name='Closing'))
117
- if(ag2):
118
- fig.add_trace(go.Scatter(x=tcsdaily.index,y=tcsdaily['open'], name = 'Opening', line=dict(color='yellow')))
119
- if(ag3):
120
- fig.add_trace(go.Scatter(x=tcsdaily.index,y=tcsdaily['high'], name = 'High', line=dict(color='green')))
121
- if(ag4):
122
- fig.add_trace(go.Scatter(x=tcsdaily.index,y=tcsdaily['low'], name = 'Low', line=dict(color='red')))
123
- fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters along close')
124
- st.plotly_chart(fig, use_container_width=True, title='Comparing other relevant parameters')
125
-
126
- if comp == 'Infosys - INFY':
127
- if opt=='All Time' :
128
- fig = px.line(infdaily, y='close',markers=False, title='Infosys daily data of all time')
129
- if opt=='1 Week' :
130
- fig = px.line(infdaily.tail(5), y='close',markers=False, title='Infosys daily data of 1 week')
131
- if opt=='1 Month' :
132
- fig = px.line(infdaily.tail(20), y='close',markers=False, title='Infosys daily data of 1 month')
133
- if opt=='1 Year' :
134
- fig = px.line(infdaily.tail(251), y='close',markers=False, title='Infosys daily data of 1 year')
135
- st.plotly_chart(fig, use_container_width=True)
136
-
137
- fig = go.Figure()
138
- if(ag1):
139
- fig.add_trace(go.Scatter(x=infdaily.index, y=infdaily['close'], name='Closing', line=dict(color='blue')))
140
- if(ag2):
141
- fig.add_trace(go.Scatter(x=infdaily.index,y=infdaily['open'], name = 'Opening', line=dict(color='yellow')))
142
- if(ag3):
143
- fig.add_trace(go.Scatter(x=infdaily.index,y=infdaily['high'], name = 'High', line=dict(color='green')))
144
- if(ag4):
145
- fig.add_trace(go.Scatter(x=infdaily.index,y=infdaily['low'], name = 'Low', line=dict(color='red')))
146
- fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters')
147
- st.plotly_chart(fig, use_container_width=True)
148
-
149
- if comp == 'Reliance Industries - RELIANCE':
150
- if opt=='All Time' :
151
- fig = px.line(reldaily, y='close',markers=False, title='Reliance Industries daily data of all time')
152
- if opt=='1 Week' :
153
- fig = px.line(reldaily.tail(5), y='close',markers=False, title='Reliance Industries daily data of 1 week')
154
- if opt=='1 Month' :
155
- fig = px.line(reldaily.tail(20), y='close',markers=False, title='Reliance Industries daily data of 1 month')
156
- if opt=='1 Year' :
157
- fig = px.line(reldaily.tail(251), y='close',markers=False, title='Reliance Industries daily data of 1 year')
158
- st.plotly_chart(fig, use_container_width=True)
159
-
160
- fig = go.Figure()
161
- if(ag1):
162
- fig.add_trace(go.Scatter(x=reldaily.index, y=reldaily['close'], name='Closing', line=dict(color='blue')))
163
- if(ag2):
164
- fig.add_trace(go.Scatter(x=reldaily.index,y=reldaily['open'], name = 'Opening', line=dict(color='yellow')))
165
- if(ag3):
166
- fig.add_trace(go.Scatter(x=reldaily.index,y=reldaily['high'], name = 'High', line=dict(color='green')))
167
- if(ag4):
168
- fig.add_trace(go.Scatter(x=reldaily.index,y=reldaily['low'], name = 'Low', line=dict(color='red')))
169
- fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters along close')
170
- st.plotly_chart(fig, use_container_width=True)
171
-
172
- with option2:
173
- if comp == 'Tata Consultancy Services - TCS':
174
- fig = px.line(tcsmonthly,y='close', markers=False, title='Tata Consultancy Services monthly data')
175
- st.plotly_chart(fig, use_container_width=True)
176
-
177
- fig = go.Figure()
178
- if(ag1):
179
- fig.add_trace(go.Scatter(x=tcsmonthly.index,y=tcsmonthly['close'], name='Closing', line=dict(color='blue')))
180
- if(ag2):
181
- fig.add_trace(go.Scatter(x=tcsmonthly.index,y=tcsmonthly['open'], name = 'Opening', line=dict(color='yellow')))
182
- if(ag3):
183
- fig.add_trace(go.Scatter(x=tcsmonthly.index,y=tcsmonthly['high'], name = 'High', line=dict(color='green')))
184
- if(ag4):
185
- fig.add_trace(go.Scatter(x=tcsmonthly.index,y=tcsmonthly['low'], name = 'Low', line=dict(color='red')))
186
- fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
187
- st.plotly_chart(fig, use_container_width=True)
188
-
189
- if comp == 'Infosys - INFY':
190
- fig = px.line(infmonthly, y='close',markers=False, title='Infosys monthly data')
191
- st.plotly_chart(fig, use_container_width=True)
192
-
193
- fig = go.Figure()
194
- if(ag1):
195
- fig.add_trace(go.Scatter(x=infmonthly.index, y=infmonthly['close'], name='Closing', line=dict(color='blue')))
196
- if(ag2):
197
- fig.add_trace(go.Scatter(x=infmonthly.index,y=infmonthly['open'], name = 'Opening', line=dict(color='yellow')))
198
- if(ag3):
199
- fig.add_trace(go.Scatter(x=infmonthly.index,y=infmonthly['high'], name = 'High', line=dict(color='green')))
200
- if(ag4):
201
- fig.add_trace(go.Scatter(x=infmonthly.index,y=infmonthly['low'], name = 'Low', line=dict(color='red')))
202
- fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
203
- st.plotly_chart(fig, use_container_width=True)
204
-
205
- if comp == 'Reliance Industries - RELIANCE':
206
- fig = px.line(relmonthly, y='close',markers=False, title='Reliance Industries monthly data')
207
- st.plotly_chart(fig, use_container_width=True)
208
-
209
- fig = go.Figure()
210
- if(ag1):
211
- fig.add_trace(go.Scatter(x=relmonthly.index,y=relmonthly['close'], name='Closing', line=dict(color='blue')))
212
- if(ag2):
213
- fig.add_trace(go.Scatter(x=relmonthly.index,y=relmonthly['open'], name = 'Opening', line=dict(color='yellow')))
214
- if(ag3):
215
- fig.add_trace(go.Scatter(x=relmonthly.index,y=relmonthly['high'], name = 'High', line=dict(color='green')))
216
- if(ag4):
217
- fig.add_trace(go.Scatter(x=relmonthly.index,y=relmonthly['low'], name = 'Low', line=dict(color='red')))
218
- fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
219
- st.plotly_chart(fig, use_container_width=True)
220
-
221
- with option3:
222
- if comp == 'Tata Consultancy Services - TCS':
223
- fig = px.line(tcsyearly, x='Year', y='Close Price',markers=True, title='Tata Consultancy Services Yearly Data from 2004')
224
- st.plotly_chart(fig, use_container_width=True)
225
-
226
- fig = go.Figure()
227
- if(ag1):
228
- fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Close Price'], name='Closing', line=dict(color='blue')))
229
- if(ag2):
230
- fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Open Price'], name = 'Opening', line=dict(color='yellow')))
231
- if(ag3):
232
- fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['High Price'], name = 'High', line=dict(color='green')))
233
- if(ag4):
234
- fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Low Price'], name = 'Low', line=dict(color='red')))
235
- fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters along close price')
236
- st.plotly_chart(fig, use_container_width=True)
237
-
238
- if comp == 'Infosys - INFY':
239
- fig = px.line(infyearly, x='Year', y='Close Price',markers=True, title='Infosys Yearly Data from 2004')
240
- st.plotly_chart(fig, use_container_width=True)
241
-
242
- fig = go.Figure()
243
- if(ag1):
244
- fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Close Price'], name='Closing', line=dict(color='blue')))
245
- if(ag2):
246
- fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Open Price'], name = 'Opening', line=dict(color='yellow')))
247
- if(ag3):
248
- fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['High Price'], name = 'High', line=dict(color='green')))
249
- if(ag4):
250
- fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Low Price'], name = 'Low', line=dict(color='red')))
251
- fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters')
252
- st.plotly_chart(fig, use_container_width=True)
253
-
254
- if comp == 'Reliance Industries - RELIANCE':
255
- fig = px.line(relyearly, x='Year', y='Close Price',markers=True, title='Reliance Industries Yearly Data from 2004')
256
- st.plotly_chart(fig, use_container_width=True)
257
-
258
- fig = go.Figure()
259
- if(ag1):
260
- fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Close Price'], name='Closing', line=dict(color='blue')))
261
- if(ag2):
262
- fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Open Price'], name = 'Opening', line=dict(color='yellow')))
263
- if(ag3):
264
- fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['High Price'], name = 'High', line=dict(color='green')))
265
- if(ag4):
266
- fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Low Price'], name = 'Low', line=dict(color='red')))
267
- fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters')
268
- st.plotly_chart(fig, use_container_width=True)
269
-
270
- #Predictions
271
- st.write("#")
272
- st.subheader('Predict : ')
273
-
274
- if st.button('Click Here'):
275
- if comp == 'Tata Consultancy Services - TCS':
276
- x = round(stock_info.get_live_price("TCS.NS"),2)
277
- tcsweekly = stock_info.get_data("TCS.NS", interval="1d")
278
- tcsweekly=tcsweekly.dropna()
279
- values = tcsweekly['close'].values
280
- data_len = math.ceil(len(values)*0.8)
281
- scaler = MinMaxScaler(feature_range=(0,1))
282
- scaled_data = scaler.fit_transform(values.reshape(-1,1))
283
- test_data = scaled_data[data_len-60: , : ]
284
- x_test = []
285
- for i in range(60, len(test_data)):
286
- x_test.append(test_data[i-60:i, 0])
287
- x_test = np.array(x_test)
288
- x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
289
- new = joblib.load('tcsdail_1.pkl')
290
- ans = new.predict(x_test)
291
- ans1 = scaler.inverse_transform(ans)
292
- val = np.around(ans1[-1][0], decimals=2)
293
- st.metric(label="Prediction", value=val, delta = round(val-x,2))
294
-
295
- if comp == 'Reliance Industries - RELIANCE':
296
- x = round(stock_info.get_live_price("RELIANCE.NS"),2)
297
- relweekly = stock_info.get_data("RELIANCE.NS", interval="1d")
298
- relweekly=relweekly.dropna()
299
- values = relweekly['close'].values
300
- data_len = math.ceil(len(values)*0.8)
301
- scaler = MinMaxScaler(feature_range=(0,1))
302
- scaled_data = scaler.fit_transform(values.reshape(-1,1))
303
- test_data = scaled_data[data_len-60: , : ]
304
- x_test = []
305
- for i in range(60, len(test_data)):
306
- x_test.append(test_data[i-60:i, 0])
307
- x_test = np.array(x_test)
308
- x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
309
- new = joblib.load('reldail_1.pkl')
310
- ans = new.predict(x_test)
311
- ans1 = scaler.inverse_transform(ans)
312
- val = np.around(ans1[-1][0], decimals=2)
313
- st.metric(label="Prediction", value=val, delta = round(val-x,2))
314
-
315
- if comp == 'Infosys - INFY':
316
- x = round(stock_info.get_live_price("INFY.NS"),2)
317
- infweekly = stock_info.get_data("INFY.NS", interval="1d")
318
- infweekly=infweekly.dropna()
319
- values = infweekly['close'].values
320
- data_len = math.ceil(len(values)*0.8)
321
- scaler = MinMaxScaler(feature_range=(0,1))
322
- scaled_data = scaler.fit_transform(values.reshape(-1,1))
323
- test_data = scaled_data[data_len-60: , : ]
324
- x_test = []
325
- for i in range(60, len(test_data)):
326
- x_test.append(test_data[i-60:i, 0])
327
- x_test = np.array(x_test)
328
- x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
329
- new = joblib.load('infdail_1.pkl')
330
- ans = new.predict(x_test)
331
- ans1 = scaler.inverse_transform(ans)
332
- val = np.around(ans1[-1][0], decimals=2)
333
- st.metric(label="Prediction", value=val, delta = round(val-x,2))
334
-
335
-
336
- #Tab for Hist Data
337
- st.write("#")
338
- st.subheader('Financial data : ')
339
- a1, a2, a3 = st.tabs(["Revenue & Profit", "Net Worth", "Shareholding Pattern"])
340
-
341
- tier=['Promoters', 'Mutual Funds', 'Retail', 'Foreign Institutions','Others']
342
- y=['2018', '2019', '2020', '2021', '2022']
343
-
344
- with a1:
345
- st.caption('All values in Crs')
346
- if comp == 'Infosys - INFY':
347
- chart_data = pd.DataFrame([[70522,16029], [82675,15404], [90791,16594], [100472,19351], [121641,22110]],
348
- index=y, columns=["Revenue", "Profit"])
349
- st.bar_chart(chart_data, height=350)
350
-
351
- if comp == 'Tata Consultancy Services - TCS':
352
- chart_data = pd.DataFrame([[123104,25826], [146463,31472], [156949,32430], [164177,32430], [191754,38327]],
353
- index=y, columns=["Revenue", "Profit"])
354
- st.bar_chart(chart_data, height=350)
355
-
356
- if comp == 'Reliance Industries - RELIANCE':
357
- chart_data = pd.DataFrame([[408265,36075], [583094,39588], [611645,39354], [486326,49128], [721634,60705]],
358
- index=y, columns=["Revenue", "Profit"])
359
- st.bar_chart(chart_data, height=350)
360
-
361
-
362
- with a2:
363
- st.caption('All values in Crs')
364
- if comp == 'Infosys - INFY':
365
- chart_data = pd.DataFrame([64923, 64948, 65450, 76351, 75350], index=y, columns=['Net Worth'])
366
- st.bar_chart(chart_data, height=350)
367
-
368
- if comp == 'Tata Consultancy Services - TCS':
369
- chart_data = pd.DataFrame([85128, 89446, 84126, 86433, 89139], index=y, columns=['Net Worth'])
370
- st.bar_chart(chart_data, height=350)
371
-
372
- if comp == 'Reliance Industries - RELIANCE':
373
- chart_data = pd.DataFrame([293506, 387112, 453331, 700172, 779485], index=y, columns=['Net Worth'])
374
- st.bar_chart(chart_data, height=350)
375
-
376
- with a3:
377
- st.caption('As of March, 2023')
378
- if comp == 'Infosys - INFY':
379
- x = [15.11, 17.71, 18.22, 36.28, 12.68]
380
- fig = px.pie(values=x, names=tier)
381
- st.plotly_chart(fig, use_container_width=True, height=350)
382
-
383
- if comp == 'Tata Consultancy Services - TCS':
384
- x = [72.30, 3.31, 5.96, 12.94, 5.49]
385
- fig = px.pie(values=x, names=tier)
386
- st.plotly_chart(fig, use_container_width=True, height=350)
387
-
388
- if comp == 'Reliance Industries - RELIANCE':
389
- x = [50.49, 5.81, 11.64, 23.43, 8.63]
390
- fig = px.pie(values=x, names=tier)
391
- st.plotly_chart(fig, use_container_width=True, height=350)
392
-
393
- st.caption('The Web Application was made by Anand Soni and Deepak Rathore.')
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py DELETED
@@ -1,131 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 HuggingFace Inc.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import random
17
- import unittest
18
-
19
- import numpy as np
20
- import torch
21
-
22
- from diffusers import DDIMScheduler, LDMSuperResolutionPipeline, UNet2DModel, VQModel
23
- from diffusers.utils import PIL_INTERPOLATION, floats_tensor, load_image, slow, torch_device
24
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch
25
-
26
-
27
- enable_full_determinism()
28
-
29
-
30
- class LDMSuperResolutionPipelineFastTests(unittest.TestCase):
31
- @property
32
- def dummy_image(self):
33
- batch_size = 1
34
- num_channels = 3
35
- sizes = (32, 32)
36
-
37
- image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device)
38
- return image
39
-
40
- @property
41
- def dummy_uncond_unet(self):
42
- torch.manual_seed(0)
43
- model = UNet2DModel(
44
- block_out_channels=(32, 64),
45
- layers_per_block=2,
46
- sample_size=32,
47
- in_channels=6,
48
- out_channels=3,
49
- down_block_types=("DownBlock2D", "AttnDownBlock2D"),
50
- up_block_types=("AttnUpBlock2D", "UpBlock2D"),
51
- )
52
- return model
53
-
54
- @property
55
- def dummy_vq_model(self):
56
- torch.manual_seed(0)
57
- model = VQModel(
58
- block_out_channels=[32, 64],
59
- in_channels=3,
60
- out_channels=3,
61
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
62
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
63
- latent_channels=3,
64
- )
65
- return model
66
-
67
- def test_inference_superresolution(self):
68
- device = "cpu"
69
- unet = self.dummy_uncond_unet
70
- scheduler = DDIMScheduler()
71
- vqvae = self.dummy_vq_model
72
-
73
- ldm = LDMSuperResolutionPipeline(unet=unet, vqvae=vqvae, scheduler=scheduler)
74
- ldm.to(device)
75
- ldm.set_progress_bar_config(disable=None)
76
-
77
- init_image = self.dummy_image.to(device)
78
-
79
- generator = torch.Generator(device=device).manual_seed(0)
80
- image = ldm(image=init_image, generator=generator, num_inference_steps=2, output_type="numpy").images
81
-
82
- image_slice = image[0, -3:, -3:, -1]
83
-
84
- assert image.shape == (1, 64, 64, 3)
85
- expected_slice = np.array([0.8678, 0.8245, 0.6381, 0.6830, 0.4385, 0.5599, 0.4641, 0.6201, 0.5150])
86
-
87
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
88
-
89
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
90
- def test_inference_superresolution_fp16(self):
91
- unet = self.dummy_uncond_unet
92
- scheduler = DDIMScheduler()
93
- vqvae = self.dummy_vq_model
94
-
95
- # put models in fp16
96
- unet = unet.half()
97
- vqvae = vqvae.half()
98
-
99
- ldm = LDMSuperResolutionPipeline(unet=unet, vqvae=vqvae, scheduler=scheduler)
100
- ldm.to(torch_device)
101
- ldm.set_progress_bar_config(disable=None)
102
-
103
- init_image = self.dummy_image.to(torch_device)
104
-
105
- image = ldm(init_image, num_inference_steps=2, output_type="numpy").images
106
-
107
- assert image.shape == (1, 64, 64, 3)
108
-
109
-
110
- @slow
111
- @require_torch
112
- class LDMSuperResolutionPipelineIntegrationTests(unittest.TestCase):
113
- def test_inference_superresolution(self):
114
- init_image = load_image(
115
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
116
- "/vq_diffusion/teddy_bear_pool.png"
117
- )
118
- init_image = init_image.resize((64, 64), resample=PIL_INTERPOLATION["lanczos"])
119
-
120
- ldm = LDMSuperResolutionPipeline.from_pretrained("duongna/ldm-super-resolution", device_map="auto")
121
- ldm.set_progress_bar_config(disable=None)
122
-
123
- generator = torch.manual_seed(0)
124
- image = ldm(image=init_image, generator=generator, num_inference_steps=20, output_type="numpy").images
125
-
126
- image_slice = image[0, -3:, -3:, -1]
127
-
128
- assert image.shape == (1, 256, 256, 3)
129
- expected_slice = np.array([0.7644, 0.7679, 0.7642, 0.7633, 0.7666, 0.7560, 0.7425, 0.7257, 0.6907])
130
-
131
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
 
spaces/Andy1621/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py DELETED
@@ -1,52 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/fast_rcnn_r50_fpn.py',
3
- '../_base_/datasets/coco_detection.py',
4
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
5
- ]
6
- dataset_type = 'CocoDataset'
7
- data_root = 'data/coco/'
8
- img_norm_cfg = dict(
9
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
10
- train_pipeline = [
11
- dict(type='LoadImageFromFile'),
12
- dict(type='LoadProposals', num_max_proposals=2000),
13
- dict(type='LoadAnnotations', with_bbox=True),
14
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
15
- dict(type='RandomFlip', flip_ratio=0.5),
16
- dict(type='Normalize', **img_norm_cfg),
17
- dict(type='Pad', size_divisor=32),
18
- dict(type='DefaultFormatBundle'),
19
- dict(type='Collect', keys=['img', 'proposals', 'gt_bboxes', 'gt_labels']),
20
- ]
21
- test_pipeline = [
22
- dict(type='LoadImageFromFile'),
23
- dict(type='LoadProposals', num_max_proposals=None),
24
- dict(
25
- type='MultiScaleFlipAug',
26
- img_scale=(1333, 800),
27
- flip=False,
28
- transforms=[
29
- dict(type='Resize', keep_ratio=True),
30
- dict(type='RandomFlip'),
31
- dict(type='Normalize', **img_norm_cfg),
32
- dict(type='Pad', size_divisor=32),
33
- dict(type='ImageToTensor', keys=['img']),
34
- dict(type='ToTensor', keys=['proposals']),
35
- dict(
36
- type='ToDataContainer',
37
- fields=[dict(key='proposals', stack=False)]),
38
- dict(type='Collect', keys=['img', 'proposals']),
39
- ])
40
- ]
41
- data = dict(
42
- samples_per_gpu=2,
43
- workers_per_gpu=2,
44
- train=dict(
45
- proposal_file=data_root + 'proposals/rpn_r50_fpn_1x_train2017.pkl',
46
- pipeline=train_pipeline),
47
- val=dict(
48
- proposal_file=data_root + 'proposals/rpn_r50_fpn_1x_val2017.pkl',
49
- pipeline=test_pipeline),
50
- test=dict(
51
- proposal_file=data_root + 'proposals/rpn_r50_fpn_1x_val2017.pkl',
52
- pipeline=test_pipeline))
 
spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py'
- model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py DELETED
@@ -1,140 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/cascade_mask_rcnn_swin_fpn.py',
3
- '../_base_/datasets/coco_instance.py',
4
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
5
- ]
6
-
7
- model = dict(
8
- backbone=dict(
9
- embed_dim=96,
10
- depths=[2, 2, 6, 2],
11
- num_heads=[3, 6, 12, 24],
12
- window_size=7,
13
- ape=False,
14
- drop_path_rate=0.2,
15
- patch_norm=True,
16
- use_checkpoint=False
17
- ),
18
- neck=dict(in_channels=[96, 192, 384, 768]),
19
- roi_head=dict(
20
- bbox_head=[
21
- dict(
22
- type='ConvFCBBoxHead',
23
- num_shared_convs=4,
24
- num_shared_fcs=1,
25
- in_channels=256,
26
- conv_out_channels=256,
27
- fc_out_channels=1024,
28
- roi_feat_size=7,
29
- num_classes=80,
30
- bbox_coder=dict(
31
- type='DeltaXYWHBBoxCoder',
32
- target_means=[0., 0., 0., 0.],
33
- target_stds=[0.1, 0.1, 0.2, 0.2]),
34
- reg_class_agnostic=False,
35
- reg_decoded_bbox=True,
36
- norm_cfg=dict(type='SyncBN', requires_grad=True),
37
- loss_cls=dict(
38
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
39
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
40
- dict(
41
- type='ConvFCBBoxHead',
42
- num_shared_convs=4,
43
- num_shared_fcs=1,
44
- in_channels=256,
45
- conv_out_channels=256,
46
- fc_out_channels=1024,
47
- roi_feat_size=7,
48
- num_classes=80,
49
- bbox_coder=dict(
50
- type='DeltaXYWHBBoxCoder',
51
- target_means=[0., 0., 0., 0.],
52
- target_stds=[0.05, 0.05, 0.1, 0.1]),
53
- reg_class_agnostic=False,
54
- reg_decoded_bbox=True,
55
- norm_cfg=dict(type='SyncBN', requires_grad=True),
56
- loss_cls=dict(
57
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
58
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0)),
59
- dict(
60
- type='ConvFCBBoxHead',
61
- num_shared_convs=4,
62
- num_shared_fcs=1,
63
- in_channels=256,
64
- conv_out_channels=256,
65
- fc_out_channels=1024,
66
- roi_feat_size=7,
67
- num_classes=80,
68
- bbox_coder=dict(
69
- type='DeltaXYWHBBoxCoder',
70
- target_means=[0., 0., 0., 0.],
71
- target_stds=[0.033, 0.033, 0.067, 0.067]),
72
- reg_class_agnostic=False,
73
- reg_decoded_bbox=True,
74
- norm_cfg=dict(type='SyncBN', requires_grad=True),
75
- loss_cls=dict(
76
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
77
- loss_bbox=dict(type='GIoULoss', loss_weight=10.0))
78
- ]))
79
-
80
- img_norm_cfg = dict(
81
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
82
-
83
- # augmentation strategy originates from DETR / Sparse RCNN
84
- train_pipeline = [
85
- dict(type='LoadImageFromFile'),
86
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
87
- dict(type='RandomFlip', flip_ratio=0.5),
88
- dict(type='AutoAugment',
89
- policies=[
90
- [
91
- dict(type='Resize',
92
- img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333),
93
- (608, 1333), (640, 1333), (672, 1333), (704, 1333),
94
- (736, 1333), (768, 1333), (800, 1333)],
95
- multiscale_mode='value',
96
- keep_ratio=True)
97
- ],
98
- [
99
- dict(type='Resize',
100
- img_scale=[(400, 1333), (500, 1333), (600, 1333)],
101
- multiscale_mode='value',
102
- keep_ratio=True),
103
- dict(type='RandomCrop',
104
- crop_type='absolute_range',
105
- crop_size=(384, 600),
106
- allow_negative_crop=True),
107
- dict(type='Resize',
108
- img_scale=[(480, 1333), (512, 1333), (544, 1333),
109
- (576, 1333), (608, 1333), (640, 1333),
110
- (672, 1333), (704, 1333), (736, 1333),
111
- (768, 1333), (800, 1333)],
112
- multiscale_mode='value',
113
- override=True,
114
- keep_ratio=True)
115
- ]
116
- ]),
117
- dict(type='Normalize', **img_norm_cfg),
118
- dict(type='Pad', size_divisor=32),
119
- dict(type='DefaultFormatBundle'),
120
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
121
- ]
122
- data = dict(train=dict(pipeline=train_pipeline))
123
-
124
- optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05,
125
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
126
- 'relative_position_bias_table': dict(decay_mult=0.),
127
- 'norm': dict(decay_mult=0.)}))
128
- lr_config = dict(step=[27, 33])
129
- runner = dict(type='EpochBasedRunnerAmp', max_epochs=36)
130
-
131
- # do not use mmdet version fp16
132
- fp16 = None
133
- optimizer_config = dict(
134
- type="DistOptimizerHook",
135
- update_interval=1,
136
- grad_clip=None,
137
- coalesce=True,
138
- bucket_size_mb=-1,
139
- use_fp16=True,
140
- )
 
spaces/Ariharasudhan/YoloV5/utils/torch_utils.py DELETED
@@ -1,431 +0,0 @@
1
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
- """
3
- PyTorch utils
4
- """
5
-
6
- import math
7
- import os
8
- import platform
9
- import subprocess
10
- import time
11
- import warnings
12
- from contextlib import contextmanager
13
- from copy import deepcopy
14
- from pathlib import Path
15
-
16
- import torch
17
- import torch.distributed as dist
18
- import torch.nn as nn
19
- import torch.nn.functional as F
20
- from torch.nn.parallel import DistributedDataParallel as DDP
21
-
22
- from utils.general import LOGGER, check_version, colorstr, file_date, git_describe
23
-
24
- LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
25
- RANK = int(os.getenv('RANK', -1))
26
- WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
27
-
28
- try:
29
- import thop # for FLOPs computation
30
- except ImportError:
31
- thop = None
32
-
33
- # Suppress PyTorch warnings
34
- warnings.filterwarnings('ignore', message='User provided device_type of \'cuda\', but CUDA is not available. Disabling')
35
-
36
-
37
- def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')):
38
- # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator
39
- def decorate(fn):
40
- return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn)
41
-
42
- return decorate
43
-
44
-
45
- def smartCrossEntropyLoss(label_smoothing=0.0):
46
- # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0
47
- if check_version(torch.__version__, '1.10.0'):
48
- return nn.CrossEntropyLoss(label_smoothing=label_smoothing)
49
- if label_smoothing > 0:
50
- LOGGER.warning(f'WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0')
51
- return nn.CrossEntropyLoss()
52
-
53
-
54
- def smart_DDP(model):
55
- # Model DDP creation with checks
56
- assert not check_version(torch.__version__, '1.12.0', pinned=True), \
57
- 'torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. ' \
58
- 'Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395'
59
- if check_version(torch.__version__, '1.11.0'):
60
- return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True)
61
- else:
62
- return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
63
-
64
-
65
- def reshape_classifier_output(model, n=1000):
66
- # Update a TorchVision classification model to class count 'n' if required
67
- from models.common import Classify
68
- name, m = list((model.model if hasattr(model, 'model') else model).named_children())[-1] # last module
69
- if isinstance(m, Classify): # YOLOv5 Classify() head
70
- if m.linear.out_features != n:
71
- m.linear = nn.Linear(m.linear.in_features, n)
72
- elif isinstance(m, nn.Linear): # ResNet, EfficientNet
73
- if m.out_features != n:
74
- setattr(model, name, nn.Linear(m.in_features, n))
75
- elif isinstance(m, nn.Sequential):
76
- types = [type(x) for x in m]
77
- if nn.Linear in types:
78
- i = types.index(nn.Linear) # nn.Linear index
79
- if m[i].out_features != n:
80
- m[i] = nn.Linear(m[i].in_features, n)
81
- elif nn.Conv2d in types:
82
- i = types.index(nn.Conv2d) # nn.Conv2d index
83
- if m[i].out_channels != n:
84
- m[i] = nn.Conv2d(m[i].in_channels, n, m[i].kernel_size, m[i].stride, bias=m[i].bias)
85
-
86
-
87
- @contextmanager
88
- def torch_distributed_zero_first(local_rank: int):
89
- # Decorator to make all processes in distributed training wait for each local_master to do something
90
- if local_rank not in [-1, 0]:
91
- dist.barrier(device_ids=[local_rank])
92
- yield
93
- if local_rank == 0:
94
- dist.barrier(device_ids=[0])
95
-
96
-
97
- def device_count():
98
- # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows
99
- assert platform.system() in ('Linux', 'Windows'), 'device_count() only supported on Linux or Windows'
100
- try:
101
- cmd = 'nvidia-smi -L | wc -l' if platform.system() == 'Linux' else 'nvidia-smi -L | find /c /v ""' # Windows
102
- return int(subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1])
103
- except Exception:
104
- return 0
105
-
106
-
107
- def select_device(device='', batch_size=0, newline=True):
108
- # device = None or 'cpu' or 0 or '0' or '0,1,2,3'
109
- s = f'YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} '
110
- device = str(device).strip().lower().replace('cuda:', '').replace('none', '') # to string, 'cuda:0' to '0'
111
- cpu = device == 'cpu'
112
- mps = device == 'mps' # Apple Metal Performance Shaders (MPS)
113
- if cpu or mps:
114
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
115
- elif device: # non-cpu device requested
116
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available()
117
- assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \
118
- f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
119
-
120
- if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available
121
- devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
122
- n = len(devices) # device count
123
- if n > 1 and batch_size > 0: # check batch_size is divisible by device_count
124
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
125
- space = ' ' * (len(s) + 1)
126
- for i, d in enumerate(devices):
127
- p = torch.cuda.get_device_properties(i)
128
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB
129
- arg = 'cuda:0'
130
- elif mps and getattr(torch, 'has_mps', False) and torch.backends.mps.is_available(): # prefer MPS if available
131
- s += 'MPS\n'
132
- arg = 'mps'
133
- else: # revert to CPU
134
- s += 'CPU\n'
135
- arg = 'cpu'
136
-
137
- if not newline:
138
- s = s.rstrip()
139
- LOGGER.info(s)
140
- return torch.device(arg)
141
-
142
-
143
- def time_sync():
144
- # PyTorch-accurate time
145
- if torch.cuda.is_available():
146
- torch.cuda.synchronize()
147
- return time.time()
148
-
149
-
150
- def profile(input, ops, n=10, device=None):
151
- """ YOLOv5 speed/memory/FLOPs profiler
152
- Usage:
153
- input = torch.randn(16, 3, 640, 640)
154
- m1 = lambda x: x * torch.sigmoid(x)
155
- m2 = nn.SiLU()
156
- profile(input, [m1, m2], n=100) # profile over 100 iterations
157
- """
158
- results = []
159
- if not isinstance(device, torch.device):
160
- device = select_device(device)
161
- print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}"
162
- f"{'input':>24s}{'output':>24s}")
163
-
164
- for x in input if isinstance(input, list) else [input]:
165
- x = x.to(device)
166
- x.requires_grad = True
167
- for m in ops if isinstance(ops, list) else [ops]:
168
- m = m.to(device) if hasattr(m, 'to') else m # device
169
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m
170
- tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward
171
- try:
172
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs
173
- except Exception:
174
- flops = 0
175
-
176
- try:
177
- for _ in range(n):
178
- t[0] = time_sync()
179
- y = m(x)
180
- t[1] = time_sync()
181
- try:
182
- _ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward()
183
- t[2] = time_sync()
184
- except Exception: # no backward method
185
- # print(e) # for debug
186
- t[2] = float('nan')
187
- tf += (t[1] - t[0]) * 1000 / n # ms per op forward
188
- tb += (t[2] - t[1]) * 1000 / n # ms per op backward
189
- mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB)
190
- s_in, s_out = (tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' for x in (x, y)) # shapes
191
- p = sum(x.numel() for x in m.parameters()) if isinstance(m, nn.Module) else 0 # parameters
192
- print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}')
193
- results.append([p, flops, mem, tf, tb, s_in, s_out])
194
- except Exception as e:
195
- print(e)
196
- results.append(None)
197
- torch.cuda.empty_cache()
198
- return results
199
-
200
-
201
- def is_parallel(model):
202
- # Returns True if model is of type DP or DDP
203
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
204
-
205
-
206
- def de_parallel(model):
207
- # De-parallelize a model: returns single-GPU model if model is of type DP or DDP
208
- return model.module if is_parallel(model) else model
209
-
210
-
211
- def initialize_weights(model):
212
- for m in model.modules():
213
- t = type(m)
214
- if t is nn.Conv2d:
215
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
216
- elif t is nn.BatchNorm2d:
217
- m.eps = 1e-3
218
- m.momentum = 0.03
219
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
220
- m.inplace = True
221
-
222
-
223
- def find_modules(model, mclass=nn.Conv2d):
224
- # Finds layer indices matching module class 'mclass'
225
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
226
-
227
-
228
- def sparsity(model):
229
- # Return global model sparsity
230
- a, b = 0, 0
231
- for p in model.parameters():
232
- a += p.numel()
233
- b += (p == 0).sum()
234
- return b / a
235
-
236
-
237
- def prune(model, amount=0.3):
238
- # Prune model to requested global sparsity
239
- import torch.nn.utils.prune as prune
240
- for name, m in model.named_modules():
241
- if isinstance(m, nn.Conv2d):
242
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
243
- prune.remove(m, 'weight') # make permanent
244
- LOGGER.info(f'Model pruned to {sparsity(model):.3g} global sparsity')
245
-
246
-
247
- def fuse_conv_and_bn(conv, bn):
248
- # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
249
- fusedconv = nn.Conv2d(conv.in_channels,
250
- conv.out_channels,
251
- kernel_size=conv.kernel_size,
252
- stride=conv.stride,
253
- padding=conv.padding,
254
- dilation=conv.dilation,
255
- groups=conv.groups,
256
- bias=True).requires_grad_(False).to(conv.weight.device)
257
-
258
- # Prepare filters
259
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
260
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
261
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
262
-
263
- # Prepare spatial bias
264
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
265
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
266
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
267
-
268
- return fusedconv
269
-
270
-
271
- def model_info(model, verbose=False, imgsz=640):
272
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
273
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
274
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
275
- if verbose:
276
- print(f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}")
277
- for i, (name, p) in enumerate(model.named_parameters()):
278
- name = name.replace('module_list.', '')
279
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
280
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
281
-
282
- try: # FLOPs
283
- p = next(model.parameters())
284
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 # max stride
285
- im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format
286
- flops = thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs
287
- imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz] # expand if int/float
288
- fs = f', {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs' # 640x640 GFLOPs
289
- except Exception:
290
- fs = ''
291
-
292
- name = Path(model.yaml_file).stem.replace('yolov5', 'YOLOv5') if hasattr(model, 'yaml_file') else 'Model'
293
- LOGGER.info(f"{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
294
-
295
-
296
- def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
297
- # Scales img(bs,3,y,x) by ratio constrained to gs-multiple
298
- if ratio == 1.0:
299
- return img
300
- h, w = img.shape[2:]
301
- s = (int(h * ratio), int(w * ratio)) # new size
302
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
303
- if not same_shape: # pad/crop img
304
- h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w))
305
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
306
-
307
-
308
- def copy_attr(a, b, include=(), exclude=()):
309
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
310
- for k, v in b.__dict__.items():
311
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
312
- continue
313
- else:
314
- setattr(a, k, v)
315
-
316
-
317
- def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, decay=1e-5):
318
- # YOLOv5 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay
319
- g = [], [], [] # optimizer parameter groups
320
- bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k) # normalization layers, i.e. BatchNorm2d()
321
- for v in model.modules():
322
- for p_name, p in v.named_parameters(recurse=0):
323
- if p_name == 'bias': # bias (no decay)
324
- g[2].append(p)
325
- elif p_name == 'weight' and isinstance(v, bn): # weight (no decay)
326
- g[1].append(p)
327
- else:
328
- g[0].append(p) # weight (with decay)
329
-
330
- if name == 'Adam':
331
- optimizer = torch.optim.Adam(g[2], lr=lr, betas=(momentum, 0.999)) # adjust beta1 to momentum
332
- elif name == 'AdamW':
333
- optimizer = torch.optim.AdamW(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0)
334
- elif name == 'RMSProp':
335
- optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum)
336
- elif name == 'SGD':
337
- optimizer = torch.optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True)
338
- else:
339
- raise NotImplementedError(f'Optimizer {name} not implemented.')
340
-
341
- optimizer.add_param_group({'params': g[0], 'weight_decay': decay}) # add g0 with weight_decay
342
- optimizer.add_param_group({'params': g[1], 'weight_decay': 0.0}) # add g1 (BatchNorm2d weights)
343
- LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups "
344
- f"{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias")
345
- return optimizer
346
-
347
-
348
- def smart_hub_load(repo='ultralytics/yolov5', model='yolov5s', **kwargs):
349
- # YOLOv5 torch.hub.load() wrapper with smart error/issue handling
350
- if check_version(torch.__version__, '1.9.1'):
351
- kwargs['skip_validation'] = True # validation causes GitHub API rate limit errors
352
- if check_version(torch.__version__, '1.12.0'):
353
- kwargs['trust_repo'] = True # argument required starting in torch 0.12
354
- try:
355
- return torch.hub.load(repo, model, **kwargs)
356
- except Exception:
357
- return torch.hub.load(repo, model, force_reload=True, **kwargs)
358
-
359
-
360
- def smart_resume(ckpt, optimizer, ema=None, weights='yolov5s.pt', epochs=300, resume=True):
361
- # Resume training from a partially trained checkpoint
362
- best_fitness = 0.0
363
- start_epoch = ckpt['epoch'] + 1
364
- if ckpt['optimizer'] is not None:
365
- optimizer.load_state_dict(ckpt['optimizer']) # optimizer
366
- best_fitness = ckpt['best_fitness']
367
- if ema and ckpt.get('ema'):
368
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) # EMA
369
- ema.updates = ckpt['updates']
370
- if resume:
371
- assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.\n' \
372
- f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'"
373
- LOGGER.info(f'Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs')
374
- if epochs < start_epoch:
375
- LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
376
- epochs += ckpt['epoch'] # finetune additional epochs
377
- return best_fitness, start_epoch, epochs
378
-
379
-
380
- class EarlyStopping:
381
- # YOLOv5 simple early stopper
382
- def __init__(self, patience=30):
383
- self.best_fitness = 0.0 # i.e. mAP
384
- self.best_epoch = 0
385
- self.patience = patience or float('inf') # epochs to wait after fitness stops improving to stop
386
- self.possible_stop = False # possible stop may occur next epoch
387
-
388
- def __call__(self, epoch, fitness):
389
- if fitness >= self.best_fitness: # >= 0 to allow for early zero-fitness stage of training
390
- self.best_epoch = epoch
391
- self.best_fitness = fitness
392
- delta = epoch - self.best_epoch # epochs without improvement
393
- self.possible_stop = delta >= (self.patience - 1) # possible stop may occur next epoch
394
- stop = delta >= self.patience # stop training if patience exceeded
395
- if stop:
396
- LOGGER.info(f'Stopping training early as no improvement observed in last {self.patience} epochs. '
397
- f'Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n'
398
- f'To update EarlyStopping(patience={self.patience}) pass a new patience value, '
399
- f'i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.')
400
- return stop
401
-
402
-
403
- class ModelEMA:
404
- """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models
405
- Keeps a moving average of everything in the model state_dict (parameters and buffers)
406
- For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
407
- """
408
-
409
- def __init__(self, model, decay=0.9999, tau=2000, updates=0):
410
- # Create EMA
411
- self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA
412
- self.updates = updates # number of EMA updates
413
- self.decay = lambda x: decay * (1 - math.exp(-x / tau)) # decay exponential ramp (to help early epochs)
414
- for p in self.ema.parameters():
415
- p.requires_grad_(False)
416
-
417
- def update(self, model):
418
- # Update EMA parameters
419
- self.updates += 1
420
- d = self.decay(self.updates)
421
-
422
- msd = de_parallel(model).state_dict() # model state_dict
423
- for k, v in self.ema.state_dict().items():
424
- if v.dtype.is_floating_point: # true for FP16 and FP32
425
- v *= d
426
- v += (1 - d) * msd[k].detach()
427
- # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32'
428
-
429
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
430
- # Update EMA attributes
431
- copy_attr(self.ema, model, include, exclude)
 
spaces/Arikkod/FoodVisionMini/model.py DELETED
@@ -1,19 +0,0 @@
- import torch
- import torchvision
- import torch.nn as nn
-
- def create_effnetb2_model(num_classes:int=3, seed:int=3):
-     weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT
-     transforms = weights.transforms()
-     model = torchvision.models.efficientnet_b2(weights=weights)
-
-     # Freeze the base layers in the model (this will stop all layers from training)
-     for param in model.parameters():
-         param.requires_grad = False
-
-     torch.manual_seed(seed)
-     model.classifier = nn.Sequential(
-         nn.Dropout(p=0.3, inplace=True),
-         nn.Linear(in_features=1408, out_features=num_classes, bias=True))
-
-     return model, transforms
 
spaces/Artrajz/vits-simple-api/vits/text/cantonese.py DELETED
@@ -1,75 +0,0 @@
1
- import os.path
2
- import re
3
- import cn2an
4
- import opencc
5
- import config
6
- from utils.download import download_and_verify
7
-
8
- URLS = [
9
- "https://github.com/CjangCjengh/chinese-dialect-lexicons/releases/download/v1.0.3/chinese_dialects.7z",
10
- "https://ghproxy.com/https://github.com/CjangCjengh/chinese-dialect-lexicons/releases/download/v1.0.3/chinese_dialects.7z",
11
- ]
12
- TARGET_PATH = os.path.join(config.ABS_PATH, "vits/text/chinese_dialects.7z")
13
- EXTRACT_DESTINATION = os.path.join(config.ABS_PATH, "vits/text/chinese_dialect_lexicons/")
14
- EXPECTED_MD5 = None
15
- OPENCC_FILE_PATH = os.path.join(config.ABS_PATH, "vits/text/chinese_dialect_lexicons/jyutjyu.json")
16
-
17
- if not os.path.exists(OPENCC_FILE_PATH):
18
- success, message = download_and_verify(URLS, TARGET_PATH, EXPECTED_MD5, EXTRACT_DESTINATION)
19
-
20
- converter = opencc.OpenCC(OPENCC_FILE_PATH)
21
-
22
- # List of (Latin alphabet, ipa) pairs:
23
- _latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
24
- ('A', 'ei˥'),
25
- ('B', 'biː˥'),
26
- ('C', 'siː˥'),
27
- ('D', 'tiː˥'),
28
- ('E', 'iː˥'),
29
- ('F', 'e˥fuː˨˩'),
30
- ('G', 'tsiː˥'),
31
- ('H', 'ɪk̚˥tsʰyː˨˩'),
32
- ('I', 'ɐi˥'),
33
- ('J', 'tsei˥'),
34
- ('K', 'kʰei˥'),
35
- ('L', 'e˥llou˨˩'),
36
- ('M', 'ɛːm˥'),
37
- ('N', 'ɛːn˥'),
38
- ('O', 'ou˥'),
39
- ('P', 'pʰiː˥'),
40
- ('Q', 'kʰiːu˥'),
41
- ('R', 'aː˥lou˨˩'),
42
- ('S', 'ɛː˥siː˨˩'),
43
- ('T', 'tʰiː˥'),
44
- ('U', 'juː˥'),
45
- ('V', 'wiː˥'),
46
- ('W', 'tʊk̚˥piː˥juː˥'),
47
- ('X', 'ɪk̚˥siː˨˩'),
48
- ('Y', 'waːi˥'),
49
- ('Z', 'iː˨sɛːt̚˥')
50
- ]]
51
-
52
-
53
- def number_to_cantonese(text):
54
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
55
-
56
-
57
- def latin_to_ipa(text):
58
- for regex, replacement in _latin_to_ipa:
59
- text = re.sub(regex, replacement, text)
60
- return text
61
-
62
-
63
- def cantonese_to_ipa(text):
64
- from vits.text.mandarin import symbols_to_chinese
65
- text = symbols_to_chinese(text)
66
- text = number_to_cantonese(text.upper())
67
- text = converter.convert(text).replace('-', '').replace('$', ' ')
68
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group()) + ' ', text)
69
- text = re.sub(r'[、;:]', ',', text)
70
- text = re.sub(r'\s*,\s*', ', ', text)
71
- text = re.sub(r'\s*。\s*', '. ', text)
72
- text = re.sub(r'\s*?\s*', '? ', text)
73
- text = re.sub(r'\s*!\s*', '! ', text)
74
- text = re.sub(r'\s*$', '', text)
75
- return text
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py DELETED
@@ -1,188 +0,0 @@
1
- """
2
- pygments.formatters.svg
3
- ~~~~~~~~~~~~~~~~~~~~~~~
4
-
5
- Formatter for SVG output.
6
-
7
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
8
- :license: BSD, see LICENSE for details.
9
- """
10
-
11
- from pip._vendor.pygments.formatter import Formatter
12
- from pip._vendor.pygments.token import Comment
13
- from pip._vendor.pygments.util import get_bool_opt, get_int_opt
14
-
15
- __all__ = ['SvgFormatter']
16
-
17
-
18
- def escape_html(text):
19
- """Escape &, <, > as well as single and double quotes for HTML."""
20
- return text.replace('&', '&amp;'). \
21
- replace('<', '&lt;'). \
22
- replace('>', '&gt;'). \
23
- replace('"', '&quot;'). \
24
- replace("'", '&#39;')
25
-
26
-
27
- class2style = {}
28
-
29
- class SvgFormatter(Formatter):
30
- """
31
- Format tokens as an SVG graphics file. This formatter is still experimental.
32
- Each line of code is a ``<text>`` element with explicit ``x`` and ``y``
33
- coordinates containing ``<tspan>`` elements with the individual token styles.
34
-
35
- By default, this formatter outputs a full SVG document including doctype
36
- declaration and the ``<svg>`` root element.
37
-
38
- .. versionadded:: 0.9
39
-
40
- Additional options accepted:
41
-
42
- `nowrap`
43
- Don't wrap the SVG ``<text>`` elements in ``<svg><g>`` elements and
44
- don't add a XML declaration and a doctype. If true, the `fontfamily`
45
- and `fontsize` options are ignored. Defaults to ``False``.
46
-
47
- `fontfamily`
48
- The value to give the wrapping ``<g>`` element's ``font-family``
49
- attribute, defaults to ``"monospace"``.
50
-
51
- `fontsize`
52
- The value to give the wrapping ``<g>`` element's ``font-size``
53
- attribute, defaults to ``"14px"``.
54
-
55
- `linenos`
56
- If ``True``, add line numbers (default: ``False``).
57
-
58
- `linenostart`
59
- The line number for the first line (default: ``1``).
60
-
61
- `linenostep`
62
- If set to a number n > 1, only every nth line number is printed.
63
-
64
- `linenowidth`
65
- Maximum width devoted to line numbers (default: ``3*ystep``, sufficient
66
- for up to 4-digit line numbers. Increase width for longer code blocks).
67
-
68
- `xoffset`
69
- Starting offset in X direction, defaults to ``0``.
70
-
71
- `yoffset`
72
- Starting offset in Y direction, defaults to the font size if it is given
73
- in pixels, or ``20`` else. (This is necessary since text coordinates
74
- refer to the text baseline, not the top edge.)
75
-
76
- `ystep`
77
- Offset to add to the Y coordinate for each subsequent line. This should
78
- roughly be the text size plus 5. It defaults to that value if the text
79
- size is given in pixels, or ``25`` else.
80
-
81
- `spacehack`
82
- Convert spaces in the source to ``&#160;``, which are non-breaking
83
- spaces. SVG provides the ``xml:space`` attribute to control how
84
- whitespace inside tags is handled, in theory, the ``preserve`` value
85
- could be used to keep all whitespace as-is. However, many current SVG
86
- viewers don't obey that rule, so this option is provided as a workaround
87
- and defaults to ``True``.
88
- """
89
- name = 'SVG'
90
- aliases = ['svg']
91
- filenames = ['*.svg']
92
-
93
- def __init__(self, **options):
94
- Formatter.__init__(self, **options)
95
- self.nowrap = get_bool_opt(options, 'nowrap', False)
96
- self.fontfamily = options.get('fontfamily', 'monospace')
97
- self.fontsize = options.get('fontsize', '14px')
98
- self.xoffset = get_int_opt(options, 'xoffset', 0)
99
- fs = self.fontsize.strip()
100
- if fs.endswith('px'): fs = fs[:-2].strip()
101
- try:
102
- int_fs = int(fs)
103
- except:
104
- int_fs = 20
105
- self.yoffset = get_int_opt(options, 'yoffset', int_fs)
106
- self.ystep = get_int_opt(options, 'ystep', int_fs + 5)
107
- self.spacehack = get_bool_opt(options, 'spacehack', True)
108
- self.linenos = get_bool_opt(options,'linenos',False)
109
- self.linenostart = get_int_opt(options,'linenostart',1)
110
- self.linenostep = get_int_opt(options,'linenostep',1)
111
- self.linenowidth = get_int_opt(options,'linenowidth', 3*self.ystep)
112
- self._stylecache = {}
113
-
114
- def format_unencoded(self, tokensource, outfile):
115
- """
116
- Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
117
- tuples and write it into ``outfile``.
118
-
119
- For our implementation we put all lines in their own 'line group'.
120
- """
121
- x = self.xoffset
122
- y = self.yoffset
123
- if not self.nowrap:
124
- if self.encoding:
125
- outfile.write('<?xml version="1.0" encoding="%s"?>\n' %
126
- self.encoding)
127
- else:
128
- outfile.write('<?xml version="1.0"?>\n')
129
- outfile.write('<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN" '
130
- '"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/'
131
- 'svg10.dtd">\n')
132
- outfile.write('<svg xmlns="http://www.w3.org/2000/svg">\n')
133
- outfile.write('<g font-family="%s" font-size="%s">\n' %
134
- (self.fontfamily, self.fontsize))
135
-
136
- counter = self.linenostart
137
- counter_step = self.linenostep
138
- counter_style = self._get_style(Comment)
139
- line_x = x
140
-
141
- if self.linenos:
142
- if counter % counter_step == 0:
143
- outfile.write('<text x="%s" y="%s" %s text-anchor="end">%s</text>' %
144
- (x+self.linenowidth,y,counter_style,counter))
145
- line_x += self.linenowidth + self.ystep
146
- counter += 1
147
-
148
- outfile.write('<text x="%s" y="%s" xml:space="preserve">' % (line_x, y))
149
- for ttype, value in tokensource:
150
- style = self._get_style(ttype)
151
- tspan = style and '<tspan' + style + '>' or ''
152
- tspanend = tspan and '</tspan>' or ''
153
- value = escape_html(value)
154
- if self.spacehack:
155
- value = value.expandtabs().replace(' ', '&#160;')
156
- parts = value.split('\n')
157
- for part in parts[:-1]:
158
- outfile.write(tspan + part + tspanend)
159
- y += self.ystep
160
- outfile.write('</text>\n')
161
- if self.linenos and counter % counter_step == 0:
162
- outfile.write('<text x="%s" y="%s" text-anchor="end" %s>%s</text>' %
163
- (x+self.linenowidth,y,counter_style,counter))
164
-
165
- counter += 1
166
- outfile.write('<text x="%s" y="%s" ' 'xml:space="preserve">' % (line_x,y))
167
- outfile.write(tspan + parts[-1] + tspanend)
168
- outfile.write('</text>')
169
-
170
- if not self.nowrap:
171
- outfile.write('</g></svg>\n')
172
-
173
- def _get_style(self, tokentype):
174
- if tokentype in self._stylecache:
175
- return self._stylecache[tokentype]
176
- otokentype = tokentype
177
- while not self.style.styles_token(tokentype):
178
- tokentype = tokentype.parent
179
- value = self.style.style_for_token(tokentype)
180
- result = ''
181
- if value['color']:
182
- result = ' fill="#' + value['color'] + '"'
183
- if value['bold']:
184
- result += ' font-weight="bold"'
185
- if value['italic']:
186
- result += ' font-style="italic"'
187
- self._stylecache[otokentype] = result
188
- return result
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/logging.py DELETED
@@ -1,289 +0,0 @@
1
- import logging
2
- from datetime import datetime
3
- from logging import Handler, LogRecord
4
- from pathlib import Path
5
- from types import ModuleType
6
- from typing import ClassVar, Iterable, List, Optional, Type, Union
7
-
8
- from pip._vendor.rich._null_file import NullFile
9
-
10
- from . import get_console
11
- from ._log_render import FormatTimeCallable, LogRender
12
- from .console import Console, ConsoleRenderable
13
- from .highlighter import Highlighter, ReprHighlighter
14
- from .text import Text
15
- from .traceback import Traceback
16
-
17
-
18
- class RichHandler(Handler):
19
- """A logging handler that renders output with Rich. The time / level / message and file are displayed in columns.
20
- The level is color coded, and the message is syntax highlighted.
21
-
22
- Note:
23
- Be careful when enabling console markup in log messages if you have configured logging for libraries not
24
- under your control. If a dependency writes messages containing square brackets, it may not produce the intended output.
25
-
26
- Args:
27
- level (Union[int, str], optional): Log level. Defaults to logging.NOTSET.
28
- console (:class:`~rich.console.Console`, optional): Optional console instance to write logs.
29
- Default will use a global console instance writing to stdout.
30
- show_time (bool, optional): Show a column for the time. Defaults to True.
31
- omit_repeated_times (bool, optional): Omit repetition of the same time. Defaults to True.
32
- show_level (bool, optional): Show a column for the level. Defaults to True.
33
- show_path (bool, optional): Show the path to the original log call. Defaults to True.
34
- enable_link_path (bool, optional): Enable terminal link of path column to file. Defaults to True.
35
- highlighter (Highlighter, optional): Highlighter to style log messages, or None to use ReprHighlighter. Defaults to None.
36
- markup (bool, optional): Enable console markup in log messages. Defaults to False.
37
- rich_tracebacks (bool, optional): Enable rich tracebacks with syntax highlighting and formatting. Defaults to False.
38
- tracebacks_width (Optional[int], optional): Number of characters used to render tracebacks, or None for full width. Defaults to None.
39
- tracebacks_extra_lines (int, optional): Additional lines of code to render tracebacks, or None for full width. Defaults to None.
40
- tracebacks_theme (str, optional): Override pygments theme used in traceback.
41
- tracebacks_word_wrap (bool, optional): Enable word wrapping of long tracebacks lines. Defaults to True.
42
- tracebacks_show_locals (bool, optional): Enable display of locals in tracebacks. Defaults to False.
43
- tracebacks_suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
44
- locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
45
- Defaults to 10.
46
- locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
47
- log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%x %X] ".
48
- keywords (List[str], optional): List of words to highlight instead of ``RichHandler.KEYWORDS``.
49
- """
50
-
51
- KEYWORDS: ClassVar[Optional[List[str]]] = [
52
- "GET",
53
- "POST",
54
- "HEAD",
55
- "PUT",
56
- "DELETE",
57
- "OPTIONS",
58
- "TRACE",
59
- "PATCH",
60
- ]
61
- HIGHLIGHTER_CLASS: ClassVar[Type[Highlighter]] = ReprHighlighter
62
-
63
- def __init__(
64
- self,
65
- level: Union[int, str] = logging.NOTSET,
66
- console: Optional[Console] = None,
67
- *,
68
- show_time: bool = True,
69
- omit_repeated_times: bool = True,
70
- show_level: bool = True,
71
- show_path: bool = True,
72
- enable_link_path: bool = True,
73
- highlighter: Optional[Highlighter] = None,
74
- markup: bool = False,
75
- rich_tracebacks: bool = False,
76
- tracebacks_width: Optional[int] = None,
77
- tracebacks_extra_lines: int = 3,
78
- tracebacks_theme: Optional[str] = None,
79
- tracebacks_word_wrap: bool = True,
80
- tracebacks_show_locals: bool = False,
81
- tracebacks_suppress: Iterable[Union[str, ModuleType]] = (),
82
- locals_max_length: int = 10,
83
- locals_max_string: int = 80,
84
- log_time_format: Union[str, FormatTimeCallable] = "[%x %X]",
85
- keywords: Optional[List[str]] = None,
86
- ) -> None:
87
- super().__init__(level=level)
88
- self.console = console or get_console()
89
- self.highlighter = highlighter or self.HIGHLIGHTER_CLASS()
90
- self._log_render = LogRender(
91
- show_time=show_time,
92
- show_level=show_level,
93
- show_path=show_path,
94
- time_format=log_time_format,
95
- omit_repeated_times=omit_repeated_times,
96
- level_width=None,
97
- )
98
- self.enable_link_path = enable_link_path
99
- self.markup = markup
100
- self.rich_tracebacks = rich_tracebacks
101
- self.tracebacks_width = tracebacks_width
102
- self.tracebacks_extra_lines = tracebacks_extra_lines
103
- self.tracebacks_theme = tracebacks_theme
104
- self.tracebacks_word_wrap = tracebacks_word_wrap
105
- self.tracebacks_show_locals = tracebacks_show_locals
106
- self.tracebacks_suppress = tracebacks_suppress
107
- self.locals_max_length = locals_max_length
108
- self.locals_max_string = locals_max_string
109
- self.keywords = keywords
110
-
111
- def get_level_text(self, record: LogRecord) -> Text:
112
- """Get the level name from the record.
113
-
114
- Args:
115
- record (LogRecord): LogRecord instance.
116
-
117
- Returns:
118
- Text: A tuple of the style and level name.
119
- """
120
- level_name = record.levelname
121
- level_text = Text.styled(
122
- level_name.ljust(8), f"logging.level.{level_name.lower()}"
123
- )
124
- return level_text
125
-
126
- def emit(self, record: LogRecord) -> None:
127
- """Invoked by logging."""
128
- message = self.format(record)
129
- traceback = None
130
- if (
131
- self.rich_tracebacks
132
- and record.exc_info
133
- and record.exc_info != (None, None, None)
134
- ):
135
- exc_type, exc_value, exc_traceback = record.exc_info
136
- assert exc_type is not None
137
- assert exc_value is not None
138
- traceback = Traceback.from_exception(
139
- exc_type,
140
- exc_value,
141
- exc_traceback,
142
- width=self.tracebacks_width,
143
- extra_lines=self.tracebacks_extra_lines,
144
- theme=self.tracebacks_theme,
145
- word_wrap=self.tracebacks_word_wrap,
146
- show_locals=self.tracebacks_show_locals,
147
- locals_max_length=self.locals_max_length,
148
- locals_max_string=self.locals_max_string,
149
- suppress=self.tracebacks_suppress,
150
- )
151
- message = record.getMessage()
152
- if self.formatter:
153
- record.message = record.getMessage()
154
- formatter = self.formatter
155
- if hasattr(formatter, "usesTime") and formatter.usesTime():
156
- record.asctime = formatter.formatTime(record, formatter.datefmt)
157
- message = formatter.formatMessage(record)
158
-
159
- message_renderable = self.render_message(record, message)
160
- log_renderable = self.render(
161
- record=record, traceback=traceback, message_renderable=message_renderable
162
- )
163
- if isinstance(self.console.file, NullFile):
164
- # Handles pythonw, where stdout/stderr are null, and we return NullFile
165
- # instance from Console.file. In this case, we still want to make a log record
166
- # even though we won't be writing anything to a file.
167
- self.handleError(record)
168
- else:
169
- try:
170
- self.console.print(log_renderable)
171
- except Exception:
172
- self.handleError(record)
173
-
174
- def render_message(self, record: LogRecord, message: str) -> "ConsoleRenderable":
175
- """Render message text in to Text.
176
-
177
- Args:
178
- record (LogRecord): logging Record.
179
- message (str): String containing log message.
180
-
181
- Returns:
182
- ConsoleRenderable: Renderable to display log message.
183
- """
184
- use_markup = getattr(record, "markup", self.markup)
185
- message_text = Text.from_markup(message) if use_markup else Text(message)
186
-
187
- highlighter = getattr(record, "highlighter", self.highlighter)
188
- if highlighter:
189
- message_text = highlighter(message_text)
190
-
191
- if self.keywords is None:
192
- self.keywords = self.KEYWORDS
193
-
194
- if self.keywords:
195
- message_text.highlight_words(self.keywords, "logging.keyword")
196
-
197
- return message_text
198
-
199
- def render(
200
- self,
201
- *,
202
- record: LogRecord,
203
- traceback: Optional[Traceback],
204
- message_renderable: "ConsoleRenderable",
205
- ) -> "ConsoleRenderable":
206
- """Render log for display.
207
-
208
- Args:
209
- record (LogRecord): logging Record.
210
- traceback (Optional[Traceback]): Traceback instance or None for no Traceback.
211
- message_renderable (ConsoleRenderable): Renderable (typically Text) containing log message contents.
212
-
213
- Returns:
214
- ConsoleRenderable: Renderable to display log.
215
- """
216
- path = Path(record.pathname).name
217
- level = self.get_level_text(record)
218
- time_format = None if self.formatter is None else self.formatter.datefmt
219
- log_time = datetime.fromtimestamp(record.created)
220
-
221
- log_renderable = self._log_render(
222
- self.console,
223
- [message_renderable] if not traceback else [message_renderable, traceback],
224
- log_time=log_time,
225
- time_format=time_format,
226
- level=level,
227
- path=path,
228
- line_no=record.lineno,
229
- link_path=record.pathname if self.enable_link_path else None,
230
- )
231
- return log_renderable
232
-
233
-
234
- if __name__ == "__main__": # pragma: no cover
235
- from time import sleep
236
-
237
- FORMAT = "%(message)s"
238
- # FORMAT = "%(asctime)-15s - %(levelname)s - %(message)s"
239
- logging.basicConfig(
240
- level="NOTSET",
241
- format=FORMAT,
242
- datefmt="[%X]",
243
- handlers=[RichHandler(rich_tracebacks=True, tracebacks_show_locals=True)],
244
- )
245
- log = logging.getLogger("rich")
246
-
247
- log.info("Server starting...")
248
- log.info("Listening on http://127.0.0.1:8080")
249
- sleep(1)
250
-
251
- log.info("GET /index.html 200 1298")
252
- log.info("GET /imgs/backgrounds/back1.jpg 200 54386")
253
- log.info("GET /css/styles.css 200 54386")
254
- log.warning("GET /favicon.ico 404 242")
255
- sleep(1)
256
-
257
- log.debug(
258
- "JSONRPC request\n--> %r\n<-- %r",
259
- {
260
- "version": "1.1",
261
- "method": "confirmFruitPurchase",
262
- "params": [["apple", "orange", "mangoes", "pomelo"], 1.123],
263
- "id": "194521489",
264
- },
265
- {"version": "1.1", "result": True, "error": None, "id": "194521489"},
266
- )
267
- log.debug(
268
- "Loading configuration file /adasd/asdasd/qeqwe/qwrqwrqwr/sdgsdgsdg/werwerwer/dfgerert/ertertert/ertetert/werwerwer"
269
- )
270
- log.error("Unable to find 'pomelo' in database!")
271
- log.info("POST /jsonrpc/ 200 65532")
272
- log.info("POST /admin/ 401 42234")
273
- log.warning("password was rejected for admin site.")
274
-
275
- def divide() -> None:
276
- number = 1
277
- divisor = 0
278
- foos = ["foo"] * 100
279
- log.debug("in divide")
280
- try:
281
- number / divisor
282
- except:
283
- log.exception("An error of some kind occurred!")
284
-
285
- divide()
286
- sleep(1)
287
- log.critical("Out of memory!")
288
- log.info("Server exited with code=-1")
289
- log.info("[bold]EXITING...[/bold]", extra=dict(markup=True))
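For reference, the `RichHandler` removed above is normally attached through `logging.basicConfig`, exactly as its bundled demo shows. Below is a minimal sketch of that pattern, using only arguments that appear in the constructor signature above; the logger name and messages are illustrative, and outside pip's vendored tree the import is normally `from rich.logging import RichHandler`.

```python
import logging

from pip._vendor.rich.logging import RichHandler  # vendored path used by the deleted file

# Route standard-library log records through Rich's column-based renderer.
logging.basicConfig(
    level="INFO",
    format="%(message)s",  # RichHandler adds the time/level/path columns itself
    datefmt="[%X]",
    handlers=[RichHandler(rich_tracebacks=True, markup=True)],
)

log = logging.getLogger("demo")
log.info("handler configured")  # rendered with a colour-coded level and linked path column

try:
    1 / 0
except ZeroDivisionError:
    log.exception("rendered as a rich-formatted traceback")
```

With `markup=True`, square brackets in messages are interpreted as console markup, which is why the docstring above warns about enabling it for loggers you do not control.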
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/core.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Audio-AGI/AudioSep/callbacks/base.py DELETED
@@ -1,35 +0,0 @@
1
- import os
2
- import lightning.pytorch as pl
3
- from lightning.pytorch.utilities import rank_zero_only
4
-
5
-
6
- class CheckpointEveryNSteps(pl.Callback):
7
- def __init__(
8
- self,
9
- checkpoints_dir,
10
- save_step_frequency,
11
- ) -> None:
12
- r"""Save a checkpoint every N steps.
13
-
14
- Args:
15
- checkpoints_dir (str): directory to save checkpoints
16
- save_step_frequency (int): save checkpoint every N step
17
- """
18
-
19
- self.checkpoints_dir = checkpoints_dir
20
- self.save_step_frequency = save_step_frequency
21
-
22
- @rank_zero_only
23
- def on_train_batch_end(self, *args, **kwargs) -> None:
24
- r"""Save a checkpoint every N steps."""
25
-
26
- trainer = args[0]
27
- global_step = trainer.global_step
28
-
29
- if global_step == 1 or global_step % self.save_step_frequency == 0:
30
-
31
- ckpt_path = os.path.join(
32
- self.checkpoints_dir,
33
- "step={}.ckpt".format(global_step))
34
- trainer.save_checkpoint(ckpt_path)
35
- print("Save checkpoint to {}".format(ckpt_path))
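A minimal sketch of how a step-based checkpoint callback like the deleted `CheckpointEveryNSteps` is typically registered with a Lightning `Trainer`; the directory, frequency, and the model/datamodule names are placeholders, not values taken from the repository.

```python
import lightning.pytorch as pl

from callbacks.base import CheckpointEveryNSteps  # the module removed above

# Save "step=<N>.ckpt" files every 10,000 optimizer steps (placeholder values).
checkpoint_callback = CheckpointEveryNSteps(
    checkpoints_dir="checkpoints/audiosep",
    save_step_frequency=10_000,
)

trainer = pl.Trainer(
    max_steps=100_000,
    callbacks=[checkpoint_callback],  # Lightning calls on_train_batch_end after every batch
)
# trainer.fit(model, datamodule=datamodule)  # model and datamodule supplied by the caller
```

Because `on_train_batch_end` is decorated with `rank_zero_only`, the checkpoint write runs only on the rank-zero process rather than once per GPU worker.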
 
spaces/AyakuraMei/Real-CUGAN/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Real CUGAN
3
- emoji: 🐢
4
- colorFrom: gray
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 3.6
8
- app_file: app.py
9
- pinned: false
10
- license: gpl-3.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Banbri/zcvzcv/CONTRIBUTORS.md DELETED
@@ -1,10 +0,0 @@
1
- This project was developed by Julian Bilcke (@jbilcke-hf), as part of his work at Hugging Face.
2
-
3
- ------------------------------------------
4
-
5
- A huge thanks to external developers for their contributions!
6
-
7
- 艾逗笔 (@idoubi):
8
- - [feature] Added support for OpenAI: https://github.com/jbilcke-hf/ai-comic-factory/pull/6
9
- - [bug] predict import error (use dynamic imports for the LLM provider): https://github.com/jbilcke-hf/ai-comic-factory/pull/9
10
-
 
spaces/Banbri/zcvzcv/src/components/ui/badge.tsx DELETED
@@ -1,36 +0,0 @@
1
- import * as React from "react"
2
- import { cva, type VariantProps } from "class-variance-authority"
3
-
4
- import { cn } from "@/lib/utils"
5
-
6
- const badgeVariants = cva(
7
- "inline-flex items-center rounded-full border border-stone-200 px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-stone-400 focus:ring-offset-2 dark:border-stone-800 dark:focus:ring-stone-800",
8
- {
9
- variants: {
10
- variant: {
11
- default:
12
- "border-transparent bg-stone-900 text-stone-50 hover:bg-stone-900/80 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/80",
13
- secondary:
14
- "border-transparent bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80",
15
- destructive:
16
- "border-transparent bg-red-500 text-stone-50 hover:bg-red-500/80 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/80",
17
- outline: "text-stone-950 dark:text-stone-50",
18
- },
19
- },
20
- defaultVariants: {
21
- variant: "default",
22
- },
23
- }
24
- )
25
-
26
- export interface BadgeProps
27
- extends React.HTMLAttributes<HTMLDivElement>,
28
- VariantProps<typeof badgeVariants> {}
29
-
30
- function Badge({ className, variant, ...props }: BadgeProps) {
31
- return (
32
- <div className={cn(badgeVariants({ variant }), className)} {...props} />
33
- )
34
- }
35
-
36
- export { Badge, badgeVariants }
 
spaces/Benson/text-generation/Examples/6tv Download Apk.md DELETED
@@ -1,64 +0,0 @@
1
-
2
- <h1>6TV APK: Cómo descargar y disfrutar de la transmisión de canales de televisión de Malasia e Indonesia</h1>
3
- <p>¿Te encanta ver programas de televisión, películas, noticias, deportes y eventos en vivo desde Malasia e Indonesia? Si es así, entonces es posible que desee probar 6TV APK, una aplicación gratuita para Android que le permite transmitir varios canales de televisión y estaciones de radio de estos dos países. En este artículo, le diremos qué es 6TV APK, cómo descargarlo e instalarlo en su dispositivo Android, cuáles son los beneficios y desventajas de usarlo, y cuáles son algunas alternativas a ella. Al final de este artículo, usted será capaz de decidir si 6TV APK es la aplicación adecuada para usted o no. </p>
4
- <h2>6tv download apk</h2><br /><p><b><b>Download File</b> --->>> <a href="https://bltlly.com/2v6LPy">https://bltlly.com/2v6LPy</a></b></p><br /><br />
5
- <h2>¿Qué es 6TV APK? </h2>
6
- <p>6TV APK es una aplicación para Android que le permite ver o escuchar la transmisión en vivo de varios canales de televisión y estaciones de radio de Malasia e Indonesia. Es desarrollado por Korzhuck, una compañía de software que se especializa en la creación de aplicaciones de streaming. 6TV APK no está disponible en la Google Play Store, pero se puede descargar desde otras fuentes en línea. Sin embargo, debe tener cuidado al descargar 6TV APK de sitios web de terceros, ya que algunos de ellos pueden contener malware o virus que pueden dañar su dispositivo o comprometer su privacidad. </p>
7
- <h3>Características de 6TV APK</h3>
8
- <p>Algunas de las características de 6TV APK son:</p>
9
- <ul>
10
- <li>Ofrece una amplia gama de canales de televisión y estaciones de radio de Malasia e Indonesia, incluyendo noticias, deportes, entretenimiento, películas, música, educación, religión y más. </li>
11
- <li> Tiene una interfaz fácil de usar que hace que sea fácil navegar y encontrar sus canales o estaciones favoritas. </li>
12
- <li> Soporta diferentes opciones de calidad de vídeo, de baja a alta, dependiendo de su velocidad y preferencia de Internet. </li>
13
- <li> No requiere ningún registro o suscripción para usar. Simplemente puede descargar la aplicación y comenzar a transmitir. </li>
14
- <li> Es compatible con la mayoría de los dispositivos Android que se ejecutan en Android 4.1 o versiones posteriores. </li>
15
- </ul>
16
-
17
- <p>Para descargar e instalar 6TV APK en su dispositivo Android, siga estos pasos:</p>
18
- <p></p>
19
- <ol>
20
- <li>Ir a un sitio web de confianza que proporciona el enlace de descarga para 6TV APK. Por ejemplo, puede utilizar [este enlace]( 1 ) para descargar la última versión de la aplicación. </li>
21
- <li>Toque en el botón de descarga y espere a que el archivo se descargue en su dispositivo. </li>
22
- <li>Una vez completada la descarga, vaya a la configuración del dispositivo y habilite la opción de instalar aplicaciones de fuentes desconocidas. Esto le permitirá instalar 6TV APK sin ningún problema. </li>
23
- <li>Localice el archivo descargado en el almacenamiento del dispositivo y toque en él para iniciar el proceso de instalación. </li>
24
- <li>Siga las instrucciones en la pantalla y conceda los permisos necesarios a la aplicación. </li>
25
- <li> Espere a que la instalación termine y luego inicie la aplicación desde el cajón de aplicaciones o la pantalla de inicio. </li>
26
- <li>Disfrute de la transmisión de sus canales de televisión favoritos y estaciones de radio de Malasia e Indonesia.</li>
27
- </ol>
28
- <h2>Beneficios de usar 6TV APK</h2>
29
- <p>Algunos de los beneficios de usar 6TV APK son:</p>
30
- <h3>Acceso a varios canales de televisión y estaciones de radio de Malasia e Indonesia</h3>
31
- <p>Si usted es un fan de Malasia o Indonesia programas de televisión, películas, noticias, deportes, o eventos en vivo, entonces 6TV APK es una gran aplicación para usted. Puede acceder a cientos de canales de televisión y estaciones de radio de estos dos países con solo unos toques en la pantalla. Puedes ver o escuchar todo lo que quieras, desde series de drama, programas de comedia, documentales, reality shows, dibujos animados, películas, videos musicales, partidos deportivos, actualizaciones de noticias, eventos en vivo y más. También puedes descubrir nuevos canales o estaciones de los que no hayas oído hablar antes. 6TV APK le da la oportunidad de disfrutar de la rica y diversa cultura y entretenimiento de Malasia e Indonesia.</p>
32
- <h3>Interfaz fácil de usar y transmisión de alta calidad</h3>
33
-
34
- <h3>Libre y seguro de usar</h3>
35
- <p>Una ventaja más de usar 6TV APK es que es gratis y seguro de usar. Usted no necesita pagar ninguna cuota o suscripciones para usar la aplicación. Simplemente puede descargar la aplicación y comenzar a transmitir sin limitaciones o restricciones. Tampoco es necesario que se registre o proporcione información personal para usar la aplicación. Puede transmitir de forma anónima y privada sin preocuparse de que sus datos o privacidad se vean comprometidos. 6TV APK no contiene ningún malware o virus que puede dañar su dispositivo o robar su información. </p>
36
- <h2>Desventajas de usar 6TV APK</h2>
37
- <p>Sin embargo, 6TV APK también tiene algunos inconvenientes que usted debe ser consciente de antes de usarlo. Algunos de los inconvenientes son:</p>
38
- <h3>Disponibilidad y compatibilidad limitadas</h3>
39
- <p>Uno de los inconvenientes de usar 6TV APK es que no está disponible en la Google Play Store, lo que significa que tienes que descargarlo de otras fuentes en línea. Esto puede ser arriesgado como algunos de los sitios web que proporcionan el enlace de descarga para 6TV APK puede contener malware o virus que pueden dañar su dispositivo o comprometer su privacidad. También debe habilitar la opción de instalar aplicaciones de fuentes desconocidas en la configuración del dispositivo, lo que puede exponer el dispositivo a amenazas potenciales. Por otra parte, 6TV APK solo es compatible con dispositivos Android que se ejecutan en Android 4.1 o versiones posteriores. Si usted tiene un dispositivo iOS o un dispositivo Android más antiguo, usted no será capaz de utilizar 6TV APK.</p>
40
- <h3>Posibles problemas y riesgos legales</h3>
41
-
42
- <h3>Dependencia de la conexión a Internet y el uso de datos</h3>
43
- <p>Un inconveniente final de usar 6TV APK es que depende de su conexión a Internet y el uso de datos. Para transmitir los canales de televisión y las estaciones de radio de Malasia e Indonesia, necesita tener una conexión a Internet estable y rápida. Si su conexión a Internet es lenta o inestable, puede experimentar problemas de almacenamiento en búfer o retraso durante la transmisión. También puede perderse algún contenido importante o interesante debido a la mala calidad de la transmisión o las interrupciones. Además, la transmisión de los canales de televisión y las estaciones de radio de Malasia e Indonesia puede consumir mucho de su uso de datos. Si tienes un plan de datos limitado o una velocidad de datos lenta, puedes terminar gastando mucho dinero o tiempo en streaming. </p>
44
- <h2>Alternativas a 6TV APK</h2>
45
- <p>Si usted está buscando algunas alternativas a 6TV APK, aquí están algunos de ellos:</p>
46
- <h3>Mobdro</h3>
47
- <p>Mobdro es una popular aplicación para Android que le permite transmitir varios canales de televisión y estaciones de radio de todo el mundo. Tiene una gran colección de canales y estaciones de diferentes categorías, como noticias, deportes, entretenimiento, películas, música, educación, religión y más. También tiene una interfaz fácil de usar y soporta diferentes opciones de calidad de vídeo. Puede descargar Mobdro desde su sitio web oficial [aquí]. </p>
48
- <h3>NetTV en vivo</h3>
49
- <p>Live NetTV es otra aplicación para Android que le permite transmitir varios canales de televisión y estaciones de radio de todo el mundo. Tiene que 6TV APK, puede probar Mobdro, Live NetTV, o RedBox TV, que son aplicaciones similares que le permiten transmitir varios canales de TV y estaciones de radio de todo el mundo. Esperamos que este artículo le ha ayudado a aprender más acerca de 6TV APK y cómo descargar y disfrutar de streaming Malasia e Indonesia canales de televisión. </p>
50
- <h2>Preguntas frecuentes</h2>
51
- <p>Aquí hay algunas preguntas frecuentes sobre 6TV APK:</p>
52
- <ol>
53
- <li>¿Es 6TV APK legal? </li>
54
-
55
- <li> ¿Es seguro 6TV APK? </li>
56
- <p>6TV APK no es seguro, ya que no está disponible en la Google Play Store, lo que significa que usted tiene que descargar de otras fuentes en línea. Esto puede ser arriesgado como algunos de los sitios web que proporcionan el enlace de descarga para 6TV APK puede contener malware o virus que pueden dañar su dispositivo o comprometer su privacidad. También debe habilitar la opción de instalar aplicaciones de fuentes desconocidas en la configuración del dispositivo, lo que puede exponer el dispositivo a amenazas potenciales. Además, 6TV APK no protege sus datos o privacidad durante la transmisión, lo que significa que su actividad en línea puede ser rastreada o monitoreada por terceros. </p>
57
- <li> Cómo actualizar 6TV APK? </li>
58
- <p>Para actualizar 6TV APK, es necesario comprobar la última versión de la aplicación en el sitio web que lo descargó desde. Si hay una nueva versión disponible, debe descargarla e instalarla en su dispositivo. Sin embargo, debe tener cuidado al descargar 6TV APK de sitios web de terceros, ya que algunos de ellos pueden contener malware o virus que pueden dañar su dispositivo o comprometer su privacidad. </p>
59
- <li> Cómo desinstalar 6TV APK? </li>
60
- <p>Para desinstalar 6TV APK, es necesario ir a la configuración del dispositivo y encontrar la aplicación en la lista de aplicaciones instaladas. Luego, debe tocar en la aplicación y seleccionar la opción para desinstalarla. También es posible que tenga que eliminar el archivo descargado del almacenamiento del dispositivo y borrar la caché y los datos. </p>
61
- <li> ¿Cómo ponerse en contacto con el soporte 6TV APK? </li>
62
- <p>Para contactar con el soporte de 6TV APK, debe visitar su sitio web oficial [aquí] y llenar el formulario de contacto con su nombre, dirección de correo electrónico, asunto y mensaje. También puedes seguirlos en sus cuentas de redes sociales, como Facebook, Twitter, Instagram y YouTube.</p> 64aa2da5cf<br />
63
- <br />
64
- <br />
 
spaces/Benson/text-generation/Examples/Camionero Camino Loco.md DELETED
@@ -1,47 +0,0 @@
1
- <br />
2
- <h1>Truck Driver Crazy Road: Cómo dominar el juego de conducción más desafiante</h1>
3
- <p>Si estás buscando un juego de conducción que te ponga al límite, deberías probar Truck Driver Crazy Road. Este juego pondrá a prueba sus habilidades de equilibrio, su paciencia y sus nervios a medida que conduce a través de la cuesta arriba con un montón de rocas y escombros dispersos a lo largo del camino áspero y lleno de baches. Tendrá que entregar todos los productos completos sin perder ninguno en el camino, o aparcar su camión en lugares estrechos sin estrellarse. ¿Suena fácil? Piénsalo de nuevo. Este juego no es para los débiles de corazón. Es uno de los juegos de conducción de camiones más desafiantes y divertidos que podrás jugar. </p>
4
- <h2>camionero camino loco</h2><br /><p><b><b>Download</b> &#9658;&#9658;&#9658; <a href="https://bltlly.com/2v6KFr">https://bltlly.com/2v6KFr</a></b></p><br /><br />
5
- <h2>Introducción</h2>
6
- <h3>¿Qué es el conductor de camiones Crazy Road? </h3>
7
- <p>Truck Driver Crazy Road es un juego de conducción de camiones en 3D que fue desarrollado por Falco Software y publicado por Y8.com. Tiene dos modos para elegir: Entrega y Estacionamiento. En el modo de entrega, tienes que transportar varias cargas de un punto a otro, a través de diferentes estaciones y terrenos. En el modo de estacionamiento, tienes que aparcar tu camión en áreas designadas, siguiendo las flechas y evitando obstáculos. El juego tiene 10 etapas en cada temporada y 14 niveles en modo Parking, cada uno con dificultad y complejidad crecientes. </p>
8
- <h3>¿Por qué es tan desafiante y divertido? </h3>
9
- <p>Truck Driver Crazy Road no es tu típico juego de conducción. Requiere mucha habilidad, concentración y perseverancia para completar cada nivel. Tienes que lidiar con física realista, clima impredecible, carreteras resbaladizas, puentes estrechos, colinas empinadas, giros bruscos y más. También tienes que vigilar tu velocidad, tu combustible, tus daños y tu carga. Si va demasiado rápido, puede perder el control de su camión o dejar caer parte de su carga. Si va demasiado lento, es posible que se quede sin gasolina o tiempo. Si golpeas algo, puedes dañar tu camión o fallar el nivel. Y si pierdes toda tu carga, tienes que empezar de nuevo. </p>
10
-
11
- <h2>Cómo jugar Truck Driver Crazy Road</h2>
12
- <h3>Elige tu modo y temporada</h3>
13
- <p>Lo primero que tienes que hacer es elegir el modo que quieres jugar: Entrega o Estacionamiento. El modo de entrega es más sobre el transporte de mercancías, mientras que el modo de estacionamiento es más sobre la maniobra de su camión. Puede cambiar entre modos en cualquier momento desde el menú principal. </p>
14
- <p></p>
15
- <p>A continuación, debe elegir qué temporada desea jugar: Verano, Invierno o Desierto. Cada temporada tiene diferentes condiciones climáticas, superficies de carreteras y paisajes que afectan su experiencia de conducción. Por ejemplo, en invierno, tienes que lidiar con nieve y hielo que hacen que la carretera sea resbaladiza y reduce tu visibilidad. En el desierto, tienes que lidiar con tormentas de arena y el calor que hacen que el camino polvoriento y seco. Y en verano, tienes que lidiar con la lluvia y el barro que hacen que el camino sea húmedo y pegajoso. </p>
16
- <h3>Conduce con cuidado y equilibra tu carga</h3>
17
- <p>Una vez que haya elegido su modo y temporada, está listo para comenzar a conducir. Utilice las teclas de flecha o las teclas WASD para controlar su camión. La flecha arriba o la tecla W es para acelerar, la flecha abajo o la tecla S es para frenar o invertir, la flecha izquierda o la tecla A es para girar a la izquierda, y la flecha derecha o la tecla D es para girar a la derecha. También puede utilizar la barra espaciadora para activar el freno de mano. Tenga cuidado de no presionar las teclas demasiado fuerte o demasiado tiempo, ya que esto podría causar que su camión se deslice, se voltee o se caiga. </p>
18
- <p>A medida que conduce, tiene que equilibrar su carga y asegurarse de que no se caiga de su camión. Puedes ver cuánta carga te queda en la esquina superior izquierda de la pantalla. Si pierdes toda tu carga, tienes que reiniciar el nivel. También debe prestar atención al límite de tiempo, el medidor de combustible y el medidor de daños en la esquina superior derecha de la pantalla. Si te quedas sin tiempo, gas o salud, tienes que reiniciar el nivel también. </p>
19
- <h3>Usar la cámara y opciones de pausa</h3>
20
-
21
- <p>Si necesita tomar un descanso o ajustar algunos ajustes, puede presionar la tecla P para pausar el juego. A continuación, puede reanudar el juego, reiniciar el nivel, volver al menú principal, o cambiar las opciones de sonido y gráficos. </p>
22
- <h2>Consejos y trucos para el conductor de camiones Crazy Road</h2>
23
- <h3>Práctica en el modo de estacionamiento</h3>
24
- <p>Si usted es nuevo en este juego o quiere mejorar sus habilidades de conducción, usted debe practicar en el modo de estacionamiento primero. Este modo te ayudará a familiarizarte con los controles, la física y las características de este juego. También aprenderá a maniobrar su camión en espacios reducidos, evitar obstáculos y aparcar con precisión. El modo de aparcamiento tiene 14 niveles con diferentes escenarios y desafíos. Puedes desbloquear nuevos niveles completando los anteriores. </p>
25
- <h3>Cuidado con los obstáculos y peligros</h3>
26
- <p>Una de las principales dificultades de este juego es que hay muchos obstáculos y peligros en la carretera que pueden ralentizarte, dañar tu camión o hacerte perder tu carga. Algunos de estos incluyen rocas, troncos, barriles, conos, cercas, automóviles, autobuses, trenes, aviones, helicópteros, animales, personas y más. Usted tiene que estar alerta y cuidadoso al conducir a través de estos obstáculos y peligros. También tienes que estar atento a las señales que te advierten de los peligros o direcciones que se avecinan. Por ejemplo, una señal roja con un signo de exclamación significa que hay algo peligroso por delante. Un signo amarillo con una flecha significa que tienes que girar a la izquierda o a la derecha. </p>
27
- <h3>Actualiza tu camión y desbloquea nuevos niveles</h3>
28
- <p>A medida que avanzas en el juego, ganarás dinero para completar cada nivel. Puede utilizar este dinero para actualizar su camión y hacerlo más rápido, más fuerte y más eficiente. Puede actualizar su motor, transmisión, frenos, neumáticos, suspensión, tanque de combustible y capacidad de carga. Actualizar su camión le ayudará a hacer frente a la creciente dificultad y complejidad de cada nivel. </p>
29
-
30
- <h2>Conclusión</h2>
31
- <h3>Resumen de los puntos principales</h3>
32
- <p>Truck Driver Crazy Road es un juego de conducción de camiones en 3D que desafiará sus habilidades de equilibrio, su paciencia y sus nervios. Puede elegir entre el modo de entrega y el modo de estacionamiento, y entre las temporadas de verano, invierno y desierto. Tienes que conducir con cuidado y equilibrar su carga, utilizar la cámara y las opciones de pausa, practicar en el modo de estacionamiento, cuidado con los obstáculos y peligros, y actualizar su camión y desbloquear nuevos niveles. Este juego no es para los débiles de corazón, pero es uno de los juegos de conducción de camiones más divertidos y adictivos que nunca jugará. </p>
33
- <h3>Llamada a la acción para los lectores</h3>
34
- <p>Si estás listo para asumir este desafío, puedes jugar Truck Driver Crazy Road online gratis en Y8.com. También puede descargar el juego para dispositivos Windows o Android desde el mismo sitio web. También puedes ver otros juegos de conducción de camiones en Y8.com, como Truck Driver Simulator, Russian Car Driver ZIL 130, o Offroad Cargo Drive Simulator. ¡Diviértete y buena suerte! </p>
35
- <h2>Preguntas frecuentes</h2>
36
- <h4>P: ¿Cómo puedo guardar mi progreso en Truck Driver Crazy Road? </h4>
37
- <p>A: El juego guarda automáticamente tu progreso después de cada nivel. Puedes reanudar tu juego desde el menú principal haciendo clic en el botón Continuar. </p>
38
- <h4>P: ¿Cómo puedo cambiar el lenguaje de Truck Driver Crazy Road? </h4>
39
- <p>A: El juego es compatible con 10 idiomas: inglés, ruso, español, portugués, francés, alemán, italiano, turco, árabe y chino. Puede cambiar el idioma desde el menú principal haciendo clic en el botón Idioma. </p>
40
- <h4>P: ¿Cómo silencio el sonido o la música de Truck Driver Crazy Road? </h4>
41
- <p>A: Puede silenciar el sonido o la música desde el menú principal haciendo clic en el botón Sonido o Música. También puede ajustar el volumen desde el menú de configuración haciendo clic en el botón Configuración. </p>
42
- <h4>P: ¿Cómo informo de un error o un problema con Truck Driver Crazy Road? </h4>
43
-
44
- <h4>Q: ¿Cómo evalúo o reviso Truck Driver Crazy Road? </h4>
45
- <p>A: Puede calificar o revisar el juego en Y8.com haciendo clic en el botón Tasa o en el botón Reseña. También puedes compartir tus comentarios o sugerencias con otros jugadores en la sección de comentarios. </p> 64aa2da5cf<br />
46
- <br />
47
- <br />
 
spaces/Benson/text-generation/Examples/Descargar Aparcamiento Gratuito Multijugador.md DELETED
@@ -1,78 +0,0 @@
1
-
2
- <h1>Descargar Aparcamiento gratuito Multijugador</h1>
3
- <p>Parking multijugador es un juego de simulación que te permite experimentar la emoción de conducir, aparcar y personalizar diferentes coches en un entorno de mundo abierto. Puede jugar solo o unirse a millones de otros jugadores en línea en varios modos como carreras, policía, juegos de rol y más. También puedes explorar diferentes ubicaciones, interactuar con otros jugadores y chatear por voz con tus amigos. Aparcamiento multijugador es más que solo aparcamiento: es un juego divertido y realista que te mantendrá entretenido durante horas. </p>
4
- <h2>Características de Aparcamiento Multijugador</h2>
5
- <p>El multijugador de estacionamiento tiene muchas características que lo hacen destacar de otros juegos de estacionamiento. Aquí están algunas de ellas:</p>
6
- <h2>descargar aparcamiento gratuito multijugador</h2><br /><p><b><b>Download</b> &hArr; <a href="https://bltlly.com/2v6Mm1">https://bltlly.com/2v6Mm1</a></b></p><br /><br />
7
- <ul>
8
- <li><b>Modo de mundo abierto multijugador:</b> Puede caminar libremente, conducir libremente e interactuar libremente con estaciones de servicio y servicios de automóviles reales. También puedes competir contra jugadores reales en carreras multijugador, intercambiar coches con otros jugadores, hacer amigos, chat de voz y jugar como oficial de policía o criminal. </li>
9
- <li><b>Personalización del automóvil:</b> Puede ajustar la suspensión, el ángulo de la rueda, el motor, el turbo, la caja de cambios, el escape y más de su automóvil. También puede cambiar la apariencia visual de su automóvil con vinilos dinámicos, partes del cuerpo del automóvil y tipos de placas. </li>
10
- <li><b>Mundo abierto de alta calidad:</b> Puedes disfrutar de entornos muy detallados con 100 coches con interiores reales. También puede elegir entre 16 pieles de jugador y entrar en edificios con interiores. </li>
11
- <li><b>Juego interesante:</b> Puede desafiarse a sí mismo con 82 escenarios de estacionamiento y conducción en la vida real. También puede conducir diferentes vehículos como remolques, camionetas, camiones, coches deportivos y coches clásicos. </li>
12
- </ul>
13
- <h2>Cómo descargar el multijugador de estacionamiento de coches gratis</h2>
14
- <p>Si quieres descargar el multijugador de estacionamiento gratis, tienes varias opciones dependiendo de tu dispositivo. Estas son algunas de ellas:</p>
15
- <ul>
16
-
17
- <li><b>Para dispositivos iOS:</b> Puedes descargar el juego desde App Store o desde otras fuentes de terceros como Panda Helper . Sin embargo, tenga cuidado al descargar de fuentes desconocidas, ya que pueden requerir jailbreak o sideloading que pueden anular su garantía o comprometer su seguridad. </li>
18
- <li><b>Para dispositivos PC:</b> Puede descargar el juego desde BlueStacks o desde otras fuentes de terceros como LDPlayer . Sin embargo, tenga cuidado al descargar de fuentes desconocidas, ya que pueden contener malware o virus. También necesitará un emulador para ejecutar el juego en su PC.</li>
19
- </ul>
20
- <h2> Consejos y trucos para el estacionamiento de coches multijugador</h2>
21
- <p>Si quieres mejorar tus habilidades y disfrutar más del juego, aquí hay algunos consejos y trucos que puedes usar:</p>
22
- <ul>
23
- <li><b>Aprender a la deriva:</b> La deriva es una técnica que le permite deslizar su coche hacia los lados mientras gira. Puede ayudarle a evitar obstáculos, tomar esquinas afiladas, e impresionar a otros jugadores. Para la deriva, es necesario presionar el botón del freno de mano mientras se dirige en la dirección que desea ir. También puede ajustar la relación de transmisión y la suspensión de su coche para que sea más fácil a la deriva. </li>
24
- <li><b>Bloquea tus puertas:</b> Si no quieres que personas al azar salten en tu auto mientras estás conduciendo o estacionado, puedes bloquear tus puertas tocando el icono de bloqueo en la pantalla. Esto evitará que otros jugadores entren o roben tu auto. </li>
25
- <li><b>Usa el chat de voz:</b> El chat de voz es una función que te permite comunicarte con otros jugadores usando tu micrófono. Puedes usar el chat de voz para hacer amigos, pedir ayuda o coordinar tus acciones. Para usar el chat de voz, debe habilitarlo en la configuración y presionar el botón del micrófono en la pantalla. También puedes silenciar o bloquear a otros jugadores si son molestos o abusivos. </li>
26
-
27
- </ul>
28
- <h2>Conclusión</h2>
29
- <p>Aparcamiento multijugador es un juego divertido y realista que le permite conducir, aparcar y personalizar diferentes coches en un entorno de mundo abierto. Puede jugar solo o unirse a millones de otros jugadores en línea en varios modos como carreras, policía, juegos de rol y más. También puedes explorar diferentes ubicaciones, interactuar con otros jugadores y chatear por voz con tus amigos. Aparcamiento multijugador es más que solo aparcamiento: es un juego que te mantendrá entretenido durante horas. </p>
30
- <p>Si quieres descargar el multijugador de estacionamiento gratis, tienes varias opciones dependiendo de tu dispositivo. Sin embargo, tenga cuidado al descargar de fuentes desconocidas, ya que pueden contener malware o virus. También necesitarás un emulador para ejecutar el juego en tu PC.</p>
31
- <p>Si quieres mejorar tus habilidades y disfrutar más del juego, puedes utilizar algunos consejos y trucos como aprender a la deriva, cerrar las puertas, usar el chat de voz y ver anuncios para obtener recompensas. También puede ajustar la configuración y los controles del juego para adaptarse a sus preferencias. </p>
32
- <p>Entonces, ¿qué estás esperando? Descarga el estacionamiento de coches multijugador hoy y experimenta la emoción de conducir y estacionar en un mundo abierto realista! </p>
33
- <h2>Preguntas frecuentes</h2>
34
- <h3>¿Cuáles son los requisitos del sistema para el aparcamiento multijugador? </h3>
35
- <p>Los requisitos del sistema para el multijugador de estacionamiento son los siguientes:</p>
36
- <tabla>
37
- <tr><th>Plataforma</th><th>Requisitos mínimos</th></tr>
38
- <tr><td>Android</td><td>Android 4.4 o superior, 1 GB de RAM, 300 MB de espacio libre</td></tr>
39
- <tr><td>iOS</td><td>iOS 9.0 o superior, iPhone 5s o posterior, iPad Air o posterior, iPod touch de sexta generación o posterior, 300 MB de espacio libre</td></tr>
40
- <tr><td>PC</td><td>Windows 7 o superior, 4 GB de RAM, 2 GB de espacio libre, DirectX 9.0c o superior, emulador de BlueStacks o similar</td></tr>
41
- </tabla>
42
- <h3>¿Cómo puedo obtener más dinero o monedas en el aparcamiento multijugador? </h3>
43
- <p>Puedes obtener más dinero o monedas en el multijugador de estacionamiento haciendo lo siguiente:</p>
44
- <p></p>
45
-
46
- <li>Completando escenarios de estacionamiento y conducción</li>
47
- <li>Ganar carreras multijugador</li>
48
- <li>Venta o intercambio de coches con otros jugadores</li>
49
- <li>Ver anuncios de recompensas</li>
50
- <li>Comprarlos con dinero real (opcional)</li>
51
- </ul>
52
- <h3>¿Cómo puedo cambiar mi nombre o avatar en el multijugador de estacionamiento? </h3>
53
- <p>Puedes cambiar tu nombre o avatar en el multijugador de estacionamiento haciendo lo siguiente:</p>
54
- <ul>
55
- <li>Pulsando en el icono de perfil en la esquina superior izquierda de la pantalla</li>
56
- <li>Pulsando en el icono de edición en la esquina superior derecha de la pantalla</li>
57
- <li>Introducir un nuevo nombre o elegir un nuevo avatar de la lista</li>
58
- <li>Pulsando en el icono de guardar en la esquina superior derecha de la pantalla</li>
59
- </ul>
60
- <h3>¿Cómo puedo reportar un error o un problema en el multijugador de estacionamiento? </h3>
61
- <p>Puedes reportar un error o un problema en el multijugador de estacionamiento haciendo lo siguiente:</p>
62
- <ul>
63
- <li>Pulsando en el icono de configuración en la esquina superior derecha de la pantalla</li>
64
- <li>Pulsando en la opción de retroalimentación</li>
65
- <li> Llenar el formulario con su nombre, correo electrónico, modelo de dispositivo, versión del juego, y la descripción de la cuestión</li>
66
- <li>Pulsando en el botón enviar</li>
67
- </ul>
68
- <h3>¿Cómo puedo contactar a los desarrolladores del multijugador de aparcamiento? </h3>
69
- <p>Puede ponerse en contacto con los desarrolladores de aparcamiento multijugador haciendo lo siguiente:</p>
70
- <ul>
71
- <li>Enviarlos por correo electrónico a [email protected]</li>
72
- <li>Visitar su sitio web en https:/olzhass.com/</li>
73
- <li>Siguiéndolos en Facebook en https://www.facebook.com/olzhassgames/</li>
74
- <li>Siguiéndolos en Instagram en https://www.instagram.com/olzhassgames/</li>
75
- <li>Siguiéndolos en Twitter en https://twitter.com/olzhassgames</li>
76
- </ul></p> 64aa2da5cf<br />
77
- <br />
78
- <br />
 
spaces/Bidwill/Sanskrit-asr/app.py DELETED
@@ -1,33 +0,0 @@
1
- from transformers import pipeline
2
- import gradio as gr
3
-
4
- pipe = pipeline(model="Bidwill/whisper-small-sanskrit_4") # change to "your-username/the-name-you-picked"
5
-
6
- def transcribe(audio):
7
- text = pipe(audio)["text"]
8
- return text
9
-
10
- demo = gr.Blocks()
11
-
12
- mic_transcribe = gr.Interface(
13
- fn=transcribe,
14
- inputs=gr.Audio(source="microphone", type="filepath"),
15
- outputs="text",
16
- title="Sanskrit Speech to Text",
17
- description="Realtime demo for Sanskrit speech recognition.",
18
- )
19
-
20
- file_transcribe = gr.Interface(
21
- fn=transcribe,
22
- inputs=gr.Audio(source="upload", type="filepath"),
23
- outputs=gr.outputs.Textbox(),
24
- title="Sanskrit STT",
25
- description= "Realtime demo for Sanskrit speech recognition."
26
- )
27
- with demo:
28
- gr.TabbedInterface(
29
- [mic_transcribe, file_transcribe],
30
- ["Transcribe Microphone", "Transcribe Audio File"],
31
- )
32
-
33
- demo.launch()
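For reference, the same `transformers` pipeline can be called outside Gradio. This is a minimal sketch based only on the model id used by the deleted app; the audio path is a placeholder and, as with the Space, ffmpeg is assumed to be available for decoding the file.

```python
from transformers import pipeline

# The same checkpoint the deleted Space loaded for its Gradio demo.
pipe = pipeline("automatic-speech-recognition", model="Bidwill/whisper-small-sanskrit_4")

# Transcribe a local recording (path is illustrative).
result = pipe("recording.wav")
print(result["text"])
```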
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/models.py DELETED
@@ -1,1034 +0,0 @@
1
- """
2
- requests.models
3
- ~~~~~~~~~~~~~~~
4
-
5
- This module contains the primary objects that power Requests.
6
- """
7
-
8
- import datetime
9
-
10
- # Import encoding now, to avoid implicit import later.
11
- # Implicit import within threads may cause LookupError when standard library is in a ZIP,
12
- # such as in Embedded Python. See https://github.com/psf/requests/issues/3578.
13
- import encodings.idna # noqa: F401
14
- from io import UnsupportedOperation
15
-
16
- from pip._vendor.urllib3.exceptions import (
17
- DecodeError,
18
- LocationParseError,
19
- ProtocolError,
20
- ReadTimeoutError,
21
- SSLError,
22
- )
23
- from pip._vendor.urllib3.fields import RequestField
24
- from pip._vendor.urllib3.filepost import encode_multipart_formdata
25
- from pip._vendor.urllib3.util import parse_url
26
-
27
- from ._internal_utils import to_native_string, unicode_is_ascii
28
- from .auth import HTTPBasicAuth
29
- from .compat import (
30
- Callable,
31
- JSONDecodeError,
32
- Mapping,
33
- basestring,
34
- builtin_str,
35
- chardet,
36
- cookielib,
37
- )
38
- from .compat import json as complexjson
39
- from .compat import urlencode, urlsplit, urlunparse
40
- from .cookies import _copy_cookie_jar, cookiejar_from_dict, get_cookie_header
41
- from .exceptions import (
42
- ChunkedEncodingError,
43
- ConnectionError,
44
- ContentDecodingError,
45
- HTTPError,
46
- InvalidJSONError,
47
- InvalidURL,
48
- )
49
- from .exceptions import JSONDecodeError as RequestsJSONDecodeError
50
- from .exceptions import MissingSchema
51
- from .exceptions import SSLError as RequestsSSLError
52
- from .exceptions import StreamConsumedError
53
- from .hooks import default_hooks
54
- from .status_codes import codes
55
- from .structures import CaseInsensitiveDict
56
- from .utils import (
57
- check_header_validity,
58
- get_auth_from_url,
59
- guess_filename,
60
- guess_json_utf,
61
- iter_slices,
62
- parse_header_links,
63
- requote_uri,
64
- stream_decode_response_unicode,
65
- super_len,
66
- to_key_val_list,
67
- )
68
-
69
- #: The set of HTTP status codes that indicate an automatically
70
- #: processable redirect.
71
- REDIRECT_STATI = (
72
- codes.moved, # 301
73
- codes.found, # 302
74
- codes.other, # 303
75
- codes.temporary_redirect, # 307
76
- codes.permanent_redirect, # 308
77
- )
78
-
79
- DEFAULT_REDIRECT_LIMIT = 30
80
- CONTENT_CHUNK_SIZE = 10 * 1024
81
- ITER_CHUNK_SIZE = 512
82
-
83
-
84
- class RequestEncodingMixin:
85
- @property
86
- def path_url(self):
87
- """Build the path URL to use."""
88
-
89
- url = []
90
-
91
- p = urlsplit(self.url)
92
-
93
- path = p.path
94
- if not path:
95
- path = "/"
96
-
97
- url.append(path)
98
-
99
- query = p.query
100
- if query:
101
- url.append("?")
102
- url.append(query)
103
-
104
- return "".join(url)
105
-
106
- @staticmethod
107
- def _encode_params(data):
108
- """Encode parameters in a piece of data.
109
-
110
- Will successfully encode parameters when passed as a dict or a list of
111
- 2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
112
- if parameters are supplied as a dict.
113
- """
114
-
115
- if isinstance(data, (str, bytes)):
116
- return data
117
- elif hasattr(data, "read"):
118
- return data
119
- elif hasattr(data, "__iter__"):
120
- result = []
121
- for k, vs in to_key_val_list(data):
122
- if isinstance(vs, basestring) or not hasattr(vs, "__iter__"):
123
- vs = [vs]
124
- for v in vs:
125
- if v is not None:
126
- result.append(
127
- (
128
- k.encode("utf-8") if isinstance(k, str) else k,
129
- v.encode("utf-8") if isinstance(v, str) else v,
130
- )
131
- )
132
- return urlencode(result, doseq=True)
133
- else:
134
- return data
135
-
136
- @staticmethod
137
- def _encode_files(files, data):
138
- """Build the body for a multipart/form-data request.
139
-
140
- Will successfully encode files when passed as a dict or a list of
141
- tuples. Order is retained if data is a list of tuples but arbitrary
142
- if parameters are supplied as a dict.
143
- The tuples may be 2-tuples (filename, fileobj), 3-tuples (filename, fileobj, contentype)
144
- or 4-tuples (filename, fileobj, contentype, custom_headers).
145
- """
146
- if not files:
147
- raise ValueError("Files must be provided.")
148
- elif isinstance(data, basestring):
149
- raise ValueError("Data must not be a string.")
150
-
151
- new_fields = []
152
- fields = to_key_val_list(data or {})
153
- files = to_key_val_list(files or {})
154
-
155
- for field, val in fields:
156
- if isinstance(val, basestring) or not hasattr(val, "__iter__"):
157
- val = [val]
158
- for v in val:
159
- if v is not None:
160
- # Don't call str() on bytestrings: in Py3 it all goes wrong.
161
- if not isinstance(v, bytes):
162
- v = str(v)
163
-
164
- new_fields.append(
165
- (
166
- field.decode("utf-8")
167
- if isinstance(field, bytes)
168
- else field,
169
- v.encode("utf-8") if isinstance(v, str) else v,
170
- )
171
- )
172
-
173
- for (k, v) in files:
174
- # support for explicit filename
175
- ft = None
176
- fh = None
177
- if isinstance(v, (tuple, list)):
178
- if len(v) == 2:
179
- fn, fp = v
180
- elif len(v) == 3:
181
- fn, fp, ft = v
182
- else:
183
- fn, fp, ft, fh = v
184
- else:
185
- fn = guess_filename(v) or k
186
- fp = v
187
-
188
- if isinstance(fp, (str, bytes, bytearray)):
189
- fdata = fp
190
- elif hasattr(fp, "read"):
191
- fdata = fp.read()
192
- elif fp is None:
193
- continue
194
- else:
195
- fdata = fp
196
-
197
- rf = RequestField(name=k, data=fdata, filename=fn, headers=fh)
198
- rf.make_multipart(content_type=ft)
199
- new_fields.append(rf)
200
-
201
- body, content_type = encode_multipart_formdata(new_fields)
202
-
203
- return body, content_type
204
-
205
-
206
- class RequestHooksMixin:
207
- def register_hook(self, event, hook):
208
- """Properly register a hook."""
209
-
210
- if event not in self.hooks:
211
- raise ValueError(f'Unsupported event specified, with event name "{event}"')
212
-
213
- if isinstance(hook, Callable):
214
- self.hooks[event].append(hook)
215
- elif hasattr(hook, "__iter__"):
216
- self.hooks[event].extend(h for h in hook if isinstance(h, Callable))
217
-
218
- def deregister_hook(self, event, hook):
219
- """Deregister a previously registered hook.
220
- Returns True if the hook existed, False if not.
221
- """
222
-
223
- try:
224
- self.hooks[event].remove(hook)
225
- return True
226
- except ValueError:
227
- return False
228
-
229
-
230
- class Request(RequestHooksMixin):
231
- """A user-created :class:`Request <Request>` object.
232
-
233
- Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server.
234
-
235
- :param method: HTTP method to use.
236
- :param url: URL to send.
237
- :param headers: dictionary of headers to send.
238
- :param files: dictionary of {filename: fileobject} files to multipart upload.
239
- :param data: the body to attach to the request. If a dictionary or
240
- list of tuples ``[(key, value)]`` is provided, form-encoding will
241
- take place.
242
- :param json: json for the body to attach to the request (if files or data is not specified).
243
- :param params: URL parameters to append to the URL. If a dictionary or
244
- list of tuples ``[(key, value)]`` is provided, form-encoding will
245
- take place.
246
- :param auth: Auth handler or (user, pass) tuple.
247
- :param cookies: dictionary or CookieJar of cookies to attach to this request.
248
- :param hooks: dictionary of callback hooks, for internal usage.
249
-
250
- Usage::
251
-
252
- >>> import requests
253
- >>> req = requests.Request('GET', 'https://httpbin.org/get')
254
- >>> req.prepare()
255
- <PreparedRequest [GET]>
256
- """
257
-
258
- def __init__(
259
- self,
260
- method=None,
261
- url=None,
262
- headers=None,
263
- files=None,
264
- data=None,
265
- params=None,
266
- auth=None,
267
- cookies=None,
268
- hooks=None,
269
- json=None,
270
- ):
271
-
272
- # Default empty dicts for dict params.
273
- data = [] if data is None else data
274
- files = [] if files is None else files
275
- headers = {} if headers is None else headers
276
- params = {} if params is None else params
277
- hooks = {} if hooks is None else hooks
278
-
279
- self.hooks = default_hooks()
280
- for (k, v) in list(hooks.items()):
281
- self.register_hook(event=k, hook=v)
282
-
283
- self.method = method
284
- self.url = url
285
- self.headers = headers
286
- self.files = files
287
- self.data = data
288
- self.json = json
289
- self.params = params
290
- self.auth = auth
291
- self.cookies = cookies
292
-
293
- def __repr__(self):
294
- return f"<Request [{self.method}]>"
295
-
296
- def prepare(self):
297
- """Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it."""
298
- p = PreparedRequest()
299
- p.prepare(
300
- method=self.method,
301
- url=self.url,
302
- headers=self.headers,
303
- files=self.files,
304
- data=self.data,
305
- json=self.json,
306
- params=self.params,
307
- auth=self.auth,
308
- cookies=self.cookies,
309
- hooks=self.hooks,
310
- )
311
- return p
312
-
313
-
314
- class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
315
- """The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
316
- containing the exact bytes that will be sent to the server.
317
-
318
- Instances are generated from a :class:`Request <Request>` object, and
319
- should not be instantiated manually; doing so may produce undesirable
320
- effects.
321
-
322
- Usage::
323
-
324
- >>> import requests
325
- >>> req = requests.Request('GET', 'https://httpbin.org/get')
326
- >>> r = req.prepare()
327
- >>> r
328
- <PreparedRequest [GET]>
329
-
330
- >>> s = requests.Session()
331
- >>> s.send(r)
332
- <Response [200]>
333
- """
334
-
335
- def __init__(self):
336
- #: HTTP verb to send to the server.
337
- self.method = None
338
- #: HTTP URL to send the request to.
339
- self.url = None
340
- #: dictionary of HTTP headers.
341
- self.headers = None
342
- # The `CookieJar` used to create the Cookie header will be stored here
343
- # after prepare_cookies is called
344
- self._cookies = None
345
- #: request body to send to the server.
346
- self.body = None
347
- #: dictionary of callback hooks, for internal usage.
348
- self.hooks = default_hooks()
349
- #: integer denoting starting position of a readable file-like body.
350
- self._body_position = None
351
-
352
- def prepare(
353
- self,
354
- method=None,
355
- url=None,
356
- headers=None,
357
- files=None,
358
- data=None,
359
- params=None,
360
- auth=None,
361
- cookies=None,
362
- hooks=None,
363
- json=None,
364
- ):
365
- """Prepares the entire request with the given parameters."""
366
-
367
- self.prepare_method(method)
368
- self.prepare_url(url, params)
369
- self.prepare_headers(headers)
370
- self.prepare_cookies(cookies)
371
- self.prepare_body(data, files, json)
372
- self.prepare_auth(auth, url)
373
-
374
- # Note that prepare_auth must be last to enable authentication schemes
375
- # such as OAuth to work on a fully prepared request.
376
-
377
- # This MUST go after prepare_auth. Authenticators could add a hook
378
- self.prepare_hooks(hooks)
379
-
380
- def __repr__(self):
381
- return f"<PreparedRequest [{self.method}]>"
382
-
383
- def copy(self):
384
- p = PreparedRequest()
385
- p.method = self.method
386
- p.url = self.url
387
- p.headers = self.headers.copy() if self.headers is not None else None
388
- p._cookies = _copy_cookie_jar(self._cookies)
389
- p.body = self.body
390
- p.hooks = self.hooks
391
- p._body_position = self._body_position
392
- return p
393
-
394
- def prepare_method(self, method):
395
- """Prepares the given HTTP method."""
396
- self.method = method
397
- if self.method is not None:
398
- self.method = to_native_string(self.method.upper())
399
-
400
- @staticmethod
401
- def _get_idna_encoded_host(host):
402
- from pip._vendor import idna
403
-
404
- try:
405
- host = idna.encode(host, uts46=True).decode("utf-8")
406
- except idna.IDNAError:
407
- raise UnicodeError
408
- return host
409
-
410
- def prepare_url(self, url, params):
411
- """Prepares the given HTTP URL."""
412
- #: Accept objects that have string representations.
413
- #: We're unable to blindly call unicode/str functions
414
- #: as this will include the bytestring indicator (b'')
415
- #: on python 3.x.
416
- #: https://github.com/psf/requests/pull/2238
417
- if isinstance(url, bytes):
418
- url = url.decode("utf8")
419
- else:
420
- url = str(url)
421
-
422
- # Remove leading whitespaces from url
423
- url = url.lstrip()
424
-
425
- # Don't do any URL preparation for non-HTTP schemes like `mailto`,
426
- # `data` etc to work around exceptions from `url_parse`, which
427
- # handles RFC 3986 only.
428
- if ":" in url and not url.lower().startswith("http"):
429
- self.url = url
430
- return
431
-
432
- # Support for unicode domain names and paths.
433
- try:
434
- scheme, auth, host, port, path, query, fragment = parse_url(url)
435
- except LocationParseError as e:
436
- raise InvalidURL(*e.args)
437
-
438
- if not scheme:
439
- raise MissingSchema(
440
- f"Invalid URL {url!r}: No scheme supplied. "
441
- f"Perhaps you meant https://{url}?"
442
- )
443
-
444
- if not host:
445
- raise InvalidURL(f"Invalid URL {url!r}: No host supplied")
446
-
447
- # In general, we want to try IDNA encoding the hostname if the string contains
448
- # non-ASCII characters. This allows users to automatically get the correct IDNA
449
- # behaviour. For strings containing only ASCII characters, we need to also verify
450
- # it doesn't start with a wildcard (*), before allowing the unencoded hostname.
451
- if not unicode_is_ascii(host):
452
- try:
453
- host = self._get_idna_encoded_host(host)
454
- except UnicodeError:
455
- raise InvalidURL("URL has an invalid label.")
456
- elif host.startswith(("*", ".")):
457
- raise InvalidURL("URL has an invalid label.")
458
-
459
- # Carefully reconstruct the network location
460
- netloc = auth or ""
461
- if netloc:
462
- netloc += "@"
463
- netloc += host
464
- if port:
465
- netloc += f":{port}"
466
-
467
- # Bare domains aren't valid URLs.
468
- if not path:
469
- path = "/"
470
-
471
- if isinstance(params, (str, bytes)):
472
- params = to_native_string(params)
473
-
474
- enc_params = self._encode_params(params)
475
- if enc_params:
476
- if query:
477
- query = f"{query}&{enc_params}"
478
- else:
479
- query = enc_params
480
-
481
- url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
482
- self.url = url
483
-
484
- def prepare_headers(self, headers):
485
- """Prepares the given HTTP headers."""
486
-
487
- self.headers = CaseInsensitiveDict()
488
- if headers:
489
- for header in headers.items():
490
- # Raise exception on invalid header value.
491
- check_header_validity(header)
492
- name, value = header
493
- self.headers[to_native_string(name)] = value
494
-
495
- def prepare_body(self, data, files, json=None):
496
- """Prepares the given HTTP body data."""
497
-
498
- # Check if file, fo, generator, iterator.
499
- # If not, run through normal process.
500
-
501
- # Nottin' on you.
502
- body = None
503
- content_type = None
504
-
505
- if not data and json is not None:
506
- # urllib3 requires a bytes-like body. Python 2's json.dumps
507
- # provides this natively, but Python 3 gives a Unicode string.
508
- content_type = "application/json"
509
-
510
- try:
511
- body = complexjson.dumps(json, allow_nan=False)
512
- except ValueError as ve:
513
- raise InvalidJSONError(ve, request=self)
514
-
515
- if not isinstance(body, bytes):
516
- body = body.encode("utf-8")
517
-
518
- is_stream = all(
519
- [
520
- hasattr(data, "__iter__"),
521
- not isinstance(data, (basestring, list, tuple, Mapping)),
522
- ]
523
- )
524
-
525
- if is_stream:
526
- try:
527
- length = super_len(data)
528
- except (TypeError, AttributeError, UnsupportedOperation):
529
- length = None
530
-
531
- body = data
532
-
533
- if getattr(body, "tell", None) is not None:
534
- # Record the current file position before reading.
535
- # This will allow us to rewind a file in the event
536
- # of a redirect.
537
- try:
538
- self._body_position = body.tell()
539
- except OSError:
540
- # This differentiates from None, allowing us to catch
541
- # a failed `tell()` later when trying to rewind the body
542
- self._body_position = object()
543
-
544
- if files:
545
- raise NotImplementedError(
546
- "Streamed bodies and files are mutually exclusive."
547
- )
548
-
549
- if length:
550
- self.headers["Content-Length"] = builtin_str(length)
551
- else:
552
- self.headers["Transfer-Encoding"] = "chunked"
553
- else:
554
- # Multi-part file uploads.
555
- if files:
556
- (body, content_type) = self._encode_files(files, data)
557
- else:
558
- if data:
559
- body = self._encode_params(data)
560
- if isinstance(data, basestring) or hasattr(data, "read"):
561
- content_type = None
562
- else:
563
- content_type = "application/x-www-form-urlencoded"
564
-
565
- self.prepare_content_length(body)
566
-
567
- # Add content-type if it wasn't explicitly provided.
568
- if content_type and ("content-type" not in self.headers):
569
- self.headers["Content-Type"] = content_type
570
-
571
- self.body = body
572
-
573
- def prepare_content_length(self, body):
574
- """Prepare Content-Length header based on request method and body"""
575
- if body is not None:
576
- length = super_len(body)
577
- if length:
578
- # If length exists, set it. Otherwise, we fallback
579
- # to Transfer-Encoding: chunked.
580
- self.headers["Content-Length"] = builtin_str(length)
581
- elif (
582
- self.method not in ("GET", "HEAD")
583
- and self.headers.get("Content-Length") is None
584
- ):
585
- # Set Content-Length to 0 for methods that can have a body
586
- # but don't provide one. (i.e. not GET or HEAD)
587
- self.headers["Content-Length"] = "0"
588
-
589
- def prepare_auth(self, auth, url=""):
590
- """Prepares the given HTTP auth data."""
591
-
592
- # If no Auth is explicitly provided, extract it from the URL first.
593
- if auth is None:
594
- url_auth = get_auth_from_url(self.url)
595
- auth = url_auth if any(url_auth) else None
596
-
597
- if auth:
598
- if isinstance(auth, tuple) and len(auth) == 2:
599
- # special-case basic HTTP auth
600
- auth = HTTPBasicAuth(*auth)
601
-
602
- # Allow auth to make its changes.
603
- r = auth(self)
604
-
605
- # Update self to reflect the auth changes.
606
- self.__dict__.update(r.__dict__)
607
-
608
- # Recompute Content-Length
609
- self.prepare_content_length(self.body)
610
-
611
- def prepare_cookies(self, cookies):
612
- """Prepares the given HTTP cookie data.
613
-
614
- This function eventually generates a ``Cookie`` header from the
615
- given cookies using cookielib. Due to cookielib's design, the header
616
- will not be regenerated if it already exists, meaning this function
617
- can only be called once for the life of the
618
- :class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls
619
- to ``prepare_cookies`` will have no actual effect, unless the "Cookie"
620
- header is removed beforehand.
621
- """
622
- if isinstance(cookies, cookielib.CookieJar):
623
- self._cookies = cookies
624
- else:
625
- self._cookies = cookiejar_from_dict(cookies)
626
-
627
- cookie_header = get_cookie_header(self._cookies, self)
628
- if cookie_header is not None:
629
- self.headers["Cookie"] = cookie_header
630
-
631
- def prepare_hooks(self, hooks):
632
- """Prepares the given hooks."""
633
- # hooks can be passed as None to the prepare method and to this
634
- # method. To prevent iterating over None, simply use an empty list
635
- # if hooks is False-y
636
- hooks = hooks or []
637
- for event in hooks:
638
- self.register_hook(event, hooks[event])
639
-
640
-
641
- class Response:
642
- """The :class:`Response <Response>` object, which contains a
643
- server's response to an HTTP request.
644
- """
645
-
646
- __attrs__ = [
647
- "_content",
648
- "status_code",
649
- "headers",
650
- "url",
651
- "history",
652
- "encoding",
653
- "reason",
654
- "cookies",
655
- "elapsed",
656
- "request",
657
- ]
658
-
659
- def __init__(self):
660
- self._content = False
661
- self._content_consumed = False
662
- self._next = None
663
-
664
- #: Integer Code of responded HTTP Status, e.g. 404 or 200.
665
- self.status_code = None
666
-
667
- #: Case-insensitive Dictionary of Response Headers.
668
- #: For example, ``headers['content-encoding']`` will return the
669
- #: value of a ``'Content-Encoding'`` response header.
670
- self.headers = CaseInsensitiveDict()
671
-
672
- #: File-like object representation of response (for advanced usage).
673
- #: Use of ``raw`` requires that ``stream=True`` be set on the request.
674
- #: This requirement does not apply for use internally to Requests.
675
- self.raw = None
676
-
677
- #: Final URL location of Response.
678
- self.url = None
679
-
680
- #: Encoding to decode with when accessing r.text.
681
- self.encoding = None
682
-
683
- #: A list of :class:`Response <Response>` objects from
684
- #: the history of the Request. Any redirect responses will end
685
- #: up here. The list is sorted from the oldest to the most recent request.
686
- self.history = []
687
-
688
- #: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
689
- self.reason = None
690
-
691
- #: A CookieJar of Cookies the server sent back.
692
- self.cookies = cookiejar_from_dict({})
693
-
694
- #: The amount of time elapsed between sending the request
695
- #: and the arrival of the response (as a timedelta).
696
- #: This property specifically measures the time taken between sending
697
- #: the first byte of the request and finishing parsing the headers. It
698
- #: is therefore unaffected by consuming the response content or the
699
- #: value of the ``stream`` keyword argument.
700
- self.elapsed = datetime.timedelta(0)
701
-
702
- #: The :class:`PreparedRequest <PreparedRequest>` object to which this
703
- #: is a response.
704
- self.request = None
705
-
706
- def __enter__(self):
707
- return self
708
-
709
- def __exit__(self, *args):
710
- self.close()
711
-
712
- def __getstate__(self):
713
- # Consume everything; accessing the content attribute makes
714
- # sure the content has been fully read.
715
- if not self._content_consumed:
716
- self.content
717
-
718
- return {attr: getattr(self, attr, None) for attr in self.__attrs__}
719
-
720
- def __setstate__(self, state):
721
- for name, value in state.items():
722
- setattr(self, name, value)
723
-
724
- # pickled objects do not have .raw
725
- setattr(self, "_content_consumed", True)
726
- setattr(self, "raw", None)
727
-
728
- def __repr__(self):
729
- return f"<Response [{self.status_code}]>"
730
-
731
- def __bool__(self):
732
- """Returns True if :attr:`status_code` is less than 400.
733
-
734
- This attribute checks if the status code of the response is between
735
- 400 and 600 to see if there was a client error or a server error. If
736
- the status code, is between 200 and 400, this will return True. This
737
- is **not** a check to see if the response code is ``200 OK``.
738
- """
739
- return self.ok
740
-
741
- def __nonzero__(self):
742
- """Returns True if :attr:`status_code` is less than 400.
743
-
744
- This attribute checks if the status code of the response is between
745
- 400 and 600 to see if there was a client error or a server error. If
746
- the status code, is between 200 and 400, this will return True. This
747
- is **not** a check to see if the response code is ``200 OK``.
748
- """
749
- return self.ok
750
-
751
- def __iter__(self):
752
- """Allows you to use a response as an iterator."""
753
- return self.iter_content(128)
754
-
755
- @property
756
- def ok(self):
757
- """Returns True if :attr:`status_code` is less than 400, False if not.
758
-
759
- This attribute checks if the status code of the response is between
760
- 400 and 600 to see if there was a client error or a server error. If
761
- the status code is between 200 and 400, this will return True. This
762
- is **not** a check to see if the response code is ``200 OK``.
763
- """
764
- try:
765
- self.raise_for_status()
766
- except HTTPError:
767
- return False
768
- return True
769
-
770
- @property
771
- def is_redirect(self):
772
- """True if this Response is a well-formed HTTP redirect that could have
773
- been processed automatically (by :meth:`Session.resolve_redirects`).
774
- """
775
- return "location" in self.headers and self.status_code in REDIRECT_STATI
776
-
777
- @property
778
- def is_permanent_redirect(self):
779
- """True if this Response one of the permanent versions of redirect."""
780
- return "location" in self.headers and self.status_code in (
781
- codes.moved_permanently,
782
- codes.permanent_redirect,
783
- )
784
-
785
- @property
786
- def next(self):
787
- """Returns a PreparedRequest for the next request in a redirect chain, if there is one."""
788
- return self._next
789
-
790
- @property
791
- def apparent_encoding(self):
792
- """The apparent encoding, provided by the charset_normalizer or chardet libraries."""
793
- return chardet.detect(self.content)["encoding"]
794
-
795
- def iter_content(self, chunk_size=1, decode_unicode=False):
796
- """Iterates over the response data. When stream=True is set on the
797
- request, this avoids reading the content at once into memory for
798
- large responses. The chunk size is the number of bytes it should
799
- read into memory. This is not necessarily the length of each item
800
- returned as decoding can take place.
801
-
802
- chunk_size must be of type int or None. A value of None will
803
- function differently depending on the value of `stream`.
804
- stream=True will read data as it arrives in whatever size the
805
- chunks are received. If stream=False, data is returned as
806
- a single chunk.
807
-
808
- If decode_unicode is True, content will be decoded using the best
809
- available encoding based on the response.
810
- """
811
-
812
- def generate():
813
- # Special case for urllib3.
814
- if hasattr(self.raw, "stream"):
815
- try:
816
- yield from self.raw.stream(chunk_size, decode_content=True)
817
- except ProtocolError as e:
818
- raise ChunkedEncodingError(e)
819
- except DecodeError as e:
820
- raise ContentDecodingError(e)
821
- except ReadTimeoutError as e:
822
- raise ConnectionError(e)
823
- except SSLError as e:
824
- raise RequestsSSLError(e)
825
- else:
826
- # Standard file-like object.
827
- while True:
828
- chunk = self.raw.read(chunk_size)
829
- if not chunk:
830
- break
831
- yield chunk
832
-
833
- self._content_consumed = True
834
-
835
- if self._content_consumed and isinstance(self._content, bool):
836
- raise StreamConsumedError()
837
- elif chunk_size is not None and not isinstance(chunk_size, int):
838
- raise TypeError(
839
- f"chunk_size must be an int, it is instead a {type(chunk_size)}."
840
- )
841
- # simulate reading small chunks of the content
842
- reused_chunks = iter_slices(self._content, chunk_size)
843
-
844
- stream_chunks = generate()
845
-
846
- chunks = reused_chunks if self._content_consumed else stream_chunks
847
-
848
- if decode_unicode:
849
- chunks = stream_decode_response_unicode(chunks, self)
850
-
851
- return chunks
852
-
853
- def iter_lines(
854
- self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=False, delimiter=None
855
- ):
856
- """Iterates over the response data, one line at a time. When
857
- stream=True is set on the request, this avoids reading the
858
- content at once into memory for large responses.
859
-
860
- .. note:: This method is not reentrant safe.
861
- """
862
-
863
- pending = None
864
-
865
- for chunk in self.iter_content(
866
- chunk_size=chunk_size, decode_unicode=decode_unicode
867
- ):
868
-
869
- if pending is not None:
870
- chunk = pending + chunk
871
-
872
- if delimiter:
873
- lines = chunk.split(delimiter)
874
- else:
875
- lines = chunk.splitlines()
876
-
877
- if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
878
- pending = lines.pop()
879
- else:
880
- pending = None
881
-
882
- yield from lines
883
-
884
- if pending is not None:
885
- yield pending
886
-
887
- @property
888
- def content(self):
889
- """Content of the response, in bytes."""
890
-
891
- if self._content is False:
892
- # Read the contents.
893
- if self._content_consumed:
894
- raise RuntimeError("The content for this response was already consumed")
895
-
896
- if self.status_code == 0 or self.raw is None:
897
- self._content = None
898
- else:
899
- self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
900
-
901
- self._content_consumed = True
902
- # don't need to release the connection; that's been handled by urllib3
903
- # since we exhausted the data.
904
- return self._content
905
-
906
- @property
907
- def text(self):
908
- """Content of the response, in unicode.
909
-
910
- If Response.encoding is None, encoding will be guessed using
911
- ``charset_normalizer`` or ``chardet``.
912
-
913
- The encoding of the response content is determined based solely on HTTP
914
- headers, following RFC 2616 to the letter. If you can take advantage of
915
- non-HTTP knowledge to make a better guess at the encoding, you should
916
- set ``r.encoding`` appropriately before accessing this property.
917
- """
918
-
919
- # Try charset from content-type
920
- content = None
921
- encoding = self.encoding
922
-
923
- if not self.content:
924
- return ""
925
-
926
- # Fallback to auto-detected encoding.
927
- if self.encoding is None:
928
- encoding = self.apparent_encoding
929
-
930
- # Decode unicode from given encoding.
931
- try:
932
- content = str(self.content, encoding, errors="replace")
933
- except (LookupError, TypeError):
934
- # A LookupError is raised if the encoding was not found which could
935
- # indicate a misspelling or similar mistake.
936
- #
937
- # A TypeError can be raised if encoding is None
938
- #
939
- # So we try blindly encoding.
940
- content = str(self.content, errors="replace")
941
-
942
- return content
943
-
944
- def json(self, **kwargs):
945
- r"""Returns the json-encoded content of a response, if any.
946
-
947
- :param \*\*kwargs: Optional arguments that ``json.loads`` takes.
948
- :raises requests.exceptions.JSONDecodeError: If the response body does not
949
- contain valid json.
950
- """
951
-
952
- if not self.encoding and self.content and len(self.content) > 3:
953
- # No encoding set. JSON RFC 4627 section 3 states we should expect
954
- # UTF-8, -16 or -32. Detect which one to use; If the detection or
955
- # decoding fails, fall back to `self.text` (using charset_normalizer to make
956
- # a best guess).
957
- encoding = guess_json_utf(self.content)
958
- if encoding is not None:
959
- try:
960
- return complexjson.loads(self.content.decode(encoding), **kwargs)
961
- except UnicodeDecodeError:
962
- # Wrong UTF codec detected; usually because it's not UTF-8
963
- # but some other 8-bit codec. This is an RFC violation,
964
- # and the server didn't bother to tell us what codec *was*
965
- # used.
966
- pass
967
- except JSONDecodeError as e:
968
- raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
969
-
970
- try:
971
- return complexjson.loads(self.text, **kwargs)
972
- except JSONDecodeError as e:
973
- # Catch JSON-related errors and raise as requests.JSONDecodeError
974
- # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
975
- raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
976
-
977
- @property
978
- def links(self):
979
- """Returns the parsed header links of the response, if any."""
980
-
981
- header = self.headers.get("link")
982
-
983
- resolved_links = {}
984
-
985
- if header:
986
- links = parse_header_links(header)
987
-
988
- for link in links:
989
- key = link.get("rel") or link.get("url")
990
- resolved_links[key] = link
991
-
992
- return resolved_links
993
-
994
- def raise_for_status(self):
995
- """Raises :class:`HTTPError`, if one occurred."""
996
-
997
- http_error_msg = ""
998
- if isinstance(self.reason, bytes):
999
- # We attempt to decode utf-8 first because some servers
1000
- # choose to localize their reason strings. If the string
1001
- # isn't utf-8, we fall back to iso-8859-1 for all other
1002
- # encodings. (See PR #3538)
1003
- try:
1004
- reason = self.reason.decode("utf-8")
1005
- except UnicodeDecodeError:
1006
- reason = self.reason.decode("iso-8859-1")
1007
- else:
1008
- reason = self.reason
1009
-
1010
- if 400 <= self.status_code < 500:
1011
- http_error_msg = (
1012
- f"{self.status_code} Client Error: {reason} for url: {self.url}"
1013
- )
1014
-
1015
- elif 500 <= self.status_code < 600:
1016
- http_error_msg = (
1017
- f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018
- )
1019
-
1020
- if http_error_msg:
1021
- raise HTTPError(http_error_msg, response=self)
1022
-
1023
- def close(self):
1024
- """Releases the connection back to the pool. Once this method has been
1025
- called the underlying ``raw`` object must not be accessed again.
1026
-
1027
- *Note: Should not normally need to be called explicitly.*
1028
- """
1029
- if not self._content_consumed:
1030
- self.raw.close()
1031
-
1032
- release_conn = getattr(self.raw, "release_conn", None)
1033
- if release_conn is not None:
1034
- release_conn()
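For orientation, the file removed above is requests' vendored `models.py`; the `PreparedRequest`/`Response` pair it defines is what callers see through the public API. A minimal consumer sketch (illustrative only, not part of this commit; the URLs are placeholders and it assumes the top-level `requests` package is installed):

    import requests

    r = requests.get("https://example.com/data.json")   # placeholder URL
    r.raise_for_status()          # raises HTTPError for 4xx/5xx statuses
    data = r.json()               # decode the body as JSON
    print(r.status_code, r.headers.get("Content-Type"), r.elapsed)

    # Streaming variant: avoid loading a large body into memory at once.
    with requests.get("https://example.com/big.bin", stream=True) as resp:
        for chunk in resp.iter_content(chunk_size=8192):
            pass                  # process each chunk incrementally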
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/markers.py DELETED
@@ -1,304 +0,0 @@
- # This file is dual licensed under the terms of the Apache License, Version
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
- # for complete details.
-
- import operator
- import os
- import platform
- import sys
- from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
- from setuptools.extern.pyparsing import ( # noqa: N817
-     Forward,
-     Group,
-     Literal as L,
-     ParseException,
-     ParseResults,
-     QuotedString,
-     ZeroOrMore,
-     stringEnd,
-     stringStart,
- )
-
- from .specifiers import InvalidSpecifier, Specifier
-
- __all__ = [
-     "InvalidMarker",
-     "UndefinedComparison",
-     "UndefinedEnvironmentName",
-     "Marker",
-     "default_environment",
- ]
-
- Operator = Callable[[str, str], bool]
-
-
- class InvalidMarker(ValueError):
-     """
-     An invalid marker was found, users should refer to PEP 508.
-     """
-
-
- class UndefinedComparison(ValueError):
-     """
-     An invalid operation was attempted on a value that doesn't support it.
-     """
-
-
- class UndefinedEnvironmentName(ValueError):
-     """
-     A name was attempted to be used that does not exist inside of the
-     environment.
-     """
-
-
- class Node:
-     def __init__(self, value: Any) -> None:
-         self.value = value
-
-     def __str__(self) -> str:
-         return str(self.value)
-
-     def __repr__(self) -> str:
-         return f"<{self.__class__.__name__}('{self}')>"
-
-     def serialize(self) -> str:
-         raise NotImplementedError
-
-
- class Variable(Node):
-     def serialize(self) -> str:
-         return str(self)
-
-
- class Value(Node):
-     def serialize(self) -> str:
-         return f'"{self}"'
-
-
- class Op(Node):
-     def serialize(self) -> str:
-         return str(self)
-
-
- VARIABLE = (
-     L("implementation_version")
-     | L("platform_python_implementation")
-     | L("implementation_name")
-     | L("python_full_version")
-     | L("platform_release")
-     | L("platform_version")
-     | L("platform_machine")
-     | L("platform_system")
-     | L("python_version")
-     | L("sys_platform")
-     | L("os_name")
-     | L("os.name") # PEP-345
-     | L("sys.platform") # PEP-345
-     | L("platform.version") # PEP-345
-     | L("platform.machine") # PEP-345
-     | L("platform.python_implementation") # PEP-345
-     | L("python_implementation") # undocumented setuptools legacy
-     | L("extra") # PEP-508
- )
- ALIASES = {
-     "os.name": "os_name",
-     "sys.platform": "sys_platform",
-     "platform.version": "platform_version",
-     "platform.machine": "platform_machine",
-     "platform.python_implementation": "platform_python_implementation",
-     "python_implementation": "platform_python_implementation",
- }
- VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))
-
- VERSION_CMP = (
-     L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
- )
-
- MARKER_OP = VERSION_CMP | L("not in") | L("in")
- MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))
-
- MARKER_VALUE = QuotedString("'") | QuotedString('"')
- MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))
-
- BOOLOP = L("and") | L("or")
-
- MARKER_VAR = VARIABLE | MARKER_VALUE
-
- MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
- MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))
-
- LPAREN = L("(").suppress()
- RPAREN = L(")").suppress()
-
- MARKER_EXPR = Forward()
- MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
- MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)
-
- MARKER = stringStart + MARKER_EXPR + stringEnd
-
-
- def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
-     if isinstance(results, ParseResults):
-         return [_coerce_parse_result(i) for i in results]
-     else:
-         return results
-
-
- def _format_marker(
-     marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
- ) -> str:
-
-     assert isinstance(marker, (list, tuple, str))
-
-     # Sometimes we have a structure like [[...]] which is a single item list
-     # where the single item is itself it's own list. In that case we want skip
-     # the rest of this function so that we don't get extraneous () on the
-     # outside.
-     if (
-         isinstance(marker, list)
-         and len(marker) == 1
-         and isinstance(marker[0], (list, tuple))
-     ):
-         return _format_marker(marker[0])
-
-     if isinstance(marker, list):
-         inner = (_format_marker(m, first=False) for m in marker)
-         if first:
-             return " ".join(inner)
-         else:
-             return "(" + " ".join(inner) + ")"
-     elif isinstance(marker, tuple):
-         return " ".join([m.serialize() for m in marker])
-     else:
-         return marker
-
-
- _operators: Dict[str, Operator] = {
-     "in": lambda lhs, rhs: lhs in rhs,
-     "not in": lambda lhs, rhs: lhs not in rhs,
-     "<": operator.lt,
-     "<=": operator.le,
-     "==": operator.eq,
-     "!=": operator.ne,
-     ">=": operator.ge,
-     ">": operator.gt,
- }
-
-
- def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
-     try:
-         spec = Specifier("".join([op.serialize(), rhs]))
-     except InvalidSpecifier:
-         pass
-     else:
-         return spec.contains(lhs)
-
-     oper: Optional[Operator] = _operators.get(op.serialize())
-     if oper is None:
-         raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")
-
-     return oper(lhs, rhs)
-
-
- class Undefined:
-     pass
-
-
- _undefined = Undefined()
-
-
- def _get_env(environment: Dict[str, str], name: str) -> str:
-     value: Union[str, Undefined] = environment.get(name, _undefined)
-
-     if isinstance(value, Undefined):
-         raise UndefinedEnvironmentName(
-             f"{name!r} does not exist in evaluation environment."
-         )
-
-     return value
-
-
- def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
-     groups: List[List[bool]] = [[]]
-
-     for marker in markers:
-         assert isinstance(marker, (list, tuple, str))
-
-         if isinstance(marker, list):
-             groups[-1].append(_evaluate_markers(marker, environment))
-         elif isinstance(marker, tuple):
-             lhs, op, rhs = marker
-
-             if isinstance(lhs, Variable):
-                 lhs_value = _get_env(environment, lhs.value)
-                 rhs_value = rhs.value
-             else:
-                 lhs_value = lhs.value
-                 rhs_value = _get_env(environment, rhs.value)
-
-             groups[-1].append(_eval_op(lhs_value, op, rhs_value))
-         else:
-             assert marker in ["and", "or"]
-             if marker == "or":
-                 groups.append([])
-
-     return any(all(item) for item in groups)
-
-
- def format_full_version(info: "sys._version_info") -> str:
-     version = "{0.major}.{0.minor}.{0.micro}".format(info)
-     kind = info.releaselevel
-     if kind != "final":
-         version += kind[0] + str(info.serial)
-     return version
-
-
- def default_environment() -> Dict[str, str]:
-     iver = format_full_version(sys.implementation.version)
-     implementation_name = sys.implementation.name
-     return {
-         "implementation_name": implementation_name,
-         "implementation_version": iver,
-         "os_name": os.name,
-         "platform_machine": platform.machine(),
-         "platform_release": platform.release(),
-         "platform_system": platform.system(),
-         "platform_version": platform.version(),
-         "python_full_version": platform.python_version(),
-         "platform_python_implementation": platform.python_implementation(),
-         "python_version": ".".join(platform.python_version_tuple()[:2]),
-         "sys_platform": sys.platform,
-     }
-
-
- class Marker:
-     def __init__(self, marker: str) -> None:
-         try:
-             self._markers = _coerce_parse_result(MARKER.parseString(marker))
-         except ParseException as e:
-             raise InvalidMarker(
-                 f"Invalid marker: {marker!r}, parse error at "
-                 f"{marker[e.loc : e.loc + 8]!r}"
-             )
-
-     def __str__(self) -> str:
-         return _format_marker(self._markers)
-
-     def __repr__(self) -> str:
-         return f"<Marker('{self}')>"
-
-     def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
-         """Evaluate a marker.
-
-         Return the boolean from evaluating the given marker against the
-         environment. environment is an optional argument to override all or
-         part of the determined environment.
-
-         The environment is determined from the current Python process.
-         """
-         current_environment = default_environment()
-         if environment is not None:
-             current_environment.update(environment)
-
-         return _evaluate_markers(self._markers, current_environment)
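The module deleted above is the pyparsing-based PEP 508 environment-marker evaluator that setuptools vendors from `packaging`. A small usage sketch (illustrative, not part of the diff; the plain `packaging.markers` import path is assumed here rather than the vendored one):

    from packaging.markers import Marker, default_environment

    m = Marker("python_version >= '3.8' and os_name == 'posix'")
    print(m.evaluate())                    # evaluated against this interpreter
    print(m.evaluate({"os_name": "nt"}))   # override part of the environment
    print(default_environment()["python_version"])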
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/launch.py DELETED
@@ -1,36 +0,0 @@
- """
- Launch the Python script on the command line after
- setuptools is bootstrapped via import.
- """
-
- # Note that setuptools gets imported implicitly by the
- # invocation of this script using python -m setuptools.launch
-
- import tokenize
- import sys
-
-
- def run():
-     """
-     Run the script in sys.argv[1] as if it had
-     been invoked naturally.
-     """
-     __builtins__
-     script_name = sys.argv[1]
-     namespace = dict(
-         __file__=script_name,
-         __name__='__main__',
-         __doc__=None,
-     )
-     sys.argv[:] = sys.argv[1:]
-
-     open_ = getattr(tokenize, 'open', open)
-     with open_(script_name) as fid:
-         script = fid.read()
-     norm_script = script.replace('\\r\\n', '\\n')
-     code = compile(norm_script, script_name, 'exec')
-     exec(code, namespace)
-
-
- if __name__ == '__main__':
-     run()
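As the module's own comments note, the deleted helper is meant to be run as a module so that setuptools is imported before the target script; the invocation looks like:

    python -m setuptools.launch path/to/script.py arg1 arg2

which executes `path/to/script.py` as `__main__` with the remaining arguments left in `sys.argv`.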
 
spaces/BongoCaat/ArtGenerator/stable_diffusion_2_0.py DELETED
@@ -1,611 +0,0 @@
1
- {
2
- "cells": [
3
- {
4
- "cell_type": "markdown",
5
- "metadata": {
6
- "id": "view-in-github",
7
- "colab_type": "text"
8
- },
9
- "source": [
10
- "<a href=\"https://colab.research.google.com/github/qunash/stable-diffusion-2-gui/blob/main/stable_diffusion_2_0.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
11
- ]
12
- },
13
- {
14
- "cell_type": "markdown",
15
- "metadata": {
16
- "id": "620o1BxdNbgq"
17
- },
18
- "source": [
19
- "# **Stable Diffusion 2.1**\n",
20
- "Gradio app for [Stable Diffusion 2](https://huggingface.co/stabilityai/stable-diffusion-2) by [Stability AI](https://stability.ai/) (v2-1_768-ema-pruned.ckpt).\n",
21
- "It uses [Hugging Face](https://huggingface.co/) Diffusers🧨 implementation.\n",
22
- "\n",
23
- "Currently supported pipelines are `text-to-image`, `image-to-image`, `inpainting`, `4x upscaling` and `depth-to-image`.\n",
24
- "\n",
25
- "<br>\n",
26
- "\n",
27
- "Colab by [anzorq](https://twitter.com/hahahahohohe). If you like it, please consider supporting me:\n",
28
- "\n",
29
- "[<a href=\"https://www.buymeacoffee.com/anzorq\" target=\"_blank\"><img src=\"https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png\" height=\"32px\" width=\"108px\" alt=\"Buy Me A Coffee\"></a>](https://www.buymeacoffee.com/anzorq)\n",
30
- "<br>\n",
31
- "[![GitHub Repo stars](https://img.shields.io/github/stars/qunash/stable-diffusion-2-gui?style=social)](https://github.com/qunash/stable-diffusion-2-gui)\n",
32
- "\n",
33
- "![visitors](https://visitor-badge.glitch.me/badge?page_id=anzorq.sd-2-colab-header)"
34
- ]
35
- },
36
- {
37
- "cell_type": "markdown",
38
- "metadata": {
39
- "id": "KQI4RX20DW_8"
40
- },
41
- "source": [
42
- "# Install dependencies (~1.5 mins)"
43
- ]
44
- },
45
- {
46
- "cell_type": "code",
47
- "execution_count": null,
48
- "metadata": {
49
- "id": "78HoqRAB-cES",
50
- "cellView": "form"
51
- },
52
- "outputs": [],
53
- "source": [
54
- "!pip install --upgrade git+https://github.com/huggingface/diffusers.git\n",
55
- "# !pip install diffusers\n",
56
- "!pip install --upgrade git+https://github.com/huggingface/transformers/\n",
57
- "# !pip install transformers\n",
58
- "!pip install accelerate==0.12.0\n",
59
- "!pip install scipy\n",
60
- "!pip install ftfy\n",
61
- "!pip install gradio -q\n",
62
- "\n",
63
- "#@markdown ### ⬅️ Run this cell\n",
64
- "#@markdown ---\n",
65
- "#@markdown ### Install **xformers**?\n",
66
- "#@markdown This will take an additional ~3.5 mins.<br>But images will generate 25-40% faster.\n",
67
- "install_xformers = False #@param {type:\"boolean\"}\n",
68
- "\n",
69
- "if install_xformers:\n",
70
- " import os\n",
71
- " from subprocess import getoutput\n",
72
- "\n",
73
- " os.system(\"pip install --extra-index-url https://download.pytorch.org/whl/cu113 torch torchvision==0.13.1+cu113\")\n",
74
- " os.system(\"pip install triton==2.0.0.dev20220701\")\n",
75
- " gpu_info = getoutput('nvidia-smi')\n",
76
- " if(\"A10G\" in gpu_info):\n",
77
- " os.system(f\"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl\")\n",
78
- " elif(\"T4\" in gpu_info):\n",
79
- " os.system(f\"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl\")\n",
80
- "\n",
81
- "\n",
82
- "# ### install xformers\n",
83
- "# from IPython.utils import capture\n",
84
- "# from subprocess import getoutput\n",
85
- "# from re import search\n",
86
- "\n",
87
- "# with capture.capture_output() as cap:\n",
88
- " \n",
89
- "# smi_out = getoutput('nvidia-smi')\n",
90
- "# supported = search('(T4|P100|V100|A100|K80)', smi_out)\n",
91
- "\n",
92
- "# if not supported:\n",
93
- "# while True:\n",
94
- "# print(\"\\x1b[1;31mThe current GPU is not supported, try starting a new session.\\x1b[0m\")\n",
95
- "# else:\n",
96
- "# supported = supported.group(0)\n",
97
- "\n",
98
- "# !pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/{supported}/xformers-0.0.13.dev0-py3-none-any.whl\n",
99
- "# !pip install -q https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl"
100
- ]
101
- },
102
- {
103
- "cell_type": "markdown",
104
- "metadata": {
105
- "id": "OOPHNsFYDbc0"
106
- },
107
- "source": [
108
- "# Run the app"
109
- ]
110
- },
111
- {
112
- "cell_type": "code",
113
- "execution_count": null,
114
- "metadata": {
115
- "cellView": "form",
116
- "id": "gId0-asCBVwL"
117
- },
118
- "outputs": [],
119
- "source": [
120
- "#@title ⬇️🖼️\n",
121
- "from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionUpscalePipeline, DiffusionPipeline, StableDiffusionDepth2ImgPipeline, DPMSolverMultistepScheduler\n",
122
- "import gradio as gr\n",
123
- "import torch\n",
124
- "from PIL import Image\n",
125
- "import random\n",
126
- "\n",
127
- "state = None\n",
128
- "current_steps = 25\n",
129
- "attn_slicing_enabled = True\n",
130
- "mem_eff_attn_enabled = install_xformers\n",
131
- "\n",
132
- "# model_id = 'stabilityai/stable-diffusion-2'\n",
133
- "model_id = 'stabilityai/stable-diffusion-2-1'\n",
134
- "\n",
135
- "scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder=\"scheduler\")\n",
136
- "\n",
137
- "pipe = StableDiffusionPipeline.from_pretrained(\n",
138
- " model_id,\n",
139
- " revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
140
- " torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
141
- " scheduler=scheduler\n",
142
- " ).to(\"cuda\")\n",
143
- "pipe.enable_attention_slicing()\n",
144
- "if mem_eff_attn_enabled:\n",
145
- " pipe.enable_xformers_memory_efficient_attention()\n",
146
- "\n",
147
- "pipe_i2i = None\n",
148
- "pipe_upscale = None\n",
149
- "pipe_inpaint = None\n",
150
- "pipe_depth2img = None\n",
151
- "\n",
152
- "\n",
153
- "modes = {\n",
154
- " 'txt2img': 'Text to Image',\n",
155
- " 'img2img': 'Image to Image',\n",
156
- " 'inpaint': 'Inpainting',\n",
157
- " 'upscale4x': 'Upscale 4x',\n",
158
- " 'depth2img': 'Depth to Image'\n",
159
- "}\n",
160
- "current_mode = modes['txt2img']\n",
161
- "\n",
162
- "def error_str(error, title=\"Error\"):\n",
163
- " return f\"\"\"#### {title}\n",
164
- " {error}\"\"\" if error else \"\"\n",
165
- "\n",
166
- "def update_state(new_state):\n",
167
- " global state\n",
168
- " state = new_state\n",
169
- "\n",
170
- "def update_state_info(old_state):\n",
171
- " if state and state != old_state:\n",
172
- " return gr.update(value=state)\n",
173
- "\n",
174
- "def set_mem_optimizations(pipe):\n",
175
- " if attn_slicing_enabled:\n",
176
- " pipe.enable_attention_slicing()\n",
177
- " else:\n",
178
- " pipe.disable_attention_slicing()\n",
179
- " \n",
180
- " if mem_eff_attn_enabled:\n",
181
- " pipe.enable_xformers_memory_efficient_attention()\n",
182
- " else:\n",
183
- " pipe.disable_xformers_memory_efficient_attention()\n",
184
- "\n",
185
- "def get_i2i_pipe(scheduler):\n",
186
- " \n",
187
- " update_state(\"Loading image to image model...\")\n",
188
- "\n",
189
- " pipe = StableDiffusionImg2ImgPipeline.from_pretrained(\n",
190
- " model_id,\n",
191
- " revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
192
- " torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
193
- " scheduler=scheduler\n",
194
- " )\n",
195
- " set_mem_optimizations(pipe)\n",
196
- " pipe.to(\"cuda\")\n",
197
- " return pipe\n",
198
- "\n",
199
- "def get_inpaint_pipe():\n",
200
- " \n",
201
- " update_state(\"Loading inpainting model...\")\n",
202
- "\n",
203
- " pipe = DiffusionPipeline.from_pretrained(\n",
204
- " \"stabilityai/stable-diffusion-2-inpainting\",\n",
205
- " revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
206
- " torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
207
- " # scheduler=scheduler # TODO currently setting scheduler here messes up the end result. A bug in Diffusers🧨\n",
208
- " ).to(\"cuda\")\n",
209
- " pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
210
- " pipe.enable_attention_slicing()\n",
211
- " pipe.enable_xformers_memory_efficient_attention()\n",
212
- " return pipe\n",
213
- "\n",
214
- "def get_upscale_pipe(scheduler):\n",
215
- " \n",
216
- " update_state(\"Loading upscale model...\")\n",
217
- "\n",
218
- " pipe = StableDiffusionUpscalePipeline.from_pretrained(\n",
219
- " \"stabilityai/stable-diffusion-x4-upscaler\",\n",
220
- " revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
221
- " torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
222
- " # scheduler=scheduler\n",
223
- " )\n",
224
- " # pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
225
- " set_mem_optimizations(pipe)\n",
226
- " pipe.to(\"cuda\")\n",
227
- " return pipe\n",
228
- " \n",
229
- "def get_depth2img_pipe():\n",
230
- " \n",
231
- " update_state(\"Loading depth to image model...\")\n",
232
- "\n",
233
- " pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(\n",
234
- " \"stabilityai/stable-diffusion-2-depth\",\n",
235
- " revision=\"fp16\" if torch.cuda.is_available() else \"fp32\",\n",
236
- " torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,\n",
237
- " # scheduler=scheduler\n",
238
- " )\n",
239
- " pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\n",
240
- " set_mem_optimizations(pipe)\n",
241
- " pipe.to(\"cuda\")\n",
242
- " return pipe\n",
243
- "\n",
244
- "def switch_attention_slicing(attn_slicing):\n",
245
- " global attn_slicing_enabled\n",
246
- " attn_slicing_enabled = attn_slicing\n",
247
- "\n",
248
- "def switch_mem_eff_attn(mem_eff_attn):\n",
249
- " global mem_eff_attn_enabled\n",
250
- " mem_eff_attn_enabled = mem_eff_attn\n",
251
- "\n",
252
- "def pipe_callback(step: int, timestep: int, latents: torch.FloatTensor):\n",
253
- " update_state(f\"{step}/{current_steps} steps\")#\\nTime left, sec: {timestep/100:.0f}\")\n",
254
- "\n",
255
- "def inference(inf_mode, prompt, n_images, guidance, steps, width=768, height=768, seed=0, img=None, strength=0.5, neg_prompt=\"\"):\n",
256
- "\n",
257
- " update_state(\" \")\n",
258
- "\n",
259
- " global current_mode\n",
260
- " if inf_mode != current_mode:\n",
261
- " pipe.to(\"cuda\" if inf_mode == modes['txt2img'] else \"cpu\")\n",
262
- "\n",
263
- " if pipe_i2i is not None:\n",
264
- " pipe_i2i.to(\"cuda\" if inf_mode == modes['img2img'] else \"cpu\")\n",
265
- "\n",
266
- " if pipe_inpaint is not None:\n",
267
- " pipe_inpaint.to(\"cuda\" if inf_mode == modes['inpaint'] else \"cpu\")\n",
268
- "\n",
269
- " if pipe_upscale is not None:\n",
270
- " pipe_upscale.to(\"cuda\" if inf_mode == modes['upscale4x'] else \"cpu\")\n",
271
- " \n",
272
- " if pipe_depth2img is not None:\n",
273
- " pipe_depth2img.to(\"cuda\" if inf_mode == modes['depth2img'] else \"cpu\")\n",
274
- "\n",
275
- " current_mode = inf_mode\n",
276
- " \n",
277
- " if seed == 0:\n",
278
- " seed = random.randint(0, 2147483647)\n",
279
- "\n",
280
- " generator = torch.Generator('cuda').manual_seed(seed)\n",
281
- " prompt = prompt\n",
282
- "\n",
283
- " try:\n",
284
- " \n",
285
- " if inf_mode == modes['txt2img']:\n",
286
- " return txt_to_img(prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
287
- " \n",
288
- " elif inf_mode == modes['img2img']:\n",
289
- " if img is None:\n",
290
- " return None, gr.update(visible=True, value=error_str(\"Image is required for Image to Image mode\"))\n",
291
- "\n",
292
- " return img_to_img(prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
293
- " \n",
294
- " elif inf_mode == modes['inpaint']:\n",
295
- " if img is None:\n",
296
- " return None, gr.update(visible=True, value=error_str(\"Image is required for Inpainting mode\"))\n",
297
- "\n",
298
- " return inpaint(prompt, n_images, neg_prompt, img, guidance, steps, width, height, generator, seed), gr.update(visible=False, value=None)\n",
299
- "\n",
300
- " elif inf_mode == modes['upscale4x']:\n",
301
- " if img is None:\n",
302
- " return None, gr.update(visible=True, value=error_str(\"Image is required for Upscale mode\"))\n",
303
- "\n",
304
- " return upscale(prompt, n_images, neg_prompt, img, guidance, steps, generator), gr.update(visible=False, value=None)\n",
305
- "\n",
306
- " elif inf_mode == modes['depth2img']:\n",
307
- " if img is None:\n",
308
- " return None, gr.update(visible=True, value=error_str(\"Image is required for Depth to Image mode\"))\n",
309
- "\n",
310
- " return depth2img(prompt, n_images, neg_prompt, img, guidance, steps, generator, seed), gr.update(visible=False, value=None)\n",
311
- "\n",
312
- " except Exception as e:\n",
313
- " return None, gr.update(visible=True, value=error_str(e))\n",
314
- "\n",
315
- "def txt_to_img(prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed):\n",
316
- "\n",
317
- " result = pipe(\n",
318
- " prompt,\n",
319
- " num_images_per_prompt = n_images,\n",
320
- " negative_prompt = neg_prompt,\n",
321
- " num_inference_steps = int(steps),\n",
322
- " guidance_scale = guidance,\n",
323
- " width = width,\n",
324
- " height = height,\n",
325
- " generator = generator,\n",
326
- " callback=pipe_callback).images\n",
327
- "\n",
328
- " update_state(f\"Done. Seed: {seed}\")\n",
329
- "\n",
330
- " return result\n",
331
- "\n",
332
- "def img_to_img(prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed):\n",
333
- "\n",
334
- " global pipe_i2i\n",
335
- " if pipe_i2i is None:\n",
336
- " pipe_i2i = get_i2i_pipe(scheduler)\n",
337
- "\n",
338
- " img = img['image']\n",
339
- " ratio = min(height / img.height, width / img.width)\n",
340
- " img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)\n",
341
- " result = pipe_i2i(\n",
342
- " prompt,\n",
343
- " num_images_per_prompt = n_images,\n",
344
- " negative_prompt = neg_prompt,\n",
345
- " image = img,\n",
346
- " num_inference_steps = int(steps),\n",
347
- " strength = strength,\n",
348
- " guidance_scale = guidance,\n",
349
- " # width = width,\n",
350
- " # height = height,\n",
351
- " generator = generator,\n",
352
- " callback=pipe_callback).images\n",
353
- "\n",
354
- " update_state(f\"Done. Seed: {seed}\")\n",
355
- " \n",
356
- " return result\n",
357
- "\n",
358
- "# TODO Currently supports only 512x512 images\n",
359
- "def inpaint(prompt, n_images, neg_prompt, img, guidance, steps, width, height, generator, seed):\n",
360
- "\n",
361
- " global pipe_inpaint\n",
362
- " if pipe_inpaint is None:\n",
363
- " pipe_inpaint = get_inpaint_pipe()\n",
364
- "\n",
365
- " inp_img = img['image']\n",
366
- " mask = img['mask']\n",
367
- " inp_img = square_padding(inp_img)\n",
368
- " mask = square_padding(mask)\n",
369
- "\n",
370
- " # # ratio = min(height / inp_img.height, width / inp_img.width)\n",
371
- " # ratio = min(512 / inp_img.height, 512 / inp_img.width)\n",
372
- " # inp_img = inp_img.resize((int(inp_img.width * ratio), int(inp_img.height * ratio)), Image.LANCZOS)\n",
373
- " # mask = mask.resize((int(mask.width * ratio), int(mask.height * ratio)), Image.LANCZOS)\n",
374
- "\n",
375
- " inp_img = inp_img.resize((512, 512))\n",
376
- " mask = mask.resize((512, 512))\n",
377
- "\n",
378
- " result = pipe_inpaint(\n",
379
- " prompt,\n",
380
- " image = inp_img,\n",
381
- " mask_image = mask,\n",
382
- " num_images_per_prompt = n_images,\n",
383
- " negative_prompt = neg_prompt,\n",
384
- " num_inference_steps = int(steps),\n",
385
- " guidance_scale = guidance,\n",
386
- " # width = width,\n",
387
- " # height = height,\n",
388
- " generator = generator,\n",
389
- " callback=pipe_callback).images\n",
390
- " \n",
391
- " update_state(f\"Done. Seed: {seed}\")\n",
392
- "\n",
393
- " return result\n",
394
- "\n",
395
- "def depth2img(prompt, n_images, neg_prompt, img, guidance, steps, generator, seed):\n",
396
- "\n",
397
- " global pipe_depth2img\n",
398
- " if pipe_depth2img is None:\n",
399
- " pipe_depth2img = get_depth2img_pipe()\n",
400
- "\n",
401
- " img = img['image']\n",
402
- " result = pipe_depth2img(\n",
403
- " prompt,\n",
404
- " num_images_per_prompt = n_images,\n",
405
- " negative_prompt = neg_prompt,\n",
406
- " image = img,\n",
407
- " num_inference_steps = int(steps),\n",
408
- " guidance_scale = guidance,\n",
409
- " # width = width,\n",
410
- " # height = height,\n",
411
- " generator = generator,\n",
412
- " callback=pipe_callback).images\n",
413
- "\n",
414
- " update_state(f\"Done. Seed: {seed}\")\n",
415
- " \n",
416
- " return result\n",
417
- "\n",
418
- "def square_padding(img):\n",
419
- " width, height = img.size\n",
420
- " if width == height:\n",
421
- " return img\n",
422
- " new_size = max(width, height)\n",
423
- " new_img = Image.new('RGB', (new_size, new_size), (0, 0, 0, 255))\n",
424
- " new_img.paste(img, ((new_size - width) // 2, (new_size - height) // 2))\n",
425
- " return new_img\n",
426
- "\n",
427
- "def upscale(prompt, n_images, neg_prompt, img, guidance, steps, generator):\n",
428
- "\n",
429
- " global pipe_upscale\n",
430
- " if pipe_upscale is None:\n",
431
- " pipe_upscale = get_upscale_pipe(scheduler)\n",
432
- "\n",
433
- " img = img['image']\n",
434
- " return upscale_tiling(prompt, neg_prompt, img, guidance, steps, generator)\n",
435
- "\n",
436
- " # result = pipe_upscale(\n",
437
- " # prompt,\n",
438
- " # image = img,\n",
439
- " # num_inference_steps = int(steps),\n",
440
- " # guidance_scale = guidance,\n",
441
- " # negative_prompt = neg_prompt,\n",
442
- " # num_images_per_prompt = n_images,\n",
443
- " # generator = generator).images[0]\n",
444
- "\n",
445
- " # return result\n",
446
- "\n",
447
- "def upscale_tiling(prompt, neg_prompt, img, guidance, steps, generator):\n",
448
- "\n",
449
- " width, height = img.size\n",
450
- "\n",
451
- " # calculate the padding needed to make the image dimensions a multiple of 128\n",
452
- " padding_x = 128 - (width % 128) if width % 128 != 0 else 0\n",
453
- " padding_y = 128 - (height % 128) if height % 128 != 0 else 0\n",
454
- "\n",
455
- " # create a white image of the right size to be used as padding\n",
456
- " padding_img = Image.new('RGB', (padding_x, padding_y), color=(255, 255, 255, 0))\n",
457
- "\n",
458
- " # paste the padding image onto the original image to add the padding\n",
459
- " img.paste(padding_img, (width, height))\n",
460
- "\n",
461
- " # update the image dimensions to include the padding\n",
462
- " width += padding_x\n",
463
- " height += padding_y\n",
464
- "\n",
465
- " if width > 128 or height > 128:\n",
466
- "\n",
467
- " num_tiles_x = int(width / 128)\n",
468
- " num_tiles_y = int(height / 128)\n",
469
- "\n",
470
- " upscaled_img = Image.new('RGB', (img.size[0] * 4, img.size[1] * 4))\n",
471
- " for x in range(num_tiles_x):\n",
472
- " for y in range(num_tiles_y):\n",
473
- " update_state(f\"Upscaling tile {x * num_tiles_y + y + 1}/{num_tiles_x * num_tiles_y}\")\n",
474
- " tile = img.crop((x * 128, y * 128, (x + 1) * 128, (y + 1) * 128))\n",
475
- "\n",
476
- " upscaled_tile = pipe_upscale(\n",
477
- " prompt=\"\",\n",
478
- " image=tile,\n",
479
- " num_inference_steps=steps,\n",
480
- " guidance_scale=guidance,\n",
481
- " # negative_prompt = neg_prompt,\n",
482
- " generator=generator,\n",
483
- " ).images[0]\n",
484
- "\n",
485
- " upscaled_img.paste(upscaled_tile, (x * upscaled_tile.size[0], y * upscaled_tile.size[1]))\n",
486
- "\n",
487
- " return [upscaled_img]\n",
488
- " else:\n",
489
- " return pipe_upscale(\n",
490
- " prompt=prompt,\n",
491
- " image=img,\n",
492
- " num_inference_steps=steps,\n",
493
- " guidance_scale=guidance,\n",
494
- " negative_prompt = neg_prompt,\n",
495
- " generator=generator,\n",
496
- " ).images\n",
497
- "\n",
498
- "\n",
499
- "\n",
500
- "def on_mode_change(mode):\n",
501
- " return gr.update(visible = mode in (modes['img2img'], modes['inpaint'], modes['upscale4x'], modes['depth2img'])), \\\n",
502
- " gr.update(visible = mode == modes['inpaint']), \\\n",
503
- " gr.update(visible = mode == modes['upscale4x']), \\\n",
504
- " gr.update(visible = mode == modes['img2img'])\n",
505
- "\n",
506
- "def on_steps_change(steps):\n",
507
- " global current_steps\n",
508
- " current_steps = steps\n",
509
- "\n",
510
- "css = \"\"\".main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}\n",
511
- "\"\"\"\n",
512
- "with gr.Blocks(css=css) as demo:\n",
513
- " gr.HTML(\n",
514
- " f\"\"\"\n",
515
- " <div class=\"main-div\">\n",
516
- " <div>\n",
517
- " <h1>Stable Diffusion 2.1</h1>\n",
518
- " </div><br>\n",
519
- " <p> Model used: <a href=\"https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt\" target=\"_blank\">v2-1_768-ema-pruned.ckpt</a></p>\n",
520
- " Running on <b>{\"GPU 🔥\" if torch.cuda.is_available() else \"CPU 🥶\"}</b>\n",
521
- " </div>\n",
522
- " \"\"\"\n",
523
- " )\n",
524
- " with gr.Row():\n",
525
- " \n",
526
- " with gr.Column(scale=70):\n",
527
- " with gr.Group():\n",
528
- " with gr.Row():\n",
529
- " prompt = gr.Textbox(label=\"Prompt\", show_label=False, max_lines=2,placeholder=f\"Enter prompt\").style(container=False)\n",
530
- " generate = gr.Button(value=\"Generate\").style(rounded=(False, True, True, False))\n",
531
- "\n",
532
- " gallery = gr.Gallery(label=\"Generated images\", show_label=False).style(grid=[2], height=\"auto\")\n",
533
- " state_info = gr.Textbox(label=\"State\", show_label=False, max_lines=2).style(container=False)\n",
534
- " error_output = gr.Markdown(visible=False)\n",
535
- "\n",
536
- " with gr.Column(scale=30):\n",
537
- " inf_mode = gr.Radio(label=\"Inference Mode\", choices=list(modes.values()), value=modes['txt2img'])\n",
538
- " \n",
539
- " with gr.Group(visible=False) as i2i_options:\n",
540
- " image = gr.Image(label=\"Image\", height=128, type=\"pil\", tool='sketch')\n",
541
- " inpaint_info = gr.Markdown(\"Inpainting resizes and pads images to 512x512\", visible=False)\n",
542
- " upscale_info = gr.Markdown(\"\"\"Best for small images (128x128 or smaller).<br>\n",
543
- " Bigger images will be sliced into 128x128 tiles which will be upscaled individually.<br>\n",
544
- " This is done to avoid running out of GPU memory.\"\"\", visible=False)\n",
545
- " strength = gr.Slider(label=\"Transformation strength\", minimum=0, maximum=1, step=0.01, value=0.5)\n",
546
- "\n",
547
- " with gr.Group():\n",
548
- " neg_prompt = gr.Textbox(label=\"Negative prompt\", placeholder=\"What to exclude from the image\")\n",
549
- "\n",
550
- " n_images = gr.Slider(label=\"Number of images\", value=1, minimum=1, maximum=4, step=1)\n",
551
- " with gr.Row():\n",
552
- " guidance = gr.Slider(label=\"Guidance scale\", value=7.5, maximum=15)\n",
553
- " steps = gr.Slider(label=\"Steps\", value=current_steps, minimum=2, maximum=100, step=1)\n",
554
- "\n",
555
- " with gr.Row():\n",
556
- " width = gr.Slider(label=\"Width\", value=768, minimum=64, maximum=1024, step=8)\n",
557
- " height = gr.Slider(label=\"Height\", value=768, minimum=64, maximum=1024, step=8)\n",
558
- "\n",
559
- " seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)\n",
560
- " with gr.Accordion(\"Memory optimization\"):\n",
561
- " attn_slicing = gr.Checkbox(label=\"Attention slicing (a bit slower, but uses less memory)\", value=attn_slicing_enabled)\n",
562
- " # mem_eff_attn = gr.Checkbox(label=\"Memory efficient attention (xformers)\", value=mem_eff_attn_enabled)\n",
563
- "\n",
564
- " inf_mode.change(on_mode_change, inputs=[inf_mode], outputs=[i2i_options, inpaint_info, upscale_info, strength], queue=False)\n",
565
- " steps.change(on_steps_change, inputs=[steps], outputs=[], queue=False)\n",
566
- " attn_slicing.change(lambda x: switch_attention_slicing(x), inputs=[attn_slicing], queue=False)\n",
567
- " # mem_eff_attn.change(lambda x: switch_mem_eff_attn(x), inputs=[mem_eff_attn], queue=False)\n",
568
- "\n",
569
- " inputs = [inf_mode, prompt, n_images, guidance, steps, width, height, seed, image, strength, neg_prompt]\n",
570
- " outputs = [gallery, error_output]\n",
571
- " prompt.submit(inference, inputs=inputs, outputs=outputs)\n",
572
- " generate.click(inference, inputs=inputs, outputs=outputs)\n",
573
- "\n",
574
- " demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False)\n",
575
- "\n",
576
- " gr.HTML(\"\"\"\n",
577
- " <div style=\"border-top: 1px solid #303030;\">\n",
578
- " <br>\n",
579
- " <p>Space by: <a href=\"https://twitter.com/hahahahohohe\"><img src=\"https://img.shields.io/twitter/follow/hahahahohohe?label=%40anzorq&style=social\" alt=\"Twitter Follow\"></a></p><br>\n",
580
- " <p>Enjoying this app? Please consider <a href=\"https://www.buymeacoffee.com/anzorq\">supporting me</a></p>\n",
581
- " <a href=\"https://www.buymeacoffee.com/anzorq\" target=\"_blank\"><img src=\"https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png\" alt=\"Buy Me A Coffee\" style=\"height: 45px !important;width: 162px !important;\" ></a><br><br>\n",
582
- " <a href=\"https://github.com/qunash/stable-diffusion-2-gui\" target=\"_blank\"><img alt=\"GitHub Repo stars\" src=\"https://img.shields.io/github/stars/qunash/stable-diffusion-2-gui?style=social\"></a>\n",
583
- " <p><img src=\"https://visitor-badge.glitch.me/badge?page_id=anzorq.sd-2-colab\" alt=\"visitors\"></p>\n",
584
- " </div>\n",
585
- " \"\"\")\n",
586
- "\n",
587
- "demo.queue()\n",
588
- "demo.launch(debug=True, share=True, height=768)\n"
589
- ]
590
- }
591
- ],
592
- "metadata": {
593
- "accelerator": "GPU",
594
- "colab": {
595
- "private_outputs": true,
596
- "provenance": [],
597
- "toc_visible": true,
598
- "include_colab_link": true
599
- },
600
- "gpuClass": "standard",
601
- "kernelspec": {
602
- "display_name": "Python 3",
603
- "name": "python3"
604
- },
605
- "language_info": {
606
- "name": "python"
607
- }
608
- },
609
- "nbformat": 4,
610
- "nbformat_minor": 0
611
- }
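The upscale hint in the deleted Stable Diffusion UI above explains that inputs larger than 128x128 are sliced into 128x128 tiles, upscaled one tile at a time, and stitched back together so GPU memory stays bounded. Below is a minimal sketch of that slice-and-stitch idea, not the notebook's actual implementation: it assumes only Pillow, `upscale_tile` is a hypothetical stand-in for the real upscaler pipeline, and edge tiles are simply stretched to 128x128 where a real implementation would pad them.

```python
from PIL import Image

TILE = 128    # tile edge described in the upscale note above
SCALE = 4     # assumed upscaling factor of the tile upscaler


def upscale_in_tiles(img: Image.Image, upscale_tile) -> Image.Image:
    """Upscale `img` tile by tile; `upscale_tile` maps a 128x128 image
    to a (128*SCALE)x(128*SCALE) image (e.g. a diffusion upscaler call)."""
    w, h = img.size
    out = Image.new("RGB", (w * SCALE, h * SCALE))
    for top in range(0, h, TILE):
        for left in range(0, w, TILE):
            tile = img.crop((left, top, min(left + TILE, w), min(top + TILE, h)))
            tile = tile.resize((TILE, TILE))      # simplification: stretch edge tiles
            out.paste(upscale_tile(tile), (left * SCALE, top * SCALE))
    return out


# Quick check with a trivial stand-in upscaler (nearest-neighbour resize):
src = Image.new("RGB", (300, 200), "gray")
print(upscale_in_tiles(src, lambda t: t.resize((TILE * SCALE, TILE * SCALE))).size)  # (1200, 800)
```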
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/DualStyleGAN/style.css DELETED
@@ -1,17 +0,0 @@
1
- h1 {
2
- text-align: center;
3
- }
4
-
5
- img#overview {
6
- max-width: 1000px;
7
- max-height: 600px;
8
- display: block;
9
- margin: auto;
10
- }
11
-
12
- img#style-image {
13
- max-width: 1000px;
14
- max-height: 600px;
15
- display: block;
16
- margin: auto;
17
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Chris4K/llms_compare/CabelasDangerousHunts2013-SKIDROW-REPACK-Crack-Fix-Torrent-Download.md DELETED
@@ -1,64 +0,0 @@
1
- ## Cabelas.Dangerous.Hunts.2013. -SKIDROW -Crack Fix Torrent Download
2
-
3
-
4
-
5
-
6
-
7
-
8
-
9
-
10
-
11
- **LINK ->>->>->> [https://urluso.com/2tBNzD](https://urluso.com/2tBNzD)**
12
-
13
-
14
-
15
-
16
-
17
-
18
-
19
-
20
-
21
-
22
-
23
-
24
-
25
- # Cabela's Dangerous Hunts 2013: A Hunting Game with Kill or Be Killed Consequences
26
-
27
-
28
-
29
- Cabela's Dangerous Hunts 2013 is a first-person shooter hunting game developed by FUN Labs and published by Activision in 2013. The game features a new Prowler animal AI engine that simulates pack social hierarchies, coordinates complex group tactics and sets up deadly ambushes. The game also introduces a new Maneater co-op mode, where two players can join together to take on wave after wave of increasingly deadly beasts in a split screen mode.
30
-
31
-
32
-
33
- The game is set in 12 exotic locations throughout the world, where players can hunt 27 big game animals, such as lions, bears, wolves, crocodiles and more. The game offers various modes of gameplay, such as Quick Hunt, Action Hunt and Career Hunt. The game also allows players to customize their weapons and gear with thousands of authentic options, including rifles, handguns, bows, crossbows, knives and various scopes.
34
-
35
-
36
-
37
- The game received mixed reviews from critics, who praised the graphics, sound effects and co-op mode, but criticized the repetitive gameplay, poor AI and texture glitches. The game is available for PC, Xbox 360, PlayStation 3 and Wii U platforms.
38
-
39
-
40
-
41
- If you are interested in downloading Cabela's Dangerous Hunts 2013 for PC, you can find a torrent link here[^1^]. You will need to mount or burn the image file, install the game, copy everything from the SKIDROW folder into the game installation folder, block the game in your firewall and antivirus program, and play the game. You can also find a crack fix here[^3^], which will solve some of the issues with the game.
42
-
43
-
44
-
45
- However, please note that downloading copyrighted games through torrents is illegal in most jurisdictions and may expose you to viruses and malware. We do not condone or support piracy in any way. If you like the game, please support the developers and buy it from official sources.
46
-
47
-
48
-
49
- The game's main mode is the Story Mode, where the player takes the role of Cole Rainsford, a young hunter who joins his estranged father on an African safari. However, their trip turns into a nightmare when they encounter a mysterious cult that unleashes a horde of deadly animals on them. The player must survive the attacks of lions, hyenas, leopards, rhinos and more, while uncovering the truth behind the cult's motives.
50
-
51
-
52
-
53
- The gameplay is based on quick-time events and shooting sequences, in which the player must react to animal attacks and aim for their vital organs. The game also features a Fearmaster controller on some platforms, which measures the player's heart rate and motion: the higher the fear level, the harder it is to aim and shoot accurately. A dynamic weather system and day-night cycle affect the visibility and behavior of the animals.
54
-
55
-
56
-
57
- The game's graphics and sound effects are realistic and immersive, creating a tense and thrilling atmosphere. The game also features voice acting by Scott Eastwood and Rob Lowe as Cole and his father respectively. The game's co-op mode is also fun and challenging, allowing two players to team up and face different scenarios and objectives. However, the game also has some flaws, such as repetitive gameplay, poor AI and texture glitches. The game also received some criticism for its depiction of animal violence and hunting ethics.
58
-
59
- 145887f19f
60
-
61
-
62
-
63
-
64
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/tool.js DELETED
@@ -1,163 +0,0 @@
1
- import _ from 'lodash'
2
- import fs from 'fs'
3
- import { Version } from '../components/index.js'
4
-
5
- async function CreateMusicShare(data) {
6
- let appid, appname, appsign, style = 4;
7
- switch (data.subType) {
8
- case 'bilibili':
9
- appid = 100951776, appname = 'tv.danmaku.bili', appsign = '7194d531cbe7960a22007b9f6bdaa38b';
10
- break;
11
- case 'netease':
12
- appid = 100495085, appname = "com.netease.cloudmusic", appsign = "da6b069da1e2982db3e386233f68d76d";
13
- break;
14
- case 'kuwo':
15
- appid = 100243533, appname = "cn.kuwo.player", appsign = "bf9ff4ffb4c558a34ee3fd52c223ebf5";
16
- break;
17
- case 'kugou':
18
- appid = 205141, appname = "com.kugou.android", appsign = "fe4a24d80fcf253a00676a808f62c2c6";
19
- break;
20
- case 'migu':
21
- appid = 1101053067, appname = "cmccwm.mobilemusic", appsign = "6cdc72a439cef99a3418d2a78aa28c73";
22
- break;
23
- case 'qq':
24
- default:
25
- appid = 100497308, appname = "com.tencent.qqmusic", appsign = "cbd27cd7c861227d013a25b2d10f0799";
26
- break;
27
- }
28
-
29
- var text = '', title = data.title, singer = data.content, prompt = '[分享]', jumpUrl = data.url, preview = data.image, musicUrl = data.voice;
30
-
31
- prompt = '[分享]' + title + '-' + singer;
32
-
33
- let recv_uin = 0;
34
- let send_type = 0;
35
- let recv_guild_id = 0;
36
-
37
-     if (data.message_type === 'group') { // group chat
38
- recv_uin = data.group_id;
39
- send_type = 1;
40
-     } else if (data.message_type === 'guild') { // guild channel
41
- recv_uin = Number(data.channel_id);
42
- recv_guild_id = BigInt(data.guild_id);
43
- send_type = 3;
44
-     } else if (data.message_type === 'private') { // private chat
45
- recv_uin = data.user_id;
46
- send_type = 0;
47
- }
48
-
49
- let body = {
50
- 1: appid,
51
- 2: 1,
52
- 3: style,
53
- 5: {
54
- 1: 1,
55
- 2: "0.0.0",
56
- 3: appname,
57
- 4: appsign,
58
- },
59
- 6: text,
60
- 10: send_type,
61
- 11: recv_uin,
62
- 12: {
63
- 10: title,
64
- 11: singer,
65
- 12: prompt,
66
- 13: jumpUrl,
67
- 14: preview,
68
- 16: musicUrl,
69
- },
70
- 19: recv_guild_id
71
- };
72
- return body;
73
- }
74
-
75
- async function SendMusicShare(data) {
76
- let core, bot
77
- if (Version.isTrss) {
78
- bot = Bot[data.bot_id]
79
- core = bot?.core
80
- } else {
81
- bot = Bot
82
- try {
83
- core = (await import('oicq')).core
84
- } catch (error) {
85
- core = null
86
- }
87
- }
88
- if (!core) {
89
- const msg = [data.url]
90
-         if (data.message_type === 'group') { // group chat
91
- await bot?.pickGroup?.(data.group_id)?.sendMsg?.(msg)
92
-         } else if (data.message_type === 'private') { // private chat
93
- await bot?.pickFriend?.(data.user_id)?.sendMsg?.(msg)
94
- }
95
- return
96
- }
97
- try {
98
- let body = await CreateMusicShare(data)
99
- let payload = await bot.sendOidb("OidbSvc.0xb77_9", core.pb.encode(body));
100
- let result = core.pb.decode(payload);
101
- if (result[3] != 0) {
102
-             if (data.message_type === 'group') { // group chat
103
- await bot?.pickGroup(data.group_id).sendMsg('歌曲分享失败:' + result[3])
104
-             } else if (data.message_type === 'private') { // private chat
105
- await bot?.pickFriend(data.user_id).sendMsg('歌曲分享失败:' + result[3])
106
- }
107
- // e.reply('歌曲分享失败:' + result[3], true);
108
- }
109
- } catch (error) {
110
- const msg = [data.url]
111
-         if (data.message_type === 'group') { // group chat
112
- await bot?.pickGroup?.(data.group_id)?.sendMsg?.(msg)
113
-         } else if (data.message_type === 'private') { // private chat
114
- await bot?.pickFriend?.(data.user_id)?.sendMsg?.(msg)
115
- }
116
- return
117
- }
118
- }
119
-
120
- function sleep(ms) {
121
- return new Promise((resolve) => setTimeout(resolve, ms))
122
- }
123
-
124
- const TMP_DIR = process.cwd() + '/plugins/ws-plugin/Temp'
125
- if (!fs.existsSync(TMP_DIR)) fs.mkdirSync(TMP_DIR)
126
-
127
- const mimeTypes = {
128
- '.html': 'text/html',
129
- '.js': 'text/javascript',
130
- '.css': 'text/css',
131
- '.json': 'application/json',
132
- '.png': 'image/png',
133
- '.jpg': 'image/jpg',
134
- '.gif': 'image/gif',
135
- '.ico': 'image/x-icon',
136
- '.txt': 'text/plain',
137
- '.xlsx': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
138
- }
139
-
140
- function decodeHtml(html) {
141
- var map = {
142
- '&amp;': '&',
143
- '&#91;': '[',
144
- '&#93;': ']',
145
- '&#44;': ','
146
- };
147
-
148
- for (var key in map) {
149
- const value = map[key];
150
- const regex = new RegExp(key, 'g');
151
- html = html.replace(regex, value);
152
- }
153
- return html;
154
- }
155
-
156
-
157
- export {
158
- SendMusicShare,
159
- sleep,
160
- TMP_DIR,
161
- mimeTypes,
162
- decodeHtml
163
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CikeyQI/meme-api/meme_generator/memes/keep_away/__init__.py DELETED
@@ -1,47 +0,0 @@
1
- from typing import List
2
-
3
- from PIL.Image import Transpose
4
- from pil_utils import BuildImage
5
-
6
- from meme_generator import add_meme
7
-
8
-
9
- def keep_away(images: List[BuildImage], texts: List[str], args):
10
- def trans(img: BuildImage, n: int) -> BuildImage:
11
- img = img.convert("RGBA").square().resize((100, 100))
12
- if n < 4:
13
- return img.rotate(n * 90)
14
- else:
15
- return img.transpose(Transpose.FLIP_LEFT_RIGHT).rotate((n - 4) * 90)
16
-
17
- def paste(img: BuildImage):
18
- nonlocal count
19
- y = 90 if count < 4 else 190
20
- frame.paste(img, ((count % 4) * 100, y))
21
- count += 1
22
-
23
- text = texts[0] if texts else "如何提高社交质量 : \n远离以下头像的人"
24
- frame = BuildImage.new("RGB", (400, 290), "white")
25
- frame.draw_text((10, 10, 390, 80), text, max_fontsize=40, halign="left")
26
- count = 0
27
- num_per_user = 8 // len(images)
28
- for image in images:
29
- for n in range(num_per_user):
30
- paste(trans(image, n))
31
- num_left = 8 - num_per_user * len(images)
32
- for n in range(num_left):
33
- paste(trans(images[-1], n + num_per_user))
34
-
35
- return frame.save_jpg()
36
-
37
-
38
- add_meme(
39
- "keep_away",
40
- keep_away,
41
- min_images=1,
42
- max_images=8,
43
- min_texts=0,
44
- max_texts=1,
45
- default_texts=["如何提高社交质量 : \n远离以下头像的人"],
46
- keywords=["远离"],
47
- )
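The slot arithmetic in `keep_away` above is easy to miss: the 4x2 grid always holds eight transformed avatars, each input image gets `8 // len(images)` of them, and the last image absorbs whatever is left over (with further rotations and flips). A small standalone check of that distribution, in pure Python with a hypothetical helper name:

```python
def keep_away_slots(n_images: int) -> list:
    """Per input image, how many of the 8 grid slots it ends up occupying."""
    per_user = 8 // n_images
    slots = [per_user] * n_images
    slots[-1] += 8 - per_user * n_images   # leftover slots reuse the last image
    return slots


for n in (1, 2, 3, 5, 8):
    print(n, keep_away_slots(n))
# 1 [8]   2 [4, 4]   3 [2, 2, 4]   5 [1, 1, 1, 1, 4]   8 [1, 1, 1, 1, 1, 1, 1, 1]
```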
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CjangCjengh/Sanskrit-TTS/commons.py DELETED
@@ -1,97 +0,0 @@
1
- import math
2
- import torch
3
- from torch.nn import functional as F
4
- import torch.jit
5
-
6
-
7
- def script_method(fn, _rcb=None):
8
- return fn
9
-
10
-
11
- def script(obj, optimize=True, _frames_up=0, _rcb=None):
12
- return obj
13
-
14
-
15
- torch.jit.script_method = script_method
16
- torch.jit.script = script
17
-
18
-
19
- def init_weights(m, mean=0.0, std=0.01):
20
- classname = m.__class__.__name__
21
- if classname.find("Conv") != -1:
22
- m.weight.data.normal_(mean, std)
23
-
24
-
25
- def get_padding(kernel_size, dilation=1):
26
- return int((kernel_size*dilation - dilation)/2)
27
-
28
-
29
- def intersperse(lst, item):
30
- result = [item] * (len(lst) * 2 + 1)
31
- result[1::2] = lst
32
- return result
33
-
34
-
35
- def slice_segments(x, ids_str, segment_size=4):
36
- ret = torch.zeros_like(x[:, :, :segment_size])
37
- for i in range(x.size(0)):
38
- idx_str = ids_str[i]
39
- idx_end = idx_str + segment_size
40
- ret[i] = x[i, :, idx_str:idx_end]
41
- return ret
42
-
43
-
44
- def rand_slice_segments(x, x_lengths=None, segment_size=4):
45
- b, d, t = x.size()
46
- if x_lengths is None:
47
- x_lengths = t
48
- ids_str_max = x_lengths - segment_size + 1
49
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
50
- ret = slice_segments(x, ids_str, segment_size)
51
- return ret, ids_str
52
-
53
-
54
- def subsequent_mask(length):
55
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
56
- return mask
57
-
58
-
59
- @torch.jit.script
60
- def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
61
- n_channels_int = n_channels[0]
62
- in_act = input_a + input_b
63
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
64
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
65
- acts = t_act * s_act
66
- return acts
67
-
68
-
69
- def convert_pad_shape(pad_shape):
70
- l = pad_shape[::-1]
71
- pad_shape = [item for sublist in l for item in sublist]
72
- return pad_shape
73
-
74
-
75
- def sequence_mask(length, max_length=None):
76
- if max_length is None:
77
- max_length = length.max()
78
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
79
- return x.unsqueeze(0) < length.unsqueeze(1)
80
-
81
-
82
- def generate_path(duration, mask):
83
- """
84
- duration: [b, 1, t_x]
85
- mask: [b, 1, t_y, t_x]
86
- """
87
- device = duration.device
88
-
89
- b, _, t_y, t_x = mask.shape
90
- cum_duration = torch.cumsum(duration, -1)
91
-
92
- cum_duration_flat = cum_duration.view(b * t_x)
93
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
94
- path = path.view(b, t_x, t_y)
95
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
96
- path = path.unsqueeze(1).transpose(2,3) * mask
97
- return path
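A short usage sketch of a few of the deleted helpers, assuming the file above is saved as `commons.py` on the import path and that PyTorch is installed; the tensor shapes follow the docstring of `generate_path` ([b, 1, t_x] durations against a [b, 1, t_y, t_x] mask):

```python
import torch
import commons   # assumption: the deleted module above, importable as-is

print(commons.intersperse([5, 6, 7], 0))   # [0, 5, 0, 6, 0, 7, 0]

lengths = torch.tensor([2, 4])
print(commons.sequence_mask(lengths))      # True where position < length, shape [2, 4]

# generate_path expands per-token durations into a monotonic alignment map.
duration = torch.tensor([[[1.0, 3.0]]])    # [b=1, 1, t_x=2]
mask = torch.ones(1, 1, 4, 2)              # [b, 1, t_y=4, t_x=2]
path = commons.generate_path(duration, mask)
print(path[0, 0])   # frame 0 -> token 0, frames 1..3 -> token 1
```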
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CofAI/chat/g4f/Provider/Provider.py DELETED
@@ -1,16 +0,0 @@
1
- import os
2
- from ..typing import sha256, Dict, get_type_hints
3
-
4
- url = None
5
- model = None
6
- supports_stream = False
7
- needs_auth = False
8
-
9
-
10
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
11
- return
12
-
13
-
14
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
15
- '(%s)' % ', '.join(
16
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
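The deleted file above is the empty provider template used by g4f: a concrete provider fills in the module-level metadata and implements `_create_completion`, yielding text chunks when `supports_stream` is true. The sketch below is a purely hypothetical echo provider in that shape; it calls no real endpoint, and the metadata values are placeholders rather than anything taken from the g4f codebase.

```python
# Hypothetical provider module following the same shape as the stub above.
url = None
model = 'echo-model'          # placeholder model name
supports_stream = True
needs_auth = False


def _create_completion(model: str, messages: list, stream: bool, **kwargs):
    """Echo the last user message back, word by word when streaming."""
    reply = messages[-1]['content'] if messages else ''
    if stream:
        for word in reply.split():
            yield word + ' '
    else:
        yield reply


if __name__ == '__main__':
    chunks = _create_completion('echo-model', [{'role': 'user', 'content': 'hello there'}], stream=True)
    print(''.join(chunks))   # "hello there "
```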
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/sampling_util.py DELETED
@@ -1,22 +0,0 @@
1
- import torch
2
- import numpy as np
3
-
4
-
5
- def append_dims(x, target_dims):
6
- """Appends dimensions to the end of a tensor until it has target_dims dimensions.
7
- From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py"""
8
- dims_to_append = target_dims - x.ndim
9
- if dims_to_append < 0:
10
- raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
11
- return x[(...,) + (None,) * dims_to_append]
12
-
13
-
14
- def norm_thresholding(x0, value):
15
- s = append_dims(x0.pow(2).flatten(1).mean(1).sqrt().clamp(min=value), x0.ndim)
16
- return x0 * (value / s)
17
-
18
-
19
- def spatial_norm_thresholding(x0, value):
20
- # b c h w
21
- s = x0.pow(2).mean(1, keepdim=True).sqrt().clamp(min=value)
22
- return x0 * (value / s)
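A quick shape-level illustration of the two deleted helpers, assuming the file above is importable as `sampling_util` and PyTorch is installed: `append_dims` adds trailing singleton dimensions so a per-sample scalar can broadcast against a latent batch, and `norm_thresholding` rescales each sample so its RMS norm does not exceed `value`.

```python
import torch
from sampling_util import append_dims, norm_thresholding   # assumption: module above

x0 = torch.randn(2, 3, 8, 8)               # a small batch of latents
scale = torch.rand(2)                       # one scalar per batch element
print(append_dims(scale, x0.ndim).shape)    # torch.Size([2, 1, 1, 1]) -> broadcastable

clamped = norm_thresholding(x0, value=1.0)
print(clamped.shape)                        # unchanged: torch.Size([2, 3, 8, 8])
```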
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/trustedhost.py DELETED
@@ -1,3 +0,0 @@
1
- from starlette.middleware.trustedhost import ( # noqa
2
- TrustedHostMiddleware as TrustedHostMiddleware,
3
- )
 
 
 
 
spaces/Deci/DeciDiffusion-v1-0/app.py DELETED
@@ -1,115 +0,0 @@
1
- import gradio as gr
2
- import torch
3
- from PIL.ImageDraw import Draw
4
- from diffusers import StableDiffusionPipeline
5
- from PIL import Image, ImageOps
6
-
7
-
8
- # Load pipeline once
9
- model_id = 'Deci/DeciDiffusion-v1-0'
10
- device = "cuda" if torch.cuda.is_available() else "cpu"
11
- pipe = StableDiffusionPipeline.from_pretrained(model_id, custom_pipeline=model_id, torch_dtype=torch.float32)
12
- pipe.unet = pipe.unet.from_pretrained(model_id, subfolder='flexible_unet', torch_dtype=torch.float32)
13
- pipe = pipe.to(device)
14
-
15
-
16
- def read_content(file_path: str) -> str:
17
- """read the content of target file
18
- """
19
- with open(file_path, 'r', encoding='utf-8') as f:
20
- content = f.read()
21
-
22
- return content
23
-
24
-
25
- def predict(_prompt: str, _steps: int = 30, _seed: int = 42, _guidance_scale: float = 7.5, _negative_prompt: str = ""):
26
- _negative_prompt = [_negative_prompt] if _negative_prompt else None
27
-
28
- output = pipe(prompt=[_prompt],
29
- negative_prompt=_negative_prompt,
30
- num_inference_steps=int(_steps),
31
- guidance_scale=_guidance_scale,
32
- generator=torch.Generator(device).manual_seed(_seed),
33
- )
34
- output_image = output.images[0]
35
-
36
- # Add border beneath the image with Deci logo + prompt
37
- if len(_prompt) > 52:
38
- _prompt = _prompt[:52] + "..."
39
-
40
- original_image_height = output_image.size[1]
41
- output_image = ImageOps.expand(output_image, border=(0, 0, 0, 64), fill='white')
42
- deci_logo = Image.open('./deci_logo_white.png')
43
- output_image.paste(deci_logo, (0, original_image_height))
44
- Draw(output_image).text((deci_logo.size[0], original_image_height + 26), _prompt, (127, 127, 127))
45
- return output_image
46
-
47
-
48
- css = '''
49
- .gradio-container {
50
- max-width: 1100px !important;
51
- background-image: url(https://huggingface.co/spaces/Deci/Deci-DeciDiffusionClean/resolve/main/background-image.png);
52
- background-size: cover;
53
- background-position: center center;
54
- background-repeat: no-repeat;
55
- }
56
-
57
- .footer {margin-bottom: 45px;margin-top: 35px !important;text-align: center;border-bottom: 1px solid #e5e5e5}
58
- .footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white}
59
- .dark .footer {border-color: #303030}
60
- .dark .footer>p {background: #0b0f19}
61
- .acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%}
62
- @keyframes spin {
63
- from {
64
- transform: rotate(0deg);
65
- }
66
- to {
67
- transform: rotate(360deg);
68
- }
69
- }
70
- '''
71
-
72
- demo = gr.Blocks(css=css, elem_id="total-container")
73
- with demo:
74
- gr.HTML(read_content("header.html"))
75
- with gr.Row():
76
- with gr.Column():
77
- with gr.Row(mobile_collapse=False, equal_height=True):
78
- prompt = gr.Textbox(placeholder="Your prompt", show_label=False, elem_id="prompt", autofocus=True, lines=3, )
79
-
80
- with gr.Accordion(label="Advanced Settings", open=False):
81
- with gr.Row(mobile_collapse=False, equal_height=True):
82
- steps = gr.Slider(value=30, minimum=15, maximum=50, step=1, label="steps", interactive=True)
83
- seed = gr.Slider(value=42, minimum=1, maximum=100, step=1, label="seed", interactive=True)
84
- guidance_scale = gr.Slider(value=7.5, minimum=1, maximum=15, step=0.1, label='guidance_scale', interactive=True)
85
-
86
- with gr.Row(mobile_collapse=False, equal_height=True):
87
- negative_prompt = gr.Textbox(label="negative_prompt", placeholder="Your negative prompt",
88
- info="what you don't want to see in the image", lines=3)
89
- with gr.Row():
90
- btn = gr.Button(value="Generate!", elem_id="run_button")
91
-
92
- with gr.Column():
93
- image_out = gr.Image(label="Output", elem_id="output-img", height=400)
94
-
95
- btn.click(fn=predict,
96
- inputs=[prompt, steps, seed, guidance_scale, negative_prompt],
97
- outputs=[image_out],
98
- api_name='run')
99
-
100
- gr.HTML(
101
- """
102
- <div class="footer">
103
- <p>Model by <a href="https://deci.ai" style="text-decoration: underline;" target="_blank">Deci.ai</a> - Gradio Demo by 🤗 Hugging Face
104
- </p>
105
- </div>
106
- <div class="acknowledgments">
107
- <p><h4>LICENSE</h4>
108
- The model is licensed with a <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md" target="_blank" style="text-decoration: underline;" target="_blank">read the license</a></p>
109
- <p><h4>Biases and content acknowledgment</h4>
110
- Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0" style="text-decoration: underline;" target="_blank">model card</a></p>
111
- </div>
112
- """
113
- )
114
-
115
- demo.queue(max_size=50).launch()
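The footer logic inside `predict` above (truncate the prompt to 52 characters, extend the canvas by a 64 px white strip, paste the logo at the old bottom edge, then draw the prompt beside it) can be tried in isolation. This sketch uses Pillow only and a plain placeholder square instead of `deci_logo_white.png`:

```python
from PIL import Image, ImageOps
from PIL.ImageDraw import Draw

img = Image.new("RGB", (512, 512), "navy")      # stand-in for a generated image
prompt = "an example prompt that is long enough to get truncated for the footer"
if len(prompt) > 52:
    prompt = prompt[:52] + "..."

h = img.size[1]
img = ImageOps.expand(img, border=(0, 0, 0, 64), fill="white")   # 64 px strip below
logo = Image.new("RGB", (64, 64), "black")                        # placeholder logo
img.paste(logo, (0, h))
Draw(img).text((logo.size[0], h + 26), prompt, (127, 127, 127))
img.save("footer_demo.png")
```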
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/ElainaFanBoy/MusicGen/tests/common_utils/temp_utils.py DELETED
@@ -1,56 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- import os
8
- import tempfile
9
-
10
-
11
- class TempDirMixin:
12
- """Mixin to provide easy access to temp dir.
13
- """
14
-
15
- temp_dir_ = None
16
-
17
- @classmethod
18
- def get_base_temp_dir(cls):
19
- # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory.
20
- # this is handy for debugging.
21
- key = "AUDIOCRAFT_TEST_DIR"
22
- if key in os.environ:
23
- return os.environ[key]
24
- if cls.temp_dir_ is None:
25
- cls.temp_dir_ = tempfile.TemporaryDirectory()
26
- return cls.temp_dir_.name
27
-
28
- @classmethod
29
- def tearDownClass(cls):
30
- if cls.temp_dir_ is not None:
31
- try:
32
- cls.temp_dir_.cleanup()
33
- cls.temp_dir_ = None
34
- except PermissionError:
35
- # On Windows there is a know issue with `shutil.rmtree`,
36
- # which fails intermittenly.
37
- # https://github.com/python/cpython/issues/74168
38
- # Following the above thread, we ignore it.
39
- pass
40
- super().tearDownClass()
41
-
42
- @property
43
- def id(self):
44
- return self.__class__.__name__
45
-
46
- def get_temp_path(self, *paths):
47
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
48
- path = os.path.join(temp_dir, *paths)
49
- os.makedirs(os.path.dirname(path), exist_ok=True)
50
- return path
51
-
52
- def get_temp_dir(self, *paths):
53
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
54
- path = os.path.join(temp_dir, *paths)
55
- os.makedirs(path, exist_ok=True)
56
- return path
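A hedged usage sketch of the mixin above, assuming the file is importable as `temp_utils`: test classes mix it in before `unittest.TestCase`, call `get_temp_path`/`get_temp_dir` for scratch files grouped under the class name, and setting the `AUDIOCRAFT_TEST_DIR` environment variable redirects everything to a fixed directory for debugging.

```python
import unittest
from temp_utils import TempDirMixin   # assumption: the deleted module above


class ScratchFileTest(TempDirMixin, unittest.TestCase):
    def test_writes_scratch_file(self):
        # Resolves to <base_temp_dir>/ScratchFileTest/scratch/dummy.txt
        path = self.get_temp_path("scratch", "dummy.txt")
        with open(path, "w") as f:
            f.write("ok")
        self.assertTrue(path.endswith("dummy.txt"))


if __name__ == "__main__":
    unittest.main()
```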
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/EngAbod/Liveness_Detection/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Liveness Detection
3
- emoji: 👀
4
- colorFrom: blue
5
- colorTo: blue
6
- sdk: streamlit
7
- sdk_version: 1.27.2
8
- app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/FL33TW00D/whisper-turbo/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Whisper Turbo
3
- emoji: 🗣️🏎️
4
- colorFrom: blue
5
- colorTo: gray
6
- sdk: static
7
- pinned: true
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
spaces/Fadil369/docker/README.md DELETED
@@ -1,20 +0,0 @@
1
- ---
2
- title: Shiny for Python template
3
- emoji: 🌍
4
- colorFrom: yellow
5
- colorTo: indigo
6
- sdk: docker
7
- pinned: false
8
- license: mit
9
- ---
10
-
11
- This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/).
12
-
13
-
14
- To get started with a new app do the following:
15
-
16
- 1) Install Shiny with `pip install shiny`
17
- 2) Create a new app with `shiny create .`
18
- 3) Then run the app with `shiny run --reload`
19
-
20
- To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html).
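For reference, the three steps above produce a single-file app; the sketch below is roughly what the `shiny create .` starter template looks like (a minimal sketch based on the Shiny for Python starter, so names and exact API details may differ between versions):

```python
from shiny import App, render, ui

app_ui = ui.page_fluid(
    ui.input_slider("n", "N", min=1, max=100, value=20),
    ui.output_text_verbatim("txt"),
)


def server(input, output, session):
    @output
    @render.text
    def txt():
        return f"n*2 is {input.n() * 2}"


app = App(app_ui, server)
# Start it locally with: shiny run --reload app.py
```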
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Feifei315/flax-midjourney-v4-diffusion/app.py DELETED
@@ -1,3 +0,0 @@
1
- import gradio as gr
2
-
3
- gr.Interface.load("models/flax/midjourney-v4-diffusion").launch()